• Tagging TFS builds automatically

    16 May 2017

    I recently had a requirement to automatically tag a TFS build based on a build variable.

    The build ran acceptance tests and the URL of the system under test was a build parameter. The build could either be triggered manually, or automatically by Octopus following a deployment to an environment.


    I wanted to tag the build based on the URL of the system under test. So, for example, any acceptance test build that ran against environment QA1 would be tagged with QA1 etc.

    To do this, we need to manipulate the startUrl build parameter into the format we want, and then apply that value as a tag to the build. The simplest way to manipulate the build parameter was with a crude piece of PowerShell in a build task;

           $environment = $env:startUrl.Replace("http://","").Replace("https://","").ToLower()
           # The logging command must be written to the build output for the agent to pick it up
           Write-Host "##vso[task.setvariable variable=environment]$environment"

    This performs some string manipulation on the startUrl build parameter, stores the result in a local variable, and then uses the setvariable logging command to set the environment build variable to that value.

    We can now use the “Add Build Task” available from the TFS Diagnostic Tasks extension to tag our build with the environment build parameter.


    Running the build will now add the tag to the build as planned;


    And we can now filter on the tag when viewing the history of our build, allowing us to view the history of the build on a per-environment basis.


  • A compelling use case for partial classes

    19 Nov 2016

    Ever since partial classes were introduced as a C# 2.0 language feature back in 2005 I had considered them to be a bit of a hack, primarily intended to enable a code generator to generate code in one file whilst developers extend the auto-generated logic in another, eliminating the problem of generated code overwriting custom code.

    Another use case suggested when they were introduced was that they could be used to improve the readability of large, complex classes by partitioning related methods into separate files. I wholeheartedly rejected that as a useful use case. If a class is so big and complex that it requires partitioning then invariably it’s not respecting the Single Responsibility Principle.

    Creating partial classes to hide complexity is just another form of code folding and should be considered a code smell just as much as the use of #region.

    Recently, however, I came across a use case where a partial class was exactly what was needed. There was a particular application class that was undergoing two streams of work. One team of developers was adding new functionality, whilst another team was performing heavy refactoring of the existing code. Without careful management there were likely to be complex merge conflicts. The use of a partial class in this case was an elegantly simple solution. By temporarily splitting the class into two partial class files, both teams were able to work fairly independently. Once the first team had finished, the two files were recombined.
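    As a sketch, the temporary split might look like this (the class and method names below are illustrative, not from the actual project):

```csharp
// OrderProcessor.cs - worked on by the team adding new functionality
public partial class OrderProcessor
{
    public void ProcessNewOrderType(Order order)
    {
        // new feature work goes here
    }
}

// OrderProcessor.Refactoring.cs - worked on by the team refactoring existing code
public partial class OrderProcessor
{
    public void ProcessExistingOrder(Order order)
    {
        // existing logic being reworked lives here
    }
}
```

    The compiler merges both files into a single OrderProcessor class, so callers are unaffected; once the feature work is complete the two files can simply be recombined into one.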


    Nothing ground-breaking. But to me it was finally a good use case for a tool that I had for a long time viewed with suspicion.

    Was the use of a partial class here a code smell? Yes, the smell of work being done.

  • Enable application performance from the start

    12 Nov 2016

    "Application architecture determines application performance." - Randy Stafford
    "Premature optimization is the root of all evil." – Donald Knuth

    When creating a new application, common advice offered by senior developers and architects is “don’t worry about the performance, we’ll optimize it later, right now we need to focus on delivering the functionality”.

    Whilst I generally agree with that, experience has taught me that aspects of this approach do not always work.

    Yes, we can perform caching, algorithm optimisation and introduce a higher degree of parallelism at a later stage.

    But what if the poor performance is due to an inherent flaw in your design? Back in 2015 the open source Umbraco CMS project was forced to completely scrap its version 5 release. The Umbraco team had been concentrating on perfecting the functionality with the view that they could always improve the performance at the end. Warnings of the “perils of premature optimisation” were offered in the face of a growing number of concerns about poor performance. Their approach was summed up in this quote:

    "Make it work, then make it work good, then make it work fast."

    If, as an application develops, performance of a certain area is very poor due to a lack of caching, developers will solve the problem themselves, in their own way. The result of this is that when the magical time comes to seriously consider performance, the codebase is already littered with ad-hoc, inconsistent caches that each need to be understood, tested and invalidated separately.

    Provide an initial caching framework
    Enable application performance from the start. Create a simple but extensible framework for caching.

    By providing a simple cache implementation, developers will know where to go when they need to cache something rather than rolling their own.

    When the time comes to really focus on performance, the cache will already be there and ready to be used. By designing with extensibility in mind from the outset we can change the implementation (for example making the cache distributed) without breaking our existing interface.
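    A minimal sketch of such a framework might look like the following (the interface and in-memory implementation are illustrative, not taken from any particular project):

```csharp
using System;
using System.Collections.Concurrent;

// The interface the rest of the application codes against. Swapping in a
// distributed cache later only means providing a new implementation.
public interface ICache
{
    T GetOrAdd<T>(string key, Func<T> valueFactory, TimeSpan lifetime);
    void Invalidate(string key);
}

// A simple in-process implementation to get started.
public class InMemoryCache : ICache
{
    private readonly ConcurrentDictionary<string, (object Value, DateTime Expires)> _items = new();

    public T GetOrAdd<T>(string key, Func<T> valueFactory, TimeSpan lifetime)
    {
        // Return the cached value if it exists and has not expired
        if (_items.TryGetValue(key, out var entry) && entry.Expires > DateTime.UtcNow)
            return (T)entry.Value;

        // Otherwise build the value, cache it with an expiry, and return it
        var value = valueFactory();
        _items[key] = (value, DateTime.UtcNow.Add(lifetime));
        return value;
    }

    public void Invalidate(string key) => _items.TryRemove(key, out _);
}
```

    Because consumers depend only on ICache, the in-memory dictionary can later be replaced with a distributed cache without touching the calling code.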

    Some points to consider when implementing an initial caching framework;

    • How do we invalidate the cache?
    • How do we view what is in the cache?
    • Do we need both a server side and a client side cache?

    Set some hard performance requirements
    There’s slow, and there’s unusable. Yes, right now, we’re focussed on delivering functionality, but if your web application takes 30 seconds to perform an operation with no user feedback then that’s just plain unacceptable for the current users of your system. Is this genuinely a long running process? Then consider providing the user with some feedback. Is it just slow? Why? Do we have a design flaw that needs fixing?

    Use your logs
    Your application is undoubtedly being delivered to a number of different environments: QA, UAT and so on. If your application is hosted by a web server then you’re already logging important information that can give you insight into your application’s performance.

    • What are the slowest requests to respond?
    • What are the largest request sizes? Are you using all of that incoming data?
    • What are the largest responses? Do you need to send all of that outgoing data?

    Most web servers can log directly to a database; if you can’t do that then you can always run a scheduled task to import the logs. The data is there and it’s trying to tell you something. Are you listening?
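    Even without a database, a few lines of code are enough to surface the slowest requests. The sketch below assumes IIS logs in the default W3C format, where cs-uri-stem is the fifth field and time-taken (in milliseconds) is the last; check the #Fields header of your own logs and adjust the indexes and path accordingly:

```csharp
using System;
using System.IO;
using System.Linq;

class SlowRequests
{
    static void Main()
    {
        // Path and field positions are assumptions - match them to your logs
        var slowest = File.ReadLines(@"C:\inetpub\logs\LogFiles\W3SVC1\u_ex170516.log")
            .Where(line => !line.StartsWith("#"))          // skip W3C header lines
            .Select(line => line.Split(' '))
            .Select(f => new { Url = f[4], TimeTakenMs = int.Parse(f[^1]) })
            .OrderByDescending(r => r.TimeTakenMs)
            .Take(10);

        foreach (var r in slowest)
            Console.WriteLine($"{r.TimeTakenMs,8} ms  {r.Url}");
    }
}
```

    The same approach works for request and response sizes by picking out the cs-bytes and sc-bytes fields instead.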

    "Enable application performance from the start." – Tristan Gaydon