Introducing Stack Mechanics

Andrew Harcourt Thu 5 Oct 2017

After living under a rock for far too long, I'm excited to introduce Stack Mechanics.

I've teamed up with two great friends of mine, Damian Maclennan and Nick Blumhardt, to launch a series of deep-dive courses on distributed systems, microservices architectures and continuous delivery.

Don't panic - I'm not leaving ThoughtWorks :)

The longer version: Damian, Nick and I, alongside numerous other great engineers, have spent a lot of time solving the hard problems that organisations encounter when trying to move away from monolithic, legacy systems. We've seen lots of organisations drawn to the potential that microservices offer with respect to organisational agility and independence, but generally they've been completely unprepared for the inherent complexities of such systems and how to manage the trade-offs. Usually we weren't lucky enough for it to be a greenfield project, and as a result we've all inherited our share of legacy, tightly-coupled systems with ugly, scary integrations, unknown black boxes, business logic hidden in integration layers (or ESBs, or stored procedures, or BPM layers), and with the many and various failure modes inherent in such ecosystems.

We arrived at a set of practices, patterns, techniques and tools that helped us solve these kinds of problems and we've had a lot of success at it. Unfortunately, we still see so many organisations making the same kinds of mistakes, through the best of intentions, that we first encountered many years ago. Many teams start off well but are unaware of the pitfalls; many have no idea how to even get started.

We've decided to get together and offer deep-dive training based on our real-world experiences. We're kicking off in November 2017 with a three-day, hands-on workshop on .NET architecture, microservices, distributed systems and devops. There will be both theory and practical sessions, covering topics like:

  • Design Patterns for maintainable software
  • Test Driven Development
  • DevOps practices such as monitoring and instrumentation
  • Continuous delivery
  • Configuration patterns
  • REST API implementation
  • Microservices
  • Asynchronous messaging
  • Scaling and caching
  • Legacy system integration and migration

Attendees will work with us, and with each other, to build an actual ecosystem that solves a small but real-world problem.

Book your ticket for the November 2017 (Brisbane) workshop now to lock in early-bird pricing.

We'll be scheduling workshops in other cities (and probably some more in Brisbane), so register your interest in other dates and cities, follow us on Twitter via @stack_mechanics and find us on Facebook as Stack Mechanics.

Command-line Add-BindingRedirect

Andrew Harcourt Tue 4 Oct 2016

One of the things I try to do as part of a build pipeline is to have automatic package updates. My usual pattern is something along the lines of a CI build that runs on every commit and every night, plus a Canary build that updates all the packages to their latest respective versions.

The sequence looks something like this:

  1. The Canary build:
    1. pulls the latest of the project from the master branch;
    2. runs nuget.exe update or equivalent;
    3. then compiles the code and runs the unit tests.
  2. If everything passes, it does (roughly) this:

    git checkout -b update-packages
    git add -A .
    git commit -m "Automatic package update"
    git push -f origin update-packages
    # Note: There's a bit more error-checking around non-merged branches and so on,
    # but that's fundamentally it.
  3. The CI build then:

    1. picks up the changes in the update-packages branch;
    2. compiles the code (yes, again) to make sure that we didn't miss anything in the previous commit;
    3. runs the unit tests;
    4. deploys the package to a CI environment;
    5. runs the integration tests; and
    6. if all is well, merges the update-packages branch back down to master.

For what it's worth, if a master build is green (and they pretty much all should go green if you're building your pull requests) then out the door it goes. You do trust your test suite, don't you? ;)

All of this can be done with stock TeamCity build steps with the exception of one thing: the call to nuget.exe update doesn't add binding redirects and there's no way to do that from the console. The Add-BindingRedirect PowerShell command is built into the NuGet extension to Visual Studio and there's no way to run it from the command line.

That's always been a bit of a nuisance, and I've hand-rolled hacky solutions to it several times in the past, so I've re-written a slightly nicer solution and open-sourced it. You can find the Add-BindingRedirect project on GitHub. Releases are downloadable from the Add-BindingRedirect releases page.

Pull requests are welcome :)

ConfigInjector 2.2 is out

Andrew Harcourt Tue 6 Sep 2016

ConfigInjector 2.2 is out and available via the NuGet feed.

This release is a small tweak to allow exclusion of settings keys via expressions as well as via simple strings. Thanks to Damian Maclennan for this one :).

To exclude settings keys via exact string matches, as before:

                         .RegisterWithContainer(configSetting => builder.RegisterInstance(configSetting)
                         .ExcludeSettingKeys("DontCareAboutThis", "DontCareAboutThat"))

To exclude settings keys via expression matches:

                         .RegisterWithContainer(configSetting => builder.RegisterInstance(configSetting)
                         .ExcludeSettingKeys(k => k.StartsWith("DontCare")))

Introducing NotDeadYet

Andrew Harcourt Sat 2 Apr 2016

NotDeadYet is a simple, lightweight library to allow you to quickly add a health-checking endpoint to your .NET application.

It has integrations for ASP.NET MVC, WebApi and Nancy.

Why do I want this?

To easily generate a health-check response like the JSON one shown later in this post.

When scaling out a web application, one of the first pieces of kit encountered is a load balancer. When deploying a new version of the application we generally pull one machine out of the load-balanced pool, upgrade it and then put it back into the pool before deploying to the next one.

NotDeadYet makes it easy to give load balancers a custom endpoint to do health checks. If we monitor just the index page of our application, it's quite likely that we'll put the instance back into the pool before it's properly warmed up. It would be a whole lot nicer if we had an easy way to get the load balancer to wait until, for instance:

  • We can connect to any databases we need.
  • Redis is available.
  • We've precompiled any Razor views we care about.
  • The CPU on the instance has stopped spiking.

NotDeadYet makes it easy to add a /healthcheck endpoint that will return a 503 until the instance is ready to go, and a 200 once all is well. This plays nicely with New Relic, Amazon's ELB, Pingdom and most other monitoring and load balancing tools.

Awesome! How do I get it?

Getting the package:

Install-Package NotDeadYet

In your code:

var healthChecker = new HealthCheckerBuilder().Build();   // add .WithHealthChecksFromAssemblies(...) to register your own checks

Doing a health check

var results = healthChecker.Check();
if (results.Status == HealthCheckStatus.Okay)
{
    // Hooray!
}
else
{
    // Boo!
}

Adding your own, custom health checks:

By default, NotDeadYet comes with a single ApplicationIsOnline health check which just confirms that the application pool is online. Adding your own (which is the point, after all) is trivial. Just add a class that implements the IHealthCheck interface and off you go.

public class NeverCouldGetTheHangOfThursdays : IHealthCheck
{
    public string Description
    {
        get { return "This app doesn't work on Thursdays."; }
    }

    public void Check()
    {
        // Example: just throw if it's a Thursday
        if (DateTimeOffset.Now.DayOfWeek == DayOfWeek.Thursday)
        {
            throw new HealthCheckFailedException("I never could get the hang of Thursdays.");
        }

        // ... otherwise we're fine.
    }

    public void Dispose()
    {
    }
}
Or a slightly more realistic example:

public class CanConnectToSqlDatabase : IHealthCheck
{
    public string Description
    {
        get { return "Our SQL Server database is available and we can run a simple query on it."; }
    }

    public void Check()
    {
        // We really should be using ConfigInjector here ;)
        var connectionString = ConfigurationManager.ConnectionStrings["MyDatabaseConnectionString"].ConnectionString;

        // Do a really simple query to confirm that the server is up and we can hit our database
        using (var connection = new SqlConnection(connectionString))
        {
            connection.Open();
            new SqlCommand("SELECT 1", connection).ExecuteScalar();
        }
    }

    public void Dispose()
    {
    }
}

There's no need to add exception handling in your health check - if it throws, NotDeadYet will catch the exception, wrap it up nicely and report that the health check has failed.

Framework integration

Integrating with MVC

In your Package Manager Console:

Install-Package NotDeadYet.MVC4

Then, in your RouteConfig.cs:

var thisAssembly = typeof (MvcApplication).Assembly;
var notDeadYetAssembly = typeof (IHealthChecker).Assembly;

var healthChecker = new HealthCheckerBuilder()
    .WithHealthChecksFromAssemblies(thisAssembly, notDeadYetAssembly)
    .Build();
routes.RegisterHealthCheck(healthChecker);


Integrating with Nancy

In your Package Manager Console:

Install-Package NotDeadYet.Nancy

Then, in your bootstrapper:

var thisAssembly = typeof (Bootstrapper).Assembly;
var notDeadYetAssembly = typeof (IHealthChecker).Assembly;

var healthChecker = new HealthCheckerBuilder()
    .WithHealthChecksFromAssemblies(thisAssembly, notDeadYetAssembly)
    .Build();



How do I query it?

Once you've hooked up your integration of choice (currently MVC or Nancy), just point your monitoring tool at /healthcheck.

That's it.

If you point a browser at it you'll observe a 200 response if all's well and a 503 if not. This plays nicely with load balancers (yes, including Amazon's Elastic Load Balancer) which, by default, expect a 200 response code from a monitoring endpoint before they'll add an instance to the pool.

Does this work with X load balancer?

If your load balancer can be configured to expect a 200 response from a monitoring endpoint, then yes :)

Can I change the monitoring endpoint?

Of course. In MVC land, it looks like this:

var healthChecker = new HealthCheckerBuilder()
    .WithHealthChecksFromAssemblies(typeof (MvcApplication).Assembly)
    .Build();

routes.RegisterHealthCheck(healthChecker, "/someCustomEndpoint");

and in Nancy land it looks like this:

HealthCheckNancyModule.EndpointName = "/someCustomEndpoint";

Does this work with my IoC container of choice?

NotDeadYet is designed to work both with and without an IoC container. There's a different configuration method on the HealthCheckerBuilder class called WithHealthChecks which takes a Func<IHealthCheck[]> parameter. This is designed so that you can wire it in to your container like so:

public class HealthCheckModule : Module
{
    protected override void Load(ContainerBuilder builder)
    {
        builder.RegisterAssemblyTypes(ThisAssembly, typeof (IHealthCheck).Assembly)
               .Where(t => t.IsAssignableTo<IHealthCheck>())
               .As<IHealthCheck>();

        builder.Register(CreateHealthChecker)
               .As<IHealthChecker>()
               .SingleInstance();
    }

    private static IHealthChecker CreateHealthChecker(IComponentContext c)
    {
        var componentContext = c.Resolve<IComponentContext>();

        return new HealthCheckerBuilder()
            .WithHealthChecks(() => componentContext.Resolve<IHealthCheck[]>())
            .WithLogger((ex, message) => componentContext.Resolve<ILogger>().Error(ex, message))
            .Build();
    }
}
This example is for Autofac but you can easily see how to hook it up to your container of choice.

Why don't the health checks show stack traces when they fail?

For the same reason that we usually try to avoid showing a stack trace on an error page.

Can I log the stack traces to somewhere else, then?

You can wire in any logger you like. In this example below, we're using Serilog:

var serilogLogger = new LoggerConfiguration()
    .WriteTo.ColoredConsole()
    .CreateLogger();

return new HealthCheckerBuilder()
    .WithLogger((ex, message) => serilogLogger.Error(ex, message))
    .Build();

Do the health checks have a timeout?

They do. All the health checks are run in parallel and there is a five-second timeout on all of them.
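As a sketch of that behaviour (this illustrates the run-everything-in-parallel-with-a-single-time-budget idea; it's not NotDeadYet's actual implementation, and the class name is made up):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public static class ParallelHealthChecksSketch
{
    // Run every check concurrently and give the whole batch one time budget.
    // A check that throws, returns false or doesn't finish in time fails the batch.
    public static bool AllHealthy(Func<bool>[] checks, TimeSpan timeout)
    {
        var tasks = checks.Select(check => Task.Run(check)).ToArray();
        try
        {
            var finishedInTime = Task.WaitAll(tasks, timeout);
            return finishedInTime && tasks.All(t => t.Result);
        }
        catch (AggregateException)
        {
            return false; // at least one check threw
        }
    }
}
```

Because the batch shares one budget, a single slow check can't hold the endpoint hostage indefinitely.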

You can configure the timeout like this:

var healthChecker = new HealthCheckerBuilder()

What does the output from the endpoint look like?

It's JSON and looks something like this:

{
  "Status": "Okay",
  "Results": [
    {
      "Status": "Okay",
      "Name": "ApplicationIsRunning",
      "Description": "Checks whether the application is running. If this check can run then it should pass.",
      "ElapsedTime": "00:00:00.0000006"
    },
    {
      "Status": "Okay",
      "Name": "RssFeedsHealthCheck",
      "Description": "RSS feeds are available and have non-zero items.",
      "ElapsedTime": "00:00:00.0005336"
    }
  ],
  "Message": "All okay",
  "Timestamp": "2015-11-14T11:42:35.3040908+00:00",
  "NotDeadYet": ""
}

Sensitive setting support in ConfigInjector

Andrew Harcourt Wed 30 Mar 2016

ConfigInjector 2.1 has been released with support for sensitive settings.

This is a pretty simple feature: if you have a sensitive setting and want to be cautious about logging it or otherwise writing it to an insecure location, you can now flag it as IsSensitive and optionally override the SanitizedValue property.

If you just want to mark a setting as sensitive, override the IsSensitive property to return true. This will allow you to make your own judgements in your own code as to how you should deal with that setting. You can, of course, still choose to log it - it's just an advisory property.

If you want to be a bit more serious, you can also override the SanitizedValue property to return a sanitized version of the value. If you're logging settings anywhere, you should log the SanitizedValue property rather than the raw Value one.

public class FooApiKey : ConfigurationSetting<string>
{
    public override bool IsSensitive => true;

    public override string SanitizedValue => "********";
}

It's worth noting that these properties do not change the behaviour of ConfigInjector; they simply allow us to be a bit more judicious when we're dealing with these settings.
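For example, a settings-dump helper that respects the flag might look like the sketch below. The ConfigurationSetting<T> base class here is a simplified stand-in so that the example is self-contained; ConfigInjector's real base class does more.

```csharp
using System;

// Simplified stand-in for ConfigInjector's ConfigurationSetting<T>.
public abstract class ConfigurationSetting<T>
{
    public T Value { get; set; }
    public virtual bool IsSensitive => false;
    public virtual string SanitizedValue => Value == null ? null : Value.ToString();
}

public class FooApiKey : ConfigurationSetting<string>
{
    public override bool IsSensitive => true;
    public override string SanitizedValue => "********";
}

public static class SettingsDump
{
    // Always log the sanitized value; for non-sensitive settings it's the raw value anyway.
    public static string Describe<T>(ConfigurationSetting<T> setting)
    {
        return string.Format("{0} = {1}", setting.GetType().Name, setting.SanitizedValue);
    }
}
```

Dumping a FooApiKey this way yields `FooApiKey = ********` rather than leaking the key.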

Talk video: Back to basics: simple, elegant, beautiful code

Andrew Harcourt Fri 25 Mar 2016

This is the video of the talk I gave at DDD Brisbane 2015.

As a consultant I see so many companies using the latest and greatest buzzwords, forking out staggering amounts of cash for hardware and tooling and generally throwing anything they can at the wall to see what sticks. The problem? Their teams still struggle to produce high-quality output and are often incurring unsustainable technical debt. Codebases are still impossible to navigate and there's always that underlying dread that one day soon someone is going to discover what a mess everything is.

How can this happen? It wasn't supposed to be this hard! Don't we all know all this stuff by now?

Let's take a look at some patterns and practices to reduce the cognitive load of navigating a codebase, maintaining existing features and adding new ones, and all while shipping high-quality products. Fast.

Back to basics: simple, elegant, beautiful code

What's new in ConfigInjector 2.0

Andrew Harcourt Mon 14 Dec 2015

ConfigInjector 2.0 has hit the NuGet feed.

What's new? A handful of things:

  • Support for overriding settings via environment variables (useful for regression suites on build servers).
  • Support for loading settings from existing objects.
  • Logging hooks to allow callers to record where settings were loaded from and what their values were.

There are some breaking changes with respect to namespaces, so it's a major version bump, but unless you're doing anything really custom it should still be a straightforward upgrade.

Building Tweets from the Vault: Twitter OAuth

Andrew Harcourt Sun 1 Feb 2015

In Building Tweets from the Vault: NancyFX tips and tricks we took a look at some of the refactoring-friendly approaches we can take when building a NancyFX application.

In this post we'll see a very simple example of how Tweets from the Vault uses Twitter and Tweetinvi, a nice-to-call .NET library for Twitter. Tweetinvi has a whole lot more features than authentication but authentication in general is the main focus of this post, so here goes.

I'll state in advance that this post only briefly touches upon some really elementary application security. I'll write a bit more about that in subsequent posts, but please don't treat this advice as anything other than a few brief pointers on the most basic of things. Do your homework. This stuff is important.

Requiring authentication in NancyFX

To begin with, our Nancy application has a base module class from which almost all others derive:

public abstract class AuthenticatedModule : RoutedModule
{
    protected AuthenticatedModule() { this.RequiresAuthentication(); }
}

Our AuthenticatedModule just demands authentication and leaves everything else to its derived classes. It's worth noting that there's also a convention test (which we'll discuss in another post) that asserts that every single module in the app must explicitly derive from either AuthenticatedModule or UnauthenticatedModule so as to leave no room for "Oh, I forgot to set the security on that one."

In Tweets from the Vault, we're using NancyFX's StatelessAuthentication hook. We actually add an item to the request pipeline to check for 401 responses and send a redirect. In this way, our individual modules can just demand an authenticated user and return a 401 if not. It's up to the rest of our pipeline to figure out that we should probably present a kinder response.

In our bootstrapper:

protected override void ApplicationStartup(ILifetimeScope container, IPipelines pipelines)
{
    using (Log.Logger.BeginTimedOperation("Application starting"))
    {
        // A bunch of irrelevant stuff elided here

        ConfigureAuth(pipelines);
    }
}

private static void ConfigureAuth(IPipelines pipelines)
{
    // Yes, we're using the container as a service locator here. We're resolving
    // a .SingleInstance component once in a bootstrapper so I'm okay with that.
    var authenticator = IoC.Container.Resolve<Authenticator>();

    StatelessAuthentication.Enable(pipelines,
                                   new StatelessAuthenticationConfiguration(authenticator.Authenticate));

    pipelines.AfterRequest.AddItemToEndOfPipeline(ctx =>
    {
        if (ctx.Response.StatusCode == HttpStatusCode.Unauthorized)
        {
            var response = new RedirectResponse(Route.For<SignIn>());
            ctx.Response = response;
        }
    });
}

Let's have a look at what this is doing. We can see that the item is being added to the end of the pipeline. That means that it will be executed after our module has done its thing and returned. If the module does an early exit and returns a 401, that will be observable in ctx.Response.StatusCode and we'll mess with it; otherwise we'll just pass the response straight through.

If we've observed a 401, we clobber the 401 response with a 302 and bounce the user back to the SignIn page using the Route.For expression that we looked at in Building Tweets from the Vault: NancyFX tips and tricks. It's noteworthy that the browser will never see a 401; just a 302.

What about Twitter and OAuth?

The assumption I'm making here is that you'll actually want to do something on behalf of a user using the Twitter API. That's pretty obvious as it's what Tweets from the Vault does, but I'm going to state up-front: if all you want is an identity via OAuth, this is a harder way to do it than you need. If you want API access, however, then read on.

The first thing you'll need is an application on Twitter. Create one via Twitter's application management page.

The next thing you'll need is an X.509 certificate. You'll be telling Twitter to pass keys to access other people's accounts via GET parameters, so don't be sending those around the place in plaintext. Incidentally, Twitter does support localhost as a valid redirect URL target, so you'll be fine for your own testing. Just make sure that you never present a sign-in/sign-up page other than via HTTPS, and likewise make sure your callback URL is HTTPS as well.

You'll also want the Tweetinvi package:

Install-Package Tweetinvi

Once we've hit our SignIn page, it creates a Twitter sign-in credentials bundle using Tweetinvi. This isn't exactly the code in Tweets from the Vault as there are a few abstractions here and there - I've inlined a few things - but it's pretty close. In our SignIn module:

// You'll want to stash these somehow as there's a single-use token in there
// that you'll need to decode the response.
var temporaryCredentials = TwitterCredentials.CreateCredentials(userAccessToken,
var authenticationUrl = CredentialsCreator.GetAuthorizationURLForCallback(temporaryCredentials, redirectUrl);

// Hack because the Tweetinvi library doesn't seem to support just authentication - it wants to make an
// authorize call all the time. This will happen anyway on the first time someone uses your app but
// forever after an authenticate call will just bounce straight back whereas an authorize call will
// continue to prompt.
authenticationUrl = authenticationUrl.Replace("oauth/authorize", "oauth/authenticate");

We redirect the user to that authenticationUrl, which will be somewhere on Twitter's domain, and Twitter will present them with an "Authorize this App" page.

Then, in our SignInCallback module:

var temporaryCredentials = /* fetch these from wherever you stashed them */
var userCredentials = CredentialsCreator.GetCredentialsFromCallbackURL(callbackUrl, temporaryCredentials);
var twitterUser = User.GetLoggedUser(userCredentials);

At this point, we have a valid Twitter user who's been verified for us by Twitter (thank you :) ). We'll also have a set of keys that allow us to make API calls as that user, to the extent permitted by the privileges the user granted our app.

Of course, if the user declines to authorise the Twitter app to use their account then you'll get back a different response. Be sure to handle that.

Now what?

Now we have a user who's just presented us with a valid set of callback tokens from Twitter via a redirect URL. That's nice, but we shouldn't be leaving those lying around. What we should be doing is generating our own authentication token of some sort and sending that back as a cookie. (Remember to give people some way to destroy that cookie once they leave a machine, too - you need a "Sign out" button[1]!)

A good way to do this is using a JSON Web Token or similar. There are a bunch of libraries (and opinions) out there on The One True Way™ to do it but the general principle is roughly the same as HTTP cookies: you shove a bunch of claims into a JSON object, sign it and give it to the browser. When it makes a request it can supply that via a cookie.

The JWT standard doesn't specify encryption - it's about sending information in plaintext but making it verifiable. That said, if you don't have to inter-operate with anyone else (i.e. you're just doing your own sign-on, not implementing SSO across a group of sites) then go ahead and encrypt it. It will help prevent other people stickybeaking into what you've bundled into there but still let you use someone else's library code rather than hand-rolling your own. It should go without saying[2] that if you're going to put any sensitive information into it then 1) have a careful think about whether you actually need to do that, and 2) make sure you're using a reputable encryption algorithm with a decent-length key.

Using this approach you can put pretty much anything into your token. As a general rule, I'd like to be able to load a page and only hit persistent storage for data specific to that page. Loading a user's name, profile picture URL or anything else that is part of the ambient experience goes into the encrypted token. This means that I can render most pages without hitting a dbo.Users or similar. The token doesn't need to be readable by anyone else but it does need to be relatively small as it's going to be transmitted by the browser on every request. Also think about what you'll do in the case of wanting to disable a user account - if you're not checking dbo.Users every request then how will you know to return a 403?

Be sensible. Don't create another ViewState. Don't treat it like session state[3].
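To make the shape of the thing concrete, here's a minimal sign-and-verify sketch using an HMAC. This is an illustration only: use a maintained JWT library in real code, use a constant-time comparison for the signature check, and consider encrypting the payload as discussed above.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class SignedTokenSketch
{
    // Issue a token: base64(claims JSON) + "." + base64(HMAC-SHA256 of that payload).
    public static string Issue(string claimsJson, byte[] key)
    {
        var payload = Convert.ToBase64String(Encoding.UTF8.GetBytes(claimsJson));
        return payload + "." + Sign(payload, key);
    }

    // Returns the claims JSON if the signature checks out; null otherwise.
    public static string Verify(string token, byte[] key)
    {
        var parts = token.Split('.');
        if (parts.Length != 2 || Sign(parts[0], key) != parts[1]) return null;
        return Encoding.UTF8.GetString(Convert.FromBase64String(parts[0]));
    }

    private static string Sign(string payload, byte[] key)
    {
        using (var hmac = new HMACSHA256(key))
        {
            return Convert.ToBase64String(hmac.ComputeHash(Encoding.UTF8.GetBytes(payload)));
        }
    }
}
```

The browser only ever sees an opaque blob; the server can verify it on every request without a database hit.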

So we're done?

Not quite. You'll probably want to create your own representation of a user once you have a confirmed Twitter identifier. I'd also use the Twitter 64-bit int as your foreign key, not the username, as that may well change.

It's worth bearing in mind that Twitter's OAuth solution does not provide users' email addresses so that's something you'll have to either request for yourself or live without. That's up to you :) Likewise, we're relying on Twitter's anti-spam measures to prevent malicious sign-ups. That's not unreasonable in the first instance but don't expect it to be perfect.

In the next post in this series, we'll take a look at some interesting domain event modelling as part of implementing payments using Stripe.

[1] And it should generate a POST request, not a GET one, but that's a story for another day.

[2] Hence, of course, needing to say it...

[3] An evil to be discussed some other time...

Building Tweets from the Vault: NancyFX tips and tricks

Andrew Harcourt Mon 19 Jan 2015

In Building Tweets from the Vault: Azure, TeamCity and Octopus, I wrote about the hosting and infrastructure choices I made for Tweets from the Vault. This article will cover a bit more about the framework choices, notably NancyFX.


NancyFX may sound like a bit more of an esoteric choice, especially to the Microsoft-or-die crowd. I've been having a pretty fair amount of success with Nancy, however. I love the way that it just gets out of the road and provides a close-to-the-metal experience for the common case but makes it simple to extend behaviour.

By all means, it's not perfect - I'm not a huge fan of "dynamic, dynamic everywhere" - but it's way better than MVC for my needs. The upgrade path is a whole lot less troublesome, too - the best advice I've found for upgrading between major versions of MVC is to create a new project and copy the old content across.

Application structure

The equivalent of an MVC controller in NancyFX is the module. In a typical MVC controller, there are lots (usually far too many) methods (controller actions) that do different things. While this isn't strictly a feature of the framework, all the sample code tends to guide people down the path of having lots of methods on an average controller, with a correspondingly large number of dependencies.

In MVC, routing to controller actions is taken care of by convention, defaulting to the controller's type name and method name. For instance, the /Home/About path would (by default) map to the About() method on the HomeController class.

Nancy routes are wired up a little bit differently. Each module gets to register the routes that it can handle in its constructor, so if I were to want to emulate the above behaviour I'd do something like this:

public class HomeModule : NancyModule
{
    public HomeModule()
    {
        Get["/Home/Index"] = args => /* some implementation here */;
        Get["/Home/About"] = args => /* some implementation here */;
        Get["/Home/Contact"] = args => /* some implementation here */;
    }
}

Obviously, if we want the same Nancy module to handle more than one route then we just wire up additional routes in the module's constructor and we're good.

This is nice in a way but it's also a very easy way to cut yourself and I tend to not be a fan. Not only that, but it still leads us down the path of violating the Single Responsibility Principle in our module.

My preference is to have one action per module and to name and namespace each module according to its route. Thus my application's filesystem structure would look something like this (abridged; the exact layout is illustrative):

    Modules/
        Index.cs
        Home/
            About.cs
            Index.cs


This makes it incredibly easy to navigate around the application and I never have to wonder about which controller/module/HTTP handler is serving a request for a particular path.

My About.cs file would therefore look something like this (for now):

public class About : NancyModule
{
    public About()
    {
        Get["/Home/About"] = args => /* some implementation here */;
    }
}


One problem with the above approach is that it's not refactoring-friendly. If I were to change the name of the About class then I'd also need to edit the route registration's magic string. Magic strings are bad, mmmkay?

A simple approach for the common case (remembering that it's still easy to manually register additional routes) is to just derive the name of the route from the name and namespace of the module. (Hey, I didn't say that all of MVC was bad.)

public abstract class RoutedModule : NancyModule
{
    protected RoutedModule()
    {
        var route = Route.For(GetType());
        Get[route, true] = (args, ct) => HandleGet(args, ct);
        Post[route, true] = (args, ct) => HandlePost(args, ct);
    }

    protected virtual async Task<dynamic> HandleGet(dynamic args, CancellationToken ct)
    {
        return (dynamic) View[ViewName];
    }

    protected virtual Task<dynamic> HandlePost(dynamic args, CancellationToken ct)
    {
        throw new NotSupportedException();
    }

    protected virtual string ViewName
    {
        get { return this.ViewName(); }
    }
}

This now allows for our About.cs file to look like this:

public class About : RoutedModule
{
}


We're not quite there yet. I'm not a fan of magic strings and in the above example you can see a call to a static Route.For method. That method is where the useful behaviour is, and it looks like this:

public static class Route
{
    private static readonly string _baseNamespace = typeof (Index).Namespace;

    public static string For<TModule>() where TModule : RoutedModule
    {
        return For(typeof (TModule));
    }

    public static string For(Type moduleType)
    {
        var route = moduleType.FullName
                              .Replace(_baseNamespace, string.Empty)
                              .Replace(".", "/")
                              .Replace("//", "/");
        return route;
    }

    public static string ViewName(this RoutedModule module)
    {
        throw new NotImplementedException();   // Left as an exercise for the reader :)
    }
}

This allows us to have a completely refactor-friendly route to an individual action. There are a couple of similar routing efforts for MVC, notably in MVC.Contrib and MvcNavigationHelpers, but this lightweight approach doesn't require building and parsing of expression trees. (It's worth noting that it doesn't account for a full route value dictionary, either, but you can add that if you like.)
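The derivation itself is just string manipulation on the type's full name, so it's easy to sanity-check in isolation. In this sketch the root namespace is hypothetical and the method takes the full name as a string so that it stands alone:

```csharp
using System;

public static class RouteDerivationSketch
{
    // Hypothetical root namespace for the modules in the examples above.
    private const string BaseNamespace = "TweetsFromTheVault.Modules";

    // Same transformation as Route.For: strip the root namespace,
    // then turn namespace separators into path segments.
    public static string For(string moduleTypeFullName)
    {
        return moduleTypeFullName
            .Replace(BaseNamespace, string.Empty)
            .Replace(".", "/")
            .Replace("//", "/");
    }
}
```

A module class whose full name is TweetsFromTheVault.Modules.Home.About comes out as /Home/About, so renaming the class renames the route.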

In our views, our URLs can now be generated like this:

<a class="navbar-brand" href="@(Route.For<Index>())">
    Tweets from the Vault
</a>

and in our modules, like this:

return new RedirectResponse(Route.For<Index>());

A quick ^R^R (Refactor, Rename, for all you ReSharper Luddites) of any of our modules and you can see that we haven't broken any of our links or redirects.

In the next post in this series, we'll take a quick look at authenticating with Twitter using OAuth.

Building Tweets from the Vault: yet another Bootstrap site?

Andrew Harcourt Sat 17 Jan 2015

There's even a Tumblr for this.

The reality, however, is that Bootstrap is incredibly popular for good reason. It's responsive out-of-the-box, is delivered via other people's CDNs (for which I thank you :) ) and provides a relatively familiar UI paradigm.

In keeping with the "minimum viable product" theme, Bootstrap allows for very quick... err... bootstrapping of a pleasant, clean, simple web application with the minimum of fuss.

This is just a quick post to get the "Yet another Bootstrap site?" question out of the way. In the next post in this series I'll look in a bit more detail at NancyFX and some sneaky tricks to make it refactoring-friendly.

Building Tweets from the Vault: Azure, TeamCity and Octopus

Andrew Harcourt Thu 15 Jan 2015

In the previous post in this series, Building Tweets from the Vault: Minimum Viable Product, I wrote about the absolute minimum feature set to get Tweets from the Vault off the ground.

In this post I'm going to write a bit more about some of the technology choices. So... what were they, and why?

Azure + TeamCity + Octopus Deploy

Obviously, I was going to need a hosting platform that was easy to get started with but that could scale if (when? ha!) my app hits the big-time. The app (and the article you're currently reading) runs on IIS on an Azure VM, deployed via Octopus Deploy.

To ship a feature:

git add -A .
git commit -m "Added some feature"
git push

TeamCity will pick up that change, build it, run my tests, push a package to Octopus Deploy and then deploy that package to my test environment. A quick sanity-check there[1] and then it gets promoted to the live Azure environment. Using this tool suite a change can go from my MacBook to production via a sensible build + test + deploy pipeline in under two minutes.

For anything more complicated, I'll use a feature branch. TeamCity will automatically pick up and build refs/heads/* so all of my branches get the same treatment, all the way through to packaging in Octopus and deploying to a test site.

Hotfixes are treated in the same way as feature branches. If I have to revert to any particular revision, it's simple:

git checkout some-hash
git checkout -b hotfix-some-fix
git add -A .
git commit -m "Fixed some bug"
git push

That build will go straight to my test environment through the normal build + test + deploy pipeline and I can then tell Octopus to promote that hotfix package to production. No mess; no fuss.

In the next posts in this series, I'll write a bit about Bootstrap and NancyFX.

[1] I trust my test suite. You should trust yours, too - or write better ones ;)

Building Tweets from the Vault: Minimum Viable Product

Andrew Harcourt Tue 13 Jan 2015

Tweets from the Vault is a service that will take a random[1] historical item from your RSS feeds and tweet a link to it.

When I started building the service, my goals were simple:

  • Solve my own problem
  • Get to a minimum viable product as quickly as possible

In this series of posts I'm going to look at each of those points in a little more detail.

Solve my own problem

There's a bunch of content in my blog that is still very relevant today. Lots of stuff on agile; lots on software principles and some valuable odds and ends that were starting to be lost to the archives.

I have IFTTT tweeting content as soon as it hits my RSS feeds, which is great, and obviously that leads to people's visiting those posts while that tweet appears in their timeline. Once that tweet falls off their timeline, though, all bets are off - nobody's likely to see that tweet ever again.

I wanted a solution that would periodically fish out a historical but relevant article and tweet a link to it and there wasn't a service that did that for me in a way that I liked. The closest I could find was an outdated Wordpress plugin (and I don't use Wordpress). Well... I blog mostly about software so why wouldn't I write one for myself? And, if I were to write one for myself, perhaps I could tweak it a bit and make it useful to other people. And thus, Tweets from the Vault was born.

Minimum viable product

The minimum viable product for me was pretty simple:

  • Sign in using a Twitter account
  • Set up a small, recurring payment
  • Add and remove RSS feeds
  • On a schedule:
    • Pick a random article from that set of RSS feeds
    • Tweet it

And that's pretty much it.

In the next post in this series, I'll start looking at some of the technology choices for the app.

[1] For a given definition of "random".

Tweets from the Vault

Andrew Harcourt Wed 7 Jan 2015

I built a thing!

I keep linking people to old blog posts of mine. Sometimes it's to solve a problem that was solved a long time ago; other times it's to make a point that the argument they're having isn't new. Either way, there's a whole bunch of valuable content locked up with nothing but an "Archives" link on a web site to show that it ever existed.

I went looking for a way to re-publish some of this content and couldn't find anything that did what I wanted. Thus, Tweets from the Vault was born.

Tweets from the Vault is a paid service that will pick a random item out of any set of RSS feeds you give it and tweet it from your account.

Landing Page Screenshot

Dashboard Screenshot

I've priced the lowest plan of one tweet per day at $1/month. That's less than the average person would spend on electricity to power their laptop for the month, and way less than you'd spend on even a half-decent coffee.

You'll be seeing it in my Twitter feed - and I hope I'll see it in yours :)

Am I interviewing you? Here's what I'm going to ask.

Andrew Harcourt Thu 20 Mar 2014

If you have an interview scheduled with me, here's what I'm going to ask you.

This is a tongue-in-cheek guide to interviewing with anyone. There'll be some fun poked, some snark and some genuine advice. I'll leave it to you to decide which is which :)

So... Hi! Firstly, if you've arrived here because you're doing your homework, good for you. Have a point. Have two, even. I'm feeling generous today.

I do a lot of interviews. Depressingly, even with all the advice available to them, people still fall down on the same things time after time after time. If you're diligent enough to be doing your homework by reading this, you should be fine :) If you're reading this after your interview with a sinking feeling in your stomach... well... this homework was probably due yesterday. Sorry.

Be on time.

If we're interviewing in person, be on time. I'll be there. If we're interviewing via Skype, I will add you as a contact at precisely our scheduled time. Expect this. Be online. Failing at time zone calculations for any position in the software industry does not bode well.

If you're horrified that I even have to say this then have another point :)

Ask me questions.

With respect to consulting, I want you to treat me as you'd treat a client. The questions you ask in advance and during the interview will help both of us. It will help you understand if you're answering my questions well, and it will help me understand that you know what questions to ask.

Ask me about stuff you're curious about. Ask me about pretty much anything. Just show that you can hold a conversation and elicit useful knowledge about a topic at the same time.

Ask me about anything that helps you decide whether we're a good cultural fit. I'll be asking you similar questions and fair's fair.

I'm hiring colleagues, not minions. I want to like you.

I'll be hoping that you'd like to work with me. Your mission is to make me want to work with you. I'm looking for people who are interesting, engaging and fun to hang out with. I want people on my teams who other people will want to work with. We'll probably be spending a fair bit of time together and I'd like that time to be enjoyable for all parties.

I'm going to ask you about what you're strongest in.

There's no value in finding weaknesses in things you've already told me you're not great at. That's fine. If you say you hate Windows Forms then why would I ask esoteric questions about it? That serves neither of us well. (Besides, I hate WinForms, too.) I'm going to play to your strengths. If your strengths are strong, good for you. If your strengths are weak then I don't need to dig much further.

If you tell me you rate yourself as a thought leader in a space, I expect you to be able to teach it to me from first principles because that's what your clients will be paying four figures per day for. If you tell me, for instance, that you're a thought leader on an open-source framework, I'll assume you're a committer to it and ask you what you last pushed.

If you're good at something, say so. It's your opportunity to show knowledge and enthusiasm. If you're not good at something, say so and we'll move on. That's perfectly okay and I won't hold it against you. Don't bluff. I'll call.

I'm going to ask about what you're interested in, not what I'm interested in.

I want you to get enthusiastic about something. Teach me something. Make me enthusiastic about/interested in something. Creating enthusiasm and engagement in other people is a life skill, not just a consulting one. I'm hiring for that skill.

I will expect you to know your fundamentals.

In the .NET space, this means that I'm going to ask you about the CLR, stack, heap, memory allocation, garbage collection, generics and all the other stuff that you use day in and day out.

In the agile space I'm going to ask you for opinions about Scrum, Kanban, lean and so on. You're going to need to discuss these, not just parrot the definition of a user story.

We'll cover lots of other topics but not knowing your fundamentals is a cardinal sin. It's akin to stealing from your client and it's... not a behaviour I'd encourage. They're called fundamentals for a reason :)

If you'll be paid to write code then, yes, I will expect you to write code.

You're probably interviewing for some kind of software engineering position. Be prepared to demonstrate that you can walk the walk.

Final notes.

People generally take this kind of advice in one of two ways:

  1. They're offended because some of it applies to them.
  2. They're horrified that it would apply to anybody.

If you're the latter then we'll probably have war stories to share, heaps to chat about and I'll be looking forward to meeting you. If you're the former then even if you're offended by what I've written I hope it's constructive in one way or another for you. Have fun storming the castle!

Brisbane Azure User Group talk on Azure Service Bus Made Easy

Andrew Harcourt Tue 18 Mar 2014

Damian Maclennan and I did a talk at the Brisbane Azure User Group on Azure Service Bus Made Easy. Here's the video :)

Azure Service Bus Made Easy

And here are Damian's slides from the night:

Azure Service Bus Made Easy

Support for long-running handlers in Nimbus

Andrew Harcourt Thu 13 Mar 2014

Shiny, new feature: Nimbus now allows command/event/request handlers the option to run for extended time periods.

From day zero Nimbus has supported competing command handlers, allowing us to spin up an arbitrary number of handlers to increase throughput. One issue we've run into concerns reliable message handling: how and when retries are attempted.

You'd think (naively) that a normal workflow would look something like this:

  1. Pop a command from the queue.
  2. Handle that command.

But what happens when the command handler goes bang? We need some way of putting that command back onto the queue for someone else to attempt. Again, a naive approach would be something like this:

  1. Pop a command from the queue.
  2. Handle that command.
  3. If that command goes bang, put it back onto the queue.

So... where does the command live during Step #2? The only place for it to live is on the node that's actually doing the work - and this is a problem. If that node simply throws an exception then we could catch it and put the message back onto the queue. But what if the power goes out? Or a disk goes crunch? (Or crackle, given that we're in SSD-land now?) What if that node never comes back?

If that node never comes back, the message never gets re-enqueued, which means we've violated our delivery guarantee. Oops.

Thankfully, that's not how it works. Under the covers, the Azure Message Bus does some clever stuff for us. The actual workflow looks something like this:

  1. Tentatively pop a message from the head of the queue.
  2. Attempt to handle that message.
  3. If we succeed, call BrokeredMessage.Complete()
  4. If we fail, call BrokeredMessage.Abandon()

The missing piece in this puzzle is still what happens if the power goes out. In this case, the Azure Message Bus will automatically re-queue the message after a certain time period (called the peek-lock timeout) and won't allow the original (now-timed-out) handler to call either .Complete() or .Abandon() on the message any more. In essence, it's saying "You get XX seconds to handle the message and if I don't hear back from you one way or the other before that time elapses then I'll assume you've vanished and will give someone else a chance to handle it."
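Stripped of Nimbus's plumbing, that peek-lock workflow can be sketched directly against the classic Azure Service Bus SDK (the `Microsoft.ServiceBus.Messaging` types; the queue setup and handler details here are illustrative only):

```csharp
using System;
using Microsoft.ServiceBus.Messaging;

public class PeekLockWorker
{
    // Assumes the QueueClient was created in peek-lock mode, e.g.
    // QueueClient.CreateFromConnectionString(conn, path, ReceiveMode.PeekLock)
    private readonly QueueClient _queueClient;

    public PeekLockWorker(QueueClient queueClient)
    {
        _queueClient = queueClient;
    }

    public void PumpOnce()
    {
        // 1. Tentatively pop a message. It's now invisible to other consumers
        //    until the peek-lock timeout elapses.
        var message = _queueClient.Receive(TimeSpan.FromSeconds(10));
        if (message == null) return;   // nothing on the queue right now

        try
        {
            Handle(message);
            message.Complete();   // 3. Success: the message is gone for good.
        }
        catch (Exception)
        {
            message.Abandon();    // 4. Failure: hand it straight back to the queue.
        }
        // If the process dies before either call, the broker re-queues the
        // message itself once the peek-lock timeout expires.
    }

    private void Handle(BrokeredMessage message)
    {
        // actual message handling goes here
    }
}
```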

So what's the problem, then?

The problem arises when we have a command handler that legitimately takes longer than the peek-lock timeout to do its thing. We've seen this scenario in the wild with people doing things like Selenium-based screen-scraping of legacy web sites, really long-running aggregate queries or ETL operations on databases and a bunch of other scenarios.

Let's have a look at our PizzaMaker as an example. Here's our IncomingOrderHandler class:

public class IncomingOrderHandler : IHandleCommand<OrderPizzaCommand>
{
    private readonly IPizzaMaker _pizzaMaker;

    public IncomingOrderHandler(IPizzaMaker pizzaMaker)
    {
        _pizzaMaker = pizzaMaker;
    }

    public async Task Handle(OrderPizzaCommand busCommand)
    {
        await _pizzaMaker.MakePizzaForCustomer(busCommand.CustomerName);
    }
}

and our PizzaMaker looks something like this:

public class PizzaMaker : IPizzaMaker
{
    private readonly IBus _bus;

    public PizzaMaker(IBus bus)
    {
        _bus = bus;
    }

    public async Task MakePizzaForCustomer(string customerName)
    {
        await _bus.Publish(new NewOrderReceived {CustomerName = customerName});
        Console.WriteLine("Hi {0}! I'm making your pizza now!", customerName);

        await Task.Delay(TimeSpan.FromSeconds(45));

        await _bus.Publish(new PizzaIsReady {CustomerName = customerName});
        Console.WriteLine("Hey, {0}! Your pizza's ready!", customerName);
    }
}

Let's say that the peek-lock timeout is set to 30 seconds and making a pizza takes 45 seconds. What will happen in this case is that the first handler will be spun up and given the command instance to handle. It will start to do its thing and all is well and good. Thirty seconds later, the bus decides that that handler has died so it revokes its lock, puts the message back at the head of the queue and promptly gives it to someone else.

After another 15 seconds, the first handler will finish (presumably successfully) and will attempt to call .Complete() on its message, which will throw an exception as the handler no longer holds a lock. What's worse is that this will repeat until the maximum number of delivery attempts has been exceeded.

We've just made five pizzas for the one order. And none of them has been recorded as successful. Oops.

What do I have to do to make it all Just Work™?

All you need to do is implement the ILongRunningHandler interface on your handler class. Let's update our IncomingOrderHandler example from earlier:

public class IncomingOrderHandler : IHandleCommand<OrderPizzaCommand>, ILongRunningHandler  // Note the additional interface
{
    private readonly IPizzaMaker _pizzaMaker;

    public IncomingOrderHandler(IPizzaMaker pizzaMaker)
    {
        _pizzaMaker = pizzaMaker;
    }

    public async Task Handle(OrderPizzaCommand busCommand)
    {
        await _pizzaMaker.MakePizzaForCustomer(busCommand.CustomerName);
    }

    // Note the new property
    public bool IsAlive
    {
        get { return true; }
    }
}

The ILongRunningHandler interface has a single, read-only property on it: IsAlive. All you need to do is return true if your handler is still happily executing or false if it's not. In this case, we've taken the very naive approach of just returning true, but it might make more sense, for instance, to ask our PizzaMaker instance whether it still has an order for the customer in the works.

Under the covers, Nimbus will automatically renew the lock it's taken out on the message for you so that you can take as long as you like to handle it.
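Conceptually, that automatic renewal is just a background loop that keeps pushing the peek-lock timeout out while your handler reports itself alive. A rough sketch (the real Nimbus internals differ; `BrokeredMessage.RenewLock()` is the underlying SDK call):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceBus.Messaging;

public static class LockRenewer
{
    // handlerTask is the in-flight Handle(...) invocation;
    // renewalInterval should be comfortably shorter than the peek-lock timeout.
    public static async Task KeepRenewingAsync(BrokeredMessage message,
                                               ILongRunningHandler handler,
                                               TimeSpan renewalInterval,
                                               Task handlerTask)
    {
        while (!handlerTask.IsCompleted && handler.IsAlive)
        {
            await Task.Delay(renewalInterval);
            message.RenewLock();   // take out the lease on the message again
        }
    }
}
```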

The Readify Firehose: An aggregated feed of a bunch of random Readifarians

Andrew Harcourt Wed 12 Mar 2014

In true Readify style, we had the idea for a thing the other day and launched it the next morning. It's a small thing but still a thing.

Well, we built it, launched it, got a bunch of people to agree that it was a good idea, populated it and publicised it. That was Thursday night/Friday morning.

The Firehose

The Readify Firehose is a simple aggregated RSS feed of a bunch of participating Readify consultants and other bloggers. It's entirely opt-in so you may or may not find your favourite author there (yet) but we hope it's useful.

You can have a look around, or subscribe directly to the feed.

Your domain model is too big for RAM

Andrew Harcourt Tue 11 Mar 2014

Here's the video from my DDD Brisbane 2013 talk.

Your domain model is too big for RAM (and other fallacies)

Stopping a Visual Studio build on first error

Andrew Harcourt Sat 8 Mar 2014

I can't believe I've lived without this for so long.

Einar Egilsson has written a wonderfully-useful little plugin for Visual Studio that will allow you to stop the entire build process as soon as there's a single project that fails.

This helps when you have a solution with tens (Hundreds? Please don't do that.) of projects in them and there's a compilation failure in one of the core projects upon which most of the others depend. You know that the build's going to fail but often it's just too much hassle to stop it manually - especially if you're on a keyboard that doesn't have a Break key.

The plugin has been around for a few years now and I can't believe I've never searched for something like it before today.

You can get the plugin from the Visual Studio Gallery.

ConfigInjector now supports static loading of settings

Andrew Harcourt Thu 6 Mar 2014

ConfigInjector 1.1 has just been released.

It now supports static loading of individual settings so you can grab settings directly from your app/web.config files without using the dreaded magic strings of ConfigurationManager.AppSettings.

This is a necessary feature but please give some serious thought to whether it's a good idea in your particular case. If you genuinely need access to settings before your container is wired up, go ahead. If you're using ConfigInjector as a settings service locator across your entire app, you're holding it wrong :)

Here's how:

var setting = DefaultSettingsReader.Get<SimpleIntSetting>();

ConfigInjector will make an intelligent guess at defaults. It will, for instance, walk the call stack that invoked it and look for assemblies that contain settings and value parsers. If you have custom value parsers it will pick those up, too, provided that they're not off in a satellite assembly somewhere.
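For context, `SimpleIntSetting` is just an ordinary ConfigInjector setting class backed by an appSettings entry, along these lines (the base class name and key convention shown here are assumptions; check the ConfigInjector readme for the specifics):

```csharp
// A strongly-typed setting class...
public class SimpleIntSetting : ConfigurationSetting<int>
{
}

// ...backed by a plain appSettings entry in app.config or web.config:
//
//   <appSettings>
//       <add key="SimpleIntSetting" value="42" />
//   </appSettings>
```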

If you need to globally change the default behaviour, create a class that implements IStaticSettingReaderStrategy:

public class MyCustomSettingsReaderStrategy : IStaticSettingReaderStrategy
{
    // ...
}

and use this to wire it up:

DefaultSettingsReader.SetStrategy(new MyCustomSettingsReaderStrategy());

If you're using ConfigInjector and like it, please let me know. There's Disqus below and there's always Twitter :)

Request and response with Nimbus

Andrew Harcourt Wed 5 Mar 2014

In this article we're going to have a look at the request/response patterns available in Nimbus.

We've already seen Command handling with Nimbus and Eventing with Nimbus about command and event patterns respectively; now it's time to take a look at the last key messaging pattern you'll need to understand: request/response.

To get this out of the way right up-front, let's be blunt: request/response via a service bus is the subject of religious wars. There are people who argue adamantly that you simply shouldn't do it (possibly because their tools of choice don't support it very well ;) ) and there are others who are in the camp of "do it but use it judiciously". I'm in the latter. Sometimes my app needs to ask someone a question and wait for a response before continuing. Get over it.

Anyway, down to business.

The first item on our list is a simple request/response. In other words, we ask a question and we wait for an answer. One key principle here is that requests should not change the state of your domain. In other words, requests are a question, not an instruction; a query, not a command. There are some exceptions to this rule but if you're well-enough versed in messaging patterns to identify these (usually but not exclusively the try/do pattern) then this primer really isn't for you.

Let's take another look at our inspirational text messaging app. If you're not familiar with it, now would be a good time to have a quick flick back to the previous two posts in the series. Go ahead. I'll wait :)

So a customer has just signed up for our inspirational text message service and we're in the process of taking a payment. Our initial payment processing code might look something like this:

public async Task BillCustomer(Guid customerId, Money amount)
{
    await _bus.Send(new BillCustomerCommand(customerId, amount));
}

and our handler code might look something like this:

public class BillCustomerCommandHandler : IHandleCommand<BillCustomerCommand>
{
    private readonly IBus _bus;
    private readonly ISecureVault _secureVault;
    private readonly ICreditCardGateway _cardGateway;

    // Constructor injection of _bus, _secureVault and _cardGateway omitted for brevity

    public async Task Handle(BillCustomerCommand busCommand)
    {
        var customerId = busCommand.CustomerId;
        var amount = busCommand.Amount;

        var creditCardDetails = _secureVault.ExtractSecuredCreditCardDetails(customerId);

        var fraudCheckResponse = await _bus.Request(new FraudCheckRequest(creditCardDetails, amount));

        if (fraudCheckResponse.IsFraudulent)
        {
            await _bus.Publish(new FraudulentTransactionAttemptEvent(customerId, amount));
            return;
        }

        _cardGateway.ProcessPayment(creditCardDetails, amount);
        await _bus.Publish(new TransactionCompletedEvent(customerId, amount));
    }
}
So what's going on here? We can see that our handler plucks card details from some kind of secure vault (this isn't a PCI-compliance tutorial but nonetheless please, please, please don't pass credit card numbers around in the clear) and performs a fraud check on the potential transaction. The fraud check could involve the number of times we've seen that credit card number in the past few minutes, the number of different names we've seen associated with the card, the variation in amounts... the list is endless. Let's assume for the sake of this scenario that we have a great little service that just gives us a boolean IsFraudulent response and we can act on that.

Scenario #1: Single fraud-checking service

Single fraud-checking service

In this scenario we have our app server talking to our fraud-checking service. We'll ignore our web server for now. It still exists but doesn't play a part in this scenario.

This is actually pretty straight-forward: we have one app server (or many; it doesn't matter) asking questions and one fraud-checking service responding. But, as per usual, business is booming and we need to scale up in a hurry.

Scenario #2: Multiple fraud-checking services

Multiple fraud-checking services

We've already done pretty much everything we need to do to scale this out. Our code doesn't need to change; our requestor doesn't need to know that its requests are being handled by more than one responder; and our responders don't need to know of each other's existence. Just add more fraud checkers and we're all good.

Only one instance of a fraud checker will receive a copy of each request so, as per our command pattern, we get load-balancing for free.

Scenario #3: Multicast request/response (a.k.a. Black-balling)


Let's now say that we want our fraud checking to take a different shape. We don't have a single fraud-checking service any more; we have a series of different fraud checkers that each do different things. One might do a "number of times this card number has been seen in the last minute" check and another might do an "Is this a known-compromised card?" check.

In this scenario, we might just want to ask "Does anybody object to this transaction?" and let different services reply as they will.

The first cut of our billing handler could now look something like this:

public class BillCustomerCommandHandler : IHandleCommand<BillCustomerCommand>
{
    private readonly IBus _bus;
    private readonly ISecureVault _secureVault;
    private readonly ICreditCardGateway _cardGateway;

    // Constructor injection omitted for brevity

    public async Task Handle(BillCustomerCommand busCommand)
    {
        var customerId = busCommand.CustomerId;
        var amount = busCommand.Amount;

        var creditCardDetails = _secureVault.ExtractSecuredCreditCardDetails(customerId);

        var fraudCheckResponses = await _bus.MulticastRequest(new FraudCheckRequest(creditCardDetails, amount),
                                                              TimeSpan.FromSeconds(1));

        if (fraudCheckResponses.Any())
        {
            await _bus.Publish(new FraudulentTransactionAttemptEvent(customerId, amount));
            return;
        }

        _cardGateway.ProcessPayment(creditCardDetails, amount);
        await _bus.Publish(new TransactionCompletedEvent(customerId, amount));
    }
}

Let's take a closer look.

var fraudCheckResponses = await _bus.MulticastRequest(new FraudCheckRequest(creditCardDetails, amount),
                                                      TimeSpan.FromSeconds(1));

This line of code is now fetching an IEnumerable<FraudCheckResponse> from our fraud checking services rather than a single response. We're waiting for one second and then checking if there were any responses received. This means that we can now use a "black-ball" style pattern (also known as "speak now or forever hold your peace") and simply allow any objectors to object within a specified timeout. If nobody objects then the transaction is presumed non-fraudulent and we process it as per normal.

One optimisation we can now make is that we can choose to take:

  1. The first response.
  2. The first n responses.
  3. All the responses within the specified timeout.

In this case, a slightly tidied version could look like this:

var isFraudulent = (await _bus.MulticastRequest(new FraudCheckRequest(creditCardDetails, amount),
                                                TimeSpan.FromSeconds(1))).Any();

if (isFraudulent)
{
    await _bus.Publish(new FraudulentTransactionAttemptEvent(customerId, amount));
    return;
}

_cardGateway.ProcessPayment(creditCardDetails, amount);
await _bus.Publish(new TransactionCompletedEvent(customerId, amount));

Note the call to .Any(). Nimbus will opportunistically return responses off the wire as soon as they arrive, meaning that your calls to IEnumerator.MoveNext() will only block until there's another message waiting (or the timeout elapses). If we're only interested in whether anyone replies, any reply is enough for us to drop through immediately. If nobody replies saying that the transaction is fraudulent then we simply drop through after one second and continue on our merry way.

We could also use some kind of .Where(response => response.LikelihoodOfFraud > 0.5M).Any() filter or even a quorum/voting system - it's entirely up to you.
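Because the responses come back as a lazily-evaluated stream, the strategies listed above are just different LINQ expressions over the same enumerable. A sketch (the names are illustrative, and each option is an alternative - in practice you'd pick one rather than re-enumerating a live response stream):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class ResponseStrategies
{
    public static void Demonstrate(IEnumerable<FraudCheckResponse> responses)
    {
        // 1. The first response: blocks until one arrives (or the timeout elapses).
        var first = responses.FirstOrDefault();

        // 2. The first n responses.
        var firstThree = responses.Take(3).ToList();

        // 3. Everything received within the specified timeout.
        var all = responses.ToList();

        // Or a simple quorum: at least two checkers must cry foul.
        var isFraudulent = responses.Count(r => r.IsFraudulent) >= 2;
    }
}
```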

Eventing with Nimbus

Andrew Harcourt Thu 27 Feb 2014

In this article we're going to have a look at some of the eventing patterns we have in Nimbus.

In Command handling with Nimbus we saw how we deal with fire-and-forget commands. This time around we care about events. They're still fire-and-forget, but the difference is that whereas commands are consumed by only one consumer, events are consumed by multiple consumers. They're broadcast. Mostly.

To reuse our scenario from our previous example, let's imagine that we have a subscription-based web site that sends inspirational text messages to people's phones each morning.

Scenario #1: Monolithic web application (aka Another Big Ball of Mud™).

Big Ball of Mud

We have a web application that handles everything from sign-up (ignoring for now where and how our data are stored) through to billing and the actual sending of text messages. That's not so great in general, but let's have a look at a few simple rules:

  1. When a customer signs up they should be sent a welcome text message.
  2. When a customer signs up we should bill them for their first month's subscription immediately.
  3. Every morning at 0700 local time each customer should be sent an inspirational text.

Business is great. (It really is amazing what people will pay for, isn't it?) Actually... business is so great that we need to start scaling ourselves out. As we said before, let's ignore the bit about where we store our data and assume that there's just a repository somewhere that isn't anywhere near struggling yet. Unfortunately, our web-server-that-does-all-the-things is starting to chug quite a bit and we're getting a bit worried that we won't see out the month before it falls over.

But hey, it's only a web server, right? And we know about web farms, don't we? Web servers are easy!

We provision another one...

Multiple servers means multiple text messages

... and things start to go just a little bit wrong.

Our sign-up still works fine - the customer will hit either one web server or the other - and our welcome message and initial invoice get generated perfectly happily, too. Unfortunately, every morning, our customer is now going to receive two messages: one from each web server. This is irritating for them and potentially quite expensive for us - we've just doubled our SMS delivery costs. If we were to add a third (or tenth) web server then we'd end up sending our customer three (or ten) texts per morning. This is going to get old really quickly.

Scenario #2: Distributed architecture: a first cut

The obvious mistake here is that our web servers are responsible for way more than they should be. Web servers should... well... serve web pages. Let's re-work our architecture to something sensible.

Web servers backed by single application server

We're getting there. This doesn't look half-bad except that we've now simply moved our scaling problem one layer down. We can have as many web servers as we want now, but as soon as we start scaling out our app servers we run into the same problem as in Scenario #1.

Scenario 3: Distributed event handlers

Our next step is to separate some responsibilities onto different servers. Let's have a look at what that might look like:

Single distributed worker for each action

This looks pretty good. We've split the load away from our app server onto a couple of different servers that have their own responsibilities.

This is the first example that's actually worth writing some sample code for. Our code in this scenario could look something like this in our sign-up logic:

public async Task SignUp(CustomerDetails newCustomer)
{
    // do sign-up stuff
    await _bus.Publish(new CustomerSignedUpEvent(newCustomer));
}

and with these two handlers for the CustomerSignedUpEvent:

namespace WhenACustomerSignsUp
{
    public class SendThemAWelcomeEmail : IHandleMulticastEvent<CustomerSignedUpEvent>
    {
        public async Task Handle(CustomerSignedUpEvent busEvent)
        {
            // send the customer an email
        }
    }

    public class GenerateAnInvoiceForThem : IHandleMulticastEvent<CustomerSignedUpEvent>
    {
        public async Task Handle(CustomerSignedUpEvent busEvent)
        {
            // generate an invoice for the customer
        }
    }
}

We're actually in pretty good shape here. But business is, by now, booming, and we're generating more invoices than our single invoicer can handle. So we scale it out...

Multiple distributed workers for some handlers

... and wow, do the phones ever start ringing. Can you spot what we've done? Yep, that's right - every instance of our invoicer is happily sending our customers an invoice. When we had one invoicer, each customer received one invoice and all was well. When we moved to two invoicers, our customers each received two invoices for the same service. If we were to scale to ten (or a thousand) invoicers then our customers would receive ten (or a thousand) invoices.

Our customers are not happy.

Scenario #4: Competing handlers

Here's where we introduce Nimbus' concept of a competing event handler. In this example:

public class GenerateAnInvoiceForThem : IHandleMulticastEvent<CustomerSignedUpEvent>
{
    public async Task Handle(CustomerSignedUpEvent busEvent)
    {
        // generate an invoice for the customer
    }
}

we implement the IHandleMulticastEvent<> interface. This means that every instance of our handler will receive a copy of the message. That's great for updating read models, caches and so on, but not so great for taking further action based on events.

Thankfully, there's a simple solution. In this case we want to use a competing event handler, like so:

public class GenerateAnInvoiceForThem : IHandleCompetingEvent<CustomerSignedUpEvent>
{
    public async Task Handle(CustomerSignedUpEvent busEvent)
    {
        // generate an invoice for the customer
    }
}

By telling Nimbus that we only want a single instance of each type of service to receive this event, we can ensure that our customers will only receive one invoice no matter how much we scale out.

A key concept to grasp here is that a single instance of each service type will receive the message. In other words:

  • Exactly one instance of our invoicer will see the event
  • Exactly one instance of our welcomer will see the event
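The distinction can be sketched outside Nimbus in a few lines of Python. This is illustrative only: the round-robin pick below is an assumption about how a competing consumer might choose an instance, not Nimbus's actual implementation.

```python
import itertools

# Toy broker: multicast handlers all see every event; competing handlers
# of the same type share one logical subscription.

class Broker:
    def __init__(self):
        self.multicast = {}   # handler type -> list of instances
        self.competing = {}   # handler type -> (instances, round-robin counter)

    def subscribe_multicast(self, handler_type, instance):
        self.multicast.setdefault(handler_type, []).append(instance)

    def subscribe_competing(self, handler_type, instance):
        instances, _ = self.competing.setdefault(handler_type, ([], itertools.count()))
        instances.append(instance)

    def publish(self, event):
        # EVERY multicast instance receives a copy of the event...
        for instances in self.multicast.values():
            for instance in instances:
                instance(event)
        # ...but exactly ONE instance per competing handler type does.
        for instances, counter in self.competing.values():
            instances[next(counter) % len(instances)](event)
```

Scaling out then just means appending more instances to a competing subscription: each event is still handled exactly once per handler type, so the duplicate-invoice problem disappears.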

Combining multicast and competing event handlers

It's entirely possible that our invoicer will want to keep an up-to-date list of customers for all sorts of reasons. In this case, it's likely that our invoicer will want to receive a copy of the CustomerSignedUpEvent even if it's not the instance that's going to generate an invoice this time around.

Our invoicer code might now look something like this:

namespace WhenACustomerSignsUp
{
    public class GenerateAnInvoiceForThem : IHandleCompetingEvent<CustomerSignedUpEvent>
    {
        public async Task Handle(CustomerSignedUpEvent busEvent)
        {
            // only ONE instance of me will have this handler called
        }
    }

    public class RecordTheCustomerInMyLocalDatabase : IHandleMulticastEvent<CustomerSignedUpEvent>
    {
        public async Task Handle(CustomerSignedUpEvent busEvent)
        {
            // EVERY instance of me will have this handler called.
        }
    }
}

So there we go. We now have a loosely-coupled system that we can scale out massively on demand, without worrying about concurrency issues.

This is awesome! But how do we send our inspirational messages every morning?

Sneak peek: have a look at the SendAt(...) method on IBus. We'll cover that in another article shortly.

Command handling with Nimbus

Andrew Harcourt Wed 26 Feb 2014

We've had a quick introduction to Nimbus, so let's look at some messaging patterns in a bit more detail.

Picture this: you're running a successful company that sends people a "Good morning!" text message every day. (It's amazing what people will pay for, isn't it?) People pay $5/month for your inspirational text message and business is great.

Let's say you have some clever logic that decides what to send people in the morning. Let's call that the Thinker. The Thinker is quite fast and it can churn out many inspirational thoughts per second. The Thinker code initially looks something like this:

Scenario 1: Big Ball of Mud

public class Thinker
{
    private readonly SMSGateway _smsGateway;

    public Thinker(SMSGateway smsGateway)
    {
        _smsGateway = smsGateway;
    }

    public void SendSomethingInspirational(Subscriber[] subscribers)
    {
        foreach (var subscriber in subscribers)
        {
            var inspirationalThought = ThinkOfSomethingNiceToSay();
            _smsGateway.SendSMS(subscriber.PhoneNumber, inspirationalThought);
        }
    }

    private string ThinkOfSomethingNiceToSay()
    {
        throw new NotImplementedException();
    }
}

which means our logical design looks like this:

Thinker coupled to SMS gateway

That's a bit silly - we've coupled our Thinker to our SMS gateway, which means two things:

  1. The Thinker can only generate messages as fast as the SMS gateway can receive them; and
  2. If the SMS gateway falls down, the Thinker can't work.

Let's try decoupling them and see how we go.

Scenario 2: Decoupled Thinker from SMS gateway

In this scenario, our code looks like this:

public class Thinker
{
    private readonly IBus _bus;

    public Thinker(IBus bus)
    {
        _bus = bus;
    }

    public void SendSomethingInspirational(Subscriber[] subscribers)
    {
        foreach (var subscriber in subscribers)
        {
            var inspirationalThought = ThinkOfSomethingNiceToSay();
            _bus.Send(new SendSMSCommand(subscriber.PhoneNumber, inspirationalThought));
        }
    }

    private string ThinkOfSomethingNiceToSay()
    {
        throw new NotImplementedException();
    }
}

and we have a handler that looks something like this:

public class SendSMSCommandHandler : IHandleCommand<SendSMSCommand>
{
    private readonly SMSGateway _smsGateway;

    public SendSMSCommandHandler(SMSGateway smsGateway)
    {
        _smsGateway = smsGateway;
    }

    public async Task Handle(SendSMSCommand busCommand)
    {
        _smsGateway.SendSMS(busCommand.PhoneNumber, busCommand.Message);
    }
}

Our topology now looks something like this:

Decoupled Thinker from SMS sender

This is much better. In this scenario, our Thinker can generate inspirational thoughts as fast as it can think and simply queue them for delivery. If the SMS gateway is slow or goes down, the Thinker isn't affected and the texts can be delivered later by the retry logic built into the bus itself.

What? Retry logic? Did we forget to mention that we get that for free? If your SendSMSCommandHandler class blows up when it's trying to send a message, don't worry about handling exceptions or failing gracefully. Just fail. Nimbus will catch any exception you throw and automatically put the message back onto the queue for another attempt. If the gateway has a long outage, there are compensatory actions we can take pretty cheaply, too. (Dead letter queues are a topic for another day, but they're there.)
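That retry behaviour can be sketched in a few lines of Python. This is a toy dispatcher, not Nimbus's actual code; the three-attempt limit and the dead-letter list are illustrative assumptions.

```python
import collections

def run_with_retries(messages, handler, max_attempts=3):
    """Toy dispatcher: a handler that throws puts its message back on the queue."""
    q = collections.deque((message, 0) for message in messages)
    delivered, dead_lettered = [], []
    while q:
        message, attempts = q.popleft()
        try:
            handler(message)
            delivered.append(message)
        except Exception:
            if attempts + 1 < max_attempts:
                q.append((message, attempts + 1))  # back on the queue for another attempt
            else:
                dead_lettered.append(message)      # give up: off to the dead letter queue
    return delivered, dead_lettered
```

The handler itself never needs try/catch or compensation logic; it just fails, and the dispatcher does the rest.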

So... business is great, and we've hit the front page of Reddit. Everyone wants our inspirational thoughts. As far as our Thinker is concerned, that's no problem - it can generate thousands of happy thoughts per second all morning. Our telco's SMS delivery gateway looks like it's getting a bit swamped, though. Even though we've decoupled our Thinker from our SMS sender, messages are still taking too long to arrive and the SMS gateway itself is just too slow.

Scenario 3: Scaling out our command handlers

This is where we discover that distributed systems are pure awesome.

When designed well, a good system will allow us to scale parts out as necessary. We're going to scale out our SMS sending architecture and solve our throughput problem. All we need to do is:

  1. Spin up another SendSMSCommandHandler server; and
  2. Point it at a different telco's SMS gateway.

Job done.

What - we didn't have to reconfigure our Thinker to send to two gateways? And what about the first SMS gateway? Doesn't it need to know about load balancing? Well... no.

This is what our system now looks like:

Scaled out SMS sender

Stuff we get for free out of this design includes:

  • Zero code changes to our Thinker
  • Zero code changes to our existing SMS sender
  • Automatic, in-built load-balancing between our two SMS senders

Implicit load-balancing is part and parcel of a queue-based system like Nimbus. Under the covers, there's a message queue (we'll talk about queues soon) for each type of command. Every application instance that can handle that type of command just pulls messages from the head of the command queue as fast as it can. This means that there's no load-balancer to configure and there are no pools to balance - it all just works. If one handler is faster than another (say, for instance, you have different hardware between the two) then the load will be automatically distributed between the two just because each node will pull commands at a different rate.
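That pull-based load balancing is easy to demonstrate with a plain in-process queue. This uses only the Python standard library (not Nimbus); the sleep delays stand in for handlers running on different hardware.

```python
import queue
import threading
import time

# A shared command queue, like the queue behind each command type.
commands = queue.Queue()
for i in range(100):
    commands.put(f"SendSMSCommand-{i}")

handled = {"fast": 0, "slow": 0}

def worker(name, delay):
    # Each worker just pulls from the head of the shared queue as fast
    # as it can; nobody coordinates who gets what.
    while True:
        try:
            commands.get_nowait()
        except queue.Empty:
            return
        handled[name] += 1
        time.sleep(delay)

fast = threading.Thread(target=worker, args=("fast", 0.001))
slow = threading.Thread(target=worker, args=("slow", 0.01))
fast.start(); slow.start()
fast.join(); slow.join()
# Every command is handled exactly once, and the faster worker naturally
# takes the larger share - with no load balancer configured anywhere.
```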

How cool is that?

Stay tuned for more messaging patterns and how to use them with Nimbus.

Handler interface changes in Nimbus 1.1

Andrew Harcourt Tue 25 Feb 2014

We've tweaked the handler interfaces slightly for the 1.1 release of Nimbus.

In the 1.0 series, handlers were void methods. I admit it: this was a design flaw. We thought it would make for a simpler introduction to using the bus - and it did - but the trade-off was that it was much more complicated to do clever stuff.

Consider this handler method:

public void Handle(DoFooCommand busCommand)

Pretty straight-forward, right? Except what happens when doing foo requires us to publish an event afterwards?

public void Handle(DoFooCommand busCommand)
{
    _bus.Publish(new FooWasDoneEvent());
}

That doesn't look so bad, except that we've missed the fact that _bus.Publish actually returns a Task and executes asynchronously. What if doing foo required us to ask a question first?

public void Handle(DoFooCommand busCommand)
{
    var result = await _bus.Request(new WhoLastDidFooRequest());
    _bus.Publish(new FooWasDoneEvent());
}

Now things are a bit more complicated. The above method won't compile, as it's not marked as async. But there's a simple fix, right?

public async void Handle(DoFooCommand busCommand)
{
    var result = await _bus.Request(new WhoLastDidFooRequest());
    await _bus.Publish(new FooWasDoneEvent());
}

Problem solved. Except that it's not. Because although our code will compile and execute happily, what's going on under the covers is that the Nimbus command dispatcher has no easy way of waiting for your async handler method to complete. As far as the dispatcher is concerned, your handler executed successfully - and really quickly - and we then mark the message as successfully handled.

Think about what happens in this example case below (courtesy of the immortal Krzysztof Kozmic via this GitHub issue):

public async void Handle(DoFooCommand busCommand)
{
    throw new InvalidOperationException("HA HA HA, you can't catch me!");
}

As far as the dispatcher is concerned, your handler method executed just fine. And now we've broken our delivery guarantee. Not so good.
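The same trap exists in any async runtime. Here it is by analogy in Python's asyncio (illustrative only; Nimbus itself is .NET): a dispatcher that fires a handler without awaiting it never observes the handler's exception, while an awaited handler surfaces it.

```python
import asyncio

async def handler():
    raise RuntimeError("HA HA HA, you can't catch me!")

async def dispatch(fire_and_forget: bool) -> bool:
    """Return True if the dispatcher observed the handler's failure."""
    try:
        if fire_and_forget:
            task = asyncio.ensure_future(handler())  # like calling an async void handler
            await asyncio.sleep(0)                   # the handler runs and throws...
            if task.done():
                task.exception()                     # ...but the failure never reaches us here
        else:
            await handler()                          # like awaiting a Task-returning handler
    except RuntimeError:
        return True
    return False
```

Only the awaited version gives the dispatcher a chance to see the exception and keep its delivery guarantee.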

The fix for this is simple:

public async Task Handle(DoFooCommand busCommand)
{
    throw new InvalidOperationException("HA HA HA, you can't catch me!");
}

Done. Your method now returns a Task - which the Nimbus dispatcher can await - and if your handler throws then we know about it and can handle it appropriately. So your actual handler would look like this:

public async Task Handle(DoFooCommand busCommand)
{
    var result = await _bus.Request(new WhoLastDidFooRequest());
    await _bus.Publish(new FooWasDoneEvent());
}

So why is this worth an article? Because in order to make this change, we've had to alter the IHandleCommand, IHandleRequest etc. interfaces to have the handler methods return tasks. This:

public interface IHandleCommand<TBusCommand> where TBusCommand : IBusCommand
{
    void Handle(TBusCommand busCommand);
}

is now this:

public interface IHandleCommand<TBusCommand> where TBusCommand : IBusCommand
{
    Task Handle(TBusCommand busCommand);
}

This means that when you upgrade to the 1.1 versions of Nimbus you'll need to do a quick Ctrl-Shift-H for all your instances of "void Handle(" and replace them with "Task Handle(".

Nimbus: What is it and why should I care?

Andrew Harcourt Sun 23 Feb 2014

So Damian Maclennan and I built a thing. We're quite proud of it.

At my current employer, Readify, we deal with a large number of problems whose solution is a distributed system of one kind or another. We've used several messaging frameworks in past projects - everything from raw MSMQ through to RabbitMQ, MassTransit, NServiceBus and all sorts of other odds and ends.

All of them had their weak points and we kept finding that we had to write custom code no matter which framework we chose.

So... @damianm and I built a thing. That thing is called Nimbus. And here's why you want to use it.

What is Nimbus?

Nimbus is a nice, easy-to-use service bus framework built on top of the Azure Service Bus and Windows Service Bus stack.

It runs on both cloud-based service bus instances and on-premises installations of Windows Service Bus and will happily support federation between the two.

Why do I want it?

It's easy to get up and running

Getting an instance up and running is fast and easy. You'll need an Azure account (free) and a service bus namespace if you don't have a local Windows Service Bus installation, after which:

Install-Package Nimbus

followed by some simple configuration code (just copy/paste and change your application name and connection string):

var connectionString = ConfigurationManager.AppSettings["AzureConnectionString"];
var typeProvider = new AssemblyScanningTypeProvider(Assembly.GetExecutingAssembly());
var messageHandlerFactory = new DefaultMessageHandlerFactory(typeProvider);

var bus = new BusBuilder().Configure()
                          .WithNames("TODO Change this to your application's name", Environment.MachineName)
                          .WithConnectionString(connectionString)
                          .WithTypesFrom(typeProvider)
                          .WithDefaultHandlerFactory(messageHandlerFactory)
                          .Build();
return bus;

That's it. You're up and running.

It's really easy to use

Want to send a command on the bus?

public async Task SendSomeCommand()
{
    await _bus.Send(new DoSomethingCommand());
}

Want to handle that command?

public class DoSomethingCommandHandler : IHandleCommand<DoSomethingCommand>
{
    public async Task Handle(DoSomethingCommand command)
    {
        //TODO: Do something useful here.
    }
}

It supports both simple and complicated messaging patterns

Nimbus supports simple commanding and publish/subscribe in a way that you're probably familiar with if you've ever used NServiceBus or MassTransit.

It also supports an elegant, awaitable request/response, like so:

var response = await _bus.Request(new HowLongDoPizzasTakeRequest());

It also supports much more advanced patterns like publish and competing subscribe and multicast request/response. I'll cover each of these in subsequent articles.
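To make the awaitable request/response shape concrete, here's a rough asyncio sketch of the general pattern (an assumption about the mechanics, not Nimbus's actual internals): the requester awaits a future keyed by a correlation id, and the responder completes that future when the reply comes back.

```python
import asyncio

class Bus:
    def __init__(self):
        self._requests = asyncio.Queue()
        self._pending = {}   # correlation id -> future awaiting the reply
        self._next_id = 0

    async def request(self, payload):
        self._next_id += 1
        correlation_id = self._next_id
        reply = asyncio.get_running_loop().create_future()
        self._pending[correlation_id] = reply
        await self._requests.put((correlation_id, payload))
        return await reply   # this is what makes request/response awaitable

    async def serve_one(self, handler):
        correlation_id, payload = await self._requests.get()
        result = await handler(payload)
        self._pending.pop(correlation_id).set_result(result)
```

The requester never blocks a thread and never polls; it simply awaits, and the correlation id routes the reply back to the right caller.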

Did we mention that it's free? And open-source? And awesome?

There's no "one message per second" limit or anything else with Nimbus. It's free. And open source. You can clone it for yourself if you want - and we'd love it if you did and had a play with it.

If you'd like a feature, ask and we'll see what we can do. If you need a feature in a hurry, you can just code it up and send us a pull request.

Please... have a look and let us know what you think.

RSS as a primary source of truth

Andrew Harcourt Tue 18 Feb 2014

I'm experimenting with making RSS my authoritative source for blog posts.

I was thinking the other day about how RSS still seems to be the poor second cousin of most blogging platforms. Everything (well, everything civilised) generates RSS feeds but it's done as an after-thought, not as the primary experience.

As a software engineer, I tend to consume most content directly from my RSS reader. I want that to be the most polished experience. I also want to be able to set a bunch of recipes for different feeds, including feeds of my own activities.

I've also re-jigged the BlogMonster library to generate RSS as its primary source of truth. It still works if you want to stick it on a web page, of course, but the underlying model is now a SyndicationFeed and your individual blog posts are SyndicationItem instances. You can, of course, simply bind those to a Razor view.

Introducing ConfigInjector

Andrew Harcourt Fri 4 Oct 2013

So I've been using this pattern for a while and promising to blog it for almost as long.

Code is on GitHub; package is on NuGet. Here you go :)

How application settings should look:

Here's a class that needs some configuration settings:

public class EmailSender : IEmailSender
{
    private readonly SmtpHostConfigurationSetting _smtpHost;
    private readonly SmtpPortConfigurationSetting _smtpPort;

    public EmailSender(SmtpHostConfigurationSetting smtpHost,
                       SmtpPortConfigurationSetting smtpPort)
    {
        _smtpHost = smtpHost;
        _smtpPort = smtpPort;
    }

    public void Send(MailMessage message)
    {
        // NOTE the way we can use our strongly-typed settings directly as
        // a string and int respectively
        using (var client = new SmtpClient(_smtpHost, _smtpPort))
        {
            client.Send(message);
        }
    }
}

Here's how we declare the settings:

// This will give us a strongly-typed string setting.
public class SmtpHostConfigurationSetting : ConfigurationSetting<string>
{
}

// This will give us a strongly-typed int setting.
public class SmtpPortConfigurationSetting : ConfigurationSetting<int>
{
    protected override IEnumerable<string> ValidationErrors(int value)
    {
        if (value <= 0) yield return "TCP port numbers cannot be negative.";
        if (value > 65535) yield return "TCP port numbers cannot be greater than 65535.";
    }
}

Here's how we set them in our [web|app].config:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <appSettings>
    <add key="SmtpHostConfigurationSetting" value="localhost" />
    <add key="SmtpPortConfigurationSetting" value="25" />
  </appSettings>
</configuration>

... and here's how we provide mock values for them in our unit tests:

var smtpHost = new SmtpHostConfigurationSetting {Value = ""};
var smtpPort = new SmtpPortConfigurationSetting {Value = 25};

var emailSender = new EmailSender(smtpHost, smtpPort);
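The pattern itself - one class per setting, validated as the value is loaded - is language-agnostic. Here's a minimal Python sketch of the same idea (illustrative only; ConfigInjector itself is .NET):

```python
class ConfigurationSetting:
    """One class per setting: the type carries the meaning, not a string key."""
    def __init__(self, value):
        errors = list(self.validation_errors(value))
        if errors:
            raise ValueError("; ".join(errors))
        self.value = value

    def validation_errors(self, value):
        return iter(())  # no validation by default

class SmtpHostConfigurationSetting(ConfigurationSetting):
    pass  # a strongly-typed string setting

class SmtpPortConfigurationSetting(ConfigurationSetting):
    def validation_errors(self, value):
        # Invalid values fail loudly at load time, not deep inside EmailSender.
        if value <= 0:
            yield "TCP port numbers must be positive."
        if value > 65535:
            yield "TCP port numbers cannot be greater than 65535."
```

Because each setting is its own type, a constructor that asks for an SmtpPortConfigurationSetting can never accidentally be handed the SMTP host.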


Getting started

In the NuGet Package Manager Console, type:

Install-Package ConfigInjector

then run up the configurator like this:

ConfigurationConfigurator.RegisterConfigurationSettings()
                         .FromAssemblies(/* TODO: Provide a list of assemblies to scan for configuration settings here */)
                         .RegisterWithContainer(configSetting => /* TODO: Register this instance with your container here */ )
                         .DoYourThing();

You can pick your favourite container from the list below or roll your own.

Getting started with Autofac

var builder = new ContainerBuilder();

ConfigurationConfigurator.RegisterConfigurationSettings()
                         .FromAssemblies(typeof (DeepThought).Assembly)
                         .RegisterWithContainer(configSetting => builder.RegisterInstance(configSetting)
                                                                        .AsSelf()
                                                                        .SingleInstance())
                         .DoYourThing();

return builder.Build();

Getting started with Castle Windsor

var container = new WindsorContainer();

ConfigurationConfigurator.RegisterConfigurationSettings()
                         .FromAssemblies(typeof (DeepThought).Assembly)
                         .RegisterWithContainer(configSetting => container.Register(Component.For(configSetting.GetType())
                                                                                             .Instance(configSetting)))
                         .DoYourThing();

return container;

Getting started with Ninject

var kernel = new StandardKernel();

ConfigurationConfigurator.RegisterConfigurationSettings()
                         .FromAssemblies(typeof (DeepThought).Assembly)
                         .RegisterWithContainer(configSetting => kernel.Bind(configSetting.GetType())
                                                                       .ToConstant(configSetting))
                         .DoYourThing();

return kernel;

Quick demonstration of continuous delivery

Andrew Harcourt Wed 24 Jul 2013

I'm running Readify's Making Legacy Apps Awesome workshop right now.

Part of making legacy apps easy to maintain is getting a deployment pipeline functioning.

This post is a quick demo of how easy it should be to deploy code into production.

git push

Say hello to the nice people of the world, children :)

The story so far: a fairy tale.

Andrew Harcourt Thu 11 Apr 2013

I've just pushed the latest to the Making Legacy Apps Awesome workshop's Git repo.

You'll note that the app is actually quite small. You could say that that's because Krzysztof and I felt dirty writing it (which we did) but in reality it's small because we're going to cover a large number of topics in two days and we don't want people getting bogged down in doing the same repetitive refactorings time after time.

The code's probably going to undergo a few tweaks before the workshop but it's worth cloning now for links, download instructions and other odds and ends.

Oh, and there's a story.

The story so far...

Once upon a time in land far, far away, there was a little town called Derp Town.

The citizens of Derp Town were a proud bunch, and one day they decided that they would create their own university for the betterment of all human-kind (and, of course, to show those snobby villagers over in Herp Town that they were not to be outdone).

The villagers were a poor but sincere lot and they were determined to build their university the Right Way™. They formed a Board of Directors (this was Modern Times, after all, and the old, fuddy-duddy Academic Council could leave their robes at home, thankyouverymuch) and resolved that their university would do the best of everything. It would be the most grand university in all the land. (Being more grand than the nearby Herp College, of course, was a thought that occurred to nobody at all and the citizens would not have dreamed of slighting their neighbours.)

The university had no technology budget or staff to speak of yet, but that was not to stop them from becoming the market leader in technological engagement with their students, who would travel over all the lands and across all the seas to study at such a prestigious institution and gaze with awe upon the wonder that was Derp University's... Enrolment Portal.

Undeterred by their lack of suitably qualified engineers, and being a resourceful lot, they asked the teenage child of the Dean of the Rainbows and Unicorns faculty to write their grand portal using the latest and greatest technologies of the time. It would, they claimed, be a sight to behold.

The portal was unveiled, and all gasped with wonder, for it was great. The villagers rejoiced and praised the Board of Directors for their foresight and wisdom.

Of course, there were a few hitches along the way; a few things that went mysteriously wrong and a couple of enrolments that got eaten by the Terrible Greedy Fossifoo who snuck into the system one night, but by and large the villagers were pleased.

One fine day, the Dean's teenage offspring decided to embark upon an adventure. The child packed some belongings, said some good-byes and set off to find the Ivory Tower of the Architect. The villagers rejoiced, for the Architect would surely praise their Enrolment Portal and speak of them in tones of wonder. (And not at all smug, of course, that Herp College had never seen or spoken to the Architect or been so praised.)

During the time in which the Dean's teenage child and creator of the Enrolment Portal was adventuring, it came time for the villagers to extend the portal. While it had been quite good for the first semester of Derp University's existence, a few (very minor, of course) shortcomings had come to light. Although the portal's author had been the only one to know all the ins and outs of the system, the villagers were confident that these shortcomings could surely be quickly addressed by the villagers themselves if they merely put their minds to it.

Weeks came and went; mid-term examinations happened; students caroused; the leaves began to brown and the seasons to turn. The villagers were no closer to making the required changes to their vaunted portal, and time was running out.

The villagers knew fear.

The villagers worked, and patched, and cobbled, and hacked, and eventually they came to accept that their vaunted Enrolment Portal was unknowable by any but its author, and its author was nowhere to be found.

The villagers knew despair.

In their misery, the villagers came to accept that what they had created for their university had not been Done Right This Time™, but instead was a Brand New Legacy Application™.

A young traveller from far away chose this moment to enter the village, seeking food and shelter. The traveller carried in their luggage a USB stick, upon which the villagers discovered wondrous tomes of knowledge and tools of refactoring. In desperation, the villagers begged the traveller to renew their hopes and restore their grand Enrolment Portal to its former glory.

Despite being young and inexperienced, the traveller took pity upon the villagers and agreed to aid them.

To be continued...

Almost sold out: only 5 tickets left to Making Legacy Apps Awesome.

Andrew Harcourt Thu 11 Apr 2013

We're almost sold out for the Making Legacy Apps Awesome workshop - there are only five tickets left.

I'll be pushing some code to the GitHub repository soon. Krzysztof and I almost cried when we wrote this code for you all, so stay tuned for our hand-crafted disaster :)

Two-day workshop: Making Legacy Applications Awesome

Andrew Harcourt Tue 12 Mar 2013

I'm running a two-day Readify workshop with Krzysztof Kozmic in Brisbane in April.

If you've ever had the privilege of maintaining a legacy application once it's been in production for a while, you'll likely appreciate some of the lessons on offer.

Once in a blue moon software engineers have the privilege of embarking on a new project: to do away with the old; to start over; to Do It Right This Time™. More often, software developers are saddled with existing legacy applications with poor code quality, no regression tests to speak of and frustrated, angry customers and stakeholders to boot. This downward spiral is all too common in the software industry and it would appear that there’s no way out – or is there?

What makes a legacy application? Every big ball of mud had its origins in a green-fields project. Where do we draw the distinction? And why does it matter? Isn’t every application a legacy after it’s released? How do we maintain our software so that "legacy" isn’t a bad word any more – and how can we improve our existing software to that standard?

This two-day workshop will start with the exploration of an utter disaster of a codebase. We’ll investigate how it got into that state in the first place, decide on an end goal, devise a rough strategy to get there and then fire up the compiler. We’ll finish the workshop with a well-factored, usable, maintainable application and a whole lot of appreciation for the tools available to us.

At each stage of the journey you’ll be given the opportunity to have a go at refactoring the application to the next point, after which you’ll be able to pull that completed exercise from GitHub. You will be writing code and you won’t be left behind.

You will need:

  • Laptop (WiFi key will be shared on the day)
  • Visual Studio 2012
  • SQL Server 2012 Express Edition
  • Git (download TortoiseGit if you’re unfamiliar with Git)

You will want:

  • ReSharper (trial versions are available from
  • Visual Studio 2012
  • A pair-programming partner. Partners will be arranged on the day if necessary but you’ll probably prefer to bring a colleague. If you want to go it alone, that’s fine, too.

In advance:

git clone git://

Further instructions for the workshop will be made available within this repository so make sure you do this before the day!

Tickets are available via the Readify event page.

"We tried agile and it didn't work."

Andrew Harcourt Sun 10 Mar 2013

So, you tried agile and it didn't work?

Let's first look at this via an analogy: You fall into a lake. You try swimming but you're not very good at it. Should you stop?

So why are agile methods supposed to work in the first place? Forget the hype about agile. Forget Scrum, Kanban, Lean and all those other buzzwords and instead, consider this very simple question. Why does agile work? Or, at least, why should we try it again when we tried it once or twice and all we encountered was failure?

When you say "We tried agile and it didn't work," what you're really saying is "We tried agile and we kept running into failure. Releasing so frequently was hard; our testing cycle was too long to fit into a sprint; our developers couldn't keep up."

In other words, your agile methodology was doing exactly what it was supposed to do: highlight failures early.

When I hear "We tried agile and it didn't work," I hear "We tried agile and it worked too well but we didn't like the message so we stopped listening."

I hate to break it to you, but highlighting failures is actually the entire reason for existence of an agile process. Everything else is window dressing.

  • Every feedback point is an opportunity to identify failings, both large and small.
  • Every missed user story is a message that the team can't yet estimate well enough.
  • Every bug discovered by end users rather than automated tests tells the story of human error.
  • Every pain point is a warning to fix it before it gets worse.

Teams that "go agile" usually experience pain because the pain was always there: previously they just deferred it by only trying to release their software at the end of a multi-year project. It's not that there's less pain in single-release projects; it's just that all the pain is felt at once. That kind of pain is often enough to cause individual nervous breakdowns and company bankruptcies.

When you feel the pain from going agile, don't view it as failure. View it as the process's helpfully surfacing problems early so that you can deal with them while there's still time.

Video from my #dddbrisbane talk yesterday is now online

Andrew Harcourt Sun 2 Dec 2012

DDD Brisbane 2012 yesterday was great fun. If you weren't there, you really missed out.

Massive thanks to Damian Brady, Bronwen Zande, John O'Brien, Brendan and Lin Kowitz and David Cook for putting on a great event.

The video from my talk (abstract here) is now online:

Some of my favourite tweets from the talk:

Vote for my @dddbrisbane talk: Inversion of Control from First Principles: Top Gear Style

Andrew Harcourt Sat 3 Nov 2012

So I'm throwing my hat into the ring again to present at DDD Brisbane.

DDD Brisbane 2012 is on the 1st of December (a Saturday) and sessions are peer-voted so you get to see what you want to see.

Inversion of Control from First Principles: Top Gear Style

Tonight: James May writes "Hello, World!", Richard Hammond cleans up the mess and Clarkson does some shouting.

When most people first try to apply good OO design the wheels fall off as soon as their app starts to get complex. TDD, Mock<T>, IoC, WTF? What are these TLAs, why should you care and where's that owner's manual when you need it, anyway?

Most people are afraid of trying TDD and IoC because they don't really know what they're doing. In true Top Gear spirit we're not going to let ignorance prevent us from having a go, so sit back and watch us point a compiler in the general direction of France and open the throttle.

In this talk we're going to introduce inversion of control from first principles so that it's not just an abstract concept but a real, "I finally get it" tool in your toolbox. We'll start with "Hello, world!" and finish by writing a functioning IoC container - live, in real-time and without a seat-belt - and you can take the code home afterwards and test-drive it yourself.

In the right hands, IoC is a very sharp tool. Just don't let Clarkson drop it on his foot...

*Actual Top Gear presenters may not be present. But it will be awesome anyway.

You should submit something, too.

Don't forget to vote for me :)

In software, the iron triangle is a lie

Andrew Harcourt Fri 31 Aug 2012

Everyone's heard the old adage, "Fast, good, cheap: pick two." It's called the Iron Triangle or Project Triangle.

Fast, good, cheap

I'm not going to make this argument about the world in general but in software this just doesn't work.

Why? Because software quality is paramount and poor-quality software is a complete showstopper as far as "fast" is concerned. You can't build any decent-sized piece of software on a poor foundation. If the code is good it will be easy and quick to change. If the code is poor it will be slow and painful to change.

Cheap will start out cheap and nasty by design but will morph into "expensive and nasty" very, very quickly, and then you'll be stuck with your expensive-yet-cheap-and-nasty legacy application and a team of developers quickly heading for the door before the midden hits the windmill.

In software your best options are "fast and good" (if you can find a crack team) or "slow and good" but neither of those is cheap.

What risks are you taking with your business?

Andrew Harcourt Tue 21 Aug 2012

I had a potential client contact us a while ago. We hadn't dealt with them before and they didn't end up retaining us - largely, I think, because the message about how much trouble they were in might have been a bit too unpalatable to heed.

They're in a world of pain through a combination of bad luck and poor planning although, to be fair, it's more of the latter.

I can't help you with bad luck but I can prompt you to plan for it.

If you ship software, please ask yourself these questions:

  1. If you had to ship a build tomorrow, could you?
  2. How long would it take? Be honest - a day? A week? A month?
  3. What dependencies do you have that could cause you to need to ship one?
    • Third-party web services?
    • iOS provisioning profiles?
    • Expired x.509 certificates?
    • Changes to certificate revocation lists?
    • A critical security flaw?
    • A leap-year bug?
    • A leap-second bug?
    • An operating system patch?
  4. What monitoring do you have in place so that you're the first to know about any of these problems?
  5. How much will it hurt if any of these fails?
  6. How quickly do you need to be back up and running?
  7. How many people are going to sue you if your software/platform/application falls down? And for how much?
  8. How much do you stand to lose?

Back to that potential client: I honestly don't think their business is going to survive this particular flavour of disaster. In other words, I think the entire company is going to fold - and all because someone else moved their cheese and they didn't have a contingency plan. I wish them the best but I can't help them now - not at this late stage :(

I can't help them but I can remind you that the unexpected does happen, and will to you at some point. If your answers to any of the questions above frighten you... better me than fate :)

UPDATE: It brings me no happiness to report that they indeed did go bankrupt. Please don't let that happen to you for such a preventable reason.

Introducing YACLP: Yet Another Command-Line Parser

Andrew Harcourt Thu 28 Jun 2012

It's on NuGet:

Install-Package YACLP

Why another one?

Because there were a bunch out there but all of them focused more on the parsing than on being quick and easy to call.

I want my command-line parser not only to parse arguments (which it does, although not very flexibly) but also to generate a usage message automatically so that I don't have to.

Simple Usage

var options = DefaultParser.ParseOrExitWithUsageMessage<MyCommandLineParameters>(args);


I'd recommend using an IConfiguration or similar interface so that anything that depends on it doesn't need to know about command-line parameters.

Our main program would look like this:

public class Program
{
    private static void Main(string[] args)
    {
        var configuration = DefaultParser.ParseOrExitWithUsageMessage<CommandLineParameters>(args);

        new Greeter(configuration).SayHello();
    }
}

... and our Greeter like this:

public class Greeter
{
    private readonly IConfiguration _configuration;

    public Greeter(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public void SayHello()
    {
        var message = string.IsNullOrEmpty(_configuration.Surname)
                          ? string.Format("Hi, {0}!", _configuration.FirstName)
                          : string.Format("Hello, Mr/Ms {0}", _configuration.Surname);

        Console.WriteLine(message);
    }
}


Note that our Greeter takes a dependency on an IConfiguration, which looks like this:

public interface IConfiguration
{
    string FirstName { get; set; }
    string Surname { get; set; }
}

... and that IConfiguration interface is implemented by our CommandLineParameters class:

public class CommandLineParameters : IConfiguration
{
    [ParameterDescription("The first name of the person using the program.")]
    public string FirstName { get; set; }

    [ParameterDescription("The last name of the person using the program.")]
    public string Surname { get; set; }
}

The key point here is that our Greeter knows absolutely nothing about command-line parameters as everything is segregated using the IConfiguration interface.

The Principle of Least Privilege and other fallacies

Andrew Harcourt Thu 7 Jun 2012

The Principle of Least Privilege states that a user (or service) should be given the absolute bare minimum privileges required in order to fulfil its function.

On the surface, how could this possibly be bad? If I have everything I need in order to do my job then by definition I have everything I need. Likewise, if my app has all the privileges it needs in order to function correctly then, again, by definition it can function correctly. Right?

For the purpose of this post I'm going to focus on application security. The parallels between that and user-level permissions are obvious, so I'll leave you to draw your own conclusions.

Where this all falls down is in defining "least privilege" in a sensible manner. How do we normally decide what privileges an application will require? When we decide on what the application will do, of course. And how do we decide what an application will do? We gather our requirements, of course. And when do we do this? We (of course, of course) gather all our requirements up-front, because that's how we roll.

To rephrase that:

  1. We gather our requirements up-front.
  2. We know these requirements to be inaccurate, incomplete or just plain wrong.
  3. We set our security policies according to these requirements.
  4. We have our policies "signed off" by some governance group or other.
  5. We send our security requirements off to our sysadmins to implement in the form of AD security groups etc.

In other words:

We send our known-broken security requirements, based on our known-broken application requirements, off to be set in stone before we ever even ship our application. Now try telling me that it makes sense. Of course, we can change security policies after they've been written - and constitutional reform is theoretically possible, too, but how long did it take for women to get the vote?

If you're going to set strict security policies for your app then your development team should be responsible - and held accountable - for setting sensible policies and updating them quickly according to changing requirements. If you're going to wrap security policies in endless red tape then don't be surprised when 1) people ask for more privileges than they need just to avoid administrative pain; and 2) your project ends up with a sub-optimal result because of a bunch of stupid security work-arounds that decrease your overall security anyway.

TL;DR: Hire smart people. Trust them. Get out of their road. Hold them accountable.

If your DBA makes your schema changes, you're doing it wrong

Andrew Harcourt Wed 6 Jun 2012

Does your DBA make schema changes for you? Here's a simple question: why?

One of the fundamental principles of an agile team is that of cross-functionality. Everyone should have at least a passing familiarity with everyone else's role, and there should be no one bottleneck within the team. There should also be minimal external dependencies that could prevent the team from delivering its stated sprint goal. If you have an external dependency then you're betting your own team's success on something that you don't control. Tell me how that makes sense, someone?

If you have a crack DBA within your team then that's one thing. I still don't think it's wise, but at least they're operating within your team. Even so, they're a bottleneck: if more than one person needs to make a schema change then all but the first person can hurry up and wait for the DBA to be available.

Is your DBA a developer? Does s/he have commit rights and responsibilities just like any other member of your team? Will s/he fix the build if it breaks? Does s/he decide on your class names? Or on your project/solution structure? Then why have them act as a gatekeeper for same? Your database is just a persistent record of your domain model, and should change accordingly. The schema should be updated by your own scripts, kept in your own source repository, and applied automatically by your application. It is part of your application.

Have infrastructure people do infrastructure and software developers write software. Database servers are infrastructure. Databases themselves are software components, not infrastructure.

This might sound like I'm against DBAs in principle. Not entirely, but I am against the kind who feel the need to insert themselves into application design decisions after the fact. To be fair, I'm also against developers who treat databases as an abstraction that they don't have to understand. My position is that both attitudes are irresponsible.

As a developer using a database you're responsible for knowing your tools and using them well, and that includes SQL. Likewise, as a DBA responsible for any component of a software development project you're responsible for knowing your tools, and that includes being able to code to the extent that you can write migrations if that's what your team needs.

Dear DBAs

Andrew Harcourt Sat 2 Jun 2012

Applications need to own their own data.

The job of a DBA is a relatively thankless one. To make things easier for all parties, there needs to be a better understanding of where the responsibilities lie between DBAs and applications developers.

Applications should be perfectly capable of maintaining their own schemas and data. A database is just a big variable store, in the same way as are the stack and the heap. It's a clever store, yes, but still a variable store. The structure of that variable store and the access to it should be governed by the application itself. The application should be able to migrate its own schema up and down in the case of a rollout or rollback, and should need no human intervention for any part of its release.

Making changes to an app's database outside of the build pipeline (for instance, adding uniqueness constraints that may end up crashing the app), is putting the application into an inconsistent state with what's in development and what will be deployed when it next goes to production. This isn't going to help anybody.

A good software engineer will keep mechanical sympathy in mind when doing database work, and will ask for help when out of his/her depth. A good software engineer will know about nth normal forms, indices, sharding and will be as responsible with the database(s) owned by his/her app to the same degree that he/she would be responsible with the stack and heap.

A good DBA will ensure that each app sharing a database server behaves as a good citizen and doesn't unnecessarily or unfairly utilise resources. A good DBA will be able to help identify and debug poorly-performing queries, and contribute to changing them via the normal build/deployment pipeline.

We can play nicely in the sandpit together. Let's do that :)

This week's version control workflow

Andrew Harcourt Mon 6 Feb 2012

So this is my current workflow in order to commit a single change from my development VM to the client's environment:

  1. Push to github from My_VM.
  2. Pull from github to My_Laptop.
  3. Push from My_Laptop to USB stick.
  4. Transfer USB stick to Client_Workstation.
  5. Pull from USB stick to Client_Workstation.
  6. Push from Client_Workstation to NTFS share.
  7. Pull from NTFS share to Client_VM.

Software Project Rescue: A Fairy Tale (@QALMUG on Friday the 3rd)

Andrew Harcourt Mon 30 Jan 2012

I'm presenting this on Friday morning at the QLD ALM User Group:

This is a tale of a naïve protagonist, misguided advisors, princesses[1], dragons[2] and knights[3] in shining armour[4].

Like most fairy tales, this story has an idyllic beginning, a middle, and a happy ending. Also like most fairy tales, the middle of this tale is a grim, dark, scary journey through the Woods of Requirements, blithely past the Ivory Tower of Architecture, into the Depths of Design Despair, under the Mountain of Technical Debt and finishing with an agile leap of faith over the Waterfall of Doom to reach the rainbow on the other side.[5]

This talk starts with a post-mortem of a 3.5-year, $2m project that went horribly wrong. We’ll look at where the project failed: the architectural choices; the management strategies; the personalities involved and some sample code. We’ll also look at the changes that were made to bring the project back on track, get its wildly spiralling technical debt under control, re-release a functioning version and refactor it to something testable – and all in 3.5 weeks.

Finally, we’ll discuss ways to identify the issues encountered in this project so that you can spot them before they bite, strategies for regaining control over a project that’s already in trouble, and effective methods for managing troublesome stakeholders.

[1] There may or may not be actual princesses.
[2] Or dragons.
[3] Or knights.
[4] Motorcycle helmets.
[5] Bingo!

I hope people think it's worth getting up early for :)

Wow. DISQUS rocks.

Andrew Harcourt Mon 9 Jan 2012

Wow. I was introduced today by Andrew Tobin (@tobin) to DISQUS.

I'd tweeted about my replacement blog engine, and mentioned in my previous post that I hadn't yet implemented commenting. He suggested DISQUS, which is an online commenting service that I'd somehow never heard of. How had I not heard of this?!

From sign-up to comments working and tested, it's taken about half an hour of effort. If you need a public commenting solution, I genuinely can't think why you'd write your own any more. Nice work :)

New Blog Engine

Andrew Harcourt Sun 8 Jan 2012

As per my New Beginnings post, I've tried a couple of times recently to move to FunnelWeb. I've failed. The reasons for my failure are simple:

  • I wanted to host everything on AppHarbor.
  • I wanted everything (including posts and comments) under source control.
  • I wanted a custom look and feel.

Everything that I wanted could be done using FunnelWeb, but when I started doing it I realised that I was re-working lots of code that I didn't actually expect to use in production and that, in a nutshell, a blog is just a web site and making a database-driven web site only really makes sense when there are frequently-changing data.

In addition, I kept finding myself firing up Visual Studio in order to write code snippets, then doing horrible things to them (I'm looking at you, Windows Live Writer) in order to make them appear. If I'd used FunnelWeb then I could use markdown (good) but then I'd have had to use Visual Studio to write any code snippets anyway.

I thought about it some more, and decided that I like writing code in Visual Studio, and can live with using it as my main blog text editor. So... blog posts are all now written using VS.

The other advantage of having a blog that's entirely under source control is that it's trivial to back up, restore and redeploy - and I can deploy it anywhere.

There's a bunch of stuff that doesn't work yet, but I can live with that. RSS isn't hooked up properly; hence feeding to Twitter (TwitterFeed via RSS) doesn't work. Comments aren't ready for prime-time either, but I expect they'll be done shortly. There's also no mobile browser detection, but that's in the pipeline.

Why did I want to change in the first place? Because there have been lots of blog posts that I've found myself wanting to write, but needing to include too many code snippets. Gists are fine but hurt to include via script tags; pre-formatted code blocks are ick and screenshots are kind of pointless when my goal is to allow people to easily copy/paste the code I'm posting. So... now that my roadblocks are mostly out of the way, expect to see more useful stuff here. You can hold me to that :)

The Book of Process

Andrew Harcourt Fri 14 Oct 2011

  1. Once upon a time, a company's youthful founder lucked upon a successful method of performing a task.
  2. The task was profitable, and therefore it was good.
  3. The founder wrote down that method and bestowed it unto his/her minions.
  4. S/he said unto them, "This is The Process, and it is good."
  5. The minions performed The Process until the end of days.
  6. And they all lived happily ever after.

Not quite.

The adage, "If it ain't broke, don't fix it" has a corollary best expressed by Tess Ferrandez: If broken it is, fix it you should. Or at least, "If broken it is, don't inflict it on everyone just because it's all too hard to bother changing it."

I'm referring to internal corporate processes that serve no purpose other than demonstrating to some ISO 9000 certification minion that a documented process exists.

It seems as if every organisation, once it reaches a certain size, goes into the "create process and perish" stage. If it's a private enterprise it'll die a long, slow, horrible death of three-thousand triplicate signatures, but if it's a government enterprise then it's never going to die and we're all going to hate it.

Is hate too strong a word? I don't think so. Show me a single person who's dealt with a government department and left happy, and I'll show you someone who's on far too many psychedelics to be on the same planet as the rest of us. We hate government agencies because they're slow, bloated and inefficient. (We hate individual governments, too, but for different reasons. That's just not the point of this post.)

So, given the choice, why do organisations choose to have processes that make them slow, bloated and despised? Your guess is as good as mine, but I think it's to do with some misguided idea that they should be able to have any human follow the process and get the same result. Guess what, ladies and gentlemen: if you have a muppet-followable process then you'll end up hiring muppets to implement it - which is at best embarrassing while the process makes sense, but an utter disaster once the process becomes obsolete and everyone refuses to recognise it.

The Book of Process above should read something like this:

  1. Once upon a time, a company's youthful founder lucked upon a successful method of performing a task.
  2. The task was profitable, and therefore it was good.
  3. The founder wrote down that method and bestowed it unto his/her minions.
  4. S/he said unto them, "This is The Process, and it is good."
  5. The minions performed The Process until the end of days.
  6. The end of days arrived with a pitchfork-waving, torch-brandishing mob of angry citizens who burned the company's offices down around the minions.
  7. The minions could not find their backsides with both hands, let alone the "In Case of Fire" process document, quickly enough to escape.
  8. And the citizens all rejoiced, and lived happily ever after.

Is process hurting your company? If so, it might be worth considering whether you actually need all your documented processes, or whether you can just set desired outcomes and performance metrics, and leave your smart people to figure things out for themselv...

... oh. I get it. Smart people. They've already left. Never mind.

Farewell, Steve

Andrew Harcourt Thu 6 Oct 2011

There's nothing I can say that hasn't been said before by someone else, about someone else, for similar reasons. Nonetheless: today the world has lost a giant and we are all the poorer for it.

Steve Jobs changed the game so many times that people lost count. His visionary genius, his personal drive and his dedication to making absolute perfection commonplace have left an indelible legacy in which we all share.

His arrogance, his of-course-my-way-is-better approach and his unwillingness to compromise cultivated dislike amongst many, but then, nobody else gave the world the iPhone or the MacBook. Steve's arrogance was justified and, well, his way generally was better.

Steve's example should prompt all of us - in all industries - to treat elegant, beautiful design as a first-class consideration when creating something. If you build something people love, they will love you for it.

Farewell, Steve.


The Forgotten Convention-Based Test Harness

Andrew Harcourt Thu 29 Sep 2011

I'm writing another MVC3 app. I'm in the same world of pain with respect to magic strings and anonymous classes. I don't like it here.

I'm sorry, but who on earth thought that this was a good idea for a method signature?
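The screenshot of the offending signature hasn't survived, but the MVC3 `LinkExtensions.ActionLink` overload in question looks roughly like this (reconstructed from memory, so treat the exact parameter list as approximate):

```csharp
// System.Web.Mvc.Html.LinkExtensions (ASP.NET MVC 3) - approximate reconstruction
public static MvcHtmlString ActionLink(
    this HtmlHelper htmlHelper,
    string linkText,
    string actionName,
    string controllerName,
    string protocol,
    string hostName,
    string fragment,
    RouteValueDictionary routeValues,
    IDictionary<string, object> htmlAttributes);
```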


I mean, seriously? Six strings and a Dictionary<string, object>? And that's a sensible collection of arguments? Why not add in a RouteValueDictionary (another string/object dictionary) for good measure? Oh, never mind.

But surely there are smarter overloads than that, right? Well, yes - and honestly, you just couldn't make this stuff up:
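That snippet is missing too, but the gist (again approximate, from memory) is that every overload is just a different permutation of the same magic strings and string/object bags:

```csharp
// A sample of the ActionLink overload family - strings all the way down
public static MvcHtmlString ActionLink(this HtmlHelper htmlHelper, string linkText, string actionName);
public static MvcHtmlString ActionLink(this HtmlHelper htmlHelper, string linkText, string actionName, object routeValues);
public static MvcHtmlString ActionLink(this HtmlHelper htmlHelper, string linkText, string actionName, string controllerName);
public static MvcHtmlString ActionLink(this HtmlHelper htmlHelper, string linkText, string actionName, object routeValues, object htmlAttributes);
public static MvcHtmlString ActionLink(this HtmlHelper htmlHelper, string linkText, string actionName, string controllerName, object routeValues, object htmlAttributes);
```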


Oh My Friendly Geranium, but who on earth thought of this - and why wasn't it knocked on the head by someone sensible? MVC team: I'm really sorry, but IMO you've completely missed the point of a strongly-typed language. I know you've taken a lot of flak for this over the past few years but, to be honest, it's kind-of deserved :(

So, what on earth do we do about it?

When we're in a world of magic-string pain, the first thing to do is generally start creating some conventions. The second thing, of course, is that we test those conventions using unit tests. Hold on a second, though: we're using a strongly-typed language and yet we're *writing unit tests* to test our conventions because we're using strings and anonymous classes? Why don't we have a tool that does this for us?

We want a convention-based test harness that:

  1. Runs on every build.
  2. Has a set of conventions that are clear and unambiguous.
  3. Doesn't make us manually write tests.
  4. Will auto-generate test cases for every new piece of code we write.
  5. Is refactoring-friendly.
  6. Is fast.
  7. Will fail the build if something's wrong.

I think we've forgotten something important. Can anyone point to a tool that's all of the above, comes pre-installed with every version of Visual Studio and requires zero integration work, no NuGet packages and just works?

Anyone? Anyone? No? Here's one: csc.exe. Yep, that's right: use the compiler.

Call me old-school, but a compiler is all of the above. Consider this method:
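The original snippet is missing, but any trivially-typed method makes the point; something like:

```csharp
public static int Add(int a, int b)
{
    return a + b;
}
```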


Think about it: why don't I have to test that a and b are integers? Sure, I should be testing for edge cases here, but I don't have to type-check or null-check my inputs. Why not? Because the compiler is enforcing the convention that when I say "int", I mean a 32-bit integer and it simply won't let anyone pass in anything else.

I don't have to write a single unit test to enforce these conventions. The compiler will provide me with fast and reliable feedback - at compile time - if I've broken anything, which is far better than getting the feedback at run-time using unit tests (or worse, at run-time when a user hits an issue).

I think we as developers can afford to take a bit more time to write strongly-typed code, e.g. HtmlHelpers for controller actions. Try this one:
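The snippet for this one is also missing; a minimal sketch of the idea (the names here are illustrative, not from the original post) uses an expression tree so that the compiler itself verifies the controller and action exist:

```csharp
// A hypothetical strongly-typed helper: renaming a controller or action now
// breaks the build instead of silently breaking links at run-time.
public static MvcHtmlString ActionLinkFor<TController>(
    this HtmlHelper html,
    Expression<Action<TController>> action,
    string linkText)
    where TController : Controller
{
    // Pull the controller and action names out of the expression tree
    var methodCall = (MethodCallExpression)action.Body;
    var actionName = methodCall.Method.Name;
    var controllerName = typeof(TController).Name.Replace("Controller", string.Empty);

    return html.ActionLink(linkText, actionName, controllerName);
}

// Usage: Html.ActionLinkFor<HomeController>(c => c.Index(), "Home")
```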


You can make your code infer the controller name, the action name, the area and all sorts of things without ever having a magic string. You could even add strongly-typed parameters to it (built using a fluent interface) so that it's effectively impossible to get it wrong without the compiler complaining.

So why don't more people use such a great convention-based test tool? I have no idea.

iPhone/MonoTouch Unit Testing with Team Foundation Server

Andrew Harcourt Mon 26 Sep 2011

I know, I know: apples, oranges etc. It's not really, though - this is actually quite straight-forward. But first, some background.

I was recently involved in building another iPhone application for an enterprise customer. We had previously dealt with that customer and had a good working relationship and level of trust with them, but a huge part of that trust was the visibility that we provided as to what was going on. It's mostly off-topic for this post, but the way we did it involved using Team Foundation Server, giving the client's people accounts and making sure that the client's product owner could log in at any time and see how the project was going.

With the iPhone application, we wanted to do exactly the same. The problem was that we hadn't had that much experience using TFS to build iPhone apps. Most of our collective efforts had either been in Objective C using git (the stock-standard approach that Xcode pushes) or C#/MonoTouch using Mercurial (my experience). While both of those approaches are fine for personal and small projects, we really wanted all the other bonus points that TFS provides (continuous integration, work item tracking, reporting, web-based project portal etc.)

So how'd we do it? Well, the first thing to note is that we're not actually building the application bundle using TFS - yet. That still requires a MacBook, MonoDevelop and a bunch of other stuff. We'll probably get there soon using custom build tasks, rsync, ssh and a few other things, but we're not quite there yet.

What we do have is a working continuous integration and nightly build, plus running tests using MSTest. The nice thing is that it actually wasn't that hard.

  1. Open the project in Visual Studio (not MonoDevelop). You'll probably need something like Chris Small's MonoTouch Project Converter to make this work happily.
  2. Include monotouch.dll in your /lib directory and reference it from there rather than from the GAC. (It won't be in the GAC on your build server, and nor should it be.)
  3. If you have other dependencies (e.g. System.Data), copy those from your MacBook into /lib as well and reference those from there.
  4. Done :)

The key point to note when you're building your app is that you're not going to be able to easily test your ViewController classes using MSTest, so make them dumb. If there's business logic in there, extract it out into your domain model. If there's data access logic in there... well... you're doing it wrong anyway and you should definitely extract that out :)

You'll end up with an app that has a dumb(ish) UI shell wrapped around a bunch of well-tested business logic classes. The added bonus of doing it this way is that you can then re-use a lot of that code when you write your WP7 or Android version.

The outcome? The visibility we wanted from the reporting and work item tracking side of things, plus a CI build that didn't require witchcraft to configure, plus automated unit tests.

The only real down-side of this approach is that the build we're unit-testing isn't the build we're shipping - we still have to build that manually on a MacBook somewhere. It does, however, give us a good indication of our overall code quality and a reliable safety net for refactoring.

An iPhone Eye for the C# Guy at @dddbrisbane

Andrew Harcourt Sat 27 Aug 2011

I just submitted this abstract for DDD Brisbane 2011. Don't forget to vote for me!

An iPhone Eye for the C# Guy

iPhone Development using MonoTouch

This session will cover the basics of developing an iPhone application using C#/MonoTouch, from how to create a "Hello, world!" app through to a look at a real-world, production codebase.

We'll cover the use of web services, threads, databases, generics (yes, you can use generics), reflection, inversion of control (yes, you can use IoC, too!) and general application architecture, and finish with a look at some tools, tips and tricks to make life as an iPhone developer much less painful.

This session will assume prior knowledge of threading, reflection, generics, inversion of control and why you'd want to use all of these, but don't let that scare you :)

Crash Logging in a MonoTouch App

Andrew Harcourt Fri 26 Aug 2011

Customer: Your app crashed again.
Developer: How? What were you doing when it crashed? What happened?
Customer: I don't know. I was playing with it and it crashed.
Developer: Do you remember which page you were on?
Customer: ?
Developer: bangs head against wall
Sound familiar?

One of the first things I do when I start a project (or when I inherit one) is set up logging. You'd be amazed and depressed at how many projects just don't have any, or bolt it on as an afterthought. Here's a hint: it's much easier to debug your app while you're developing it if you know where it's breaking. Ground-breaking, I know :P

There are a couple of things we want to do:

  1. Log when the app crashes. Do it quickly and reliably, and don't rely on any app infrastructure (e.g. injected loggers) as it's already been torn down at this point.
  2. Send the log message the next time the app starts. This will allow us to use all our nice web services etc. and means we can use just the one logging mechanism rather than having several different ones.

MonoDevelop creates a fairly standard-looking Main.cs for us:
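The snippet hasn't survived, but the stock MonoDevelop MonoTouch template is roughly this:

```csharp
// Main.cs as generated by the MonoDevelop MonoTouch template (approximate)
using MonoTouch.UIKit;

public class Application
{
    static void Main(string[] args)
    {
        UIApplication.Main(args);
    }
}
```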

Let's change that to add a simple try/catch block:
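That snippet is missing as well; the shape of the change was a single try/catch around the app's entry point, something like:

```csharp
using System;
using MonoTouch.UIKit;

public class Application
{
    static void Main(string[] args)
    {
        try
        {
            UIApplication.Main(args);
        }
        catch (Exception ex)
        {
            // Don't rely on any app infrastructure here - it's already been torn down.
            CrashLog.Write(ex);
        }
    }
}
```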

The key here is the catch block's single call into the crash logger - it makes it simple and obvious as to what it's doing.
So... the CrashLog class itself is a static class and doesn't do very much at all. The idea is that it's simple to call and doesn't rely on having any of the app's components still available.
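The original CrashLog listing is gone; a minimal sketch consistent with the description (static, no dependencies, writes to My Documents) might look like this:

```csharp
using System;
using System.IO;

public static class CrashLog
{
    // My Documents is the one place we can reliably write on the device.
    private static readonly string LogPath = Path.Combine(
        Environment.GetFolderPath(Environment.SpecialFolder.MyDocuments),
        "crash.log");

    public static void Write(Exception ex)
    {
        try
        {
            File.AppendAllText(LogPath, DateTime.UtcNow.ToString("u") + " " + ex + Environment.NewLine);
        }
        catch
        {
            // If we can't even write the crash log, there's nothing more to be done.
        }
    }

    public static bool HasPendingCrashLog()
    {
        return File.Exists(LogPath);
    }

    public static string ReadAndClear()
    {
        var contents = File.ReadAllText(LogPath);
        File.Delete(LogPath);
        return contents;
    }
}
```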

We're using the My Documents special folder as that's where we're reliably allowed to write things to on the filesystem.
So now that we have our crash logger, let's hook things up so that we can log when the app starts again. In our AppDelegate class:
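That listing is also missing; the hook-up described (resolving a logger from the container at startup - the names here are illustrative) would look something like:

```csharp
public partial class AppDelegate : UIApplicationDelegate
{
    public override bool FinishedLaunching(UIApplication app, NSDictionary options)
    {
        var container = BuildContainer();   // construct the IoC container first

        if (CrashLog.HasPendingCrashLog())
        {
            // Service-locator style, because constructor injection isn't an
            // option in the class that creates the container.
            var logger = container.Resolve<ILogger>();
            logger.Error("Previous session crashed: " + CrashLog.ReadAndClear());
        }

        // ... normal window/view controller setup ...
        return true;
    }
}
```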

Yes, we're using our IoC container as a service locator. This isn't good, but we can't use constructor injection in our AppDelegate as it's the class responsible for creating our IoC container.
So how does the logger work? Well, that's up to you. You can choose to make a web-service call; you could spit it to another text file and periodically upload it; you could even send an email if you really wanted to.
My preference is using a web service as I tend to just hook it straight to the server-side logger (which usually uses log4net under the covers), but your mileage may vary.
The next time a customer tells you that your app crashed, though, you'll be able to respond with, "I know - and I've already fixed that bug."

Cargo Cult Software

Andrew Harcourt Tue 26 Apr 2011

Ever heard of a cargo cult? It's a term describing the philosophy of many pre-industrial tribes in the Pacific during World War II with respect to the "cargo"; i.e. the foodstuffs and equipment called in by American and Japanese radio operators. The thinking went that the Americans and Japanese had appropriated (stolen) the cargo that rightfully belonged to the natives by means of liaising with the gods.

The interesting thing isn't the belief that the cargo was stolen but more that the means of stealing it back were novel. The natives began to imitate the radio operators in the hope that building the same items of worship (realistic-looking radio sets, landing strips and in some cases even mock aircraft), and memorising the noises that the operators made and faithfully reproducing them, would help them divert the cargo back to its rightful recipients.

Go on. Laugh. Laugh rudely and insensitively at the primitive natives. How could they be so naive?

So what on earth does this have to do with software? Well, take a look around at most software projects you've been involved in. Why do we have multiple web service layers? Why do we separate concerns? Why do we abstract our data layer(s)?

There are legitimate (and good) answers to these questions, but the most common one is "because everyone else does it." In other words, other people do these things and receive good software ("cargo") in return, so if we do it then perhaps *we'll* receive the cargo instead. The key is that some people understand why they do it, and some people just mimic it.

This post was prompted by a recent experience in which I ended up tearing apart an entire application (data access layer, service layer to access data, business logic layer, view/presenter layer etc.) and rebuilding it with something sensible - and all because someone tried to do it right and sadly had no idea how to do it.

I've seen so many software projects started with the absolute best of intentions. In most cases I honestly can't fault the effort or the diligence displayed by the original developers - after all, almost nobody deliberately sets out to do a poor job. It's heartbreaking, for example, to see someone having spent weeks or even months of their life inserting a web service layer for data access without understanding why they're doing it, then complaining that their app's too slow because every action ends up requiring a full table to be fetched and serialized over a SOAP call. Equally, I've seen people follow the Single Responsibility Principle to a ridiculous extent but end up with utterly unmaintainable code because they didn't properly understand why it's important. MVC, MVVM, n-tier... they're all useful tools in the box, but all too often they're just fake radio sets made of wood or landing lights that don't emit light.

Having earlier laughed rudely at the primitive natives, let's now cry. Cry, because you know that, at least once, each of us has been guilty of the same lack of understanding. Each of us has done something "because it's best practice" and each of us has subsequently realised our own error.

Finally, to stand on my fake soapbox for a second: go and teach your craft, so that other people don't have to make the errors in the first place, and so that you don't have to clean up messes like these and destroy someone's prized wooden radio set while you're at it.

P.S. Credit to Paul Stovell, whose tweet finally prompted me to finish and publish this blog post.

Why merely "very good" employees don’t get promoted

Andrew Harcourt Thu 21 Apr 2011

I saw a question on /. this morning about exactly this and decided to blog it rather than comment as it's another one of those "I hear this question all the time" posts.

The question's usually along the lines of, "I'm really good at my job. How come I can't get promoted?"

The simple-yet-offensive answer is this: you haven't done a good enough job to be promoted.

Before everyone starts screaming, let's look at this from the perspective of a mythical manager who actually wants his/her employees to succeed. I know, I know, but they really are out there. So... what goes on?

Usually the internal monologue goes something like this:

  1. John's doing a really good job running X/Y/Z.

  2. I like John and want to promote him.

  3. There's a position running a new project and I'd really like to offer it to him.

  4. I have nobody to replace him with and would need to train a replacement.

Oops. John's just made it impossible for even the best-intentioned manager to promote him. Now imagine what a less-well-intentioned one would do.

At any one of these stages a promotion is easily killable. If John isn't doing a really good job running X/Y/Z then there's no way that he's going to be promoted except under the Dilbert Principle. Similarly, if John isn't well-liked then there's no way he's going to get promoted simply because dealing with unpleasant people is unpleasant and a good manager isn't going to inflict an unpleasant person on other people if he/she can at all avoid it. The third one's obvious: if there's no promotion available then there's no promotion available.

The fourth point, though, is the one that almost invariably gets forgotten. I've blogged about this before but the gist of it is that if John's made himself indispensable then that's the end of it. Indispensable means that he cannot be done without; in other words, he's locked himself into his current position all by himself. John hasn't done a good enough job to get promoted because he hasn't trained his own replacement.

Training your replacement is part of your job.

So why don't people do it?

The most obvious answer is fear. Fear of being replaced by someone younger and cheaper; fear of being shown up by someone who ends up knowing more than you; fear that your management won't see this as a valuable use of your time. There's also fear of the unknown - it's much easier in many cases to perennially gripe about being really good but not getting promoted than it is to actually get promoted and run the risk of failing in your new position.

There are other likely culprits (budgetary constraints, time pressure etc.) but the point of this article is that they're all surmountable once the fear is overcome.

I want a promotion. Should I stay in my current company or look elsewhere?

The short answer to this is: go wherever your interests take you.

The longer answer: if you like your company and want a particular position, ask for it. It's much easier to just tell people what you want and ask what you need to do in order to get it. You never know - they might not have realised that you're bored or unhappy where you are, especially if you're keeping things running smoothly and appear to have it all under control.

If your current company can't or won't accommodate you then, by all means, look elsewhere. Before doing that, though, take a good, hard, honest look at yourself and ask whether you'd promote you if you ran the company. If the answer's yes but the company won't then it's time to move on. If the answer's no then it's probably time to ask for help - and also to start making some serious efforts towards your own professional development.


Finally, if your company's regularly appearing on FC or NGE then it's time to jump ship no matter how good you are :)

Fix what you know is broken

Andrew Harcourt Tue 19 Apr 2011

As a consultant, there's a very common complaint that I hear from clients. The complaint is along the lines of, "It's all such a mess," or "We need to re-write it from scratch." They're almost always right in the first case, and almost invariably plain wrong in the second. A messy codebase is a pain, but learning the wrong lesson from it just means that they're going to experience the same pain all over again once they've done their re-write - if they're still in business when they finish.

The first question to ask is simple: why is it all such a mess? If it's a mess because you made a completely wrong technology choice (e.g. classic ASP for a point-of-sale application, or a thick client where a web client was required) or the team that wrote it simply didn't have a clue and have all been fired, then perhaps a re-write is in order. Other than that, there's almost no good reason to do a complete re-write. Regardless, that's not the point of this post.

The point of this post is that it's usually such a mess because people don't know how to fix it - or, more probably, people don't know how to even decide on a strategy to follow and are drowning in technical debt as a result. Here's a simple one:

Fix what you know is broken.

If you honestly have no idea where to start, ask for help. Plenty of people will. Firstly, though, try this:

Do you have source control? No? Then download Git and fix that.
Do you have continuous integration? No? Then download TeamCity and fix that.
Do you have unit tests? No? Then go and write at least a "Hello, world!" test to get yourself started.
Do you have an issue-tracking system? No? Then go to AgileZen or similar and get one.
Do you have an automated deployment solution? No? Well... you know the drill.

There's really no excuse for not having these sorts of tools. Moreover, there's no excuse for not having the agility that these sorts of tools offer.

Once you have a build, a rudimentary test suite and a deployment solution, the next step is clear:

Fix what you know is broken.

What's at the top of your issue-tracking list? Does it make sense? If so, then that's what's broken. Go and fix it. If not, then its priority is what's broken. Fix it by re-prioritising it so that it does make sense.

I visited one client recently that had a test automation task as a "drop everything and fix now" priority - but below that were cases that were costing parts of their business money every single day. In this case, the prioritisation was broken. So... fix it and move on.

Once you've fixed something that was broken, release it. That's right: release the thing. "Oh," you might say, "but it has to go through n levels of QA, UAT and sign-off first." Guess what: that's the next thing that's broken. So... given that you know what's broken, what now?

Fix what you know is broken.

I'm starting to sound like a broken record here, but I'm also starting to sound like a broken record whenever I have to deliver this lesson in person :)

You need to get your release cycles down to something manageable, and if you've had a messy codebase for a while then I guarantee that you're afraid of releasing to production because of what might have changed while you weren't looking.

The solution is to start releasing earlier and more often. Get used to the idea that a production release is boring and routine, not unfamiliar and scary. Releasing to production should be scripted and ideally entirely automated (but that's the subject of a squillion other blog posts) so I'm not going to re-hash it here. Just accept that if you're afraid of releasing to production then that's the next thing that's broken. After all, if you haven't changed the code since your last release then what's likely to go wrong? If you have changed the code, then what you're really afraid of is your testing regime, not deployment per se.

Once you have your releases automated and none of the above things are scary any more, you're down to the boring, menial task of just chipping away at your technical debt. Identify the highest-priority item to fix; fix it; release it.

It really is that simple, ladies and gentlemen :)

MSTest throws System.AccessViolationException on build server

Andrew Harcourt Tue 8 Mar 2011

The problem: the MS Test framework, QTTask.exe, sgen.exe and a bunch of others were throwing AccessViolationExceptions in all sorts of places, but most painfully during my CI build. This meant that our build server couldn't run our unit tests, which meant that TFS Deployer wouldn't deploy because it thought the build was bad.

A red herring presented itself in the form of SQL Server Management Studio also falling in a heap, which led us to believe that it was an issue with the server image.

After wasting an entire day re-provisioning VMs from images, the simple solution was to turn off Symantec's endpoint protection.

Correctly Creating and Using a WCF RIA Services Class Library

Andrew Harcourt Sun 6 Feb 2011

I recently had a situation with a client who had a requirement to use Silverlight 4.0 with RIA Services for an application under development. I've used WCF RIA Services on and off and have a bit of a love-hate thing with them (they wreak havoc on ReSharper's Adjust Namespaces feature, for instance, which is an OCD habit of mine), but using them really did make sense in this client's case.

Anyway... The client's application had to interface with a legacy database, and no schema changes were permitted (sound familiar?). The database was a monster, with over 300 tables, 255 columns per table in some cases, no referential integrity constraints and eight-character field and table names.

If I'd had a month just to map it, I'd have suggested NHibernate for its ability to tie itself into knots to allow arbitrary key constraints and other cleverness. As I just don't have a month, however, the client's going with EF because it automatically builds a map that makes me cry - sorry, a beautiful map - of the database schema as-is.

Adding a Domain Service based on an EF data layer is also trivial - RIA Services has a magical code generator that builds the entire suite of CRUD WCF services for us and it's entirely possible that we'll never need to tweak them.


If you have a look at the SilverlightApp project at the top, you'll see a "Generated_Code" directory that contains all of the generated proxy code for all the domain services advertised by the SilverlightApp.Web project.

The catch is that it generates proxy code for every single class advertised by EF (although it's selectable, in this case it's actually required), which in this case results in approximately 300,000 lines of code.

The Problem

It takes a long time to compile 300,000 lines (about 10 MB) of code, especially every single time a completely unrelated change is made to the UI.

Enter RIA Services Class Libraries.

RIA class libraries are not your garden-variety class library. They're not particularly well documented and are fairly unintuitive. If you follow the standard directions you'll miss the really clever part, which is the part that makes it worthwhile to use them in the first place.

The key is that we want all the generated code in a library that we don't compile every single time we make a UI change. So... how do we do that?

Creating and using a WCF RIA Services Class Library

1. Create your Silverlight project (use the Business Application template if you don't have any pressing need not to - it's great).

2. Add a new "WCF RIA Services Class Library" to your solution. This will create two projects: the class library itself and another web project to host it in.


3. Add a reference from your existing web site to your new class library's web site.

4. Add your domain services into your class library's web site instead of your main web site. For the sake of an example I've added a domain service around pet ownership, with Pet and Owner entities.

You're not done yet! Up to this point everything will appear to work, which is exactly why people tend to forget the next step. I'll show you that in a minute.

Let's have a look in the Generated_Code folder in our main Silverlight app at this point. Firstly, make sure that you've checked the "Show All Files" button for the SilverlightApp project so that you can see the Generated_Code directory:


Open the SilverlightApp.Web.g.cs file and have a look. You should see something like this:


Note the definition of a proxy class for the "Owner" entity. Hang on a minute, though - this class was supposed to be in the class library - we don't want it in our main application. The whole point of this exercise was to not have all this generated code in our main Silverlight application as it takes too long to compile. The next step then:

5. Add a reference from your Silverlight app to your Silverlight class library. This is the obvious step but it's the one that lots of people miss, hence this blog post.

The WCF RIA code generator is actually very clever: it works out which entities you already have proxy code for and only generates code for the ones you're missing.

Once you add the reference, if you leave the generated code file open, this is what you should see:


Ooh! No proxy code for our entities! Our compilation times just plummeted :)

"Unable to switch servers at this time. The Team Explorer is busy."

Andrew Harcourt Fri 9 Jul 2010

This one's a very irritating gotcha. I guess it illustrates the value of good, descriptive and above all, appropriate, error messages.

One configuration below is obviously wrong and evil; one is all that is good and wholesome. Can you spot the difference? :)


Hint: don't put a trailing slash on your TFS server name or you'll get the error that's in the title of this post.

If you’re so smart, why does all your code look simple?

Andrew Harcourt Sun 4 Jul 2010

I hear variations on the theme of this question all the time:

"Oh, so that's all. That's really easy."

"Really? That's all it does?"

"So where's the hard part? I understood that straight away when I looked at it."

Sometimes these questions are even asked of me, which is flattering ;) The flippant answer, of course, is "So that any idiot can come along and understand it." Disappointing though it might be, this answer is also quite correct.[1]

If you only ever take one piece of advice from this blog, take this one: Code yourself out of your job.

Coding yourself out of a job doesn't mean you're going to get fired when you're done.[2] Coding yourself out of a job means that you don't end up responsible for the same chunk of a project forever, always fixing bugs in it because it's too complicated, too involved or just plain scary for newcomers.

There's an art to having someone look at your code and understand it at a glance. It's a subtle display of skill, the result of which is that the person reading your code doesn't even realise that you got inside their head before they ever arrived, understood what they'd be thinking when they looked at it, and gave them subtle cues as to which parts of the code were the ones they were looking for.

There's also a corresponding amount of confidence required in order to be able to choose a less optimal solution in terms of performance for the sake of comprehension. Why confidence? Because as part of having your code look simple, you have to be willing to have someone stroll along after the fact and ask within two seconds of looking at your solution, "Why did you choose this way? This other way's almost twice as fast..." That, my friends, is the whole point: they understood it within two seconds of looking at it.

One "I don't understand" question about a piece of code and unless it's a really, really fast and effective algorithm and the alternatives are awful, it's already cost more in real terms than its fast performance has saved.

Obviously in heavily-hit code paths this would not be a good choice, but if I have a method that only gets executed a few times per second then I honestly don't care whether it takes 0.01s or 0.001s to execute. If there's no other real difference between the two then of course I'm going to pick the faster one, but if the screamingly fast one is horrendously complicated and I'm going to be the one who'll have to come back to it every time it needs modifying, then I'll take the quite fast one, thank you very much.

At the point where someone asks why you made a sub-optimal performance decision, you have to be sufficiently confident in your own ability to explain to them that yes, you could have done it another way, but then it'd take longer for every single person who came after you to understand what was going on, you'd have to field more questions about it, and that it actually didn't matter[3]. You also have to be sufficiently confident in yourself to evaluate their solution and confess on occasion, "Actually, I really do like your solution and it's better than mine. Let's do it your way instead."

In a nutshell, the optimal solution is not necessarily the fastest, but the most effective in terms of the long-term maintenance of the application.

Of course, the really smart ones amongst us can write code that's simple, obvious and extremely fast. And we're all that good, right? ;)


[1] Especially when that idiot might be yourself six months later...

[2] Unless you work for a stupid company, in which case you need to leave now anyway.

[3] Be correct about this, though. If it matters, do it better.

Introducing EasyTfs

Andrew Harcourt Thu 13 May 2010

One of the most common tasks a software developer will perform is that of creating a bug or a task in an issue-tracking system. Before a task or a case is created, the developer should search for an existing case in the system so as to avoid duplicates. This should, then, according to Amdahl's Law[1], be very fast and simple to do.

Team Foundation Server, however, does not make this simple and fast. Please excuse me for not having the energy to articulate the failings of TFS in this respect, and simply offer a solution instead.

Enter EasyTfs.

EasyTfs will search your TFS database as you type, allows per-field matching, regular expressions and all sorts of other goodies that real issue tracking systems have had for years.

Searching for "coffee" in our TFS database here results in this:


We could also search for "createdby:tullemans coffee", "cr:dan coffee" or simply "3165" (the work item number). Image attachments (.png and .jpg) are displayed as thumbnails by default and can be displayed full-screen just by clicking on them.

Searches are quick, too :)


It's read-only and ignores work-item security for the present (early beta and all that), but it's fast and has already saved our team a huge amount of time, both in simply finding cases that we know exist, and also in avoiding raising duplicate cases because it's so much easier to search for them.

It's open-source (GPLv2), and you can find both the source code and a Windows installer (.MSI) at

[1] which can be summarised as, "Make the common case fast, damn you!"

Visual Studio: You're Doing it Wrong! Again!!

Andrew Harcourt Tue 4 May 2010

Here's a hint to the Visual Studio team:

If I have to wait so sodding long for a modal dialog box to just GTFO of my road that I can write an entire blog post about it, you're doing it wrong!



Every VS developer knows about that stupid sodding help popup, and we all hate it. It's of absolutely no help whatsoever and, just FYI, most of us just re-map F1 to a do-nothing macro.

Code Review Watch List

Andrew Harcourt Fri 30 Apr 2010

When reviewing code, everyone picks on different things. This is a good thing, provided that code reviewers move around frequently and everyone gets to benefit from the resulting sharing of knowledge.

Here's a bunch of anti-patterns that have bitten teams and companies that I've worked with in the past, and some ways to avoid them. This is by no means a complete list; it's simply the ones that keep jumping out at me and, more so, the ones that tend to cause pain later rather than sooner with a corresponding degree of compound interest on the original technical debt.

Always picking the same code reviewer.

Asking the guy next to you for a code review is great; just don't do it all the time. That one guy next to you will always pick on the same things, and will likely miss (or simply not value) other points that another reviewer might pick up.

The guy next to you is also probably going to become familiar with the code you're working on. Again, this is often a good thing but it also means that your code, while obvious to you and your favourite reviewer, might become impenetrable to others without your realising it.

Additional "using" Statements

"Using" statements, especially when highlighted by your favourite diff tool, are a good (and simple) indicator of increased class coupling. This isn't always a bad thing, but should always be studied rather than glossed over.

This should also be your cue to check whether the new code has dependencies on concrete classes in the newly-used namespace, or whether someone's been clever and has coded to an interface.

using System;
using System.Text;
using My.Useful.Namespace;
using My.Specific.Namespace.That.I.Shouldnt.Know.About; // one of these things is not like the other...

Pointless Safe Casts

This is a complete pet hate of mine, as many times I've spent ages tracking down some random NullReferenceException or Debug.Assert( ... != null) failure because someone's used a safe cast and then blithely assumed that the result will be non-null.

// BAD!! This is a NullReferenceException waiting to happen!
ISpecificThingy specificThingy = thingy as ISpecificThingy;

// GOOD. This will (correctly) throw an InvalidCastException at the point of the error, rather
// than allowing a null value to propagate along the code path and trip someone up later.
ISpecificThingy specificThingy = (ISpecificThingy)thingy;
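For completeness, there is a legitimate use for a safe cast: when null is an expected outcome that you test for immediately, rather than letting it wander off down the code path. A small sketch (ISpecificThingy, Thingy and the helper here are invented for the example):

```csharp
using System;

public interface ISpecificThingy { string Name { get; } }
public class Thingy : ISpecificThingy { public string Name { get { return "thingy"; } } }

public static class SafeCastDemo
{
    // OK: "as" is fine when null is an expected, immediately-handled outcome.
    // The crime is safe-casting and then assuming the result is non-null.
    public static string DescribeIfSpecific(object candidate)
    {
        ISpecificThingy specific = candidate as ISpecificThingy;
        return specific != null ? specific.Name : null;
    }
}
```

The rule of thumb: if a null result is a bug, use a direct cast and let the InvalidCastException point at the crime scene; if a null result is a supported case, use "as" and branch on it right there.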

Non-thread-safe Collection Modification

This probably won't bite you, but it will bite someone else who has to come along three months later and figure out why there's some random, non-deterministic explosion. That person, however, will probably kick you, so it all balances out in the end.

// BAD! This is not thread-safe, and is an ArgumentException (item with same key...) waiting to happen.
public void Remember(string k, string v)
{
    if (!_strings.ContainsKey(k))
        _strings[k] = v;
}

// BETTER. This is thread-safe and won't explode. What's missing, though?
public void Remember(string k, string v)
{
    if (!_strings.ContainsKey(k))
    {
        lock (_strings)
        {
            if (!_strings.ContainsKey(k))
                _strings[k] = v;
        }
    }
}

// BEST. Check your arguments on the way in, too.
public void Remember(string k, string v)
{
    #region Argument Checking

    // ... actually check arguments, here :)

    #endregion

    if (!_strings.ContainsKey(k))
    {
        lock (_strings) // careful, though - when locking on arbitrary objects, it's easy to create deadlocks.
        {
            if (!_strings.ContainsKey(k))
                _strings[k] = v;
        }
    }
}
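For what it's worth, on .NET 4 and later a ConcurrentDictionary makes the whole check-then-lock-then-check dance unnecessary. A minimal sketch (the StringStore wrapper and the Recall helper are mine, invented for the example):

```csharp
using System.Collections.Concurrent;

public class StringStore
{
    private readonly ConcurrentDictionary<string, string> _strings =
        new ConcurrentDictionary<string, string>();

    public void Remember(string k, string v)
    {
        // TryAdd is atomic: no lock, no ContainsKey race. It simply returns
        // false (rather than throwing) if the key is already present.
        _strings.TryAdd(k, v);
    }

    public string Recall(string k)
    {
        string v;
        return _strings.TryGetValue(k, out v) ? v : null;
    }
}
```

TryAdd keeps the first value written for a key, matching the ContainsKey guard above; GetOrAdd and AddOrUpdate cover the other common cases.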

Catching NullReferenceExceptions

It's just bad. Please don't do it - and never let your name be associated with a review of code that does.

// Hi. I'm so utterly lazy that I can't be bothered to null-check things, *or* to
// de-couple my classes so that I don't *have* to null-check so many things.
// Please beat me around the head with a stick.
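To make the point concrete, here's a hypothetical before-and-after (Widget and WidgetDescriber are invented for the example):

```csharp
using System;

public class Widget
{
    public string Name { get; set; }
}

public static class WidgetDescriber
{
    // BAD: catching the symptom. The null came from somewhere,
    // and now that somewhere is hidden forever.
    public static string DescribeBadly(Widget widget)
    {
        try
        {
            return "Widget: " + widget.Name;
        }
        catch (NullReferenceException)
        {
            return "unknown";
        }
    }

    // GOOD: check the argument at the boundary and fail loudly,
    // at the point of the error rather than somewhere downstream.
    public static string Describe(Widget widget)
    {
        if (widget == null) throw new ArgumentNullException("widget");
        return "Widget: " + widget.Name;
    }
}
```

The ArgumentNullException names the offending parameter and fires at the call site, which is exactly where you want to be standing when the debugger breaks.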

Mapping Symbol Names to Strings

// BAD!
string fooTypeName = foo.GetType().FullName;
if (fooTypeName.Contains("Widget"))
{
    // we have a widget!
}

// BETTER: ask the type system instead of the type's name.
WidgetBase widget = foo as WidgetBase;
if (widget != null)
{
    // we have a widget!
}

// BEST: code to an interface rather than a concrete base class.
IWidget widget = foo as IWidget;
if (widget != null)
{
    // we have a widget!
}

This is easy to avoid in the example above, but less easy to avoid when messing with types and type names without having an instance to safe-cast. Use Type.GetType("your_class_name") and .IsSubclassOf() instead.

Not an issue in C#? Fine, then. Wait until you have to inter-operate with JavaScript, and then see how you go about retro-fitting subtype checks to your entire codebase :)
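To spell that suggestion out, here's a small sketch (WidgetBase, SprocketWidget and the helper names are invented for the example):

```csharp
using System;

public class WidgetBase { }
public class SprocketWidget : WidgetBase { }

public static class TypeChecks
{
    public static bool IsWidget(object foo)
    {
        // String matching on type names breaks under renames and gives
        // false positives ("NotAWidgetFactory" contains "Widget").
        // When you have an instance, just ask the type system.
        return foo is WidgetBase;
    }

    public static bool IsWidgetTypeName(string typeName)
    {
        // When all you have is a type name, resolve it to a Type and
        // compare types, rather than poking about in the string.
        Type t = Type.GetType(typeName);
        return t != null && (t == typeof(WidgetBase) || t.IsSubclassOf(typeof(WidgetBase)));
    }
}
```

Note that Type.GetType wants an assembly-qualified name for types outside mscorlib and the calling assembly, so resolve accordingly.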

Automatic Updates? Bollocks.

Andrew Harcourt Mon 19 Apr 2010

I'm an upgrade junkie. I really like having the latest and greatest of every piece of software on my workstation. That having been said, I absolutely hate with a passion having to update the whole lot of it manually.

I just returned to work from a week's holiday. (It was a great holiday, by the way - I did almost entirely nothing - but that's not the point of this post.) Let's go through the list of software that pestered me to update it upon my return:

  • Firefox (twice - once to 3.5.7 and then again to 3.6! What happened there, guys?)
  • Adobe Reader
  • Opera
  • Windows (surprise)
  • Java
  • Forefront
  • Seesmic
  • Firefox (again! "Downloading and installing updates to your add-ons." WTF?)


The list actually goes on, believe it or not, but I'm already fed up with the sight of little scrolly bars.

A plea, then, to all software developers: make your software update itself automatically and silently.

I've been walking around all morning asking people, "Does anyone know what version of Google Chrome we're up to now? Four! Chrome 4.0 and nobody's noticed, because it updates itself silently." As it turns out, we're actually up to version five, and I didn't notice - for the same reason. Chrome team, I take my hat off to you.

Here are a couple of simple questions you should ask yourself as a developer:

  1. Is my software life-, mission- or business-critical? That is, will people die or lose vast sums of money if my software fails?
  2. Are my unit and regression test suites up to date? Will I be able to tell if I've broken my software?
  3. Am I going to change my licence agreement?

If you're writing, for example, an enterprise-grade business intelligence suite, then perhaps you shouldn't auto-upgrade. By all means, make it easy for people to upgrade, but don't do the whole ninja-update-sneak-it-in-at-midnight thing.

If you're writing pretty much any other type of software, here's a tip: users don't care about point releases. They don't care about bug fixes unless it directly impacts them. They don't even really care about major feature additions - they're obviously already using your software, so it must be fulfilling some need for them. By all means, add features, but do it in a way that isn't disruptive to your existing user base.

Users, to be honest, really don't care about anything much, other than how to get their work done - and your stupid "Update Me! Now! Now!" dialog box is just getting in their way.

Generic solution for testing flag enums in C#

Andrew Harcourt Sat 27 Feb 2010

This is one that's irritated me just a little for ages, but never as much as this morning, when I needed to create a whole swag of small-ish flags enums and then test for bits set in them.

Here's a quick solution:

public static class FlagExtensions
{
    public static bool HasFlag<T>(this T target, T flag) where T : struct
    {
        if (!typeof(T).IsEnum) throw new InvalidOperationException("Only supported for enum types.");

        int targetInt = Convert.ToInt32(target);
        int flagInt = Convert.ToInt32(flag);

        return (targetInt & flagInt) == flagInt;
    }
}

Notably, while we can't use a where T: Enum constraint in our extension method, an enum (lower-case e) is actually a struct, so we can at least constrain it to structs and then do a quick type-check.

To use it to test whether an enum value has a particular flag set, try this:

MyFlagEnum flags = MyFlagEnum.Default;

if (flags.HasFlag<MyFlagEnum>(MyFlagEnum.Coffee))
{
    // our "Coffee" bit is set. Yum!
}

Ugly Photos Screen Saver

Andrew Harcourt Wed 3 Feb 2010

This is a very, very simple screen saver. I wrote it because I was fed up with not having one that did precisely what I wanted.

There will probably be updates to it, but it does what I want for now. Hopefully it'll work for some others, too.


  • Displays photos from your My Pictures folder and/or any other folders that you configure.
  • Performs a simple cross-fade between images.
  • Navigate forward/backward between individual images and image galleries.
  • Multiple monitor support.




What's it cost?

Nothing. It's free. If I update it to do some cool stuff that other screensavers don't do then I'll start asking a small price for it but, for now, it's free.

Licence Agreement

Permission is hereby granted, free of charge, to any person obtaining a copy of this software to use it without charge. All other rights, including those to modify, copy, merge, sell, license, sub-license, publish or otherwise distribute the software, are reserved by the author.


Download and Install

To install it under Windows Vista or Windows 7, save it to your Downloads folder, then right-click the saved file and select Install.

To install it under Windows XP, just save it to C:\Windows\System32 and you're done.

I accept the licence agreement above. Download Ugly.scr.

Please let the screen saver report errors to me

I neither want nor care about your bank account details, Facebook password or whether you tweet or not. I just want to know when my app crashes or experiences an unexpected failure. You can, of course, turn off error reporting, but by leaving it turned on you'll provide anonymous feedback to me so that I can fix things that go wrong.

Quick JavaScript debugging: the browser’s address bar

Andrew Harcourt Mon 7 Dec 2009

Okay, so I feel a bit stupid for not having noticed this before. I wanted to geo-tag my blog in FeedBurner and so flicked to Google Maps to work out the latitude and longitude of the head office of the company at which I work.

I found a quick guide on how to extract lat/long information from Google Maps on the Tech-Recipes site, which suggested centring[1] the map and then copying and pasting the following JavaScript code snippet into the browser's address bar:


For the record, the code works, but that's not what caught my attention. The code runs, but the part that was new to me is that the code runs in the context of the current browser window and has access to all its variables. I'd never realised that before, hence why I now feel stupid.

Nonetheless, it means that I now have a quick way of executing arbitrary JavaScript when I want to poke around on a page but don't want to fire up a debugging tool.

Oh, and yes, jQuery expressions do work, but watch out for characters requiring escaping in your selectors.

[1] I'm Australian. That's correct spelling. So there :)

Never again will I forget the address of a DNS server...

Andrew Harcourt Fri 4 Dec 2009

Google has just opened its Google Public DNS service to the public. It's very cool, but I'll come back to that. The coolest features of all, however, are the IP addresses at which the DNS servers live: 8.8.8.8 and 8.8.4.4.

Odds are I'll never need to remember the IP address of another DNS server as long as I live :)

Okay, the other cool stuff. Firstly, it pre-fetches a bunch of records; secondly, it handles recursive lookups; very importantly, it doesn't do the dodgy resolve-to-ugly-spammer-sites that lots of other DNS services do when they can't resolve a particular hostname.

It'd be nice to see localised versions of these so that those of us not in the United States of Litigation don't have to wait for packets to hop the pond and back, but I'm sure that will happen with time.

Oh, and for all the tin-foil-hat-wearing conspiracy theorists out there: yes, this means that Google can tell whose hostname you're looking up. Face it, though, you're probably going to be going there as a result of a Google search anyway, so they already know. I'd trust Google to "Do no evil" way before I'd trust my local ISP.

"Your browsers are bad and you should feel bad."

Andrew Harcourt Sun 22 Nov 2009

Microsoft released a preview of Internet Explorer 9 the other day. The world sniggered.

My favourite comment of all of the ones on the IEBlog was by someone known only as "Justin": "Your browsers are bad and you should feel bad." Justin, if you send me an email I'll send you a cookie for that quote.

Don't get me wrong; I think it's great that Microsoft's finally starting to accept that their IE8 browser was obsolete before it even shipped. Their own graph shows that IE7 is around three times slower than IE8, which is in turn around five times slower than the next-slowest browser in circulation (Firefox), which is in turn at least twice as slow as Chrome. In other words, Microsoft's own graph shows an absolutely staggering performance difference between IE7 and the rest of the civilised world, with IE8 sitting all by itself in the middle. It's worth noting that while IE6 is also still a supported browser, Microsoft appears to have simply been too embarrassed to even include it.

Microsoft claims that IE9 today is about as fast as the current version of Firefox. Oh, but wait - IE9 hasn't shipped yet, and won't for ages, whereas Firefox is out in the market, systematically eating away at IE's anti-trust-derived market share along with its younger, slimmer cousins like Opera, Chrome and Safari.

Microsoft's next claim to fame is a massive improvement on their Acid3 test score, from 32/100 to... umm... 32/100. Hang on - what? You mean you've got an entirely new browser under development, guys, but you haven't fixed the core of what was broken with the previous one?

For the record, IE's performance is awful. We as professional developers accept that and dislike it, but it's only a reason to dislike Internet Explorer. The reason that anyone who's ever had to code for it hates the bloody thing with a passion is simple: it doesn't support nearly enough of the web standards that every other browser in common use does. Oh, and it leaks memory like a bloody sieve, too.

Oh, but wonderful - I can now have 96-point Gabriola text render really quickly and without jaggies. I can honestly say that of all my complaints about having to write code for Internet Explorer, that's been number one on my all-time wish list. Wow. I can die happy, now.

I'm not suggesting that Microsoft should stop development on IE. Far from it. Competition is good, right? If everyone were to adopt WebKit then we'd only be in a whole new world of hurt five years from now. What the entire development community would absolutely love to see, however, is this:

  1. Fix the update mechanism. Has anyone noticed that we're up to Chrome 4.xx now? No? Why not? Because the thing updates itself, that's why. Make IE do the same, and ditch support for all legacy versions. If large corporate clients want to keep using IE6 for their intranet applications, fine. Let them. But if you continue to inflict IE6 and its misbegotten children on the world because of a few large corporate clients, the rest of the world is going to despise you for it. Rightly so.
  2. Just make it work. We honestly don't care how fast it runs as long as it's within cooee of the others. We do care about having to write custom code to deal with the broken rendering engine, memory leaks and other nasties.
  3. Stop bragging about it until you've done it.

To be civil (which is more than you're getting from most of the web community), but blunt nonetheless: put up or shut up.

Google launches SideWiki for Chrome

Andrew Harcourt Fri 30 Oct 2009

A while ago, I blogged about how it was amusing that Google had a new-ish product, SideWiki, that only worked in Firefox.

Google has just released a bookmarklet that will allow SideWiki to work in Chrome, Safari and others. Read their official blog post about it here, or, if you already have it installed, perhaps even consider annotating this blog post :)

UI Fail

Andrew Harcourt Sun 25 Oct 2009

This morning I cancelled an old account with a DNS hosting provider. I'd been signed up to them involuntarily by a web hosting company, which is one of the reasons it's no longer my hosting company. No hyperlinks - I can't really recommend either of them and that's not what this post is about.

I clicked the "Account | Cancel" link and was presented with this dialogue box.


Please, guys. If you don't have a UI designer, get one. If you do have one, get another one instead.


On corporate blogging: corporate culture and personal ethics

Andrew Harcourt Tue 13 Oct 2009

Let me start by demonstrating just a tiny amount of corporate cynicism.

"Blogging is so Web two-point-oh."

"Oh, but everyone has a blog."

"If you're not on, like, Twitter an' Facebook 'n' stuff, then you're like, a dinosaur or IBM or something."

Let's now all pause for a moment to welcome those companies that aren't, strictly speaking, technological leaders any more, to the blogosphere[1]. Congratulations, you've finally arrived. But what are we doing, here? What, when you get down to it, is corporate blogging all about? What should it be about? Why should a company do it? And when should it not?

A company should blog, first and foremost, when it actually has something to contribute to the world at large. Rambling personal blogs are fine for... well... personal purposes[2], but corporate blogs should be useful to current and potential customers, partners, shareholders and so on. They should also allow people within the company who normally wouldn't be at the front line of public relations to provide their perspective on things - otherwise, why have a corporate blog at all? Why not just have a series of press releases from your marketing department?

Having people within your company contribute to your corporate blog can be a two-edged sword. Unfortunately, it's one upon which many companies cut themselves. The pitfall should be obvious, but one damaging strategy that many companies employ is to sanitise or censor what their people say. I'm not arguing that you should permit people to blather your corporate secrets all over the web; merely that you should hire good people and then trust them to do their job well.

If you have a trusted, valued employee who's worked on your business-critical systems or in client-facing roles where they've already represented the company, you should trust in their professionalism and let them post what they deem appropriate.

If you have an irresponsible blogger who's publishing commercial-in-confidence information to the web at large, then you don't just have an irresponsible blogger. You have an irresponsible employee, who should be swiftly counselled or transformed into a former employee.

To Companies:

When asking your employees to blog on your behalf, you need to give them some guidelines about what they should and should not say, and you should also give them some benefits from doing so. I suggest these as a minimum:

  1. We won't edit your posts. Period. We reserve the right to pull them from the site if absolutely necessary, after which we'll explain to you why we did so, but we will never edit anything that's been published under your name. Period.
  2. We won't require an approval process. Post as you will. The corollary to this: be responsible with what you publish, because you will be held responsible for what you publish. We expect - and trust - you to do the right thing.
  3. It's infeasible to track whether it was personal or company time on which you wrote a blog post, so it must be accepted that if you publish a post on our corporate blog then we own the content.
  4. We grant you a perpetual and irrevocable (but non-exclusive) licence to use the content that you generate for our blog for your own purposes, including but not limited to syndication to your own personal blog, re-publishing as journal articles, or any other purpose you wish - specifically including for personal profit - provided that such use does not harm the company's reputation.

If you don't offer at least these guarantees, then your blogging-inclined employees will simply go off and publish their own content anyway, and you'll derive no benefit from their efforts whatsoever. Moreover, you'll have demonstrated a fundamental lack of trust in your employees that may well cause them to take their talents, not just their blogs, elsewhere.

If you can't offer at least these guarantees, then I respectfully suggest that a corporate blog is not for you.

To Individuals:

Carefully consider your personal reputation. If you haven't one, consider what you'd like it to be in a year or so's time. If you have one, consider whether the opportunity to blog for your company is more likely to enhance or damage it.

Consider whether you're likely to be permitted to express your real opinion on a matter, or whether you're going to be asked to be a corporate mouthpiece. If the former, great. Thank your employer and do the right thing by them. If the latter, I suggest that you respectfully and courteously decline the invitation to contribute.

Never lie. You needn't (and shouldn't) air your corporate dirty linen in public - everyone has it, and nobody appreciates seeing someone else's - but your personal integrity is yours to defend. If you disagree with a tactical or strategic decision made by your company, simply don't write about it. If you think Joe from Accounts is a cretin, keep it to yourself. Write about your area of expertise or influence; ask for feedback from your audience; allow your readers to understand that your company is thinking about their issues and exploring ways and means of helping its stakeholders.

Remember, if you blog as a corporate automaton, those of your readers who aren't very smart will probably take what you say at face value. Those of your readers who are smart and observant (those whom you should hope are your peers) will see straight through it, and these are the people who won't be hiring you at your next job interview - or whom you'll be trying to hire yourself. Presumably your peers are the ones you want to respect and admire you.

In General:

I was recently asked to contribute to a corporate blog. When I responded asking for the guarantees suggested above, I was surprised to learn that nobody had actually considered it yet. On reflection, that shouldn't have been much of a surprise - blogging is, after all, a relatively new beastie in corporate land.

The lessons to be learned from that, I guess, are to not be afraid to ask, and not be afraid to decline the opportunity if it isn't appropriate for you.

Will you see my by-line on a corporate blog any time soon? We'll have to wait and see :)

[1] Wow. Kudos to the Windows Live Writer team - their spell-checker actually recognises "blogosphere" as a word. Pity it didn't recognise "Facebook" :)

[2] See my personal blog for a perfect example of that - anyone want a trailer? :)

I am not trying to connect to the Internet, you stupid... oh, never mind...

Andrew Harcourt Mon 12 Oct 2009

Sir Winston Churchill once said, "Those who fail to learn from history are doomed to repeat it."

Cast your memory back to this particular gem:

That stupid paperclip was one of the worst UI bungles in history, and one would think that Microsoft's UI designers would have done their absolute best to never irritate their users so much again. The lesson to learn is simple:

Never, ever steal the focus when your user is doing something unless it's absolutely critical that you do so.

Short on battery? Flash an icon in the notification area. (Hint to UI designers: in Windows, the system tray is generally at the bottom-right of the display. It has all these little notification-ish thingies in it. It's really remarkably useful.)

I have new email? That's nice, but I'm busy right now. Flash an icon in the notification area.

Someone's unplugged my headphones? Oh, that's right - that'd be me! Don't even bother flashing an icon - I know I unplugged the things. It'd be nice if your media player didn't stop when I did it, though.

Can't connect to the Internet? That's okay, too - you're already taking up two icons' worth of space in the notification area for no good reason, so perhaps you could re-use some of that.

Just about the only reason I can think of off the cuff to steal focus is if the device is critically low on power and is going to fail within the next thirty seconds or so. That's it. And even then, only do so once you've saved my work.

Everything else can be done by - you guessed it! - sticking a notification in the area reserved for them.

Enter Windows Mobile.

Windows Mobile has the extremely annoying habit of popping stupid notification balloons up over whatever I'm doing, just to tell me about something completely unrelated. I finally got so incredibly fed up with this that I decided to photograph it before I fed the device to the Kreepy Krauly. This behaviour is remarkably like Clippit, and even more annoying. What's worse is that this behaviour comes in a small, easy-to-fling-at-the-wall-when-you're-infuriated package.

Seriously, if you haven't taken the time to watch the Salmon Days clip above, now's the time:

"Hey, it looks like you're writing a text message!"



F*** off.

"Hey, it looks like you're driving somewhere and really can't afford to be interrupted!"



F*** off.

"Hey, it looks like you're trying to tell me where you want to go today!"



F*** off! I am not trying to connect to the f***ing Internet, you stupid, f***ing papercl... oh, never mind.

Where do I want to go today? Well, I'm off to a dealership to replace this awful contraption. The only question is this: Android or iPhone?

Google "why is..."

Andrew Harcourt Wed 7 Oct 2009

A friend just dropped around and suggested that I have a look at Google's suggested searches for "why is".


Enough said?

Google Toolbar requires Firefox?

Andrew Harcourt Mon 5 Oct 2009

Umm... what? Guys, didn't you realise that you have a browser all of your own? :)


I'm sure SideWiki's very cute, but I'm not willing to go back to Firefox to use it.

Live Mesh, I'm breaking up with you

Andrew Harcourt Fri 14 Aug 2009

I'll be honest, Live Mesh: It's not me, it's you.

You're too slow.

You're unresponsive.

I need something larger, faster and more flexible.

I'm in love with Dropbox. (Disclosure: that's a referrer link from my user ID; I get free space if you sign up.)

To be fair, Live Mesh was (and is) a great idea. Unfortunately, it's been in its "Tech Preview" state for far too long. I can live with technical preview status, but what comes with that status is a 5Gb limit, which I just can't survive any more.

I want my data to be on someone's cloud, somewhere. I'm perfectly comfortable that I can encrypt my own stuff as I need to, before handing it over to somebody else. And Dropbox appears to fit the "someone else" bill pretty much perfectly.

It works across platforms; it has a browser interface; it does everything that Mesh does (that I want, anyway), but does it faster and more transparently.

Oh, and it does photo sharing as well. Picasa's going to get a run for its money, here - this is so incredibly easy to store and arrange files on that I'm seriously considering ditching my (paid-for) Picasa web albums.

Worth Reading: Automate, else Enforce otherwise Path of Least Resistance

Andrew Harcourt Fri 7 Aug 2009

A friend and colleague, Matthew Rowan, has just spent some time formalising some of the knowledge that many developers will have grasped intuitively with respect to process management - but that many won't have.

Automate, else Enforce otherwise Path of Least Resistance.

This is well worth a read as far too many companies get burdened with process documentation over actual workable process.

Automatically rejecting appointments in Microsoft Outlook 2007

Andrew Harcourt Sat 1 Aug 2009

There's a nice feature in Outlook that allows users to automatically accept appointments, and even decline conflicting ones. Unfortunately, it can't let you specify your own reasons for declining meeting invitations.


A particular pet hate of mine is when people send a meeting invitation entitled "Foo Discussions" or some such, and fail to specify a location or any content. It's even more irritating when I'm trying to be a good little corporate citizen and have my calendar auto-accept appointments, but they send it ten minutes before the thing actually starts. They're going to receive an acceptance notice (of course) but my phone's not going to synch for a good half-hour, and there's just no way I'm going to be there. Funnily enough, I'm not just sitting around on my backside, waiting for someone to invite me to a meeting.

Oh, a meeting! How exciting! I've been waiting for one of these all day!

Of course, if you simply decline offending appointments manually, people tend to get offended. (Which may or may not be a good thing, depending on who it is.) A better way, however, is to automate the process.

Nothing personal, old chap - my calendar just has automation rules that apply to everyone.

The rules for getting into my calendar are simple:

  1. Tell me everything I need to know about the meeting. This includes, specifically, its location. Outlook enforces pretty much everything else, but fails to enforce this one.

  2. Please do me the courtesy of checking my free/busy information and *do not* attempt to trump something that's already been organised. It shows a complete and utter disregard for my time and that of anyone with whom I've already agreed to meet.

  3. Do me the courtesy of giving me at least 24 hours' notice. Don't send me a meeting request at 7pm on Monday evening for 7:30am on Tuesday morning. I'm not going to read it, and I'm not going to be there.

I finally snapped today, after another imbecilic meeting request, and wrote these two quick methods. They enforce the three rules above, automatically accept the request if it passes and automatically decline otherwise. They appear to work for me; your mileage may vary. No warranties, express or implied, etc.

Sub AutoProcessMeetingRequest(oRequest As MeetingItem)

    ' bail if this isn't a meeting request
    If oRequest.MessageClass <> "IPM.Schedule.Meeting.Request" Then Exit Sub

    Dim oAppt As AppointmentItem
    Set oAppt = oRequest.GetAssociatedAppointment(True)

    Dim declinedReasons As String
    declinedReasons = ""

    If (oAppt.Location = "") Then
        declinedReasons = declinedReasons & " * No location specified." & vbCrLf
    End If

    If (HasConflicts(oAppt)) Then
        declinedReasons = declinedReasons & " * It conflicts with an existing appointment." & vbCrLf
    End If

    If (DateTime.DateDiff("h", DateTime.Now, oAppt.Start) < 24) Then
        declinedReasons = declinedReasons & " * The meeting's start time is too close to the current time. " & vbCrLf
    End If

    Dim oResponse As MeetingItem
    If (declinedReasons = "") Then
        Set oResponse = oAppt.Respond(olMeetingAccepted, True)
        oResponse.Send
    Else
        Set oResponse = oAppt.Respond(olMeetingDeclined, True)
        oResponse.Body = _
            "This meeting request has been automatically declined for the following reasons:" & vbCrLf & _
            declinedReasons
        oResponse.Send
    End If


End Sub

Function HasConflicts(oAppt As AppointmentItem) As Boolean
    Dim oCalendarFolder As Folder
    Set oCalendarFolder = ThisOutlookSession.Session.GetDefaultFolder(olFolderCalendar)

    Dim apptItem As AppointmentItem

    For Each apptItem In oCalendarFolder.Items
        If ((apptItem.BusyStatus <> olFree) And Not (apptItem Is oAppt)) Then
            ' two appointments overlap when each one starts before the other ends
            If (apptItem.Start < oAppt.End) Then
                If (apptItem.End > oAppt.Start) Then
                    HasConflicts = True
                    Exit Function
                End If
            End If
        End If
    Next apptItem

    HasConflicts = False
End Function
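For what it's worth, the overlap check buried in HasConflicts is just the standard interval-intersection test: two appointments clash exactly when each one starts before the other ends. In JavaScript, purely for illustration:

```javascript
// Two half-open intervals [aStart, aEnd) and [bStart, bEnd) overlap
// exactly when each one starts before the other ends. Back-to-back
// meetings (one ending exactly as the other starts) don't conflict.
function overlaps(aStart, aEnd, bStart, bEnd) {
    return aStart < bEnd && bStart < aEnd;
}
```

The VBA above expresses the same condition as two nested Ifs so that it can bail out early.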

Just open the VBA editor from within Outlook (Alt-F11) and paste the subroutines into the ThisOutlookSession project.


Then go and create an Outlook rule that calls the AutoProcessMeetingRequest subroutine for every meeting request you receive:


Those of your colleagues who persistently refuse to learn how to use email (an essential business tool!) will receive responses along the following lines:



Andrew Harcourt Thu 30 Jul 2009


Don't be spamming me, Windows Live

Andrew Harcourt Fri 17 Jul 2009


New features are all very nice, guys, but:

a) that feature's been around for ages;

b) I hate it, as does everyone I know;

c) I've already told Messenger not to pop up the today box when I launch it, and this looks remarkably like you're ignoring your users' wishes for your own marketing purposes.

Back to Pidgin, methinks.

What’s wrong with this sequence? A message to the VSTS guys:

Andrew Harcourt Fri 17 Jul 2009

Stupid Question #1: What actually constitutes a conflict?

Is a conflict when two people have changed a file, or is it when two people have changed the same part of a file and VSTS needs assistance to resolve it? Every other (sane) version control system in the world interprets a conflict as the latter.


Stupid Question #2: How is it "Auto Merge" when I have to tell VSTS to merge?


Stupid Question #3: If VSTS could not-quite-auto-but-with-the-press-of-a-button merge, and it managed to resolve everything all by itself, why on earth does it pester me about non-existent conflicts all the time?!?!?


Perhaps these questions are stupid, but I most respectfully submit that the answers are stupider.

Microsoft on why a memory leak isn't really a leak

Andrew Harcourt Fri 10 Jul 2009

The genesis of this rant is that a colleague and I have just spent a couple of days diagnosing and fixing memory leaks (sorry, "pseudo-leaks", according to Microsoft, which presumably means that the memory explosion we were seeing wasn't actually real) caused by awful, awful garbage collection in Internet Explorer.

The icing on the cake was finding this article: Understanding and Solving Internet Explorer Leak Patterns. The markworthy text is this:

Pseudo-leaks almost always appear on the same page during dynamic scripting operations and should rarely be visible after navigation away from the page to a blank page.
In other words, Microsoft Word doesn't leak memory either. All you have to do is close it, open it again and miraculously all the memory that it allocated and failed to release (but that somehow fails to meet the definition of "leaked") is released.

The whole point of Internet Exploder 8 was to build an AJAX-friendly browser. The subtext went along the lines of, "Well, all those AJAX-heavy sites like GMail, Google Maps, Hotmail etc don't perform well under IE7 and we're losing browser market share, so let's make a browser that is AJAX-friendly. But, at the same time, let's make developers require people to navigate away from that application before any of its memory is released."

We've just spent days diagnosing as many of the various ways that IE leaks memory (ways, incidentally, with which none of the other browsers seem to have problems), patching the jQuery core to cope with its idiocy and writing our own DOM garbage collection handlers to deal with it. The jQuery and GWT discussion forums reveal that these guys are having just as much pain, and for similar reasons.

To the IE development team: Please, please, please, guys, fix your sodding .removeChild method and everything that has anything to do with it. And don't talk to me about setting .innerHTML properties either until you have a browser that doesn't seg-fault when I do that to a table element, or when I manually break the relationship between a node and its parent. Finally, at least have the courage to confess that your browser does leak memory in these scenarios rather than some pathetic attempt at explaining why a leak isn't a leak. Grr!!

If you want to cringe, grab the sIEve tool (see Memory leak detector for Internet Explorer for a link) and point it at MSDN. Navigate around a bit and then have a look at the number of orphan DOM nodes. Consider how many full-page reloads MSDN causes, and compare this to your own AJAX application. Weep.

Memory leak detector for Internet Explorer

Andrew Harcourt Thu 9 Jul 2009

I've been playing with Drip and sIEve in order to find some memory leaks that we've been encountering under Internet Exploder.

Drip / sIEve: memory leak detectors for Internet Explorer.

If you haven't looked at your application with sIEve, you really should.

"The control collection cannot be modified during DataBind, Init, Load, PreRender or Unload phases."

Andrew Harcourt Tue 21 Apr 2009

Garbage. Of course it can. What a stupid error message.

What it should say is, "Ancestors' control collections cannot be modified..." or "Control collections in a base server control class where a derived class uses markup cannot be modified..."

You'll see this error for a bunch of reasons to do with modifying control collections, and by and large it's probably because you're messing with someone else's collection. (Hint: don't do that. It's a bad practice. Clean up your code and don't mess around with other classes' internals. It's like reaching into someone else's trousers and adjusting things. You just don't do it. Ugh.)

Consider this scenario, though:

public class FooControl : UserControl
{
    HiddenField _hiddenFoo;

    protected override void CreateChildControls()
    {
        base.CreateChildControls();

        _hiddenFoo = new HiddenField { ID = "hiddenFoo" };
        Controls.Add(_hiddenFoo);

        // Presumably we'll also actually *do* something with hiddenFoo, but
        // that's neither here nor there.
    }
}
Remarkably boring. Until, that is, someone creates a control using markup (a .ascx control) and uses FooControl as the base class. All of a sudden, the exception above is going to be thrown, and you're going to be scratching your head, wondering what on earth's going on.

The most common scenario for this is where there's post data for your hiddenFoo field, and the page's ProcessPostData method indirectly calls EnsureChildControls() on everything it needs to in order to re-populate the control tree to where it was last time and then stuff the post data into the relevant field. This can sometimes (and don't ask me about "sometimes") lead to the above exception.

Calling EnsureChildControls() in your OnInit handler won't work, as your control's events won't reliably have started firing yet. (It might work, but it won't guarantee it.) Inspecting your Controls property at a breakpoint in CreateChildControls method will give you a clue, though: your child controls that are declared in markup already exist at this point.

What you want to do instead is the following:

public class FooControl : UserControl
{
    HiddenField _hiddenFoo;

    protected override void AddParsedSubObject(object obj)
    {
        // Make sure our own child controls exist before any controls
        // parsed from markup are added to the control tree.
        EnsureChildControls();
        base.AddParsedSubObject(obj);
    }

    protected override void CreateChildControls()
    {
        base.CreateChildControls();

        _hiddenFoo = new HiddenField { ID = "hiddenFoo" };
        Controls.Add(_hiddenFoo);

        // Presumably we'll also actually *do* something with hiddenFoo, but
        // that's neither here nor there.
    }
}

The AddParsedSubObject method is what's called to add controls that were parsed from markup to your control tree. As we already know that our parsed controls are happily ensconced in our control tree, all we have to do is demand that all of our other controls get loaded first.

Bear in mind that this won't work if you're doing silly things with your control collection, and it will incur a slight performance hit: AddParsedSubObject is called once per markup element (including for whitespace literals) and calling EnsureChildControls will result in a check against ChildControlsCreated, so you're going to have a lot of those hits. Still, it's better than an unhandled exception, right?

My Life is Debugging

Andrew Harcourt Mon 6 Apr 2009

Hmm. I spent today at work, optimising JavaScript and re-jigging chunks of Microsoft's scripting framework. It's a Sunday, by the way.

I then came home and spent the rest of the evening debugging WordPress XMLRPC.

My life is debugging.

Official Gmail Blog: New in Labs: Undo Send

Andrew Harcourt Fri 20 Mar 2009

Here's a cute new feature from Google: "Undo Send" in Gmail.

Official Gmail Blog: New in Labs: Undo Send.

Cool :)

Scripting Gotchas

Andrew Harcourt Fri 20 Mar 2009

Microsoft JScript runtime error: Sys.ArgumentUndefinedException: Value cannot be undefined.
Parameter name: type  



The cause: the type name registered in the .js file doesn't match GetType().FullName in the .cs file.

console.log() Equivalent for Internet Explorer

Andrew Harcourt Thu 12 Mar 2009

There are a bunch of people out there who are fed up with the lack of a console.log() equivalent in Internet Explorer. It shouldn't come as any surprise that I'm one of them.

For anyone who's ever tried to debug a whole bunch of JavaScript code and ended up with myriad alert('here') and alert('here2') calls just so they could see what was happening, console.log() became our friend very, very quickly. No surprise, however, that IE didn't have it.

It becomes significantly more painful, however, when you're trying to clean up JS code for the sake of performance. The usefulness of any metrics collected goes out the window once there's user activity involved. (Besides, clicking "OK" for all those alert boxes is a royal pain.)

(Hint to the IE8 team: Your product is still in beta. You must have a logging call somewhere. Publish it, please. Please. All the other browsers of note do.)

There are quite a few good console.log() equivalents out there, not the least of which are Faux Console and the Yahoo User Interface Logger Widget. For extremely light-weight applications, though, there was nothing that did just what I wanted, so I wrote one. You'll be depressed at how simple it is, and how easy it would have been for the IE team to have included this functionality at almost any point in IE's development cycle.

The JavaScript code:

// rudimentary javascript logging to emulate console.log(). If there
// already exists an object named "console" (defined by most *useful*
// browsers :p) then we won't do anything here at all.
if (typeof (console) === 'undefined') {

    // define "console" namespace
    console = new function() {
        // this is the Id of the console div. It doesn't actually need
        // to be a div, as long as it has an innerHTML property.
        this.ConsoleDivId = "JavaScriptConsole";

        // maintains a reference to the console output div, so that we
        // don't have to call document.getElementById a bunch of times.
        this.ConsoleDiv = null;

        // allows us to cache whether or not the console div exists, so
        // that we can just do an early exit from the console.log method
        // and similar if we're not going to put any useful output anywhere.
        this.ConsoleDivExists = null;
    };

    // this is an expensive (really quite expensive) string padding function.
    // Don't use it for large strings.
    console.padString = function(s, padToLength, padCharacter) {
        var response = "" + s;
        while (response.length < padToLength) {
            response = padCharacter + response;
        }
        return response;
    };

    console.log = function(message) {

        // this will be executed once, on first method invocation, to
        // get a reference to the output div if it exists
        if (console.ConsoleDivExists == null) {
            console.ConsoleDiv = document.getElementById(console.ConsoleDivId);
            console.ConsoleDivExists = (console.ConsoleDiv != null);
        }

        // only do any logging if we actually have an output div.
        // (Check using the cached variable so that we don't end up
        // with a bunch of failed calls to document.getElementById).
        if (console.ConsoleDivExists) {
            var date = new Date();
            var entireMessage =
                console.padString(date.getHours(), 2, "0") + ":" +
                console.padString(date.getMinutes(), 2, "0") + ":" +
                console.padString(date.getSeconds(), 2, "0") + "." +
                console.padString(date.getMilliseconds(), 3, "0") + " " + message;

            // append the message
            console.ConsoleDiv.innerHTML = console.ConsoleDiv.innerHTML + "<br />" + entireMessage;

            // scroll the div to the bottom
            console.ConsoleDiv.scrollTop = console.ConsoleDiv.scrollHeight;
        }
    };
}

Ideally you'd drop this into an included script file, but it's more likely that you'll paste it into a <script> tag in the header of your HTML document.

The HTML that creates the DIV to contain the output:

<div id="JavaScriptConsole" style="position: absolute; bottom: 30px; left: 30px; width: 600px; height: 200px; overflow: scroll; background-color: Yellow; color: Red;">
    <!-- This is here for JavaScript debugging. Please use calls to
         console.log(message) to log to this console, as we're emulating
         the console.log() function that real browsers provide. -->
    <a href="javascript:document.getElementById('JavaScriptConsole').style.visibility = 'hidden';" style="float: right;">Close</a> <span style="font-weight: bold;">JavaScript Console</span><br />
</div>

Note that this div also contains a hyperlink with JavaScript code in it to hide it.

A simple hello world script to log to it:

<script type="text/javascript">
    console.log("Hello, world!");
</script>

... and finally, the output:


Microsoft's Azure Services Platform

Andrew Harcourt Wed 18 Feb 2009

... and why you really should care.

I'm sitting in a Microsoft user group meeting right now, and I am, to be honest, pretty unimpressed. Not with the presenters - they're doing a good job - or with the presentation, which is on what should be a fascinating topic, but with the people. Sorry, Microsoft, but the greatest problem you face right now is not your technology; it's pretty damn good[1]. It's the people who should be excited about it - who, sadly, aren't very.

OK, I'm home now, so I can type properly.

The presentation topic was the Azure Services Platform, which is Microsoft's answer to the Google cloud. Azure is a fascinating topic, both technically and strategically. The technical merits I'll discuss in a minute. Strategically, however, this platform shows that Microsoft is quaking in its boots over what Google's been doing with cloud computing, and is now trying to play catch-up. The degree of success a) remains to be seen; and b) depends upon the aforementioned people who are going to have to want to learn to use and exploit its strengths.

This platform gives immeasurable advantages to whoever wants them: almost infinite scalability, massive parallelism and redundancy, no more worries about server provisioning or downtime... the list goes on.

One of the reasons I'm so irritated is that instead of asking intelligent questions like, "How much can we scale a single computational task?" or even "How does this compare to the Google cloud in terms of speed, flexibility and response time?" people asked questions around keeping their own servers ("Can I still be woken up at 3am when a server falls over, please?") and security ("Can I host my own database and have the platform talk to it?" or, in other words, "Can I still trust Microsoft with my unencrypted data, but nonetheless re-introduce my own single point of failure into an otherwise-well-designed system?"). Honestly.

I don't want to rant, so suffice it to say this:

Learn about your craft. Go and sign up for the Azure CTP. Go and get your Google App Engine key. Read about the Google file-system and Amazon's S3. And, while you're at it, go and re-read some Knuth and some Fowler[2], just because you should, and probably haven't.

Get some enthusiasm about what you're seeing, people. It's brilliant. Go and learn about it. For what it's worth, if you haven't been hanging out for a cloud computing solution from Microsoft for a very long time, I most respectfully suggest that you might be in the wrong profession.

[1] Except for Live Writer. What were you thinking, guys? Writing this post has been painful. I tried to screenshot the crash messages and embed them into another blog post (also in Live Writer) and it crashed, too. Fail.

[2] Who are they? Shame on you.

#if DEBUG Considered Harmful

Andrew Harcourt Fri 16 Jan 2009

I know, I know. Lots of people have written about this one, but nonetheless it still gets used and I feel I should add my $0.02. (That's Australian money, by the way, so it probably works out at not very much in your own currency.)

This post is specific to C#, as .NET has the very nice feature of code attributes, specifically the ConditionalAttribute class which allows methods to be compiled and invoked by the JIT compiler only if there's a particular compilation variable set.

Consider the code below:

private static void Hello()
{
    Console.WriteLine("Hello, world!");
}

private static void Goodbye()
{
    Console.WriteLine("Goodbye, cruel world!");
}

public static void GreetTheWorld()
{
#if DEBUG
    Hello();
#endif
    Goodbye();
}
Let's say that we compile this in Debug mode with code analysis turned on and warnings set to errors. (We all compile with warnings == errors, right?) All is well.

We go to run our unit tests again in Release mode prior to check-in, so we recompile in Release mode. (Or, if we're lazy, we just check in from our Debug build and let our build server compile and run the tests in Release mode.)

Oops. CA1811 violation: you have uncalled private methods in your code. Please call them if you meant to call them, or remove them if not. The FxCop engine will never notice that our #if DEBUG directive has compiled out the call to our Hello() method, so code analysis throws an error.

Use this one instead:

[Conditional("DEBUG")]
private static void Hello()
{
    Console.WriteLine("Hello, world!");
}

private static void Goodbye()
{
    Console.WriteLine("Goodbye, cruel world!");
}

public static void GreetTheWorld()
{
    Hello();
    Goodbye();
}
This makes the compiler much happier.

Let's consider the first piece of code again, though, and edit it in Release mode. Perhaps we'd like to rename our methods to something more descriptive of what they do: PrintHello() and PrintGoodbye(). So, we whip out our trusty refactoring tool (^R ^R in Visual Studio) and tell it to rename our methods.

Here's what we end up with (remembering that we're in Release mode):

private static void PrintHello()
{
    Console.WriteLine("Hello, world!");
}

private static void PrintGoodbye()
{
    Console.WriteLine("Goodbye, cruel world!");
}

public static void GreetTheWorld()
{
#if DEBUG
    Hello();
#endif
    PrintGoodbye();
}
Oh, sod. We've introduced a compilation error because the refactor/rename operation uses the compiled version of the code to check for symbol usage, and our call to the former Hello() method doesn't appear in the compiled assembly because the #if DEBUG check caused it to not be compiled. We've left the old call to Hello() unchanged.

If we'd performed the same operation on the second piece of code instead, we'd be laughing.

Brisbane Alt.Net User Group Launched

Andrew Harcourt Wed 14 Jan 2009

The Brisbane Alt.Net User Group has launched. Check it out at Brisbane Alt.Net or, even better, turn up to the first meeting in February.

The Windows 7 Beta Kicks Off This Week - Windows 7 Team Blog - The Windows Blog

Andrew Harcourt Mon 12 Jan 2009

Just in case you missed it, the Windows 7 Beta is now available.

The Windows 7 Beta Kicks Off This Week - Windows 7 Team Blog - The Windows Blog.

As I've been officially on holidays for the last four days (yes, four days includes two days of weekend; /sigh) I haven't fetched it yet so I have no wisdom for anyone about what's good and what's not. Why not try it and see?

More Messenger Bugs

Andrew Harcourt Tue 2 Dec 2008

This tells me that I'm signed in in two places, even though a) I'm not, as I signed out on my laptop before I came in to work, and b) the first thing it's supposed to do if I am signed in somewhere else is sign me out.


This option still exists, but apparently isn't honoured.


Braces in string.Format()

Andrew Harcourt Tue 2 Dec 2008

I'm really quite surprised that I've never needed this before, but today I wanted to embed some JavaScript within a string contained in a C# class and format it using string.Format().

The problem? My JavaScript was a function declaration and therefore contained braces, but the placeholder delimiters in string.Format also use braces.

The solution: braces get escaped using another brace of the same sort.

var jsConditionalHelloWorldTemplate =
    "if ({0}) {{\r\n" +
    "    alert('Hello, world!');\r\n" +
    "}}";

var sendToBrowser = string.Format(jsConditionalHelloWorldTemplate, "true");

How did I not know this before?

Windows Live Messenger Beta

Andrew Harcourt Sat 29 Nov 2008

If you read this post, please download the new beta of Windows Live Messenger, install it and turn on the Customer Experience Improvement Program feature.

Why? Because if enough of us do it, perhaps Microsoft will fix some of the issues before we're lumped with the thing for real.

There are a couple of things about the new beta (build 14.0.5027.908), however, that are driving me completely insane.

Firstly, Microsoft, could you please remove the stupid warning bar telling me that "Clicking this link might open your computer to security risks." For one thing, funnily enough, I know that. I should at least be able to make it go away after it's been displayed once. For another thing, it's warning me about a link that I sent, that just happens to be in my message history.

Secondly, the sign-on in multiple places feature is an absolute pain, even when it's supposedly turned off. It still keeps telling me that I've signed in in multiple places, even when the very next thing it does is sign me out. It also means that if I turn the feature on and then sign out of one machine for a few minutes, messages to me aren't presented as offline messages; they're just never seen at all until I happen to move to the other machine. That strikes me as a bit silly.

Finally, although it's petty: The whole point of having an adjustable profile picture size is to provide more screen real estate for messages. In the new Messenger, if I make my profile picture smaller, that's all that happens. That's kind of silly...

All in all, IMHO the new Messenger "beta" should really have been called an early alpha. Good ideas, but awful implementation.

JavaScript .cloneNode() doesn't clone event handlers

Andrew Harcourt Fri 28 Nov 2008

Here's one that will one day bite you. Consider this code:

        <script type="text/javascript">

            var OnLoad = function() {
                document.getElementById("cloneme").onclick = function() { alert("onload"); };

                // clone the hyperlink and append the copy to the document
                var clone = document.getElementById("cloneme").cloneNode(true);
                document.body.appendChild(clone);
            };

            window.attachEvent("onload", OnLoad);
        </script>
        <a id="cloneme" href="javascript:alert('href');">Test Hyperlink</a><br />
        <br />

If you drop it into a local .html file and point a browser at it, both links will look pretty much identical. There is a catch, however: although one hyperlink is a direct clone of the other, they're not identical.

If you click the first one, you'll see the order in which the events should fire represented by two alert boxes, the first one shouting "onload" and the second "href".

If you click the second one, you'll only see the "href" message.

Why is this?

The key point to remember is that the DOM object is being cloned from its textual representation. In other words, .cloneNode() does not do a deep copy; rather, it effectively just creates a new node based on the .outerHTML property of the old node. Notably, that does not include any event handlers that have been attached programmatically.
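If you need the clone to behave identically, re-attach the handler yourself after cloning. Here's a minimal sketch of that pattern using a plain object as a stand-in for a DOM element (cloneNodeLike is a made-up helper mimicking what cloneNode copies; the property names are just illustrative):

```javascript
// Stand-in for cloneNode: copies the markup-level attributes (here, href)
// but NOT handlers that were attached programmatically as properties.
function cloneNodeLike(element) {
    return { href: element.href, onclick: null };
}

var original = {
    href: "javascript:alert('href');",
    onclick: function () { return "onload"; }   // attached programmatically
};

var copy = cloneNodeLike(original);
// copy.onclick is null at this point, so re-attach the handler explicitly
// if you want the clone to behave like the original:
copy.onclick = original.onclick;
```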

.NET Provider Model and Code Snippet

Andrew Harcourt Tue 25 Nov 2008

While I think the .NET provider model is useful as a means of introducing dependency inversion if you don't want a container (!!), it really irritates me that we have to create so many peripheral classes in order to use it.

For example, we need to create a strongly-typed collection class that contains them all (presumably a left-over from the .NET 1.x days where there were no generic types), we need a configuration section class just to support an addition to the (web|app).config file, we need the provider class itself (effectively a factory class) and we need the class(es) of which it provides instances. Oh, and the interface that our provider stuff actually provides.

Here's a code snippet (what's a code snippet?) for creating a .NET provider and all the associated paraphernalia. Unfortunately it dumps all the classes into one .cs file, but AFAIK there's no way to get a single snippet to create multiple files. You can (and should) do that yourself, though.

Handily, the snippet will also generate XML for you that can be copied/pasted directly into your (web|app).config file.

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Description>Code snippet for a .NET Provider implementation</Description>
      <Author>Andrew Harcourt</Author>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>providerName</ID>
          <ToolTip>The name of the provider (e.g. "Cache", "Licence").</ToolTip>
        </Literal>
        <Literal>
          <ID>interfaceName</ID>
          <ToolTip>The name of the interface that the provider will return (e.g. "ICache", "ILicence").</ToolTip>
        </Literal>
        <Literal>
          <ID>defaultProvider</ID>
          <ToolTip>The name of the default provider instance to use (e.g. "Web", "File"). The suffix "$providerName$Provider" will be added automatically.</ToolTip>
        </Literal>
        <Literal Editable="false">
          <ID>className</ID>
          <ToolTip>The type of the owning class.</ToolTip>
          <Function>ClassName()</Function>
        </Literal>
      </Declarations>
      <Code Language="csharp">
        <![CDATA[using System;
using System.Collections.Specialized;
using System.Configuration;
using System.Configuration.Provider;
using System.Diagnostics;
using System.Reflection;
using System.Web.Configuration;

#region $providerName$Provider

public abstract class $providerName$Provider : ProviderBase
{
    protected abstract string DefaultName { get; }
    protected abstract string DefaultDescription { get; }

    public abstract $interfaceName$ Get$providerName$();

    protected static void CheckForUnrecognizedAttributes(NameValueCollection config)
    {
        if (null == config)
            throw new ArgumentNullException("config");

        if (config.Count > 0)
        {
            string attr = config.GetKey(0);
            if (!string.IsNullOrEmpty(attr))
                throw new ProviderException("Unrecognized attribute: " + attr);
        }
    }

    protected string VerifyInitParams(NameValueCollection config, string name)
    {
        if (null == config)
            throw new ArgumentNullException("config");

        if (string.IsNullOrEmpty(name))
            name = DefaultName;

        if (string.IsNullOrEmpty(config["description"]))
            config.Add("description", DefaultDescription);

        return name;
    }
}

#endregion

#region $defaultProvider$$providerName$Provider

public class $defaultProvider$$providerName$Provider : $providerName$$end$Provider //TODO Implement abstract class "$providerName$$end$Provider"
{
    //TODO Add or merge the following into your (web|app).config file.
    /*
    <configSections>
        <section name="$providerName$ProviderService" type="FULL_NAMESPACE_HERE.$providerName$ProviderSection, ASSEMBLY_NAME_HERE" />
    </configSections>

    <$providerName$ProviderService defaultProvider="$defaultProvider$$providerName$Provider">
        <providers>
            <clear />
            <add name="$defaultProvider$$providerName$Provider" type="FULL_NAMESPACE_HERE.$defaultProvider$$providerName$Provider, ASSEMBLY_NAME_HERE" />
        </providers>
    </$providerName$ProviderService>
    */
}

#endregion

// The code below here is auto-generated and shouldn't need any manual
// editing unless you want to do interesting stuff. -andrewh 18/9/08

#region $providerName$ProviderSection

[Obfuscation(Feature = "renaming", Exclude = true, ApplyToMembers = false)]
public class $providerName$ProviderSection : ConfigurationSection
{
    [ConfigurationProperty("providers")]
    public ProviderSettingsCollection Providers
    {
        get { return (ProviderSettingsCollection)base["providers"]; }
    }

    [StringValidator(MinLength = 1)]
    [ConfigurationProperty("defaultProvider", DefaultValue = "$defaultProvider$$providerName$Provider")]
    public string DefaultProvider
    {
        get { return (string)base["defaultProvider"]; }
        set { base["defaultProvider"] = value; }
    }
}

#endregion
#region $providerName$ProviderService

public class $providerName$ProviderService
{
    private static $interfaceName$ _instance;
    private static $providerName$Provider _provider;
    private static $providerName$ProviderCollection _providers;
    private static object _lock = new object();

    public static $providerName$Provider Provider
    {
        get
        {
            LoadProviders();
            return _provider;
        }
    }

    public static $providerName$ProviderCollection Providers
    {
        get
        {
            LoadProviders();
            return _providers;
        }
    }

    public static $interfaceName$ $providerName$
    {
        get
        {
            LoadProviders();
            if (_instance == null)
                _instance = LoadInstance();

            return _instance;
        }
    }

    private static $interfaceName$ LoadInstance()
    {
        $interfaceName$ instance = _provider.Get$providerName$();

        // if the default provider fails, try the others
        if (instance == null)
        {
            foreach ($providerName$Provider p in _providers)
            {
                if (p != _provider) // don't retry the default one
                {
                    instance = p.Get$providerName$();
                    if (instance != null) // success?
                    {
                        _provider = p;
                        break;
                    }
                }
            }
        }

        Debug.Assert(instance != null);
        return instance;
    }

    private static void LoadProviders()
    {
        if (null == _provider)
        {
            lock (_lock)
            {
                // do this again to make sure _provider is still null
                if (null == _provider)
                {
                    $providerName$ProviderSection section = LoadAndVerifyProviderSection();
                    BuildProviderCollection(section);
                }
            }
        }
    }

    private static void BuildProviderCollection($providerName$ProviderSection section)
    {
        _providers = new $providerName$ProviderCollection();
        ProvidersHelper.InstantiateProviders(section.Providers, _providers, typeof($providerName$Provider));

        if (_providers.Count == 0)
            throw new ProviderException("No providers instantiated");

        _provider = _providers[section.DefaultProvider];
        if (null == _provider)
            throw new ProviderException("Unable to load provider");
    }

    private static $providerName$ProviderSection LoadAndVerifyProviderSection()
    {
        // fetch the section from the application's configuration file
        $providerName$ProviderSection section = ($providerName$ProviderSection)ConfigurationManager.GetSection("$providerName$ProviderService");
        if (section == null)
            throw new ProviderException("$providerName$ProviderService section missing from (web|app).config");

        return section;
    }
}

#endregion

#region $providerName$ProviderCollection

public class $providerName$ProviderCollection : ProviderCollection
{
    public new $providerName$Provider this[string name]
    {
        get { return ($providerName$Provider)base[name]; }
    }

    public override void Add(ProviderBase provider)
    {
        if (null == provider)
            throw new ArgumentNullException("provider");

        if (!(provider is $providerName$Provider))
            throw new ArgumentException("Invalid provider type", "provider");

        base.Add(provider);
    }
}

#endregion]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>


I'm in love with jQuery

Andrew Harcourt Tue 25 Nov 2008

What more need I say? jQuery is making my life bearable.

From Simon Willison's blog:


Microsoft JScript compilation error: 'return' statement outside of function

Andrew Harcourt Tue 25 Nov 2008

Internet Explorer runs script evaluated using the JavaScript eval() function in the global scope.

So what?

Well, if you're doing something unorthodox, as I'm being obliged to do right now, especially involving third-party controls (over which I have no control), you're eventually going to run into the error above when you try to take a string containing JavaScript code returned to you by something else, execute it and get hold of the return value.

Rather than using

var result = eval(someStringContainingSomeJavaScript);

try using

var f = new Function(someStringContainingSomeJavaScript);
var result = f();

or just

var result = new Function(someStringContainingSomeJavaScript)();
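To see why the Function wrapper works where eval() doesn't, consider a code string that ends in a return statement (the string below is a made-up stand-in for whatever your third-party control hands back):

```javascript
// This string is only legal as a function *body* because of the "return":
var someStringContainingSomeJavaScript = "var x = 2 + 2; return x * 10;";

// eval(someStringContainingSomeJavaScript) would fail in IE's global scope
// with "'return' statement outside of function". Wrapping the string in a
// new Function gives the return statement a function body to live in:
var result = new Function(someStringContainingSomeJavaScript)();
// result is 40
```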

IE8 JavaScript Profiling

Andrew Harcourt Fri 21 Nov 2008

If you haven't played with it yet, IE8's JS profiling is awesome. Happy, happy, joy, joy and all that.

Just don't look at the actual execution times below, or you'll cry (as I am :'( ).


Code Analysis Rule for Parameter Checking

Andrew Harcourt Wed 12 Nov 2008

I know, I know. Everyone's a gun coder, and nobody ever forgets to check the inputs to their public methods - in the same way as no coder ever makes a mistake, right? Which suggests that all bugs are deliberate...

If you can get away with it, it's worth considering aspect injection for common argument checks, but that's a topic for another day.

Anyway, to the point. Below is a code analysis rule to encourage people to check all parameters to their public methods.

It's not the complete rule; you'll need your own BaseRule class and XML rule definition file, but you can find examples of those elsewhere.

internal class EnforceArgumentChecking : BaseRule
{
    public EnforceArgumentChecking() : base(typeof(EnforceArgumentChecking).Name) { }

    public override Microsoft.FxCop.Sdk.TargetVisibilities TargetVisibility
    {
        get { return TargetVisibilities.All; }
    }

    public override ProblemCollection Check(Member member)
    {
        ProblemCollection problems = new ProblemCollection();

        Method method = member as Method;
        if (ShouldCheck(method))
        {
            Dictionary<string, int> parameterExceptions = GetParameterExceptionCounts(method);

            // require that each parameter have at least one /Argument.*Exception/ associated with it
            foreach (string parameterName in GetParametersToCheck(method))
            {
                // if there's no count for this parameter name, or the count's less than one, we have a problem.
                if ((!parameterExceptions.ContainsKey(parameterName)) || ((parameterExceptions[parameterName] < 1)))
                {
                    Resolution resolution = GetNamedResolution("ArgumentNotChecked", parameterName, method.Name.ToString());
                    problems.Add(new Problem(resolution));
                }
            }
        }

        return problems;
    }

    /// <summary>
    /// Gets a list of the parameter names to check. This will return all parameter names except those
    /// that have a MyCompany.Attributes.SuppressParameterCheck entry for that individual parameter.
    /// </summary>
    /// <param name="method">The method whose parameter collection to scan.</param>
    private IEnumerable<string> GetParametersToCheck(Method method)
    {
        List<string> parametersToIgnore = new List<string>();

        foreach (AttributeNode attr in method.Attributes)
        {
            if (attr.Type.FullName.Equals("MyCompany.Attributes.SuppressParameterCheck"))
            {
                Expression paramName = attr.GetPositionalArgument(0);
                Expression justification = attr.GetPositionalArgument(1);

                if (!string.IsNullOrEmpty(justification.ToString()))
                {
                    parametersToIgnore.Add(paramName.ToString());
                }
            }
        }

        foreach (Parameter p in method.Parameters)
        {
            string paramName = p.Name.ToString();

            if (parametersToIgnore.Contains(paramName))
            {
                continue;
            }

            yield return paramName;
        }
    }

    /// <summary>
    /// Decides whether we should apply this rule to the given method.
    /// </summary>
    /// <param name="method">The method.</param>
    /// <returns>False if the method is null, non-public, an interface or abstract method, a public property setter; true otherwise.</returns>
    private static bool ShouldCheck(Method method)
    {
        if (method == null) { return false; }

        if (!method.IsPublic) { return false; }

        // this will catch methods that are defined as either abstract or interface methods
        if (method.IsAbstract) { return false; }

        if (method.IsAccessor) { return false; }

        // we don't check operators - they all generally call the AreEqual, Add, Append or other named methods anyway - and they're almost
        // always overloads.
        if (method.Name.ToString().StartsWith("op_", StringComparison.Ordinal)) { return false; }

        return true;
    }

    /// <summary>
    /// Looks for the creation of instances of /Argument.*Exception/ and counts them for each parameter on the method.
    /// </summary>
    /// <returns>A dictionary mapping the parameter name to the number of /Argument.*Exception/ thrown against that parameter.</returns>
    private static Dictionary<string, int> GetParameterExceptionCounts(Method method)
    {
        Dictionary<string, int> exceptionsThrownOnParameters = new Dictionary<string, int>();

        // count the exceptions thrown on each parameter.
        for (int i = 2; i < method.Instructions.Count; i++) // start from 2 because it's pretty much impossible to throw a useful exception before this.
        {
            Instruction instruction = method.Instructions[i];

            // are we creating a new Argument*Exception object?
            if (instruction.OpCode == OpCode.Newobj)
            {
                InstanceInitializer initializer = (InstanceInitializer)instruction.Value;
                if (Regex.IsMatch(initializer.FullName, "Argument.*Exception"))
                {
                    for (int paramIdx = 0; paramIdx < initializer.Parameters.Count; paramIdx++)
                    {
                        Parameter p = initializer.Parameters[paramIdx];
                        if (p.Name.ToString().Equals("paramName"))
                        {
                            int paramOffset = initializer.Parameters.Count - paramIdx;

                            Instruction loadStringInstruction = method.Instructions[i - paramOffset];
                            if (loadStringInstruction.OpCode == OpCode.Ldstr)
                            {
                                string parameterValue = loadStringInstruction.Value.ToString();

                                if (!exceptionsThrownOnParameters.ContainsKey(parameterValue))
                                {
                                    exceptionsThrownOnParameters[parameterValue] = 1;
                                }
                            }
                        }
                    }
                }
            }
        }

        return exceptionsThrownOnParameters;
    }
}

Hand in hand with this rule is a code attribute that allows the programmer to suppress warnings on individual method parameters, rather than just suppressing the entire rule for a particular method:

[AttributeUsage(AttributeTargets.Method, Inherited = false, AllowMultiple = true)]
public sealed class SuppressParameterCheck : Attribute
{
    private readonly string _parameterName;
    private readonly string _justification;

    public SuppressParameterCheck(string parameterName, string justification)
    {
        if ((string.IsNullOrEmpty(justification)) || (justification.Length < 10))
        {
            throw new ArgumentNullException("justification", "If you're going to suppress a parameter check, you should provide a reason.");
        }

        _parameterName = parameterName;
        _justification = justification;
    }

    public string ParameterName
    {
        get { return _parameterName; }
    }

    public string Justification
    {
        get { return _justification; }
    }
}

And an example of how to use the SuppressParameterCheck attribute:

public void Insert(string key, object value)
{
    if (string.IsNullOrEmpty(key)) throw new ArgumentNullException("key");
    if (value == null) throw new ArgumentNullException("value");

    // Accept the object, but don't cache it.
}

[SuppressParameterCheck("key", "This will be checked in a called method.")]
[SuppressParameterCheck("value", "This will be checked in a called method.")]
public void Insert(string key, object value, CacheExpireType expireType)
{
    Insert(key, value);    // Accept the object, but don't cache it.
}

Of course, it goes without saying that you're compiling with warnings == errors, right? :)

Way to End an Interview...

Andrew Harcourt Fri 7 Nov 2008

Todd's quote for the day:

"I tried to explain to him why these things were important, but then I gave up, and just said, 'You should probably go now.'"

Not the way you'd want your interview to end...

Windows Mobile Error 0x8503001c

Andrew Harcourt Thu 9 Oct 2008

OK, the entire point of this blog post is to save some other poor sod from the pain that I've just been through.

If you have a Windows Mobile device and get synchronization error 0x8503001c, your life is close to over. Trust me.

You can delete and re-create the sync partnerships between your PC and your Windows Mobile device as many times as you want, with whatever permutations of synchronization settings you choose, but it's not likely to help. Try it anyway, but don't waste too much time on it.

What you're going to do, at some point after you've realized that it's all hopeless, is reset everything to factory defaults and start over.

The first tool you'll want is Dotfred's PIM Backup. Get it and back your device up (NOT to your device memory!).

Reset your device to factory defaults. On my Dopod you hold both of the multi-function buttons (the "-" buttons) while poking the reset button with your stylus. Press the Send button when it prompts you to confirm. Yours is probably different.

Re-create the sync relationship between your phone and your PC.

Re-sync everything.

Restore your backup.


The Value of Check-In Policies

Andrew Harcourt Fri 29 Aug 2008

See? Now isn't everyone glad that we have check-in policies that search for words like this? :)

Don't go overboard with them!

public property != public variable

Andrew Harcourt Wed 27 Aug 2008

Here's a particularly nasty gotcha that new .NET developers should be aware of. It should *go* without saying, but nonetheless it appears that it *does* need to be said. Hmph.

The sordid details are not included here to preserve the dignity of the guilty parties, but basically it boils down to: a property getter/setter is NOT the same as a public instance variable.

In other words, you are never guaranteed that the value you put in will be the value you get back. Consider the following:

this.Foo = "Hello, world!";
Debug.Assert(this.Foo.Equals("Hello, world!"));

If I were evil, my definition of Foo could be as below:

private string _foo;

public string Foo
{
    get
    {
        return _foo;
    }
    set
    {
        Trace.WriteLine("The caller asked me to store '{0}', but I'm going to drop it on the floor instead. <snigger />".FormatWith(value));
    }
}

As I said, this is an elementary distinction (and a trivial example) but do not not not assume that just because you've asked a setter to store a value that you'll get the same value back from the getter. You should, but "should" is the most over-used word in this industry.
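For what it's worth, the same trap isn't unique to .NET. Here's a contrived JavaScript equivalent (purely illustrative) of the evil setter above:

```javascript
var evil = {};
Object.defineProperty(evil, "foo", {
    get: function () { return this._foo; },   // _foo is never assigned...
    set: function (value) {
        // ...because the setter drops the caller's value on the floor.
    }
});

evil.foo = "Hello, world!";
// evil.foo comes back as undefined, not "Hello, world!"
```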

JavaScript Code Re-Use in Microsoft CRM

Andrew Harcourt Fri 25 Jul 2008

Microsoft's CRM tool offers some pretty powerful JavaScript event hooks. One thing it doesn't appear to offer, however, is a way to import a library of JS functions and re-use them across different event handlers.

For example, if one wanted to display a "Hello, world!" message whenever several different attributes were changed, the conventional approach would be to embed the call to alert() in each of the event handlers. Obviously, for such a simple example, this isn't such a big deal, but for more sophisticated logic it becomes unwieldy very, very rapidly.

One common approach is to use externally-referenced script files. Great, but imagine the horror when you suddenly discover that your system administrator has been religiously backing up your CRM server for the last six years, but hasn't backed up the web server from which you were serving your scripts... We still have the problem of how to reference them, too.

Variable declarations in JavaScript (evilly) default to global. What you can do to exploit this, however, is to declare a global function pointer from within an OnLoad event handler as follows:

// This is the OnLoad event handler provided by CRM
function OnLoad() {
    // This is the function that we want to make
    // available globally. Note the lack of a
    // "var" declaration.
    helloWorldFunction = function() {
        alert("Hello, world!");
    };
}

Then, in your event handler for other controls on the page, you can re-use that global variable:

// This is the OnChange event handler provided by CRM
function OnChange() {
    // ... and here's the one we prepared earlier.
    helloWorldFunction();
}

... and Bob's your father's creepy brother, you have code reuse with no external dependencies.

Writing Good Unit Tests

Andrew Harcourt Fri 4 Jul 2008

Writing Unit Tests

  • Why do we write unit tests?

    • Improve code quality

    • Fewer reported defects

    • Make checking code faster

    • Tell us when we've broken something

    • Tell us when our work is done

    • Allow others to check our code

    • Encourage modular design

    • Keep behaviour constant during refactoring

    • Functions as a spec (think TDD)

  • The law of diminishing returns most definitely applies here

    • Testing everything is infeasible. Don't be unrealistic.

    • 70% code coverage is actually pretty decent for most codebases.

    • First, test the common stuff.

    • Next, test the common exception-case stuff.

    • Then test the critical stuff.

    • Add other tests as appropriate.

  • When to write a unit test

    • First :)

    • Use a unit test to provide a framework for writing your code

    • If you find yourself running up an entire application more than once or twice to test a particular method you've written, wrap it in a unit test and use that test to invoke it directly

    • ^R ^T is your friend

    • When someone comes to you with a bug report, write a test to reproduce the bug.

Key Principles for Unit Tests

  • Each and every test must be able to be run in isolation

    • Tests should set the environment up for themselves and clean up afterwards

      • Use your ClassInitialize, ClassCleanup, TestInitialize and TestCleanup attributes if you're in MSTest-land, and the equivalents for NUnit, XUnit etc.

    • Tests should never rely on being executed in any particular order (that's part of the meaning of "unit")

    • Tests should not rely overmuch on their environment

      • Don't depend on files' being anywhere

      • Don't hard-code paths. This will bite you.

    • If a class depends on another class that depends on another class that you can't easily instantiate in your unit test, this suggests that your classes need refactoring. Writing tests should be easy. If your classes make it hard, fix your classes first.

  • Tests should be cheap to write

    • Don't worry about exception-handling - if an unexpected exception is thrown, the test fails. Don't bother catching it and manually asserting failure.

    • Be as explicit as you can

      • Don't allow for variations in your output unless you absolutely have to.

      • If there are going to be different outputs, ideally there should be different tests

  • Tests should be numerous and cheap to maintain

    • Each test should test one (perhaps two or three, but generally just one) behaviour

    • It's much better to have lots of small tests that check individual functionality rather than fewer, complex tests that test many things.

    • When a test breaks, we want to know exactly where the problem is, not just that there's a problem somewhere in a call stack seven classes deep.

  • Tests should be disposable

    • When the code it tests is gone, the test should be dropped on the floor.

    • If it's a simple, obvious test, it will be simple and obvious to identify when this should happen.

  • Tests need not be efficient

Useful Stuff

  • Private Accessors

    • These allow you to call private methods and access private variables from another class

    • This deliberately breaks OO principles

    • Use it for testing implementation-specific stuff, but depend on concrete types when you do.


  • ASPX page testing

    • You can automate all sorts of stuff with respect to ASPX pages

    • Button clicks

    • Form inputs


    • This does not replace regression test automation (e.g. Selenium, Mercury et al), but should be used by individual developers when writing new ASPX pages and ASCX controls.

    • Don't run the application and click stuff manually

    • Write a unit test and tell it to click stuff automatically

  • Testing web service methods

    • The MSTest framework will get confused if you try to actually invoke things via HTTP.

    • It's better to just call the web endpoints directly in-process.

  • Using the unit testing framework for integration testing

    • Have a couple of common-case "unit" tests that actually represent an end-to-end use case of your application.

Note for young players: if you get an invalid cast exception such as...

Test method Test.Zap.CubeModel.DigitalSearchTreeTest.TestInsert threw exception: System.InvalidCastException: Unable to cast object of type 'Node`1[System.Char,System.Char]' to type 'Node`1[System.Char,System.Char]'.

...it may be that your type is a nested class or similar and does not have public visibility.

Code Snippet for WPF Routed Event

Andrew Harcourt Fri 13 Jun 2008

It's useful for me. Your mileage may vary. I wouldn't mind knowing if it's useful for anyone else, though :)

<?xml version="1.0" encoding="utf-8" ?>
<CodeSnippets xmlns="http://schemas.microsoft.com/VisualStudio/2005/CodeSnippet">
  <CodeSnippet Format="1.0.0">
    <Header>
      <Title>WPF Routed Event</Title>
      <Description>Code snippet for a WPF Routed Event</Description>
      <Author>Andrew Harcourt</Author>
    </Header>
    <Snippet>
      <Declarations>
        <Literal>
          <ID>eventName</ID>
          <ToolTip>The name of the routed event (should *end* in ...Event).</ToolTip>
          <Default>My</Default>
        </Literal>
        <Literal Editable="false">
          <ID>className</ID>
          <ToolTip>The type of the owning class.</ToolTip>
          <Function>ClassName()</Function>
          <Default>ClassName</Default>
        </Literal>
      </Declarations>
      <Code Language="csharp">
        <![CDATA[#region $eventName$ Routed Event

        public static readonly RoutedEvent $eventName$Event = EventManager.RegisterRoutedEvent(
            "$eventName$", RoutingStrategy.Bubble, typeof(RoutedEventHandler), typeof($className$));

        public event RoutedEventHandler $eventName$
        {
            add { AddHandler($eventName$Event, value); }
            remove { RemoveHandler($eventName$Event, value); }
        }

        /// <summary>
        /// Invoke this method when you wish to raise a(n) $eventName$ event.
        /// </summary>
        private void Raise$eventName$Event()
        {
            RoutedEventArgs newEventArgs = new RoutedEventArgs($className$.$eventName$Event);
            RaiseEvent(newEventArgs);
        }

        #endregion]]>
      </Code>
    </Snippet>
  </CodeSnippet>
</CodeSnippets>


Windows Communication Foundation Introduction

Andrew Harcourt Thu 1 May 2008

Here's a very (very!) quick WCF overview I prepared the other day for the team at Zap. It's intended as a soldier's five on the topic; no more and no less.

Service Contracts and Operation Contracts

This is what your WCF service promises faithfully to do for its callers.


Data Contracts

These are the data types that your WCF service expects its callers to understand. Thankfully, it will happily explain these data types to its callers.


Events (... or "Duplex Contracts")

Out of scope for this presentation, but there are decent explanations of duplex contracts around if you go looking.

Hosting Your WCF Service


Note the endpoint address:


Adding a service reference to your project

The metadata exchange address is the equivalent of the old Web Service Definition Language (WSDL) address.


Calling the WCF Service

Call it just as you would your local methods.


It's that easy.

Purely for edification, this is what some of the generated code looks like:


“The trust relationship between the primary domain and the trusted domain failed.”

Andrew Harcourt Tue 25 Mar 2008

This error will occur for many reasons. If you've arrived at this blog, however, the odds are that you're searching for something .NET-related, possibly even a provider-related issue.

A .NET provider specified in a web.config file has its status set to enabled=false by default. Don't ask me why.
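
For example, with a custom role provider the fix is to enable the role manager explicitly in web.config. A sketch; the provider name and type here are placeholders for your own:

```xml
<system.web>
  <!-- enabled defaults to false; turn it on explicitly. -->
  <roleManager enabled="true" defaultProvider="MyCustomRoleProvider">
    <providers>
      <clear />
      <add name="MyCustomRoleProvider"
           type="MyApp.Security.MyCustomRoleProvider, MyApp" />
    </providers>
  </roleManager>
</system.web>
```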

This problem, when using a custom role provider, might manifest as the above error. Very frustrating, as none of the Google results I've seen mention this particular cause of the error, let alone how to actually fix it.

Using DBML/LINQ to Generate WCF DataContracts

Andrew Harcourt Mon 24 Mar 2008

Yes, you can actually use the DBML editor to generate classes tagged with the WCF DataContract attribute: set the designer's Serialization Mode property to Unidirectional.

.NET 3.5 rocks.

WPF Context Menu Doesn't Display on First Load

Andrew Harcourt Thu 14 Feb 2008

The problem:

When using WPF OnContextMenuOpening, ContextMenu doesn't display on first load.

The reason:

The OnContextMenuOpening routed event is used for dynamically creating a ContextMenu object for a particular UIElement.

Each UIElement has a ContextMenu property which dictates what gets displayed when a user right-clicks on it. If the property is null, nothing will be displayed. If the property is not null, the context menu that it references will be displayed.

The catch? The ContextMenu property must not be null before the event handler first fires, or the menu won't load. This appears to be a WPF bug, but it's a pain either way.

The solution:

Create an empty ContextMenu object and assign it to each UIElement that's going to have any context menu displayed. In the OnContextMenuOpening event, either Clear() the existing context menu or just create a new one and assign the property to the object reference. Either will work.

Fetchmail Multidrop

Andrew Harcourt Mon 2 Apr 2001

Disclaimer: This is an OLD, OLD blog post.

I shudder to think how much this PERL code looks like it was written by a C programmer...

If you still find it useful, great. If you'd like to mess with the code, great too. If you're really annoyed with fetchmail and it doesn't already have this feature built in, why not change the fetchmail code yourself and submit it to the maintainers? :)

Why don't I? Because I wanted to learn PERL...

I recently had some trouble with fetchmail and multidrop mailboxes. fetchmail handles mail well when a local address is found in the To: field of an email, but badly when it has to extract address information from the other headers of an email. In particular, people on the BCC list of an email (and who never appear in "official" mail headers) are likely to never receive emails addressed to them when handled via fetchmail with a multidrop mailbox.

I was also looking for an excuse to learn PERL (08/2004 update: wow, this page really is old...).

qmail has a nice solution where it inserts a Delivered-To: line into the mail headers. For virtual hosts, it prepends the domain name to the email account it was delivered to. My domain is uglybugger.org, so when the MX host for uglybugger.org accepts mail for me, it dumps it into a single account. The headers it inserts look like this:

Return-Path: [email protected]
Delivered-To: [email protected]
Received: (cpmta 5066 invoked from network); 2 Apr 2001 21:59:52 -0700
Delivered-To: [email protected]
Received: (cpmta 5062 invoked from network); 2 Apr 2001 21:59:51 -0700

Read this from the bottom up. You'll see that the first Delivered-To: line reads [email protected] and the second Delivered-To: line reads [email protected].

fetchmail can be configured to read envelope addresses. These are the addresses (such as the Delivered-To: line) that mail servers include in mail headers to record which account the mail was delivered to. This is an important distinction from which address the mail was intended for. You can tell fetchmail to use these headers by specifying the following in your .fetchmailrc file:

envelope "Delivered-To:"
qvirtual ""

Obviously, please change the qvirtual line to read your domain :)

The problem here is that fetchmail will only read headers top-down, and it matches the first one it finds. This breaks the envelope/qvirtual delivery process completely as fetchmail is unable to ignore the first line it receives. So if the envelope/qvirtual settings do not solve your problem, read on.

What we need to do is specify an alternate delivery agent, so that we can handle our processing ourselves. You can do this by specifying an external Mail Delivery Agent for fetchmail. Use the mda keyword to point mail to a script that you can copy and paste from here:

-- /usr/sbin/fetchmail-inject --

#!/usr/bin/perl
# fetchmail-inject
# Andrew Harcourt, 1 May, 2001
# This script removes a nasty header that fetchmail can't handle
# from incoming mail and then passes the mail to sendmail to
# deliver locally.

# Write the mail to a temp file on disk and, as we do,
# parse it for the address lines.
local($outputName, $fromAddress, $toAddress, $cmd);

$outputName = "/tmp/message.".$$;
open(OUTFILE, "> ".$outputName)
        || die "could not open $outputName!";

while (<STDIN>) {
        if (/^Delivered-To: uglybugger\.org\%uglybugger\@uglybugger\.org/) {
                # Just ignore this line - it's ugly.
        }
        elsif (/^Delivered-To: uglybugger\.org\%(.*)\@uglybugger\.org/) {
                $toAddress = $1."\@uglybugger.org";
        }
        elsif (/^From: .*<(.*)>/) {
                $fromAddress = $1;
        }
        elsif (/^From: (\S+\@\S+)/) {
                $fromAddress = $1;
        }
        print OUTFILE $_;
}
close(OUTFILE);

# Check that we have our addresses correct.
if ($toAddress eq "") {
        $toAddress = "postmaster\@uglybugger.org";
}
if ($fromAddress eq "") {
        $fromAddress = "postmaster\@uglybugger.org";
}

# Now call sendmail to deliver the message.
$cmd = "/usr/sbin/sendmail -f $fromAddress $toAddress < $outputName";
system($cmd);
unlink($outputName);

Good luck :)

About me

My name is Andrew Harcourt.

I'm a software engineer and project rescue specialist. I'm a Principal Consultant and Market Tech Principal at ThoughtWorks, a co-founder at Stack Mechanics and in my spare time (ha!) I also run my own photography business, Ivory Digital.


I'm a solutions architect and software engineer with extensive experience in large-scale, high-load, geographically-distributed systems. I specialise in project rescue, governance and development methodologies.

My main areas of interest are domain-driven design, event sourcing, massively-scalable service architectures and cloud computing.

I'm a regular speaker and presenter at conferences and training events. My mother wrote COBOL on punch cards and I've been coding in one form or another since I was five years old.


Cyclist. Photographer. Ballroom dancer. Motorcyclist. I love my outdoor sports - and anyone who won't dance is chicken.