DDD North 2

Went to Developer Developer Developer North 2 the other day. I’d not been to one of these before, but it was free, and local, so it seemed like a good option for a bunch of us from work to attend.

I went to the following sessions:

10 Practices that make me the Developer I am today – Garry Shutler

Nice talk to start the day. Nothing ground-breaking, but I was encouraged to be more of a Code Nazi, and reminded that I should really try to learn a new language next year…

The Good, The Bad and The Automated – Paul Stack

Brilliant talk around Continuous Delivery and DevOps. Some amazing stats around how the likes of Facebook and Amazon deploy up to ~1000 times a day…

We’ve done some good work ourselves with our build scripts, but we really need to work more closely with our Ops team to automate more of the environment set-up. We’ve also just started implementing feature switches, and I think this is something we must make more use of. If we set it up properly we’d all be able to develop in a single Main code branch. Apparently Facebook have their next six months of features all deployed to production, but initially turned off using feature switches.
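The idea is simple enough to sketch in a few lines. Something like this (a hypothetical Python sketch, not anything from the talk or our codebase) is all a basic switch needs – the feature ships to production dark, and only goes live when the flag is flipped:

```python
# A minimal feature-switch sketch (all names are hypothetical).
# Code for a feature is deployed, but stays dark until its flag is flipped.

FEATURE_SWITCHES = {
    "new_checkout": False,  # deployed to production, but not yet switched on
    "search_v2": True,      # switched on for everyone
}

def is_enabled(feature):
    """Unknown features default to off, so a missing flag fails safe."""
    return FEATURE_SWITCHES.get(feature, False)

def checkout_page():
    # Both code paths live in the single Main branch;
    # the switch decides at runtime which one runs.
    if is_enabled("new_checkout"):
        return "new checkout"
    return "old checkout"
```

A real system would read the flags from config or a database rather than a hard-coded dict, so they can be flipped without a redeploy – but the branching-by-flag-instead-of-by-source-control idea is the same.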

I need to read Jez Humble’s Continuous Delivery, and the DevOps Cookbook when it comes out.

BDD – Look Ma! No Frameworks – Gemma Cameron

A lot of familiar stuff about BDD, especially having just finished Gojko’s Specification by Example. Some good points around the importance of using conversations and examples to reach a shared understanding and a ubiquitous language. I like the idea of thinking of the spec as a reminder for yourself in six months’ time.

Whilst I agreed with Gemma’s point that learning the right techniques is more important than picking up shiny new tools, to say that all frameworks are bad seemed flawed – especially when the first line of the code sample was

using NUnit.Framework;
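To be fair, the underlying point – that the conversations and examples matter more than the tooling – does hold up without any framework at all. A hypothetical Given/When/Then sketch of my own (not Gemma’s actual sample), using nothing but plain asserts:

```python
# BDD with no framework: the spec is a well-named test whose body reads as
# Given / When / Then. (Hypothetical example; amounts are in pence.)

def apply_voucher(total_pence, voucher_code):
    """A 'TENPERCENT' voucher takes 10% off the basket total."""
    if voucher_code == "TENPERCENT":
        return total_pence * 9 // 10
    return total_pence

def test_customer_with_ten_percent_voucher_pays_ten_percent_less():
    # Given a basket totalling 100.00
    basket_total = 10000
    # When the customer applies the TENPERCENT voucher
    total_due = apply_voucher(basket_total, "TENPERCENT")
    # Then they pay 90.00
    assert total_due == 9000

test_customer_with_ten_percent_voucher_pays_ten_percent_less()
```

The test name and comments are the spec; a test runner is a convenience on top, not the thing that makes it BDD.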

JavaScript Sucks and It Doesn’t Matter – Rob Ashton

I think Rob wound a few people up, but I found him pretty amusing. I couldn’t help thinking throughout the talk that he could be Trevor’s evil twin.

Need to check out the Zombie.js and PhantomJS headless browsers for testing.

Event Driven Architectures – Ian Cooper

Good deep dive talk. Reminded me I find this kind of architectural work really interesting…


Overall a good day, and I even won a book in the raffle at the end – something about business metrics – there wasn’t much to choose from by the time I got up there…

Delivery, Delivery, Delivery

When I started this job it was towards the end of a big release. I witnessed a long and painful bug-fixing period, and got to thinking about what improvements could be put in place to make the next release smoother. It soon became apparent, though, that the releases all year had been late, and as a result a backlog of work had built up. What also became apparent was that all of this work was contractually required to be delivered by the end of 2010. My first full release was certainly going to be interesting, if not smooth…

According to the PMs all of the required work would fit into the time we had, but unfortunately the estimates this assertion was based on had all been provided by individuals who would not actually deliver the work, or by developers who had been forced to ‘estimate’ to a specific figure. In my mind such so-called estimates are pretty worthless: the whole point of estimating is to be able to plan well (in many environments it’s also to cost things up, but in our case the costs were already fixed by the overall contract). But more on estimation in a future post…

So we ended up in a situation where resource, time and scope were all effectively fixed – not ideal.

We mitigated this to a degree by ensuring we worked on the right things first. Although the overall scope was fixed, there are usually ‘nice-to-have’ features that the business can truly live without. The business owners weren’t used to having to prioritise in this way – we had to gain their trust, and explain that we weren’t planning to drop their features; rather, we needed to avoid a situation where, if the sh*t really did hit the fan, we’d be left with critical features unimplemented. This seemed to work okay, and we had more confidence that we were working on the right things in the right order. We also tightened the testing feedback loop by getting the testers to test everything in an earlier environment, which reduced the total cycle time to deliver bug-fixed requirements.

Even after those minor improvements, it was a tough release. The team worked a lot of overtime, something I hope to avoid in future. We worked late nights and we worked from home some weekends. When we worked in the office at the weekend we had to get portable heaters in as it was so cold that our fingers were seizing up, and when it really started snowing we booked people into hotels so they could carry on working instead of leaving early.

And we delivered. We got the release out on time, and we partied when it was all over. Would I want to do another release like that again – no way… However there was something positive about the team pulling together to beat the odds. It was a time when we worked hard and played hard together, and it’s still one of the releases that some of those involved talk about with a wry smile.

Introducing Iterations

New year, new start… We’ve just got a big release under our belts and it feels like there is now enough trust from senior management to start making a few changes to the way we do things.

So what first..?

One problem seems to be the feedback loop from the business. They come up with an idea, it spends a few weeks/months being spec’d up, and then we develop it. Finally it gets tested and eventually the business owner gets to see the ‘finished’ product…

This sounds okay in theory, but it doesn’t work in practice because things change along the way.

We need to tighten the feedback loop so the business are involved and engaged throughout the whole design, development, and testing process.

Another problem is that time is wasted in spec’ing up features that never get delivered because we then run out of time in the development phase.

Ultimately we need to move away from the long phase-based waterfall approach that makes the flawed assumption that we can get everything right and complete in one phase before moving onto the next.


I think the ideal solution to this will be to introduce a pull-based continuous-flow pipeline type of approach. We’d take one minimal marketable feature at a time and deliver it all the way through the pipeline from start to finish.

Although this type of Kanban approach says ‘start with what you do now’, I can see a problem: it’s going to require a lot of regular communication and engagement between different teams who work in different geographical locations, and the organisation isn’t currently used to this level and style of communication.


After some thought, I reckon it’s probably going to be better to try an iterative approach first.

We’ll try working in two week iterations; taking a small chunk of work at a time – maybe a couple of features – developing and testing them, and then finally demo’ing them to the business owners to get approval that we’ve done the right thing, and/or feedback that we need to change things.

By getting this early and regular feedback we should avoid the nasty surprise of finding we’ve delivered something incorrectly right at the end of a release when we don’t have the time to do anything about it. Ideally if something is going to fail, we want it to fail as early as possible! We can tackle high priority and high risk items in the earlier iterations, to drive out risk, and ensure we’re delivering the core requirements early on.

My reasoning for choosing iterations over continuous flow is that, because of our split site, it will be good to have specific markers in time where the different teams can come together to look at where we are, review progress, and then plan the next steps to take.

I actually hope that over time the iterations might naturally disappear and we’ll end up with the continuous flow system that will work even better in the long run. Until then I think that introducing the discipline required to make an iterative approach work will be a good thing…

Input Queues => More Output

One of the first ‘quick wins’ that we’ve implemented is to let the development team manage their own workflow. A bug tracking tool was in place when I arrived (a good thing as there were a LOT of bugs, but more on that later) but the process in place was for Project Managers to ‘manage’ these bugs by assigning up to three bugs at a time to any given developer.

Maybe it gave the PMs some sense of security that ‘all bugs are assigned, therefore we will fix them and deliver on time’ but I’d argue that this is a false sense of security.

There are a few other things wrong with this approach:

  • How can a developer be working on three things at once?
  • How does the PM know the best person to work on a particular bug?
  • Not all bugs are equal – one might be a tiny copy change, another could mean a fundamental problem in an important component.

We resolved this by introducing the concept of a prioritised input queue. Between us, the Dev Team Leads and the PMs would triage new bugs into the queue, constantly re-prioritising it so that we had collective control of which bugs got looked at next.

The developers then just have to pick the next bug off the list that makes sense for them to work on. We’d usually set some rule like ‘you have to pick from the top three bugs’, so that developers have some control and ownership over what they work on, while still ensuring they’re working on one of the most important things for the business.
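As a sketch (with made-up bug titles – ours lived on post-its, not in code), the whole mechanism is just one ordered list plus the pull-from-the-top rule:

```python
# A sketch of the prioritised input queue (bug titles are invented).
# Triage maintains one ordered list; developers pull from the top few items.

input_queue = [
    "BUG-17: payment fails for zero-value orders",  # highest priority
    "BUG-23: report totals off by one day",
    "BUG-31: typo on the confirmation page",
    "BUG-08: slow search on large accounts",
]

def pickable(queue, top_n=3):
    """The bugs a developer is currently allowed to pull."""
    return queue[:top_n]

def pick(queue, bug):
    """Pull a bug to work on; it must come from the top of the queue."""
    if bug not in pickable(queue):
        raise ValueError("pick from the top of the queue: " + bug)
    queue.remove(bug)
    return bug
```

Because the queue is re-prioritised continuously by triage rather than assigned out per person, nobody sits on a stale allocation, and the ‘who works on what’ decision is made at the last responsible moment by the person pulling the work.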

In our case the input queue was a physical one – a new column on the Kanban board that’s in place on our giant whiteboards. We’d add a new post-it to the input queue for each bug triaged into there.

This worked well as it reduced the amount of time wasted in context-switching every time a developer was ‘assigned’ a new bug, and each bug got to the relevant person more quickly and with less fuss. The hardest part was persuading the PMs to relinquish the perceived control that they had over the process.

By introducing the input queue, we were able to demonstrate that we’d be able to waste less time, and therefore fix more bugs, by managing our own workflow.