What’s in a name – Owner, Manager, or Leader?

My esteemed colleague @benjiportwin just wrote a parting post which talks about job titles, and how much they matter, if at all.

He opened with the Product Owner vs. Product Manager job title thing, which I’ve also been thinking about.

When I joined the NHS Choices team a few years back we had Product Leads who each looked after a specific area of the service. They did a great job of defining the changes needed for their particular products, but didn’t always interact directly on a day-to-day basis with the people building those products.

Changing titles to indicate change

We spent a couple of years changing this as we implemented agile methods across the programme. At the time I pushed for these roles to be called Product Owners, mainly because I wanted to force a distinction between the old and the new way, and that’s what the methodologies we were adopting (like Scrum) tended to call that role.

Shared ownership rules

I tend to associate the Product Owner role title with Scrum, and over time have gone off it a bit. Partly because I don’t like the idea of sticking with just one fixed methodology, and partly because it could imply one person having sole ownership of the product. I much prefer the idea of a team collectively owning the product that they build and run together.

Industry-standard

Instead I shifted towards the Product Manager job title. This seems to be much more of an industry-standard these days. If I see a Product Owner job ad I think “they do Scrum”, when I see Product Manager I think “they have Product teams”. Generalisations I know, but that’s what it conjures up in my mind.

Full circle

Most recently I’ve come back around to Product Lead. I like the idea of somebody leading the development of a product, rather than managing it. I think we all know the difference between a manager and a leader.

Managing a product could perhaps be read as holding it back, pruning it, keeping it in check (thanks to @st3v3nhunt for this). Whereas leading it talks of setting a vision, inspiring progress, and taking the product forward to exciting new places!

Does it matter?

I’ve thought about this mainly because I’ve been taking on a product role myself, but really, as Benji said in his post:

job titles are interchangeable and frankly unimportant, but what matters is the impact you make each and every day.

Good luck in NYC, Benji. See you on the sun deck!

Resources, or People?

Workplace language is interesting. We hear a lot of different jargon and cliches that we wouldn’t ordinarily encounter outside of the office. I think we know some of it is a bit silly and acknowledge it as such, but other workplace language is treated as totally normal.

One of the most common, but perhaps pernicious, examples of workplace language is the use of the word Resources, when we mean People. It really does seem to be workplace-only vernacular too – I’ve yet to hear anyone say “We need some more resources for 5-a-side tonight.”

I understand that it’s a fairly industry-standard term, but that doesn’t mean there aren’t issues with it. Lots has been written on this before – posts like this, and this – and there’s even a day dedicated to the cause.

 

oxforddictionaries.com defines resources as

A stock or supply of money, materials, staff, and other assets that can be drawn on by a person or organisation in order to function effectively

So technically speaking using the word resources to describe people isn’t wrong, and sometimes it makes sense to use the word resources to describe a particular ‘thing’. Resource management isn’t just about people – it could be about laptops, office space, or beanbags.

But 99% of the times I hear ‘resources’ mentioned at work, it’s being used to refer solely to people.

 

The problem is that the word resources seems to imply an interchangeable quantity –

“we need more resource”

“add some more resources”

“this resource is leaving, let’s get another one in”

From a management point of view this is great. It would be fantastic to be able to swap interchangeable resources in and out at will, in order to maintain performance.

 

But of course it doesn’t work like that. People have different skills, personalities, likes, dislikes, attitudes and so on. You cannot simply switch people in and out of a team, and expect things to continue at the same level or pace.

Because in knowledge work, value is generally delivered by teams, not individuals. A team is not just the sum of its parts – it’s the product of its interactions. The relationships between the people in the team determine the success of the team – it’s not just about adding up the raw skills of the individuals in it.

The word resource obfuscates this fact. It helps us to kid ourselves that we can work in this way – swapping people around. It hides the reality that, when you’re dealing with knowledge work, people are not always going to be completely interchangeable.

That’s not to say I’m against moving people between teams from time to time. This can keep things fresh, and helps to share knowledge. But I think I’d get better results from trying out a new winger in my regular line-up, than from swapping out the entire midfield.

 

I know a lot of people don’t like being referred to in this way, but I don’t think this is just about individuals being sensitive to being called a ‘resource’ – it’s about our cultural definition of how we view and treat our people, and about how we plan and manage work in a realistic way.

Having shared language is really important for building shared understanding, but next time you’re about to use the word Resources, maybe pause and think: would the word People explain the situation in a clearer, more helpful, and more realistic way?

Building consensus around product ideas

I’ve spent the last year or so working with teams in the early stages of Product Discovery – examining the needs of users, and forming ideas and prototypes of services we can deliver to meet those needs.

One thing I’ve seen is the tendency in those teams to have the same conversations about a particular topic or idea several times over. Sometimes in the office, sometimes over lunch, and sometimes in the pub after work (which is where a lot of good ideas form).

This isn’t necessarily a bad thing while we’re shaping our ideas and building consensus in small groups. But once we’ve found ourselves repeating, and agreeing, the same concepts a few times, it feels like it’s time to get things out of our heads and onto some paper, so that we can share the idea further, and build a broader consensus.

The worst case scenario, if we don’t do this, is that a small group forms an idea and assumes everyone else is thinking the same thing. In the early stages of delivering a new service for users, building shared understanding across the whole team is critical.

Audio recording

A nice idea we tried recently was to get a small group of us (three worked nicely) to record the conversation. Kudos to @evilstreak for taking inspiration from the work @mattsheret has done with us on blogging.

We found a quiet space and used a USB mic (that we use for remote meetings) and QuickTime to record our conversation for about 30 minutes.

We didn’t write anything down beforehand, but the conversation was fairly structured, as we’d already had similar versions of it before.

Recording a conversation

One side-effect of the recording was that we were a bit more conscious of what we were saying, and avoided any unnecessary rambling. Being aware of the recording also stopped us from talking over each other at all. This is something we sometimes suffer from, as everyone has so many ideas to share, people just want to get them out there.

After the recording we transcribed it verbatim into a document. By sharing the audio file, two people were able to work in parallel for 30 minutes to get the whole conversation transcribed.

Pro-tip – using VLC to play back the conversation at a slightly slower speed (about 80%) allowed the typist to keep pace with the conversation without having to pause and replay bits. If you’re a really fast typist maybe you won’t need this.

Listening again to the recording actually helped ideas to crystallise in people’s minds. You were listening more intently, rather than thinking about what to say next, as you might in conversation.

Getting feedback from the wider team

As it was just three of us that had the initial conversation, we needed to gather feedback from the wider team.

Initially, sharing the transcript was a useful way to share the ideas. Teammates reported that the transcript was a very natural way to read and understand the ideas.

We then began to structure it into a more coherent narrative. Using a tool that enables quick electronic collaboration (like Google Docs) is really handy here. Everyone can be in the same version of the document at the same time – teammates can add comments, responses can be seen instantly, and everyone can see updates as they’re being made.

Crits

We supplement this electronic collaboration with group crit sessions.

Anyone interested in providing feedback gathers around a table and we read through some ideas and have a time-boxed discussion, with the goal of gathering specific changes to make to the narrative.

In this case we quickly realised that we didn’t have consensus across our team. Lots of new ideas came from these sessions, and the original narrative gave us a way of framing the discussion.

A picture speaks quite a few words

Another (fairly obvious) way of clarifying thoughts and ideas was to do quick sketches. Drawing something and showing it around was a really quick way of prompting further conversation to build shared understanding.

A sketch about booking

It’s handy to draw something and then pass it to a colleague without explaining what it shows, then ask them to explain their understanding of it.

Building consensus in public

So we’ve built some consensus amongst ourselves, within our team. Next we need to build some consensus with the wider world outside of our team – with our stakeholders and with the public.

We need to be open with our ideas as early as possible for a couple of reasons –

  • All the services we’re delivering are created for, and paid for by, the public. We should be transparent about how we’re spending public money.
  • The health and care system is complex. There are many different organisations involved. Being open about our ideas and proposals early on is a good way of getting the message out to these organisations, and starting a conversation with them.

One way in which we share our ideas openly is by blogging.

The document and drawing described above were drafted into an initial blog post by @evilstreak and @paul_furley, the drawing was redrawn by @demotive, and the resulting post is here – Booking is critical for transforming healthcare

My first User Research

Note – I originally posted this here on the NHS Choices blog back in February. It was written as three separate posts about User Research, from the point of view of someone who hasn’t been involved in this kind of thing before.

 

Over the last fortnight I’ve been observing User Research with my team, and it has been quite an eye-opener.

We’re at the point in our Discovery phase where we’ve made a bunch of assumptions about our users and their needs, and gathered information around these assumptions from various sources – on and off-site analytics, existing literature and research, social media, our service-desk tickets, and on-site surveys.

Now it’s time for us to talk directly to some USERS*

* Not all of the people we interview are necessarily users of the current NHS Choices service. Some of them might be potential users too.

User Research like this isn’t new to us. NHS Choices has had a dedicated Research team since 2007, but it’s in the last year or so that we’ve really started to integrate the work that the researchers do more tightly into our delivery cycle. This is the first time we’ve involved the whole of the multi-disciplinary transformation team in observing and note-taking for the research sessions, doing the analysis and deciding on next steps within a couple of intensive research days.

Who do we interview?

For the two topics we’re focusing on right now – we’ve been talking to two distinct groups of people

  • Parents of children who’ve had Chickenpox in the last three months
  • People who sought a new Dentist in the last three months

We make sure we talk to a mixture of men and women from different socio-economic groups, of different ages, and with differing levels of internet skill.

We ask some quite detailed questions, so it’s good to get people who have had a relatively recent experience (hence the three month time-window) as the experiences they’re recalling will tend to be more accurate.

We use some dedicated participant recruitment agencies to source the specific people we want to interview. We supply a spec, like the parents described above, and they go and find a selection of those people. Obviously there’s a cost attached to this service, but the recruitment can be time-consuming, and it would be difficult to find a big enough cross-section of people ourselves.  Outsourcing this to an agency frees up our researchers to focus on the actual research itself.

The setup

We do some interviews in the participants’ own homes – interviewing people in their own environment gives us a much better sense of how people look for information and where this fits into their lives. Also we get to meet participants who would not want to go to a viewing facility.

We also do interviews in a dedicated research facility – these are the ones that the rest of the team and I have been observing.

We’ve used a couple of facilities so far, one in London, and SimpleUsability in Leeds – just a five minute walk from our Bridgewater Place office.

Our interviews have been one hour long. The participants sit with a researcher – who conducts the interview – and a note-taker in the interview room. The note-taker might be another researcher or other member of the team – we’ve had UX Architects and service desk analysts taking notes in our sessions.

With the participants, the researcher and the note-taker in the interview room, the rest of the team are behind a one-way mirror with the sound piped in, observing the whole show.

User Research Observation Room

And yes, with the one-way mirror, it fell to @seashaped and myself to make all the obligatory unfunny gags about being in a police interrogation scenario…

Interviewing

The interviews are based around a Topic Guide prepared beforehand by the researcher. This is based on input from our previous research, and includes specific subjects around which we want to learn more. The whole team feeds their ideas into the Topic Guide.

The interviews aren’t run strictly to the guide though – we’re talking to people about their lives, and the health of them and their families, so naturally the discussion can wander a little. But our researchers are great at steering the discussion such that we cover everything we need to in the interviews.

We decided not to put any prototypes in front of users in the first round of research. We’re trying to learn about users’ needs and their state of mind as they’re trying to fulfil those needs, so we didn’t want to bias them in any way by putting pre-formed ideas in front of them.

We did run a card-sorting exercise with users in the first round of Dental research – getting the participants to prioritise what would be most important to them when searching for a new dentist, by letting them sort cards.

We had a camera set up for the card-sorting exercise, so we could all see it clearly, without crowding around the mirror in the observation room.

Card Sorting

As the interview takes place, the note-taker is busy capturing all of the insights and information that come up. As the participant talks, the note-taker captures each individual piece of information or insight on a separate post-it note. This results in a lot of post-its – typically we’ve been getting through a standard pack of post-its per interview.

GDS have written in more detail about some good note-taking practices.

Lots of post-its

Sorting into themes

Once the interviews are over, we have to make sense of everything the users have told us. We have a whole load of insights – each one logged on an individual post-it note. We need to get from what the users have said, to some actionable themes, as quickly as possible, without producing heavyweight research reports. We use the affinity-sorting technique to help us do this.

This basically involves us sorting all of the post-its into themes. We’ve been having a stab at identifying themes first, and then sorting the post-its into those groups. As the sort takes place we’ll typically find that a theme needs to be split into two or more themes, or sometimes that a couple of existing themes are actually the same thing.

list of chickenpox themes

This isn’t the job of just the researcher and note taker who conducted the interviews. The whole team that’s been carrying out and observing the User Research takes part in this process, shifting post-its around on the wall until we feel we have some sensible groupings that represent the main themes that have come out of the interviews.

Dental Affinity Sorting

Chickenpox Affinity Sorting

Although we’re not presenting our research findings as big research reports or presentations, we are logging every insight electronically. After the sort, every insight gets logged in a spreadsheet with a code to represent the participant, the date and the theme under which the insight was grouped. We’re reviewing our approach to this, but the idea is that over time this forms our evidence base, and is a useful resource for looking back over past research, to find new insights.

Hypotheses

Once we have our themes we have to prioritise them and decide what to do with them next. At this early stage this usually means doing some more learning around some of the important themes. We’ve been forming Hypotheses from our Themes – I think this helps to highlight the fact that we’re at a learning stage, and we don’t know too much for sure, just yet.

We’ve been playing around with the format of these Hypotheses. As an example, one of the strong themes from our first round of User Research on Chickenpox was around visual identification. We expressed this as follows –

We believe that providing an improved method of visual identification of Chickenpox

for parents

will achieve an easier way for parents to successfully validate that their child has Chickenpox.

When testing this by showing a variety of visual and textual methods of identification to parents of children who’ve had chickenpox

we learned …

So we will

If you’re familiar with User Stories, you can see how this hypothesis would translate into that kind of format too. You could argue that all User Stories are Hypotheses really, until they’re built and tested in the wild.

Low-fidelity Prototypes

In order to test this hypothesis, we’re going to need some form of prototype to put in front of users. We’re working on a weekly cycle at present so we only have a few days before the next round of research. Speed of learning is more important at this stage, than how nice our prototypes look, so we’re just producing really low-fidelity prototypes and presenting them on paper.

For the visual identification hypothesis, here are some of the prototypes we’re presenting to users in our second round of research – you can see what I mean by low-fidelity, but this is just what we need in order to explore the concepts a bit further, and learn a bit more.

Chickenpox prototype 1

Chickenpox prototype 2

We’ll base some of the questioning in our second round of research around these prototypes, and capture what we learn in our hypothesis template.

Based on this learning from the second round of research, we’ll either capture some user needs, write some new hypotheses to test, create some further prototypes to test, or maybe a mixture of all three.

Side effects

One interesting side-effect of our research sessions that we noticed was that some users were unaware of some aspects of our existing service, and as @kev_c_murray pointed out, some users left the sessions with an increased knowledge of what is available to them.

With comments like “Yeah I’m definitely going to go and look that up on your site now.” – we’re actually driving a little bit of behaviour change through the research itself. Okay, so if this was our behaviour-change strategy we’d have to do another 7 million days of research to reach the whole UK adult population, but every little helps, right…

What have we learned about how we do User Research?

  • The one week cycle of doing two full days of Research, then sorting and prototyping, is hard work. In fact it probably isn’t sustainable in the way we’re doing it right now, and we’ll need to adapt as we move into an Alpha phase.
  • Do a proper sound check at the start of the day – in both facilities we’ve used we’ve had to adjust the mic configuration during or after the first interview.
  • Research facilities do good lunches.
  • The observers should make their own notes around specific insights and themes, but don’t have everyone duplicating the notes that the note-taker makes – you’ll just end up with an unmanageable mountain of post-its.

More please

Lean UX Book

We plan to do much more of this as we continue to transform the NHS Choices service. As we move into an Alpha phase, we’ll continue to test what we’re building with users on a regular basis – we’ll probably switch from a one-week cycle to testing every fortnight.

As someone from more of a Software Development background, I find it fascinating to be able to get even closer to users than I have before, and start to really understand the context and needs of those people who we’ll be building the service for.

If you’re interested in reading more around some of the ideas in this post, try Lean UX – it’s a quick read, and talks in more detail about integrating User Research into an agile delivery cycle.

Lego Flow Game

We run regular Delivery Methodology sessions for a mixture of Delivery Managers and other folk involved in running Delivery Teams. It’s the beginning of a Community of Practice around how we deliver.

One of the items that someone added to our list for discussion recently was about how we forecast effort, in order to predict delivery dates. Straight away I was thinking about how we shouldn’t necessarily be forecasting effort, as this doesn’t account for all of the time that things spend blocked, or just not being worked on.

Instead we should be trying to forecast the flow of work.

We’d been through a lot of this before, but we have a bunch of new people in the teams now, and it seemed like a good idea for a refresher. My colleague Chris Cheadle had spotted the Lego Flow Game, and we were both keen to put our Lego advent calendars to good use, so we decided to run this as an introduction to the different ways in which work can be batched and managed, and the effect that might then have on how the work flows.

Lego Advent Calendar

The Lego Flow Game was created by Karl Scotland and Sallyann Freudenberg, and you can read all of the details of how to run it on Karl’s page. It makes sense to look at how the game works before reading about how we got on.

We ran the game as described here, but Chris adapted Karl’s slides very slightly to reflect the roles and stages involved in our delivery stream, and he tweaked the analyst role so they were working from a prioritised ‘programme plan’.

Boxes of Lego kits

Round 1 – Waterfall

Maybe we’re just really bad at building Lego, but we had to extend the time slightly to deliver anything at all in this first round! Extending the deadline, to meet a fixed scope, anyone?

The reason we only got two items into test and beyond was that the wrong kits were selected during the ‘Analysis’ phase for three items. The time we spent planning and analysing these items was essentially wasted effort, as we didn’t deliver them.

The pressure of dealing with a whole batch of work at that early stage took its toll. This is probably a fairly accurate reflection of trying to do a big up-front analysis under lots of pressure, and then paying the price later for not getting everything right.

It was also noticeable that because of the nature of the ‘waterfall rules’, people working on the later stages of delivery were sat idle for the majority of the round – what a waste!

Our Cumulative Flow Diagram (CFD) for the Waterfall Round looked like this –

Waterfall CFD

You can see how we only delivered two items, and these weren’t delivered until 7:00 – no early feedback from the market in this round!

CFDs are a really useful tool for monitoring workflow and showing progress. I tend to use a full CFD to examine the flow of work through a team and for spotting bottlenecks, and a trimmed down CFD without the intermediate stages (essentially a burn-up chart) for demonstrating and forecasting progress with the team and stakeholders.

You can read more about CFDs, and see loads of examples here.

Round 2 – Time-boxed

We did three three-minute time-boxes during this round. Before we started the first time-box we estimated we’d complete three items. We only completed one – our estimation sucked!

In the second time-box we estimated we’d deliver two items and managed to deliver two, just!

Before the third time-box we discussed some improvements and estimated that we’d deliver three again. We delivered two items – almost three!

Team members were busier in this round, as items were passed through as they were ready to be worked on.

Timeboxed CFD

The CFD looks a bit funny as I think we still rejected items that were incorrectly analysed (although Karl’s rules say we could pass rejected work back for improvement).

The first items were delivered after 3:00 and you can see the regular delivery intervals at 6:00 and 9:00, typical of a time-boxed approach.

Round 3 – Flow

During the flow round, people retained their specialisms, but each team member was very quick to help out at other stages, in order to keep the work flowing as quickly as possible.

Initially, those working in the earlier stages took a little while to get used to the idea of not building up queues, but we soon got the hang of it.

The limiting of WIP to a single item in each stage forced us to swarm onto the tricky items. Everyone was busier – it ‘felt faster’.

We’ve had some success with this in our actual delivery teams – the idea of Developers helping out with testing, in order to keep queue sizes down – but I must admit it’s sometimes tricky to get an entire team into the mindset of working outside their specialisms, ‘for the good of the flow’.

Here’s the CFD –

Flow CFD

The total items delivered was 7, which blows away the other rounds.

You can see we were delivering items into production as early as 2:00 into the round. So not only did we deliver more in total, but we got products to market much earlier. This is so useful in real life as we can be getting early feedback, which helps us to build even better products and services.

The fastest cycle time for an individual item was 2:00

A caveat

Delivering faster in the final round could be partly down to learning and practice – I know I was getting more familiar with building some of the Lego kits.

With this in mind, it would be interesting to run the session with a group who haven’t done it before, but doing the rounds in reverse order. Or maybe have multiple groups doing the rounds in different orders.

A completed Lego kit

What else did we learn?

* Limiting WIP really does work. The challenge is to take that into a real setting where specialists are delivering real products.

* I’ve used other kanban simulation tools like the coin-flip game and GetKanban. This Lego Flow Game seemed to have enough complexity to make it realistic, but kept it simple enough to be able to focus on what we’re learning from the exercise.

* Identifying Lego pieces inside plastic tubs is harder than you’d think.

 

Overall a neat and fun exercise, to get the whole team thinking about how work flows, and how their work fits into the bigger picture of delivering a product.

Short rant on Cross Country trains and User Experience

I travelled a couple of times recently on Cross Country trains, and was reminded of some apparent UX fails in the design of their train carriages.

I shouldn’t single out Cross Country for blame on this, as I think these particular carriages made their debut in the early 2000s as Virgin trains.

They have some nice touches that the train I catch down to London is missing – window blinds, coat hooks and personal lights.

But there are three things I noticed that just don’t seem to have been thought through from the point of view of the user – in this case the passenger.

Electronic display of seat reservations

Unlike with paper reservation tickets stuck onto each seat, if you’re travelling without a reservation and trying to figure out where the unreserved seats are, you can’t do this easily from the platform.

Even when you’ve boarded the train you have to wait at each row while the seat reservations scroll across the tiny screen. Plus you’d have to be pretty tall for this to be anywhere near eye-level.

Okay, so the electronic display probably saves time and money for the train operator, but I think this is at the expense of the user experience. With paper reservation slips in each seat, you could see at a glance where the unreserved seats were.

And what happens when the electronic system fails? Musical chairs as everyone grabs a free seat, until the passenger with the reservation ticket comes along.

The ceiling

The ceiling of the carriage isn’t a straightforward concave curve. Instead it includes a convex curve, presumably because this is aesthetically pleasing.

train carriage interior

However it also means that the luggage space above the seats is smaller, and passengers can’t fit as much in. This seems like style over substance.

Open Sesame

The button to open and close the door between the carriage and the ‘vestibule’ is ON the door. When you approach the closed door this seems fine, as the button is right in front of you.

train carriage door

But when the open door starts to close, you’re trying to hit a moving target. Many times I’ve seen people queuing to get into a carriage (because those in front of them are peering at the electronic displays, or trying to cram their luggage into the undersized overhead luggage space) getting squashed by the doors because they can’t figure out where the button is.

Even R2D2 might’ve struggled to stop the garbage compactor if the inch-wide UI to do so had been a moving target…

This seems poor: you’re making the user interface for operating something as fundamental as a door inconsistent. Plus it’s really not great for people with poor motor skills.

To top it all off, the door-mounted button is accompanied by a sign warning you not to place your hands on the door.

I just thought of another one, which others have written about too

Electronic toilet doors

So you press a button to open the toilet door, you enter and press a button to close it, then you have to press another button to ensure it’s locked.

What a faff. I mean, it’s a door, why does it need to be this complicated? Sure, there are instructions in the loo to ensure you lock it before dropping your trousers, but should you need instructions to operate a door?

What happens when the LED for the locked indicator fails? How can you be confident the door is locked?

What happens when the power fails? Do all the toilet doors just open, revealing some poor passengers mid-poo? Or do they all just lock, leaving them stuck in a stinky toilet all the way to Aberdeen?

A friend walked in on a guy in one of those toilets once, although she suspected from the grin on his face that he’d deliberately left it unlocked… urgh.

Usability Testing?

I’d be really interested in what kind of usability testing was done on these types of train carriage when they were designed.

Maybe this comes across as a bit of a rant, but it seems that the basic user experience for passengers aboard these trains could’ve been improved by a few simple changes.

Image Credit
“Class 390 Interior” by Sunil060902 – Own work. Licensed under CC BY-SA 3.0 via Wikimedia Commons.

Stubbing API calls using stub.by

Typically, if I was developing an application that talks to an API, the TDD approach I’ve historically followed would end up with me using a mocking framework in my unit tests to stub a service component that talks to the API. I’d then include this service in my implementation code via dependency injection. A fairly common approach to writing tests and stubbing dependencies, I guess.

WebMock and Stubby

Colleagues of mine who build rails apps often use the WebMock gem to ‘catch’ API requests and return a specific known response – a different kind of stub.

In the spirit of @b_seven_e‘s DDDNorth 2014 talk at the weekend – of our ruby and .NET teams learning from each other – I found a .NET package called Stubby which does a similar thing to WebMock.

The source code and documentation can also be found on github – https://github.com/mrak/stubby4net

Getting Started

To try Stubby out, I’m building a simple ASP.NET MVC web app that will get its data from a RESTful API.

I began by adding the Stubby NuGet package

PM> Install-Package stubby

I began with a single ‘happy path’ test, which checks that – given the API returns some data – my web application returns some data.

[Test]
public void When_you_view_a_muppet_profile_then_you_see_the_muppet_name()
{
    var controller = new MuppetsController();

    var actionResult = controller.GetMuppet("gonzo");

    var viewModel = (MuppetViewModel)((ViewResult)actionResult).Model;

    Assert.That(viewModel.Name, Is.EqualTo("Gonzo"));
}

This seems straightforward, but where is the actual stub for the API I’m going to call…? This is set up via the following code in the test fixture –

private readonly Stubby _stubby = new Stubby(new Arguments
{
    Data = "../../endpoints.yaml"
});

[TestFixtureSetUp]
public void TestFixtureSetUp()
{
    _stubby.Start();
}

[TestFixtureTearDown]
public void TestFixtureTearDown()
{
    _stubby.Stop();
}

The YAML file which defines the actual behaviour of the stubbed API looks something like this –

- request:
    url: /muppets/gonzo/?$
  response:
    status: 200
    headers:
      content-type: application/json
    body: >
      {
        "name": "Gonzo",
        "gender": "Male",
        "firstAppearance": "1976"
      }

This is a basic request/response pair. In this case it matches any GET request to muppets/gonzo and returns a 200 response with the specified headers and body.

There is much more you can do in terms of defining requests and responses – all detailed in the Stubby documentation. You can also set up the request/response directly in code, but I went with the YAML file approach first.

When I run my test – Stubby starts running on port 8882 by default. I add some config to my test project to ensure that my implementation points at this host when making API calls, and then every API call that is made will be caught by Stubby. If I make a call to /muppets/gonzo then this will be matched against the YAML file and the response above is returned.

Now I have this failing test, so I can go and write some basic implementation code to make it pass. In my case I add some code to the controller which makes an API call, de-serialises the JSON returned into an object, and then maps this to a ViewModel which is returned with a View.
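
As a rough sketch of what that controller could look like – the MuppetDto, the ViewModel mapping and the ‘MuppetApiBaseUrl’ appSetting are names I’ve assumed for illustration (in the test project that setting just points at Stubby’s default host, http://localhost:8882), so treat this as one possible shape rather than the actual implementation –

using System.Configuration;
using System.Net.Http;
using System.Web.Mvc;
using Newtonsoft.Json;

public class MuppetsController : Controller
{
    // Assumed appSetting; in the test project this points at Stubby's default host,
    // e.g. <add key="MuppetApiBaseUrl" value="http://localhost:8882" />
    private static readonly string ApiBaseUrl =
        ConfigurationManager.AppSettings["MuppetApiBaseUrl"];

    public ActionResult GetMuppet(string name)
    {
        using (var client = new HttpClient())
        {
            // Call the (stubbed) API and read the JSON body
            var response = client.GetAsync(ApiBaseUrl + "/muppets/" + name).Result;
            var json = response.Content.ReadAsStringAsync().Result;

            // De-serialise into a simple DTO, then map it onto the ViewModel
            var muppet = JsonConvert.DeserializeObject<MuppetDto>(json);

            return View(new MuppetViewModel { Name = muppet.Name });
        }
    }
}

public class MuppetDto
{
    public string Name { get; set; }
    public string Gender { get; set; }
    public string FirstAppearance { get; set; }
}

public class MuppetViewModel
{
    public string Name { get; set; }
}

Because the base URL comes from config, the controller doesn’t know it’s talking to a stub rather than the real API.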

More Tests

Once I had this test passing I extended my API stub to include the scenarios where the API returns a 404 or a 500 status code.

- request:
    url: /muppets/bungle/?$
  response:
    status: 404

- request:
    url: /muppets/kermit/?$
  response:
    status: 500

This allowed me to explore how my application would respond if the API was unavailable, or if it was available but returned no resource. In this case I decided that I wanted my application to act in different ways in these two different scenarios.
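
Building on the controller sketch above, one way of expressing that difference might look like this – the HttpNotFoundResult for a missing resource and the ‘ApiError’ view for a failing API are my assumptions for the sketch, not necessarily the behaviour we settled on –

// Requires using System.Net; for HttpStatusCode
public ActionResult GetMuppet(string name)
{
    using (var client = new HttpClient())
    {
        var response = client.GetAsync(ApiBaseUrl + "/muppets/" + name).Result;

        // 404 from the API: the resource doesn't exist, so return a 'not found' result
        if (response.StatusCode == HttpStatusCode.NotFound)
            return new HttpNotFoundResult();

        // Any other failure (e.g. 500): the API is unhealthy, so show a friendly error view
        if (!response.IsSuccessStatusCode)
            return View("ApiError");

        var json = response.Content.ReadAsStringAsync().Result;
        var muppet = JsonConvert.DeserializeObject<MuppetDto>(json);

        return View(new MuppetViewModel { Name = muppet.Name });
    }
}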

Refactoring & Design

With these green tests, it now feels like the right time to refactor the implementation code.

I haven’t ended up with the service and repository components that I might normally end up with if I’d followed my old TDD approach of writing a test that stubbed the API component in code.

I can put these components in myself now, but it feels like I am a lot more free to exercise some design over my code at this point.
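
As a sketch of the kind of design I might introduce at this point – the IMuppetApi name and shape are purely illustrative, not a prescribed pattern –

// An illustrative seam: the controller depends on this abstraction, and the
// HttpClient/JSON plumbing lives in one place behind it (error handling elided)
public interface IMuppetApi
{
    MuppetDto GetMuppet(string name);
}

public class HttpMuppetApi : IMuppetApi
{
    private readonly string _baseUrl;

    public HttpMuppetApi(string baseUrl)
    {
        _baseUrl = baseUrl;
    }

    public MuppetDto GetMuppet(string name)
    {
        using (var client = new HttpClient())
        {
            var json = client.GetStringAsync(_baseUrl + "/muppets/" + name).Result;
            return JsonConvert.DeserializeObject<MuppetDto>(json);
        }
    }
}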

This feels like a good thing. I have a set of tests that stub the external dependency, and give me confidence that my application is working correctly. But the tests don’t influence the implementation in any way, nor do they mirror the implementation in the way that you sometimes get with my previous approach. The tests feel more loosely-coupled from my implementation.

This also feels a bit more like the approach outlined by @ICooper in his “TDD – Where did it all go wrong” video – stub the components which you don’t have control over, make the test pass in the simplest way possible, then introduce design at the refactoring stage – without adding any more new tests.

Testing Timeouts

Another interesting thing we can do with Stubby is to test what happens if the API times out. If I set up a request/response pair in my YAML file that looks like this

- request:
    url: /muppets/beaker/?$
  response:
    status: 200
    latency: 101000

this will ensure that the API responds after a delay of 1:41 – 1 second longer than the standard HttpClient timeout.

So by requesting this URL,  I can effectively test how my application reacts to API timeouts. Obviously I wouldn’t want to run lots of timeout tests all the time, but it could run as part of an overnight test run.
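
As a hedged example of what such a test could look like – it assumes the controller has been taught to catch the timeout and fall back to an ‘ApiError’ view (my assumption for the sketch), and uses an NUnit category to keep the slow test out of the normal run –

[Test]
[Category("Overnight")] // deliberately slow – it waits ~100s for the HttpClient timeout
public void When_the_api_times_out_then_you_see_the_error_view()
{
    var controller = new MuppetsController();

    // 'beaker' is stubbed with latency: 101000, so the underlying API call times out
    var actionResult = controller.GetMuppet("beaker");

    // Assumed behaviour: fall back to an error view rather than letting the exception escape
    Assert.That(((ViewResult)actionResult).ViewName, Is.EqualTo("ApiError"));
}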

This gets more useful if I’m creating an application which makes multiple API calls for each screen, and I want my application to fail more gracefully than just displaying an error page if an individual API error occurs.

Is this a worthwhile approach?

Well it feels pretty good, so after discussion with my colleagues, we’re going to try it out on some production work, and see how it goes… watch this space…

Failure IS an option

I wrote this post after coming off a daily ‘stand-up’ call where one Developer admitted he “didn’t know what he was doing” because he’s covering for one of our UI Developers while she’s off, and a DBA told us we weren’t seeing the data we expected that morning because he ran the wrong package by mistake.

It got me thinking about how it’s important to encourage an atmosphere where people aren’t afraid to talk about the mistakes they’ve made.

No-one admonished these people for saying these things. We respect someone for being open about the fact that they made a mistake, and then fixed it. We admire someone for actively stepping out of their comfort zone to work on something they’re not used to – it broadens their skills and reduces the number of single points of failure in our team – which in turn helps to keep our work flowing.

Whilst failure can be bad – it is also a chance to learn, and improve. It’s okay to make mistakes, and to admit when you don’t know the best way to do something, as long as you learn from it!

photo credit: ncc_badiey cc

10% time hall of fame

I wrote a post last year about introducing 10% time in our workplace, and the twitter feedback idea that I started working on.

A few months ago we had our first entry into the 10% time hall of fame. This is a coveted position reserved only for those ideas that have been developed and iterated to the point where they’ve been released into our main live service.

Read more about Michael Cheung’s 10% time project – Optimise Health A-Z for small viewports.

Hopefully this will be the first of many ideas to start out as 10% time projects, and make it all the way into our live service…

But why no technical stories?

In my current workplace we’ve been using User Stories in various guises for a while now. One of the things that frequently crops up is whether these Stories can, or should, be technical.

To start with it may be useful to remind ourselves of some of the aspects of a user story…

A User Story describes something that a user needs to do, and the reason they need to do this. They are always written from a user’s point of view, so that they represent some value to the user, rather than a technical deliverable.

They represent the who, what and why – an example might be –

As an expectant parent
I want to receive emails about parenting
So that I can read information about how to best care for my child

They are intentionally brief, so as to encourage further conversation, during which the needs of the user can be explored further, and potential solutions discussed. They are not intended to be detailed up-front specifications – the detail comes out of the conversations.

So who owns the User Stories?

User stories are designed to represent a user need. Most of the time these users are members of the public, but we don’t have our actual users in the office writing and prioritising stories, so we have a proxy for them instead – which we call the Product Lead (PL).

Part of the PL’s job is to represent what our users need – they use User Stories to capture these needs, ready for future discussion. So the PL owns the stories and their relative priority. If this is the case, then the PL needs to understand the stories, so that they can own them. If the backlog has technical stories in it, then it is difficult for them to prioritise these against other user needs.

For example, if the PL sees a story about enabling the public to search for GPs, and another story about reconfiguring a data access layer – they’re likely to prioritise the user-focussed story as they can see the tangible benefit. The technical work is totally valid, but it needs to be derived from a User Story – everything should start with a user need.

What if it’s an internal user?

As suggested above – most of the time the users we are capturing the needs of, are members of the public. Sometimes we use more specific personas in order to capture the needs of specific groups of people e.g. expectant parents

Other times, the users are our own internal users. We can still express their needs in terms of a user story –

As a data manager in the Data Workstream
I want to configure search results views
So that I can change the information displayed in accordance with DH policy

Here there is still an underlying need of the public, but the story is expressed from the point of view of the Data Manager. We could have just written down a technical story like –

Build a data configuration system for results views

but if we do this we have skipped a step. We have assumed that we know the single best solution straight away. Maybe there are other options for meeting the underlying need – and if we stop to think about those other options, maybe we will find one that is cheaper/better/faster too.

When should we do technical design?

Although we need to do some technical design up-front in order to set a general direction of travel, the detailed design work for a particular story should be done as close to actually implementing the story as possible.

In the past we have suffered from doing lots of detailed design of technical implementation well in advance of actually being ready to deliver that piece of work. Often by the time it came to deliver that piece of work our understanding had evolved and the up-front design was no longer valid.

Don’t try to do the technical design work when the story is first created. Wait until we are ready to deliver that story, and then look at the technical options available. By doing this work Just-In-Time we are much less likely to waste effort thinking about a solution that will never be delivered.

How do we track progress?

We track progress in terms of completed stories. If we keep those as User Stories then we are measuring our progress in terms of actual value delivered to our users.

If we break stories down into lots of technical stories then it may look like we are making lots of progress, and that we are very busy, but we could have delivered very little genuine value at the end of it all. If, for example we reported that –

“We’ve completed 90% of the business layer code”

that sounds very positive, but we could have delivered no actual working tested functionality for our users at this point. By keeping our stories user-focussed, our progress is also measured in terms of value delivered to users.

How do we get from User Stories to technical scope?

We’ve talked about how important it is to start with user needs, but ultimately we need to build something, so we have to get down to the technical detail at some point.

One way of ensuring that all scope maps back to an overall goal is to use a technique called Impact Mapping. We used this on a very technical project to ensure that all of the technical deliverables mapped back to an overall goal.

At the same time as deriving those initial stories, we’d usually be thinking about the overall technical approach. We’d look for answers to high-level questions around what technologies to use, and what approach to use. These wouldn’t be technical stories though – this would likely be documented in a lightweight high-level design document.

Story Decomposition and Technical Tasks

Once we’ve derived some initial user stories from our goal, we’d continue to break those stories down until we arrive at the technical scope.

User stories can be split into smaller stories but we always try to retain value to the user in each story, rather than making them technical.

For example, the story above about parenting emails might be split into smaller stories like –

As an expectant parent
I want to sign up for email notifications
So that I receive useful information about caring for my baby

 

As an expectant parent
I want my email address to be validated
So that it is clear when I have entered an invalid email address

 

As an expectant parent
I want to provide my first name when signing up
So that the emails I receive are personally addressed to me

Each of these stories is a smaller deliverable, but still makes sense from a user’s point of view.

Further to that, once we end up with nice small stories, we can create a list of technical tasks. Each story might contain the tasks needed to deliver that particular story. The tasks get down to the level of technical detail around what components and packages need to be altered, in order to deliver.

Ultimately – we will end up with technical pieces of work to do. The key is that all of these are derived from user needs.

* We don’t have to use User Stories for EVERYTHING

Okay, so we go on about User Stories a lot, but ultimately they’re just a tool for communication. A User Story represents some change we want to make in order to deliver some value. It’s a cue to go and have a further conversation about that piece of work, and that value.

If we can have these conversations, and deliver the value, without writing stories every time, then maybe that’s okay. The most important thing is for the people concerned to talk to each other frequently, deliver the work in really small chunks, and get feedback as often as possible.

User stories do really help us with this, but there might be occasions when they’re not the best tool…

Conclusions

Yes, we’re going to end up with stories that are technical in their implementation, but it’s important to not jump straight into that implementation. Think about our users, and their behaviours – and derive the stories from that.