I’ve just spent two days with two Jeffs (Gothelf and Patton). They ran a great product management course, and I wanted to quickly get down some of my personal takeaway points while they’re fresh in my mind.
The term MVP has been wrecked
This rang true as one of my bugbears. I’ve always referred to the definition from Eric Ries’ 2011 book The Lean Startup:
The smallest thing you can make or do, to test a hypothesis
But I hadn’t realised the phrase was originally coined in 2001, by Frank Robinson, as:
The smallest product to meet its desired market outcomes
I can see why these clash, and how the wording of Ries’ definition causes confusion.
The term has become overloaded where I work, often being used to refer to:
The scope that’s left once we’ve fixed the people working on the team, and the timeframe they have available.
I’m aiming to use the two alternative terms that Jeff Patton suggested, to be more specific in referring to the Robinson or Ries definition, respectively:
Smallest Successful Release
Next Best Test
Defining metrics for impacts and outcomes
This is pretty much what we’ve been doing on our work around self-referral into psychological therapies; figuring out what the key impact is, quantifying that, and then defining measurable outcomes that are leading indicators of us achieving that impact. We’ll do more on visualising this.
Hypotheses are going to hang around
I’ve used hypotheses to track user research experiments and activities, but I think a mistake we’ve made is treating a hypothesis as done with, either validated or invalidated, after our first experiment.
It’s fine to run a succession of experiments against a given hypothesis, each experiment becoming higher fidelity, as we learn more (see Giff Constable’s Truth Curve).
A hypothesis likely won’t be completely proven until we get something into production at scale. The earlier, lower-fidelity experiments allow us to stop earlier if a hypothesis is disproven.
Visualising multiple backlogs
We’ve tried before to visualise the end-to-end flow of work, from idea through to delivered user stories, and ended up ditching it in favour of a much simpler ‘Todo, Doing, Done’ type of board, for the whole team.
Jeff Patton showed some examples of separate backlogs and boards, for different types of work. I think we should have another go at this, embrace the idea that there are different types of Discovery and Delivery work going on, and visualise accordingly.
This also ties in with the idea of dual-track development, another thing we could get more rigorous about. There were some great ideas around adapting scrum sessions to serve these two different types of work.
Three ways of prioritising, for three different situations
These different types of work, and separate backlogs, should be managed and prioritised in different ways too:
New opportunities identified should be prioritised against our overall strategy or vision, to ensure we engage with things that keep us heading in that direction.
For Discovery activities we should prioritise based on what we need to learn the most about; assessing hypotheses based on risk and potential value.
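As a minimal sketch of that risk-and-value assessment, you could score each hypothesis and test the riskiest, most valuable assumptions first. The hypothesis names and the 1–5 scoring scale here are illustrative, not from the course:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    name: str
    risk: int   # 1-5: how likely we are to be wrong about this
    value: int  # 1-5: potential impact if we're right

# Example Discovery backlog (names and scores are made up)
backlog = [
    Hypothesis("Users will self-refer online", risk=4, value=5),
    Hypothesis("SMS reminders reduce no-shows", risk=2, value=3),
    Hypothesis("Clinicians will triage referrals weekly", risk=3, value=4),
]

# Highest risk x value first: these are the things we most need to learn about
for h in sorted(backlog, key=lambda h: h.risk * h.value, reverse=True):
    print(f"{h.risk * h.value:>2}  {h.name}")
```

A simple product of two scores is only one way to combine them; the point is that Discovery work is ordered by what we need to learn, not by delivery convenience.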
Once we, as Product Managers, have defined a smallest successful release, the team then needs to prioritise the user stories in that release against a different set of criteria, such as:
- What are the risks around the feasibility of delivering this story?
- What else is dependent on this story?
- How likely is this to break something else?
- How long do we need to test this story?
The Product Manager probably isn’t the best person to make these more granular prioritisation calls. Following believability-weighted decision-making, engineers are better placed to judge most of the criteria above.
Lots of tips on better collaboration: less talking, more intuition, stricter time-boxing
I’ve often tried to include the whole team (maybe 8-10 people) in decision-making, but Jeff Patton made a good case for running sessions with fewer people. He talked about teams having a mix of deciders and executors, and having a core trio of Product Manager, Design Lead, and Engineering Lead, to lead on making a lot of decisions.
The course was technically a Certified Scrum Product Owner course, but the time we spent on Scrum itself was mainly focused on ways in which we can adapt it.
The things listed above are those sticking in my mind right now, but there was heaps of other good stuff over the two days, and loads to apply right away.
I had to leave early, so thank you, Jeffs.