Prioritise product work and be happy

Tl;dr

I use a simple framework that characterises each new <something that you might want to do> according to its Impact (to the business) and its Risk (if we actually do it). I give each new <something> a score of 1-5 for both Impact and Risk (1 being low, 5 being high) and then we sort things accordingly.

That sounds ridiculously simple, and it is. But lurking within that simplicity is enough to talk about that I've written the five-million-word post below, which, among other highlights, includes sentences like 'If you can drive the cost of failure low enough, you can cheat the universe.'

Proceed at your own risk (see what I did there? You've been warned).

Impact and Risk

Prioritising the work we do is one of the most fundamental skills in life. It is the difference between being productive and not. But it's not usually something that we need to get too worked up over for ourselves, as individuals. Most people do this by hunch or gut feel for most things, and that generally works well enough.

When what we're confronted with, however, is the need to prioritise the work of a group, things get more complex. In this case, the obvious challenge is reaching consensus — your hunch and my hunch will likely sometimes (often!) disagree. When, as in any non-trivial business context, we need to coordinate priorities over multiple groups, this complexity can become quite a pain.

If you've recently heard me do one of my slightly deranged-sounding rants about Impact and Risk, and how to use those as a framework for prioritising things, you may now be feeling quite chuffed in this regard — 'Ha!' you may now be thinking. 'I've got this cracked, that was so simple, why didn't I think of that? Now this whole mess will be easy-peasy.'

Or, you might not.

Recap: what are these for?

Impact and Risk are two sides of a prioritisation coin. Taken together, they give you a fast and simple way to assess <something that you might want to do> and decide:

  1. Whether it's something you want to do

  2. If yes: how urgent it is

  3. And then: when to do it

But crucially, the purpose of these tools is to reach consensus with other people about shared priorities. They are both a tool for making that assessment and a common vocabulary — a language of prioritisation — for talking about it.

Impact

The first thing to think about — and talk to one another about — is the impact on the group. For the purposes of the rest of this post, I'm going to assume that 'the group' we're talking about is a for-profit business, although if you use your imagination, you should be able to see how these tools can be used for many other group prioritisation contexts as well.

Impact on the business should always be expressed in commercial terms. At its most basic, answer the question: 'How much money will this earn us?' This is the positive dimension of Impact.

Now, some things that you will be considering doing won't always be about a positive Impact to the business. Sometimes, they'll be negatives. In narrative form, they might be expressed as 'If X happens, we'll get sued into extinction, so we need to prevent that'.

That's fine. Impact can be used to assess either the positive impact on the business — the upside — or the negative — the downside.

And then finally, there's the small matter of chance. How likely is the impact to the business, positive or negative? Have a think about that probability and factor it in as well.

And that's it. You look at <something that you might want to do>, and you discuss:

  1. How much money will it make (or cost) us?

  2. How likely is that?

Then you glance over at the pile of <things you're already doing or going to do>, and you compare the answers to those questions with the answers to those same questions for everything already on the pile. And in the process, comparing them to each other, you give them a simple score — from '1' (very little Impact) to '5' (a bloody lot of Impact).

Note this last step... Impact is a relative score. It's a 5 or a 1 compared to all of the <things you're already doing or going to do>.

Now, for some people, this narrative-driven description of Impact suffices. If that's you, off you go — skip down and start reading about Risk, please. But if you're a wee bit more analytically minded, or if this has still left you feeling like there's some level of depth missing, consider the following pseudo-maths.

Pseudo-mathematics safety warning

I personally don't like to think about Impact in these terms, but I'm aware that this may simply be because I've gotten so used to evaluating stuff this way that I no longer need to — maybe I am always doing the following formula in my head, and I just don't feel it any longer. Most skills are like that, after all — the whole 10,000 hours of practice silliness. Actually, that example is quite illustrative of my uneasiness about the pseudo-maths I'm about to reveal. There has been plenty of subsequent research that pretty much confirms that the 10k number is basically pulled out of Gladwell's rear end. The interesting thing is, that doesn't make the core observation incorrect — it really does take <some large number of hours> to be proficient at a skill, and once you are, you internalise it to the extent that you're hardly aware of it any longer. The observation is true, it's the specific 10k number that's probably bunk.

In the same sense, I worry about the 'cliff of diminishing returns' with these pseudo-maths. Taking them too seriously — in the worst possible case, actually trying to calculate them — would be a huge mistake. Their purpose is to give the narrative above just a tiny bit more rigour, to show you, in a mathematical way, how to think about Impact.

If, in the course of discussing and assessing <something that you might want to do>, you find yourself writing the pseudo-maths version down on a cocktail napkin, and pointing at part of it to help make the other person(s) understand something you're saying — brilliant! Thumbs enthusiastically up. You're using it as part of your common vocabulary, and that's great. If, immediately after that moment, the conversation shifts into a debate about calculating the formula, you've just crashed over the cliff of diminishing returns — and if I'm around, I'll be pulling out my Donald Trump 'So Sad!' meme and waving it vigorously about.

Impact pseudo-maths: The Formula

With that important and massive grain of salt positioned plainly in front of you, here is a simple way to think about what I've suggested as an approach to assessing Impact:

Impact = probability of the outcome * potential upside (or downside) to the business

To recap in English, you assess the possible upside (and / or downside), and how likely that is. Then, by comparing it to your already planned and in-flight tasks, you give it a score of 1-5.
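If you'd like to see the same shape one more way, here's a minimal sketch in Python. To be loud about what's mine: the function, the names, and every number below are hypothetical illustrations, and actually running this for real pieces of work would be sailing straight over the cliff of diminishing returns.

```python
# A sketch of the Impact 'formula' as code, purely as a thinking aid.
# Every name and number here is a hypothetical illustration.

def expected_impact(probability: float, commercial_value: float) -> float:
    """How likely the outcome is, times what it's worth to the business.

    commercial_value is positive for upside (revenue earned) and negative
    for downside (e.g. the cost of getting sued into extinction).
    """
    return probability * commercial_value

# A hypothetical candidate: a 40% chance of earning us £2m.
candidate = expected_impact(0.4, 2_000_000)   # £800,000 expected

# Hypothetical things already on the pile, for the relative comparison.
pile = [expected_impact(0.9, 100_000),        # £90,000
        expected_impact(0.2, 10_000_000)]     # £2,000,000

# The actual 1-5 score comes from eyeballing the candidate against the
# pile, not from the raw numbers themselves.
print(candidate, pile)
```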

Risk

You're now halfway done. To complete your assessment of <something that you might want to do>, you now need to evaluate how risky it might be to actually go ahead and do it.

All you want to be thinking about, when evaluating the Risk, is what might happen if you actually do the <something that you might want to do>.

Recall that you've already taken a look at any commercial downside to the business in your Impact assessment. Don't double count by trying to do that again in Risk. Instead, think about Risk purely in terms of 'what is likely to happen when we start executing on this task?'

Think about the things that will be touched by this task — what persons or teams are involved? What systems or subsystems? What vendors or supply chain components? What would doing it involve, at a very high level (remember the cliff of diminishing returns — it comes up so fast!)?

Specifically, there are two things you should take a look at:

  1. How likely is it that some part of the <things that will be touched by this task> will fail? A team? A vendor or partner? A technology or some product we are dependent on? Etc

  2. How much will it cost if that happens? If some or all of the <things that will be touched by this task> fail? It can be helpful to assign a pseudo-commercial number here, denominated in some currency. But the 'cost' can come in many forms: lost time, damage to your reputation or brand, etc. Usually, however, you can use a hand-wavey £ number as a proxy for most things in this category.

Taken together, those two things give you your sense of the riskiness of this prospective task.

Now, do the same comparative step that you did at the end of the Impact assessment: glance over at the pile of <things you're already doing or going to do>, and by comparing this prospective new thing to those things, give it a score of 1 (very low Risk) to 5 (a bloody lot of Risk).

That's It

That's the whole framework. You now have each new, prospective task assessed in two dimensions: Impact and Risk. You sort your pile of things to do using these dimensions, and you prioritise accordingly. Things that are Impact 5 and Risk 1 you do first. Things that are Impact 1 and Risk 5 you might never do.
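If it helps to see that sorting step spelled out, here's a minimal sketch (the task names are invented, and the tie-breaking rule of 'lowest Risk first among equal Impact' is my own assumption; the framework itself doesn't prescribe one):

```python
# A minimal sketch of the final sort, with hypothetical tasks throughout.
from typing import NamedTuple

class Task(NamedTuple):
    name: str
    impact: int  # 1 = very little, 5 = a bloody lot
    risk: int    # 1 = very low, 5 = a bloody lot

pile = [
    Task("automate invoicing", impact=5, risk=1),   # do first
    Task("new onboarding flow", impact=3, risk=3),
    Task("vanity dashboard", impact=1, risk=5),     # might never do
]

# Highest Impact first; among equals, lowest Risk first.
for task in sorted(pile, key=lambda t: (-t.impact, t.risk)):
    print(f"{task.name}: Impact {task.impact}, Risk {task.risk}")
```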

Eh, hang on a second (post credits scenes)

Ah, this post is already sooo long. And you thought you were done. Ha! No such luck, I'm afraid. There are a few threads still dangling that we need to tidy up (and stick around for the Q&A at the end).

Risk Mathematics

First, let's have a look at the Risk equivalent of the maths we've already seen for Impact.

Without any long-winded safety warning, that looks like this:

Risk = likelihood (aka probability) of failure * cost of failure

To recap in English, you work out how likely it is that whatever you are assessing will fail, and what will happen (the cost) if it does. The product of those two things is your overall Risk factor (expressed as a 'cost').
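In sketch form (hypothetical numbers throughout, and the safety warning below applies in full):

```python
# The Risk formula as code: a narrative aid, not a calculator.

def risk(probability_of_failure: float, cost_of_failure: float) -> float:
    """Simplified Probabilistic Risk Assessment: expected cost of failure."""
    return probability_of_failure * cost_of_failure

# A hypothetical task: a 30% chance of failing, costing £200k if it does.
print(risk(0.3, 200_000))  # expected cost of failure: £60,000
```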

Now, here's the accompanying safety warning: unlike the pseudo-maths of Impact, these are real. This is a simplified version of a very real mathematical modelling tool called Probabilistic Risk Assessment (aka PRA). That Wikipedia link will give you several real-world examples of how, and in what contexts, it is often used.

I actually think the fact that this way of thinking about Risk is supported by a genuine mathematical model is more dangerous than the equivalent pseudo-maths for Impact. That's because the cliff of diminishing returns may not be as obvious.

My advice is to treat this tool exactly the same as I suggested you should with the Impact 'formula'. Use it as a narrative tool, to add explanatory power to your arguments. Do not actually calculate it. Just because you can do something with a tool doesn't mean that you should.

Very few business problems fall into the category of Risk where taking the mathematics of PRA literally would be sensible.

The magic power of cost of failure

Now, having said that, there are a few things worth digging into in a little more depth in the Risk formula. And that starts with the magical properties of the cost of failure part of the equation.

Up until fairly recently, most innovation in 'business' and 'management' was about minimising the likelihood of failure. That's because the cost of failure was seen as largely fixed, based on an understanding of it that was mainly characterised by physical objects. If you are building a car, your cost of failure is about what happens when things go wrong on the assembly line, where you cannot easily go backwards or undo the work already done. If your problem is going to require you to build a factory to solve it, then the cost of that factory is going to drive your cost of failure quite high — you can't just shrug your shoulders, in the event of a calamity, and somehow make all of that investment <not have happened>. So business and management 'scientists' focused on what they could influence — the likelihood of failure. If we can't control how much the factory is going to cost, then we have to do everything we possibly can to make damn sure nothing goes wrong.

But there are natural limits to how far this can go. The reason for that is that the likelihood of failure is never — and can never — be zero. There's an even longer post than this one that could talk about why, but I don't need to write it; Nassim Taleb already did. But the tl;dr is basically: the second law of thermodynamics. See also Taylorism and that sort of thing for how this eventually reached its natural cliff of diminishing returns.

But just as it was becoming clear that, as a society, we had long since crashed over that cliff, something really interesting happened. New technological innovations began to make it possible to radically reduce the cost of failure in new and unexpected ways.

Cloud computing is a great example. Where once, in order to be a successful business, one would have needed to buy (or rent) a data centre, invest in a ton of hardware and people to manage it, one now need only spin up something on a cloud provider. The reduction in cost of failure is enormous, not only because of the reduction in those fixed costs. Even more significantly, this kind of technology can recover from any given failure much faster, allowing you to be more tolerant of higher likelihoods of failure. It's this last point that turned out to be a game changer.

Consider an analogy: imagine you have some insanely expensive IRL physical thing — a nuclear power plant, say. Now imagine that you could choose between two variants:

  1. Variant One: costs $500m, has a likelihood of failure of 5%, and if it fails will take five years (and cost another $500m) to recover

  2. Variant Two: costs $1bn, has a likelihood of failure of 60%, and if it fails will take five nanoseconds (and cost another $500m) to recover

Which do you choose? In case it's not obvious, you choose Variant Two. Every time. Remember those black swans and pesky thermodynamics. A 5% likelihood of failure will happen. Inevitably. It's only a matter of when, not if. And when it does, the true cost to you is the loss of 5 years of time plus the recovery costs — over time it will always be vastly more expensive.
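To make that concrete, run the Risk formula on both variants with the cost measured in time rather than money (the time framing is my addition; the probabilities and durations come from the thought experiment above):

```python
# Expected downtime per failure opportunity for the two variants.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

variant_one = 0.05 * (5 * SECONDS_PER_YEAR)  # 5% chance of 5 years down
variant_two = 0.60 * 5e-9                    # 60% chance of 5ns down

print(f"Variant One: ~{variant_one / 86400:.0f} days of expected downtime")
print(f"Variant Two: {variant_two:.1e} seconds of expected downtime")
```

Roughly 91 days of expected downtime versus three nanoseconds, which is the whole point of the next paragraph.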

More significantly, there's an inflection point where, once your cost of failure drops below a certain level, it makes the likelihood irrelevant. If your mean time to recover is genuinely five nanoseconds, it's become impossible for someone on the 'outside' to even perceive that you've failed. Your net Risk has dropped to zero, and stays there, even if your Thing (whatever it is) is literally failing every ten nanoseconds.

If you can drive the cost of failure low enough, you can cheat the universe.

This peculiar diversion is important because, as a user of this tool, learning how to spot and exploit possibilities to reduce the cost of failure will turn out to be a superpower.

Risk, dependencies, and the line of time

How might this work? To answer that, let's examine a characteristic of both Risk and Impact that we've not looked at yet: they are point in time statements. The Impact that we assess is usually one that will decrease over time, if we cannot find a way to seize the opportunity ('We can make £10m in Q2 next year if we're in the market with this, but if it's Q4, the opportunity is probably only about £2m'). The assessment is inherently bound by the moment it is relevant for.

For Risk, it often works the other way around. If we can put a task off into the future, the Risk often drops. There are many reasons for this, but the most common one is that this Thing we're considering has one or more dependencies on that Thing over there. If we do something about that dependency now (and put off the dependent Thing until after we're done) the resulting Risk assessment for the dependent thing will often drop, sometimes dramatically.

This is a great example of the cost of failure superpower. At no point in the previous paragraph did we do anything to change the likelihood of failure. But by addressing something that the dependency represented, we reduced the cost of failure. And that works magic on our tasks, our products, our business.

Risk and Work In Progress

Another angle to examine in Risk for cost of failure opportunities is in assessing Work in Progress. You don't estimate Effort in the approach I'm describing here — that comes after you've prioritised, not before. But 'work' does play a role in assessing Risk, in the form of Work In Progress.

When you're assessing Risk, you look at the entire supply chain needed to implement it. One key aspect is the team (or teams) that will have to do the work the Task implies. When you glance over at them, if their Work In Progress queue is very high, your Risk goes up. If it's low, your Risk goes down.

This suggests that a really (really!) fruitful place to go hunting for cost of failure superpower moments is in the Work In Progress queues of your various teams. Keep Work In Progress under control, and within healthy limits, and you'll find that you can actually go faster, be more productive, not less — a result that many people find profoundly counterintuitive.
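To show the sort of thing I mean (and only that: the healthy limit and the size of the nudges below are invented for illustration, not part of the framework), a WIP-aware Risk adjustment might look like this:

```python
# A hypothetical heuristic: nudge a task's Risk score upwards when a team
# in its supply chain is overloaded. Thresholds are illustrative guesses.

def wip_adjusted_risk(base_risk: int, team_wip: int, healthy_limit: int = 3) -> int:
    """Bump a 1-5 Risk score by one if the team's WIP exceeds a healthy
    limit, by two if it exceeds double that limit, capped at 5."""
    if team_wip > 2 * healthy_limit:
        return min(5, base_risk + 2)
    if team_wip > healthy_limit:
        return min(5, base_risk + 1)
    return base_risk

# A Risk-2 task headed for a team already juggling eight things:
print(wip_adjusted_risk(2, team_wip=8))  # -> 4
```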

The key thing to internalise here is that Work In Progress is one of the most important inputs to assess when trying to work out a Risk score.

What about Impact=5 and Risk=5 things?

In a nutshell: you don't understand the thing you are assessing well enough.

If, after having done an Impact + Risk assessment as described here, you are staring at a Thing that looks like it's an Impact 5, but also a Risk 5, you need to dig a little deeper into the problem.

What will typically emerge from a bit more research here is that there is something you need to do to lower the Risk assessment. You might discover a dependency that means 'if I do <this other thing> first, and put <this piece of work I'm assessing> farther out on the Roadmap, then the Risk for <this piece of work I'm assessing> drops to a 2' or something along those lines.

Ask Mark:

Question: what about “internal” features?

Anything which is going to make us more efficient but not necessarily make us more money. In this model, won’t they always fall to the bottom of the queue but could be quite important? Same with anything that is UX related; something that is going to simplify things for clients (it might be a feature that isn’t a selling point so isn’t going to win us new clients but might stop us losing existing clients)?

Answer:

The way to think about these kinds of <pieces of work> is still from an economic perspective. Things that ‘make us more efficient’ also reduce costs. So examine a thing that seeks to ‘make us more efficient’ by taking a guess at its impact on costs – and score that as your Impact number. Imagine looking at two different pieces of work. One piece is about doing something that we estimate will create an additional £100k in revenue (doesn’t matter how or why, just bear with me). The other piece of work is something that will cut costs by £50m. Relative to one another, the revenue generating task is Impact of, say, 2, whilst the cost cutting one is Impact 5.

Similarly, if a change in the UX is ‘going to stop us losing clients’ that can be rephrased as ‘something to minimise churn’ (‘churn’ is the amount of money you’re losing by customers leaving your product). Churn has a monetary value – work out how much your change in the UX will improve it, and you’ve got something you can use to estimate Impact.
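As a worked example (every input below is a made-up assumption, purely to show the arithmetic):

```python
# Putting a £ number on a churn-reducing UX change so it can be scored
# for Impact like anything else. All inputs are hypothetical.

customers = 1_000
revenue_per_customer = 1_200   # £ per customer per year
churn_before = 0.10            # 10% of customers leave each year
churn_after = 0.08             # our guess at the rate after the UX change

retained = customers * revenue_per_customer * (churn_before - churn_after)
print(f"£{retained:,.0f} of revenue retained per year")  # £24,000
```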

Question: what about opportunity cost?

How do we use this framework to evaluate the cost of not doing something? For example, what if we decide to not do a mobile version of our software? How does that affect our planning? How would Nokia have avoided being slaughtered by their decision to not do something like the iPhone?

Answer:

Well, that’s a great and interesting question. What it’s actually lighting up is that this framework is not the correct tool for every kind of planning question. So the tl;dr version of the answer is: don’t do that. This is the wrong tool for that question.

Expanding on that response, what I would say is that exploring questions like opportunity cost for organisational choices is something that happens at strategic levels of choice. The Impact and Risk framework we are discussing here works best for tactical choices. Making sound choices about things that involve factors like opportunity costs requires a higher level of situational awareness than the simple relative comparisons between <pieces of work>. It requires a map (like Wardley Maps, for example). And those maps require a number of antecedents – a purpose, an understanding of the landscape and climate we’re operating in, and a doctrine to guide our strategic choices. The Impact and Risk framework I’m describing in this document is useful for teams who are in motion on the map. It helps them make relatively short term choices about navigating their way through it (where ‘movement’ is often done by building (and selling) things).

Question: what about RICE, MoSCoW, etc etc?

There are plenty of other prioritisation frameworks out there, many focused specifically on product management. Examples include Must, Should, Could, Won’t (MoSCoW) or Reach, Impact, Confidence, Effort (RICE). There are lots. Here’s a nice post that lists and contrasts the most common ones. In particular, what I’m talking about sounds like it’s similar to RICE?

Answer:

It does sound similar, but despite both using something referred to as Impact, it’s not. The truth is that, having used most of the other approaches in one form or another, I find that they all have limits and bugs that make me dislike them in practice. After decades in the trenches, my approach is one that I find stays on this side of the cliff of diminishing returns for enough use cases to be generally useful.

Now, having said that, I’ll explain in a little more detail some of the limitations I see in some of the other frameworks, as well as which tools I do like and for what contexts or purposes.

First, what do I think are the attributes of an effective, useful prioritisation framework?

  1. Forces focus on business value (and / or cost)

  2. Does not evaluate estimates of effort

  3. Does evaluate risk, including supply chain, including work in progress

  4. Can incorporate data in the process, but not reliant on it

  5. Fast and easy to do

RICE (and other approaches like Value vs Complexity and Weighted Shortest Job First) fail to meet this list on multiple levels, but their primary problem is the inclusion of some version of effort in the prioritisation process. Why is that a problem? After all, it feels intuitively correct to sort things by how hard they are. The problem that it creates, however, is that it inevitably leads to putting off the hard things in favour of the easy ones. And that builds up technical debt, and is (again, in my experience) the most common contributing cause that leads teams and organisations to fall into The Build Trap.

I suspect that part of the confusion around this topic arises from the common conflation of two entirely different things in the single word ‘roadmap’. Roadmaps can be (and usually are) used to show us both of these things, in one tidy picture: how work has been sorted and ordered (i.e. prioritised) and how that work has then been scheduled. Understanding the schedule of work is important, of course – when are we planning to do this or that are key business questions. But when the schedule drives the prioritisation, you tend to get suboptimal results. When the schedule is done after prioritisation, you tend to get much better results.

Let’s look at an example in the physical world to illustrate this. Let’s say you are building a house. In the physical world, it’s pretty obvious that you have to do things in a certain order, because the laws of physics make it so. You can’t put the roof on before you’ve built the foundation (or, indeed, the rest of the frame). But software isn’t physical, and so it can be much harder to notice when you’ve fallen foul of this kind of fallacy.

To continue with our physical example, suppose that, for whatever reason, you weren’t aware of (and couldn’t immediately notice) the consequences of the laws of physics. You’ve estimated that building the roof would take a week or so, and building the foundation would take three. Further, assume that you had determined that having a completed roof would satisfy the ‘have somewhere to shelter out of the rain’ requirement, which you know is really urgent for <whomever you’re building the house for>. So you go ahead and build the roof first. Now, however, the laws of physics become impossible to overlook – you’ve got this roof, and nothing to put it on. So you now have to a) find somewhere to set it aside, b) go ahead and do the hard, slow things anyway, and c) communicate to <whomever you’re building the house for> your failure to meet your promise (implied or otherwise) that they’ll have somewhere to shelter from the rain. This sounds ludicrously contrived, but this kind of pattern happens with software all the time. And including effort in your prioritisation calculations is a guaranteed way to trip into it.

Some other approaches are either too subjective (Eisenhower Matrix: WTF is ‘Urgent but not important’ anyway?) or not enough (RICE, WSJF – be wary of the illusions that ‘more data’ can mask; precision is not accuracy). Some measure the wrong things (Kano – be careful of the trap of a ratio of 1:1 between ‘customer desire’ and ‘business value’).

Some aren’t really prioritisation models at all, but rather approaches to validating learning (Walking Skeleton, and indeed, the entire concept of MVP itself). I am happy to use tools to validate learning, and often, whilst doing so, I will find myself needing to prioritise tasks – and I’m right back where we started this monumental post.

