How can we put an estimate on complexity?
Or “How do we gauge how risky something is?”
People often struggle to estimate things that are unknown. One way to improve estimates is to help people better understand the uncertainty associated with the work. Liz Keogh describes the following approach (see Estimating Complexity - Liz Keogh, lunivore):
“Something I often get teams to do is to estimate, on a scale of 1 to 5, their levels of ignorance, where 5 is “complete ignorance” and 1 is “everything is known”.”
This approach of asking “relatively speaking, how unknown is it?” helps us determine what options we have for addressing the work. While the approach works at all levels of effort - team, program, and portfolio - I've found it particularly helpful at the portfolio level, where, rather than a 1-5 scale, we'd use a modified Fibonacci scale so that there is a significant jump in risk as we move up the sequence. A chart might look like:
- 1: Just about everyone in the world has done this.
- 3: Lots of people have done this, including someone on our team.
- 8: Someone in our company has done this, or we have access to expertise.
- 13: Someone in the world did this, but not in our organization (and probably at a competitor).
- 20: Nobody in the world has ever done this before.
You can see that if a piece of work is estimated at “20”, it's likely to be an experiment (or “spike”, if operating at a story level) of some kind, regardless of how predictable we might like it to be! This matches Cynefin's complex domain, and sits at the far edge, close to chaos, since we don't yet know whether it's even possible to do. 13s are also a high-discovery, complex space; we know someone else has done them, but we don't know how. As we move down the numbers, we move through complicated work, which is understood by a smaller number of people we might consider experts, and on to clear work that anyone can understand.
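To make the mapping concrete, here is a minimal Python sketch (all names are hypothetical, not part of any tool) that encodes the chart above as a lookup and suggests a way of working for each score. The score-to-domain assignments for the lower numbers are one interpretation of the paragraph above, not a fixed rule.

```python
# Liz Keogh's ignorance scale, encoded as score -> (description, rough Cynefin domain).
IGNORANCE_SCALE = {
    1: ("Just about everyone in the world has done this.", "clear"),
    3: ("Lots of people have done this, including someone on our team.", "complicated"),
    8: ("Someone in our company has done this, or we have access to expertise.", "complicated"),
    13: ("Someone in the world did this, but not in our organization.", "complex"),
    20: ("Nobody in the world has ever done this before.", "complex, close to chaos"),
}


def suggested_approach(score: int) -> str:
    """Return a rough working approach for a given ignorance score."""
    if score not in IGNORANCE_SCALE:
        raise ValueError(f"Use one of the scale values: {sorted(IGNORANCE_SCALE)}")
    description, domain = IGNORANCE_SCALE[score]
    if score >= 13:
        approach = "treat it as an experiment or spike before committing"
    elif score >= 3:
        approach = "lean on the experts who understand this kind of work"
    else:
        approach = "just do it; the work is well understood"
    return f"{score} ({domain}): {description} -> {approach}"


if __name__ == "__main__":
    for score in (1, 3, 8, 13, 20):
        print(suggested_approach(score))
```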
See What is the Best Approach to Making Decisions in Our Context? for more on this thinking.