Monday, 26 January 2015

Scrum - Part V: Story points

This is the post where I need to don my flameproof vest, since, for one reason or another, the subject of story points versus plain time estimates is one of the most hotly contested battlegrounds in Scrum.
I'll start off with two seemingly contradictory statements:
  • I find story points (let's call them SPs to save on blog space from now on) useful.
  • I've actually employed them in only 2 of the 100-odd sprints I've managed so far, and do not intend to use them in the foreseeable future.
The contradiction should get reconciled at the end; in the meantime let's do the obligatory intro, which as always is better described elsewhere.
Story points are a comparative work-estimation technique, whereby the team uses imaginary units to compare tasks to one another rather than give an absolute time measure.
For example, I might say that writing this specific blog entry is three story points, since the topic is mid-sized, and it's about three times bigger than the smallest post that's going to grace this series. (As opposed to an absolute measure of four hours which is roughly how long it would take me from concept to pressing the Publish button).

There are different variants of SPs, with Fibonacci sequences being commonly used, but in truth there's nothing revolutionary about any of them. Comparative techniques such as T-shirt sizes, function points and so on have existed and been in use for years, since long before Scrum arrived on the scene.
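As an illustration, here's what a Fibonacci-style SP scale often looks like in practice. The exact values and the little snapping helper are my own sketch, not anything mandated by Scrum:

```python
# A commonly used story-point scale drawn from the Fibonacci sequence.
# The widening gaps deliberately prevent false precision on larger tasks:
# there is no point arguing whether a big task is "12" or "14".
STORY_POINT_SCALE = [1, 2, 3, 5, 8, 13, 21]

def nearest_scale_value(raw_estimate):
    """Snap a raw comparative estimate onto the coarser scale."""
    return min(STORY_POINT_SCALE, key=lambda sp: abs(sp - raw_estimate))

print(nearest_scale_value(6))   # 5
print(nearest_scale_value(16))  # 13
```

The point of the coarse scale is exactly the premise discussed below: the estimator is going to be wrong anyway, so the units should not pretend otherwise.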
Steve McConnell goes over a number of such techniques in his book, which, by the way, I consider a must-read for anyone serious about software project estimation. And while I'm not an estimation pioneer by a long shot, I was employing some of those techniques back in 2004, before the onset of Scrum's popularity.

All comparative techniques are essentially based on a single premise: the person doing the estimation is likely to get it wrong. So, instead of coming up with false precision (e.g. "this refactoring job is going to take me 2 weeks, 3 days and 4 hours"), we force the estimate into coarser units that better represent our understanding.

I agree with that. The units of weeks, days, hours and, yes, minutes usually carry far more precision than what we, mere mortals, actually know about our upcoming tasks.
The key word here, though, is "usually", and this is where I need to go into long-term versus short-term planning.

Let's go back to my long-running example of a multimedia player, and pick a single task (not at random!):

Integrate Chinese UI localisation provided by an external agency

Now, unless I've integrated Far East l10n in very similar circumstances before, I simply cannot/should not/must not provide a time estimate! It will land somewhere between wrong and inaccurate, and the right way to tackle this would be to use a comparative technique: an XL T-shirt size, 34 SPs, you name it.

But this is where the crux of the message comes in: I would never contemplate putting such a task into any sprint. Yes, I'll estimate it if my boss comes over and would like a general, off-the-cuff idea. I'll estimate it if we do roadmap planning from afar and need to figure out what, in general, the team will be working on. Still, I'll never put it as-is into my next sprint.

Instead, here are typical tasks that would be sprint candidates:

Append Chinese UTF-16 LE localisation files to the multilingual directory
Review and run QA for localised menu items

Granularity is at a completely different level here. It is vital that by the time we reach sprint planning and commitment we do not have monolithic tasks; the work is already broken into fine-grained units.

Yes, we may get the breakdown wrong, but this is where the law of large numbers comes in. Let's say I have 25 small tasks in the sprint: each and every one may have a wrong estimate, but they are extremely unlikely to all err in the same direction. Statistics and probabilities are on my side, and they are very good allies to have.
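To see that cancellation effect in action, here's a small simulation sketch. All the numbers in it (sprint size, error range, task count) are made up purely for illustration:

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

def average_sprint_miss(num_tasks, total_days=50, error=0.4, trials=10_000):
    """Split `total_days` of estimated work into `num_tasks` equal tasks.

    Each task's actual duration deviates from its estimate by a uniformly
    random factor of up to +/- `error` (here, 40%). Returns the average
    absolute deviation of the whole sprint from its total estimate.
    """
    per_task = total_days / num_tasks
    total_miss = 0.0
    for _ in range(trials):
        actual = sum(per_task * (1 + random.uniform(-error, error))
                     for _ in range(num_tasks))
        total_miss += abs(actual - total_days)
    return total_miss / trials

# One monolithic task: the single error hits the sprint total in full.
print(average_sprint_miss(num_tasks=1))
# 25 small tasks: individual over- and under-estimates mostly cancel out.
print(average_sprint_miss(num_tasks=25))
```

Even though every individual estimate is just as unreliable in both runs, the 25-task sprint lands far closer to its committed total, which is the whole argument for fine-grained breakdown.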

And here we come to the main moral of this story: with sprint planning, the small guys always win. Or, to paraphrase: having many small tasks is a must for a successful sprint.
If I have that, then it does not matter much whether I use SPs or straight time estimates: prior analysis and the law of large numbers do the heavy lifting. If all of my tasks are within, say, 5 SPs or two days, then the likelihood of success is high.

So, all things being equal, I slightly prefer time estimates. Why? Because they are more natural and easier to track, and I do not need the extra complexity of comparative estimation if the tasks are small and well understood.

If a person takes two days' holiday in a three-week sprint, I simply subtract two days from our sprint allowance. I do not need to run a formula that applies a factor of 13/15 to their nominal SP bucket, and then apply another multiplier to account for load factor (see my previous post). This is simply a matter of convenience - nothing else.
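For the curious, here's the arithmetic side by side. The velocity and load-factor values are hypothetical; only the 13/15 ratio comes from the example above (two days off out of 15 working days in a three-week sprint):

```python
# Capacity adjustment for two days' holiday in a three-week sprint.
sprint_days = 15   # working days in a three-week sprint
days_off = 2

# With time estimates: just subtract the absent days.
time_capacity = sprint_days - days_off  # 13 days

# With story points: scale the nominal velocity by the same 13/15 ratio,
# then apply a load-factor multiplier on top.
nominal_velocity = 30  # SPs per full sprint (hypothetical)
load_factor = 0.8      # overhead multiplier (hypothetical)
sp_capacity = nominal_velocity * time_capacity / sprint_days * load_factor

print(time_capacity)           # 13
print(round(sp_capacity, 1))   # 20.8
```

Both routes produce a usable number, of course; the point is only that the first one is mental arithmetic while the second needs a formula and two assumptions.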
Of course, all this convenience would be null and void with long-term planning, which I'll come to in later posts.

To summarise: comparative estimation works great when tasks are not well understood. That stands in contrast to short-term sprint planning, where tasks should be small and well defined.

Now, we're ready for the reconciliation I promised back at the start.
SPs are a great technique, and I've used them a number of times informally when looking beyond the next month or two - however, that was always outside of sprints.

While the vest is still upon my person, I'd like to mention that all this is borne out by experience. Apart from two exploratory attempts, we have stubbornly stuck to plain time estimates in our sprints. Our predictions were usually reasonably accurate; I'd be the last person to enter a competition, but more often than not we did what we promised, when we promised it - and after all, this is what good estimation is all about.

Of course, experiences may differ, and it's quite possible we'll land in a situation where SPs are a must: for example, having to jump right into several poorly planned tasks due to business forces, and needing to at least slightly mitigate the unknowns. Fortunately, I haven't quite been there yet, but it may happen. As with all techniques, religiously avoiding one is even worse than unconditionally employing it.

In the next instalment of this series, I'll move on to the slightly less controversial subject of workflows and task lifecycles.
