Estimation is waste?

There is an ongoing debate about whether or not to use estimates in your software development process, largely fueled by #NoEstimates. But, as is often the case, the battle has been picked up by many who have become over-zealous: without fully understanding the idea, they will tell you with great certainty that you should never estimate.

There are many good explanations of why estimating may not help you, and some great explanations of alternative ways to get the information you need, or of why the information was not needed in the first place. But I want to focus on one of my pet peeves: "Thou shalt not estimate because estimating is 'waste'." Every time I see that I shudder, and quite often I am left with the feeling that the person writing doesn't understand either Lean or #NoEstimates.

Thou shalt not estimate because estimating is ‘waste’

What is Waste?

First of all, the statement makes it sound like waste is bad; the word itself does seem to imply as much. However, Lean is a lot less emotive with the term. Lean is often confused with, and simplified into, the mere reduction or removal of waste, but this is a lazy and incorrect interpretation.

Lean is about productivity first and foremost, and is about reducing 'waste' only when and if doing so does not impact the system's productivity. In other words, in Lean, 'waste reduction' is far less important than system improvement, but for whatever reason we get hung up on waste, especially when bashing others. We reduce waste to improve the system; waste reduction is not our goal, it is just a tool to help us.

Is Estimation waste?

Is estimation waste? The short answer is yes, but that is only part of the truth. Lean considers waste to be any activity that does not directly add value to your product, and classifies it as either 'necessary waste' or 'pure waste'.

Waste covers a whole host of things, including planning, testing, reporting, breaks, vacation, and sickness, along with a great many others too numerous to mention.

So calling estimation 'waste' is akin to calling planning 'waste'. If we were to eliminate, say, planning and testing in an effort to reduce waste, we would very likely cause more and worse waste by producing the wrong thing (over-production waste), in the wrong order (over-production again), or with poor quality (rework waste).

In other words, not all waste is bad and not all waste should be removed; simply calling something 'waste' does not help the conversation.

Sometimes a little waste now can save a lot of waste later.


Is it beneficial?

The real question is whether the activity helps the system, and, as a follow-up, whether there is a better way of achieving the same thing.

Questions to ask when considering waste:

  1. Does this activity help the system to be productive now and in the future? (Would removing it impact our productivity?)
  2. Is there a better way to achieve the same outcome?

I can't answer the question of whether estimation is beneficial in your system, because every system is unique. If you are using estimation for forecasting purposes, then I'd suggest there may be alternative solutions that are better, and #NoEstimates may be a good place to start. But forecasting is only one use of an estimate; your system may find that estimates are beneficial. It is for you to decide.

Next time you see someone use the elimination of 'waste' as an absolute and unqualified justification for not doing something, please challenge them to qualify their statement. Remember that waste is often necessary; our goal is more often to improve or understand the wasteful activity than to eliminate it entirely.


Let’s consider a world without waste

If you are still unsure, think about what would happen to your system if you abolished all wasteful activities:

  • Vacations – typically 10% of your productivity lost.
  • Coffee breaks – 10 minutes every 2 hours = another 8% productivity lost
  • Toilet breaks
  • Stand-ups – yep they are waste too.
  • Demos, Retrospectives, Planning, all are ‘waste’

Just imagine how productive you would be with no direction, no feedback and no staff?


Forecasting: asking the Why? behind the When?

A Product Owner or development team will often be asked for an estimate or a forecast for a product, a feature, an iteration, or a story. But when you go to a team member and ask, it is not uncommon for the colour to drain from their face, the twitching to start, and their pulse to race. Past experience tells them that the next words they speak will haunt them for the foreseeable future, maybe even for the rest of their career. "It is only a rough estimate," you say to reassure them, but they just know you have other plans and that the fate of mankind lies in the balance. Or so it seems.


The difference between a forecast and an estimate.

For most practical purposes there are only subtle differences, the main one being that a forecast deals exclusively with the future. When I think of the two I tend to think of an estimate as information and a forecast as expectation. It is likely that I will use estimates to create my forecast and I may use multiple estimates and even apply other factors to derive my subjective forecast. Whereas an estimate is generally an isolated objective assessment.

Both have huge degrees of variation; accuracy; precision; and risk factors, but in some instances it may very well be that my estimate and my forecast are the same for all practical purposes.

For example, if I were asked to estimate a journey, I might say that it is between 15 and 45 minutes, and on average 30 minutes. If I am asked to forecast when I will arrive, that requires knowing not only the estimate but also when the journey starts and whether there are other factors, such as a need to stop for fuel or food. I may also weight the journey based on the time of day. Thus my forecast, whilst based on an estimate, may not be the same as the estimate. Other times I may simply take the estimate and it will become my forecast.

In practice, though, the terms are used interchangeably, and it likely makes very little difference, except when it comes to expectations. If I ask "How long will this take?" I am asking for an estimate; if I ask "When will this be done?" I am asking for a forecast. Both are fine so long as everyone knows what is meant by the question.

Forecasting is misunderstood

Forecasting is guesswork. It may be scientific guesswork, based on past experience, metrics, and clever projection tools, but it is still a guess, and you will be wrong far more often than you are right. The more professional, clever, and precise the forecast looks, the more confidence you may instill in that guess. But it is still a guess, and when your audience gains undue confidence in it, they have a tendency to treat it as fact, not forecast. It might be that a guess is all that is needed, but dressing a guess up and delivering it with confidence may create a perception of commitment.

A forecast is commitment (in the eyes of the one asking)

In normal circumstances, giving a forecast creates an implied commitment, even if you give caveat after caveat. At any point where you even hint at a commitment to deliver to a fixed scope, fixed cost AND fixed date, you are setting yourself up for disappointment. And sadly, that is how most people see a forecast.

By all means work to a budget or a schedule or even a scope (although that is very likely to vary) but ideally ONLY one and never all three.

Understanding why a forecast is needed.

There are many reasons why people ask for forecasts, and the why is the most crucial aspect of the process. Understanding the why is the first step to providing the right information, and hopefully to changing the conversation. Forecasting for stories and features is both more reliable and more accurate, but project/product-level forecasting is where there is the most confusion and the least understood purpose.

If possible, try to change the question of "When?" into an understanding of "Why?"

Generally speaking, we gather information to help us make decisions. If the information will have no bearing on our decisions, then gathering it is wasteful. Forecasting all too often falls into this category. We take time and effort to produce a forecast, yet the forecast has no bearing on any decision; that effort was wasted. Or, more often, the level of detail in the forecast was unnecessary for the purpose for which it was used.

Making predictions of unknown work based on incomplete information and a variety of assumptions leads to poor decisions, especially when the questions being asked are not directly reflective of the decisions being made.

In my experience the Why? generally falls into three broad categories:

  1. How long will this take?
  2. How much will this cost? and
  3. Simple curiosity/routine.

But most people don’t ask why, they spend time creating a forecast and present it without knowing how it will be used. Some people asking for a forecast/estimate may not even know why they are asking, they just always get a forecast. But let us delve a little deeper.

How long will this take?

Why do you need to know?   Some typical answers are:

  1. I need to plan
  2. I have dependencies
  3. I need to prioritize
  4. I have a deadline

How much will this cost?

Why do you need to know?   Some typical answers are:

  1. I want to know if I will get a good Return on Investment
  2. I need to budget
  3. I need to prioritize
  4. I have limited available funds

Curiosity/routine

  1. We always ask for a forecast, I need to put something in my report
  2. I want reassurance that you know what you are doing
  3. I want to know if my project is on track

What you will notice about these questions is that when you ask why, the original request for a forecast suddenly doesn't make sense anymore: they are not really interested in the forecast itself, but in some other factor that they can infer from it. If the intent is clear, the question can be tailored to get the required information in a better way.

e.g.

I need a forecast – Why?… I need to know when I can allocate staff to the next product/project.

In this case would a simple high level guess be sufficient? I feel confident that staff won’t be available for the next 3-6 months, in 3 months let’s review, I’ll have a better idea then…

I could put a lot of effort into a detailed forecast but an instinctive response may give all the information needed, saving us a lot of trouble.

or

I need a forecast – Why?…  I want to ship this to maximize the Christmas shopping period – or I want to time the launch for a trade show etc.

This isn't a request for a forecast; it is a request for an assurance that there will be something suitable available for a particular event/date. I can give you an assurance and a confidence level without a detailed forecast. I may even change the priority of some features to ensure those needs are factored into the product earlier, or de-scope some features to meet a certain date.

or

I need a forecast – Why?… I have limited funds available, and I want to know when I can start getting a return on this investment.

This isn’t a request for a forecast, it is a request to plan the product delivery so that revenue can be realized sooner and for the least investment. It may be possible to organise delivery so that future development is funded from delivering a reduced functionality product early. Or that development is spread over a longer period to meet your budget.

or

I need a forecast – Why?… I need to budget. The way this company works, I must get approval for my project expenses and staffing in advance, so I need to present forecasts of costs and timelines.

This answer is twofold. First, can you challenge the process? It might be better to have a fixed staffing pool and prioritize products/projects so that the most important ones are done first before moving on to the next, in which case the forecast is irrelevant; it is a question of prioritization. Or, if the issue is ensuring staffing for the forthcoming year, could I simply say whether or not this product will be completed in the next budget year?

or

I need a forecast – Why?… I want confidence that you know what you are doing.  This is not a request for a forecast it is an assessment of trust in the team. There are many more reliable ways to ascertain confidence and trust in a team than asking for a forecast.

or

I need a forecast – Why?… I have a dependency on an aspect of this project.  This may not be a request for a forecast of this project, but more a request to prioritize a dependency higher so it is completed sooner to enable other work to start.

or

I need a forecast – Why?… I want to know if my project is on track. Essentially what you are saying is: I want to track actual progress of work done against a guess made in advance, based on incomplete information and unclear expectations, and I will declare this project to be ahead or behind on that basis. I am sure those of you reading this will know that what you are measuring here is the accuracy of the original guess, not the health of the project. But we have been doing that for decades, so why stop now?

Finally, the closest to a genuine need for a forecast:

I need a forecast – Why?… I need to prioritize or I want to know if I will get a good Return on Investment.

Both of these are very similar questions, but they are really requests for estimates, not forecasts. A rough estimate helps me gauge the cost, and when I evaluate that against my assessment of the value expected from the project, it may help me decide whether the project is worth doing at all, or whether there are other projects that are more important. E.g. if it is a short project, it may impact my decision on priorities.

But even here it is not the estimate that has value; it is just information that helps me evaluate and prioritize. If I already know that this project has huge value and will be my top priority, does forecasting aid that decision?

When does it make sense to forecast?

Listing those questions above, it seems like I am suggesting that a request for a forecast is always the wrong question and is never really needed. And it is true that I struggled to come up with a good example of when it makes sense to do a detailed project-level forecast that includes dates or any type of scheduling expectations.

It may be necessary for a sales contract, to have a common set of expectations, although I would very much hope that sales contracts for agile projects are for time and materials and are flexible in scope and dates, if not cost too. So for the purposes of setting expectations and in negotiations, I can see that there is value in an estimate, although I still wonder whether a detailed forecast adds anything here that a reasonable cost estimate doesn't cover. If possible, I would rather work to an agreed date or an agreed budget than offer a forecast that may lead to false expectations.

But the reality is that sometimes your customer does want one and won't tell you why, or doesn't know why. Some customers (and managers) are willing to accept that a forecast costs time and money, that the more detailed it is the more it costs, and that being more detailed may not make it any more accurate.

More detailed investigation is likely to build greater confidence in your forecast, but it may not make it more accurate. Ask the question: "Will more detail have a material impact on your decisions?" If the extra effort wouldn't affect your decision, then it is just waste.

I would caution that if there is no alternative and a forecast is made, it should be revised regularly and transparently; the sooner the forecast is seen as variable, the more useful it is. So many assumptions are tied to a forecast that it becomes a ball and chain if there is no expectation of it changing, so it must be refreshed regularly to prevent the early assumptions being treated as certainty, or they will lead to disappointment later.

Short term forecasting

The real value in forecasts comes when the forecast covers a short frame of time. Over the short term we can have much more confidence in our forecasts, especially if we have been working on the project for a while and have historic information to base them on. There are fewer assumptions and fewer variables.

  • Can you forecast which features are likely to be included in the next release?
  • If I add a new feature now can you estimate a lead time for this?
  • Can you give me an estimate of how much this feature would cost?

By limiting the scope of the forecast to areas where we have more confidence in our expectations, the forecasts become more meaningful, and whilst there is still a danger they are seen as commitments, the risk is mitigated by the shorter time frame.

Alternatives to forecasting

Many of the questions above could have been resolved with much less effort than a detailed project-level forecast. In most cases we could achieve sufficient accuracy for decision making with a high-level estimate. E.g. "A product like that is similar to 'x', therefore I'd estimate 4-8 months for a small team", or, as a rough estimate, "12-18 months for two teams", and calculate costs accordingly.

These estimates are certainly broad, but if you have confidence in your teams, and you believe they will use Agile principles to get value early and are able to communicate progress and be transparent about issues, then I see no issue with them. They are sufficient to allocate staff and resources, to prioritize, to schedule, and to inform return-on-investment decisions.

For the other questions you may achieve far better answers through the use of Product/feature burndown charts, user story mapping, or even simple high level Road maps. These tools provide useful information which can be used for managing expectations, identifying dependencies and visualising the progress of a product. And crucially – aid in setting priorities and keeping the progress transparent.

 

ABOUT THE AUTHOR

John Yorke is an Agile Coach at Asynchrony Labs. He helps the company's development teams improve, and is a former Scrum Master with more than 20 years' experience in software development.


Lies; Damn Lies; and Forecasting…

NoEstimates in a Nutshell

NoEstimates has gained a lot of traction over the last few years, with good reason. It is primarily about adopting Agile properly: delivering the valuable work in order of priority and in small chunks, and by doing so eliminating the need for a heavy-duty estimation process. If we are only planning for the next delivery, we can forecast reliably.

But sadly that is generally not good enough, and some level of forecasting is often requested. So NoEstimates came up with a very useful and low-cost method of forecasting. However, it has brought with it a whole host of misunderstandings, most of which are not from the book; the author must be as frustrated as anyone by the misinterpretation of his proposal. This has led to resistance from many (including me) to adopting this method for forecasting. I am all for delivering value quickly and in small chunks of prioritized work, but slogans that are used to excuse bad behaviour are damaging and hard to counter, especially when they seem so simple.

My biggest bugbear, and one I have covered previously, is that many have interpreted NoEstimates as an excuse to skip story refining entirely. This was not in the book, but nevertheless you can see any number of articles on the internet professing how adopting NoEstimates has saved them from wasteful refining meetings. The misconception is that if you don't need to estimate the story, then the act of understanding the story is no longer required; when actually the author was suggesting that you don't need to refine all work up front and could defer deeper understanding until it became relevant: the last responsible moment.


Story Writing and being Estimable

I encourage those writing stories to use the INVEST model for assessing the suitability of a story, and in that model the 'E' is Estimable. That doesn't mean you must actually estimate the story, just that you ask yourself whether the story is clear enough and well enough understood to estimate if asked: are there open questions? Is it clear what the acceptance criteria are and that they can be met? There may be a subtle distinction there, but NoEstimates does not offer an alternative to writing and refining good stories. It is just a method for simple forecasting and for encouraging the deferral of effort until it is necessary.

How does NoEstimates work?

Caveats aside, I will try to give a very high-level summary of how NoEstimates forecasting works, and when and where it doesn't. I shall do so via the medium of potatoes.

Preparing Dinner

I have a pile of potatoes on the side and I am peeling them ready for a big family dinner. My wife asks how much longer it will take me. By counting how many potatoes I have peeled in the last 5 minutes (10) and counting the potatoes I still have left to do (30), I can quickly and simply calculate a forecast of 15 minutes.

That is NoEstimates forecasting in a nutshell; it really is that simple.
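The arithmetic can be sketched in a few lines. The numbers are from the potato example above; the function name is my own invention.

```python
# A minimal sketch of NoEstimates-style forecasting: project the
# remaining work from recently observed throughput.

def forecast_minutes(done_recently, window_minutes, remaining):
    """Time to finish `remaining` items at the recently observed rate."""
    rate = done_recently / window_minutes  # items completed per minute
    return remaining / rate

# 10 potatoes peeled in the last 5 minutes, 30 still to do:
print(forecast_minutes(10, 5, 30))  # → 15.0
```

The same division works for stories per week against a backlog count, which is precisely why the method is so attractive: there is nothing to it beyond a throughput measurement.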

Assumptions

However, the mathematics relies on a certain set of assumptions:

1. I did not apply any sorting criteria to the potatoes I selected; e.g. I wasn't picking only small or only large potatoes. We assume my selection was random, or at least consistent with how I will behave in the future.

2. The team doing the work doesn't change. If my son were to take over to finish the job, he might well be faster or slower than me, and my forecast would no longer be useful.

3. We also assume that I will not get faster.

4. We assume that all the potatoes in the backlog will be peeled, and no others will be added. If my wife asks me to peel more potatoes, or to do the carrots too, the forecast will no longer apply and will need revising.

So there we have it: a very simple and surprisingly accurate method for forecasting future work. But do you see any flaws in the system?

Flaws in the system

Flaw 1. Comparing potatoes with potatoes

The first flaw is that I am getting the potatoes ready for roasting, so I want them to be broadly similar in size. When I peel a potato I am sometimes also slicing it: some potatoes only need peeling, others may be sliced once, and others more than once. Some potatoes are bad and I throw them away.

If my wife comes along, sees my pile of potatoes, and asks how much longer it will take, I can count the pieces I have completed in the last 5 minutes (18) and count the potatoes I still have left to do (30). The problem is that I don't know how many unpeeled potatoes were needed to produce those 18 peeled and sliced pieces; I am not comparing like for like. To give this estimate I would have needed to count how many unpeeled potatoes I had used, information I don't have. Maybe I could take a guess and then use that guess to extrapolate a forecast, but that sounds like guesswork rather than forecasting.
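A sketch of how that missing ratio infects the forecast. The 18 finished pieces and 30 raw potatoes are from the example above; every value of the raw-to-finished ratio is exactly what the text says it is, a guess.

```python
# Flaw 1 as code: throughput is counted in finished (peeled-and-sliced)
# pieces, but the backlog is counted in raw potatoes. Bridging the two
# needs a conversion ratio that was never measured.

def forecast_with_ratio(finished_recently, window_minutes, raw_remaining, pieces_per_raw):
    expected_pieces = raw_remaining * pieces_per_raw  # guessed conversion
    rate = finished_recently / window_minutes         # finished pieces per minute
    return expected_pieces / rate

for guess in (1.0, 1.5, 2.0):  # each guessed ratio gives a very different forecast
    print(guess, round(forecast_with_ratio(18, 5, 30, guess), 1))
```

The spread between the answers is the point: the simple throughput division still runs, but its output now depends entirely on an unmeasured parameter.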

Flaw 2. Forecasting an unknown

Let's assume that I am producing 10 peeled potatoes every 5 minutes and I am asked to forecast when I will be done, but so far I have been grabbing a handful of potatoes at a time, peeling them, and then going back for more. One could say that my backlog of work is not definitive: we have a whole sack of potatoes, but I won't use them all for this one meal. I am simply adding work as I need it, my aim being to judge when I am satisfied I am done and start cooking. It is very difficult for me to judge when the sack will be empty or when I will have prepared enough for the meal.

Flaw 3.  Changing and evolving work

It is a big family dinner, and Uncle Freddie has just called to say he will be coming, so we need to add more food. Aunt Florence eats like a bird, so it is probably not worth doing a full portion for her. And the table isn't really big enough for everyone, so maybe we should do an early meal for the kids first. The point here is that simple forecasting only works if you have a reasonably good assessment of the work still to be done; if your backlog is evolving, with work being added or removed, then the forecast will be unstable.

Flaw 4.  Assuming consistency

When selecting work to do next, I have a tendency to choose the work that will bring the most value for the least effort: the highest ROI. So in this case I may choose the small potatoes first, meaning less peeling and less chopping. But that means that if I count my completed work and use it to forecast my future work, I will end up underestimating how much is left; the backlog has some really big, awkwardly shaped potatoes that will take far longer to do, but my forecast is based on only doing small, simple potatoes.

Doesn’t this apply to all forms of estimates and forecasts?

Flaws 2 and 3 apply to any form of forecasting; they are not unique to NoEstimates. Flaws 1 and 4 could potentially be mitigated with the use of T-shirt sizing or story points, but doing so requires a level of up-front effort. That effort is not spent on peeling potatoes, so it may well be considered waste, unless you see value in a more reliable forecast.

For me, Flaw 1 is my main objection to NoEstimates (beyond the belief that refining is unnecessary). When stories are refined and better understood, it is normal to split or discard stories, and often to add stories as the subject becomes better understood. So any forecasting tool that uses a metric based on counting refined stories to predict a backlog of unrefined stories risks over-simplifying the problem. And because the maths is so simple, it can lead to a confidence level that exceeds the quality of the data. These assumptions based on flawed data get even worse when you use a tool like Monte Carlo forecasting, which applies a further confidence level to the forecast. Giving a date combined with a confidence level adds such a degree of validity and assurance that it is easy to forget that a forecast based on duff data will produce a duff estimate, no matter how prettily we dress it up.
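To illustrate that danger, here is a toy Monte Carlo throughput forecast of the kind described. The weekly story counts, backlog size, and percentile are all invented for illustration; the confident-looking number it reports is only as good as the backlog count fed in, and unrefined stories that later split break it silently.

```python
# Toy Monte Carlo forecast: repeatedly simulate burning down a backlog
# by sampling historical weekly throughput, then report a percentile.
import random

def simulate_weeks(history, backlog, runs=10_000, seed=42):
    rng = random.Random(seed)
    outcomes = []
    for _ in range(runs):
        remaining, weeks = backlog, 0
        while remaining > 0:
            remaining -= rng.choice(history)  # sample a past week's throughput
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    return outcomes[int(runs * 0.85)]  # the '85% confident' week count

print(simulate_weeks([3, 5, 4, 6, 2], backlog=40))
```

Feed the same function a backlog of 40 "stories" that will actually split into 60, and it will still hand back a tidy 85th-percentile week count; the statistics dress up the duff input rather than correct it.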

Summary

Forecasting is risky at the best of times, especially in Agile, where it is our goal to have the work evolve and change in order to give the customer what they truly want. A forecast needs to be understood by both parties and accepted as an evolving, changing metric; anyone expecting a forecast to be a commitment, or to be static, is likely to be disappointed. Just take a look at the weather forecast: the week ahead changes day by day, and the further out the forecast, the more unreliable it becomes. Understanding the limits of the forecasting method is crucial. A simple tool like NoEstimates is fantastic IF the assumptions can be satisfied; if they cannot, the forecast will be unreliable.

It is probably also true that your forecast will improve if you spend more effort understanding the work. Time spent refining the stories will improve your knowledge, but no forecast can reliably predict work you do not yet know about. The question, as always, is "What problem are you trying to solve by forecasting?" That will guide you in determining whether the up-front effort is worth it.

Related articles:

Why I think estimating isn’t waste
Demystifying story point estimation

Estimates

I seem to have had a great many conversations of late to do with estimating; not story estimating, which is now pretty well understood and accepted, but project-level estimation.

Project level estimation is like a hydra, every time you think you have dealt with it, it comes back with two more heads.

I think part of the problem is that we don't have a common language; we use the word 'estimate' to mean so many things. I get the impression that there is a great deal of confusion between accuracy and precision, between a plan and an estimate, between an estimate and a foretelling of the future, and, worst of all, between an estimate and a commitment. I also wonder if the question being asked is the right question: are you asking "is the project viable?" or "will it give a good return on investment?" And finally there is a notion that if you do not deliver in line with your estimate, then you got it wrong, and that is bad.

What is an estimate?

Roughly (and generally quickly) calculate or judge the value, number, quantity, or extent of something.

Was my estimate wrong?

Let's start with an easier one. I estimated a project would cost £1m and it ended up costing £2m; did I get it wrong?

Superficially the answer may seem obvious: for a commercial organisation to consistently underestimate projects would result in bankruptcy, so clearly it must be wrong? But in most practical situations an estimate isn't wrong simply because the outcome doesn't match it. An estimate is made quickly and roughly, based on incomplete information, and the true answer is not known until the work is done. You can overestimate or underestimate, even wildly, but you can't be wrong. That sounds confusing, so let's take a practical example.

I estimate the time it takes to drive into town; past experience says between 10 and 20 minutes, so I estimate 15 minutes. If the journey actually takes 30 minutes, that doesn't make my estimate wrong. I most certainly did not correctly predict the future, but I wasn't wrong. If I were asked the next day to estimate the same journey and I felt that the previous day was abnormal, I might estimate 15 minutes again: I still believe the basis for my original estimate holds true, and one abnormal example doesn't change that. But say I discovered there were roadworks on that route; I might then alter my estimate to reflect a new route, or the anticipated delays.

I'll give a second, mathematical example that will hopefully simplify things.

If I roll two fair six-sided dice, can you estimate the total of the face values?

Mathematically speaking, the most likely outcome is 7, and the probable average over many rolls is also 7. So it would be sensible to estimate '7', yet 5 times out of 6 the result would not match your estimate. Our estimate was 'correct', and we can mathematically prove it is 'right', and yet we get the 'wrong' outcome 5 times out of 6.

Conversely, if I estimate '4', roll the dice and get it 'right', that doesn't make my estimate right; it makes me lucky.
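The dice claim is easy to verify by enumerating all 36 equally likely rolls:

```python
# Verifying the two-dice claim: 7 is the single most likely total, yet
# it comes up only one roll in six.
from itertools import product
from collections import Counter

totals = Counter(a + b for a, b in product(range(1, 7), repeat=2))
print(max(totals, key=totals.get))   # → 7 (the most likely total)
print(totals[7] / 36)                # → 0.1666... (1 in 6)
```

So the mathematically best estimate is still 'wrong', in the naive sense, five rolls out of six.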

That doesn't help the poor businessman who has just gone £1m over on his current project, but hopefully over time he will get better at estimating and will, on average, get it 'right'.

My point is that an estimate is a useful tool to help guide our decisions; it is not a method for foretelling the future.

Are you asking for an Estimate or a Plan?

Back to the car journey example. I have made the journey 10 times; the shortest time was 10 minutes, the longest was 30 minutes, and the average was 15 minutes. If our goal is long-term consistency, then the best 'estimate' is probably 15 minutes, because the likelihood is that if I did the journey 10 more times, the average would again be 15 minutes.

Easy! But I have an appointment at 4pm that I mustn't be late for; what time should I leave? My estimate is 15 minutes, so I leave at 3:45pm. But at that rate I'm likely to miss my appointment a high proportion of the time. What went wrong? My estimate was perfect.

My mistake was that I confused an estimate with a plan. My estimate is useful information, but whilst my journey is likely to average 15 minutes, I need to build contingency into my plan if I have a fixed deadline. This all sounds obvious, but in your own experience, how many times has someone taken an estimate and put it directly into a project plan with other dependencies relying on it?
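A quick sketch of why the average makes a poor plan. The ten journey times below are invented to match the figures above (shortest 10, longest 30, average 15); planning with only the 15-minute average means being late on every journey that runs longer than the average.

```python
# Estimate vs plan: an accurate average is not a safe deadline.
history = [10, 11, 12, 13, 13, 14, 15, 16, 16, 30]  # invented sample

average = sum(history) / len(history)     # the 'estimate': 15.0 minutes
late = sum(t > average for t in history)  # journeys longer than the estimate
print(f"average {average} min; late on {late} of {len(history)} journeys")
```

The estimate is spot on as an average, yet leaving only 15 minutes before a hard deadline would have made me late on a meaningful share of these journeys; the plan needs contingency the estimate does not.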

My point once again is that an estimate is a useful tool to help guide our decisions; it is not a foreteller of the future, and used in isolation, without understanding, it can be confusing or damaging.

Flexible or fixed

But let’s go further. If my estimate becomes a plan, then my 15-minute estimate needs to become a 25-minute ‘plan’ so I can offer reasonable confidence of completing by a fixed deadline. Notice that to have a confident plan, in this example, it contains on average 10 mins of contingency: 10 mins of potential waste whenever a fixed deadline is imposed. If I planned on doing this journey 10 times and had to be confident of meeting an agreed time each time, I would need to plan for 250 minutes; whereas if I simply allowed my plan to flex and let each journey take as long as it took, I would likely do the same 10 journeys in 150 minutes. That is a huge difference: by removing a fixed commitment I allow contingencies to be shared, and I am far more likely to achieve more. It is a contrived example, but it illustrates the amount of waste that is necessary in a fixed plan.
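The shared-contingency argument can be sketched with a rough simulation. The journey model below is invented (quick most of the time, occasionally slow, averaging about 15 minutes and capped at the 30-minute worst case):

```python
import random

random.seed(1)  # repeatable sketch

def journey():
    # Invented model: mostly quick trips, occasionally slow, averaging ~15 mins
    # and capped at the 30-minute worst case from the example.
    return min(30.0, 10.0 + random.expovariate(1 / 5))

trips = [journey() for _ in range(10)]

flexible_total = sum(trips)    # let each trip take as long as it takes
fixed_total = 25 * len(trips)  # plan 25 mins per trip to be confident of each deadline

print(round(flexible_total))   # typically around 150 minutes
print(fixed_total)             # 250 minutes: ~100 minutes of planned-in contingency
```

The per-trip contingency is only needed because each trip carries its own deadline; pooled across ten trips, most of it is never used.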

One last point: even with significant contingency there is still no guarantee. I could still break down, or something unexpected could make me late.

Are you turning an Estimate into a commitment?

Let’s take the journey and appointment a little further. I use the estimate of 15 mins, I add a sensible and realistic 5 mins of contingency, and then you say… “Can you guarantee you will be on time?”

No! You didn’t ask for a guarantee or a commitment; you asked for an estimate. My estimate was 15 mins, and I feel pretty confident in it. If you want a guarantee, then I can’t give one. I couldn’t guarantee 45 mins or even 60 mins; I cannot offer a 100% guarantee because I cannot foretell the future. I can only offer an estimate based on my experience of the past.

Had you asked me whether I could be ‘confident’ I’d be on time or if it was ‘probable’ I’d be on time the majority of journeys then I’d be much more comfortable with the assessment.

My point is that an estimate is a useful tool to help guide our decisions; it is not a way to foretell the future, and the future certainly does not come with a guarantee.

Accuracy and Precision

Another oddity with estimation is that I am often asked for a more accurate estimate. Let’s say I’ve estimated approximately 6 months, or given a range of 4-8 months. The response is quite often: “Can you be more accurate?”

What do they mean? An estimate is by nature a rough approximation, hopefully an accurate one. But clearly they are expecting something else, and they will only know whether it was inaccurate after the work is done.

Take two estimates: A) 2325.4 seconds. B) 1 hour.

Which is more accurate?

I don’t have a clue: accuracy is measured against the outcome, which is as yet unknown. One estimate is more precise than the other, but that doesn’t make it any more accurate.
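A tiny sketch of the distinction, using an invented outcome figure (accuracy can only be computed after the work is done):

```python
# Two estimates of the same piece of work, and an invented outcome.
estimate_a = 2325.4  # seconds: very precise
estimate_b = 3600.0  # '1 hour': far less precise

actual = 3500.0      # hypothetical outcome, knowable only after the work is done

error_a = abs(actual - estimate_a)  # ~1174.6 seconds out
error_b = abs(actual - estimate_b)  # 100.0 seconds out

# The less precise estimate turned out to be the more accurate one.
print(error_b < error_a)  # True
```

Precision is a property of the estimate itself; accuracy is a relationship between the estimate and an outcome that doesn’t exist yet.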

My guess is that when they ask you to be more accurate, they mean either “What is your confidence level for this estimate, and what could you do to increase it?” or, as is more often the case, “That is higher than I was expecting/hoping.”

Can you estimate that again…

This is a fun one. It is true that the more information you have, the more confident you can be in your estimate. However, there is a law of diminishing returns, and in software the domain is generally so complex, with so many variables, that the effort spent investigating is disproportionate to the gain in understanding and confidence. For example, I’d suggest that spending a couple of days investigating a 6-month project could lead to a reasonable level of confidence in an estimate; spending a further 2 weeks investigating will, as often as not, not alter the estimate by a significant degree, and will only lead to a marginal increase in confidence in the original estimate. (I am not saying you will not gain a lot of understanding, but the issue is whether it materially impacts the estimate.) Usually it results in more clarity about what you had assumed, and highlights how much you still don’t know.

Summary

That may sound defeatist, but the reality is that in most software projects, requirements will change over the course of the project, new requirements will emerge, and priorities will shift. The team may not remain the same; external resources may not be available when needed. No amount of investigation can predict the future with any degree of certainty; the best we can do is identify potential risks and be prepared to adjust. If you must have a fixed deadline then you really should have a significantly flexible scope. Have confidence in rough estimates and be agile.

So if we can learn to create plans based on rough estimates, prioritise work sensibly, and be prepared to adjust, we are far better able to deliver, and are likely to deliver more than if we insist on fixed plans based on what will always be incomplete information. A fixed plan by nature includes contingency, and contingency generally gets used; there is huge potential waste, and the cost of change is high. A flexible plan should flex to the amount of time needed, or the scope should flex to the amount of time allocated. If you cannot do this, however good your intentions, then directly or indirectly you will need to introduce wasteful contingency to enable you to fool yourself that you are delivering to a fixed plan.

You can’t handle the Truth!

When it comes to leadership it seems that a lot of problems and a great many solutions come down to either a lack of communication or lack of trust, often both.

But why are those two skills so difficult to master? How much time and effort gets wasted simply because we don’t trust our employees, or don’t understand a request? How much dissatisfaction and uncertainty results from not trusting your boss?

Around ten years ago I was working on a major release of software; it was a gated waterfall project. Three others and I were critical reviewers and gatekeepers for a major component. At the gate review our component was a shambles: really late, testing far from complete, documentation hardly started. My understanding of the gate review was that it existed to quantify risk and to ensure all components were on track with the plan: essentially a structured early-warning system.

We four reviewers unanimously agreed to reject the component. We met a lot of resistance and were put under pressure; we were told that “we were delaying the project” and that “it would make us look bad”. It was a difficult decision and we were well aware it would be uncomfortable. But the reality was that the project was behind, and we saw no value in faking things to keep a plan looking good. We felt the purpose was to highlight problems so they could be corrected.

But at some point the plan became more important than the product. The next day a company-wide announcement was issued: the project was on track, it had passed the gate, and all was well. We were shocked. It turned out that our boss had removed all four of us as critical reviewers and replaced us with others who were willing to say all was well. My colleagues and I were deeply unhappy about this.

The lack of trust was shocking; the lack of transparency and honesty showed just how dysfunctional the process was. Unsurprisingly, as the project neared the target date it slipped drastically, catching people by surprise; the project was close on six months late and we were not first to market. We will never know whether being transparent at that gate could have given the project the opportunity to rectify the situation sooner, but hiding the problem certainly didn’t help.

It is not that waterfall projects inherently lack transparency, but a rigid plan with a high cost of change creates a barrier to transparency. Project Managers feel pressure to hide problems in the hope they can fix them before anyone becomes aware, or, as is more often the case, in the hope that another part of the programme slips more so they are not in the firing line.

These days I advocate a software delivery framework that highlights problems as early as possible, but many execs don’t like this. I sometimes wonder if they prefer to pretend all is well, or imagine that problems will resolve themselves: an ostrich mentality that allows them to defer worrying until later.

Adapting to an Agile framework where everything is transparent can be a difficult adjustment for many execs and programme managers. Being aware of day-to-day problems, minor issues, or simply that some tasks take longer than expected can be a difficult experience for managers used to getting only positive assurances from PMs. Suddenly they are exposed to information that was previously masked from them. They must fight the urge to interfere and learn to trust the teams, the Product Owners, and the Scrum Masters. In many ways it was easier to ‘trust’ a Project Manager with a Gantt chart when the real story was hidden, even when 90%+ of the time that story was inaccurate. A pleasant lie is always easier to accept than a painful truth.

You can’t handle the truth

The sad situation is that many execs simply cannot handle the truth. They want an agreeable story that lets them claim a project is on track, and are happy to believe all is well until it is too late to hide it any longer. Then they can shout and blame, but this usually happens after it is too late to take corrective action; the screaming and the desk-thumping achieve nothing but to upset people. No one wants a project to be late; chances are everyone has worked very hard and done their best. So in reality they have far less influence over the outcome than if they had valued honesty over platitudes earlier in the process. Rather than enforcing overtime for the last couple of months, or scrapping all quality control to meet a deadline, they could have taken sensible, planned corrective action much earlier had they simply fostered a culture of honesty and openness.

I like to think that most software professionals have a desire to do a good job; they want to complete projects quickly and to a high quality. Trusting them should not be a great leap of faith. In my experience you are more likely to get overly optimistic promises than padding. Your biggest dangers are feature creep or boredom; it is very rare to find a developer who wouldn’t prefer to be busy and challenged.

In short, trust the development teams; it is very likely that trust will be rewarded.

Should we re-estimate stories at sprint planning based on better understanding of how to implement a solution?

Estimates should be based on the relative size of the story. For example, say our story is to complete a 1000-piece jigsaw puzzle and we have estimated it as an 8-point story. The story takes us 6 hours to complete. We break up the puzzle and put it back in the box.

We are then asked to do the exact same puzzle again as a new story. We have just done it, so we know the difficult bits; we have fresh knowledge and recent experience, and it is highly likely we’d complete the story in much less time. But the story is identical: we still have to do the same puzzle. Last time it was an 8-point story; this time it is still an 8-point story.

In other words, our experience changes our ability to complete the story; it doesn’t change the relative size of the story. We estimate using relative size because we don’t know who will be doing the story or when it will be done.

Hopefully we learn and get better; equally, it is likely that more experienced or senior developers will complete stories quicker, but none of this changes the relative size of the story.

Story point estimates exist to offer the ability to forecast; they are accurate in that context, and over the long term only. Think of stories like rolling a die: a 3-point story is like rolling a die 3 times and totalling the results, and an 8-point story is like rolling it 8 times and totalling the results. Sometimes a 3-point story will take longer than a 5-point story, but in the long run the average will be 3.5 per roll.

I could never guarantee that the next roll will result in a value of 3.5; what we can offer is probability, not predictability, over a longer period. By the time you roll the die (take the story into sprint planning), the story points offer no value or interest to the development team: the forecasting value is gone. The story will take as long as the story takes; we must trust the team to do their job.
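The dice analogy is easy to simulate. This sketch shows both halves of the claim: individual stories are unpredictable, while the long-run average per point is stable:

```python
import random

random.seed(7)  # repeatable sketch

def story_duration(points):
    # Model an N-point story as N die rolls summed: right on average,
    # unpredictable for any single story.
    return sum(random.randint(1, 6) for _ in range(points))

# Individual stories are unpredictable: a 3-point story can outlast a 5-point one.
inversions = sum(story_duration(3) > story_duration(5) for _ in range(10_000))
print(inversions > 0)  # True

# But the long-run average per point is stable, which is what forecasting uses.
rolls = [random.randint(1, 6) for _ in range(100_000)]
print(round(sum(rolls) / len(rolls), 1))  # ~3.5
```

Points support forecasting in aggregate; they say nothing reliable about any single story.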

Estimating at a project level.

One of the most difficult aspects of the transition to Agile is the confusion over how estimation is done.

Estimation is difficult. The experts suggest that even with a full picture of what is required, and with clear, detailed, and fixed requirements, the best estimators cannot realistically estimate better than within a 25% margin of error. It is easily possible to do worse than this, but it isn’t possible to be consistently more accurate; it’s only possible to occasionally get lucky.

But in Agile we start without clear requirements; we don’t have a fixed scope, and chances are the requirements we do have are at a high level, with many unknowns. I could talk about the cone of uncertainty, but I’m not convinced most businesses will accept that level of uncertainty even when it is based on sound reasoning. In my experience they would rather base decisions on a specific guess than on an accurate ranged estimate, especially a wide range. It sounds daft when I say it like that, but I bet you have experienced it.

Nevertheless, it is still often necessary for commercial reasons to have a solid estimate before starting a project (Agile or otherwise); in many situations projects need to provide a good ROI or are limited by a budget. In some situations the ability to estimate reliably could be the difference between the success and failure of a business. These estimates can be crucial.

So how do projects provide reliable and useful estimates?

First of all, it is worth noting that estimates are largely misunderstood in general; they are misused and can often be damaging to the project. But still estimates are asked for and used to make important decisions.

In a survey from a few years ago*, a range of IT companies were asked about their estimation strategies. The results were worrying, and yet reassuring in that the difficulties were universal.

* http://www.ambysoft.com/surveys/stateOfITUnion200907.html

Around 44% of the project teams in the survey described themselves as ‘Agile’, so this is a balanced pool of projects and should give an idea of estimation practice across the board.

When asked to give estimates to the business for project delivery, around 65% of teams were asked to provide estimates within the 25% margin of error that experts in the field say is ‘impossible’. 11% were allowed no margin of error at all: they had to specify a single date or a specific cost for the project. Conversely, 21% were not asked to give any estimates at all. The rest were allowed a margin of up to 50% on their estimates.

So how did that pan out for those companies?

Well, 40% never even tracked whether those initial estimates were correct, so it is difficult to draw any conclusions from them. But 40% came within that magic 25% of their estimates, which frankly is an incredible statistic; when I first read it I started questioning the validity of the survey. 40% of software project estimates being more accurate than the ‘experts’ say is possible to achieve consistently is more than just getting lucky; it is frankly unbelievable. At this point I was about to dismiss the survey as nonsense, but I read on…

How is it possible?

In order to achieve the 25% margin of error the projects did the following:

  • 18% admitted they had padded their original estimate.
  • 63% de-scoped towards the end of the project to deliver on the estimated schedule.
  • 34% asked for extra funds to complete the project on the original estimated schedule.
  • 72% extended the schedule to deliver the promised scope (effectively revising the estimate; success was then measured against the revised estimate, not the original).

It is impossible to tell from this how many of the projects matched the original estimates, but clearly it wasn’t very many. It is not a stretch to conclude that the vast majority of respondents de-scoped and/or extended the original estimates, including those that had already padded them.

Moving goalposts is the key

My reading of this survey is that very few, if any, delivered what was estimated in the originally estimated time-frame/budget. It makes very bleak reading, and regardless of whether the projects were Agile or not, the estimates did not deliver what the business asked of them.

If we take the stated purpose as being simply to plan and budget, and assume the estimates were not padded or reinterpreted, then they hold very little value given their lack of accuracy.

In my opinion if any of the businesses that demanded such specific estimates went on to actually base business decisions on the accuracy of those estimates, then they were just setting themselves up for disappointment and future problems.

There is no way from this survey to conclude what the accuracy of the original estimates actually was, other than to say that even with padding, de-scoping, and extended schedules, teams were still unable to meet the original expectations; the estimates were overwhelmingly wrong, and seemingly nearly always underestimated the true time/cost. This reads like a recipe for disappointed customers and shrinking profit margins.

That is a very long-winded way of saying that (according to this survey at least) no one in the industry, Agile or otherwise, is producing reliable estimates for software projects. We consistently get it wrong and, more worryingly, fudge the figures so we never learn from our mistakes. So any suggestion that estimating Agile projects is more difficult is not based in fact; estimating software projects is difficult, full stop.

Do estimates have value?

Now that is a different question. If I was running a business and received a project estimate of 6 months, I would be foolish to consistently believe it will be delivered to the defined scope in that time-frame. But that doesn’t make the estimate useless. If one project estimates 6 months and another estimates 3 months, I can conclude that the first is likely to take longer than the second, especially if the same person or group has estimated both. Both estimates are likely wrong, but chances are that on average, and over time, they will be wrong by a consistent margin, which makes them predictable.

If I check historic records I might be able to see that projects estimated at 6 months generally take 8-12 months. Better yet, I could ask the estimators to compare the current proposed project against previously completed projects, identify the one closest in size and scope, and use the actual figures from that comparator. Empirical evidence is so valuable that I’m surprised more emphasis is not put on keeping track of past estimates against actual delivery costs and schedules.
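As a sketch of using that empirical evidence, here is a simple calibration; the estimate/actual pairs below are invented for illustration:

```python
# Hypothetical history of past projects: (estimated months, actual months).
history = [(6, 9), (3, 5), (12, 18), (6, 10), (4, 6)]

# The average overrun ratio gives a simple empirical correction factor.
factor = sum(actual / estimate for estimate, actual in history) / len(history)

new_estimate = 6  # months, the raw estimate for the next proposed project
calibrated = new_estimate * factor

print(round(factor, 2))      # 1.57: these projects ran ~57% over their estimates
print(round(calibrated, 1))  # 9.4: a calibrated forecast, still not a commitment
```

Even a crude factor like this turns a consistently wrong estimator into a usefully predictable one; more sophisticated approaches would use a range rather than a single multiplier.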

Estimates are not commitments

Essentially, we need to accept estimates as simply estimates, not as a plan or a commitment. Any PM who transposes an estimate for a software project straight into a plan is nuts, and it happens so often that, in my experience, developers turn white and have panic attacks when asked for an estimate; painful experience says the estimate will be misused, and ultimately the one who gave it gets blamed. If the business could be trusted to accept that estimation is not an exact science, and to factor in realistic contingency based on empirical evidence, then developers would be less afraid to give estimates.

So how should we do it?

I have two suggestions. The first is to use an extension of the Planning Poker process. Take a group of people who are experienced with software delivery and relatively knowledgeable about the scope and complexity of what is being asked, e.g. Product Owners, Business Analysts, Project Managers, and representatives from development and testing. Ask them to estimate a variety of projects relative to each other. I’d use Fibonacci numbers or T-shirt sizes to keep it at an abstract level, and if possible I’d include a benchmark project (or more than one) where the actual time/cost is known.

If we accept that the best we are going to get is a granular, relative, ball-park estimate of a project, then this should give you that and more. In fact, for budgeting purposes a reliable granular estimate is of far more value than an unreliable specific figure, and far more valuable than the estimates in the survey. Over time it is likely that the estimation team will learn and improve; they will get better with every completed project. I’d have far more confidence saying a project is either a Medium or a Large T-shirt. The T-shirt sizes could map to high-level budgets.
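For illustration only, a T-shirt-to-budget mapping might look something like this; the sizes, ranges, and benchmark figures are invented, not a recommendation:

```python
# Invented mapping from relative T-shirt sizes to budget bands; a known
# benchmark project anchors the scale.
tshirt_budget = {
    "S": (1, 2),    # person-months, illustrative figures only
    "M": (3, 6),
    "L": (6, 12),
    "XL": (12, 24),
}

benchmark = ("M", 5)  # a completed project sized 'M' that actually took 5 person-months

size = "L"  # the relative estimate agreed in the planning-poker session
low, high = tshirt_budget[size]
print(f"{size}: budget {low}-{high} person-months")  # L: budget 6-12 person-months
```

The point of the band is honesty: it advertises the real uncertainty instead of hiding it behind a precise-looking single figure.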

My second suggestion, which could be used in conjunction with or independently of the first, is to set a budget and ask the team to provide the best product they can within that time/cost. A good Scrum team will be able to prioritise stories and features to ensure you get the best value for money. If that budget is based on the poker estimates above, it is more likely that the budget chosen is realistic and that you will get the product you want. You will also very quickly be able to tell if the project will be unable to meet the goal, and can cut your losses early rather than pouring more money into a money-pit project that is over-running but too far down the line to cancel.

Estimation is a difficult skill to master, but a group is far better than an individual.