Agile thinking in ancient Rome

I found a number of quotes from Publilius Syrus, a popular writer and favourite of Julius Caesar, that seemed so appropriate to Agile thinking today. It is fascinating how these lessons have existed for millennia, and yet we still frequently fail to follow what would seem to be common sense.


Here are some of his maxims:

On agility in planning:

“It is a bad plan that cannot be altered.”

 

On commitments:

“Never promise more than you can perform.”

 

On quality:

“Nothing can be done at once hastily and prudently.”

 

On multi-tasking:

“To do two things at once is to do neither.”

 

Two thousand years later, the old maxims are still relevant.

Monte Carlo Syndrome

Monte Carlo Method

Monte Carlo forecasting is a method for creating predictive forecasts by repeatedly running random simulations over sample data to generate a range of possible outcomes, and then using those outcomes to forecast the results of larger (or future) situations. The predictive model offers a percentage probability of the result falling within certain ranges.
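As a concrete illustration, here is a minimal sketch of the technique in Python. All of the numbers are invented: a hypothetical history of weekly story throughput and a hypothetical backlog of 40 remaining stories.

```python
# Minimal Monte Carlo throughput forecast (all numbers invented):
# resample past weeks at random until the backlog is exhausted,
# repeat many times, then read percentiles off the simulated durations.
import random

history = [3, 5, 2, 4, 6, 3, 4, 5]  # stories completed in past weeks (invented)
remaining = 40                       # stories left to deliver (invented)
trials = 10_000

durations = []
for _ in range(trials):
    done, weeks = 0, 0
    while done < remaining:
        done += random.choice(history)  # replay a randomly chosen past week
        weeks += 1
    durations.append(weeks)

durations.sort()
for pct in (50, 85, 95):
    # e.g. "85% of simulated runs finished within N weeks"
    print(f"{pct}th percentile: {durations[trials * pct // 100 - 1]} weeks")
```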

It can be a very effective tool for statistical analysis of data, and there has been a surge in its use in software delivery forecasting. It can be a useful tool even with small sample sizes, but it relies on three crucial premises:

  1. The sample data MUST be reflective of the forecast situation.
  2. The data must also be statistically independent (the result of one event does not impact another).
  3. The greater the sample size, the more reliable the simulation will be.


Monte Carlo Fallacy

Rather amusingly, there is also a cognitive bias called the Monte Carlo Fallacy, where we assume past results impose a probabilistic bias on future events: “The last 5 spins of a roulette wheel came up Black, therefore the next is more likely to be Red, as the odds must balance out.”

That is the fallacy. On a fair roulette wheel the odds of black or red never change, no matter how many times you get one result; any combination of outcomes has the same odds as any other. In fact, it is far more likely that the wheel has a physical bias towards Black than that the odds are righting themselves. Probability has no memory.
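A quick simulation makes the point, assuming a fair European wheel (18 red, 18 black, 1 green): the probability of red immediately after five blacks comes out the same as the overall probability of red.

```python
# Sketch: a fair wheel has no memory. P(red) after a streak of five
# blacks matches P(red) overall (both close to 18/37, about 0.4865).
import random

colours = ["red"] * 18 + ["black"] * 18 + ["green"]
spins = [random.choice(colours) for _ in range(1_000_000)]

after_streak = [spins[i] for i in range(5, len(spins))
                if spins[i - 5:i] == ["black"] * 5]
print(f"P(red) overall:        {spins.count('red') / len(spins):.4f}")
print(f"P(red) after 5 blacks: {after_streak.count('red') / len(after_streak):.4f}")
```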

Applying the Method to the Fallacy

Amusing as it is, the Monte Carlo Fallacy is the act of using observation to misinterpret probabilistic statistics by assuming the probabilities have memory, while the Monte Carlo Method is the act of using observed events to create probabilistic forecasts by assuming that (often dependent) events have independence.

Just as a point of interest: if you used the Monte Carlo Method for gambling on ‘fair’ roulette wheels, you would likely fare no better than with any other ‘system’. Probability cannot be influenced; you would simply be using a ‘method’ to compound your fallacy.

However, in theory the method could be used to identify faulty roulette wheels or other unusual variations from probabilistic results (e.g. to spot cheating), or to predict gamblers’ behaviour.

Applying precision to inaccurate data is dysfunctional behaviour. It is using the fog of precision to create an illusion of accuracy.


The Fallacy of the Monte Carlo Method

As you have probably surmised, I have some grave concerns about using the Monte Carlo Method for forecasting software delivery. In my opinion, the Monte Carlo Method applies a huge degree of precision to an inaccurate forecast, and applying precision to inaccuracy is dysfunctional behaviour. It feels to me that we are using the fog of precision to create an illusion of accuracy.

The weaker the data available upon which to base one’s conclusion, the greater the precision which should be quoted in order to give the data authenticity.

Norman Ralph Augustine

Applying to Software Development

In software projects we tend to apply the reverse of the Monte Carlo Fallacy: we assume that the past predicts the future, so accurately that we can give a percentage confidence level. By doing so we are making certain assumptions.

The sample data MUST be reflective of the forecast situation.

  • Future stories are similar in size, scope, complexity, and effort to the sample data (e.g. early stories are similar to later stories).
  • Future stories are impacted similarly by external events (we don’t learn or fix problems).
  • Our ability to do the work is constant.
  • The team does not change, either in size or skill set.
  • The team’s ability to work together does not improve or degrade.

The sample data must also be statistically independent (the result of one event does not impact another).

  • The future work is not made easier by learning from previous work.
  • The future work is not made harder by adding to a growing code base.
  • We are not creating more rework later than we did at the beginning.
  • Testing, feedback, or support do not change as the product grows or ages.
  • We do not improve our skills, or our knowledge of the domain.
  • We do not mitigate problems to prevent recurrence.
  • We do not learn.

Would you feel comfortable giving that list of caveats along with a Monte Carlo forecast?


Where does Monte Carlo Work?

The Monte Carlo Method and its simulations have many useful applications (googling brings up a whole bunch of options), but it works best where observations of sample data can be used to predict behaviour. As an example: parcel delivery times, whilst they may change over time, will likely be stable enough to predict delivery at certain volumes. Or setting reasonable SLAs for an IT help desk, where the variation in the work is likely consistent.

Where does Monte Carlo NOT Work?

Where it does not work very well is in situations where the sample is small or is not likely to be reflective of the event being forecast, or where past events influence future events: if you learn, improve, grow, or become overloaded or congested.

Monte Carlo magnifies the significance of the sample data. Say you took a sample of 10 items and a 1-in-100 event occurred; Monte Carlo would apply a 1-in-10 significance to that event, despite it being 1 in 100. In software terms, an unusually large story, an unusually small story, or an abnormal blocking event can throw off the results by magnifying its significance.
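A short sketch of that magnification, using an invented sample of ten story cycle times that happens to contain one rare outlier:

```python
# Sketch of the magnification effect (invented numbers): a sample of 10
# cycle times includes one 40-day outlier. Resampling treats it as a
# 1-in-10 event, whatever its true underlying frequency.
import random

sample = [2, 3, 2, 4, 3, 2, 3, 4, 2, 40]  # cycle times in days (invented)
draws = [random.choice(sample) for _ in range(100_000)]
print(f"Outlier drawn in {draws.count(40) / len(draws):.1%} of resamples")
# Prints roughly 10.0%, even if the 40-day event is really 1 in 100.
```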

A quick example:

I ask 100 people to solve a puzzle and measure the time each takes.

If I use Monte Carlo forecasting based on the results, it will likely give me a pretty good projection of how long the next 100 people will take to complete the puzzle.

If I apply Monte Carlo to forecast how long it will take those same 100 people to complete the same puzzle again, it will likely get it completely wrong, as some of them will get quicker by learning from their first attempt.
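Here is a rough simulation of that scenario, with all timings invented and a simple assumed learning effect (second attempts somewhere between 10% and 50% faster):

```python
# Sketch: a forecast resampled from first-attempt times overestimates
# the second run, because the attempts are not independent; people learn.
import random

first = [random.gauss(30, 5) for _ in range(100)]       # minutes (invented)
second = [t * random.uniform(0.5, 0.9) for t in first]  # assumed learning effect

# One Monte Carlo trial: resample 100 first-attempt times and total them.
# (A full forecast would repeat this many times and take percentiles.)
forecast = sum(random.choice(first) for _ in range(100))
print(f"Forecast from first attempts: {forecast:.0f} minutes")
print(f"Actual second-run total:      {sum(second):.0f} minutes")
```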

Forecasting is Hard

The problem is that in most cases software delivery is uncertain. Most software products are complex, and the complexity varies from story to story. Work varies: early stories differ in composition, size, and scope from later stories, and we don’t order work in a manner that balances effort or delivery time; we order based on maximising value. Work evolves: we respond to feedback and we update. We learn: the first time we see a particular problem we may struggle, but the next time it is a breeze. As a consequence, forecasting is very, very hard to do.

NoEstimates helped a lot. We discovered that estimating stories up front only gave us a marginal gain in accuracy for an up-front cost, and that by simply counting stories and measuring throughput we can get an adequate level of predictability, if we accept and understand the limitations.

Whenever I see a forecast written out to two decimal places, I cannot help but wonder if there is a misunderstanding of the limitations of the data, and an illusion of precision.

Barry Ritholtz

Accurate Forecasting of Software Products is Snake Oil

I am being a touch cynical, but it is my experience that in most cases where Monte Carlo simulations are used for software delivery forecasting, they are being used by people who do not fully understand the tool, and the results are being presented to people who do not understand the limitations.

  • I see people repeatedly tweak the settings until they get the answer they want,
  • others exclude data they don’t like,
  • others make forecasts based on sample sizes of 6 stories,
  • many more base forecasts on backlogs of stories that include work that will be broken down at the point the work starts,
  • or on backlogs that exclude the possibility of new work being added (customer requests or bugs).

And those receiving the predictions believe them: when the results say there is a 90% confidence of hitting a particular date, they are completely unaware and uninformed of the assumptions behind that 90% figure, or of how it is calculated, and a great deal of expectation management will be needed to fix it later.

K.I.S.S.

I much prefer a simple moving average based on story count. Nearly anyone can understand the numbers, and because assertions of precision are absent, the expectation of accuracy is reduced. Good, healthy conversation is invited about what might impact the product, and it is very easy to see what could be done if the resulting forecast is beyond expectations.
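For illustration, here is how little calculation that involves (numbers invented, matching the earlier sketch):

```python
# A moving average of story counts: trivially auditable by anyone.
throughput = [3, 5, 2, 4, 6, 3, 4, 5]  # stories closed per week (invented)
remaining = 40                          # stories left (invented)

average = sum(throughput[-4:]) / 4      # average of the last four weeks
print(f"Average throughput: {average:.1f} stories/week")
print(f"Rough forecast: about {remaining / average:.0f} weeks")
```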

Other than a desire to dazzle someone with smoke and mirrors or to create false confidence, I struggle to see many situations where Monte Carlo adds any value over and above what can be achieved with simple calculations. For me there is far more value in everyone understanding the calculation and its limitations.

Final word

In the right context and for the right audience, the Monte Carlo Method and its simulations are hugely valuable and can be used to great effect. But ensure you understand how and when to use them.

A forecast is only useful if it provides data that can be used to make an informed decision and take action.

If you do not fully understand how Monte Carlo simulations work (including the assumptions and limitations), or if those you are presenting to do not, then be wary that you are not simply baffling your audience with graphs rather than presenting them with valuable information they can act on. They could be making ill-informed decisions.

 

Forecasting: asking the Why? behind the When?

A Product Owner or development team will often be asked for an estimate or a forecast for a product, a feature, an iteration, or a story. But when you go to a team member and ask them, it is not uncommon for the colour to drain from their face, for twitching to start, and for their pulse to race. Past experience says that the next words they speak will haunt them for the foreseeable future, maybe even for the rest of their career. “It is only a rough estimate,” you say to reassure them, but they just know you have other plans and the fate of mankind lies in the balance. Or so it seems.


The difference between a forecast and an estimate.

For most practical purposes there are only subtle differences, the main one being that a forecast deals exclusively with the future. When I think of the two, I tend to think of an estimate as information and a forecast as expectation. It is likely that I will use estimates to create my forecast; I may use multiple estimates and even apply other factors to derive my subjective forecast, whereas an estimate is generally an isolated, objective assessment.

Both have huge degrees of variation, accuracy, precision, and risk factors, but in some instances it may very well be that my estimate and my forecast are the same for all practical purposes.

For example, if I were asked to estimate a journey, I may say that it is between 15 and 45 minutes, and on average it is 30 minutes. If I am asked to forecast when I will arrive, that requires knowing not only the estimate but the time the journey starts, and whether there are other factors such as a need to stop for fuel or food. I may also add a certain weighting to the journey based on the time of day. Thus my forecast, whilst based on an estimate, may not be the same as the estimate. Other times I may simply take the estimate and it will become my forecast.

In practice, though, the terms are used interchangeably and it likely makes very little difference, except when it comes to expectations. If I ask “How long will this take?” I am asking for an estimate; if I ask “When will this be done?” I am asking for a forecast. Both are fine so long as everyone knows what is meant by the question.

Forecasting is misunderstood

Forecasting is guesswork. It may be scientific guesswork, and it may be based on past experience and metrics and clever projection tools, but it is a guess. You will be wrong far more often than you are right. The more professional and clever and precise the forecast looks, the more confidence you may instil in that guess. But it is still a guess, and when your audience gains undue confidence in your guess, they have a tendency to believe it as fact, not forecast. It might be that a guess is all that is needed, but dressing a guess up and delivering it with confidence may create a perception of commitment.

A forecast is a commitment (in the eyes of the one asking)

In normal circumstances, giving a forecast implies a commitment, even if you give caveat after caveat, and at any point where you even hint at a commitment to deliver to a fixed scope, fixed cost AND fixed date, you are setting yourself up for disappointment. Sadly, that is how most people see a forecast.

By all means work to a budget or a schedule or even a scope (although that is very likely to vary) but ideally ONLY one and never all three.

Understanding why a forecast is needed.

There are many reasons why people ask for forecasts, and the why is the most crucial aspect of the process. Understanding the why is the first step to providing the right information, and hopefully changing the conversation. Forecasting for stories and features is both more reliable and more accurate; it is project/product level forecasting where there is the most confusion and the least understood purpose.

If possible, try to change the question of “When?” into an understanding of “Why?”

Generally speaking, we gather information to help us make decisions. If the information will have no bearing on our decisions, then gathering that information is wasteful. Forecasting all too often falls into this category. We take time and effort to produce a forecast, and yet the forecast has no bearing on any decision; that effort was wasted. Or, more often, the level of detail in the forecast was unnecessary for the purpose for which it was used.

Making predictions of unknown work based on incomplete information and a variety of assumptions leads to poor decisions, especially when the questions being asked are not directly reflective of the decisions being made.

In my experience the Why? generally falls into three broad categories:

  1. How long will this take?
  2. How much will this cost?
  3. Simple curiosity/routine.

But most people don’t ask why; they spend time creating a forecast and present it without knowing how it will be used. Some people asking for a forecast/estimate may not even know why they are asking; they just always get a forecast. But let us delve a little deeper.

How long will this take?

Why do you need to know?   Some typical answers are:

  1. I need to plan
  2. I have dependencies
  3. I need to prioritize
  4. I have a deadline

How much will this cost?

Why do you need to know?   Some typical answers are:

  1. I want to know if I will get a good Return on Investment
  2. I need to budget
  3. I need to prioritize
  4. I have limited available funds

Curiosity/routine

  1. We always ask for a forecast, I need to put something in my report
  2. I want reassurance that you know what you are doing
  3. I want to know if my project is on track

What you will notice about these questions is that when you ask why, the original request for a forecast suddenly doesn’t make sense anymore. The askers are not really interested in the forecast itself, but in some other factor that they can infer from the forecast. If the intent is clear, then the question can be tailored to get the required information in a better way.

e.g.

I need a forecast – Why?… I need to know when I can allocate staff to the next product/project.

In this case, would a simple high-level guess be sufficient? “I feel confident that staff won’t be available for the next 3-6 months; let’s review in 3 months, I’ll have a better idea then…”

I could put a lot of effort into a detailed forecast, but an instinctive response may give all the information needed, saving us a lot of trouble.

or

I need a forecast – Why?…  I want to ship this to maximize the Christmas shopping period – or I want to time the launch for a trade show etc.

This isn’t a request for a forecast; it is a request for an assurance that there will be something suitable available for a particular event/date. I can give an assurance and confidence level without a detailed forecast. I may even change the priority of some features to ensure those needs are factored into the product earlier, or de-scope some features to meet a certain date.

or

I need a forecast – Why?… I have limited funds available, and I want to know when I can start getting a return on this investment.

This isn’t a request for a forecast; it is a request to plan the product delivery so that revenue can be realized sooner and for the least investment. It may be possible to organise delivery so that future development is funded by delivering a reduced-functionality product early, or so that development is spread over a longer period to meet the budget.

or

I need a forecast – Why?… I need to budget: the way this company works, I must get approval for my project expenses and staffing in advance, so I need to present forecasts of costs and timelines. This answer is twofold. First, can you challenge the process? It might be better to have a fixed staffing pool and prioritize products/projects such that the most important ones are done first before moving on to the next, in which case the forecast is irrelevant; it is a question of prioritization. Or, if the issue is ensuring staffing for the forthcoming year, could I simply say whether or not this product will be completed in the next budget year?

or

I need a forecast – Why?… I want confidence that you know what you are doing. This is not a request for a forecast; it is an assessment of trust in the team. There are many more reliable ways to ascertain confidence and trust in a team than asking for a forecast.

or

I need a forecast – Why?… I have a dependency on an aspect of this project. This may not be a request for a forecast of this project, but rather a request to prioritize a dependency higher so it is completed sooner, enabling other work to start.

or

I need a forecast – Why?… I want to know if my project is on track. Essentially what you are saying is: “I want to track actual progress of work done against a guess made in advance, based on incomplete information and unclear expectations, and I will declare this project to be ahead or behind based on that.” I am sure those of you reading this will know that what you are measuring here is the accuracy of the original guess, not the health of the project. But we have been doing that for decades, so why stop now?

And finally, the closest to a genuine need for a forecast:

I need a forecast – Why?… I need to prioritize or I want to know if I will get a good Return on Investment.

Both of these are very similar questions, but they are really requests for estimates, not forecasts. A rough estimate helps me gauge the cost, and when I evaluate that against my determination of the value expected to be gained from the project, it may help me decide whether the project is worth doing at all, or whether there are other projects that are more important. E.g. if it is a short project, it may impact my decision on priorities.

But even here it is not the estimate that has value; it is just information that helps me evaluate and prioritize. If I already know that this project has huge value and will be my top priority, does forecasting aid that decision?

When does it make sense to forecast?

Listing those questions above, it seems like I am suggesting that a request for a forecast is always the wrong question and is never really needed. And it is true that I struggled to come up with a good example of when it makes sense to do a detailed project-level forecast that includes dates or any type of scheduling expectations.

It may be necessary for a sales contract, to have a common set of expectations. Although I would very much hope that sales contracts for agile projects are for time and materials and are flexible in scope and dates, if not cost too. So for the purposes of setting expectations and in negotiations, I can see that there is value in an estimate, although I do still wonder whether a detailed forecast adds anything here that a reasonable cost estimate doesn’t cover. If possible, I would rather work to an agreed date or an agreed budget than present a forecast that may lead to false expectations.

But the reality is that sometimes your customer does want one and won’t tell you why, or doesn’t know why. Some customers (and managers) are willing to accept that a forecast will cost time and money, that the more detailed it is the more it costs, and of course that being more detailed may not make it any more accurate.

More detailed investigation for your forecasting is likely to build greater confidence, but may not make the forecast more accurate. You should ask the question “will more detail have a material impact on your decisions?”, and if the extra effort wouldn’t affect your decision then it is just waste.

I would caution that if there is no alternative and a forecast is made, it should be revised regularly and transparently; the sooner the forecast is seen as variable, the more useful it is. There is so much assumption tied to a forecast that it becomes a ball and chain if there is no expectation of it changing, so it must be refreshed regularly to prevent the early assumptions being seen as certainty, or they will lead to disappointment later.

Short term forecasting

The real value in forecasts is when the forecast covers a short frame of time. Over the short term we can have much more confidence in our forecasts, especially if we have been working on the project for a while and have historic information on which to base our forecast. There are fewer assumptions and fewer variables.

  • Can you forecast which features are likely to be included in the next release?
  • If I add a new feature now can you estimate a lead time for this?
  • Can you give me an estimate of how much this feature would cost?

By limiting the scope of the forecast to areas where we have more confidence in our expectations, the forecasts become more meaningful, and whilst there is still a danger they are seen as commitments, the risk is mitigated by the shorter time frame.

Alternatives to forecasting

Many of the questions above could have been resolved with much less effort than a detailed project-level forecast. In most cases we could achieve sufficient accuracy for decision making with a high-level estimate. E.g. “A product like that is similar to ‘x’, therefore I’d estimate 4-8 months for a small team”, or “as a rough estimate, 12-18 months for two teams”, and calculate costs accordingly.

These estimates are certainly broad, but if you have confidence in your teams, and you believe they will use Agile principles to get value early and are able to communicate progress and be transparent with issues, then I see no issue with broad estimates. They are sufficient to allocate staff and resources, to prioritize, to schedule, and to make Return on Investment decisions.

For the other questions, you may achieve far better answers through the use of product/feature burndown charts, user story mapping, or even simple high-level road maps. These tools provide useful information which can be used for managing expectations, identifying dependencies, and visualising the progress of a product. And crucially, they aid in setting priorities and keeping progress transparent.