Useful Scrum Metrics

Which do you use and Why?

I have already blogged about some common metrics, but here are ten that I use or have used most frequently. My aim is to summarise them here and expand on each in greater detail over the next few weeks. (I will come back and add links as I do.)


Before we begin, the first piece of advice about metrics is this: before you record data or use metrics, you should ask why you are recording the information and how it will be used.

If you cannot be sure that the metrics will be used for the benefit of the team, then you shouldn’t record them and certainly shouldn’t use them. Misused metrics become worthless because they are very easily gamed, especially if the team feels they will be abused.

Metrics are almost always reliant on honesty, and there are few metrics that couldn’t be ‘gamed’ by someone motivated to do so. Never attach reward or penalty to a metric, or even to multiple metrics; as soon as you do, they are broken.

Have you ever heard tales of teams that were paid by the number of lines of code written, or by the number of bugs fixed? I have worked at companies where the annual bonus was dependent on a high management approval rating in an annual employee survey.

Any environment where you are effectively penalised for honesty loses all hope of trust, transparency and integrity and your metrics become worthless.

If there is even a hint or suggestion that any of the metrics will be used for anything other than self-improvement and introspection, they immediately become worthless.


But that being said, as a coach I find measurements can be very useful. Measurements, metrics, reflection and retrospection are hugely valuable for coaching an individual or team on how to improve. An Agile Coach or Scrum Master has a variety of metrics available, and these ten are by no means a comprehensive list; in fact it is a very limited one. Each project or development environment is different, so tailoring appropriate metrics to your particular environment is important: not all metrics apply to all projects.

One last note: there are myriad tools for tracking projects. Some will measure these metrics, but many won’t, and it will be necessary to record the data manually. It is often difficult to backdate data you haven’t been recording.

In projects I work on I keep a lot of metrics just for myself, many more than the team uses. If at any point I feel a metric might have value I bring it to the team and let them decide whether it is useful and whether they are interested; too many metrics become confusing. As a rule of thumb the team only really takes an interest in a few of these, although which ones may vary over time. Keeping metrics without publishing them is also a good way of preventing people from changing their behaviour simply because they know they are being measured.

Beware of context

I am also aware that metrics can be misleading if taken out of context, and information can be dangerous if misunderstood. If I choose to publish metrics in reports or on an office wall, I work hard to make sure the context is clear and, where appropriate, there is an explanation. Even then you can be sure that someone will misuse the information, so be prepared!


1 – The Burndown:

This is probably the most often used Scrum metric. In essence, the work outstanding is measured at the start of the sprint, a line is drawn converging on zero at the end of the sprint, and each day the progress, or burn-down, is measured against it. The goal is to complete all planned (and emerging) work by the end of the sprint.
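The arithmetic behind a burndown is simple enough to sketch in a few lines. This is a minimal illustration with hypothetical numbers, not any particular tool’s implementation:

```python
# Sketch of a sprint burndown: the ideal line vs. actual remaining work.
# All figures are hypothetical, for illustration only.

def ideal_burndown(total_points, sprint_days):
    """Straight line from the total planned work down to zero."""
    step = total_points / sprint_days
    return [round(total_points - step * day, 2) for day in range(sprint_days + 1)]

# 40 points planned over a 10-day sprint
ideal = ideal_burndown(40, 10)   # 40.0, 36.0, 32.0, ... 0.0

# Points remaining, recorded at each daily scrum
actual = [40, 38, 37, 33, 30, 28, 22, 15, 9, 4, 0]

# A positive gap means the team is behind the ideal line that day
gaps = [a - i for a, i in zip(actual, ideal)]
```

Plotting `ideal` and `actual` together gives the familiar chart; the `gaps` list is what you glance at each morning.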

2 – The Velocity Chart:

The velocity chart can take many forms and can be used to track a number of key factors. In its simplest form it is a bar chart showing the number of points, or sometimes the number of stories, completed in recent sprints.

3 – The Velocity Committed/Actual Chart:

Sometimes the velocity chart can be further enhanced to show committed points compared with actual points; this is useful when the team is not consistently delivering on its commitments.
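The committed/actual comparison reduces to a ratio per sprint. A small sketch with made-up numbers:

```python
# Hypothetical (sprint, committed points, completed points) figures.
sprints = [
    ("Sprint 1", 30, 24),
    ("Sprint 2", 32, 31),
    ("Sprint 3", 28, 20),
]

# Fraction of the committed work actually delivered each sprint;
# a consistently low or erratic ratio is the conversation starter.
delivery_ratios = {name: done / committed for name, committed, done in sprints}
```

A team delivering 80% one sprint and 97% the next isn’t necessarily a problem; a persistent shortfall usually points at over-commitment during planning.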

4 – Release Burndown

The principle is that for a given product we measure the burndown of the entire product in a similar way to a sprint burndown; it may take the form of a cumulative flow diagram or resemble a velocity chart.

5 – Cumulative Flow Diagram

A cumulative flow diagram tells all sorts of stories about a project; essentially it measures the number of stories in each state as the project progresses.
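The underlying data is just a daily count of stories per state, stacked over time. A minimal sketch, with hypothetical states and counts:

```python
from collections import Counter

# Hypothetical daily snapshots of story states; a cumulative flow
# diagram stacks these counts as coloured bands over time.
daily_states = [
    ["todo"] * 8 + ["doing"] * 2,                  # day 1
    ["todo"] * 5 + ["doing"] * 3 + ["done"] * 2,   # day 2
    ["todo"] * 3 + ["doing"] * 3 + ["done"] * 4,   # day 3
]

snapshots = [Counter(day) for day in daily_states]

# A widening "doing" band suggests work in progress is piling up;
# a flat "done" band suggests nothing is actually getting finished.
done_over_time = [snap["done"] for snap in snapshots]
```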

6 – The Burn-up

The burn-up measures the actual effort expended in the sprint. In a perfect world it would be an exact inverse of the burndown, but we rarely predict all tasks in advance, nor estimate the ones we know accurately. It can help us see where the time went and improve our estimates.

7 –  Flow time, Lead Time, Cycle Time

There are some varieties in how these terms are used but I use a simplified version as follows:

Flow-time is the time from when a story is started to when it is completed by the team.
Lead-time is the time from when a story is added to the backlog to when it is completed into production – from conception to getting the value.
Cycle-time is the average elapsed time per completed story – we could be working on multiple stories at once, or overlap them; e.g. each story could have a flow time of 3 weeks but we could stagger them to deliver 1 per week, giving a cycle time of 1 week.

Using averages for these allows us to measure whether we are improving over time.
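Given three timestamps per story (added to the backlog, started, completed), the averages fall out directly. A sketch with invented dates:

```python
from datetime import date

# Hypothetical story timestamps: (added to backlog, started, completed)
stories = [
    (date(2024, 1, 1), date(2024, 1, 10), date(2024, 1, 17)),
    (date(2024, 1, 3), date(2024, 1, 12), date(2024, 1, 20)),
]

# Flow time: started -> completed.  Lead time: added -> completed.
flow_times = [(done - started).days for added, started, done in stories]
lead_times = [(done - added).days for added, started, done in stories]

avg_flow = sum(flow_times) / len(flow_times)
avg_lead = sum(lead_times) / len(lead_times)
```

Tracking `avg_flow` and `avg_lead` sprint over sprint is what turns these timestamps into an improvement signal.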

8 – Health check survey

A health check survey is more touchy-feely than the other, rawer metrics, but it can be useful for gauging the team’s own opinion of their progress and behaviour. You can measure opinions on anything, from software practices to interactions with other teams to the quality of stories; anything the team has an opinion on can be measured.

9 – Predicted capacity : Actual Capacity

Each sprint during planning, stories can be broken into tasks, and each task given a time estimate. Once these are totalled and the team has committed to the sprint content, the total effort estimate is the ‘predicted capacity’ at the start of the sprint. This is the important number. Tasks will almost always be added as the sprint progresses, and many of the task estimates will be off, but the initial predicted capacity is a measure of the workload the team is anticipating for the sprint.

In theory the actual capacity of the sprint is simply the number of team members multiplied by the number of hours available in the sprint, excluding planned holidays, planned meetings and predicted time on non-sprint activities.

Tracking these figures allows the team to gauge whether the capacity is realistic relative to previous sprints; the end result is a (hopefully consistent) ratio.
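The two figures and their ratio can be sketched as follows, with hypothetical numbers:

```python
# Hypothetical sprint capacity figures, all in hours.
team_size = 5
sprint_hours = 10 * 8    # ten working days of eight hours
holidays = 16            # planned time off
meetings = 20            # planned ceremonies and other meetings

# Actual capacity: people-hours available minus known deductions
actual_capacity = team_size * sprint_hours - holidays - meetings

# Predicted capacity: sum of task estimates the team committed to at planning
predicted_capacity = 310

# The ratio, tracked sprint over sprint, should settle around a stable value
ratio = predicted_capacity / actual_capacity
```

A team consistently committing to far more (or far less) than its actual capacity is worth a retrospective conversation, regardless of what the absolute numbers are.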

10 – Scope creep

This can be covered by the cumulative flow diagram and by the release burndown, but sometimes it is useful to explicitly report on what has been added to or removed from the scope each sprint.
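An explicit scope report can be as simple as a running net total of points added and removed. A sketch with hypothetical figures:

```python
# Hypothetical points added to / removed from the release scope per sprint.
scope_changes = [
    {"sprint": 1, "added": 8,  "removed": 0},
    {"sprint": 2, "added": 13, "removed": 5},
    {"sprint": 3, "added": 3,  "removed": 8},
]

# Positive net creep means the release is growing faster than it is being trimmed
net_creep = sum(c["added"] - c["removed"] for c in scope_changes)
```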

Should we re-estimate stories at sprint planning based on better understanding of how to implement a solution?

Estimates should be based on the relative size of the story. E.g. our story is to do a 1000-piece jigsaw puzzle, and we have estimated it as an 8-point story. The story takes us 6 hours to complete. We break up the puzzle and put it back in the box.

We are then asked to do the exact same puzzle again as a new story. We have just done it, so we know the difficult bits; we have fresh knowledge and recent experience, and it is highly likely we’d complete the story in much less time. But the story is identical: we still have to do the same puzzle. Last time it was an 8-point story; this time it is still an 8-point story.

In other words, our experience changes our ability to complete the story; it doesn’t change the relative size of the story. We estimate using relative size because we don’t know who will be doing the story or when it will be done.

Hopefully we learn and get better; equally, it is likely that more experienced or senior developers will complete stories quicker, but none of this changes the relative size of the story.

Story point estimates exist to offer the ability to forecast; they are accurate only in that context and over the long term. Think of stories like rolling a die. A 3-point story is like rolling a die 3 times and totalling the results; an 8-point story is like rolling it 8 times and totalling the results. Sometimes a 3-point story will take longer than a 5-point story, but in the long run the average will be 3.5 per roll.

I could never guarantee that the next roll of the die will result in a value of 3.5; what we can offer is probability, not predictability, over a longer period. By the time you roll the die (take the story into sprint planning) the story points offer no value or interest to the development team; the forecasting value is gone. The story will take as long as the story takes, and we must trust the team to do their job.
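The dice analogy above can be simulated directly: individual stories are noisy, but the averages converge on the point sizes. A small sketch:

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

def roll_story(points):
    """Simulate a story of the given size as that many die rolls."""
    return sum(random.randint(1, 6) for _ in range(points))

# Any single 3-point story may take longer than a single 5-point one...
three_pointers = [roll_story(3) for _ in range(1000)]
five_pointers = [roll_story(5) for _ in range(1000)]

# ...but over many stories the totals track the point sizes:
avg3 = sum(three_pointers) / 1000   # close to 3 * 3.5 = 10.5
avg5 = sum(five_pointers) / 1000    # close to 5 * 3.5 = 17.5
```

This is exactly the sense in which story points forecast: useful in aggregate, meaningless for any single roll.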

The value of a ScrumMaster

In June 2014 a new ScrumMaster was brought in to work with an existing and established development team.

At the point of joining, the team had already raised concerns about environments and tools, but had been unable to express specific issues that could be resolved; these problems had been intermittent for most of the year. The team was composed of a BA acting as Product Owner, a Lead Engineer, three other developers and two testers; with the addition of a ScrumMaster this made a team of eight, which equates to an approximate cost to the business of around £600,000/$1,000,000 a year.

The team measure their productivity using velocity: how much work, measured on a relative scale, is achieved over a period of time. The amount of work fluctuates for a variety of reasons: holidays, sickness, bug fixing, mistakes, environmental issues, focus or context switching, support requirements, spikes, etc. It is an anecdotal estimate of productivity only, but it has become the de facto industry standard for measuring improvement. Velocity had been recorded over the previous year, and the documented average for the 12 months preceding the new ScrumMaster is noted below as 100 basis points per week.

Velocity can only be used to compare a team with its own past performance; it is a relative scale, not an absolute measure, but in that context it is very useful, albeit to be used with caution. In this case the team had an existing datum and a set of benchmark stories. The datum was not modified and the benchmark stories remained unchanged throughout this period, so there is a consistent baseline for comparison.

Year 1 (Prior to ScrumMaster)


The new ScrumMaster spent some time with the team, observing and evaluating. He sought to identify problem areas and any behaviour in the team where improvements could be made, but unlike a conventional manager or troubleshooter he did not issue directives on changes to behaviour.

The team scheduled regular ‘retrospective’ meetings where they reviewed the past period and looked for ways to improve. The ScrumMaster used a variety of techniques to coach and guide the team towards areas, identified either by the team or by the ScrumMaster, that were problematic or where improvement could be made. The ScrumMaster collected and compiled metrics to aid this analysis, and facilitated the meetings so that they were productive in deconstructing problems in a non-negative manner.

By using this approach the ScrumMaster did not solve the team’s problems; he did not offer ‘his’ solution to the team, nor issue directives for improvement. He worked with the team to make them aware of their problems, and created an environment where the team were empowered to solve them. The ScrumMaster uses his past experience to identify problems and may steer the team towards suitable solutions, but relies on the team to solve their own problems.

In the following quarter there was a notable upturn in velocity (productivity).

Year 2 (First quarter after ScrumMaster)


The ScrumMaster is responsible for creating an environment where the team can reach long-term and consistent performance. A spike or a blip is not considered sustainable, so it is important not to seek to boost productivity with short-term fixes like overtime or cutting quality; the aim is sustainable pace and sustainable quality over the long term.

The ScrumMaster continued to work with the team, identifying more bottlenecks and waste, removing impediments, and coaching the team to work around impediments or resolve them themselves; there is always room for improvement in a team.

The ScrumMaster also spent time coaching the Product Owner and Programme Manager on expectations and on their interactions with the Scrum Team, including challenging the Programme Manager on how his priorities were fed to the team, so that the team could focus and be more effective.

Year 2 (First full year with new ScrumMaster)


Velocity is only one measure of productivity and is largely anecdotal; however, it is the only real measure we currently have available, and the productivity improvement should be considered in that context. Even so, there is a clear and notable improvement in the productivity of the team over the 12-month period. The team deserves the credit for the improvement, as they have improved themselves. But by adding an experienced ScrumMaster to an existing team, he has been able to coach the team into identifying ways they could improve, and in doing so they doubled their productivity – a significant change in just one year. In this instance it could be argued that the ScrumMaster’s coaching of this one team has resulted in £600,000/$1,000,000 of added value to the business, and assuming the team does not regress this is an ongoing year-on-year gain.

It is not often possible to measure the impact on a team, and velocity is a fragile tool for doing so, but I hope the value that one person can have on a team is clear from these metrics. It is highly likely that Year 3 will not see the same growth, but even if the new velocity is merely sustained, the value to the organisation is significant.

Why Agile?

I have been feeling a little defensive recently: I have heard a Product Owner (an ex-PM) complaining that he could get it done quicker with Waterfall, and I have heard a number of people asking what was wrong with the old way, and why Agile?

Short-term memories, fear of change (especially among a very large project-manager community), teething troubles with the transition, and a variety of other factors are probably behind it. But long story short, I was asked whether there were any metrics comparing Agile and Waterfall that we could use to ‘reassure’ some of the senior stakeholders.

After a bit of research I found an interesting survey: Dr. Dobb’s Journal 2013 IT Project Success Survey.

173 companies were surveyed and asked how they characterised a successful project. Only 8% considered success to be on schedule, on budget AND to specification; most classed success as meeting only one or two of these criteria.

[Chart: success criteria]

Based on the criteria listed above, and even counting success as meeting just one of those criteria, only 49% of Waterfall projects were considered successful, compared with almost two-thirds of Agile projects considered outright successes.

[Chart: successful projects]

When considering failed projects (projects that were never finished) nearly 18% of Waterfall projects were deemed failures compared to only 6% of Agile projects.

[Chart: failed projects]

Finally, there are the projects that were considered challenged: not deemed successes, but eventually completed.

[Chart: challenged projects]

Ultimately, in my opinion, “Why Agile?” is the wrong question. The question should be “How can we improve?” I believe that Agile is a philosophy rather than a methodology. I’d go so far as to suggest that if an organization has a desire to continually improve, and follows through on it, then it is ‘Agile’ regardless of the methodology it uses; a company that follows an Agile framework without a desire to improve is not.

In other words, ‘Agile’ is the answer to the question, not the question itself. If done right, and with the right intentions, Agile can help many organizations improve, and keep improving. Agile is not a destination.