Metrics – a snake in the grass?

One of the things I have noticed over the last few years is a drastic increase in the use of metrics. Particularly with the surge in the use of electronic boards, team data is much more readily available and so is much more frequently used.

If you can’t measure it, you can’t improve it.

Peter Drucker

Previously, effort was required to collect and process data into something meaningful, but now much of this is easy to do and is often done for you. But is this a good thing?

Problem 1: Lack of understanding the data or the tool

There are two problems I frequently see. The first is that having access to data and tools does not mean we know how to read the data or understand what it is telling us. Monte Carlo simulations are a great example: I regularly see them being used without any understanding of what the results mean or what the limitations of the technique are. Instead the results are copied into forecasts and endorsed as if they were gospel. The information is so easy to get that there no longer seems to be any need to understand it before using it.
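
To see what such a simulation actually tells you, here is a minimal sketch (all throughput numbers made up for illustration): it resamples historical weekly throughput to produce a distribution of completion times. The output is a spread of probabilities, not a date, and it rests entirely on the assumption that future throughput will resemble the past – the very nuance that gets lost when the result is pasted into a forecast as a single number.

```python
import random

# Hypothetical throughput history: stories completed in each of the last
# 12 weeks (illustrative numbers, not from any real team).
history = [3, 5, 2, 6, 4, 4, 7, 3, 5, 4, 2, 6]

def weeks_to_finish(backlog_size):
    """Simulate one possible future by resampling past weekly throughput."""
    done, weeks = 0, 0
    while done < backlog_size:
        done += random.choice(history)  # core assumption: future resembles past
        weeks += 1
    return weeks

# Run many simulated futures and read percentiles off the sorted outcomes.
trials = sorted(weeks_to_finish(40) for _ in range(10_000))
p50, p85 = trials[len(trials) // 2], trials[int(len(trials) * 0.85)]
print(f"50% of simulations finish within {p50} weeks, 85% within {p85} weeks")
```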

Problem 2: Lack of understanding the consequences

The second problem is that our behavior changes as soon as we are measured, sometimes consciously, sometimes not. Generally speaking this is the purpose of measuring: we measure what we want to improve, be it our weight, our lap time or our velocity.
However, there are nearly always unintended consequences, and they can be unexpected and unpredictable.


The Cobra Effect

The Cobra Effect is a term that originated in a story from British colonial rule in India. The British rulers were worried about the number of venomous cobras.

So they created a policy of putting a bounty on dead cobras: people were paid handsomely for delivering dead snakes. What could go wrong?

Well, at first the reward system seemed to work well: the number of snakes appeared to decline, yet the bounties paid continued to rise. The British became suspicious and investigated. They discovered that the reward was encouraging enterprising locals to breed snakes for the bounty – it was easier and safer to breed snakes than to catch wild ones, and they were breeding them in large numbers.

The British rulers were shocked at the dishonesty (it is simply not cricket, old boy) and were so outraged that they cancelled the program and cracked down on the breeders, who released their captive snakes into the wild. The result was a massive increase in snakes rather than a decline.

Republic of Congo

There is a much more horrific version of this situation from Leopold II of Belgium's rule in the Congo. If a plantation's rubber quota was not met, the overseers were expected to execute the wives and children of the poor-performing farmers and send their hands, as proof of death, in lieu of the rubber quota – a hand equated to a quantity of rubber.

The problem was that some of the villages found it easier to supply hands than to supply rubber, so they met their quota with a mixture of hands and rubber; there were some reports of entire quotas being met with hands alone.

Bad as this policy was, it then got worse. The hands were not necessarily those of poor-performing farmers or their families. Some villages, and some of the soldiers who collected the quotas, were attacking other villages and killing just to chop off hands and use them instead of rubber as payment.

Such was the extent of this practice that some reports suggest the population of the region fell by as much as half as a result. Estimates suggest that 10 million people were killed for their hands.

Parallels

Now those were extreme cases, and it may seem unfair to draw parallels given their seriousness, but nevertheless it is human nature to respond to how we are measured and, by implication, rewarded.

I hear tales of management that expects velocity to rise every sprint, and guess what: it does. But does the actual value delivered go up? I very much doubt it.

I see teams wanting to mark a card as done and open a new identical one, because the cycle times are looking bad.

But the worst is the measure of being busy. Far too many organizations have a fixation with everyone being busy 100% of the time, regardless of whether what they are doing contributes to delivering value to the customer, or whether a little slack would allow the team to deliver far more.

Suggestion

This may sound like a message that metrics are all bad; it is not at all. Metrics can be very useful and, as many say, you cannot improve what you don't measure. The key, though, is to have those being measured understand the desired outcome and share or support it.

Back to weighing yourself for a diet: your goal is to be healthier, and your weight is a measurement. Tampering with the scale may change the measurement but will not help you towards your goal. You may fool yourself for a while, but it is only yourself you are fooling.

A clear goal, and a variety of metrics that support that goal, is often more valuable than a single metric, especially if those being measured are supportive of the desired outcome.

“Tell me how you measure me, and I will tell you how I will behave”

Eli Goldratt

Beware of the measurements you use: you may very well be defining the behavior you get, even if that is not your intent.


Useful Scrum Metrics

Which do you use and Why?

I have already blogged about some common metrics, but here are ten that I use or have used most frequently. My aim is to summarise them here and expand on each in greater detail over the next few weeks (I will come back and add links as I do).

Warning!

Before we begin, the first piece of advice about metrics is this: before you record data or use metrics, ask why you are recording the information and how it will be used.

If you cannot be sure that the metrics will be used for the benefit of the team then you shouldn't record them, and certainly shouldn't use them. If metrics are ever misused they become worthless, as they are very easily gamed, especially if the team feels they will be abused.

Metrics are almost always reliant on honesty, and there are not many metrics that couldn't be 'gamed' if someone was motivated to do so. Never attach reward or penalty to a metric, or even to multiple metrics. As soon as you do, they are broken.

Have you ever heard tales of teams that were paid for the number of lines of code written, or for the number of bugs fixed? I have worked at companies where the annual bonus was dependent on a high management approval rating in an annual employee survey.

Any environment where you are effectively penalised for honesty loses all hope of trust, transparency and integrity and your metrics become worthless.

If there is even the hint or suggestion that any of the metrics will be used for anything other than self-improvement and introspection again they immediately become worthless.


That being said, as a coach I find measurements can be very useful. Measurements, metrics, reflection and retrospection are hugely valuable for coaching an individual or team on how to improve. An Agile Coach or Scrum Master has a variety of metrics available, and these ten are by no means a comprehensive list; in fact it is a very limited one. Each project or development environment is different, so tailoring appropriate metrics to your particular environment is important; not all metrics apply to all projects.

One last note: there are myriad tools for tracking projects. Some will measure these metrics but many won't, and it will be necessary to record the data manually. It will often be difficult to backdate if you haven't been recording it.

In projects I work on I try to keep a lot of metrics just for myself, many more than the team uses. If at any point I feel there might be value in a metric I will bring it to the team and let them decide whether it has value and whether they are interested, but too many metrics become confusing. As a rule of thumb the team only really has interest in a few of these, although which ones may vary over time. Keeping metrics without sharing them is also a good way of stopping people from responding to the measurement itself.

Beware of context

I am also aware that metrics can be misleading if taken out of context, and information can be dangerous if misunderstood. If I choose to publish metrics in reports or on the office wall, I will work hard to make sure the context is clear and, where appropriate, that there is an explanation. But even then you can be sure that someone will misuse the information, so be prepared!


1 – The Burndown:

This is probably the most often used Scrum metric. In essence it measures the work outstanding at the start of the sprint; a line is drawn converging on zero at the end of the sprint, and each day the progress, or burn-down, is measured against it. The goal is to complete all planned (and emerging) work by the end of the sprint.
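
As a sketch of the mechanics (with made-up numbers): the ideal line simply falls linearly from the planned total to zero, and each day's recorded remainder is compared against it.

```python
# Hypothetical 10-day sprint starting with 50 points of planned work.
sprint_days, planned = 10, 50

# Ideal line: falls linearly from the planned total to zero.
ideal = [planned * (1 - day / sprint_days) for day in range(sprint_days + 1)]

# Work remaining as recorded each day (made-up numbers; the day-4 rise
# is emerging work added mid-sprint).
actual = [50, 50, 46, 41, 44, 38, 30, 24, 15, 8, 0]

for day, (i, a) in enumerate(zip(ideal, actual)):
    print(f"day {day:2}: ideal {i:5.1f}, remaining {a}")
```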

2 – The Velocity Chart:

The velocity chart can take many forms and can be used to track a number of key factors. In its most simplistic form, it is a bar chart showing the number of points, or sometimes the number of stories, completed in recent sprints.

3 – The Velocity Committed/Actual Chart:

Sometimes the velocity chart can be further enhanced to show committed points compared with actual points; this is useful when the team is not consistently delivering on its commitments.

4 – Release Burndown

The principle is that for a given product we measure the Burndown of the entire product in a similar way to a Sprint Burndown, it may take the form of a cumulative flow diagram or be similar to a velocity chart.

5 – Cumulative Flow Diagram

A cumulative flow diagram tells all sorts of stories about a project, essentially it measures the number of stories in various states as the project progresses.

6 – The Burn-up

The Burn-up measures the actual effort expended in the sprint. In a perfect world it would be an exact inverse of the Burndown, but rarely do we predict all tasks in advance, nor estimate the ones we know about accurately. It can help us see where the time went and improve our estimates.

7 – Flow Time, Lead Time, Cycle Time

There are some varieties in how these terms are used but I use a simplified version as follows:

Flow-time is the time from when a story is started to when it is completed by the team.
Lead-time is the time from when a story is added to the backlog to when it is completed into production – from conception to getting the value.
Cycle-time is the average time per completed story – the elapsed time divided by the number of stories completed. We could be working on multiple stories at once, or overlap them; e.g. each story could have a flow time of 3 weeks, but we could stagger them to deliver one per week, giving a cycle time of 1 week.

Using averages for these allows us to measure whether we are improving over time.
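
A small sketch of how these could be computed from story timestamps; the hypothetical dates mirror the example above, with each story taking three weeks but completions landing one week apart.

```python
from datetime import date
from statistics import mean

# Hypothetical story records: (added to backlog, work started, completed).
stories = [
    (date(2021, 1, 4), date(2021, 2, 1), date(2021, 2, 22)),
    (date(2021, 1, 4), date(2021, 2, 8), date(2021, 3, 1)),
    (date(2021, 1, 11), date(2021, 2, 15), date(2021, 3, 8)),
]

flow_times = [(done - started).days for added, started, done in stories]
lead_times = [(done - added).days for added, started, done in stories]

# Cycle time as defined above: average gap between completions. Each story
# has a 3-week flow time, yet a story completes every week.
completions = sorted(done for _, _, done in stories)
cycle = (completions[-1] - completions[0]).days / (len(completions) - 1)

print(f"average flow time: {mean(flow_times):.0f} days")
print(f"average lead time: {mean(lead_times):.0f} days")
print(f"cycle time: {cycle:.0f} days per completed story")
```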

8 – Health check survey

A health check survey is more touchy-feely than the other, more raw, metrics, but it can be useful for gauging the team's own opinion of their progress and behaviour. You can measure opinions on anything: software practices, interactions with other teams, the quality of stories; anything the team has an opinion on can be measured.

9 – Predicted capacity : Actual Capacity

Each sprint during planning, stories can be broken into tasks, and each task given a time estimate. Once these are totalled up and the team has committed to the sprint content, this total effort estimate is the 'predicted capacity' at the start of the sprint. This is the important number. Tasks will almost always be added as the sprint progresses and many of the task estimates will be off, but the initial predicted capacity is a measure of what the team anticipates the workload will be for the sprint.

In theory, the actual capacity of the sprint is simply the number of team members multiplied by the number of hours available in the sprint, excluding planned holidays, planned meetings, and predicted time on non-sprint activities.

Tracking these figures allows the team to gauge whether the capacity is realistic in relation to previous sprints; the end result will be a (hopefully consistent) ratio.
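
A worked example of that ratio, with all figures hypothetical:

```python
# Hypothetical sprint, all figures illustrative.
team_size, sprint_days, hours_per_day = 7, 10, 8

gross = team_size * sprint_days * hours_per_day             # 560 hours
holidays, meetings, non_sprint = 24, 70, 40                 # known deductions
actual_capacity = gross - holidays - meetings - non_sprint  # 426 hours

# Sum of task estimates once the team committed at sprint planning.
predicted_capacity = 390

print(f"predicted/actual ratio: {predicted_capacity / actual_capacity:.2f}")
# -> 0.92; watching this ratio sprint over sprint shows whether the
#    team's planning is consistently realistic.
```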

10 – Scope creep

This can be covered by the cumulative flow diagram, and by the release burndown, but sometimes it is useful to explicitly report on what has been added or removed from the Scope each sprint.

How not to motivate

A friend suggested I watch this YouTube video on Motivation: Worst way to motivate

I thoroughly enjoyed this video and it is well worth watching. The outcome flies in the face of some conventional logic, but supports the views we have come to know and love in Agile.

Spoiler alert: for those that don't watch the video, the premise is that the Federal Reserve Bank in the USA commissioned a study into how to motivate staff and get better productivity – essentially, how big a bonus do you need to give to have an impact on productivity?

The study was carried out by scientists at MIT, Carnegie Mellon and the University of Chicago, and the outcome will surprise some of you.

Essentially, what were classed as small or moderate incentive bonuses had no noticeable impact on productivity when applied to any task that wasn't purely mechanical, and large bonuses (equivalent to three months' wages) caused a measurable decrease in productivity. Yes, that is right: give someone a significant incentive when working on even mildly cognitive tasks and they get worse.

In other words incentives are bad, very bad.

The key to increasing productivity according to this study is in 3 areas:

  • Autonomy

  • Mastery

  • Purpose

More on that later…

Quick tangent…

My mind immediately jumped to the financial crisis. If the Federal Reserve believes that financial incentives have a negative effect, why are financial incentives the principal method used for rewarding bankers? Does that mean that if bankers were paid an appropriate salary they would actually be better? Or is the conclusion that banking requires no cognitive input 🙂

It is not all about money.

It should be noted that for the study's conclusion to hold, 'salary' had to be off the table: people need to be paid enough as a base salary for money not to be a motivating factor. This is easier said than done.

What I find interesting is that this study also supports old-school psychology. Some of you may be familiar with Maslow's hierarchy of needs. Essentially, once basic needs are met – security, housing, food, etc., all of which have a direct correlation with income – things get more fuzzy: needs above the essentials no longer have a direct monetary driver. We move into psychological needs, areas of personal relationships and social standing, and then on to our need for self-fulfilment, creativity, fun and personal achievement. All of these have an element of financial impact, but money is clearly not the primary driver. Once basic needs have been met, money becomes far less important to us.

[Figure: Maslow's hierarchy of needs]

For example, we don't learn a musical instrument for financial benefit, or do a crossword; many of us have a hobby that takes huge amounts of effort, research, expense and often inconvenience, usually for no financial reward. Some participate in PTAs or churches or social groups and clubs, or even write a blog. For most people money simply doesn't drive us (assuming we have enough to live); it is an enabler. So in theory, if you have a reasonable base salary and you are content, motivation needs to come from elsewhere.

Money cannot buy happiness, but being broke sure makes you miserable.

Aha, all looking good until…

A company in the USA read a similar study and concluded that if someone earned $70,000 they would have enough to be content: all essential needs would be met, money would not be a factor, and the company could focus on productivity. So the CEO raised the pay of all employees to $70,000. Perfect! Only it wasn't. Humans are contrary souls, and whilst the pay rise was great for some, we also measure our social standing by income – not because we care about income itself, but because it is a measurable metric of our value in society (or at least the workplace). So when someone you felt was in a job requiring less skill or carrying less status than yours was suddenly earning the same, you felt disgruntled: your salary hadn't changed, last week you were happy, but today you feel relatively less important. So it is about money, and at the same time not at all about money.

Do you pay fair?

Paying fairly doesn't mean paying everyone the same. Laszlo Bock of Google, in his book Work Rules, goes even further: he says to deliberately pay unfairly – if you have a good employee, pay them excessively, and deliberately take money out of the equation.

My take from this is as follows:

Pay your staff well: pay them what they are worth as a base salary and review it regularly, but do not include any performance-based incentives. And if you include profit share or Christmas bonuses, apply them equally (percentage-based) and do not make them performance driven.

What you want is a secure workforce: you do not want them thinking they could earn more elsewhere, and you do not want a high turnover of staff. Show your employees that you truly value them, pay above market rates to ensure stability, and be prepared to pay exceptional employees exceptionally.

So with a reasonable and appropriate salary (with no performance incentives) as our foundation, we can concentrate on the real issue of how to maximize productivity, and this is where it gets interesting…

More next time.

The value of a ScrumMaster

In June 2014 a new ScrumMaster was brought in to work with an existing and established development team.

At the point of joining, the team had already raised concerns about environments and tools, but had been unable to express specific issues that could be resolved; these problems had been intermittent for most of the year. The team was composed of a BA acting as Product Owner, a Lead Engineer, three other developers and two testers; with the addition of a ScrumMaster this made a team of eight, which equates to an approximate cost to the business of around £600,000/$1,000,000 a year.

The team measure their productivity using velocity: how much work, measured on a relative scale, is achieved over a period of time. The amount of work fluctuates for a variety of reasons – holidays, sickness, bug fixing, mistakes, environmental issues, focus or context switching, support requirements, spikes, etc. It is an anecdotal estimate of productivity only, but it has become the de facto industry standard for measuring improvement. Velocity had been recorded over the previous year, and the documented average for the 12 months preceding the new ScrumMaster is noted below as 100 basis points per week.

Velocity can only be used to compare a team with its own past performance; it is a relative scale, not an absolute measure, but in that context it is very useful, though it should be used with caution. In this case the team had an existing datum and a set of benchmark stories. The datum was not modified and the benchmark stories remained unchanged in this period, so there is a consistent base level for comparison.

Year 1 (Prior to ScrumMaster)


The new ScrumMaster spent some time with the team, observing and evaluating. He sought to identify problem areas and any behaviour in the team where improvements could be made, but unlike a conventional manager or trouble-shooter he did not issue directives on changes to behaviour.

The team scheduled regular 'retrospective' meetings where they reviewed the past period and looked for ways to improve. The ScrumMaster used a variety of techniques to coach and guide the team to focus on areas identified, either by the team or by the ScrumMaster, as problematic or open to improvement. The ScrumMaster collected and compiled metrics to aid this analysis, and facilitated the meetings so they were productive in deconstructing problems in a non-negative manner.

Using this approach, the ScrumMaster did not solve the team's problems; he did not offer 'his' solution to the team, nor issue directives for improvement. He worked with the team to make them aware of their problems, and created an environment where the team were empowered to solve them. The ScrumMaster uses his past experience to identify problems and may steer the team towards suitable solutions, but relies on the team to solve their own problems.

In the following quarter there was a notable upturn in velocity (productivity).

Year 2 (First quarter after ScrumMaster)


The ScrumMaster is responsible for creating an environment where the team can reach long-term and consistent performance – a spike or a blip is not considered sustainable – so it is important not to boost productivity with short-term fixes like overtime or cutting quality. The aim is sustainable pace and sustainable quality over the long term.

The ScrumMaster continued to work with the team, identifying more bottlenecks and waste, removing impediments, and coaching the team on how to work around impediments or resolve them themselves; there is always room for improvement in a team.

The ScrumMaster also spent time coaching the Product Owner and Programme Manager on expectations and on their interactions with the Scrum Team, including challenging the Programme Manager on how his priorities were fed to the team, so that the team could focus and be more effective.

Year 2 (First full year with new ScrumMaster)


Velocity is only one measure of productivity and is largely anecdotal; however, it is the only real measure we currently have available, and the productivity improvement should be considered in that context. Nevertheless, there is a clear and notable improvement in the productivity of the team over the 12-month period. The team deserves the credit for the improvement, as they have improved themselves. But by adding an experienced ScrumMaster to an existing team, he has been able to coach the team into identifying ways they could improve, and in doing so they have doubled their productivity – a significant change in just one year. In this instance it could be argued that the ScrumMaster's coaching of this one team has added £600,000/$1,000,000 of value to the business, and assuming the team does not regress, this is an ongoing year-on-year gain.

It is not often possible to measure the impact of one person on a team, and velocity is a fragile tool for doing so, but I hope these metrics make clear the value that one person can have. It is highly likely that Year 3 will not show the same growth, but even if the new velocity is merely sustained, the value to the organisation is significant.

Estimating at a project level

One of the most difficult aspects of the transition to Agile is the confusion over how estimation is done.

Estimation is difficult. The experts suggest that even with a full picture of what is required, and with clear, detailed and fixed requirements, the best estimators cannot realistically estimate better than within a 25% margin of error. It is easily possible to do worse than this, but it isn't possible to be consistently more accurate; it's only possible to occasionally get lucky.

But in Agile we start without clear requirements, we don't have a fixed scope, and chances are the requirements we do have are at a high level with many unknowns. I could talk about the cone of uncertainty, but I'm not convinced most businesses will accept that level of uncertainty even when it is based on sound reasoning. In my experience they would rather base decisions on a specific guess than on an accurate ranged estimate, especially a wide range. It sounds daft when I say it like that, but I bet you have experienced it.

Nevertheless it is still often necessary for commercial reasons to have a solid estimate before starting a project (Agile or otherwise), in many situations projects need to provide a good ROI or are limited by a budget.  In some situations the ability to estimate reliably could be the difference between the success and failure of a business. These estimates can be crucial.

So how do projects provide reliable and useful estimates?

First of all it is worth noting that estimates are largely misunderstood: they are misused and can often be damaging to the project. But still estimates are asked for and used to make important decisions.

In a survey from a few years ago*, a range of IT companies were asked about their estimation strategies. The results were worrying, and yet reassuring in that the difficulties were universal.

* http://www.ambysoft.com/surveys/stateOfITUnion200907.html

Around 44% of the project teams in the survey described themselves as ‘Agile’ so this is a balanced pool of projects and should give an idea of estimation across the board.

When asked to give estimates to the business for project delivery, around 65% of teams were required to provide estimates within the 25% margin of error that experts in the field say is 'impossible'. 11% were allowed no margin of error at all: they had to specify a single date or a specific cost for the project. Conversely, 21% were not asked to give any estimates at all. The rest were allowed a margin of up to 50% on their estimates.

So how did that pan out for those companies?

Well, 40% never even tracked whether those initial estimates were correct, so it is difficult to draw any conclusions about them. But 40% came within that magic 25% of their estimates, which frankly is an incredible statistic; when I first read this I started questioning the validity of the survey. 40% of software project estimates were more accurate than the experts say is possible to achieve consistently – 40% is more than just getting lucky, it is frankly unbelievable. At this point I was about to dismiss the survey as nonsense, but I read on…

How is it possible?

In order to achieve the 25% margin of error, the projects did the following (the categories overlap, as many projects used more than one tactic):

  • 18% admitted they had padded their original estimate
  • 63% de-scoped towards the end of the project to deliver on the estimated schedule.
  • 34% asked for extra funds to complete the projects on the original estimated schedule
  • 72% extended the schedule to deliver the promised scope (effectively revising the estimate and success was then measured on the revised estimate not the original)

It is impossible to tell from this how many of the projects matched their original estimates, but clearly it wasn't very many. It is not a stretch to conclude that the vast majority of respondents de-scoped and/or extended the original estimates, including those that had already padded them.

Moving goalposts is the key

My reading of this survey is that very few, if any, delivered what was estimated within the originally estimated time-frame and budget. It makes very bleak reading: regardless of whether the project was Agile, the estimates did not deliver what the business asked of them.

If we take the stated purpose as simply being to plan and budget, and assume the estimates were not padded or reinterpreted, then they hold very little value given the lack of accuracy.

In my opinion if any of the businesses that demanded such specific estimates went on to actually base business decisions on the accuracy of those estimates, then they were just setting themselves up for disappointment and future problems.

There is no way from this survey to determine how accurate the original estimates actually were, other than to say that even with padding, de-scoping and extended schedules, teams were still unable to meet the original expectations; the estimates were overwhelmingly wrong and seemingly nearly always underestimated the true time and cost. This reads like a recipe for disappointed customers and shrinking profit margins.

That is a very long-winded way of saying that (according to this survey at least) no one in the industry, Agile or otherwise, is producing reliable estimates for software projects. We consistently get it wrong and, more worryingly, fudge the figures so we never learn from our mistakes. So any suggestion that estimating Agile projects is more difficult is not based in fact: estimating software projects is difficult, full stop.

Do estimates have value?

Now that is a different question. If I was running a business and received a project estimate of 6 months, I would be foolish to simply believe it will be delivered to the defined scope in that time-frame. But that doesn't make the estimate useless. If one project estimates 6 months and another estimates 3 months, I can conclude that the first is likely to take longer than the second, especially if the same person or group estimated both. Both estimates are likely wrong, but chances are that on average, and over time, they will be wrong by a consistent margin, which makes them predictable.

If I check historic records I might see that projects estimated at 6 months generally take 8-12 months; better yet, I could ask the estimators to compare the current proposed project with previously completed projects, identify the one closest in size and scope, and use the actual figures from that comparator. Empirical evidence is so valuable that I'm surprised more emphasis is not put on keeping track of past estimates and actual delivery costs and schedules.
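
As a sketch of that idea, with an entirely made-up project history: take the ratio of actual to estimated duration for past projects and use its median as a correction factor for new estimates.

```python
from statistics import median

# Hypothetical history: (estimated months, actual months) for past projects.
history = [(6, 9), (3, 5), (12, 16), (6, 11), (4, 6)]

# How wrong are we, consistently? Median ratio of actual to estimate.
correction = median(actual / estimate for estimate, actual in history)

new_estimate = 6  # months, as estimated for the proposed project
print(f"correction factor: {correction:.2f}")
print(f"a {new_estimate}-month estimate suggests ~{new_estimate * correction:.0f} months")
```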

Estimates are not commitments

Essentially we need to accept estimates as simply estimates, not as a plan or a commitment. Any PM that transposes an estimate of a software project straight into a plan is nuts, yet it happens so often that, in my experience, developers turn white and have panic attacks when asked for an estimate; painful experience says the estimate will be misused and ultimately the one that gave it gets blamed. If the business could be trusted to accept that estimates are not an exact science, and to factor in realistic contingency based on empirical evidence, then developers would be less afraid to give estimates.

So how should we do it?

I have two suggestions. The first is to use an extension of the Planning Poker process. Take a group of people who are experienced with software delivery and relatively knowledgeable about the scope and complexity of what is being asked – e.g. Product Owners, Business Analysts, Project Managers, and representatives from development and testing. Ask them to estimate a variety of projects relative to each other. I'd use Fibonacci numbers or T-shirt sizes to keep it at an abstract level, and if possible I'd try to include a benchmark project (or more than one) where the actual time and cost is known.

If we accept that the best we are going to get is a granular, relative, ball-park estimate of a project, then this should give you that and more. In fact, for budgeting purposes a reliable granular estimate is of far more value than an unreliable specific figure, and far more valuable than the estimates in the survey. Over time the estimation team will learn and improve; they will get better with every completed project. I'd have far more confidence saying a project is either a Medium or a Large T-shirt, and the T-shirt sizes could map to high-level budgets.
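
A minimal sketch of how those relative sizes could be anchored to a known benchmark project (all numbers hypothetical):

```python
# Hypothetical benchmark: a completed project that was scored at 8 points
# relatively and actually took 6 months.
benchmark_points, benchmark_months = 8, 6

# T-shirt sizes mapped onto Fibonacci points by the estimation group.
sizes = {"S": 3, "M": 8, "L": 13, "XL": 21}

for size, points in sizes.items():
    months = benchmark_months * points / benchmark_points
    print(f"{size}: roughly {months:.1f} months, relative to the benchmark")
```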

My second suggestion, which could be used in conjunction with or independently of the first, is to set a budget and ask the team to provide the best product they can within that time and cost. A good Scrum team will be able to prioritise stories and features to ensure you get the best value for money. If that budget is based on the poker estimates above, it is more likely that the budget chosen is realistic and that you will get the product you want. You will also very quickly be able to tell if the project will be unable to meet its goal, and can cut your losses early rather than pouring more money into an over-running project that is too far down the line to cancel.

Estimation is a difficult skill to master, but a group is far better than an individual.

Why Agile?

I feel a little on the defensive recently: I have heard a Product Owner (an ex-PM) complaining that he could get it done quicker with Waterfall, and I have heard a number of people asking what was wrong with the old way, and why Agile?

Short-term memories, fear of change (especially for a very large project manager community), teething troubles with the transition, and a variety of other factors are probably behind it. But long story short, I was asked if there were any metrics comparing Agile and Waterfall that we could use to 'reassure' some of the senior stakeholders.

After a bit of research I found an interesting survey: Dr. Dobb’s Journal 2013 IT Project Success Survey. Posted at www.ambysoft.com/surveys/

173 companies were surveyed and asked how they characterised a successful project. Only 8% considered success to be on schedule, on budget AND to specification; most classed success as meeting only one or two of these criteria.

[Chart: success criteria]

Based on the criteria listed above, and even allowing that success may mean meeting just one of those criteria, only 49% of Waterfall projects were considered successful, compared with almost two-thirds of Agile projects considered outright successes.

[Chart: successful projects]

When considering failed projects (projects that were never finished), nearly 18% of Waterfall projects were deemed failures, compared with only 6% of Agile projects.

[Chart: failed projects]

Finally, there were the projects considered 'challenged': not deemed successes, but eventually completed.

[Chart: challenged projects]

Ultimately, in my opinion, "Why Agile?" is the wrong question. The question should be "How can we improve?" I believe that Agile is a philosophy rather than a methodology. I'd go so far as to suggest that if an organization has a desire to continually improve and follows through on it, then it is 'Agile' regardless of the methodology it uses. And a company that follows an Agile framework without a desire to improve is not.

In other words, 'Agile' is the answer to the question, not the question itself. If done right, and with the right intentions, Agile can help many organizations improve and keep improving. Agile is not a destination.