How not to motivate

A friend suggested I watch this YouTube video on Motivation: Worst way to motivate

I thoroughly enjoyed this video and it is well worth watching.  The outcome flies in the face of some conventional logic but supports the view that we have come to know and love in Agile.

Spoiler Alert: for those that don’t watch the video, the premise is that the Federal Reserve Bank in the USA commissioned a study into how to motivate staff and get better productivity: essentially, how big a bonus do you need to give to have an impact on productivity?

The study was carried out by scientists at MIT, Carnegie Mellon and the University of Chicago, and the outcome will surprise some of you.

Essentially, what were classed as small or moderate incentive bonuses had no noticeable impact on productivity when applied to any task that wasn’t purely mechanical. And large bonuses (equivalent to three months’ wages) caused a measurable decrease in productivity. Yes, that is right: give someone a significant incentive when working on even mildly cognitive tasks and they got worse.

In other words incentives are bad, very bad.

The key to increasing productivity according to this study is in 3 areas:

  • Autonomy

  • Mastery

  • Purpose

More on that later…

Quick tangent…

My mind immediately jumped to the financial crisis. If the Federal Reserve believes that financial incentives have a negative effect, why are financial incentives the principal method used for rewarding bankers? Does that mean that if bankers were paid an appropriate salary they would actually perform better? Or is the conclusion that banking requires no cognitive input 🙂

It is not all about money.

It should be noted that for the study’s conclusion to hold, ‘salary’ had to be off the table: people need to be paid enough as a base salary for pay not to be a motivating factor. This is easier said than done.

What I find interesting is that this study supports old-school psychology too. Some of you may be familiar with Maslow’s hierarchy of needs. Essentially, once basic needs are met (security, housing, food, etc.), which have a direct correlation with income, things get more fuzzy: needs above essentials no longer have a direct monetary driver. We get into psychological needs, the areas of personal relationships and social standing, and then on to our need for self-fulfilment, creativity, fun and personal achievement. All of these have an element of financial impact, but it is clearly not the primary driver. Once basic needs have been met, money becomes far less important to us.

[Figure: Maslow’s hierarchy of needs]

For example, we don’t learn a musical instrument for financial benefit, or do a crossword for money. Many of us have a hobby that takes huge amounts of effort, research, expense and often inconvenience, usually for no financial return. Some participate in PTAs or churches or social groups and clubs, or even write a blog. For most people money simply doesn’t drive us (assuming we have enough to live); it is an enabler. So in theory, if you have a reasonable base salary and you are content, then motivation needs to come from elsewhere.

Money cannot buy happiness, but being broke sure makes you miserable.

Aha, all looking good until…

A company in the USA read a similar study and concluded that if someone earned $70,000 they would have enough to be content: all essential needs would be met, money would not be a factor, and the company could focus on productivity. So the CEO raised the pay of all employees to $70,000. Perfect! Only it wasn’t. Humans are contrary souls, and whilst the pay rise was great for some, we also measure our social standing by income: not because we care about income itself, but because it is a measurable metric of your value in society (or at least the workplace). So when someone you felt was in a job that required less skill or carried less status than yours was suddenly earning the same, you felt disgruntled. Your salary hadn’t changed; last week you were happy, but today you feel relatively less important. So it is about money, and at the same time not at all about money.

Do you pay fair?

Paying fairly doesn’t mean paying them all the same. Laszlo Bock of Google, in his book Work Rules!, goes even further than this: he says to deliberately pay unfairly. If you have a good employee, pay them excessively; deliberately take money out of the equation.

My take from this is as follows:

Pay your staff well, pay them what they are worth as a base salary and review it regularly, but do not include any performance-based incentives. And if you include profit share or Christmas bonuses, apply them equally (percentage-based) and do not make them performance-driven.

What you want is a secure workforce, you do not want them to be thinking they could earn more elsewhere, and you do not want high turnover of staff. Show your employees that you truly value them, pay above market rates to ensure stability, and be prepared to pay exceptional employees exceptionally.

So with a reasonable and appropriate salary (with no performance incentives) as our foundation, we can concentrate on the real issue of how to maximize productivity, and this is where it gets interesting…

More next time.

You can’t handle the Truth!

When it comes to leadership it seems that a lot of problems and a great many solutions come down to either a lack of communication or lack of trust, often both.

But why are those two skills so difficult to master? How much time and effort gets wasted simply because we don’t trust our employees, or don’t understand a request? How much dissatisfaction and uncertainty results from not trusting your boss?

Around ten years ago I was working on a major release of software; it was a gated waterfall project. I and three others were critical reviewers and gatekeepers for a major component. At the gate review our component was a shambles: really late, testing far from complete, documentation hardly started. My understanding of the gate review was that it was there to quantify risk and to ensure all components were on track with the plan. Essentially a structured early warning system.

We four reviewers unanimously agreed to reject the component. We met a lot of resistance and were put under pressure; we were told that “we were delaying the project” and that “it would make us look bad”. It was a difficult decision and we were well aware that it would be uncomfortable. But the reality was that the project was behind, and we saw no value in faking things to keep a plan looking good. We felt the purpose was to highlight problems so they could be corrected.

But at some point the plan became more important than the product. The next day a company-wide announcement was issued: the project was on track, it had passed the gate and all was well. We were shocked. It turned out that our boss had removed all four of us as critical reviewers and replaced us with others who were willing to say all was well. My colleagues and I were deeply unhappy about this.

The lack of trust was shocking, and the lack of transparency and honesty showed just how dysfunctional the process was. Unsurprisingly, as the project neared the target date it slipped drastically, catching people by surprise; the project was close on six months late and we were not first to market. We will never know if being transparent at that gate could have given the project the opportunity to rectify the situation sooner, but hiding the problem certainly didn’t help.

It is not that waterfall projects inherently lack transparency, but a rigid plan with a high cost of change creates a barrier to transparency. Project Managers feel pressure to hide problems in the hope they can fix them before anyone becomes aware, or, as is more often the case, in the hope that another part of the programme slips more so they are not in the firing line.

These days I advocate a software delivery framework that highlights problems as early as possible, but many execs don’t like this. I sometimes wonder if they prefer to pretend all is well, or imagine that problems will resolve themselves: an ostrich mentality that allows them to defer worrying until later.

Adapting to an Agile framework where everything is transparent can be a difficult adjustment for many execs and programme managers. Being aware of day-to-day problems, minor issues, or simply that some tasks take longer than expected can be a difficult experience for managers used to getting only positive assurances from PMs. Suddenly they are exposed to information that was previously masked from them. They must fight the urge to interfere and learn to trust the teams, the Product Owners and the ScrumMasters. In many ways it was easier to ‘trust’ a Project Manager with a Gantt chart when the real story was hidden, even when 90%+ of the time that story was inaccurate. A pleasant lie is always easier to accept than a painful truth.

You can’t handle the truth

The sad situation is that many execs simply cannot handle the truth. They want an agreeable story that lets them claim a project is on track, and are happy to believe all is well until it is too late to hide it any longer. Then they can shout and blame, but all this usually happens after it is too late to take corrective action; the screaming and the desk thumping achieves nothing but to upset people. No one wants a project to be late; chances are everyone has worked very hard and done their best. In reality the execs have far less influence over the outcome than if they had valued honesty over platitudes earlier in the process. Rather than enforcing overtime for the last couple of months or scrapping all quality control to meet a deadline, they could have taken sensible, planned corrective action much earlier had they simply fostered a culture of honesty and openness.

I like to think that most software professionals have a desire to do a good job; they want to complete projects quickly and to a high quality. Trusting them should not be a great leap of faith. In my experience you are more likely to get overly optimistic promises than padding. Your biggest dangers are more likely feature creep or boredom; it is very rare to find a developer who wouldn’t prefer to be busy and challenged.

In short, trust the development teams; it is very likely that trust will be rewarded.

The value of a ScrumMaster

In June 2014 a new ScrumMaster was brought in to work with an existing and established development team.

At the point of joining, the team had already raised concerns about environments and tools, but had been unable to express specific issues that could be resolved; these problems had been intermittent for most of the year. The team was composed of a BA acting as Product Owner, a Lead Engineer, three other developers and two testers; with the addition of a ScrumMaster this made a team of eight, which equates to an approximate cost to the business of around £600,000/$1,000,000 a year.

The team measures its productivity using a metric called velocity: how much work, measured on a relative scale, is achieved over a period of time. The amount of work fluctuates for a variety of reasons: holidays, sickness, bug-fixing, mistakes, environmental issues, focus or context switching, support requirements, spikes, etc. It is an anecdotal estimate of productivity only, but it has become the de facto industry standard for measuring improvement. Velocity had been recorded over the previous year, and the documented average for the 12 months preceding the new ScrumMaster is noted below as 100 basis points per week.

Velocity can only be used to compare a team with its own past performance; it is a relative scale, not an absolute measure, but in that context it is very useful. It should still be used with caution. In this case the team had an existing datum and a set of benchmark stories. The datum was not modified and the benchmark stories remained constant throughout this period, so there is a consistent base level for comparison.
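To make the mechanics concrete, here is a minimal sketch of tracking velocity against a fixed datum. The sprint figures are invented for illustration; only the 100-point weekly datum comes from the account above.

```python
# Hypothetical sketch: tracking sprint velocity against a fixed datum.
# Story points are relative, so only comparisons against the team's own
# history (the datum) carry any meaning.

DATUM = 100  # the team's documented 12-month average, in basis points per week

def relative_velocity(points_completed, weeks):
    """Velocity expressed as a percentage of the historical datum."""
    per_week = points_completed / weeks
    return 100 * per_week / DATUM

# A two-week sprint in which the team completed 240 points:
print(relative_velocity(240, 2))  # 120.0 -> 20% above the historical average
```

Because the datum and benchmark stories stay fixed, a figure of 120 genuinely means the team moved faster than its own past self, rather than reflecting a change in how points were sized.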

Year 1 (Prior to ScrumMaster)

[Velocity chart]

The new ScrumMaster spent some time with the team, observing and evaluating. He sought to identify problem areas and any behaviour in the team where improvements could be made. But unlike a conventional manager or trouble-shooter, he did not issue directives on changes to behaviour.

The team scheduled regular ‘retrospective’ meetings where they reviewed the past time period and looked for ways to improve. The ScrumMaster used a variety of techniques to coach and guide the team to focus on areas, identified either by the team or by the ScrumMaster, that were problematic or where improvement could be made. The ScrumMaster collected and compiled metrics to aid this analysis, and facilitated the meetings so they were productive in deconstructing problems in a non-negative manner.

By using this approach the ScrumMaster did not solve the team’s problems; he did not offer ‘his’ solution to the team, nor issue directives for improvement. He worked with the team to make them aware of their problems, and created an environment where the team were empowered to solve them. The ScrumMaster used his past experience to identify problems and sometimes steered the team towards suitable solutions, but relied on the team to solve their own problems.

In the following quarter there was a notable upturn in velocity (productivity).

Year 2 (First quarter after ScrumMaster)

[Velocity chart]

The ScrumMaster is responsible for creating an environment where the team can reach a long-term and consistent performance – a spike or a blip is not considered sustainable, so it is important to not seek to boost productivity with short-term fixes like overtime or cutting quality. The aim is sustainable pace and sustainable quality over the long-term.

The ScrumMaster continued to work with the team, identifying more bottlenecks and waste, removing impediments and coaching the team on how to work around impediments or resolve them themselves; there is always room for improvement in a team.

The ScrumMaster also spent time coaching the Product Owner and the Programme Manager on expectations and on their interactions with the Scrum Team, including challenging the Programme Manager on how his priorities were fed to the team, so that the team could focus and be more effective.

Year 2 (First full year with new ScrumMaster)

[Velocity chart]

Velocity is only one measure of productivity and is largely anecdotal; however, it is the only real measure we currently have available, and the productivity improvement should be considered in that context. Nevertheless, there is a clear and notable improvement in the productivity of the team over the 12-month period. The team deserves the credit for the improvement, as they have improved themselves. But by adding an experienced ScrumMaster to an existing team, he was able to coach the team into identifying ways they could improve, and in doing so they doubled their productivity: a significant change in just one year. In this instance it could be argued that the ScrumMaster’s coaching of this one team has resulted in £600,000/$1,000,000 worth of added value to the business, and assuming the team does not regress this is an ongoing year-on-year gain.
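The back-of-envelope arithmetic behind that value claim is simple, and worth spelling out. Velocity is anecdotal, so treat this as indicative only; the before/after velocities are the rounded figures from the account above.

```python
# Rough arithmetic behind the value claim above (indicative only).
team_cost = 600_000          # approximate annual cost of the 8-person team (GBP)
velocity_before = 100        # basis points/week, 12-month average before coaching
velocity_after = 200         # roughly doubled after a year of coaching

# Output now equivalent to two pre-coaching teams, for the price of one:
added_value = team_cost * (velocity_after / velocity_before - 1)
print(added_value)  # 600000.0
```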

It is not often possible to measure the impact of a change on a team, and velocity is a fragile tool for doing so, but I hope these metrics make clear the value that one person can have on a team. It is highly likely that Year 3 will not see the same growth, but even if the new velocity can be merely sustained, the value to the organisation is significant.

Estimating at a project level.

One of the most difficult aspects of the transition to Agile is the confusion over how estimation is done.

Estimation is difficult. The experts suggest that even with a full picture of what is required, and with clear, detailed and fixed requirements, the best estimators cannot realistically estimate better than within a 25% margin of error. It is easily possible to do worse than this, but it isn’t possible to be consistently more accurate; it is only possible to occasionally get lucky.

But in Agile we start without clear requirements; we don’t have a fixed scope, and chances are the requirements we do have are at a high level with many unknowns. I could talk about the cone of uncertainty, but I’m not convinced most businesses will accept that level of uncertainty even if it is based on sound reasoning. In my experience they would rather base decisions on a specific guess than an accurate ranged estimate, especially a wide range. Sounds daft when I say it like that, but I bet you have experienced it.

Nevertheless it is still often necessary for commercial reasons to have a solid estimate before starting a project (Agile or otherwise), in many situations projects need to provide a good ROI or are limited by a budget.  In some situations the ability to estimate reliably could be the difference between the success and failure of a business. These estimates can be crucial.

So how do projects provide reliable and useful estimates?

First of all, it is worth noting that estimates are largely misunderstood: they are misused and can often be damaging to the project. But still estimates are asked for and used to make important decisions.

In a survey from a few years ago*, a range of IT companies were asked about estimation strategies. The results were worrying, and yet reassuring in that the difficulties were universal.

* http://www.ambysoft.com/surveys/stateOfITUnion200907.html

Around 44% of the project teams in the survey described themselves as ‘Agile’ so this is a balanced pool of projects and should give an idea of estimation across the board.

When asked to give estimates to the business for project delivery, around 65% of teams were asked to provide estimates within the 25% margin of error that experts in the field say is ‘impossible’. 11% were allowed no margin of error at all: they had to specify a single date or a specific cost for the project. Conversely, 21% were not asked to give any estimates at all. The rest were allowed a margin of up to 50% on their estimates.

So how did that pan out for those companies?

Well, 40% never even tracked whether those initial estimates were correct, so it is difficult to draw any conclusions from them. But 40% came within that magic 25% of their estimates, which frankly is an incredible statistic; when I first read this I started questioning the validity of the survey. 40% of software project estimates were more accurate than the ‘experts’ say is possible to achieve consistently. 40% is more than just getting lucky; it is frankly unbelievable. At this point I was about to dismiss the survey as nonsense, but I read on…

How is it possible?

In order to achieve the 25% margin of error the projects did the following:

  • 18% admitted they had padded their original estimate
  • 63% de-scoped towards the end of the project to deliver on the estimated schedule
  • 34% asked for extra funds to complete the project on the original estimated schedule
  • 72% extended the schedule to deliver the promised scope (effectively revising the estimate; success was then measured against the revised estimate, not the original)

It is impossible to tell from this how many of the projects matched their original estimates, but clearly it wasn’t very many. It is not a stretch to conclude that the vast majority of respondents de-scoped and/or extended the original schedules, including those that had already padded the original estimates.

Moving goalposts is the key

My reading of this survey is that very few, if any, delivered what was estimated in the originally estimated time-frame and budget. It makes very bleak reading, and regardless of whether the project was Agile or not, the estimates did not deliver what the business asked of them.

If we take the stated purpose of estimates as being simply to plan and budget, and assume the estimates were not padded or reinterpreted, then they hold very little value given their lack of accuracy.

In my opinion if any of the businesses that demanded such specific estimates went on to actually base business decisions on the accuracy of those estimates, then they were just setting themselves up for disappointment and future problems.

There is no way from this survey to determine the accuracy of the original estimates, other than to say that even with padding, de-scoping and extended schedules, the projects were still unable to meet the original expectations. They were overwhelmingly wrong, and seemingly almost always underestimated the true time and cost. This reads like a recipe for disappointed customers and shrinking profit margins.

That is a very long-winded way of saying that (according to this survey, at least) no one in the industry, Agile or otherwise, is producing reliable estimates for software projects. We consistently get it wrong and, more worryingly, fudge the figures so we never learn from our mistakes. So any suggestion that estimating Agile projects is more difficult is not based in fact: estimating for software projects is difficult, full stop.

Do estimates have value?

Now that is a different question. If I were running a business and received a project estimate of six months, I would be foolish to simply believe it will be delivered to the defined scope in that time-frame. But that doesn’t make the estimate useless. If one project estimates six months and another estimates three, I can conclude that the first is likely to take longer than the second, especially if the same person or group has estimated both. Both estimates are likely wrong, but chances are that on average, and over time, they will be wrong by a consistent margin, which makes them predictable.

If I check historic records I might be able to see that projects estimated at 6m generally take 8-12 months, or better yet I could ask the estimators to compare the current proposed project and identify a previously completed project that is closest in size and scope and use the actual figures from a sensible comparator.  Empirical evidence is so valuable I’m surprised more emphasis is not put into keeping track of past estimates and actual delivery costs and schedules.
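A minimal sketch of that empirical approach might look like the following. The historical project data here is entirely invented for illustration; the point is only that past estimate-vs-actual ratios turn a raw estimate into a calibrated range.

```python
# Hypothetical sketch: calibrating a new estimate against historical
# estimate-vs-actual ratios. The project history below is invented.

history = [
    # (estimated months, actual months) for past completed projects
    (6, 9), (6, 8), (6, 12), (3, 4), (3, 5), (12, 20),
]

def calibration_range(estimate, history):
    """Scale a raw estimate by the observed min/max overrun ratios."""
    ratios = [actual / estimated for estimated, actual in history]
    return estimate * min(ratios), estimate * max(ratios)

low, high = calibration_range(6, history)
print(f"A '6 month' estimate historically means {low:.1f}-{high:.1f} months")
```

With real records this gives exactly the kind of statement made above: a project estimated at six months has historically taken eight to twelve.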

Estimates are not commitments

Essentially we need to accept estimates as simply estimates, not as a plan or a commitment. Any PM who transposes an estimate of a software project straight into a plan is nuts, and yet it happens so often that, in my experience, developers turn white and have panic attacks when asked for an estimate; painful experience says estimates will be misused and ultimately the one who gave the estimate gets blamed. If the business could be trusted to accept that estimation is not an exact science, and to factor in realistic contingency based on empirical evidence, then developers would be less afraid to give estimates.

So how should we do it?

I have two suggestions. The first is to use an extension of the Planning Poker process. Take a group of people who are experienced in software delivery and relatively knowledgeable about the scope and complexity of what is being asked, e.g. Product Owners, Business Analysts, Project Managers, and representatives from development and testing. Ask them to estimate a variety of projects relative to each other. I’d use Fibonacci numbers or T-shirt sizes to keep it at an abstract level. If possible I’d try to include a benchmark project (or more than one) where the actual time/cost is known.

If we accept that the best we are going to get is a granular, relative, ball-park estimate of a project, then this should give you that and more. In fact, for budgeting purposes a reliable granular estimate is of far more value than an unreliable specific figure, and far more valuable than the estimates in the survey. Over time it is likely that the estimation team will learn and improve; they will get better with every completed project. I’d have far more confidence saying a project is either a Medium or a Large T-shirt. The T-shirt sizes could map to high-level budgets.
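As an illustration of how T-shirt sizes could map to budget bands, anchored by a benchmark project whose actual cost is known: the benchmark cost, the weights and the spread below are all invented assumptions, not figures from any real project.

```python
# Hypothetical sketch: mapping T-shirt sizes to high-level budget bands,
# anchored by a benchmark project whose actual cost is known.
# The benchmark cost, weights and spread are invented for illustration.

BENCHMARK_COST = 250_000   # actual cost of a completed 'Medium' project (GBP)

# Relative weights, Fibonacci-style, anchored at Medium = 5
WEIGHTS = {"S": 2, "M": 5, "L": 8, "XL": 13}

def budget_band(size, spread=0.5):
    """Ball-park budget range for a T-shirt size, +/- the given spread."""
    centre = BENCHMARK_COST * WEIGHTS[size] / WEIGHTS["M"]
    return centre * (1 - spread), centre * (1 + spread)

print(budget_band("L"))  # (200000.0, 600000.0)
```

The wide band is deliberate: it is an honest ranged estimate rather than the false precision of a single figure.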

My second suggestion, which could be used in conjunction with or independently of the first, is to set a budget and ask the team to provide the best product they can within that time/cost. A good Scrum team will be able to prioritise stories and features to ensure you get the best value for money. If that budget is based on the poker estimates above, it is more likely that the budget chosen is realistic and you will get the product you want. You will also very quickly be able to tell if the project will be unable to meet the goal, and can cut your losses early rather than pouring more money into a project that is over-running but too far down the line to cancel.
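One simple way a team might prioritise within a fixed budget is a greedy value-per-cost ordering, as a Product Owner might do informally when ranking a backlog. The stories, values and costs below are invented; this is a sketch of the idea, not a prescription.

```python
# Hypothetical sketch of 'best product within a fixed budget': greedily
# pick the highest value-per-cost stories until the budget runs out.
# Story names, values and costs are invented for illustration.

stories = [
    # (name, business value, cost in points)
    ("login", 80, 8), ("search", 60, 5), ("reports", 40, 13),
    ("export", 30, 3), ("theming", 10, 8),
]

def best_within_budget(stories, budget):
    chosen, spent = [], 0
    # Highest value per point first: a rough Product Owner ordering
    for name, value, cost in sorted(stories, key=lambda s: s[1] / s[2], reverse=True):
        if spent + cost <= budget:
            chosen.append(name)
            spent += cost
    return chosen, spent

print(best_within_budget(stories, 20))
```

Run after every sprint against what remains of the budget, this also gives an early signal when the goal has become unreachable, which is exactly when you want the option to stop.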

Estimation is a difficult skill to master but a group is far better than an individual.

Vision without action is a daydream. Action without vision is a nightmare

Waterfall vs Agile?

Sounds a bit daft to even be having the debate; surely by now there is no one left who still thinks waterfall is the better choice for delivering complex software projects. Unfortunately that is very far from the truth: there seem to be many who still prefer waterfall, and recently I even heard it talked of in terms of the good old days. So I delved a little deeper.

In this instance the source was a statement by someone whose opinion was given weight. The comment was:

I could deliver ‘project x’ using waterfall if you gave me dedicated resources and we could lock ourselves away for 6-months.  

In waterfall the cost of planning is high, the cost of analysis is high, and the cost of change is very high. But by its nature waterfall has a clear scope (vision). The result is that everyone is clear on the priority and the project is protected: no change and no disruption, the scope doesn’t change, because everyone understands the cost of interruption.

In Agile the upfront cost of planning is low, the upfront cost of analysis is low, and the cost of change is relatively low. The result in some cases is that ‘unnecessary’ changes of direction can be frequent; the project is unprotected precisely because it can adjust, disruption can be frequent, and so the scope regularly changes, sometimes for trivial reasons or low-priority distractions.

In short, we are not comparing apples with apples. We are comparing focused, vision-led waterfall with unfocused, ad-hoc Agile.

In the example above, the team had a fairly vague goal: deliver ‘project x’, but also support ‘project y’ and deliver small increments of ‘project z’, ‘project w’ and maybe even an update to ‘project v’ too. And that plan was likely to change the next week when something else came along. Because of this, ‘project x’ was taking far longer than anticipated, and so the notion arose that delivery using waterfall would be quicker.

But the problem for me is that it was not the ‘waterfall’ part of the plan that would ensure success; it was the dedicated resources and the clear vision that were the crucial aspects. Give me a clear vision and those same dedicated resources, and I can be pretty sure I would deliver what the customer wants sooner, cheaper and more reliably using Scrum than someone else could using waterfall.

Agile is the victim of its own agility; it is like the free gift no one values. There is less pain in changing direction on an Agile project, and the team can adjust and adapt, but that doesn’t make change free and it doesn’t mean it isn’t disruptive.

Agile enables you to have flexibility in planning, it allows deferring decisions, it allows planning only small amounts, and allows changes in direction and content.  But just because you can do these things doesn’t mean you always should.

A single vision with dedicated resources is a powerful combination, Agile is a powerful tool, but as they say with great power comes great responsibility.

Agile enables you to vary the scope of your vision, to respond to change, it is not a licence to operate without a vision.

As the Japanese proverb goes:

Vision without action is a daydream.  Action without vision is a nightmare.

Who is interviewing who?

How I’d like to do it.

My advice would be to concentrate first and foremost on creating a place where people would want to work, and then on marketing your organisation to the candidate. They should really want to work for your company; that way you start with a better candidate pool, immediately giving you a better chance of finding the best employees.

Then, even if it feels impersonal, run a series of consistent, structured tests (IQ and psychometric) followed by formal, structured interview questions, with the results and responses noted and referred to an independent panel for a decision on hiring. Forget the gut feel, forget the technical questions, forget hostile interview techniques; they really don’t work. Throughout all this, ensure the candidates are treated well and made to feel important.

It sounds like hard work; it sounds like it undermines the hiring manager; it sounds incredibly time-consuming and bureaucratic, perhaps even expensive. But if your goal is to consistently hire good-quality candidates, it is going to be hard work, and you will have to accept that there are some things that cannot be effectively and consistently assessed in an interview situation.

My final thought is that if you have got your hiring right, it is very likely that the most capable, most experienced and most reliable person for a role is the one currently in it. Don’t lose them. Look after good employees: they are your company’s most valuable asset. Treat them well and do whatever it takes to keep them.


And next time you are in an interview situation, ask yourself who is interviewing who?

Typical interview styles

The interview itself.

What I have observed is that interviewing is generally composed of one or more of a very small number of techniques:

  1.  Free-form, gut-feel unstructured or semi-structured chat and questions between candidate and hiring manager.

  2.  More formal pre-defined questions structured and consistently asked to all candidates.

  3.  Some variety of standardised test, essentially IQ based.

  4.  Questions posed by a technical expert with a desire to highlight his superiority rather than assess your capability.

  5.  A technical test or series of technical questions intended to be pragmatic and fair.

  6.  Aggressive panel questioning

  7.  Psychometric testing: verbal/numeric/diagrammatic reasoning or personality tests.

  8.  Presentation by interviewer on why the candidate should work for you.

There are studies and statistics that rank the effectiveness of the different techniques for selecting good candidates; I won’t pour them out here, as I am not an expert and I’d just embarrass myself. But in my experience 1 is the most common, often combined with 4 or 5. Yet logic says that 4 is a waste of time and puts off candidates, and studies show that 1 and 5 are actually very, very poor indicators of ability and capability for the role, and do not result in long-term success.

So what works? 

2, 3 and 7 are by far the better methods of assessing candidates, followed by an independent panel decision based on the documented evidence taken from the interviews. The interviewer should not be allowed to make the decision independently, as they will display bias (subconscious or otherwise).

But 2, 3 and 7 are hard: they require yet more work on what is already a tough and time-consuming process, so they very rarely get done.

The reciprocal nature of an interview (8) is so often overlooked. If you want the best people, and you want to get the best out of them, then not only do you need to decide if they are right for you, you need to convince them that your business and team are right for them. You should be putting as much effort into impressing candidates as they put into impressing you. You are competing for them in a buoyant market.


Doing it right.

As an interviewer, my best experience was at a company that used a combination of 2, 3, 7 and 5. We put a lot of effort into interviewing, but the majority of people still failed the combination of tests, especially the technical test. The test felt ‘easy’ to us, and some candidates did well, but the results didn’t match the apparent skills of the candidates; in hindsight I think it confused the process. The problem is that technical tests are not effective at assessing ability during an interview; there are simply too many other factors at play.

Relying on structured questions, IQ tests and psychometric tests may feel clinical and impersonal, but it is very likely the best way to find the right candidate. Ego plays a role, though: when hiring for a team we are only human, and many hiring managers want to rely on their own instinct even when evidence demonstrates it is unreliable. It is hardly a surprise that a structured process so often loses out to gut feel.