Advice on splitting stories

One of the most common reasons we reject candidates interviewing for coaching or product ownership roles is an inability to grasp the purpose and value of splitting stories effectively, and in particular a lack of understanding of vertical slicing.

This is also a topic I am often asked about at meetups and speaking engagements, yet one I have struggled to explain effectively. The conversation often narrows into a technical example of a particular technique or a difficult story, or it becomes too abstract for people to apply. In short, it is a large and complex topic.

But this video sums up the notion of story splitting, and in particular vertical slicing and the ‘why’ behind the method, so perfectly that I felt I had to share it.

“Successful problem solving requires finding the right solution to the right problem. We fail more often because we solve the wrong problem than because we get the wrong solution to the right problem.”

Russell Ackoff

Dr Ackoff sums up the issue with the analogy of the parts of a car: if the purpose of a car is to get you to a destination, then an engine alone is worthless. Even the best-designed and most efficient engine cannot get you to your destination. Until it is connected to the minimal set of features needed to achieve the user’s purpose, it is useless and remains useless.

Building any feature that does not work end to end adds no value, and building any feature that does not support the purpose of the user also adds no value. More crucially, it is often the interaction between layers or between components that is the most complex aspect of any development, be it a car or software. The notion that we can build an engine and a gearbox, fit them together later, and expect them to work seamlessly is laughable. Yet I hear it all the time in software design.

A system must have an aim. Without an aim, there is no system.

W. Edwards Deming

We’ll build the database first then add the other layers, or we’ll work on an API layer 3-6 months in advance of the front end. It is as if we assume that the integration is the easy bit, and worse is the assumption that we have anticipated every need of the user (and omitted everything they don’t need) before we design and build the interface, and before we ask for any feedback. And yet as software designers, planners, and project managers we repeat this error over and over, never learning from the pain of not using vertical slicing to split stories.

I believe our fundamental error is focusing on the blocks of functionality (the components of the car) rather than the interactions, and rather than focusing on the purpose of the tool and on user feedback, we plan for efficiency of the workforce. The result is an optimized workforce and an inefficient workflow. We create a sub-par product with components that work efficiently in isolation but do not work effectively together, and generally this results in turf wars over interfaces that do not match the use cases and last-ditch efforts to fit square pegs into round holes.

We can learn so much from Dr Ackoff: software alone is not the system; software is a tool, and it becomes a system only when it is in use. The only way we can know if the software is effective is to put working software in front of a user, let them use it, and gather their feedback. So the only good way to split a story is in such a way that you are able to get feedback from the user that helps shape the design or assists in making decisions.

If a story cannot lead to feedback or use, then it has no value; it becomes inventory or work in progress, a liability rather than an asset. That database or API layer that is built with nothing utilizing it is not benefiting you: it is waste, an over-engineered liability, and the pain comes when you integrate it with other components. This extends to unused data fields or unused endpoints; “we know we will need them later” is a poor excuse for creating additional WIP (work in progress).

Learning is not compulsory… neither is survival

W. Edwards Deming

We as Agile practitioners can learn so much from Dr Deming and Dr Ackoff. We are building systems, and the development process itself is a system; if we applied a little more systems thinking, I believe we could be far more effective.

But as Deming said, “Learning is not compulsory… neither is survival.”

Goldratt User Stories

As Product Owners we should be building products that deliver on the Vision and a set of defined objectives or success criteria. But often our plan is not closely mapped to those criteria, or it can be difficult to measure whether we have achieved our Vision or delivered on our objectives.

Let’s change the way we write stories

So I wondered: could we build on the desire to deliver measurable value by changing the way we both plan projects/products and write our user stories?

 

Tell me how you measure me and I’ll tell you how I will behave.

Eli Goldratt

 

The notion is that rather than defining our user stories in terms of behaviour, actions, or activities (where we primarily focus on the ‘what’ we should change), our starting point should be defining the “needle we want to move”. Let us start with the measurement of success: understanding how we will be measured and identifying behaviour that impacts that measurement.


For example, our goal is to increase the number of items per basket for online sales; our measurement is therefore the average number of items sold per transaction.

Equally it could be to increase the average value of the basket per transaction, or to increase the number of units sold of high-profit products.
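To make that "needle" concrete, here is a minimal sketch (not from the original post) of how those two measurements might be computed from transaction data; the record shape and field names are purely illustrative assumptions:

```python
# Minimal sketch: compute the "needles" we want to move from transaction data.
# The transaction records and field names here are invented for illustration.

from statistics import mean

transactions = [
    {"items": 2, "value": 34.50},
    {"items": 1, "value": 12.00},
    {"items": 4, "value": 78.25},
]

avg_items_per_transaction = mean(t["items"] for t in transactions)
avg_basket_value = mean(t["value"] for t in transactions)

print(f"Average items per transaction: {avg_items_per_transaction:.2f}")
print(f"Average basket value: {avg_basket_value:.2f}")
```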

Hypothesis rather than Story

Our ‘story’ then would be to ask the Product Owner and team to identify one or more hypotheses for ways to impact that measurement, the notion being that a hypothesis must be tested before the story can be considered done.

This could take the form of a brainstorming session to identify the more traditional stories, or it could simply be left to the team (in conjunction with the PO) to determine the best way (possibly two or three alternatives) to achieve the desired outcome after the story has been pulled.

This is building on the principle:

The best architectures, requirements, and designs
emerge from self-organizing teams.

Already happening?

You could argue that this is already the role of the Product Owner and that the Product Owner already does this: in identifying and prioritizing stories they have already considered the impact. In some cases I would agree with you; in a few cases I have seen products or projects with clear objectives in terms of the goal of the product, and sometimes even a measure of success in terms of measurable objectives.

I have yet to see the end goals mapped, even at a high level, to the stories and features. In most cases success criteria are woolly or undefined, and more often than not the Product Owner and team will brainstorm stories (story mapping) without reference to the goals; even Impact Mapping often does not extend to measurable goals.

That is not to say that the Product Owner is not considering this holistically; the measurements are generally part of the consideration, but many stories creep in that have no material benefit or impact on the success criteria of the product.

(Image from the book User Story Mapping by Jeff Patton)

Measurements for Goals as the basis for Story Mapping

For reference, I love Story Mapping; I consider it my favourite tool for Product Ownership and have been known to rave a little too much about it on occasion.

The starting point in story mapping is typically to identify themes of user activities (you could call them epics, but don’t get hung up on terminology; they are essentially big rocks), and then we progressively break them down into smaller rocks arranged underneath the big rocks.

We can then prioritise the work in the context of the whole, rather than being constrained by a linear backlog, which makes the context difficult to visualise.

It is also a fantastic tool for explaining the product, for release planning and forecasting, and so on. Like I say, it is one of my favourite tools.

But what if, instead of starting with activities or big stories, we started with our measurable goals or success criteria (e.g. new subscribers per month, revenue per visit, time to achieve an objective using the application) and then identified the stories that best enable us to achieve each goal?

There may very well be some overlap, so there may be the notion of tagging a secondary objective, but imagine if this helps us weed out stories that are not taking us towards our goal or that have limited value in the context of our goal.

We would ask which of these stories has the potential to move the needle the most, and prioritise accordingly.


Make it measurable

Focusing our efforts on measurable impact on the product should implicitly be the goal of any Product Owner, so why not make it our explicit goal?


How do I know I am delivering Value?

Many of us are relatively familiar with the notion that stories should be expressed in terms of value delivered (the Why), and with how important the “Why” is for being able to maximize the outcome for the customer:

 

Who: As a …

What: I want to …

Why: So that …

 

When we talk of good stories we refer to INVEST as a means of helping validate that our stories are well written, and I think this is a great tool for helping write user stories. We may even include Acceptance Criteria to help the team confirm that the story has been completed in such a way that the value is best able to be realized. But I’d like to propose going a step further.


Expected Vs Actual

Whichever way the story is written, the assumption is that the PO has determined the value of the story and prioritized it accordingly. But value is a very nebulous term and encapsulates all sorts of things, many of which are assumptions, plain guesses, or personal preferences. We are also assuming that the story will successfully deliver the value we intend, rather than accepting that it is only a hypothesis that it will achieve our goal.

Up to this point we are assuming that the Product Owner is always making the right decisions, and that their assumptions about the value delivered by a story are infallible. Speaking as a Product Owner, that is a rather hopeful assumption: value judgements are often little more than educated guesses, and they are certainly very subjective opinions on value. Even market research is guesswork to some extent, and particularly with new products or internal systems there is little opportunity for effective predictions of value. I don’t want to take anything away from the PO and their authority to make these judgements; that is, after all, their role. But as a PO I would very much value a feedback loop which would enable me to validate whether my decisions were right or wrong and give me the opportunity to course correct accordingly.

In other words, it is necessary for us to make judgement calls, but getting feedback on the accuracy (or otherwise) of those decisions would be hugely beneficial.


So what can we do?

We could extend the Acceptance Criteria to include some additional validation. Our Acceptance Criteria help us validate that a story is implemented the way we intend it to be implemented; they do not, however, always enable us to measure whether the value is fully realized.

For example:

Assumption: We believe that adding a picture to listings on a product website will increase sales by 10% (our market research says so).

As an online customer,

I want to see pictures of products

So that I can make more informed buying decisions (and thus buy more products) –

(Business Value: Marketing estimates sales increase of 10%)

 

Our Acceptance Criteria may stipulate the positioning of the picture, its size, and what to display if a picture is not available. We may even add some performance Acceptance Criteria, such as average page load time. But that is not enough to validate that the value was achieved.


How do we validate that a story delivers Value?

But how do we validate that this story actually delivers the value we expect? Whilst we can be confident that having a picture fulfils some aspects of the value – the better-informed decisions – it might be that we are missing out by not having the ability to zoom in on the picture, or it may be that our users are not bothered by the picture at all and would prefer another feature, such as the “lead time” or “quantity in stock”.

Validation Criteria

What if, as part of this story, we not only implement the feature to show the picture but also include analytics on page load times, and even a measurement of the number of sales of the product (or products) per day, and then give 50% of users the new feature and 50% no pictures and compare the results? Or we could conduct focus groups or usability studies on this feature to get more subjective but detailed feedback.

As part of the story we could add an additional layer of Validation Criteria. This would be similar to Acceptance Criteria, but would be a way to measure the value actually realized by the user.
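As a rough sketch of what such Validation Criteria could look like in practice, the snippet below splits users deterministically into a ‘pictures’ and a ‘no pictures’ variant and compares average daily units sold per variant. The function names, record shapes, and sample data are hypothetical illustrations, not a prescribed implementation:

```python
# Sketch of validating the picture hypothesis: half the users see product
# pictures, half do not, and we compare average daily units sold per variant.
# All names and data shapes below are illustrative assumptions.

import hashlib

def variant_for(user_id: str) -> str:
    """Deterministically assign a user to 'pictures' or 'no_pictures' (50/50 split)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return "pictures" if int(digest, 16) % 2 == 0 else "no_pictures"

def average_daily_sales(orders):
    """orders: iterable of dicts with 'variant', 'units' and 'date' keys (illustrative)."""
    totals = {"pictures": 0, "no_pictures": 0}
    days = {"pictures": set(), "no_pictures": set()}
    for order in orders:
        v = order["variant"]
        totals[v] += order["units"]
        days[v].add(order["date"])
    # Average units per active day, per variant (guard against empty variants).
    return {v: totals[v] / max(len(days[v]), 1) for v in totals}

# Example usage: did the 'pictures' variant move the needle?
orders = [
    {"variant": variant_for("user-1"), "units": 3, "date": "2024-05-01"},
    {"variant": variant_for("user-2"), "units": 1, "date": "2024-05-01"},
]
print(average_daily_sales(orders))
```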


What do we gain?

Would including functionality or activities that enable us to measure whether we have delivered the value we expect make the stories better? Would that information help shape our product and build a better product? Would it help us prioritize our backlog as we get a better understanding of value actually delivered vs value expected to be delivered?

We could either add stories for these measurements or consider these to be encapsulated in the delivery of this story.

Essentially we are asking whether feedback is valuable and, if it is, how valuable it is to us.


Return on Investment

When discussing this with a colleague, the first response was that this adds more upfront work, and that is a challenge for the ‘lazy’. In Agile, ‘lazy’ is a virtue, so this is important feedback.

Naturally there is an overhead in this, but as with all feedback loops the information is valuable; knowledge is power, and we just need to tune our efforts – our feedback volume – to the right level to get valuable information with the minimum necessary effort. It is another example of an Andon cord: if the effort is too much, producing either too much information or nothing of value, then we need to retune our feedback loops to give us enough valuable feedback to act on.

Also, many of these measurements will be applicable to multiple stories, so the investment may end up being very limited while the feedback is far-reaching; and once automated, the ongoing feedback can be tweaked to add extra sensors that give us more and more valuable information.

Some examples of value assessing measurements:

  • Website analytics: hit rates, click through rates, hot spots etc.  The cost of these is minimal and is often something that can be applied even after development.
  • We could write stories to build measurement into our application of how our product is used, or of the performance of our product.
  • We could add usability testing, focus groups, or surveys of users.
  • By using feature flags we could set up effective A/B testing to get feedback for structured hypothesis validation.

Please note that not all measurements need to be software driven – increased subscribers, say, may be measured entirely independently of your application.


Vision

But ultimately the biggest change would be in your initial vision creation: do you know your product goals, and do you have a way to measure success?

Is your goal increased sales, time saved, efficiency improvements, increased users, or cost savings? And regardless of your goals, do you have a plan for measuring whether your product is achieving them?

This may seem like stating the obvious, but I cannot count the number of projects I have been on where the stated aims were cost saving or revenue generation, numbers were stated, and yet after the project was authorized no one ever went back and assessed whether the project was a success or achieved any of its aims. Having an aim was enough to get the project started. Claiming a 10% increase in sales or reduction in costs should be something you can measure, so measure it.

Ironically, being able to map each story to one of your stated product goals could be another way to filter out unnecessary stories, if the expected impact is not one of your product goals.

Summary

This is a very simple change to your story writing process – an extra little consideration that could have significant implications for the success of your product: the addition of a very valuable feedback loop on value delivered (rather than value expected).

As a …

I want to …

So that …

And I can verify this by …

 

Post Script:

I presented this notion to a Product Ownership meetup and the response was amazing: the conversation was full of so many great ideas, and examples of how some of the product owners are already putting this into practice – not explicitly, but they have made usability and measuring usage a key part of their Product Ownership methodology. I love the conversations this group has each month, but this month I came away with so many new things to think about.

The highlight of which was Goldratt User Stories – which will be the subject of my next blog.

Tell me how you measure me and I’ll tell you how I will behave – Eli Goldratt

Lies; Damn Lies; and Forecasting…

NoEstimates in a Nutshell

NoEstimates has gained a lot of traction over the last few years, with good reason: it is primarily about adopting Agile properly, delivering the valuable work in order of priority and in small chunks, and by doing so eliminating the need for a heavy-duty estimation process. If we are only planning for the next delivery we can forecast reliably.

But sadly that is generally not good enough, and some level of forecasting is often requested. So NoEstimates came up with a very useful and low-cost method of forecasting. However, it has brought with it a whole host of misunderstandings, most of which are not from the book; the author must be as frustrated as anyone by the misinterpretation of his proposal. This has led to resistance from many (including me) to adopting this method for forecasting. I am all for delivering value quickly in small chunks of prioritized work, but slogans that are used to excuse bad behaviour are damaging and hard to resolve, especially when they seem so simple.

My biggest bugbear, and one I have covered previously, is that many have interpreted NoEstimates as an excuse to skip story refining entirely. This was not in the book, but nevertheless you can see any number of articles on the internet professing how adopting NoEstimates has saved them from wasteful refining meetings. The misconception is that if you don’t need to estimate the story then the act of understanding the story is no longer required, when actually the author was suggesting that you don’t need to refine all work up front and could defer deeper understanding until it becomes relevant – the last responsible moment.


Story Writing and being Estimable

I encourage those writing stories to use the INVEST model for assessing the suitability of a story, and in that model the ‘E’ is Estimable. That doesn’t mean you must actually estimate the story, just that you ask yourself whether the story is clear and well understood enough to estimate if asked: are there open questions? Is it clear what the acceptance criteria are and that they can be met? There may be a subtle distinction there, but NoEstimates does not offer an alternative to writing and refining good stories. It is just a method for simple forecasting that encourages deferring effort until it is necessary.

How does NoEstimates work?

Caveats aside, I will try to give a very high-level summary of how NoEstimates forecasting works, and when and where it doesn’t. I shall do so via the medium of potatoes.

Preparing Dinner

I have a pile of potatoes on the side and I am peeling them ready for a big family dinner. My wife asks me how much longer it will take. By counting how many potatoes I have peeled in the last 5 minutes (10) and counting how many potatoes I still have left to do (30), I can quickly and simply calculate a forecast of 15 minutes.

That is NoEstimates forecasting in a nutshell; it really is that simple.
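For anyone who wants the arithmetic spelled out, here is a minimal sketch of that throughput calculation; the function name and parameters are illustrative only, not part of any NoEstimates tooling:

```python
# Minimal sketch of the potato arithmetic: forecast time remaining from
# observed throughput.

def forecast_minutes(completed: int, elapsed_minutes: float, remaining: int) -> float:
    """Remaining time = remaining items / observed rate (items per minute)."""
    rate = completed / elapsed_minutes   # 10 potatoes / 5 minutes = 2 per minute
    return remaining / rate              # 30 potatoes / 2 per minute = 15 minutes

print(forecast_minutes(completed=10, elapsed_minutes=5, remaining=30))  # -> 15.0
```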

Assumptions

However, the mathematics relies on a certain set of assumptions:

1. I did not apply any sorting criteria to the potatoes I selected (e.g. I wasn’t picking only small or only large potatoes); we assume my selection was random, or at least consistent with how I will behave in the future.

2. The team doing the work doesn’t change; if my son were to take over to finish the job he may very well be faster or slower than me, and my forecast would not be useful.

3. We also assume that I will not get faster.

4. We assume that all potatoes in the backlog will be peeled, and no others will be added. If my wife asks me to peel more potatoes or to do the carrots too, the forecast will no longer apply and will need revising.

So there we have it, a very simple and surprisingly accurate method for forecasting future work. But do you see any flaws in the system?

Flaws in the system

Flaw 1. Comparing potatoes with potatoes

The first flaw is that I am getting the potatoes ready for roasting, so I want them to be broadly similar in size. So when I peel a potato I am also sometimes slicing it: some potatoes only need peeling, others may be sliced once, and others more than once. Some potatoes are bad and I throw them away.

If my wife comes along, sees my pile of potatoes, and asks how much longer it will take, I can look at the pile of pieces I have completed in the last 5 minutes (18) and I can count the potatoes I still have left to do (30). The problem is I don’t know how many unpeeled potatoes were needed to produce those 18 peeled and sliced pieces, so I am not comparing like for like. To be able to give this estimate I would have needed to count how many unpeeled potatoes I had used, information I don’t have. Maybe I could take a guess and then use that guess to extrapolate a forecast, but that sounds like guesswork rather than forecasting.

Flaw 2. Forecasting an unknown

Let’s assume that I am producing 10 peeled potatoes in 5 minutes, and I am asked to give a forecast as to when I will be done. But so far I have been grabbing a handful of potatoes at a time, peeling them, and then going back for more; one could say that my backlog of work is not definitive. We have a whole sack of potatoes but I won’t use them all for this one meal; I am simply adding work as I need it, my aim being to judge when I am satisfied that I am done and can start cooking. It is very difficult for me to judge when the sack will be empty or when I have prepared enough for dinner.

Flaw 3.  Changing and evolving work

It is a big family dinner and Uncle Freddie has just called to say he will be coming, so we need to add more food; Aunt Florance eats like a bird, so it is probably not worth doing a full portion for her. And the table isn’t really big enough for everyone, so maybe we should do an early meal for the kids first. The point here is that simple forecasting only works if you have a reasonably good assessment of the work still to be done; if your backlog of work is evolving, with work being added or removed, then the forecast will be unstable.

Flaw 4.  Assuming consistency

When selecting work to do next I have a tendency to choose the work that will bring me the most value for the least effort, the highest ROI, so in this case I may choose the small potatoes first: less peeling and less chopping. But that means that if I count my completed work and use that to forecast my future work I will end up underestimating how much is left; the backlog has some really big, awkwardly shaped potatoes that will take far longer to do, but my forecast is based on only doing small, simple potatoes.

Doesn’t this apply to all forms of estimates and forecasts?

Flaws 2 and 3 apply to any form of forecasting; they are not unique to NoEstimates. Flaw 1 and Flaw 4 could potentially be mitigated with the use of T-shirt sizing or story points, but to do so requires a level of upfront effort – effort that is not spent on peeling potatoes, so it may well be considered waste, unless you see value in a more reliable forecast.

For me, Flaw 1 is my main objection to NoEstimates (beyond the belief that refining is unnecessary). When stories are refined and better understood it is normal to split or discard stories, and often to add stories as the subject becomes better understood. So any forecasting tool that uses a metric based on counting refined stories to predict a backlog of unrefined stories risks oversimplifying the problem. But because the maths is so simple it can lead to a confidence level that exceeds the quality of the data. These assumptions based on flawed data get even worse when you use a tool like Monte Carlo forecasting, which applies a further confidence level to the forecast. Giving a date combined with a confidence level adds such a degree of validity and assurance that it is easy to forget that a forecast based on duff data will result in a duff estimate – no matter how prettily we dress it up.
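To illustrate what that extra layer typically involves, here is a generic sketch of Monte Carlo throughput forecasting (not the author's method, nor anything specific to NoEstimates tooling): it resamples historical weekly throughput to produce percentile outcomes. The throughput samples and backlog size are invented, and the output can only ever be as trustworthy as those inputs:

```python
# Generic Monte Carlo throughput forecast: resample historical weekly
# throughput to estimate how many weeks the remaining backlog might take.
# All inputs below are invented for illustration.

import random

def simulate_weeks(weekly_throughput, backlog_size, runs=10_000):
    results = []
    for _ in range(runs):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            remaining -= random.choice(weekly_throughput)  # resample past weeks
            weeks += 1
        results.append(weeks)
    results.sort()
    # Report the 50th and 85th percentile outcomes as "confidence levels".
    return results[int(runs * 0.5)], results[int(runs * 0.85)]

p50, p85 = simulate_weeks(weekly_throughput=[4, 6, 3, 7, 5], backlog_size=60)
print(f"50% of runs finished within {p50} weeks, 85% within {p85} weeks")
```

Note that the simulation only ever reshuffles the same historical samples; if those samples count refined stories while the backlog is full of unrefined ones, the percentiles inherit that flaw.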

Summary

Forecasting is risky at the best of times, especially in Agile, where it is our goal to have the work evolve and change in order to give the customer what they truly want. A forecast needs to be understood by both parties and accepted as an evolving and changing metric; anyone expecting a forecast to be a commitment or to be static is likely to be disappointed. Just take a look at the weather forecast: the week ahead changes day by day, and the further away the forecast the more unreliable it becomes. Understanding the limits of the forecasting method is crucial. A simple tool like NoEstimates is fantastic IF the assumptions can be satisfied; if they cannot, then the forecast will be unreliable.

It is probably also true that your forecast will improve if you spend more effort understanding the work. Time spent refining the stories will improve your knowledge. But no forecast can reliably predict work you do not yet know about. The question as always is “What problem are you trying to solve by forecasting?” That will guide you in determining whether the up front effort is worth it.

Related articles:

Why I think estimating isn’t waste
Demystifying story point estimation

What is the Minimum Viable Product?

A question came up this week following a discussion on story writing that I think bears further exploration.

What really is the Minimum Viable Product? 

To explain my thoughts I will use the building of a Skyscraper as an analogy.

The project vision is to build the UK’s tallest building. To achieve that it must be over 1000ft tall and 88 floors.

What is the MVP?  There are many possible answers to this, but the first two responses that spring to mind are:

1. My MVP is over 1000ft tall and 88 floors; without that it simply fails to meet my vision.

But let’s say I underestimated the costs and after 65 floors I run out of money. I have been agile and each of the floors was a complete deliverable; the building is perfectly functional at 65 floors, in fact it was probably functional at one floor. So surely at any point it was an MVP?


2. My absolute worst-case-scenario MVP: a single floor.

This is where things get confusing. No serious customer would consider a one-floor building with massive land costs and hugely expensive foundations ‘viable’; the project would never be signed off. As such this is not viable. However, I’d still suggest aiming for the ability to deliver in phases, e.g. a delivery after each floor.

In this situation I’d say an MVP is the minimum that a customer would accept. And that is a conversation: would he agree to an MVP of 88 floors but a first-phase goal of 20 floors, with the ability to expand later? As you can imagine, the MVP is likely to change according to the circumstances. Our goal is to delight the customer, but our plan is to build the product in such a way that if at any point he is satisfied, or better yet delighted, we can ‘deliver’ immediately.


My point – hopefully not lost in the mix – is that an MVP is about thinking about the minimum that a customer would find useful, and usually this is closely related to the original vision. In the case of this building, what if the last 10 floors were empty, unfinished shells? Or, similarly, we leave the project in a state where it is valuable as is and the vision can be achieved in a future project – the building would still meet the vision, and those floors could be finished later.

Having an MVP in mind is useful and may help with some decisions, but every project is different and the MVP will likely evolve with the project. A good Product Owner or BA will be able to work out what the true MVP is and why. The PO can focus on the customer’s evolving needs so that the customer gets the best ROI at each stage, but most of all they will be aware of what solution would delight the customer.

An MVP is a tool like any other; in most cases it is a fallback position, not the goal.