Many of us are familiar with the notion that stories should be expressed in terms of the value delivered (the “Why”), and with how important the “Why” is for maximizing the outcome for the customer:
Who: As a …
What: I want to …
Why: So that …
When we talk of good stories we refer to INVEST as a means of validating that our stories are well written. I think this is a great tool for helping write user stories. We may even include Acceptance Criteria to help the team confirm that the story has been completed in a way that allows the value to be realized. But I’d like to propose going a step further.
Expected vs Actual
However the story is written, the assumption is that the PO has determined the value of the story and prioritized it accordingly. But value is a nebulous term encapsulating all sorts of things, many of which are assumptions, plain guesses, or personal preferences. We also assume that the story will successfully deliver the value we intend, rather than accepting that this is a hypothesis: the story may or may not achieve our goal.
Up to this point we are assuming that the Product Owner always makes the right decisions, and that their assumptions about the value delivered by a story are infallible. Speaking as a Product Owner, that is a rather hopeful assumption. Value judgements are often little more than educated guesses, and they are certainly very subjective opinions on value. Even market research is guesswork to some extent, and particularly with new products or internal systems there is little opportunity for effective predictions of value. I don’t want to take anything away from the PO and their authority to make these judgements; that is, after all, their role. But as a PO I would very much value a feedback loop that enabled me to validate whether my decisions were right or wrong, and gave me the opportunity to course correct accordingly.
In other words, we have to make judgement calls, but getting feedback on the accuracy (or otherwise) of those decisions would be hugely beneficial.
So what can we do?
We could extend the Acceptance Criteria to include some additional validation. Acceptance Criteria help us validate that a story is implemented the way we intend; they do not, however, always enable us to measure whether the value is fully realized.
For example:
Assumption: We believe that adding a picture to listings on a product website will increase sales by 10% (our market research says so).
As an online customer,
I want to see pictures of products
So that I can make more informed buying decisions (and thus buy more products)
(Business Value: Marketing estimates sales increase of 10%)
Our Acceptance Criteria may stipulate the positioning of the picture, its size, and what to display if a picture is not available. We may even add some performance Acceptance Criteria, such as an average page load time. But that is not enough to validate that the value was achieved.
How do we validate that a story delivers value?
How do we validate that this story actually delivers the value we expect? Whilst we can be confident that having a picture fulfils some aspects of the value (the better-informed decisions), it might be that we are missing out by not having the ability to zoom in on the picture. Or it may be that our users are not bothered by the picture at all and would prefer another feature, such as the lead time or the quantity in stock.
Validation Criteria
What if, as part of this story, we not only implement the feature to show the picture, but also include analytic measurements of page load times, and even a measurement of the number of sales of a product (or products) per day? We could then show the new feature to 50% of users, leave the other 50% without the pictures, and compare the results. Or we could conduct focus groups or usability studies on this feature to get more subjective but detailed feedback.
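As a rough illustration, here is a minimal sketch in Python of that 50/50 split and the sales comparison. Everything in it is a hypothetical assumption for the sake of the example: the variant_for bucketing function, the sale events, and the way lift is computed are a sketch, not a real analytics pipeline.

```python
import hashlib
from collections import Counter

def variant_for(user_id: str, experiment: str = "product-pictures") -> str:
    """Deterministically bucket each user into 'control' or 'pictures' (50/50)."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "pictures" if int(digest, 16) % 2 == 0 else "control"

# Hypothetical sale events captured by our analytics: (user_id, product_id).
sales = [("u1", "p9"), ("u2", "p9"), ("u3", "p4"), ("u4", "p9")]

counts = Counter(variant_for(user_id) for user_id, _ in sales)
# With equal-sized buckets, raw sale counts approximate per-variant conversion.
control = counts["control"] or 1  # guard against division by zero in this sketch
lift = (counts["pictures"] - control) / control
print(f"Observed sales lift: {lift:+.0%} (hypothesis was +10%)")
```

The deterministic hash means a returning user always lands in the same bucket, which keeps the comparison stable without storing any assignment state.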
As part of the story we could add an additional layer of Validation Criteria. This would be similar to Acceptance Criteria, but it would be a way to measure the value actually realized by the user.
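For our picture example, the story might then look something like this (the thresholds below are illustrative assumptions, not figures from the market research):

As an online customer,
I want to see pictures of products
So that I can make more informed buying decisions
Validation Criteria:
- Sales to users shown pictures are at least 10% higher than sales to the control group over the first month.
- Average page load time increases by no more than 200ms.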
What do we gain?
Would including functionality or activities that let us measure whether we have delivered the value we expect make the stories better? Would that information help shape our product and build a better product? Would it help us prioritize our backlog as we gain a better understanding of value actually delivered vs value expected to be delivered?
We could either add stories for these measurements or consider them to be encapsulated in the delivery of this story.
Essentially we are asking whether feedback is valuable and, if it is, how valuable it is to us.
Return on Investment
When discussing this with a colleague, the first response was that this adds more upfront work, and that is a challenge for the ‘lazy’. In Agile, ‘lazy’ is a virtue, so this is important feedback.
Naturally there is an overhead in this, but as with all feedback loops the information is valuable; knowledge is power. We just need to tune our efforts – our feedback volume – to the right level to get valuable information with the minimum necessary effort. This is another example of an Andon Cord: if the effort is too high, or it produces either too much information or nothing of value, then we need to retune our feedback loops to give us enough valuable feedback to act on.
Many of these measurements will also be applicable to multiple stories, so the investment may end up being very limited while the feedback is far-reaching. Once automated, the ongoing feedback can be tweaked to add extra sensors that give us more and more valuable information.
Some examples of value-assessing measurements:
- Website analytics: hit rates, click-through rates, hot spots etc. The cost of these is minimal, and they can often be applied even after development (see the sketch after this list).
- We could write stories to build measurement into our application: how our product is used, or how it performs.
- We could add usability testing, focus groups, or surveys of users.
- By using feature flags we could set up effective A/B testing (as sketched above) to get feedback on structured hypothesis validation.
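To show how cheap the first of these can be, here is a minimal sketch of click-through measurement. The event stream and the page names are hypothetical; a real site would more likely use an off-the-shelf analytics package than hand-rolled counting.

```python
from collections import Counter

# Hypothetical event stream captured by the site: (event_type, page) pairs.
events = [
    ("view", "/products/widget"),
    ("view", "/products/widget"),
    ("click", "/products/widget"),
    ("view", "/products/gadget"),
]

views = Counter(page for kind, page in events if kind == "view")
clicks = Counter(page for kind, page in events if kind == "click")

for page, view_count in views.items():
    # Click-through rate: the share of views of this page that led to a click.
    print(f"{page}: CTR {clicks[page] / view_count:.0%}")
```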
Please note that not all measurements need to be software driven – increased subscribers, say, may be measured entirely independently of your application.
Vision
But ultimately the biggest change would be in your initial vision creation: do you know your product goals, and do you have a way to measure success?
Is your goal increased sales, time saved, efficiency improvements, increased users, or cost savings? And regardless of your goals, do you have a plan for measuring whether your product is achieving them?
This may seem like stating the obvious, but I cannot count the number of projects I have been on where the stated aims were cost saving or revenue generation, and numbers were stated, yet after the project was authorized no one ever went back and assessed whether the project was a success or achieved any of its aims. Having an aim was enough to get the project started. Claiming a 10% increase in sales or a reduction in costs should be something you can measure – so measure it.
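As a trivial sketch of what “so measure it” could mean in practice (the numbers and the goal_met helper are purely illustrative):

```python
def goal_met(baseline: float, measured: float, target_lift: float) -> bool:
    """Did the measured figure achieve the lift promised in the business case?"""
    return (measured - baseline) / baseline >= target_lift

# Illustrative numbers: monthly sales before and after launch, against a +10% claim.
print(goal_met(baseline=1000, measured=1080, target_lift=0.10))  # False: only +8%
```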
Ironically, being able to map a story to one of your stated product goals could be another way to filter out unnecessary stories: if the expected impact does not serve one of your product goals, why do the work?
Summary
This is a very simple change to your story writing process – an extra little consideration that could have significant implications for the success of your product: the addition of a very valuable feedback loop on value delivered (rather than value expected).
As a …
I want to …
So that …
And I can verify this by …
Post Script:
I presented this notion to a Product Ownership Meetup and the response was amazing: the conversation was full of so many great ideas, and examples of how some of the product owners are already putting this into practice – not explicitly, but by making usability and measuring usage a key part of their Product Ownership methodology. I love the conversations this group has each month, but this month I came away with so many new things to think about.
The highlight of which was Goldratt User Stories – which will be the subject of my next blog.
Tell me how you measure me and I’ll tell you how I will behave – Eli Goldratt
Very good points. Fin Goulding and I made some suggestions in 12 Steps to Flow. To summarise, we said that if you look at the flow in terms of upstream (emergent or overlooked customer needs and assets), downstream (post-delivery feedback), and midstream (development), then you have an end-to-end process for assessing value rather than just eliminating waste. All work needs to be shaped by goals, i.e. diligently breaking work down so that it helps deliver a goal (a goal being an aspiration to help customers succeed). That also means goals are shaped upstream, an area IT has traditionally not focused on enough, but in future will have to.