
How to Identify Good and Bad Agile User Stories

Rachael Wilterdink  |  
Feb 11, 2020
 
If you read my previous blog, you’ll know the basics of a User Story. But what makes an agile user story good or bad? In this blog, I’ll cover the criteria that will help you tell one from the other.
 

What Makes a Good User Story?

The most common checklist for identifying a GOOD user story is the acronym INVEST, coined by Bill Wake. It stands for:
 
I – Independent
N – Negotiable
V – Valuable
E – Estimable
S – Small
T – Testable
 
Let me briefly explain each of these and how it relates to the quality of a User Story.
 

Independent

This seems obvious, doesn’t it? Well, in case it’s not: whenever possible, you should strive to keep each User Story independent of the others. This means sequencing your stories so there are no dependencies between them, and writing them in a way that does not create dependencies. This can be quite difficult and takes practice to get right. Sometimes dependencies are unavoidable, but do try to avoid them if you can.
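To make this concrete (a hypothetical pair of stories): “As a shopper, I can pay by credit card” and “As a shopper, I can pay by gift card” can be built and delivered in either order. Compare that to splitting the same work into “build the payment form” and “connect the payment form to the payment processor” – neither of those can ship on its own, because the second depends entirely on the first.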
 

Negotiable

The items in the Product Backlog need to be negotiable (and negotiated). Just because someone adds an item to the backlog does not necessarily mean it will be developed. Every story in the backlog should be open to question.
 
One other tip: the solution should not be specified as part of a User Story or its Acceptance Criteria. User Stories describe the “who, what, and why” – not the “how” (which is the development team’s responsibility).
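For example (hypothetical): “As a returning customer, I want to be recognized when I sign in so that I don’t have to re-enter my details” leaves the “how” open for the team to work out. “Store the customer ID in a browser cookie” prescribes a solution up front – and closes off the negotiation.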
 

Valuable

Each story should be valuable to the user. The tangible benefit should be clearly articulated, and it should align with a business goal. If a feature does not have a solid value proposition, question whether it should be done at all.
 
Also, watch out for stories that are written from the wrong perspective (such as a developer’s point of view) – these will most likely not provide any direct value to the end user.
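A hypothetical smell of this kind: “As a developer, I want to refactor the login module so that the code is cleaner.” There may be good reasons to do that work, but as written it promises nothing a user would notice. Reframing the “so that” around an outcome – for example, “so that customers can sign in without intermittent errors” – ties the same work back to visible value.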
 

Estimable

A User Story must have enough detail for it to be estimated, or sized, by the development team. If there are open questions, big gaps, or the story is simply too big, the team likely won’t be able to size it. This isn’t to say that a huge specification must be written – the story needs just enough information, just in time; any further details can be discovered during development.
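For instance (hypothetical): “As a manager, I want reporting” can’t be sized – which reports? Over what data? How often? By contrast, “As a manager, I want a weekly summary of completed orders emailed to me” gives the team enough to estimate, without spelling out every detail.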
 

Small

Stories need to be small enough to be completed within a sprint or iteration. I’ve seen some say you should target stories at about 1–2 days of work, but I have also seen teams use a rule like, “If the story has been estimated at 8 Story Points or above, it’s too big and needs to be decomposed further before being considered for pulling into a Sprint.”
 
How you size your stories will also depend somewhat on the length of your sprints and the size of your team. The main point is that they should be small enough to be completed (to your definition of done) within a single Sprint.
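As a hypothetical illustration: “As a customer, I can manage my account” is almost certainly too big for one Sprint. Splitting it into “view my account details,” “update my email address,” and “change my password” yields stories that each fit within a Sprint and each deliver something usable on their own.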
 

Testable

Stories also need to be testable. This is where the Acceptance Criteria really come into play. Ideally, they can be used as a further quality checklist (on top of your overall definition of done) that a tester can check for a pass/fail result. If anything is ambiguous or unclear, if scenarios are missing, or if unhappy paths are omitted (unless intentionally), you may get suboptimal results. Ensure that you have clear, crisp, and concise Acceptance Criteria for each of your User Stories.
 
(You can also use this as a “pointer” to figure out whether you need to split a story; if you find yourself writing far too many scenarios or criteria, it’s probably a sign that you have too much going on in that story.)
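To illustrate, acceptance criteria are often written in the Given/When/Then format popularized by Gherkin. A hypothetical set for a password-reset story might read:
 
Given a registered user on the password-reset page
When she submits her registered email address
Then a reset link is emailed to that address
And the link expires after 24 hours
 
Each line is something a tester can verify with a pass/fail result, and the boundary (24 hours) is stated explicitly rather than left to interpretation.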
 

What Makes a Bad User Story (aka User Story “Smells”)?

Now it’s time to explore the flip side of the coin. Here’s what you should look for to identify BAD stories (or, in the parlance of Agile, Story “Smells”).
 

Stories violate any of the INVEST quality criteria

 
This includes stories that:
 
  • Depend on other stories
  • Haven’t been discussed, questioned, or negotiated (or the conversation was skipped)
  • Have no value to the customer or end users
  • Don’t have enough information to be sized or estimated by the team
  • Are too big
  • Are not testable
 

Stories are written from an improper perspective

 
Rather than being written from a customer or end-user perspective, stories are:
 
  • Written from a Product Owner’s perspective (WRONG)
  • Written from a Developer’s perspective (WRONG)
  • Written from a generic user’s perspective, without considering other roles
 

Stories are poorly sliced

 
  • Stories are split horizontally (by technical layer) instead of vertically (see the example below)
  • They are sliced in ways that don’t deliver value
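A hypothetical illustration of the difference: a horizontal split produces stories like “build the database tables,” “build the service layer,” and “build the screen” – none of which a user can touch until all three are done. A vertical slice such as “As a customer, I can search products by name” cuts through every layer but delivers working, testable value on its own.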
 

Stories are at the wrong level of detail if they...

 
  • Include too much detail
  • Don’t include enough detail
  • Forget about progressive elaboration
  • Don’t include the “why” part of the story – they just state what the user wants (see the example below)
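The standard story template makes the “why” explicit. A hypothetical example: “As a frequent flyer, I want to check in online so that I can skip the counter queue.” Drop the “so that” clause and the team loses the context it needs to question the scope or propose a better solution.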
 

Acceptance Criteria…

 
  • Are missing
  • Don’t include conditions of satisfaction (boundaries for testing – see the example below)
  • Specify the solution (they shouldn’t)
  • Include the look and feel (they shouldn’t)
  • Don’t include enough information to be truly “suitable for development”
  • Have open questions or gaps
  • Aren’t backed by a definition of “Ready” for stories
  • Don’t include items such as non-functional requirements – which are often overlooked (or could be included in the team’s definition of done, since they often apply broadly across a project)
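On boundaries, a hypothetical contrast: “The upload should be fast” is untestable, while “A file of up to 10 MB uploads in under 5 seconds; files over 10 MB are rejected with a clear error message” states the conditions of satisfaction – including the unhappy path.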
 
 
As you can see, there are many issues to watch out for when evaluating the quality of User Stories (and their Acceptance Criteria). These clues will also help us split User Stories in appropriate ways – keeping them small, but still valuable.
 