In agile software development, user stories are everywhere. These informal, natural-language descriptions of a software system's requirements put the focus on the end-users and emphasize the value we aim to create for them. They are meant to facilitate conversations between the development team and its stakeholders.
The strengths of user stories – being user-centric, informal, and brief – happen to also be their weaknesses. By their very nature, user stories cannot serve as a formal agreement between stakeholders and the development team. They remain open to interpretation and may lack details that are necessary for implementation. What’s more, when they follow one of the popular templates, they usually omit non-functional requirements (such as a maximum response time).
This is where acceptance criteria come in. With them, stakeholders and the development team agree on the boundaries of a user story and on when a deliverable meets the desired requirements. But there’s a caveat: poorly chosen acceptance criteria can ruin even the greatest user story!
Common pitfalls that I have observed (and, admittedly, have fallen into myself) are acceptance criteria that
- are overly and unnecessarily specific,
- prescribe solutions, or
- go beyond the intended scope of the user story.
Such criteria can limit the team’s creativity and flexibility to find the best solution for generating the desired value for the end-users. Or they can allow, or even require, work that adds no value.
Take this user story as an example:
As a learner, I want to receive a cheerful confirmation after completing a lesson, so that I stay motivated to continue learning.

Acceptance Criteria:
- When a user completes a lesson, the URL query string lessonComplete is set to true.
- On the learning path, the completion card is displayed if and only if the URL query string lessonComplete is true.
In this example, the acceptance criteria dictate that the desired behavior be implemented via URL query strings. While this solution might be fine, it prevents the developer from exploring alternatives such as the browser’s local storage or API calls – and depending on the situation, one of those might be a better fit.
Instead of imposing a particular solution, the acceptance criteria should describe the desired behavior:
As a learner, I want to receive a cheerful confirmation after completing a lesson, so that I stay motivated to continue learning.

Acceptance Criteria:
- After completing a lesson and returning to the learning path, the completion card is displayed.
- When the user refreshes the browser, the card is still displayed.
- When the user navigates away from the learning path and returns again, the card is no longer displayed.
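A nice property of behavior-level criteria like these is that they can be checked without committing to any particular implementation. The following TypeScript sketch illustrates this (all function names are hypothetical, and a plain in-memory Map stands in for whatever persistence mechanism the team eventually picks – query string, local storage, or an API):

```typescript
// Hypothetical sketch: the completion-card behavior as three small
// functions over an abstract key-value store. The acceptance criteria
// only constrain what these functions do together, not how the store
// is implemented.

type Store = Map<string, string>;

// Called when the learner finishes a lesson.
function completeLesson(store: Store): void {
  store.set("pendingCelebration", "true");
}

// Called whenever the learning path renders; returns whether to show the card.
function renderLearningPath(store: Store): boolean {
  return store.get("pendingCelebration") === "true";
}

// Called when the learner navigates away from the learning path.
function leaveLearningPath(store: Store): void {
  store.delete("pendingCelebration");
}

// Walking through the three criteria:
const store: Store = new Map();

completeLesson(store);
console.log(renderLearningPath(store)); // returning to the path: card shown → true

// A browser refresh re-renders the page but leaves the store intact.
console.log(renderLearningPath(store)); // after refresh: still shown → true

leaveLearningPath(store);
console.log(renderLearningPath(store)); // away and back again: gone → false
```

Swapping the Map for localStorage or an API client would leave these checks untouched – which is exactly the freedom that behavior-focused acceptance criteria preserve.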
I have encountered these “user story smells” in almost every team I have worked with. But rather than giving the team a boring lecture (think “Harry Potter and the Poorly Chosen Acceptance Criteria”), I prefer to run a short team activity that lets them discover these risks themselves. Plus, it’s always fun to put away the keyboard and do a little doodling instead!
You will need:

- 3 flip charts
- 3 empty sheets of paper per participant
- colored pencils (but no red) for all participants
- a timer
On each flip chart, write the following user story:
As [INSERT YOUR TEAM NAME], we want to live in a house, so that we can spend more time together.
Add no additional information on the first flip chart.
On the second flip chart, add these acceptance criteria:
- big enough that everyone gets an individual room and we share at least two common rooms
- rooms should be filled with natural light
- a door on the ground floor, which we can open but intruders can’t
On the third and last flip chart, add these acceptance criteria:
- 3 floors, brick and mortar
- blue painted walls, red door
- four equal-sized windows each on floors 1 and 2
- two windows on the ground floor (one on each side)
- door lock with a 4-digit number pad
- garage for 1 car and at least 4 bicycles next to the house
Ask your team to imagine it’s the beginning of a sprint. They have just pulled in a user story and are now asked to implement it by drawing on the first sheet of paper.
The sprint lasts 2 minutes.
Show them the first flip chart and start your timer.
Ask them to stop as soon as the time runs out.
Ask your team to imagine they are redoing the same sprint. This time, though, show them the second flip chart. Again, the sprint lasts 2 minutes.
For the third sprint, show your team the last flip chart and set the timer to 2 minutes again.
Sooner or later, they will realize they are missing red pencils to meet the second acceptance criterion: the door is supposed to be red.
Let them struggle a bit. When they feel blocked, tell them you need to talk to the Data Analysts first.
Wait some more, then apologize and claim: “Historical data showed that the color of the door had no statistically significant effect on the Net Promoter Score of the inhabitants. Hence, you may choose any color to paint your doors.” Try not to smirk.
Demo & Discussion
Pretend it’s the sprint review. Let each team member present their work from all three rounds.
Ask them how they felt in each round: lost, guided, confined? I expect you’ll arrive at these or similar insights: without any acceptance criteria (round one), the team felt lost; with behavior-focused criteria (round two), they felt guided; with overly specific criteria (round three), they felt confined and blocked.
Now that your team has learnt to identify common “smells” in user stories, a reasonable follow-up activity is a backlog grooming session: Distribute upcoming user stories among the team members and ask them to check their stories for these smells. Then, as a team, improve the affected user stories.
It will take some time and the implementation of a couple of refined user stories before you find the sweet spot. What degree of specification provides guidance without restriction and flexibility without ambiguity? Eventually, your improved user stories will lead you to better solutions that serve your users!