What does “Done” mean, really? Working on an Agile project is really an exercise in honesty and self-reflection: if your team can’t predictably complete its work and progress through each of its sprints, it’s safe to assume each of your team’s releases will be equally unpredictable. And herein lies the crux of an Agile team’s opinion of its own work. Members of a team may define Done as “Ready for Test,” “UAT Passed,” or “Deployed,” but the truth is, if each team member doesn’t share the same definition, there is no way to rely on consistent output from that team.
Since we all value working code over comprehensive documentation, it becomes all the more important, especially in situations where multiple development teams have intertwined dependencies, that each team is able to deliver high-quality code that has passed all of the rigors of testing before marking a story as “Done.” In theory, this is easy to agree to; in practice, it is incredibly hard to hold large teams of busy people to such high standards of completeness and transparency.
Typically, when reporting status to the rest of the team, we share development team velocity (how many units of work, or story points, a team has completed in each sprint), assuming that each Product Owner has broken their user stories down into suitably small, discrete work items that are accurately sized and straightforward to understand [large assumption]. As long as dev teams maintain a constant velocity and reach the agreed-upon definition of done, it is assumed that work is progressing in the right direction. Since one measure is rarely enough to confirm a trend, teams also report on defects raised by test teams (broken functionality caused by either bad code or bad requirements). Low defect numbers are considered an indicator of high-quality code.
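The two measures above are simple to compute once you decide what counts. As a minimal sketch (the `Story` shape, field names, and sample numbers here are my own illustration, not any particular tool’s data model), velocity only credits stories that actually met the team’s Definition of Done:

```python
from dataclasses import dataclass

@dataclass
class Story:
    points: int   # story-point estimate
    done: bool    # True only if the story met the team's Definition of Done
    defects: int  # defects raised by the test team against this story

def velocity(stories: list[Story]) -> int:
    """Velocity: total points of stories that reached Done this sprint."""
    return sum(s.points for s in stories if s.done)

def defect_count(stories: list[Story]) -> int:
    """Total defects raised against this sprint's stories."""
    return sum(s.defects for s in stories)

# Hypothetical sprint: the 8-point story did not finish, so it earns no credit.
sprint = [Story(3, True, 0), Story(5, True, 1), Story(8, False, 0)]
print(velocity(sprint))      # 8
print(defect_count(sprint))  # 1
```

Note the design choice the post argues for: a story that is “almost done” contributes zero points, which is exactly what keeps velocity honest.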
But I’ve found there is a missing factor that deserves equal consideration alongside velocity and defect tracking. “Carry-over” is the number of stories deferred from one sprint to the next, and it is often the hidden enemy of sprint predictability. Typically, carry-over is a symptom of unplanned scope: a Product Owner who misjudged a user story’s complexity, or a dependency that failed and became a blocker. The break-fix in this situation is typically to add another story of comparable size to the deferred item and work on it in the meantime. In practice, the team’s velocity will appear constant, but that trend disguises the fact that the scope agreement made at the beginning of the sprint has been violated, which can change what is actually being delivered.
While carry-over isn’t the end of the world, and is common in the execution of Agile projects, it is worth raising in any discussion of team self-evaluation. Carry-over items can become a weight around a team’s neck that eventually causes delays, or even project failure. I’ve found the best way to assess a sprint’s predictability is to track not only velocity and defect counts, but also the team’s definition of “Done,” through a carry-over report.
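A carry-over report needs only two facts per story: the sprint it was planned into, and the sprint in which it actually met Done. Here is one way it might look (the story records and field names are hypothetical, invented for illustration):

```python
# Hypothetical backlog: each story records the sprint it was planned in
# and the sprint in which it actually met the Definition of Done.
stories = [
    {"id": "US-101", "points": 5, "planned": 1, "done_in": 1},
    {"id": "US-102", "points": 3, "planned": 1, "done_in": 2},  # carried over
    {"id": "US-103", "points": 8, "planned": 2, "done_in": 2},
    {"id": "US-104", "points": 5, "planned": 2, "done_in": 3},  # carried over
]

def carry_over_report(stories: list[dict], sprint: int) -> dict:
    """Which planned stories were deferred, and what share of planned points they represent."""
    planned = [s for s in stories if s["planned"] == sprint]
    carried = [s for s in planned if s["done_in"] > sprint]
    points_planned = sum(s["points"] for s in planned)
    points_carried = sum(s["points"] for s in carried)
    return {
        "sprint": sprint,
        "stories_carried": [s["id"] for s in carried],
        "carry_over_rate": points_carried / points_planned if points_planned else 0.0,
    }

print(carry_over_report(stories, 1))
# Sprint 1: US-102 deferred, so 3 of the 8 planned points carried over.
```

Tracked sprint over sprint, the carry-over rate is the signal that a flat velocity line can hide: a team can hold a steady velocity while quietly deferring a growing share of its planned scope.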
In my next post, I’ll describe how to track and resolve carry-over when it happens. In the meantime, share your stories of encountering carry-over on your own projects in the comments section below. Were you able to resolve the deferred stories before they caused delays?