I do a lot of what could be considered firefighting on my current project. I’m often given tasks with very short deadlines: tasks that would normally require days to complete need to be performed in a matter of hours. One of these tasks was to perform an “audit” of the requirements for the project (I didn’t write the requirements, by the way). When I dug a little deeper into what was meant by an “audit,” it turned out to mean “we need to determine which requirements have been implemented, which were descoped, which were missed, and so on.” Immediately, I thought, “Oh, you mean requirements traceability.” These are exactly the things that real requirements traceability, generated out of a requirements management tool, tells you.

Unfortunately, in the haste with which so many tasks were performed on the project, the requirements were at one point sucked from a spreadsheet into a tool-whose-name-shall-not-be-mentioned and summarily “traced” to test cases. That actually sounds like a good thing: it seems it would give us good insight into which requirements had been implemented by showing which related tests passed, right? Oh, if only it were that easy. Our project had a few problems that prevented us from putting that sort of faith in what the tool told us: there was a missing link in the traceability from the requirements to the functional specs, the linkage between the requirements and the test cases was hardly reviewed, and the tool we use actually provides misleading information on requirements coverage, rendering its traceability report virtually useless.

Fortunately, a few really bright, dedicated people work on this project. I say “fortunately” because I often need to rely on their expertise, background knowledge, and memory to accomplish tasks, and this was one of the tasks where I could leverage the talent and memory of my colleagues.
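The idea behind tool-generated traceability can be sketched with a toy example. Everything here is hypothetical (the requirement IDs, test IDs, and the `classify` helper are my own invention, not any particular tool’s API): given links from requirements to test cases plus the latest test results, the implementation status of each requirement falls out mechanically.

```python
# Hypothetical data: requirement IDs mapped to the test cases that verify them.
req_to_tests = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],            # no linked tests: a traceability gap
}

# Latest test results (also hypothetical).
test_results = {"TC-01": "pass", "TC-02": "pass", "TC-03": "fail"}

def classify(req_to_tests, test_results):
    """Classify each requirement from the results of its linked tests."""
    status = {}
    for req, tests in req_to_tests.items():
        if not tests:
            status[req] = "untraced"       # missed or descoped? needs human review
        elif all(test_results.get(t) == "pass" for t in tests):
            status[req] = "implemented"
        else:
            status[req] = "failing"
    return status

print(classify(req_to_tests, test_results))
# → {'REQ-001': 'implemented', 'REQ-002': 'failing', 'REQ-003': 'untraced'}
```

Of course, this mechanical answer is only as good as the links feeding it, which is exactly where our project fell down.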
So, in a series of meetings, each a couple of hours long, we reviewed every requirement line by line, relying on the most experienced users, the development team, and tribal knowledge to decide whether each requirement had been implemented. I called this “soft” traceability, because instead of using computers to tell us whether a requirement had been implemented, we relied on each other’s memories. In some cases we were mistaken, and there was a lot of arguing, exegesis, and shedding of blood and tears, but in the end we produced a quasi-reliable traceability document that could be used to create a backlog of requirements for upcoming releases. I suspect the “soft” traceability is at least 90% accurate, which is more than you can say for a lot of automatically generated traceability reports.
What did I learn from this experience? That it is definitely worth having a disciplined, well-coordinated requirements management process. I do not want to go on record as a proponent of executing only “soft” traceability on a project. It’s messy and sometimes feels a bit like alchemy. Another shortcoming is that you only get traceability in one direction, from requirements to implementation, which does not tell you which features were added as “gold-plating” or additional scope. Also, computers are good at traceability because they are good at remembering and maintaining the relationships between things; people are terrible at this. Indeed, the only reason we were able to perform this exercise was the relatively low number of requirements: around a thousand. If the number of requirements were even one order of magnitude greater, “soft” traceability would be impossible. Still, when all else fails and you’re up against a deadline, I’ll take the poor man’s requirements traceability over nothing to ensure I have a somewhat complete backlog of requirements.
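The one-direction limitation can be made concrete with a small sketch (again, all names and data are hypothetical, not from our actual project). Forward tracing answers “does every requirement have something implementing it?”; only a reverse trace, from what was actually built back to the requirements, exposes gold-plating.

```python
# Hypothetical forward trace: requirements -> the features implementing them.
req_to_features = {
    "REQ-001": ["login"],
    "REQ-002": ["export"],
}

# Everything actually present in the build (hypothetical).
implemented_features = {"login", "export", "dark_mode"}

# All features that some requirement traces to.
traced = {f for feats in req_to_features.values() for f in feats}

# Forward direction: requirements with no implementing feature (missed or descoped).
unimplemented = {r for r, feats in req_to_features.items()
                 if not any(f in implemented_features for f in feats)}

# Reverse direction: features with no requirement behind them -- gold-plating.
gold_plating = implemented_features - traced

print(unimplemented)   # → set()
print(gold_plating)    # → {'dark_mode'}
```

Our meetings only walked the requirements list, so anything like `dark_mode` above, built without a requirement driving it, would never have come up.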