Much of the popular literature, training, documentation, and process guidance related to managing Software Requirements in an organization focuses on two constituencies, Users and Developers, and thus overlooks a third important constituency: Testers.
User is the catch-all term for anyone who may use the software being created. Users typically define what the software should be able to do; they are the source of the Requirements. The Developers are the lucky ones who get to build all the wonderful stuff that the Users have asked for. They consume the Requirements, converting the concepts, ideas, rules, models, and countless Requirement statements into bits and bytes of code that, taken together, hopefully form the solution envisioned and defined by the Requirements Analyst at the start of the process.
Sitting smack dab in the middle of this creative stream, which flows from the User to the Developer and eventually back to the User, is the Tester. Before the software created by the Developers is handed off to the Users, the Testers have to give it a clean bill of health. They have to certify that the Development team is giving the Users what they asked for.
How exactly do the Testers know what the Users asked for? By looking at the Requirements documents that were created at the start of the process and comparing the delivered software against the requested features and specifications.
Despite the fact that the Testers are almost entirely dependent on Requirements documentation to perform their duties, I have seldom seen them incorporated into the Requirements Creation Process. Based on my observations of multiple projects in several large multinational corporations, I have come across only one instance where the Test team was treated as a constituent group on equal footing with Users and Developers during the Requirements Creation Process.
In general, what I have seen is that Test teams have no connection with, or influence over, the Requirements while those Requirements are being created, even though they will later be responsible for testing and validating against them. They are typically handed the Requirements documents long after the documents have been finalized and development is well underway. They are then told to ensure that the software meets the documented Requirements.
What typically ensues thereafter is chaos. The Test team comes back with a ton of questions about the Requirements that no one is able to answer. Alternatively, they are given multiple answers to their questions which often contradict each other. (I have often wondered which is worse, but that is the subject of another blog post.) This craziness continues until someone finally realizes that the Requirements need to be tweaked and reworked.
If the people involved with the Project were lucky, the changes made to the Requirements did not impact Development because the Developers had guessed correctly or had not yet started on the affected features. If they were not lucky, the tweaks to the Requirements required rework, with meaningful changes to schedule and scope. If they were really unlucky, significant changes to scope, schedule, and budget had to be made.
There are two obvious questions here. Why do the Requirements almost always get tweaked only after the Test team has received them? And what can we do to keep this from happening again, and again, and again, and…
Stay tuned for Part 2 and Part 3 of this series, to learn more about what we can do to avoid this waste and chaos.