The moment your perspective shifts to that of a tester, you’ll find new ways to make requirements better. If you ask an analyst what the characteristics of a good requirement are, they will tell you that it should be “necessary, concise, capable of being interpreted only one way, and testable.” In this post, we’ll explore how the “testable” characteristic can be put to good use both to write better requirements and as a teaching aid.
Consider the following requirement statement: “User shall receive a confirmation email when they place an order.”
At face value, this requirement statement has the necessary characteristics: it is brief, necessary, unlikely to be misinterpreted and, at first glance, eminently testable. But if this statement were the sum total of the requirements for the “send email upon placing order” feature, it is debatable whether the requirement can actually be tested.

Why? For starters, simple questions like these cannot be answered:
- Which system will send out the email?
- What should the email say?
- How is the email to be formatted?
- Should there be any legal disclaimers included with the email?
- Who should receive the email? Just the person placing the order, or other people as well?
I could go on in this fashion, asking questions about aspects of the requirement that are not addressed in that one statement. As a potential tester, unless I have answers to the above questions, I cannot write a complete set of test cases. From the viewpoint of the tester, there is a whole set of missing requirements.
Questions like these become obvious if you write the test case(s) for a requirement. The moment your perspective shifts to that of a tester, things that you might not have considered when you were creating the requirement become self-evident. This naturally leads the analyst to think along the lines of why s/he missed these requirements.
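To make this concrete, here is a minimal sketch of what happens when you try to turn that one-line requirement into test cases. All of the names here are hypothetical, invented purely for illustration; the point is that only one test can actually be written, while the rest are blocked by unanswered questions.

```python
# Hypothetical sketch: attempting to derive test cases from the single
# requirement "User shall receive a confirmation email when they place
# an order." Only one case is writable; the rest surface missing
# requirements (and, behind them, missing models).

def derive_test_cases():
    """Split candidate test cases into writable ones and blocked ones."""
    writable = [
        "an email is sent after the order is placed",
    ]
    # Each blocked case maps to the unanswered question that blocks it.
    blocked = {
        "the email is sent by the correct system":
            "Which system sends it? (no Ecosystem Model)",
        "the email body matches the approved copy":
            "What should the email say?",
        "the email renders in the approved format":
            "How is the email to be formatted?",
        "required legal disclaimers are present":
            "Should disclaimers be included? (Legal never consulted)",
        "all required recipients receive the email":
            "Who receives it besides the person placing the order?",
    }
    return writable, blocked

if __name__ == "__main__":
    writable, blocked = derive_test_cases()
    print(f"Writable now: {len(writable)}; blocked: {len(blocked)}")
    for case, question in blocked.items():
        print(f"BLOCKED: {case} -- {question}")
```

Running this little exercise, even on paper, shows five blocked test cases for every one you can actually write, and each blockage is a missing requirement staring back at you.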
Nine times out of ten, the answer will point to missing models. In this example, if there is no Ecosystem Model, there is nothing to trigger the thought patterns that lead to requirements around the system responsible for user communications. Similarly, without an Org Chart, the analyst may never have thought to run the requirements by the Legal department, where the missing disclaimer would have been caught immediately.
When analysts write test cases, the need for models teaches itself as they start seeing holes in their requirements, both individually and collectively. For each missing requirement identified, there will be an associated model that is missing or an existing model from which requirements were not properly extracted. So one of the easiest ways to write better requirements is to go to the back of the line and write test cases for the requirements you do have.
Try it. You will be surprised at how much you learn.