This is the second of three posts on where to use the Session Based Test approach, with the goal of convincing management that it is a good, and perhaps the best, approach in its context.
In the first post, I discussed how to use the Session Based Test approach when testing a bug fix for the production environment.
In this post, I will focus on why using the Session Based Test approach is a good (perhaps the best) approach for testing a (small) feature improvement on an existing application or solution.
Using The Session Based Test Approach For Testing A (Small) Feature Improvement
The problem with these small work packages is that they are often used as filler work when a tester is blocked in his test execution during the test period of a major release or program.
These major releases or programs often have progress reports in which all test cases are tracked to show overall progress.
Most of the time you are not allowed to add test cases once the test execution period of the program or major release has started. Suppose, for example, that you add 15 test cases to the test case set of the program or major release to cover this feature improvement. The ‘Waterfall manager’ becomes upset and bounces out of phase, because today’s test progress came out at -2% as a result of the added test cases (while the rest of the team did not ‘report any progress while executing their test cases’ that particular day).
Instead, treat the test cases you think you need for this feature improvement as test ideas. Write them down in a few so-called “to do” session test reports. When you have done this, pick the first “to do” session test report and start your test session. When this session is finished, pick the second “to do” session test report and start another test session. At some point you hear that you can continue with your main assignment, i.e. executing the test cases of the major release or program.
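To make this concrete, here is one possible layout for such a “to do” session test report. The field names are loosely based on the session sheets commonly used in session-based test management; the charter and test ideas below are purely illustrative, not from a real project.

```text
SESSION REPORT: FI-102-S03 (status: TO DO)

CHARTER
  Explore the new export filter of the feature improvement FI-102
  with boundary values and invalid input.

TEST IDEAS
  - Export with an empty result set
  - Export with the maximum allowed number of rows
  - Cancel an export halfway and retry

TESTER / START / DURATION
  (filled in when the session is executed)

TEST NOTES / BUGS / ISSUES
  (filled in during the session)
```

Because each report is small and self-contained, the stack of “to do” reports acts as a ready-made backlog you can pick up whenever you are blocked elsewhere.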
The next time you are blocked in your test execution for the major release or program, you can pick up the test work for the feature improvement you started some days ago. You simply reread your finished session reports to get up to speed, then take the next “to do” session test report and start a test session to continue the test execution of the feature improvement. Note that during this test execution you can add more test ideas as you get new information that needs extra attention and test work.
When you have worked through your “to do” session test reports and any other relevant tests captured in session reports, you write your final statement in the last session report, together with an overview of the test sessions (and session report names) you have done. With that, the test work for the feature improvement is finished. You either continue with the test work for the major release or program, or you pick another feature improvement to test (for which you, of course, again use the Session Based Test approach).
This approach works in this context because management is only interested in whether the tests of the feature improvement pass. When you have several of these feature improvements assigned to you, you can give the test manager a small test progress dashboard listing the feature improvements (ID and name), a red, orange or green smiley for each, and some status information.
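Such a dashboard can be as simple as a few lines of text generated from the session reports. The sketch below is a minimal, purely illustrative example, assuming hypothetical feature improvement IDs and statuses; it is not part of any real tool.

```python
# Minimal sketch of the per-feature progress dashboard described above.
# All IDs, smileys, and status texts are illustrative examples.

features = [
    # (feature improvement ID, smiley, status information)
    ("FI-101", "green",  "All session reports done; final statement written."),
    ("FI-102", "orange", "2 of 4 'to do' session reports executed."),
    ("FI-103", "red",    "Blocked: test environment unavailable."),
]

def dashboard(rows):
    """Render the rows as a small plain-text dashboard for the test manager."""
    return "\n".join(f"{fid:<8} {smiley:<8} {status}"
                     for fid, smiley, status in rows)

print(dashboard(features))
```

One line per feature improvement keeps the report deliberately small: the test manager sees at a glance which improvements are green, without any test cases being added to the major release or program.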
I use this approach with success, and some of the testers in my department who work for other test managers follow it as well. It is quick and structured, and the test work is easy to stop and restart. My colleague test manager is pleased too: no extra test cases are added to his major release or program, and he gets a green smiley when the feature improvement is successfully tested.
A win-win situation. You can test the feature improvement using the Session Based Test approach, and the test manager you report to can keep his focus on the test progress of the program or major release.