Beware of Broken Promises as a Tester
Project Manager: Hey Testers, have you completed testing everything? Did you cover all the scenarios?
Testers: Yes, we covered all the scenarios, completely tested the application, and reported all bugs.
Sound familiar? Have you ever been embarrassed by your own words? Then this article will reflect situations you have faced in your career as a tester: the miscommunications and misunderstandings you have had along the way. Let's discuss why we should be cautious when we say "completed".
"Completed testing" is an ambiguous phrase. Listeners often hear more than was said; their expectations end up higher than what was actually meant. Hence, "completed" is a word that should be used with great care.
Let's look at some phrases that invite ambiguity and misunderstanding.
"Completed the discovery of every bug in the product."
As the testing principles state, testing shows the presence of defects but cannot guarantee their absence. The main idea of testing is to identify problems and fix them before deploying to production. However, there may still be bugs that testing did not uncover.
Isn't that a problem?
If you discover critical bugs or blockers after testing has been completed, that is a severe problem that needs to be addressed. You may need to revisit your test strategy, the test coverage for the feature, and so on. However, there can be edge cases that are not cost-effective to cover. Hence, the right expectation should be set with stakeholders: the software is not error-free.
"Completely examined every aspect of the product."
As soon as you read this line, you probably realised that it is impractical to test every aspect of a product. Even if you strongly believe you have covered everything, there is always room to miss certain scenarios or combinations. To get a quantitative measure of how far you have tested the product, you can leverage code coverage tools, which reveal what percentage of the code your tests exercised. Even achieving 100% coverage, however, doesn't guarantee that every aspect has been tested. Hence, this statement is hard to justify and easily leads to misunderstanding.
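As an illustration, here is a minimal sketch of measuring coverage in Python with coverage.py and unittest (the "tests" directory name is an assumption, not a requirement of the tools):

```python
import unittest

import coverage

# Start measuring which lines execute while the tests run.
cov = coverage.Coverage()
cov.start()

# Discover and run the test suite (a "tests/" directory is assumed here).
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()

# Report the percentage of code the tests actually exercised.
# Remember: even 100% line coverage only shows that each line ran,
# not that every scenario or input combination was tested.
cov.report(show_missing=True)
```

The percentage this prints gives you a number to discuss, but, as noted above, it measures lines executed, not scenarios covered.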
"Completed everything I know."
This statement is also hard to quantify, as it depends on self-judgement and so may not be accurate. There is a high chance of a gap between what you know and what you actually tested. Moreover, it is quite hard, if not impossible, to trace back what you covered during your test sessions.
"Completed broad, but not deep, tests on the product."
Breadth and depth are relative measures when there is no defined standard. In testing, too, they are hard to define because they vary from person to person and are hard to quantify: a scenario one tester prioritises may not be a priority for another. Rather than speaking of breadth and depth, it is better to speak of severity and priority, which reflect the depth and breadth of the coverage in more concrete terms.
Now, let's examine some precise "completed" statements that offer more clarity about the work done.
Completed the agreed tests.
This statement defines the scope of the tests performed as a qualitative value, and if you want a quantitative value as well, you can count the agreed tests for the given feature or test session. Moreover, it establishes ownership of and responsibility for the work done, since the agreement can be referenced later.
Completed the period allocated for testing.
Depending on the type of tests performed, we can quantify the work either by agreeing to complete a certain number of tests or by agreeing on a time box. For regression or feature testing, we usually agree on completing a set number of tests within a soft time boundary. For exploratory testing, we tend to agree on a time boundary, as there are no predefined test cases and the session simply closes after the agreed time.
Completed a specific type of testing on the product.
"We have completed the smoke test, the regression test, or the feature test." Isn't this a familiar statement from your stand-ups and review meetings? Usually, these types of tests are well defined within the scope of the product. When a tester says the smoke test is done, everyone on the team has some awareness of what was tested and to what extent. To elaborate: "smoke is done" means a light, short test has verified the stability of the build, while "regression is done" implies a deep test covering the entire product has been run.
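To make such labels concrete and queryable, teams often tag tests by type. Here is a minimal sketch using pytest markers (the marker names, test bodies, and URL are illustrative assumptions; markers would normally also be registered in pytest.ini):

```python
import urllib.request

import pytest


@pytest.mark.smoke
def test_homepage_responds():
    # Light, short check that the build is stable enough to keep testing.
    # (The URL is an illustrative assumption.)
    assert urllib.request.urlopen("https://example.com").status == 200


@pytest.mark.regression
def test_full_checkout_flow():
    # Deep, end-to-end scenario exercised only in full regression runs.
    ...
```

With this tagging, running `pytest -m smoke` executes only the smoke subset, so "the smoke test is done" maps to a specific, referenceable list of tests.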
Completed the assigned tests.
This is a fair statement covering both the qualitative and quantitative aspects of your task. You have been assigned, and have agreed, to test a certain area or component of the application, so you are clear on what to test and everyone knows what you are testing. In my experience, the frequency of testing is the one gap you still have to define accurately. The statement becomes truly precise when you say you have completed the assigned tests for the current build, pinning the claim to an exact point of testing.
In summary, when reporting the completeness of testing, reduce the friction caused by ambiguous statements. Define completeness with quantitative and qualitative parameters so that misunderstandings are avoided. As testers, we can then take responsibility for what we have done, and if something does go wrong, those quantitative and qualitative values help us answer criticism. Beyond that, we can more easily improve the quality of our deliveries by tracing back where and what went wrong and taking adequate measures.