
How to Apply the 7 Testing Principles in the Real World

Whether you're a software tester, quality assurance professional, automation engineer, or test coach, you have likely heard about the seven testing principles, at least from a theoretical perspective. But have you ever considered applying these principles practically to enhance the outcomes of your deliverables?

In this article, we'll explore how each principle can be applied to actual testing environments, with real-world examples that test professionals encounter every day.

Testing Shows the Presence of Defects

Principle - Testing can show that defects are present, but it cannot prove that there are no defects.

Embrace the fact that testing can reveal defects but can never prove their absence. The simplest way to maximize what testing does reveal is to combine techniques, such as manual and automated tests. Automated tests are great for efficiently handling repetitive, high-volume checks, so they can uncover defects in repetitive use cases and expose performance bottlenecks.

Regression tests are the best candidates for automation. With regressions automated, testers can focus more on user experience, aesthetics, and context-based testing – the kinds of checks that need human judgment to identify defects.

Further, you can broaden the scope by adopting different types of testing, such as functional and non-functional testing. As an advanced step, you can leverage mutation testing to evaluate the effectiveness of your existing tests.
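
To make the mutation-testing idea concrete, here is a minimal, hand-rolled sketch built around a hypothetical discount function. In a real project you would use a dedicated mutation-testing tool, but the principle is the same: introduce a small change (a "mutant") and see whether the existing tests notice it.

```python
# Hand-rolled mutation testing sketch; the discount rule and the weak test
# are purely illustrative, not taken from a real code base.

def apply_discount(price: float, is_member: bool) -> float:
    """Members get 10% off; everyone else pays full price."""
    return price * 0.9 if is_member else price

def apply_discount_mutant(price: float, is_member: bool) -> float:
    """Mutant: the member discount rate was changed from 0.9 to 0.8."""
    return price * 0.8 if is_member else price

def existing_test(fn) -> bool:
    """An intentionally weak check from the current suite: non-members only."""
    return fn(100.0, is_member=False) == 100.0

if __name__ == "__main__":
    assert existing_test(apply_discount)          # the weak check passes
    survived = existing_test(apply_discount_mutant)
    # The mutant also passes the weak check, i.e. it "survives", which tells
    # us the suite is missing a test for the member discount path.
    print("mutant survived" if survived else "mutant killed")
```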

Exhaustive Testing is Impossible

Principle - It is not feasible to test all possible combinations of inputs and scenarios.

One of the best examples is testing an e-commerce application with a wide range of products, payment methods, and shipping options. Testing all possible combinations would take an unreasonable amount of time and resources.

To overcome such cases, you can use techniques like equivalence partitioning and boundary value analysis to reduce the number of test cases while still achieving good coverage. These techniques can reduce the risk of defect leakages and test the application efficiently within time constraints.

For instance, assume the application has a classification based on age.

  • 0–12 years as a Child
  • 13–17 years as a Teen
  • 18–60 years as an Adult
  • 61+ years as a Senior

Based on equivalence partitioning, we divide the age input range into groups, where each group is expected to behave the same way in the system. Hence, we can pick any value within each range to verify that the expected classification is returned, plus an out-of-range value for the negative case.

In boundary value analysis, the values at the boundaries of each group are taken into account. For example, 0 and 12 should be tested for the Child group, -1 for the invalid group, and so on.
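
Here is a minimal sketch of how those partitions and boundaries could be expressed as parametrized tests. The classify_age() function is a hypothetical stand-in for the real classification logic, and pytest is assumed as the test runner.

```python
import pytest

def classify_age(age: int) -> str:
    """Illustrative stand-in for the system's real classification logic."""
    if age < 0:
        return "Invalid"
    if age <= 12:
        return "Child"
    if age <= 17:
        return "Teen"
    if age <= 60:
        return "Adult"
    return "Senior"

# Equivalence partitioning: one representative value per partition.
@pytest.mark.parametrize("age,expected", [
    (6, "Child"), (15, "Teen"), (35, "Adult"), (75, "Senior"), (-5, "Invalid"),
])
def test_age_partitions(age, expected):
    assert classify_age(age) == expected

# Boundary value analysis: values at the edges of each partition.
@pytest.mark.parametrize("age,expected", [
    (-1, "Invalid"), (0, "Child"), (12, "Child"), (13, "Teen"),
    (17, "Teen"), (18, "Adult"), (60, "Adult"), (61, "Senior"),
])
def test_age_boundaries(age, expected):
    assert classify_age(age) == expected
```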

Early Testing Saves Time and Money

Principle - Testing should start as early as possible in the software development life cycle.

The earlier defects are found, the lower the cost to fix them. For instance, if a defect is identified in the requirements phase, there is no need to go through the whole development cycle to fix it. If the same defect is identified only after deployment to production, a complete cycle must be executed to fix it, which costs time, money, and effort.

Instilling a testing-first culture is the best way to avoid such chaos. It is the responsibility of the quality engineers and the entire team to develop a quality-focused mindset. One of the biggest hurdles in achieving a testing-first culture is unlearning existing practices and adopting a change management process.

If the team is self-motivated, this can be achieved easily. Otherwise, the solution is to get the help of a test coach or a testing expert, which is often the case.
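
As a small illustration of what "testing first" can look like in practice, a requirement can be captured as an executable check before the feature exists. The shipping rule and function below are purely hypothetical; the failing test simply makes the gap between requirement and implementation explicit from day one.

```python
import pytest

def shipping_fee(order_total: float) -> float:
    """Not implemented yet – writing the test first makes the gap explicit."""
    raise NotImplementedError

# Requirement written as an executable check: "orders of 50 or more ship free"
# (an illustrative rule, not from the article).
@pytest.mark.parametrize("total,expected_fee", [
    (49.99, 5.0),   # just below the free-shipping threshold
    (50.00, 0.0),   # at the threshold
    (120.0, 0.0),   # well above the threshold
])
def test_free_shipping_rule(total, expected_fee):
    assert shipping_fee(total) == expected_fee
```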

Defect Clustering

Principle - A small number of modules generally contain most of the defects discovered during testing.

Isn’t it true? Haven’t you experienced that in your product? If you are not sure, your issue tracker will be able to provide evidence of this. When you analyze the trends of defects reported in your issue tracker, you will observe that certain modules have a higher number of defects. This should be your area of focus and priority compared to other modules.

Empower the team to conduct detailed code reviews in these areas, and with the help of code coverage tools, add more tests to increase coverage, especially in the defect-prone modules.

You can apply the Pareto Principle, the 80/20 rule: roughly 80% of the defects are found in 20% of the system. To figure out which 20% needs attention, analyze your issue tracker and use metrics such as defect density and trends, defect prediction models, code coverage, and code complexity reports to identify the vulnerable areas of the application.
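
One way to put this into practice is a small Pareto analysis over an issue-tracker export. The sketch below assumes a hypothetical CSV file with one defect per row and a "module" column; your tracker's export format will differ.

```python
import csv
from collections import Counter

def top_defect_modules(csv_path: str, share: float = 0.8):
    """Return the smallest set of modules accounting for `share` of defects."""
    with open(csv_path, newline="") as f:
        counts = Counter(row["module"] for row in csv.DictReader(f))
    total = sum(counts.values())
    if total == 0:
        return []
    cumulative, hotspots = 0, []
    for module, count in counts.most_common():
        hotspots.append((module, count))
        cumulative += count
        if cumulative / total >= share:
            break
    return hotspots

if __name__ == "__main__":
    # defects.csv is an illustrative path for an issue-tracker export.
    for module, count in top_defect_modules("defects.csv"):
        print(f"{module}: {count} defects")
```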

Pesticide Paradox

Principle - Running the same set of tests repeatedly will not find new defects.

If you keep using the same pesticide on pests, over time it becomes less effective as the pests adapt and survive. Similarly, when we run the same set of tests repeatedly, the probability of catching new defects decreases, because the product has been hardened against exactly those tests.

To mitigate this issue, implement these strategies:

  • Review and update tests – regularly revisit tests to ensure they align with new changes, retire outdated tests, and introduce new ones.
  • Expand test coverage dynamically – use dynamic test data in automated tests to explore edge cases (see the property-based sketch after this list). This is not feasible in every situation; in such cases, fall back to a curated data set to avoid unexpected failures.
  • Incorporate exploratory testing – dedicated exploratory sessions come in handy, as ad-hoc tests reveal unexpected defects. Effective exploratory scenarios can also be promoted into regression tests.
  • Leverage AI and analytics – use historical defect trends, along with AI and analytics, to identify frequently failing modules and introduce more tests for them.
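
As an example of the "dynamic test data" point above, a property-based test keeps generating fresh inputs on every run instead of replaying the same fixed cases. The sketch uses the hypothesis library with the same illustrative classify_age() stand-in from the earlier example; in a real suite you would import the production implementation.

```python
from hypothesis import given, strategies as st

def classify_age(age: int) -> str:
    """Illustrative stand-in; replace with the real implementation."""
    if age < 0:
        return "Invalid"
    if age <= 12:
        return "Child"
    if age <= 17:
        return "Teen"
    if age <= 60:
        return "Adult"
    return "Senior"

# Inputs vary on every run, so the suite keeps probing new values.
@given(st.integers(min_value=0, max_value=130))
def test_every_valid_age_gets_a_category(age):
    assert classify_age(age) in {"Child", "Teen", "Adult", "Senior"}

@given(st.integers(max_value=-1))
def test_negative_ages_are_rejected(age):
    assert classify_age(age) == "Invalid"
```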

Testing is Context-Dependent

Principle - Testing is influenced by the context of the software.

In other words, different applications require different types of testing. Hence, you need to assess the type of testing an application needs based on factors such as functionality, industry, and domain. When deciding what type of testing an app needs, do the following:

  • Understand the purpose of the application.
  • Evaluate the target audience and their expectations.
  • Consider industry and domain-specific standards.
  • Assess the impact and risk of the application on its users.
  • Assess the integrations and dependencies.

Absence of Errors Fallacy

Principle - Finding and fixing many defects does not necessarily mean the software is usable or meets the user’s needs.

  • Ensure that testing focuses not only on finding defects but also on validating that the software meets user requirements and expectations.
  • Conduct user acceptance testing (UAT) and usability testing, working closely with stakeholders to verify that the software delivers value.
  • Gather and analyze user feedback to make necessary improvements. Emphasize the importance of aligning testing efforts with user needs and business goals.

Conclusion

The seven testing principles are often taught purely as theory. In practice, however, they serve as the bedrock of successful software testing when you apply them to the applications you test.

When you do so, you will avoid common pitfalls, optimize resources, and ensure that software products not only function correctly but also meet user needs and business goals.

MagicPod is a no-code AI-driven test automation platform for testing mobile and web applications designed to speed up release cycles. Unlike traditional "record & playback" tools, MagicPod uses an AI self-healing mechanism. This means your test scripts are automatically updated when the application's UI changes, significantly reducing maintenance overhead and helping teams focus on development.


Written by Sajitha Tharaka Pathirana

A test automation enthusiast, passionate about helping teams enhance their testing journey with his decade of experience in the field, developing automation platforms and tools to optimize the overall testing process.