Has your team implemented test automation? I believe many of you have taken up this challenge, given the wide range of both paid and free test automation tools and services available. However, I'm afraid that some of you might have given up before even putting those tests into full operation on a daily basis. In this article, I will provide you with tips to ensure you don't encounter common pitfalls and offer insights specifically focused on E2E (end-to-end) testing.
Automated testing can be categorized into three groups, as outlined in The Way of the Web Tester: A Beginner's Guide to Automating Tests by Jonathan Rasmusson: UI tests, which drive the application through its user interface; integration (API) tests, which exercise the system beneath the UI; and unit tests, which verify individual pieces of code.
In this article, the term "E2E testing" refers to the UI tests described above. "End-to-end" means testing the system from the outside to confirm that it works as a whole, without issues.
There are several reasons why teams give up before fully adopting automated testing. The most prevalent is likely the maintenance burden: automated testing is an ongoing effort, and tests must be kept up to date with the application under test. Otherwise, test failures no longer signal real defects, and the tests become useless. This challenge is specific to E2E tests, unlike unit tests, which are written during development and naturally maintained alongside the code. To avoid this outcome and establish a habit of "living" automated testing that catches defects and stabilizes the entire organization, two factors are key: goal setting and implementing automation step by step.
It is easy to fall into the trap of viewing the purpose of test automation as simply reducing workload and costs. Automating tests for daily operation takes significant time and effort, and when you compare the execution time of a manual test with the creation time of its automated counterpart, the gap can be overwhelming. Should we then give up on automation because it takes years to break even?
I believe the true purpose of automated tests is to enhance productivity in both development and QA by detecting defects and operational issues. Automated tests can be executed continuously without concerns about human resources. By running them every day in the development environment, we can identify problems much earlier in the development phase.
Early detection makes correction easier, which benefits developers. The longer a defect goes unnoticed, the more changes accumulate around it, and pinpointing it becomes time-consuming; fixing it then requires developers to recall exactly what they changed, and if other developers' work is involved, even more time goes into investigation. Running tests daily, by contrast, keeps the development and testing environment clean and lets the entire organization maintain its productivity.
Automated testing also benefits the QA division. If issues are only identified during the testing period after the development phase, the launch may be delayed to allow extra time for fixes. By running automated tests daily during development, the QA division can start the testing period with a product of decent quality. And by continuing to run the tests during the testing period as well, regressions (parts that previously worked but stop functioning properly) can be spotted quickly.
As mentioned earlier, the routine of running tests every day boosts both development and QA productivity, and this is precisely how automated testing becomes established in an organization. Daily execution brings several desirable outcomes: defects are found while they are still cheap to fix, the development environment stays consistently stable, and the QA testing period starts from a solid baseline.
By contrast, if tests are only run before official launches, the problems described at the beginning of this article will still occur despite the presence of automated tests.
I am a developer of an automated testing platform called MagicPod, and many of our clients who have successfully run automated testing for a considerable period emphasize the merit of "boosting development and QA productivity" and insist on running tests every day. MagicPod has been running automated tests daily since its inception, and through continuous communication with our clients, I have come to realize the importance of this practice.
So, what is the first step when embarking on automated testing (after selecting a tool)? It is to establish a flow to create a test, run it every day, and check the results.
It is completely acceptable to begin with a single test; the primary objective is to identify the crucial actions that will serve as the basis for automation. Before increasing the number of test cases, make sure each case can be executed repeatedly. This gives your team an operational foundation and makes it easy to verify that the tests you create actually work.
While one test case may not reveal every issue, it can catch critical problems during the development phase (e.g., failures in important screen transitions) that would otherwise impede QA work.
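To make this concrete, here is a minimal sketch of such a first test written with Playwright in TypeScript. The URL, selectors, and credentials are hypothetical placeholders, and in a no-code tool such as MagicPod the same case would be built through the UI rather than in code:

```typescript
// smoke.spec.ts -- a single daily smoke test.
// The app URL, selectors, and credentials below are illustrative assumptions.
import { test, expect } from '@playwright/test';

test('user can log in and reach the dashboard', async ({ page }) => {
  // Open the login page and sign in.
  await page.goto('https://example.test/login');
  await page.fill('#email', 'qa-user@example.test');
  await page.fill('#password', 'secret');
  await page.click('button[type="submit"]');

  // The crucial screen transition: landing on the dashboard.
  await expect(page).toHaveURL(/\/dashboard/);
  await expect(page.getByRole('heading', { name: 'Dashboard' })).toBeVisible();
});
```

Running even this one case every day (for example, from a nightly CI job) and checking its result is the whole flow at this stage.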
Is that all, you may ask? Yes, because the most crucial factor for successfully adopting automated testing within a team is visible output. Even a minimal working setup makes handover to other team members far easier: they form a concrete picture of how automated testing actually runs against the real application and what kinds of notifications they will receive.
Once the concrete foundation for daily tests is established, you are ready to expand into large-scale automated testing. Follow these steps:
1. Establish a flow to create a test, run it every day, and check the results
2. Increase the number of tests gradually while resolving stability issues
3. Cover a wide range of general and straightforward system functions
4. Establish a usage period with a constant number of cases
5. Expand test cases while considering options beyond E2E
Since we covered Step 1 in the last section, let's delve deeper into steps 2 to 5.
2. Increase the number of tests gradually while resolving stability issues
By running a test every day, you will run into a series of problems, such as test instability. Address these issues one by one while gradually increasing the number of tests. It might seem more efficient to spend a block of time creating many tests at once and only then start running them every day, but that is an illusion: increasing the number of tests without verifying that they can be maintained leaves them untouched and unmanageable. Verify incrementally before aiming for a sudden expansion.
At this stage, it is also advisable to put mechanisms and rules in place for sustainable operation, such as deciding who reviews the daily results, how failures are investigated, and how tests are updated when the application changes; a configuration-level sketch follows below.
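As one example of such a mechanism, the sketch below shows repeat-friendly settings in a Playwright configuration file. The specific values are assumptions rather than recommendations, and teams using other tools would configure the equivalent options there; the daily trigger itself would live in your CI scheduler (for example, a nightly cron-style job):

```typescript
// playwright.config.ts -- settings that make daily, repeated runs cheap to operate.
// All values here are illustrative assumptions.
import { defineConfig } from '@playwright/test';

export default defineConfig({
  // Retry once so a single flaky run does not turn the daily report red,
  // while persistent failures still surface.
  retries: 1,
  use: {
    // Keep artifacts that make morning triage quick.
    screenshot: 'only-on-failure',
    trace: 'retain-on-failure',
  },
  // A machine-readable report lets CI post the daily result to chat or a dashboard.
  reporter: [['list'], ['junit', { outputFile: 'results/e2e.xml' }]],
});
```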
3. Cover a wide range of general and straightforward system functions
Start by automating checks that the basic features and screens across the whole system work smoothly. Rather than automating detailed tests of individual functions under many different conditions, cover a broad range of simple, general functions and screens first, and avoid simply automating your existing manual tests from top to bottom. At this stage, prioritize the essential cases that can detect major functional regressions.
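For illustration, a breadth-first suite can be as simple as iterating over the main screens and asserting that each one renders. The routes and headings below are hypothetical placeholders for whatever your application's core screens are:

```typescript
// broad-smoke.spec.ts -- shallow, breadth-first checks of the main screens.
// The routes and headings are illustrative assumptions.
import { test, expect } from '@playwright/test';

const screens = [
  { path: '/',         heading: 'Home' },
  { path: '/products', heading: 'Products' },
  { path: '/cart',     heading: 'Cart' },
  { path: '/account',  heading: 'Account' },
];

for (const { path, heading } of screens) {
  test(`screen ${path} renders its main heading`, async ({ page }) => {
    await page.goto(`https://example.test${path}`);
    // A shallow assertion is enough at this stage: the screen loads and shows its heading.
    await expect(page.getByRole('heading', { name: heading })).toBeVisible();
  });
}
```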
4. Establish a usage period with a constant number of cases
Before considering any expansion of cases, refrain from making changes for about one to two months and assess how maintenance and daily operation work for your team. The main tasks during operation are checking the daily results, investigating failures, and updating tests whenever the application changes.
Make sure these tasks can be performed without significantly burdening the team. E2E tests are inherently more prone to unstable results than unit tests, and too much instability makes failures time-consuming to isolate and increases the operational burden. Be prepared to make judgment calls, such as reworking how a test case is implemented or excluding it from automation if it is unavoidably flaky; one way to quarantine such a case is sketched below.
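As an example of that last judgment call, most frameworks let you quarantine a flaky case explicitly rather than silently deleting it. The scenario below is hypothetical and uses Playwright's fixme annotation:

```typescript
// quarantine.spec.ts -- keeping an unavoidably flaky case out of the daily signal.
// The export scenario, URL, and selectors are illustrative assumptions.
import { test, expect } from '@playwright/test';

// Declared with test.fixme: the case is skipped and reported as "to be fixed",
// so it no longer turns the daily run red while its waits are being reworked.
test.fixme('CSV export finishes within the spinner timeout', async ({ page }) => {
  await page.goto('https://example.test/reports');
  await page.click('text=Export CSV');
  await expect(page.getByText('Download ready')).toBeVisible({ timeout: 60_000 });
});
```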
5. Expand test cases while considering options beyond E2E
Once you are confident in your maintenance capability, you can begin automating more complex test cases, such as verifying error messages or checking behavior in special environments. However, it is not necessary to cover everything with E2E tests; doing so is time-consuming and ultimately a waste of resources. Consider unit tests and API tests as viable alternatives. The key is to establish a virtuous cycle that provides prompt feedback.
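For instance, a detailed validation check that would be slow to drive through the UI can often be expressed as an API test instead. The sketch below uses Playwright's request fixture; the endpoint, payload, and expected response are hypothetical:

```typescript
// api.spec.ts -- covering a detailed condition at the API layer instead of the UI.
// The endpoint, payload, status code, and error body are illustrative assumptions.
import { test, expect } from '@playwright/test';

test('rejects an order with a negative quantity', async ({ request }) => {
  const response = await request.post('https://example.test/api/orders', {
    data: { productId: 'P-100', quantity: -1 },
  });

  // Validation details like error messages are far cheaper to check here
  // than by filling out the whole order form in a browser.
  expect(response.status()).toBe(422);
  const body = await response.json();
  expect(body.error).toContain('quantity');
});
```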
The two essential elements for rapid and successful implementation of E2E test automation are:
1. Setting the right goal: boosting development and QA productivity by running tests every day
2. Implementing automation step by step, starting from a single daily test
Let's strive for sustainable automated testing by steadily running tests!
Read the next part of this article here.