6 expensive ways to fail in an automated testing project
Asaf Shochet Avida (@mrfrogrammer)
Here are 6 ways we failed in a very big Selenium-based testing project. These automated testing mistakes cost us a lot of money due to the maintainability, performance, and visibility problems they created in our tests.
Want to know more about how to make your Selenium tests faster? Follow this one.
To rephrase Tolstoy: every automation project has the same green-yellow-red "traffic light", but each project fails in its own unique way. We failed on every single point mentioned below. Yay! But worry not! We now have a state-of-the-art test automation project, running tests tens of times a day in our CI/CD pipeline.
The test automation project I refer to here gets a lot of attention inside the company (thanks, TimeToKnow, for letting me share this experience), and is used for "end-to-end" sanity and regression tests. Due to the nature of the product (a web-based SaaS platform), the tests run through browsers or mobile devices.
1. Assuming your own logs are enough to understand what's going on
Once we tried to make the entire R&D organization an integral part of analyzing test results, we started getting questions like "how do I read this test output?". For us, the ones who wrote the tests, it was fairly easy to read a report that looked like this:

Readable indeed. Not really.
After some research we started using a product called ExtentReports (yes, it has a great free version). And now it looks like this:

This is better, and it answers the "what failed?" question that developers most want answered.
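For context, here is a minimal sketch of what wiring ExtentReports into a test can look like. This assumes the current (5.x) Java API; the report path and test names are made up, and this is not necessarily how we structured it ourselves.

```java
import com.aventstack.extentreports.ExtentReports;
import com.aventstack.extentreports.ExtentTest;
import com.aventstack.extentreports.reporter.ExtentSparkReporter;

public class ReportExample {
    public static void main(String[] args) {
        // Write a self-contained HTML report to disk (path is just an example)
        ExtentSparkReporter spark = new ExtentSparkReporter("target/extent-report.html");
        ExtentReports extent = new ExtentReports();
        extent.attachReporter(spark);

        // One ExtentTest per test case; log steps and the final status
        ExtentTest test = extent.createTest("Login sanity");
        test.info("Navigating to login page");
        test.pass("User logged in successfully");

        // flush() writes everything collected so far to the report
        extent.flush();
    }
}
```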
But what about bug reproduction? Video to the rescue! Our latest improvement to the reporting mechanism was to add video capabilities, based on this predefined Docker container, and it's free.

2. More git repositories than developers
The system under test is made up of microservices on both the client and server sides (that's a subject for a different post). When we started the automation project, we had a dedicated test repository for each of the microservices, plus a common infra repository: a total of 7 repositories with dependencies between them. During that time, the team underwent significant changes: maternity leaves, some people moving to other parts of the organization, and so on. In the end, we were left with 2 developers working on 7 different repositories. This doesn't make sense. There's no mathematically correct number here, but as a rule of thumb: if you spend too much time jumping between repositories and updating the dependencies between them, it's time to merge them.
3. Reinventing the wheel
When we first started the automation project, it was very simple. Start a new project, add dependencies for Selenium, start building the project's structure based on the Page Object pattern, and the skies were blue, rainbows everywhere. After a while, we needed our own customizations:
- Customized reports that have a very specific look, with a nice dashboard, aggregating test results from many test runs.
- Pluggable "browser" object to support many types of configurations such as browser type, running on mobile or tablet, etc. (see the sketch after this list).
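To illustrate what such a pluggable "browser" object amounts to, here is a minimal sketch, not our actual implementation: a single factory that maps a configuration value to a Selenium WebDriver. The class name, configuration strings, and window size are all made up for the example.

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.firefox.FirefoxDriver;

// Hypothetical factory: one place that turns a configuration value into a WebDriver
public final class BrowserFactory {

    public static WebDriver create(String browserType) {
        switch (browserType.toLowerCase()) {
            case "chrome":
                return new ChromeDriver();
            case "chrome-mobile": {
                // Emulate a phone-sized viewport instead of a separate mobile setup
                ChromeOptions options = new ChromeOptions();
                options.addArguments("--window-size=390,844");
                return new ChromeDriver(options);
            }
            case "firefox":
                return new FirefoxDriver();
            default:
                throw new IllegalArgumentException("Unsupported browser: " + browserType);
        }
    }
}
```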
A year into the project, we saw that almost 20% of our code handled things that looked (to us) like they were not part of our business logic. 20% means lots of time to develop, lots of time to test, and lots of focus spent on those places. While sometimes it is a must to do these things "our own way", most of the time it's better to invest in RESEARCH rather than in reinventing stuff. Need a customized report? Google it. Need a way to capture videos? Google it.
It took time, but now we google it.
4. Hard-coded configurations
Some of the tests need information about their surroundings, for instance: login URL, server URL (for API usage), bucket definitions, specific timeouts for different environments, a different failure threshold for specific tests on specific environments. It starts small, but the number of configurations easily grows as tests become more complex, and as it grows, you will end up with more and more environment-related logic inside the test code. How can you tell if you suffer from the same disease?
Does this look familiar?
if (environment.equals("my-staging-ip")) {
    ...
}
How do you solve it? A common solution is to pass configurations at runtime, using parameterized builds in Jenkins. The code becomes more "stupid" and controlling the tests becomes way simpler. A new environment to test? 30 seconds and it's configured!
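A minimal sketch of what this can look like in Java, assuming the configuration is passed as JVM system properties (the property names, defaults, and the Maven invocation below are only examples): the parameterized Jenkins build just supplies different -D values per environment, e.g. mvn test -Dbase.url=https://staging.example.com -Ddefault.timeout=10.

```java
// Hypothetical central configuration class: all environment-specific values
// are read from system properties supplied at runtime, never hard-coded.
public final class TestConfig {

    public static final String BASE_URL =
            System.getProperty("base.url", "http://localhost:8080");

    public static final int DEFAULT_TIMEOUT_SECONDS =
            Integer.parseInt(System.getProperty("default.timeout", "10"));

    private TestConfig() { }
}
```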
5. Timeouts. Lots of timeouts.
In tests, timing is king. And when the timing fails (clicking an item before it's there, not waiting for a loader to disappear, starting the tests a millisecond before the environment is up), all hell breaks loose. A common mistake, which obviously we made too, is to base the waiting mechanism on absolute time.
"When will the loader disappear? Wait for 5 seconds, it will definitely disappear". This approach leads to the following "bad code smell":
// Enter a page
// ... some code here ...

// Wait 5 seconds for the loader to disappear
Thread.sleep(5000);

// Now we continue to work on the loaded page, clicking elements, etc.
// ... some more code here ...
What's wrong with it? Well, everything. One day there's a glitch in your office's network, 5 seconds isn't enough, and the test fails. The hard truth is that you can almost never know how long is enough.
To overcome this issue, we switched to a "wait for an expected condition" approach: we wait for specific conditions, for example waiting at most 10 seconds for that loader element to disappear from the DOM, and only then proceed. If it disappears after 2 seconds, the code continues immediately and no time is wasted.
Bonus: a good approach here is to have a single default timeout used everywhere, with the ability to control it through configuration, as in the sketch below.
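Here is a minimal sketch of such an explicit wait, using Selenium 4's WebDriverWait and ExpectedConditions. The ".loader" selector and the class name are made up, and in practice the default timeout would come from configuration rather than a constant.

```java
import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class WaitExample {

    // Upper bound only: the wait returns as soon as the condition holds
    private static final Duration DEFAULT_TIMEOUT = Duration.ofSeconds(10);

    public static void waitForLoaderToDisappear(WebDriver driver) {
        new WebDriverWait(driver, DEFAULT_TIMEOUT)
                .until(ExpectedConditions.invisibilityOfElementLocated(By.cssSelector(".loader")));
    }
}
```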
6. Not realizing who the customers are
An automated testing project needs to be treated as a software product of its own. Who are its customers? That really depends on the goals of the project. It can be the test team, managers, developers, the devops team, etc. For the test team, it's the developers; for the developers, it's the managers; for the managers, it's the customers; for the devops team, it's the developers and the test team. A project cannot be successful unless you know who your customers are and WHAT they want to do with it.
In our case there are 2 types of customers (forgive me for oversimplifying):
- Developers: "Did I break something on this commit?"
- Managers: "Is the release candidate ready to roll?"
When tests pass, everyone is happy. But when something breaks, there are many things to figure out: what failed, is it a regression, how do we reproduce it, what's its severity? To handle these, we put lots of effort into improving the test reports (see the paragraphs above), and lots of time into TALKING TO THE DEVELOPERS so we're in the same boat, both hearing their needs and explaining what the failures mean.
Is the version ready for production? To answer this one, a deeper understanding of the real users' usage of the system is required.
Conclusion
I have learned a lot from this project. I hope you will learn from my mistakes and avoid them in your projects. If you have any questions, please feel free to ask in the comments section below.
Enjoy our automated testing mistakes!