Software testers and SDETs around the world spend a large portion of their working time debugging their test automation framework code and infrastructure to locate the causes of errors, defects, and issues found during their test automation runs. Yet, for some reason, this skill is rarely discussed and is assumed to be intuition-driven, something ingrained in the testers who write automation code. The aim of this article is to debunk the myths of debugging and thereby help testers reduce their debugging time and effort.

To put it in layman's terms, "debugging" is a deductive process of identifying the reasons for a failure. There is a popular saying: "To err is human". People write line after line of code that can break at any point, whether it is the code of a software application, automation code to deploy an application, scripts to monitor logs and raise alerts, scripts to automate other repetitive manual tasks, or code written inside test automation frameworks to perform checks on the application. While the evolution of tools, technologies, frameworks, and IDEs has reduced the debugging time of some individual tasks, the total time to debug issues has not dropped significantly, since a large part of the debugging process involves the human element of interpretation, reasoning, analysis, and effort. Let's see how we can help ourselves by busting some myths around debugging.
Myth 1: Frequent use of search engines for debugging makes you a bad programmer
Once I was helping a tester debug a UI-level test failure which, we later found, had been caused by a small unexpected change in a modal dialog window on the webpage he was testing. After we had debugged for quite some time using the Chrome DevTools and the framework code, I suggested that he google the issue, and I noticed some hesitation. Some programmers, junior and senior alike, assume that if they use search engines and Stack Overflow often to debug issues, then they are not good programmers – which is absolutely wrong. It is impossible to know and remember everything. By nature, good testers are usually quite efficient at searching and reading large amounts of documentation, manuals, and references online, and it is completely alright to turn to a search engine when you are stuck in the middle of debugging an issue, even for small, minute details.
Myth 2: Systems and Dependencies will always work as they are supposed to
While debugging, our initial instinct is to assume that the only cause of the failure or error is the code that runs the tests. But factors like platform libraries, the execution environment, third-party libraries, upstream and downstream systems, jobs/batches/crons, APIs, databases, servers, cloud infrastructure, and the other dependencies facilitating the execution are as important as the test automation framework itself. In my experience, unusual and unexpected issues in these factors are among the most common causes of errors, and we have to keep ourselves prepared for that. As good test automation coders, we have to keep all the possibilities in mind during debugging and carefully experiment with each factor in order to isolate the issue.
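One way to experiment with each factor systematically is a preflight routine that probes every dependency before the test run starts, so an environment failure is never mistaken for a framework bug. Below is a minimal sketch, assuming simple HTTP and TCP dependencies; the check names passed in are hypothetical placeholders, not any real service:

```python
"""Preflight checks before a test run: verify dependencies so that
failures in them are not mistaken for test automation bugs."""
import socket
import urllib.request


def check_http(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with a non-error status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except OSError:
        return False


def check_tcp(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection (e.g. to a database) succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def preflight(checks: dict) -> list:
    """Run all named checks; return the names of the ones that failed."""
    return [name for name, ok_fn in checks.items() if not ok_fn()]
```

If `preflight` returns a non-empty list, the run can be aborted with an environment error instead of a wall of misleading test failures.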
Myth 3: Design knowledge of the complete automation framework is not always required for debugging
It is often observed that when a new team member raises a pull request to merge a bug fix into the master branch, the code review results in a lot of comments and improvement suggestions. The person raising the PR might think these review comments are unnecessary, since the code change has done its job of fixing the bug, which was the main motive. This assumption is wrong. We cannot "fix" an issue completely unless we understand the framework well enough to know where the issue came from and which parts of the framework architecture it affects. One approach that has worked for me in such cases is to build a mental debugging model of the issue in terms of the flow of data. Asking yourself questions about the data (or collection of data) will help in most cases: Where does the data come from? Which functions process this data? Which parts of the code consume this data? Of course, this model alone won't help every time, but it is a good start in most cases.
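The three data-flow questions above can be made concrete in code: wrap each function on the data path so its inputs and outputs are logged, which makes visible where a value first goes wrong. A small sketch; the three wrapped functions are hypothetical examples, not part of any real framework:

```python
"""Tracing the flow of data through a framework by logging every call
on the data path."""
import functools


def trace(fn):
    """Log each call and return value of a data-path function."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        print(f"{fn.__name__}{args} -> {result!r}")
        return result
    return wrapper


@trace
def load_test_data(raw):          # Where does the data come from?
    return raw.strip()


@trace
def normalize(value):             # Which functions process this data?
    return value.lower()


@trace
def titles_match(expected, got):  # Which parts of the code consume it?
    return expected == got
```

Reading the trace top to bottom shows exactly which step first produced an unexpected value, which is usually where the fix belongs.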
Myth 4: Documenting the issue isolation procedure is a redundant activity
We must understand that debugging is not only an activity to fix issues; it is also an opportunity to learn and become a better programmer. Pressed for time and bandwidth, once we isolate the issues and their underlying causes, we make the changes, retest, push the changes, and move on to the next task. What we forget is that the method taken, the procedure followed, or the way in which we isolated the issues is something that will certainly help us debug more efficiently in the future. You might have debugged in the brute-force way, printing out statements and manually searching through stack traces and log files. Or you might have used techniques like Back Tracking, navigating backwards through the execution path, or the Cause Elimination method, developing a hypothesis as to why the issue occurred. Whatever the method, it is always helpful to document the steps you took to track the problem: how you reproduced it, how you simplified the system (e.g. when you removed that code snippet for debugging), how you isolated the possible origins of the issue and why you focused on the most likely one, and how you isolated the issue chain and corrected the defects. Especially when you work in a team that develops and maintains a test automation framework, keeping notes on the debugging process for future reference will pay off.
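The Cause Elimination method lends itself to building the documentation in as you go: test each hypothesis and record the outcome, so the isolation procedure survives as a reusable log. A sketch under the assumption that each hypothesis can be reduced to a yes/no check; the hypotheses and checks below are hypothetical examples:

```python
"""Cause Elimination with a built-in record of every step taken."""
import json


def eliminate(hypotheses):
    """Test each (description, check) pair; return the confirmed causes
    plus a log of every hypothesis tested."""
    log, confirmed_causes = [], []
    for description, check in hypotheses:
        confirmed = check()
        log.append({"hypothesis": description, "confirmed": confirmed})
        if confirmed:
            confirmed_causes.append(description)
    return confirmed_causes, log


# Hypothetical checks for a failing UI test:
causes, steps = eliminate([
    ("locator changed in the new build", lambda: False),
    ("test data expired in the staging DB", lambda: True),
    ("timeout too short for slow CI agents", lambda: False),
])
print(json.dumps(steps, indent=2))  # the record to keep for next time
```

The printed log is exactly the kind of note worth committing alongside the fix, so the next person hitting a similar failure starts from your eliminated hypotheses instead of from zero.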
Myth 5: Experimenting in the negative realm is not at all required during debugging
Sometimes finding the actual reasons for a failure becomes difficult, since not all root causes can be found easily through search engines, hypotheses, or checks on the dependencies. In such cases, what works for me is to contradict the formed hypotheses and the traditional way of debugging: I start checking the parts of the code, and the data, where the issue should not happen. By switching my debugging mindset to this approach, I am able to isolate the conditions under which the failure occurs as well as the conditions under which it doesn't. This approach might seem difficult and time-consuming, but when done correctly and with practice, it uncovers many difficult, hidden root causes.
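This negative-realm approach can be sketched as a probe: alongside the failing input, run the same code on inputs where the failure should not occur, and compare which cases pass and which fail to narrow down the triggering condition. `parse_price` below is a hypothetical buggy helper, invented for illustration:

```python
"""Isolating a failure boundary by probing inputs that should work
alongside the one that fails."""


def parse_price(text):
    # Hypothetical buggy helper: breaks on thousands separators.
    return float(text.replace("$", ""))


def probe(fn, cases):
    """Run fn on every labelled input and report the actual outcome,
    so passing and failing conditions can be compared side by side."""
    outcomes = {}
    for label, value in cases.items():
        try:
            fn(value)
            outcomes[label] = "ok"
        except Exception as exc:
            outcomes[label] = type(exc).__name__
    return outcomes


results = probe(parse_price, {
    "plain": "19.99",          # should work
    "dollar": "$19.99",        # should work
    "thousands": "$1,299.00",  # the failing case
})
```

Here the contrast between the passing and failing cases points straight at the comma as the trigger, which a search through the failing case alone might take much longer to reveal.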
As a tester, I like and enjoy debugging. I often pair up with other testers and help them debug issues. It is a must-have skill, and what it requires most is an inquisitive mindset. The skill might seem intuition-based, but like other software engineering practices, it is a process-based practice, and it is surrounded by myths. I hope this article helps you eradicate some of them the next time you are debugging your code.