Sunday, December 14, 2014

Automation Testing – Myth or Reality?

“Test Automation” is widely misunderstood. With the intent to produce high-quality software with ever-more complex technology under increasing competitive pressure, automated testing is being heavily adopted. This heavy adoption has, over time, led many of us to believe that it is a replacement for so-called “manual” testing.

I once asked a test manager how he saw automation testing: “It’s good for everyone. Every client wants it and thus it’s important. Also, once you have the tests ready, the tester headcount may go down (just a few are needed to execute tests). Best of all, automation results in huge ROI compared to manual testing.” This conversation shook me and led me to talk to a few other test managers to verify whether that was just one individual’s opinion. The truth was even more bitter: the more I talked, the more shocked I became. Most of them held the same views as stated above.

This post is an attempt to spread the actual meaning of automated testing, or, to be precise, automated checking, and to show how it must coexist with manual testing.

Does Automated Testing really exist?

First of all, there is no such thing as automated testing. That may sound a little shocking at first, but read on and I will make the case. We tend to think of automation testing as: “a form of testing that utilises scripts to automatically run a set of procedures on the software under test, to check that the steps which are coded in the script work.” For example: if we had a script that logged into the website, then added an item to the cart and placed the order, a basic automated test would check that this path through the system is operational, which is an affirmation that the function operates without causing any known validation errors or exceptions and produces an expected output. But it does not check anything that is not written in the script.
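The login/add-to-cart/place-order check described above can be sketched as follows. This is a minimal sketch: the `ShoppingCart` class below is a hypothetical toy model standing in for the real system under test, which a real automation suite would drive through a browser with a tool such as Selenium.

```python
# Hypothetical toy model of the system under test, used here only to
# illustrate what a scripted check does (and does not) verify.

class ShoppingCart:
    def __init__(self):
        self.logged_in = False
        self.items = []

    def login(self, user, password):
        # Toy rule: any non-empty credentials succeed.
        self.logged_in = bool(user and password)
        return self.logged_in

    def add_item(self, item):
        if not self.logged_in:
            raise RuntimeError("must be logged in")
        self.items.append(item)

    def place_order(self):
        if not self.items:
            raise RuntimeError("cart is empty")
        return {"status": "placed", "items": list(self.items)}


def check_order_path():
    """The scripted check: login -> add item -> place order."""
    cart = ShoppingCart()
    assert cart.login("alice", "secret")    # step 1: login succeeds
    cart.add_item("book")                   # step 2: item added to cart
    order = cart.place_order()              # step 3: order placed
    assert order["status"] == "placed"      # expected result (yes/no)
    assert order["items"] == ["book"]
    return True


check_order_path()
```

Note that the check passes or fails on exactly these assertions and nothing else: a garbled confirmation message or a ten-second page load would sail straight through unless someone explicitly coded a check for it.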

The key word here, relating to automation, is “check”. Computers can’t think for themselves; they can only follow the set of commands we give them, each yielding a “yes/no” response. Anything that has a set “expected result” can be classed as a check, where no real sapience is required. (Michael Bolton has a great blog post on “Testing vs Checking” that is well worth a read if you haven’t already.)

So, let’s start calling it “Automated Checking”, or just “Automation”, a name that better reflects what actually happens.

Is Automation meant to replace Manual?

James Bach has explained it very clearly: Contrary to the implication of typical marketing literature for test tools, automated testing is not the same as manual testing. What observant humans do when they go through a test process is in no way duplicated or replaced by test automation, because automation cannot be aware of all the hundreds of different failure modes that a human can spot easily. I have to explicitly program automation to look for suspicious flickers and performance problems, but with humans I can say "be alert for anything strange." So, no matter how many smart tools are available in the market, there are still many things you can’t do with automation. That said, this does not in any sense degrade the value of automation.

So, there will always be a need for manual, sapient testing in the software industry, but being able to utilise automation for the checking activities is highly beneficial. Both approaches are essential to a process that builds quality products in a fast-paced environment. Automation frees up the time and effort a tester would otherwise spend performing the “checking” tasks, allowing testers to focus on the sapient testing needed to accurately assess the quality of the software.

Read more about Testing & Checking here.