I am talking about the fact that even though you invested the time to automate something, you still choose to run it manually.
Notice that I am asking about any tests, including the ones you run manually only once in a while or those you give to your junior testers to be 100% sure all is OK; in fact, any tests you are still running both automatically and manually at the same time.
Surprisingly enough, even organizations with relatively mature automation processes still run a significant number of their automated scenarios as part of their manual tests on a regular basis, even when this doesn't make any sense (at least on a theoretical level).
After realizing this was the case, I sat down with a number of QA Managers (many of them PractiTest users) and asked them about the reasons for this seemingly illogical behavior.
They provided a number of interesting reasons and I will go over some of them now:
We only run manually the tests that are really important
The answer I got the most was that some teams choose to run tests both automatically and manually only when those tests are "really important or critical".
This may sound logical at first, but when you ask what their criteria are for selecting the tests that should be automated, most companies say they select cases based on the number of times they will need to run them, and also based on the criticality or importance of the business scenario. In plain English, they automated the important test cases.
So if you choose to automate the test cases that are important, why do you still run them manually under the same excuse of them being really important? Am I the only one confused here?
We don't trust our automation 100%
The answer to the question I asked above, of why teams run the important tests even though they are already automated, comes in the form of an even more interesting (and simple) answer: "We don't really trust our test automation."
So this basically means they are investing 10 or even 50 person-months of work, and in most cases thousands of dollars on software and hardware, in order to automate something, and then they don't really trust the results? Where is the logic in this?
OK, I've worked enough with tools such as QTP and Selenium to know that it is not trivial to write good and robust automation. On the other hand, if you are going to invest in automation you might as well do it seriously and write scripts that you can trust. In the end it is a matter of deciding to invest in the platform and take the work seriously in order to get results you can trust (and I don't mean buying expensive tools; Selenium will work fine if you have a good infrastructure and write your scripts professionally).
The alternative is simple: if you have automated tests you can't trust because they constantly give you wrong results (either false negatives or, even worse, false positives!), you will eventually stop using them and finally throw all the work and money out the window.
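A lot of that untrustworthiness comes from scripts that sleep for a fixed time and then hope the application is ready. As a minimal sketch of the idea (the same one behind Selenium's WebDriverWait, but shown here as a hypothetical standalone helper), polling an explicit condition with a timeout turns a random flaky failure into a clear, debuggable one:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    A flaky page load or slow backend becomes an explicit TimeoutError
    instead of a test that fails only some of the time.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(interval)
```

In a real Selenium script you would pass a lambda that checks for the element or text you expect; the point is that the script states what it is waiting for, so a failure tells you exactly which assumption broke.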
We don't know what is covered and what is not covered by the automated tests
This is another big reason why people waste time running manual tests that are already automated: they are simply not aware of which scenarios are included in their automation suite and which aren't. In this situation they decide, based on their best judgment, to assume that "nothing is automated" and so run their manual test cases as if there were no automation.
If this is the case, then why do these companies have automation teams in the first place?
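One lightweight way out is to keep a shared list of scenario identifiers and diff it against what the automation suite actually covers. A minimal sketch, assuming a hypothetical naming scheme where both teams tag their tests with the same scenario ids:

```python
def coverage_report(manual_scenarios, automated_scenarios):
    """Split the manual scenarios into those already automated and those not.

    Both arguments are collections of scenario ids; the output tells the
    manual team what they can safely skip and what still needs a human.
    """
    manual = set(manual_scenarios)
    automated = set(automated_scenarios)
    return {
        "safe_to_skip": sorted(manual & automated),    # already automated
        "still_manual": sorted(manual - automated),    # real manual work
        "automation_only": sorted(automated - manual), # no manual twin
    }
```

Even a report this crude, regenerated before every cycle, replaces the "assume nothing is automated" fallback with an actual answer.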
The automated tests are the responsibility of another team
Now for the interesting question: how come a test team "doesn't know" which scenarios are automated and which aren't? The most common answer is that the tests are being written by a completely different team, a team of automation engineers, that is completely separate from the one running the manual tests.
Having two test teams, one manual and one automated, is not a bad thing; in many cases it is the best approach to achieving effective and trustworthy automation. The bad thing is that these teams can sometimes be completely disconnected, working on the same project without communicating and cooperating as they should.
I will talk about how to communicate and cooperate in a future post, but the point here is that when you have two teams (one automated and one manual) you need to make an extra effort to keep both teams coordinated so that, as a minimum, each of them knows what the other is doing and can plan accordingly.
We want to have all the results in a single place to give good reports
Finally, I wanted to mention a reason that was brought up by a number of test managers. They described it as a difficulty rather than a show stopper, but it came up often enough to be worth mentioning: they needed to provide a unified testing report for their project, and to achieve this they either ran part of their tests manually or created manual tests to reflect the results of their automation.
Again, this looks like a simple and "relatively cheap" way of coordinating the process and producing a unified report, but it suffers from being a repetitive manual job that still needs to be done even after you have an automation infrastructure in place. Slowly but surely (especially as more and more automation is added), it will run into coordination and maintenance issues that make it more expensive, and in some cases will render it misleading or even obsolete.
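The repetitive part of that job is exactly what a few lines of code can take over: instead of re-running or re-entering results by hand, merge the two result sets mechanically. A minimal sketch, assuming hypothetical inputs that map each scenario id to a "pass"/"fail" outcome:

```python
def unified_report(manual_results, automated_results):
    """Merge manual and automated outcomes into one status table.

    Each input maps a scenario id to "pass" or "fail". If a scenario
    appears in both, a failure on either side wins, since that is what
    a release decision cares about.
    """
    merged = {}
    for source, results in (("manual", manual_results),
                            ("automated", automated_results)):
        for scenario, status in results.items():
            prev = merged.get(scenario)
            if prev is None or (prev["status"] == "pass" and status == "fail"):
                merged[scenario] = {"status": status, "source": source}
    return merged
```

In practice the inputs would come from the test-management tool's export and the automation framework's results file, but the merge itself stays this small, and it only has to be written once.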
What's your take?
I am actively looking for more issues, experiences, or comments like the ones above that revolve around the challenges of manual and automated testing. Do you have stuff you want to share? Please add it as a comment or mail me directly at joel-at-practitest-com. We've been working on a solution for these types of issues, so we are looking for all the input we can get in order to make sure it provides an answer to as many of the existing challenges as possible. I will be grateful for any help you can provide.