Can Automation Do Actual Testing?

CAST 2014

I just listened to an excellent presentation from CAST 2014 by Richard Bradshaw, the Friendly Tester, on automated testing. I think this is a must-listen presentation. Here is my quick overview:

Automation is not testing

Automation will never replace a real tester. That is why it’s ridiculous for anyone to say that 100% of tests should be automated. If we relied on test automation alone, we’d be in some serious trouble.

Automated tests can’t think, they can only check things that they’re told to check. Therefore, automation is for checking — not testing.

Furthermore, reducing the number of testers should not be the goal of automation. Richard Bradshaw goes into detail about why you cannot (and should not) rely on automated testing alone to test an application.

What is Automation?

So what is test automation? The definition Richard offers up is that Test Automation is “any use of hardware or software tools to support testing.” Notice it is to support testing — not replace it.

If you flip “test automation” around to “automation in testing,” it can really change your perspective. Automation is just a tool: it is mindless, can’t think, and can only do what you tell it to do.

We (testers), as humans, need to provide the actual thinking. Automation supports testing but it is not testing.
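To make the checking/testing distinction concrete, here is a minimal, hypothetical sketch of what a “check” is: a pass/fail comparison against an oracle a human chose in advance. The function name and expected page title below are made up for illustration.

```python
# A "check" can only verify an expectation a human has already encoded.
# Hypothetical example: checking a login page's title against a fixed oracle.

def check_login_page_title(actual_title):
    """Pass/fail against an oracle a tester decided on up front."""
    expected = "Sign In"  # the oracle: a human chose this, not the tool
    return actual_title == expected

# The check flags deviations from its oracle, nothing more. It would still
# pass if the page had a garbled logo or a confusing layout, because nobody
# taught it to look for those problems. A human tester would notice them.
print(check_login_page_title("Sign In"))  # True: expectation met, nothing new learned
print(check_login_page_title("Sing In"))  # False: a change a human must interpret
```

The thinking happened when a person picked the oracle; the tool merely re-applies that decision, which is why the check can confirm old knowledge but never discover anything on its own.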

My main concern

I’m all about automation, but my main concern is that teams that focus all their efforts on automated testing will undoubtedly miss quality issues that can only be found by human testers.

Our goal as test engineers is to produce a quality product, not automated tests. A co-worker of mine, Mark Johnson, has an interesting take on this. He compares automation to remodeling kitchens (not a surprise to anyone who knows Mark—he’s a pretty handy guy).

“When planning a kitchen remodel, there will be a number of known components: appliances, counters, cabinets.

Automation is good at ‘checking’ characteristics of these components to verify that they are complete and that they work as intended. Some components are more complex than others (e.g. a refrigerator compared to a section of countertop).

Remodeling experts can design kitchens using these components, relying on the fact that a sink cabinet will be a certain height and width, and knowing what strength and/or size can be expected to successfully hold up a sink and counter.

They can use those components to design various kitchens, making decisions and adjustments in designs that will not be automated (exploratory testing).

There may be certain known configurations that can be automated (validation level automation workflows), but these are small in number.

Not every engineer needs to become a programmer; their expertise should be leveraged, not sublimated.”

Check out Richard Bradshaw’s full presentation on YouTube: Test Automation != Less Testers || Faster Testing || More Time For ET

What do you think?

Do you agree or disagree with the Friendly Tester? Leave a comment and let me know your thoughts. And thanks, Richard, for starting what I think is a must-have conversation within the testing community.

11 comments
Greg Paskal: Removing “Test” from Test Automation - September 7, 2014

[…] Richard Bradshaw CAST Presentation […]

Reply
Keiran - September 9, 2014

Hey Joe,

I’ve been a software automation tester for 20 years.

I watched Richard Bradshaw’s presentation to the end, and it evoked a few laughs and a few nods too.

I agree that test automation can’t replace all testing – that’s a no-brainer.
What it can do – if successfully implemented – is free testers to explore complex edge cases and discover more defects.
Automation will never reduce tester head count, but it might make your software much more reliable.

I found Richard’s off-hand comments about IDs and object libraries typical of developers here.

Keiran

Reply
    Joe Colantonio - September 10, 2014

    Hi Keiran, I completely agree with your comment, especially your point about automation being able to help testers explore complex edge cases and discover more defects.

    Reply
Gilbert - September 10, 2014

Hello Joe,

Sorry, I have to disagree with some of your views/comments regarding test automation.
True, test automation cannot replace a manual tester 100%. There are some tests where issues found by tapping or typing keys very, very fast are almost impossible to reproduce using automated scripts.

But, “automation is for checking and not for testing”…c’mon now, you’ve been in software testing for a while, so you know that is not true. You put the thinking into how you’ll test things and then automate it; what is the difference? The test tool did not do the thinking, you did. But you put your thinking into an automated form, so whatever you were planning to do for “testing”, the test tool did for you, and it did not convert the “testing” to “checking”.

You can find this out when you have to do “memory leak” testing, where in some cases leaks only happen after you open/close dialog boxes/screens a hundred or a thousand times, or sometimes 10,000 times. Would you be willing to do that manually?

…just my opinion…

Regards,
Gilbert

Reply
    Joe Colantonio - September 10, 2014

    You bring up some great points, Gilbert! First, I want to reiterate that I’m all about test automation and am in no way saying that automation is not useful. Your memory leak example, and performance testing in general, really cannot be done without automation.

    Basically, my concern (maybe this is rare) is that I’ve been in situations with FDA-audited software where managers push for 100% test automation, which in my experience is neither feasible nor desirable. We have some teams that rely on automation so much that they just run the tests and hardly ever manually bring up the application after a deploy to look at it and make sure everything is kosher. As a result, there are many UI issues in the application.

    When managers hear things like “80% automation coverage,” they need to understand what that really means, and that unless they also have testers doing manual testing, they are in trouble.

    Reply
      Gilbert - September 10, 2014

      Hello Joe,

      That I agree with you 100% (OK, maybe 99%, for the managers’ satisfaction). That’s why we automate things: so that we have time to do manual tests on the side (while the automated scripts are running on another machine) in areas that are very hard to automate.

      The problem we see a lot is that many managers have never done testing (manual and/or automated), so they have no clue why 100% test automation was not done or why test automation did not catch this-or-that bug. Negative testing is one good example, since it can be infinite if you really want to make the testing effort endless. The “what ifs” of one tester can be very different from the “what ifs” of another tester. I have seen software where a programmer tried to hide some silly (kind of funny, or stupid) stuff behind key combinations like Shift+Y or Ctrl+7 that would display some sort of animation/cartoon. Even with those “what ifs”, we were still able to catch it by accident (we were so bored, with nothing else to test that day, that we decided to just keep banging on the keyboard).

      If only we could make our managers write the test plan/test cases and tell us which test cases to automate, then we could always tell them “test automation 100% complete from what was asked for”, although we (testers) know it is not.

      Regards,
      Gilbert

      Reply
        Richard Bradshaw - September 19, 2014

        I have to assume you listened to the talk Gilbert.

        The difference is that once you have done the “testing” and automated it, that’s where the testing ends. You have learnt something about the system, as only a human can, and you have decided to capture that knowledge in the form of a check. That check can no longer learn; all it will do is inform you of a change based on what you taught it, the oracle you coded. If the check fails, you will use your human brain and put that failure into context, something a machine cannot do. You may decide that the check failing is actually expected and delete/amend/note accordingly.

        Also, as mentioned by Joe, I am all for using automation to support testing; your memory leak example is a good one. Of course I would not want to do that manually, I would want a tool to support me. In this case we take the oracle out of the equation: we would simply be using the automation to create the dialog some 100 times while observing the behaviour. Or we might automate the whole process and have the tool churn out some numbers for me, the human, to analyse and decide whether there is a problem.

        Check out this post from James Bach and Michael Bolton regarding testing vs checking: http://www.satisfice.com/blog/archives/856 There is a difference, a huge difference, and as coders with a testing mind we need to be more aware of where we can support testing. Checking is very important, a big part of any testing approach, but lots of checking and little testing is not going to get you new information about a product. Information that the business requires to make its key decisions.

        Checking confirms that old information hasn’t changed: information you would use to guide your testing, your existing knowledge of the system. Testing seeks new information. If you have made a change to the system, surely you want new information.

        As you also mentioned, and as I did in the talk, lots of testing takes place while creating checks. I bet you have found many bugs while creating checks; I have.

        Reply
Aravindhan - September 11, 2014

Hi Joe,

Well, again, I believe this is a never-ending topic. We automation testers will try to frame this topic (100% automation) based on our own work experiences. I would like to make two points here.

First: “automation helps testers.” In my testing experience, I have written a lot of automation scripts that create seamless test data for manual testers on a regular basis. In many testing scenarios, testers spend 70% of their time preparing test data, so automating that saves a whole lot of time. It has indeed inspired manual testers to perform more extensive testing, and these are the areas where I have felt the real advantage of automation.

Second: we should be clear whether our scope is to reduce the workforce by automating the testing, or to assist the testing, create space for extensive testing, and speed up the testing process.

Reply
    Richard Bradshaw - September 19, 2014

    As per one of my main points, Aravindhan, we can’t automate the testing; we can automate the checking and create tools to support the testing. However, the depressing truth is that a lot of organisations consider checking to be what testing is, not just part of it. They have their testers repeatedly running scripts, scripts intended to test the system. It doesn’t help that while these testers will sometimes find issues based on what the scripts said, most of the time they will find issues observed while following the scripts, not actually related to the script. That’s the testing part of it. Humans are really bad at checking.

    Automation can eliminate this depressing cycle by doing the checking, and doing a far better job of it than we can, and a faster one. That frees up the testers to test: seeking out new information, exploring the product, supporting the business in making its decisions.

    Once the testers are testing, they are likely to start asking for some of the things already mentioned in the comments. “I have been asked to explore this, but it’s taking me ages to create this data; can you help?” “Reading this log file so many times is driving me crazy; can you write a parser for me?” “I am only interested in the look of these pages, but it takes 10 minutes to get to each one; can you automate this and send me screenshots each time?” Of course we can do all this.
    But if the mindset is on writing tests, as in “it can do all our testing for us,” these fantastic ideas/tools to support testing don’t get thought about. Or even if they do, because management doesn’t understand, they say no: don’t waste time on that, keep automating my testing. Sigh. But that’s how it is.

    But it can be different.

    Reply
Richard Bradshaw - September 19, 2014

Hello Joe, thanks for this write up, I feel you really understood the message I was trying to get across.

It was the first time I had given this talk, and it showed in places. The message(s) will become clearer in time.

Look forward to reading some of your ideas on this topic in the future. Nice kitchen analogy btw.

Reply