A few weeks ago, I had the honor of being a guest on a Zephyr Webinar about Test Automation alongside John Sonmez and Dave Haeffner. Like I’ve done in the past when I was a guest on some Internet of Things webinars, I’m posting some of my notes and a recap of what we covered.
For the full replay, be sure to check out Zephyr’s Getting Test Automation Right webinar.
Why automated testing is important
The first question I was asked was, “Why do you think test automation is so important?” It’s actually a question I’ve been thinking about for some time. I believe that with the current trends in software development practices, as well as the shift in focus towards customer-driven development models, test automation is more important now than ever.
Agile software testing and DevOps demand more automation, and practices like continuous integration and continuous delivery require automated tests that can be run quickly and reliably.
In fact, I would go so far as to say that one cannot be successful in any of these practices without there being some degree of test automation in place.
I wrote an article for TechBeacon recently in which I present five steps to help folks get ready to make the shift left with test automation, and which goes into more detail on this.
Another big reason why test automation is important is that it helps you scale quickly. I work for a large company, and we use a subset of our automated tests across many different environments like development, staging and INT. When we test new technologies, like how our application runs against certain versions of our database, or how our app runs against a customer-specific configuration, we can leverage our existing automated tests over and over again to help us test these types of issues. This saves us time, of course, and we don’t have to worry about scheduling lots of resources for all these activities.
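As a small sketch of how one suite can serve many environments (the environment names and URLs below are hypothetical, not from the webinar), a single configuration lookup keyed off an environment variable lets the same tests target any deployment:

```python
import os

# Hypothetical base URLs; real values would come from your own deployment map.
ENVIRONMENTS = {
    "dev": "https://dev.example.com",
    "staging": "https://staging.example.com",
    "int": "https://int.example.com",
}

def base_url():
    """Resolve the target environment from TEST_ENV (defaulting to dev).

    Every test builds its requests from this one lookup, so pointing the
    whole suite at a new environment is a one-variable change rather than
    an edit to every test.
    """
    env = os.environ.get("TEST_ENV", "dev")
    try:
        return ENVIRONMENTS[env]
    except KeyError:
        raise ValueError(f"Unknown TEST_ENV: {env!r}")
```

With a layout like this, running the same suite against staging is just a matter of exporting `TEST_ENV=staging` before invoking the test runner.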
I think everyone can agree that test automation is important, but one of the biggest struggles for teams is actually knowing what to automate.
What should you automate?
One of the top questions I get from QA teams who are new to automation is what they should automate: how can they identify manual tests that might make good candidates for automation?
Here are four guidelines I’ve offered, but be sure to check out the entire webinar to also hear Dave and John’s perspectives on this. (I imagine some of these seem like common sense, but you’d be surprised how many teams still struggle with this dilemma.)
- Don’t automate one-off items. (There are some exceptions to this; for example, I’ve quickly written some data-population scripts that have saved folks hours of manual effort.)
- Don’t automate something that requires manual steps during the test run. This one drives me crazy! I work with eight sprint teams, and they’ll tell me that all their tests are automated and running fine in their environment(s). But when I run them in our continuous integration environment, they fail. When I press them on exactly what they’re doing to get the tests to work locally, it often turns out that random popups occur during the run, and they manually click on them to help the test along. Obviously, this approach isn’t going to work in CI, and it definitely isn’t scalable or maintainable.
- This leads me to another important point: you shouldn’t automate non-deterministic tests. If you don’t know what behavior is going to occur, or when, while testing an application manually, how are you going to get an automated test to handle it? Automation is not magic. It can only do what you tell it to, even if you could automate some of these random things.
- It also turns out that many non-deterministic tests are better left to manual testers, because they can be easily verified by a human but are almost impossible to automate. When this is the case, always go with a manual test.
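One common source of *apparent* non-determinism is timing: a check that inspects application state before it has settled will pass or fail at random. When the behavior itself is predictable and only the timing varies, a small polling helper (a sketch of my own, not from the webinar) can make the check deterministic instead of relying on fixed sleeps or a human nudging the test along:

```python
import threading
import time

def wait_until(predicate, timeout=5.0, interval=0.05):
    """Poll predicate() until it returns True or the timeout expires.

    Returns True as soon as the condition holds, False otherwise. This
    turns a timing-dependent check into a deterministic one: the test
    either observes the expected state within the window or fails
    consistently, instead of flaking.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return predicate()  # one final check at the deadline

# Example: a background task that finishes after a short delay,
# simulating an app whose UI updates asynchronously.
done = threading.Event()
threading.Timer(0.2, done.set).start()

assert wait_until(done.is_set, timeout=2.0)
```

Genuinely random behavior, of course, still belongs with a human tester; this only helps when the outcome is known and only the arrival time is not.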
Even though I faithfully promote automation, it should not be the only testing strategy your teams use. Automation is an important piece, but I’ve seen some teams take it too far: they have automation as part of their sprints’ Definition of Done, and it’s all they focus on; they basically try to do too much with it.
It sometimes gets to the point where no one is actually opening up and poking around their application’s functionality and looking at it with a tester’s mindset.
This can be dangerous, especially if you’re working on mission-critical applications like those in healthcare. I’d think that you would want a set of actual eyeballs doing some sort of spot checks, using the application as a real user would, to ensure that everything is okay.
Our automated tests can only check what we tell them to, and teams often do not have the right checks in place to verify that something is really working.
For example, I know of a team that for the last year had been praised for all the unit tests they wrote, which ran quickly and reliably and had all kinds of coverage. When we actually analyzed those tests, however, it turned out that many of them were not checking the right things, and some were doing nothing at all.
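As a minimal illustration of the difference (the function and test names here are invented for this example), a test that merely exercises code without asserting anything will “pass” and count toward coverage no matter what the code does:

```python
def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    return price - price * percent / 100

def test_discount_weak():
    # Runs the code but checks nothing: this "passes" even if
    # apply_discount returned garbage, inflating coverage numbers
    # while verifying nothing.
    apply_discount(100, 10)

def test_discount_real():
    # An actual check: fails loudly if the math is ever wrong.
    assert apply_discount(100, 10) == 90.0

test_discount_weak()
test_discount_real()
```

A quick audit pass that flags tests with zero assertions is one cheap way to catch this class of do-nothing test before it earns a year of unwarranted praise.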
Another risk I see in many teams is that automation is treated as an afterthought, which is actually the biggest risk of all.
Testability needs to be baked into our applications right from the start. As a normal part of sprint planning, developers should be thinking about how they can make their application code more testable by providing things like unique element IDs for their application fields and APIs to help create hooks into their application(s) that can be used in their automated tests.
They should also be thinking about how any code changes they make to the application are going to impact existing automated tests, and plan accordingly. If you don’t do this you’re not going to be successful with automation for very long.
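As a small sketch of why unique element IDs matter (the markup and IDs below are hypothetical, and a real UI suite would use a driver such as Selenium with an ID-based locator rather than parsing HTML by hand), a stable `id` gives the test a hook that survives layout changes, whereas positional lookups break as soon as the page is rearranged:

```python
from html.parser import HTMLParser

# Hypothetical page fragment; note each interactive element has a unique id.
PAGE = """
<form>
  <input id="order-quantity" type="text">
  <button id="submit-order">Submit</button>
</form>
"""

class IdFinder(HTMLParser):
    """Collects the tag name of every element that declares an id."""
    def __init__(self):
        super().__init__()
        self.ids = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "id" in attrs:
            self.ids[attrs["id"]] = tag

def find_by_id(html, element_id):
    """Return the tag carrying element_id, or None if it is absent."""
    finder = IdFinder()
    finder.feed(html)
    return finder.ids.get(element_id)

# A stable, unique id gives the automated test a reliable hook:
assert find_by_id(PAGE, "submit-order") == "button"
```

The same idea applies to API hooks: an endpoint the developers add for seeding or inspecting state is just another stable handle the automation can rely on instead of fragile UI workarounds.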
For the full webinar replay be sure to check out Zephyr’s Getting Test Automation Right webinar.