Want to know the state of automation testing in 2019?
Every year I post my views on where I think test automation is going in the new year. This piece was originally posted on TechBeacon, but I have updated it here.
The information I collect gives me a clearer picture of what the rest of this year will look like—and what you should prepare for as a tester.
Here are the key challenges test automation engineers are likely to face in 2019, as well as the skills and tools they will need to succeed.
What Automation Engineers Are Struggling With
Test engineers are still wrestling with the same issues they have been dealing with since I first began my testing career back in the late ’90s. The good news is that, based on the trends I’ve seen, many of those issues will finally be addressed in 2019.
Maintenance of Automated Tests Will Get Easier
Fortunately, most new and emerging AI solutions I’ve seen were created to address the issue we have had with the maintenance of test scripts. You can expect that trend to continue.
How many times have your test scripts failed because someone changed the application under test’s element IDs but didn’t update the automated test scripts?
Most new functional AI-based tools are designed to help with this problem. So, for example, instead of having to manually update an automated test after every new user interface change, these tools are smart enough to self-correct during runtime.
This is the year you’ll start putting these tools to work if you haven’t already.
Automation, Heal Thyself
Self-healing is possible because, for every test run, the tools are collecting tons of data. This data spans not just test output logs but also HTML data, images, screenshots of the state of pages at each point that you’re executing the tests, and errors coming out of Chrome and other browsers.
All that data is feeding into the tool’s machine-learning algorithms. So every time you run your tests, the tool trains the algorithm and then uses what it learned from the data to know “normal” application behavior.
Over time, deviations from known behavior can serve as flags for potential issues. This data is also used to make a corrective decision at runtime if a test encounters a potential problem.
As a result, the tools can begin to know, for example, when an expected field element ID is unavailable and which other element it can use to interact with the field in order to continue the test without breaking it.
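The fallback behavior described above can be sketched in a few lines. This is an illustrative sketch only, assuming a Selenium-style driver object; the function and class names are hypothetical, not any vendor's actual API.

```python
# Illustrative sketch of a self-healing element lookup, assuming a
# Selenium-style driver. All names here are hypothetical, not any
# vendor's actual API.
class ElementNotFound(Exception):
    pass

def find_with_fallback(driver, locators):
    """Try each (strategy, value) locator in order; return the first hit.

    A real AI-based tool ranks fallback candidates using attributes
    learned from earlier runs (visible text, position, CSS classes)
    instead of a hand-maintained list like this one.
    """
    for strategy, value in locators:
        element = driver.find_element(strategy, value)
        if element is not None:
            return element
    raise ElementNotFound(f"no locator matched: {locators}")
```

If the primary ID disappears after a front-end change, the test falls through to a CSS or text-based locator instead of breaking outright.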
One open-source plugin, for example, helps find Appium elements using a semantic label (such as “cart,” “microphone,” or “arrow”) instead of making you dig through your app hierarchy. Using AI, the same labels can be used to find elements with the same general shape across different apps and different visual designs.
Expect to see other companies open-sourcing similar solutions in 2019. Besides tools that can self-heal at runtime, you’ll see more tools to help you analyze and report on your test results by leveraging machine learning.
The AI tools I’ve tested already understand the difference between what might be a very small change to an element, a new feature, or a broken front end. This progress will continue.
AI Test Automation Assistance
Within some larger enterprises, the amount of time required to triage failing tests can be all-consuming. That leaves you with little time to innovate, because you waste considerable time hunting through log files to determine whether failures stem from environment problems, test-script issues, or real defects.
Machine learning is a perfect remedy for this type of dilemma.
“What these AI-based solutions do is take the drudgery out of testing. They actually find things at scale,” said Jason Arbon. “For example, you, as testers, can’t go through a million server logs, but a machine can.” A machine can easily show you what’s most likely to be different.
“So ultimately, the machine does the drudgery of surfacing up insights that are interesting and have a prediction. But it’s ultimately up to the human judgment to say if the machine is right or wrong.”—Jason Arbon
A great example of this is ReportPortal.io, an open-source tool. I’ve tried it, and I think it’s going to take off this year.
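The “surface what’s different” idea Arbon describes can be sketched without any ML library at all: normalize log lines into templates, then rank templates by how much their frequency changed between a passing and a failing run. Real tools use far richer models; this is a minimal, hypothetical illustration.

```python
# Minimal sketch of machine-assisted log triage: surface the log
# templates whose frequency changed most between a passing run and a
# failing run. Real tools use richer ML models; the idea is the same.
import re
from collections import Counter

def normalize(line):
    # Collapse numbers and hex IDs so similar lines share one template.
    return re.sub(r"\b(0x[0-9a-f]+|\d+)\b", "<N>", line.lower())

def most_changed(pass_log, fail_log, top=3):
    passing = Counter(normalize(l) for l in pass_log)
    failing = Counter(normalize(l) for l in fail_log)
    delta = {t: failing[t] - passing[t] for t in set(passing) | set(failing)}
    return sorted(delta, key=lambda t: abs(delta[t]), reverse=True)[:top]
```

Fed two runs’ worth of logs, the function pushes the lines that appeared (or vanished) in the failing run to the top, which is exactly the “interesting” subset a human should look at first.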
Eran Sher, CEO of Sealights, recently told me that he expects to see a transition not only to AI and machine learning but toward managing quality through data analytics, as opposed to relying solely on opinions, feelings, and experience.
How cool would it be to have a tool that leverages machine learning to analyze not only the code coverage of all your existing tests (including functional tests), but your test coverage as well?
And wouldn’t it be great to know, when a developer checks in code, the exact number of tests you need to run to verify that change? 2019 is the year when that becomes possible.
Automation Tool Diversity
Four years ago I wrote about how browser-based automation tool vendors were slowly dying off as users replaced them with the open-source Selenium tool.
Over the last five years, Selenium has been the number one browser-based automation tool for testers. As a result, vendors of commercial tools now embrace and support it as well.
But this is the year when testing organizations that started their automation journey years ago might begin to realize that automation does not necessarily always equal Selenium. They need to use the right tool for the right job in their development pipeline.
Don’t get me wrong. There’s nothing wrong with Selenium. But it’s just a tool, and different teams have different needs, styles, and preferences.
More options are becoming available, and that trend will continue. It’s almost as if we’re going back in time when it comes to AI-based automation testing options, with vendors gaining the upper hand once again. Most of the big players in the AI testing space right now are paid solutions.
This is why more new tools are being introduced that don’t leverage Selenium WebDriver. TestCafe, Cypress.io, and Jest are just a few examples of this trend. That said, the trend also seems a bit weird, since WebDriver is now a W3C standard.
The bottom line: If you want to get started with AI automation, you’ll probably want to budget for it in 2019—unless an open-source, AI-based automation tool comes out that changes the game, as Selenium did.
The Return of Record and Playback Automation Tools
The Selenium IDE is back. I know, you may be screaming in horror at this trend, but keep an open mind and read on.
You’re going to start seeing many practices that testers used to frown on coming back in vogue this year, including record and playback, and tools such as Selenium IDE.
What’s interesting here is that most of the new AI-based tools have record and playback functionality. They then use machine learning to help improve reliability during runtime.
Continuous Testing: The Top Buzzword
You’re going to be hearing much more about continuous testing in 2019; in fact, it’s already one of the hottest topics in my Test Talk interviews.
The best definition of continuous testing I’ve heard is the ability to instantly assess the risk of a new release or change before it affects customers. You want to identify unexpected behaviors as soon as they are introduced, because the sooner you find or prevent bugs, the easier and cheaper they are to fix.
The most common way to accomplish this is by executing automated tests that probe the quality of the software during each stage of the software development lifecycle (SDLC). The mantra of continuous testing is “Test early, test often, and test everywhere in an automated fashion.”
Continuous testing begins not after everything is completed, but from the start. Each step along the way serves as a quality gate, baking in excellence at each stage of the SDLC pipeline.
This will be the year when you’ll find more ways to improve quality at each stage of your pipeline by implementing continuous testing methodologies.
Automation Vendor Merger Madness
With the growing popularity of continuous testing, more vendors are jumping at the chance to create end-to-end testing solutions for their clients. But to accomplish that, many of them will need to acquire tools that aren’t part of their current portfolio, in areas such as performance testing or test management. That’s why you’re going to see more mergers and acquisitions in the coming year.
Already, Perforce has acquired Perfecto Mobile. Perforce CEO Mark Ties stated in his post about the merger that one of the reasons for it was that “Enterprises continue to need more automation to both scale and accelerate their application delivery,” which is really the whole point of continuous testing.
These acquisitions will lead to better product suites and choices, making 2019 a good time to be a tester. The market is demanding that businesses develop quality software rapidly, and vendors are investing in testing tools to help that along.
That said, you can’t do anything with a tool or technique without testers, a fact that should give you optimism in the new year. So embrace the change, and lead the way to a better-automated future.