I was recently a guest on a Kobiton webinar, Making the Jump: Manual to Automation, Automation to Autonomous.
Also on the panel were Frank Moyer, CTO of Kobiton; Praveen Bhasker, former Sr. Manager of Enterprise Automation at Office Depot; and Paul Grizzaffi, Principal Automation Architect at Magenic.
The transcript is below.
Manual Testing: Necessary or Antiquated?
Before we got into too much detail, Frank provided some clarification on semantics. "Manual testing," or to say it another way, a human executing a test script, needs to be broken down further. Broadly, testing performed by humans is either checking or exploratory testing. Manual checking and exploratory testing are both manual testing, but they are different: exploratory testing is a mindset and requires thinking, while manual checking does not. With those semantics, exploratory testing is, in my view, and will always be necessary, no matter how far artificial intelligence takes us. There will always be a need to perform some level of exploratory testing. A thorough exploratory strategy provides a level of quality above straight automation and is a real advantage. Manual checking, however, puts the brakes on a release cycle, and it is the greatest impediment to continuous testing. With development release cycles accelerating from months to weeks to days to hours, relying 100% on manual checking becomes increasingly burdensome on the organization, especially in mobile, where testing on real devices is imperative.
So with "manual testing," human eyes and interaction on the software are going to be necessary, because there are things that machines cannot do for us today, things that machines should not be doing for us today, and then a big pile of stuff that maybe the machines could do but that isn't cost effective. If we look at automation as a force multiplier, helping teams get software out the door at an acceptable level of quality and an acceptable level of risk, then it isn't business-wise to go all automated, or even all human interaction. There's a sweet spot in between. Looking at the continuous testing pipeline, we hear a lot of companies saying that with every check-in, there's going to be a push out to production. That's great; if you want that level of frictionless code deploy to production, you can do it without a human ever looking at the build, but you're incurring a risk. The risk is that your automation missed something, and it's a pretty hefty risk. Companies that do that need tolerant users, or a low cost of change, and probably a low cost of failure as well, because if either of those is high, chances are that releasing with no humans looking at it at all is not the right way for you to go.
For some reason, every time I hear "manual testing" or "automated testing" now, I think of Michael Bolton, the tester protester, who goes crazy about this wording. It comes back to what Frank mentioned about semantics. When we talk about testing, there really is no manual or automated testing, Michael Bolton would say, the same way a doctor doesn't do automated doctoring or manual doctoring. That sets the mindset that it doesn't really matter what type of testing you do, as long as it's finding value, finding risk, and helping the customers. That's why I would never say manual testing is antiquated. We will always need exploratory testing; it's going to be necessary. Also, as Paul mentioned, it really is based on your context. I work for a large medical organization, so in that type of situation you probably want some manual eyeballs on your application before it's deployed, just to make sure things are right, because you have lives at stake. But if you have a simple application where the risk is minimal, you could probably get away with all automated tests. It does depend on your situation; mine is going to be different from yours, so it really is a team decision. Manual testing can be an advantage, but it becomes a hindrance, as Frank mentioned, if you need test coverage against multiple devices, multiple operating systems, and all these different combinations; there it's probably better to have an automated test suite to get that coverage across all the browsers, plus a few exploratory tests to get a human's point of view. So it really is a marriage of both. I don't think they're mutually exclusive. You use both approaches in your test plan to get the best quality into the hands of your customer and make sure you're delivering quality software.
I agree with Frank's views on the difference between manual and exploratory testing. It all depends on what you want to achieve in your organization. Let's say you want to test a legacy application that is going to be replaced in six months. It's not a good idea to implement automation in that area, because the application is going to be replaced, and eventually the test cases you created for it will go to waste or have to be rewritten completely.
How to Add Automation to Test Strategy
I don't see going from manual to automated testing as a switch. I see it as a morph, something every organization needs to think about in advance of every development initiative, and it's rarely done. How do you morph parts of the application from exploratory testing into automation? What typically happens is that companies aren't delineating between exploratory testing and manual checking. What starts out as exploratory testing on an app in constant change (you've got these rapid cycles; it's constantly changing) all of a sudden morphs into manual checking on a stable app, and then you've got organizational friction around changing your automation. That's something the organization really needs to think about going into an initiative, instead of one big switch: how do we move parts of the application piecemeal into automation as they become stable? Understand that at the beginning and set expectations with the organization that it's going to happen. Start small and grow when ready. The litmus test I use to decide when to switch a functional subset of an application to automation is when there has been less than 10 percent functional change over the past two sprints. That means it has some level of stability: while you're still developing other functional areas, that functional area is stable, and you can get an ROI on that subset of the application.
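Frank's 10-percent litmus test is easy to express in code. A minimal sketch, assuming you can pull a count of changed versus total test cases (or stories) for a functional area from your own tracker or VCS history; the function name, inputs, and numbers are illustrative, not from any real tool:

```python
# Hypothetical helper for the "less than 10% functional change over the
# past two sprints" litmus test described above.

def ready_to_automate(changed_cases: int, total_cases: int,
                      threshold: float = 0.10) -> bool:
    """Return True when the functional area is stable enough to automate."""
    if total_cases == 0:
        return False  # nothing to measure yet
    return changed_cases / total_cases < threshold

# Example: 4 of 60 test cases in an area changed over the last two sprints,
# a churn of roughly 6.7%, which is under the 10% threshold.
print(ready_to_automate(4, 60))  # True
```

The threshold is a dial, not a law; the point is simply to make the "is this area stable yet?" decision explicit and repeatable.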
I'd like to expand that definition of automation so that we're not thinking so much about test cases and test tools. I like to look at it as a judicious application of technology to help us do our jobs. When we expand the definition that way, there's not a switch, not even a morph or a transition. It's more like: I've got this big set of tasks that are going to help me deliver this software. Some of those tasks relate directly or indirectly to testing and software quality, and a subset of those tasks are good candidates for machines to help us with. If you look at it broadly that way, you might say, well, I could write some sort of smoke script to check that I can basically purchase with a credit card, if I'm running an e-commerce site. But a better use of my time might be to write some Python, Perl, whatever, some little sidebar script that's going to generate some of my data fill for me and make sure the system is up and running before I do any actual human interaction with the system. We may get additional benefits that way, because we're taking the work and applying the right sort of entity to it, either a human entity or a technological entity, and we're not bound to taking what humans were doing and having machines do the same thing. We're letting each of these entities play to their strengths. Computers' strength tends to be doing the same thing over and over, potentially over changing datasets. Humans are the opposite: we're bad at that tedious, crank-turning grind. What humans are good at is critical thinking, lateral thinking, noticing pattern differences that are very difficult to hand to a computer by saying "look for these patterns for me." We're getting better at that at the computer level, but we're not there yet.
Humans are just so much better at saying, "Oh, wait a minute, that was a little different this time; now I should do this other thing." The computers might notice the difference, but that's where they stop. They can stop there and deliver that information to the human, and the human, the tester in this case, can then pick up and run with it. So it doesn't have to be pass/fail, all or nothing, automated or not. It's all about helping us be more effective and more efficient at what we're doing.
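Paul's "sidebar script" idea can be sketched in a few lines of Python. This is a hypothetical illustration, not a real tool: the health-check URL and the shape of the data fill are assumptions made up for the example:

```python
# Sketch of a "sidebar" helper: check the system is up and generate some
# throwaway data fill before any human interaction begins.
import json
import random
import string
import urllib.request

HEALTH_URL = "https://example.test/health"  # hypothetical endpoint


def system_is_up(url: str = HEALTH_URL, timeout: float = 5.0) -> bool:
    """Return True if the app answers its health check."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False


def generate_test_customers(n: int) -> list:
    """Produce n throwaway customer records for data fill."""
    return [
        {
            "id": i,
            "name": "".join(random.choices(string.ascii_lowercase, k=8)),
            "credit_limit": random.randrange(100, 5000, 100),
        }
        for i in range(n)
    ]


if __name__ == "__main__":
    if system_is_up():
        print(json.dumps(generate_test_customers(3), indent=2))
    else:
        print("System not up; skipping data fill")
```

The value is exactly what Paul describes: the machine handles the repetitive setup, and the human arrives at a system that is already up and seeded.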
It's very important to know when to make the switch to automation. First, identify what kind of methodology your team is following. Is it Waterfall or Agile? Depending on the methodology, we can choose what kind of testing framework the team should use. There are so many to choose from: BDD, CDD (checklist driven development), defect driven development, metric driven development, functionality driven development, test driven development; there are so many different ways. It is very important to understand the methodology being followed in the company, and maybe the team too. Once we understand that, we also need to understand the scope of the application: whether it is a mobile application, web application, web services application, micro-services application, etc. Depending on the application, you need to understand the technology involved. Most important of all is to identify whether there is real ROI in automating. If the application is getting released every six months or once a year, it might not be worth automating. It all depends on how often you are releasing and how much time you save by running an automated test versus running manual tests.
Our definition of automation isn't necessarily an automated test; it's any process that's currently manual that can be automated to help deliver software faster, with higher quality, to your customers. Also, I get a little worried when people say "we're going to switch to automation," as if that's going to solve everything. In my experience with large enterprises, if you have really bad manual tests, you will most likely create really bad automated tests. It's usually more of a culture issue that you need to get under control before you even start trying to expand into automated testing, because the same issues you had with manual testing will be the exact same ones with automation. Automation is not going to save you if it's just another checkmark in your definition of done. You know when to make the pivot when your team already has a whole-team approach to testing, and it's not going to be just one person responsible for the automated testing. That's one definite thing. Also, if your developers are not already writing automated tests or unit tests themselves, then I wouldn't make the switch to end-to-end testing until you have that in place. There are signs of whether or not your organization is ready for automation or CI/CD. For example, if you still need to manually deploy your application before you can run your tests, then you probably need to think about your company's mindset and get it right before you put all your eggs into automation. There is also the question of a realistic timeline for ramping up automation. A lot of people have unrealistic expectations; they think that as soon as you start implementing automation, your velocity is going to go up. It's actually the opposite. In the first few sprints, velocity is going to drop dramatically as your team gets up to speed with automation and with the process.
So that's something to keep in mind. Also, when I think of automation, I think of it as a wave. At first you're starting and ramping up; there's really no wave yet. You're learning the tools, learning the process. Then people create the first tests and things are going great; everything is running and your application isn't failing. Then all of a sudden you get some new features, your tests start failing, the team starts ignoring the failing tests, and you get near the bottom of the wave, where people totally ignore the tests. They start blaming the tools and the automation as not being good enough, decide they need to write a new framework, write a new framework, and it starts all over again. So you really need to be realistic about your timeline, make sure you have the whole-team approach in place, and make sure your culture really is embracing it.
How to Achieve Automation Greatness
All code is a liability, including automation script code. It's a liability, not an asset, until it actually adds value to the organization. The risk is going through that never-ending cycle: you start to automate, it's not maintainable, so it's abandoned, and you end up with a liability that far outweighs the asset. In my view, the broad recipe for automation greatness is: always include an exploratory test strategy, automate functional sections when they are mature, allocate time for script maintenance, and leverage intelligent playback opportunistically.
Definite plus-one on treating your automation like a software development engagement, because that's what it actually is. Maintainability is a big, big part of it. But one thing we didn't touch on is the cultural aspect. This automation activity may be directly benefiting testing and testers, since that's how we're crafting it, but really it's benefiting the team and the organization; it's helping to deliver the product more appropriately. As such, it's a whole-team endeavor, and it's going to work better that way. It doesn't really matter who types the keys. It could be a developer, a tester, a BA, a DBA. Whoever has the skills to type the keys for the automation should be a candidate to do that work, especially in a highly cohesive team like a healthy scrum team: whoever is next up with bandwidth to type the keys for this automated script, great, you are now the person to do that. It shouldn't be looked at as a tester-only or QA-only job. And we have to bring in other people, especially if we're looking at an enterprise rollout of an automation initiative. Your product ownership is going to care; they're going to want to know where the time and the money are going. Your release management and infrastructure people are going to care, because you're going to be asking for access to software builds and changes, and you're going to need machines to run all this automation on. Your legal team might care too: a lot of times we like to go get the latest, greatest open source tool, but how are we using it? How are we using it across multiple sites? Are we going to run afoul of any license restrictions? Because remember, open source software is free, but that doesn't mean it's free to do anything and everything you want with it.
There are limitations, and certain companies, especially those in banking and health care, tend to be a little more sensitive around the exposure of data, and licensing comes into some of those data-exposure aspects. So involving your whole organization, or at least the appropriate people from it, is key to a successful automation endeavor.
I don't mean to keep going on about this, but it's something I deal with every day: the culture piece. It's usually the hardest. You really need everyone to understand why you're going with automation, what it's trying to achieve, and how it's going to help your customer, and make sure it's not just another check mark in your team's definition of done, with people checking it off just to get stories closed. You really need to have that in place to achieve automation greatness. Also, if your application is not built to be automatable, then you're not going to be able to automate it. Your developers and your architecture really need to build with automation in mind: have hooks in there so the automation tool can interact with the application. Enabling faster tests, maybe with a micro-services approach, really helps make your automation better. I know teams that have rebuilt their software to use micro-services so that it is more automatable. Most people are familiar with the testing pyramid: whatever tests are fastest are the ones you want to lean on, and usually those are unit tests. When you take more of a micro-service approach, you tend to get tests that are quicker, faster, and more reliable, and tests at the unit level tend to be the quickest. Those are the ones you should focus on first; if you can't test the functionality at the unit level, you move to the API level. If you can't get the functionality tested there, then you move to an automated UI test, but it doesn't necessarily need to be a full-blown end-to-end test. It should be atomic, really testing one thing, so when it fails you know why it failed. If you do that, you'll have better luck overall with your automation approach. Also, keep in mind that any type of automation is code, and code requires maintenance.
A lot of people forget that when starting on the automation journey, so you need to bake it into your estimates. Yes, automation might be saving you time, but you need to treat it just like code: if you change an element ID in your application, you need to touch the automation code. It's the same type of effort, the same type of rigor. If you follow the same rigor with your automation that you do with your code, get your team on board with this whole-team testing and automation culture, and develop the application to be automatable to begin with, I think you'll be in a good place.
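The "atomic test" idea from the pyramid discussion above can be shown with a toy example. The discount function and its checks are hypothetical, made up for illustration; the point is that each test asserts exactly one behavior, at the lowest level that can exercise it, so a failure is self-explanatory without any browser in the loop:

```python
# Hypothetical business rule: a flat 10% discount for the SAVE10 coupon.
def discounted_total(subtotal, coupon):
    """Return the total after applying the (hypothetical) SAVE10 coupon."""
    if coupon == "SAVE10":
        return round(subtotal * 0.9, 2)
    return subtotal


# Atomic, unit-level checks in the pytest style: one behavior per test,
# so a red test tells you immediately which rule broke.
def test_coupon_applies_discount():
    assert discounted_total(100.0, "SAVE10") == 90.0


def test_no_coupon_charges_full_price():
    assert discounted_total(100.0, None) == 100.0
```

Only if this logic could not be reached at the unit or API level would you climb the pyramid to an automated UI test, and even then it should stay just as atomic.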
The most important thing about automation is providing fast feedback and improving time to market. Identifying the application and the tool set is certainly critical; if we make a bad choice of tool set, we're going to have a lot of maintenance issues. The automation framework should be developed by a separate team from the one writing the automated tests, so that feedback to the framework team is much quicker and resolution is also much quicker. That is a very critical factor contributing to a faster path to automation. Like I said, infrastructure for automation is also very important, so deciding on the application and design, and deciding what kind of infrastructure is needed to execute, whether it's outsourced or in-house, is very important.
Automation Using AI and Machine Learning
I always see artificial intelligence as the great equalizer for testing. Right now there is a lot of hype, and undoubtedly confusion, around machine learning and artificial intelligence in the testing space. It's important to understand what problem you're trying to solve and how the companies you're going to partner with in this space are positioned for the long term, because a lot of the products are a bit all-encompassing: you're committing for a really long time. There are very few open standards in this space; it's still very early days. So finding companies that support open standards and don't tie you down to the framework they're using is imperative. A lot is going to happen, and if you're going to invest for two years, you can do a prototype or proof of concept to make sure a product does what it says it does, but the market changes, products change, and other emerging companies will show up and displace those companies. So it's important to support open standards in this process. Ask: do they support open standards, and is the functionality they provide isolated, or does it require an entire platform replacement? If it's that big, then yes, you need to spend a lot of time assessing the tool.
A lot of times I hear people say, "How's Google doing that? How's Amazon doing that?" It is really essential that we understand how these companies are doing things, because we can get good ideas from them, but we should not lift those ideas and implement them directly in our organization, because most of us don't work at Facebook or Uber or Amazon. We're working at companies that don't have the resources at their disposal to do some of the things the larger, more experimental companies are doing. We just don't have that latitude, that luxury, that bankroll. So we have to do things that are appropriate. That's the biggest thing I counsel when I go to companies and they say, "Hey, we want to do things like company X," or "I want to implement process Y." That's great, but is it appropriate for you? Perhaps we can't do that, or can't do it that way, with you. Let's do something that's appropriate. What are your business goals? Let's work toward meeting those. So I would take competing with those companies out of it and ask: how do we best use this emerging technology, AI and ML? AI and ML have, to some extent, been around for a long, long time, but they're becoming more accessible due to computing power, and marketing, and other aspects. How do we best use them? The same way we approach automation in general: I have a problem I'm trying to solve, or a goal I'm trying to achieve, and a big stack of technology possibilities I can apply to it. Some of them are going to have a machine learning or AI capability or aspect. Are they good candidates for us? Can they do what they say they can do for us? Now, I don't believe companies are out there lying, saying their product can do X, Y, and Z when it can't. I fully believe they can, in the right situations.
It's just that not every tool fits every situation equally. So we have to take these tools, bring them in, and try them out on our software, because if a tool works on some other software, or some demo site or test site, that's a great proof of concept, but it doesn't show it's going to work in my environment, with my software, with my people; it may not fit my context. Some tools will, some won't. So that's how I think we should approach the notion of AI and ML, at least in the near to mid-term: as another tool in the arsenal that's going to fit some of us in some situations.
Like anything, with any type of tool or technique, you really should do a proof of concept with the whole team and run it through your whole process, from requirements all the way to CI/CD; get the feedback and feed that learning back in, just to make sure that everyone involved understands what happened and you get real feedback on whether or not it's going to work for you. That being said, I was not a believer in AI/ML-based technology until about 2014, when I started using a tool called Applitools. The model Applitools uses is how I think of machine learning now.
When you look at the AI-based solutions, they really are supposed to take the drudgery out of testing: the actual finding of things at scale that, as a tester, you would never be able to do yourself, like going through, say, a million service logs. But a machine can. You will never be able to visually validate all the different elements on your screen; even if you use Selenium, that's not visual validation, it's just checking how the HTML should be rendered, not even how it actually is rendered. For all these types of things, a machine is probably better than you are. But what's great about the technologies coming out now is that it's really a man-and-machine approach: the machine flags things of concern, or things that are abnormal based on previous history or previous runs, and then a human being goes in, looks, and decides: "Is this really an issue? Oh no, it's not a real issue, it's a new feature; let's mark this as the right one to use going forward." You teach the AI/machine-learning algorithm as you go along. I think those types of things are really awesome.
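The man-and-machine flow described above can be sketched with a toy pixel diff. This is not how Applitools works internally; it's a minimal illustration, with "screenshots" reduced to small pixel grids, of why comparing rendered output catches what a DOM-only check misses, and of where the human steps back in:

```python
# Toy visual diff: the machine flags where two renders differ; a human
# then decides whether each flagged spot is a bug or a new feature.

def diff_regions(baseline, current):
    """Return (x, y) coordinates where the two 'screenshots' differ."""
    return [
        (x, y)
        for y, (row_a, row_b) in enumerate(zip(baseline, current))
        for x, (px_a, px_b) in enumerate(zip(row_a, row_b))
        if px_a != px_b
    ]

baseline = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
current  = [[0, 0, 0], [0, 1, 1], [0, 0, 0]]  # one pixel changed

flagged = diff_regions(baseline, current)
print(flagged)  # [(2, 1)] -- the human blesses it or files a bug
```

A real tool adds tolerances, region grouping, and the feedback loop where the human's "this is fine, use it as the new baseline" decision trains the comparison; the division of labor, though, is exactly the one described above.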