
EPISODE 98: Don’t Panic – The Hitchhiker’s Guide to Test Automation [PODCAST]

By Test Guild

Don't Panic – The Hitchhiker’s Guide to Test Automation

Welcome to Episode 98 of TestTalks. In this episode, we'll discuss The Hitchhiker’s Guide to Test Automation with author Mark Fink. Discover how not to panic when it comes to your test automation efforts.



When our automated tests start randomly failing, it’s hard not to panic. It often seems to happen at the exact wrong time — like right before a big release. No worries. In this episode you'll discover a better way to handle your test automation process with Mark Fink, author of The Hitchhiker’s Guide to Test Automation, as he shares some of the best ways to set up your automation frameworks and care for your test scripts.

Listen to the Audio

In this episode, you'll discover:

  • How automation fits into the software development process
  • The key steps Mark recommends for teams who are getting started with automation
  • Automation as an “enabler”
  • Tips to improve your test automation efforts
  • What GoGrinder is, and why you should use it
  • Much, much more!

“Automation is an enabler for everybody; everyone on the team should be involved with #TestAutomation.” ~ @markfink www.testtalks.com/98

Join the Conversation

My favorite part of doing these podcasts is participating in the conversations they provoke. Each week, I pull out one question that I like to get your thoughts on.

This week, it is this:

Question: How does test automation fit into your test development lifecycle? Share your answer in the comments below.

Want to Test Talk?

If you have a question, comment, thought, or concern, you can share it by clicking here. I'd love to hear from you.

How to Get Promoted on the Show and Increase Your Karma

Subscribe to the show in iTunes and give us a rating and review. Make sure you put your real name and website in the text of the review itself. We will definitely mention you on this show.

We are also on Stitcher.com so if you prefer Stitcher, please subscribe there.

Read the Full Transcript

Joe: Hey, Mark. Welcome to TestTalks.

 

Mark: Yeah. Hi, Joe. Thanks for inviting me.

 

Joe: Awesome. It's great to have you on the show. Today, I'd like to talk about your book, The Hitchhiker's Guide to Test Automation. Before we get into it, could you tell us a little more about yourself?

 

Mark: Yeah, sure. After university I started as a developer in enterprise software, to be precise on a custom-built auto management system. We worked toward the release day, and one or two days before the release we tried to organize some ad hoc testing. This was not great. Many defects showed up on the customer side, the customers were kind of unhappy, and this went on for a while. It blew my mind when I learned that there are professionals who focus on quality topics, and ever since I've been fascinated by everything related to software testing and have kept trying to improve my skills.

 

In 2004, I founded my own company, FinkLabs, so I could focus on these technical testing topics. I've worked basically all over Europe, in Switzerland, the Netherlands, and Germany, mostly helping teams set up and improve test automation, performance testing, and also performance management.

 

Joe: Is that one of the reasons why you wrote the Hitchhiker's Guide to Test Automation? Is it based on what you've been seeing in the field or your different clients?

 

Mark: Yeah, exactly. This is the main reason. My intention was to ease the adoption of technical testing in a corporate environment. My book is structured more like a practical, tutorial-type book, not the scientific textbook character we've all [inaudible 00:01:46]. I basically started in 2012. We had the first PyCon conference in Germany, and they asked me to give a half-day tutorial on test automation. At the time, the PyCon community in Germany was mainly scientists. They used version control and basically modern stuff, but they were not very much into computer science, and also not test automation.

 

This was an interesting topic. As part of the preparation for the tutorial, I wrote the scripts, and that was already like 100 pages. From then on, I decided to finally go forward and write the book. I'd had the idea for a long time already and had been collecting ideas and material, but it never went anywhere. For about three years, on and off, I worked on the book, and in 2013 I took three or four months off and finally released it.

 

Joe: What I really like about the book is, like you said, it's a hands-on guide. You walk someone step by step through an actual application and how you would test it. What I really liked, though, is that you cover a lot of things in a very short space; the first 41 pages, I think, set the foundation for test automation, and that's something a lot of people are unaware of. I just want to explore those topics a little more and get a little more information about them.

 

One thing you covered in the first few chapters that was really cool: a lot of people today, as they move toward agile, have questions about how automation fits into the software development process. What are your thoughts on that?

 

Mark: Yeah. This is really a main part of the book. Sometimes when people talk about test automation, they jump right to the tools. I added this chapter to make it very clear from the beginning that test automation is also about people. It's very important to deal with the fact that not everything can be automated, that people are involved, and that the software development process we're talking about is very, very important. We are talking about a software process, and discipline in that process is the most important factor. Without it, test automation doesn't really make much sense.

 

From how I see it, test automation is an enabler for the software development process. Sometimes I have seen dysfunctional teams that don't work together as a team, more like a group of experts. Test automation doesn't make that much sense in that context. To come back to the question of how to really bring it into the process, I think a good practice is to start the Sprint with [inaudible 00:04:54].

 

For that, if the process is a little more sophisticated already, there's usually something like a definition of ready. All the stories that are groomed have already gone to analysts who put together the requirements and maybe some basic ideas on how to implement the story. In the grooming, the team comes together and discusses the story and how to implement it. Maybe there are a few ideas on how to handle the situation, and this also includes testing, which is important: what usually gets tested and how to tackle the testing for the user story. This is already a good start. If you have a two- or three-week Sprint, you can also prepare things so the testing can be done at the end of the Sprint.

 

Joe: Great. I think that's a great point. Sometimes people lack visibility, or it's just someone else's job to do the test automation. If you make it part of your definition of ready and definition of done, then in order to deliver, the team has to have some sort of automation in place, or at least have thought about testing upfront. I guess the next question I usually get from people is that they need examples. Do you have any examples of a checklist that you've seen most companies use for the definition of ready or definition of done? Or is that really specific to each team and what they're trying to accomplish?

 

Mark: Yeah. In my experience, it's very specific to the application and the environment, and also to the level of sophistication of the software development process. I was talking earlier about dysfunctional teams; they've been doing the wrong things, sometimes for many years, and they no longer see that it's wrong. That's just how they do it. For them, a very detailed laundry list of what should be included in a Sprint can be very helpful, because you have a checklist and can see everything. For other, more advanced teams, it's less, because everybody is already doing things correctly. Maybe "done is done" is enough; yeah, everything is included, also the testing.

 

Joe: As you go to all these companies introducing test automation, say you arrive at a company with a team that's never done test automation. What steps do you recommend for teams getting started with automation?

 

Mark: This is an excellent question. Often, when teams want to improve in an area, they just look for tools. If they want to do some monitoring, they install Nagios; for continuous integration, they install Jenkins; for static analysis, they install Sonar, et cetera. The tools are usually the easy part. Many times the initiative stops with the tool installation, but from my perspective, it takes much more than that. The tool needs to be configured, and it also needs to be maintained, because the situation changes over time: a different team constellation, more developers, fewer developers. That always requires some adjustment to the tooling.

 

The tool basically needs a maintainer, and that also takes a little time, which is important. To come back to your question, I like to structure test automation in four phases. If there's no test automation yet, which is the scenario here, I like to start with the infrastructure: installing and configuring the tools. I think it's very important to have at least basic integration into your development IDE. Many teams are very diverse; some people like Emacs or vi next to IntelliJ, so very diverse IDEs. At the very least, you should have a standard way to develop the tests for your project, and you should also start the documentation.

 

This infrastructure phase, in my opinion, should also include a showcase, so you can demonstrate that all the important facets of your test automation work. One time, I was working on a project where my task was to replace the existing test automation infrastructure because they were unhappy with it. The experts, the analysts, picked the really hard test cases, the ones they could never run on the old infrastructure.

 

This showcase or prototype should really include the difficult tests, too. The next phase after infrastructure should be training. I think it's important to show the relevant people how everything works, and it's also very helpful because I get a ton of feedback about problems: people show me where it doesn't work for them. This can be fixed and adapted so everybody will be happy with the test automation tooling and setup.

 

Then, I think the next step would be to start with early adopters: basically, friendly users who are technically capable of doing the automation without perfect documentation, without everything working perfectly. They should also be positive about test automation; they should at least have a sense that automation really helps them get more velocity and also better quality.

 

After that, you can fix any issues that came up during that phase, and then I would roll out test automation to the whole team. For me, this also marks a shift: in the old days, there was a test factory or a test phase at the end of a milestone. Now, for me, test automation is an enabler for everybody, for every developer, and everybody should be involved with test automation. The last phase covers everybody.

 

Joe: I love that concept of automation as an enabler. A lot of people, a lot of teams I've seen, treat it almost as a roadblock getting in the way of creating features. Actually, if it's done correctly, it should be helping developers, not handicapping them. It should give them confidence and act almost like a safety net when they write code, something they should actually enjoy: I have these automation tests that act like a safety net and give me feedback on the quality of my code when I check it in.

 

Mark: Yeah, I couldn't agree more. It's exactly what I see. Setting up test automation can be really difficult. If it's not structured in a way the project can deal with, it can really be, as you said, a roadblock, something that stands in the way of getting new features out. And yeah, sometimes features might be more important, and maybe it's better not to start the test automation and instead get the next release out the door. I don't know.

 

Joe: You talked about structure. Obviously at some point the team needs some sort of automation framework or scaffolding with common methods and object libraries split out so you get some reuse; you're not always reinventing the wheel when you're doing test automation. One of the topics in your book, which is different from what I've heard from most people, is that you're not necessarily a fan of Page Objects. It seems like you prefer what you call patterns and abstraction layers. Could you tell us a little more about Page Objects versus the patterns and abstraction layers approach that you recommend?

 

Mark: Yeah, sure. It's not exactly my favorite topic, because things can get very controversial, like Emacs versus vi or something like that. I could also give you different answers depending on the context; I think the scope parameter is very important. To do integrated tests, it's very important that there's a solid foundation of unit tests and component tests. Depending on which layer of the pyramid we're looking at, the answer to Page Objects versus patterns or abstraction layers could be different.

 

From a general perspective, both are valid approaches; they just differ. One is more bottom-up, oriented toward the technical artifacts, and the other is top-down, oriented toward requirements and expressing test intention. You had Gojko Adzic on the show; he came up with what he calls the activity testing pattern. This is exactly the top-down way of doing things, and also my preferred way. I've worked both ways, and both work. Page Objects tend to orient themselves more toward the technical solution. I recently did an analysis for a company that used Page Objects a lot.

 

Yeah. You see the technical artifacts in the tests, and in end-to-end integration tests you see a lot of technical terminology, technical artifacts. It's not very easy to see the intention of the test or the functional requirement. If you have a lot of tests, this can get in the way. One aspect is who maintains the tests. If mainly developers are involved in test maintenance, they usually prefer Page Objects, and that might be the right choice in that situation.
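To make the contrast concrete, here is a minimal Python/Selenium sketch (not from Mark's book; the URL, element IDs, and helper names are hypothetical). The first test speaks in technical artifacts through a Page Object; the second hides them behind an intention-revealing activity function, so the test reads like the requirement:

```python
# Hypothetical sketch: the URL, element IDs, and names are made up.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    d = webdriver.Firefox()  # assumes a local geckodriver install
    yield d
    d.quit()

# Bottom-up style: a Page Object exposing technical artifacts.
class LoginPage:
    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://example.com/login")

    def enter_username(self, name):
        self.driver.find_element(By.ID, "username").send_keys(name)

    def enter_password(self, password):
        self.driver.find_element(By.ID, "password").send_keys(password)

    def submit(self):
        self.driver.find_element(By.ID, "login-button").click()

def test_login_with_page_object(driver):
    # The test is written in terms of fields and buttons.
    page = LoginPage(driver)
    page.open()
    page.enter_username("alice")
    page.enter_password("secret")
    page.submit()

# Top-down style: an activity function expressing the test intention.
def sign_in_as(driver, user, password):
    """One business-level activity; locator details live only here."""
    page = LoginPage(driver)
    page.open()
    page.enter_username(user)
    page.enter_password(password)
    page.submit()

def test_registered_user_can_sign_in(driver):
    # The test is written in terms of the functional requirement.
    sign_in_as(driver, "alice", "secret")
    assert "Dashboard" in driver.title
```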

 

Joe: All right, Mark. Once a team has automation in place and a framework, a lot of companies I've been speaking with are moving toward continuous integration. Do you have any tips on how to be successful with automation within CI systems?

 

Mark: I think the most important factor is that you get the scope right. You cannot test everything, yet usually that is the requirement: the business people require from the developers, from the project, that everything is tested. But it's a question of resources and time, so you have to find an intelligent way to structure your tests. I was already referring to this pyramid of scope. It's from Whittaker's book; I think it's a great book in many ways. He writes about how they approach testing at Google, and he has this test pyramid.

 

There are many versions of the test pyramid. He structured it into large, medium, and small scope. The large tests have a huge scope; they are very complex and take a lot of effort to maintain. If something goes wrong, it's very difficult to tell what caused the problem; you need to spend a lot of time analyzing the results of large-scope tests. You should have only a limited number of these large-scope tests and always try to drive your testing efforts toward the lower levels of the pyramid. There you have medium scope: tests that target components and focus on the functionality of the components and their interfaces.

 

At the base of your testing pyramid there is, of course, unit testing, where you have a very specific scope. If something goes wrong and you see an error pop up in your continuous integration monitoring, or on your wall display or whatever, you can easily identify the root cause of the problem, and it can be fixed quickly. The pyramid helps you deal with the complexity of the application.
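One common way to act on the pyramid in CI is to tag tests by scope and schedule them differently. Here's a minimal pytest sketch (my illustration, not from the book; the marker names and the scheduling split are assumptions):

```python
# Hypothetical sketch: marker names and the scheduling split are assumptions.
# Register the markers in pytest.ini, e.g.:
#   [pytest]
#   markers =
#       small: unit scope, runs on every commit
#       large: end-to-end scope, few tests, runs nightly
import pytest

def apply_discount(price: float, percent: float) -> float:
    """Tiny unit under test, defined inline to keep the sketch self-contained."""
    return price * (1 - percent / 100)

@pytest.mark.small
def test_discount_calculation():
    # Small scope: if this fails, the root cause is obvious.
    assert apply_discount(100.0, percent=10) == 90.0

@pytest.mark.large
def test_whole_checkout_flow():
    # Large scope: keep only a few of these; failures need analysis.
    pytest.skip("placeholder for an expensive end-to-end test")
```

The commit stage of the pipeline would then run `pytest -m small` for fast feedback, while a nightly job runs `pytest -m large`.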

 

For me, it's the most important concept in test automation. If somebody takes away only one thing from this podcast, this would be it.

 

Joe: Awesome. I definitely agree. I think a lot of people focus on that pyramid and lose sight of, like you said, faster tests, more focused tests. Another thought that just popped into my head: as teams are running in CI, I've seen teams that have odd requirements, and I'm curious how you'd handle them. For example, I know a team whose product has multiple packages that can be deployed with it. In order to test a particular package, they need to do something on the backend, like turn things on and off.

 

If they're running in CI, they can't just run a test suite straight through, because in the middle of the suite there may be a test that requires a certain package to be turned on. Because it's not on, they have to stop the tests, turn the servers off and on, and then run the tests again. How have you seen teams handle those types of situations, if you've seen that at all?

 

Mark: Of course. It's very common that testing involves many, many manual steps. Usually, when somebody calls me in for a project, the first question I ask is whether they already have one-button deployment. Sometimes that's in the works, or they've thought about it but haven't even started. Without one-button deployment, test automation doesn't make that much sense, because then it's only partially automated and you cannot run it during the night, since you need somebody pressing the buttons. I think, yeah, test automation can still be beneficial in this situation.

 

But you never get the fast feedback. Usually, the most important benefit of test automation is that somebody checks in some code and gets immediate feedback; immediate meaning within 15 minutes, or within maybe an hour it still makes a lot of sense. If you have manual steps, you don't get this kind of feedback loop, and the benefits get smaller and smaller. There are different approaches to your question of how to handle having to change something in the backend.

 

One is to automate it; that would probably be the best thing. Say this test runs in a test environment; sometimes there's what they call a forbidden API or something, basically a testing feature that lets you change something in the backend, so you can flip the switch automatically. Also, if you have medium-scope tests, you can mock out the backend altogether, so you don't have this dependency anymore. If it's not relevant for your test, you shouldn't have the dependency; there are many ways to deal with that.
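Flipping such a switch automatically usually ends up as test setup code. A hedged sketch in Python (the admin endpoint, feature name, and URLs are invented for illustration; the "forbidden API" here stands for whatever backdoor the test environment exposes):

```python
# Hypothetical sketch: the admin endpoint, feature name, and URLs are made up.
import pytest
import requests

ADMIN_API = "https://test-env.example.com/internal/admin"

@pytest.fixture
def reporting_package_enabled():
    """Flip the backend switch automatically instead of by hand."""
    requests.post(f"{ADMIN_API}/features/reporting",
                  json={"enabled": True}, timeout=10)
    yield
    # Restore the previous state so later tests see a clean environment.
    requests.post(f"{ADMIN_API}/features/reporting",
                  json={"enabled": False}, timeout=10)

def test_report_download(reporting_package_enabled):
    resp = requests.get("https://test-env.example.com/reports/latest",
                        timeout=10)
    assert resp.status_code == 200
```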

 

Joe: Great. You actually have a section that talks about mocking. At a high level, what is mocking? I think a lot of testers aren't that familiar with mocking, or with how it can be used in test automation.

 

Mark: Mocking is also a very controversial topic, but it helps you deal with complexity. Many customers I've worked with have system stacks with a couple of hundred different applications, and many applications have dependencies on each other. This dependency network is very complex, and if there's an API change, it has a huge impact on effort and also on the testing. For example, most banks I have seen have more than 400 applications. This is a very common problem, and mocking is one way to deal with these dependencies during testing.

 

For example, you're building something new and you cannot test against the new components because they aren't built yet. You test against a mock, and then you can proceed with your own code. That's one scenario. Also, with a mock you can control what the result of your request or your API call is. This is usually very important for determining the test outcome. If you're not in control of the API response, basically anything can happen; maybe there's a bug in a neighboring application, and you have to deal with that in your test automation, which makes things very flaky and very difficult to deal with.
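Here's what that control looks like with Python's standard unittest.mock (a minimal sketch; the service and function names are hypothetical, not from Mark's book):

```python
# Hypothetical sketch: the service and function names are made up.
from unittest.mock import patch

def fetch_exchange_rate(currency: str) -> float:
    """Imagine this calls a neighboring application over HTTP."""
    raise NotImplementedError("real service not reachable from the test env")

def price_in(currency: str, amount_eur: float) -> float:
    return amount_eur * fetch_exchange_rate(currency)

def test_price_conversion_with_controlled_response():
    # The mock pins the API response, so the outcome is deterministic
    # even if the neighboring application is broken or not built yet.
    with patch(f"{__name__}.fetch_exchange_rate", return_value=1.10):
        assert price_in("USD", 100.0) == 110.0
```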

 

Yeah, and if you have that problem on a larger scale, it can very quickly lead to test automation failure. I have seen many times that a project puts a lot of effort into test automation, but once the test runs report failed tests, it's usually difficult to see which failures are new and which are already known. People stop caring about the test results, and then you've put in a lot of effort without getting any benefit from the test automation. It's a very bad place to be, but often a very permanent one. I think these API dependencies are one of the main things that cause this problem.

 

Joe: That's great advice. I see it all the time: people in this position have tests running in CI, but the tests are flaky, like you say, and teams don't trust them and just start ignoring them. Then you've put all this effort in for nothing, because no one is getting the benefits of it. I know you're a developer at heart, and it sounds like you still do development; after all, test automation is development. I know you also create tools, and I've heard about one called GoGrinder. Could you tell us a little about what GoGrinder is?

 

Mark: Yeah, sure. This is something very new. It's an open source tool to simulate load on your application and to measure response time and throughput. I've worked in technical testing, with performance testing as a main part of that, for more than 12 years now, and I've used all kinds of proprietary and open source tools. Many things have happened during the last couple of years that really changed the environment. I'm thinking about DevOps, a lot of tools for visualization and monitoring, and also provisioning tools that make it easier to do performance testing and load testing of an application.

 

Another thing is that existing open source solutions sometimes lack test coverage themselves, which makes it a little difficult to extend or improve those tools. Also, new technologies have come up in the last few years. So I decided to try to solve this problem myself. So far it looks very, very promising. I've already used the tool on two projects, I like it myself, and I use it in my own projects; they are very interesting.
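To show what "simulate load, measure response time and throughput" boils down to, here is a generic sketch in plain Python. This is not GoGrinder's actual API, just the underlying idea; the target URL, request count, and concurrency are assumptions:

```python
# Generic illustration of what a load generator measures; this is NOT
# GoGrinder's API, just the underlying idea in plain Python.
# The target URL, request count, and concurrency are assumptions.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://test-env.example.com/health"
REQUESTS = 100
CONCURRENCY = 10

def timed_call(_):
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start  # response time of one request

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(timed_call, range(REQUESTS)))
elapsed = time.perf_counter() - start

print(f"throughput:        {REQUESTS / elapsed:.1f} req/s")
print(f"avg response time: {sum(latencies) / len(latencies) * 1000:.1f} ms")
```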

 

Joe: How do you see performance testing fitting into an agile development lifecycle? Is it done at the end like it's always been, or how do you incorporate performance testing into a Sprint team?

 

Mark: Yeah. This again depends very much on the project environment: the scope of the application, the number of releases, and also the risk you're facing. In my opinion, or at least what I usually see, performance testing takes some effort to execute. You have to set up a test environment, and performance testing that really makes sense cannot be executed against an empty database. The database needs to be filled first with synthetic data, or with data that's more realistic than an empty database, and that also takes effort. Sometimes the application is a little slow, so loading a couple of million records takes a couple of days. So performance test preparation takes at least some time and some effort.

 

Usually this is not done with every commit, and I don't think it's possible with every commit. What I see is that usually the major milestones are performance tested in an end-to-end fashion, against performance requirements. It's very important for performance testing to really have user requirements, or performance requirements, or sometimes service level agreements, SLAs, so you have something you can test against. If not, performance is really difficult to argue about.

 

Maybe loading some data takes 24 hours; maybe it takes 48. Whether that's still okay or not is difficult to determine, and without requirements it's almost impossible. Performance testing also needs to be done after functional defects have been fixed; I think there's a natural order to doing things, and driving out application defects through performance tests takes a very high effort, because performance testing itself takes a lot of effort.

 

Also, the long feedback loop can be a problem. If you only test major milestones, you don't know after three months or so what really caused a performance hit, and it's difficult to judge; you need a lot of analysis to find the root cause. Again, depending on the situation, it might be good to also have a scope pyramid for performance testing. To give you an example, [inaudible 00:27:34] adopted continuous performance very early, at least that's what I like to call it. They identified smaller, performance-critical parts of the system, and they monitor how the performance of these parts develops over time.

 

It's sometimes difficult to have real performance requirements for the smaller parts, but at least you can compare their performance against the last release, so you see how the performance of your application develops over time. In many projects I see neither end-to-end performance tests nor continuous performance. Those are probably the two things to think about first.
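A compare-against-the-last-release check can be as simple as a timed test with a stored baseline. A sketch of that "continuous performance" idea (my illustration; the operation, baseline file, and 20% tolerance are assumptions):

```python
# Hypothetical sketch: the operation, baseline file, and tolerance are assumptions.
import json
import time

def critical_operation():
    """Stand-in for a performance-critical part of the system."""
    return sum(i * i for i in range(200_000))

def measure(repeats: int = 5) -> float:
    timings = []
    for _ in range(repeats):
        start = time.perf_counter()
        critical_operation()
        timings.append(time.perf_counter() - start)
    return min(timings)  # best-of-N reduces measurement noise

def test_no_regression_against_last_release():
    current = measure()
    # Baseline recorded at the last release and committed with the tests.
    with open("perf_baseline.json") as f:
        baseline = json.load(f)["critical_operation"]
    assert current <= baseline * 1.20, (
        f"critical_operation took {current:.3f}s, "
        f"baseline was {baseline:.3f}s (limit +20%)"
    )
```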

 

Joe: Okay, Mark. Before we go, is there one piece of actionable advice you can give someone to improve their test automation efforts? And let us know the best way to find or contact you.

 

Mark: One thing I learned recently: there's this book Toyota Kata by Mike Rother. This is really important. I don't remember exactly, but I think he studied Toyota for like 20 years, and he came up with a framework for how to approach things in an enterprise environment. He says it's very important to know your current situation and to know your target situation. Then you can identify the gap and make a plan for how to close it. This is very helpful for improving testing efforts, in my opinion. Also, you should know the risks of your software, your company, and your team, so you can prioritize your tasks accordingly. For my contact data: I write a blog at FinkLabs.org. It's a bit diverse, some microcontroller stuff, software engineering, but also testing-related stuff, and there's a page with all my other social media and contact data. This is Mark Fink.

 

 

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
