Since I’m getting ready to put on my first online conference dedicated to automation, I thought it would be fun to hear about Google’s Test Automation Conference (GTAC) from one of its participants – Greg Paskal.
Greg is no stranger to TestTalks. This is his 4th appearance, and he brings his awesome GTAC insight (and some lessons he’s learned) to another can’t-miss episode.
About Greg Paskal
Greg is currently the Director of Quality Assurance – Automation at Ramsey Solutions, A Dave Ramsey Company. He is also the author of multiple white papers on test automation and testing in general.
Greg recently published his first book, “Test Automation in the Real World”, sharing insights from over 30 years of automated testing development. He has spoken at many conferences including StarEast, StarWest, QAI and QA TrailBlazers.
Greg is known in the testing industry as being passionate about mentoring and training others in software testing and other technical areas. Greg hails from California but currently lives in Nashville, TN with his wife Cindy and daughter Hannah.
Quotes & Insights from this Test Talk
- This is a Google conference, held once a year, and it's an unusual one. At most conferences you'd attend, you request funds from your company, usually, and they send you off. It's not a big deal to get in. This conference is much different. You put in a request and it goes into almost a selection process, and out of that, a small group of folks get asked to come. It's remarkable. I feel very honored that I got to go this year.
- This conference had, I think, 300 people who were asked to come. People had issues with visas and all sorts of things, so I think somewhere near, maybe, 250 actually made it to the conference there in Sunnyvale, California. I also found out that GTAC isn't always in California at Google's headquarters. It can be held at other locations. For instance, next year it's in London. Wouldn't that be remarkable to get to go to?
- The theme wasn't so much a technical theme as it was a knowledge theme. I'm taking a quick look at my notes here. They were very open about wanting a venue where knowledge could be shared across a very broad section of folks, and you continued to hear that theme throughout the conference. A talk might cover a very niche topic, but one with some interest that … if people could get their arms around it, might apply to everybody there. One that stood out to me was on robotic telepresence, I think. It's something most of us aren't going to deal with day in and day out. Yet they showed how somebody had to build an automated QA test process for these essentially moving platforms with a screen with a person's face on it. It wasn't like anything I've ever seen before. It was quite remarkable.
- The flaky test concept is the reality that, as automation engineers, we encounter flaky tests. We see the signatures of them, where one day they're acting up and the next day they're … acting well and giving us accurate results. The reason this really mattered to Google, and why they asked this gentleman to come in, is … I hope you're sitting down when I give you this number. Google has 3.5 million automated tests. It's a remarkable number. I don't think I have even 1% of that automated. It's just staggering. Imagine if you had a test suite that size and even 1% of those tests were flaky, providing inaccurate pass or fail results. In my case, when I have a test that fails, I generally go back and run it manually to make sure there's a real bug there, a real defect. If you had 1% of 3.5 million, you're going to have some people pretty busy for a long time.
- Some of these metrics are unusual, and maybe harder to gather for those of us who are a one-man show, or where there are just two or three of us in our companies trying to do this. My takeaway, and what I would encourage listeners to do, is to find more ways to gather metrics on your automated runs and store them in some format. We're talking about using our data lake, which is our data warehouse methodology here at Ramsey Solutions, to do that, so we can do analysis on our runs and begin to capture more of this type of information. That helps us make our test suite much smarter and much more effective in the long run.
- My best advice from GTAC is to begin to find new ways to gather metrics on your automated test runs: everything from pass and fail results to execution metrics and time of run. If you can get your hands on things like processor usage and memory usage, those would be fantastic. Begin to capture that data and find creative ways to analyze it.
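Greg's advice about capturing run metrics and Google's flaky-test problem fit together nicely: once you log pass/fail results per run, you can spot flaky tests as the ones whose history contains both outcomes. Below is a minimal Python sketch of that idea. The function names (`record_result`, `flaky_tests`) and the CSV layout are illustrative assumptions, not anything described in the episode:

```python
import csv
import time
from collections import defaultdict

# Illustrative sketch: log one row per test execution, then scan the
# accumulated history for tests that have both passed and failed.
# All names and the row layout here are assumptions for demonstration.

def record_result(writer, test_name, passed, duration_s):
    """Append one result row: test name, outcome, duration, timestamp."""
    writer.writerow([
        test_name,
        "pass" if passed else "fail",
        f"{duration_s:.3f}",
        int(time.time()),
    ])

def flaky_tests(rows):
    """Flag tests whose recorded history contains both passes and failures."""
    outcomes = defaultdict(set)
    for test_name, outcome, *_ in rows:
        outcomes[test_name].add(outcome)
    return sorted(name for name, seen in outcomes.items() if len(seen) > 1)

if __name__ == "__main__":
    # Example: write a few results to a log file, then analyze them.
    with open("run_log.csv", "a", newline="") as f:
        record_result(csv.writer(f), "login_test", True, 2.731)

    history = [
        ("login_test", "pass", "2.731", 1),
        ("login_test", "fail", "3.002", 2),
        ("search_test", "pass", "0.512", 3),
    ]
    print(flaky_tests(history))  # only login_test has mixed outcomes
```

Even a simple log like this, grown over many runs, supports the kind of analysis Greg describes: trending durations, failure rates, and a shortlist of tests to re-run manually before filing a defect.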
- GTAC 2016
- Automating Telepresence Robot Driving 2
- Flaky Tests in Continuous Integration
- Helpful Tips When Implementing Test Automation and More – Ep8 with Greg
- Removing “Test” from Test Automation – Ep19 with Greg
- Real World Test Automation Survey Results with Greg Paskal – Ep112 with Greg
Connect with Greg & Google Test Automation Conference
May I Ask You For a Favor?
Thanks again for listening to the show. If it has helped you in any way, shape or form, please share it using the social media buttons you see on the page.
Additionally, reviews for the podcast on iTunes are extremely helpful and greatly appreciated! They do matter in the rankings of the show and I read each and every one of them.
Test Talks is sponsored by the fantastic folks at Sauce Labs. Try it for free today!