
Schools of Testing [PODCAST]

By Test Guild

Welcome to Episode 78 of the TestTalks podcast. In this episode, we discuss what skills testers need in order to be successful in the coming years with Andy Tinkham, host of the testing podcast Testing Bias. Discover the shift in testing we all need to be aware of and how to adapt to the ever-changing world of software development and testing.


Are you ready to get schooled in the art of testing? In this episode, Andy draws on his 20 years as a master testing craftsman to share his insight on the state of testing, and tackles some hot-button topics like the Schools of Testing and “Can developers do testing?”

Listen to the Audio

In this episode, you'll discover:

  • What the Schools of Testing are
  • Whether developers can do testing
  • How heuristics can focus our thinking during testing
  • Training courses every tester should know about
  • Tips to improve your overall testing efforts
  • Approaches to developing awesome tests

“One thing to improve your #testing effort would be to be conscious of WHY you're doing that activity.” ~ @andytinkham http://testtalks.com/78

Join the Conversation

My favorite part of doing these podcasts is participating in the conversations they provoke. Each week, I pull out one question that I like to get your thoughts on.

This week, it is this:

Question: What heuristics do you find helpful when testing? Share your answer in the comments below.

Want to Test Talk?

If you have a question, comment, thought or concern, you can share it by clicking here. I'd love to hear from you.

How to Get Promoted on the Show and Increase your Karma

Subscribe to the show in iTunes and give us a rating and review. Make sure you put your real name and website in the text of the review itself. We will definitely mention you on this show.

We are also on Stitcher.com so if you prefer Stitcher, please subscribe there.

Read the Full Transcript

Joe:         Hey Andy, welcome to Test Talks.

Andy:      Thanks Joe. Happy to be here.

Joe:         Awesome. So before we get into it, could you just tell us a little bit more about yourself?

Andy:      Sure. I am the QA Practice Lead for a company called C2 IT Solutions, in Minneapolis, Minnesota. I've been doing software testing for about 20 years; I started out as an automator, then branched out to a broader role. And now I take more of a strategic view as I help our clients.

Joe:         Awesome. So I know you have a lot of experience – 20 years – so I'm probably going to throw a lot of things at you, and we'll just take it as it goes and see where it leads us.

Andy:      Awesome. Let's do that.

Joe:         Cool. So, first, what the heck happened to your awesome Testing Bias podcast? I think the last episode was the first week of July?

Andy:      Yeah, so Ian Bannerman and I – my fellow host Ian – were on a good roll, but we were doing a lot of the show pretty much on the fly. We wanted to do a little more planning, make sure we covered our topics, so we stepped back to re-tool the show. And then I ended up leaving Magenic, our employer at the time. Our audio hardware was provided by Magenic, and so when I left they reclaimed it all. And now I work for C2, Ian still works for Magenic, and getting multiple marketing departments in there can be a little interesting from time to time. So we've been regrouping. There certainly will be something coming; I'm hoping to have it start by the end of the year. I don't know if it'll be just me, if it'll be me and Ian, what's happening there yet. But, I have a whole new set of audio equipment here, and we'll see what comes.

Joe:         Awesome. So Andy, like we've said, you've been in the industry for almost 20 years. Is there anything that you see that is changing, that testers need to be aware of?

Andy:      Oh absolutely. I think we're at a very crucial moment of change in testing. I've seen more and more companies moving away from the “traditional” forms of testing. As agile development gains more acceptance and becomes more prevalent, I think we're reaching the late adopter and even laggard stages of Geoffrey Moore's Crossing the Chasm diagram. And I see testing shifting away from the rigid documentation, the “let's run the same tests over and over and over” mindset. So I think we're very much at a point of change in how we approach things, in the way testing fits into the organization, and in our overall mission.

Joe:         Yeah I definitely agree with you. Are there any skills, you think, a tester should be working on to get ready for this change? Or is there a mindset you think testers should have in order to maybe shepherd this change that their companies may be going through?

Andy:      Sure. As far as skills go, I think one of the biggest skills testers are going to need is presentation of information. Visualization of data is going to become bigger and bigger. Maybe even getting into some deeper data analysis, where the results of the tests are one piece, but the broader trends matter too. I think some of your other recent guests have hit on some of those threads as well.

Knowing what is coming out of testing: not just that a test fails every now and then, but whether there's a pattern in those failures. Or, “How do I take this massive pile of information that I've generated, that the team has generated, and condense it down in a way that my executives really see a lot of value in having me do my job?”
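To make that concrete, here's a minimal sketch (not from the episode) of the kind of failure-pattern analysis Andy describes; the test names and run history are hypothetical:

```python
from collections import Counter

# Hypothetical test-run history gathered across recent builds:
# (test_name, passed) pairs.
run_history = [
    ("test_login", True), ("test_checkout", False),
    ("test_login", False), ("test_checkout", False),
    ("test_search", True), ("test_login", False),
]

# Count failures per test so patterns stand out, not one-off results.
failures = Counter(name for name, passed in run_history if not passed)

# Condense the pile of results into a short summary an executive
# (or the team) can actually act on.
for name, count in failures.most_common():
    print(f"{name}: failed {count} time(s)")
```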

Joe:         Well I definitely agree with you. And I could see this really blowing up: as more and more Internet of Things devices come online, testers will need to test all these different types of applications. I think data is going to grow more and more, so if we start learning how to analyze data now, we're going to be the ones responsible for analyzing all this test data and picking out the trends. Even if all of this were automated, you'd still need a person who knows how to think, who knows what to look for and what to pull out, to say, “Here's a significant trend that we need to focus on.”

Andy:      Yeah. And to augment that, I think there's a lot more along the lines of exploratory testing and pieces like that, so I don't ever see it all being automated. But absolutely, that data analysis is key.

Joe:         What I have been seeing more and more of is companies shifting left, trying to find bugs earlier in the development life cycle: using things like behavior-driven development to actually talk about the requirements before they even write a line of code, and continuous integration, where developers check in code more often and need quicker feedback from tests. So how does manual testing, as most people see it, fit into this new agile, continuous delivery world that we're living in (basically a DevOps world) that's really driving this forward?

Andy:      I think it's a shift from the very detail-oriented “does this requirement work?” verification, what Michael Bolton and James Bach call checking, to more of the big picture: thinking about how these disparate features work together, and how I can take more of a risk-driven approach. And not risk-driven like “do the most risky things first,” which is important, but not what I'm talking about here. I'm talking about risk-driven testing as a test design technique. So, thinking through: how could this feature fail? How could I design a test that would show me if that's a problem? And then using that as a way to further explore the application, while letting the automation handle the stuff that we can easily define.
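As a concrete illustration of risk-driven test design in that spirit, here's a minimal pytest sketch. The upload feature, its failure modes, and the stub implementation are hypothetical examples, not anything from the episode:

```python
import pytest

def upload(payload: bytes) -> str:
    """Toy stand-in for a feature under test: accept or reject a file."""
    if not payload or len(payload) > 5_000_000:
        return "rejected"
    return "accepted"

# Each case starts from a "how could this feature fail?" question and
# pairs it with the input most likely to expose that failure.
FAILURE_MODES = [
    ("empty file", b""),
    ("oversized file", b"x" * 10_000_000),
    ("binary junk", bytes(range(256))),
]

@pytest.mark.parametrize("mode,payload", FAILURE_MODES)
def test_upload_handles_failure_mode(mode, payload):
    # The feature may accept or reject, but it must respond gracefully.
    assert upload(payload) in ("accepted", "rejected")
```

Each test here traces back to a specific risk rather than a requirement line, which is the distinction Andy is drawing.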

Joe:         I think a lot of people may be thinking, with all this automation in place, that QA is dead. But as we shift left, I think QA people are actually going to be elevated, because they're the ones who really know the big picture from beginning to end, and they're the ones who are going to be driving the quality initiative all the way through the pipeline.

Andy:      Yeah, I think that big picture view is key.

Joe:         What do you consider a good approach to developing tests?

Andy:      A model for designing tests. Well, I think that comes back to the analogy of lenses. And, there are a number of lenses that we could look at the system through. And just like with real lenses – telescopes, magnifying glasses, 3D glasses, that sort of thing – looking through a different lens will highlight some things that become more prevalent in our vision, and it will obscure others. If I were looking at the moon through red and blue glasses, I'm going to see different details than if I were using a telescope.

I think for a long time, a lot of the testers I've talked to have pretty much just used the requirements lens. And that's a very important lens; we absolutely have to be looking at applications through it. But if that's the only lens we're using, we're blinding ourselves to a lot of other potential tests.

And in my role as a consultant, I've gone into organizations where they've got huge test teams, and the test teams really only look at the requirements document to say, “Yeah, okay, these are defined well enough that we can test them.” They go through and do basic tests that only look at the requirements, and the organization releases buggy code. And they're baffled by how they could have all this testing effort and still not catch all these bugs. The problem is, they're only using that one lens. I think that at least a combination of the requirements lens and that risk-based test design lens (risk being another way we could look at it) would give a much more robust test suite, as we're starting to think through the models.

But there's also a lot of other ones. We could also use performance, accessibility, security, pretty much any of those non-functional attributes. We could look at it from a cost lens, schedule lens, some of the project management-type pieces. And those are all going to highlight and obscure different things. The skill comes in knowing which lenses make sense in your context, and then paring that list down using the risk-based prioritization or some other way of figuring out, “These are the important things for me to do, so that we can get the best test we can.”

Joe:         Great. So you've mentioned context a few times, and I've been hearing more and more about context-driven testing and different schools of testing. I don't know why, but I've been in the industry for a long time and I've never heard of schools of testing. So could you tell us a little more about what that means, and whether you agree with the schools of testing concept that some testers identify themselves with?

Andy:      Yeah, there's a whole can of worms here. This will be a good talk. Around the 1999-2000 time-frame, Cem Kaner, James Bach, and Bret Pettichord got together. They were working on their book, Lessons Learned in Software Testing, and in their backgrounds they had also learned about Thomas Kuhn. Kuhn was a historian and philosopher of science who, around the mid-20th century, wrote about the way academic fields go through revolutions. A revolution in an academic field is basically where there are competing ideas and one of them becomes dominant.

And so in the cycle these fields go through, you start off with some sort of unifying theory: this is the way whatever-the-field-is works. But over time, as more problems are investigated, more and more gaps arise, things the current theory doesn't explain. So you start getting fracturing, as people come up with different ways to fill some of those gaps. And if you look at the patterns of which academic papers cite other academic papers, you see these clumps developing. These clumps are called schools of thought. Over time, each school refines its own ideas. The schools are usually fairly isolated from one another; a few people move back and forth, but there's a lot of interaction within a school and not a lot between schools.

And eventually one of those theories becomes dominant and is accepted by the rest of the field, because it explains the gaps the best, and the cycle repeats. So Cem, James, and Bret looked at the field of software testing, and they saw much the same thing: clusters of attitudes about testing with very different viewpoints, very different core theories, and some of that same isolation in terms of interactions. And they originally identified four schools:

They had the Quality School, which is kind of the “QA is a gatekeeper” mentality: “we're protecting the users from our developers,” that sort of thing. They had what they called the Factory School, which they identified as being very “let's document everything so anybody can do it,” very much focused on repetition. And they had the Analytic School, which is largely found in academia; there's a lot of work there on using various statistical techniques to predict where bugs are going to occur, and it's a very different community than most testers think about. And then they looked at how they themselves thought about testing, and they declared that they were going to be the Context-Driven School, where context really matters, and they laid out the seven principles of the Context-Driven School: there are no best practices, only good practices in context; projects are about people; software is intended to solve problems. I think there are a couple of others that I'm forgetting at the moment, but they had these seven principles, and I think they're in the appendix of Lessons Learned in Software Testing.

And as they were working on the book, they had people reviewing the drafts. There's a list of people who were the initial signers of the context-driven principles; I'm on that list, along with a bunch of other people, and that was kind of how it started.

Now, the can of worms comes in because things have split over time. First, there's a bit of a derogatory nature to some of the explanations of the other schools, and nobody I've seen has ever really stepped up and said, “Yeah, you know what? I'm a Factory School tester.” So there's always been a little bit of contention there. Cem, James, and Bret have had internal contention as well, which they've written blog posts about. The latest I think I've seen from Cem is that there is no one Context-Driven School anymore.

So those are kind of the schools of thought. Oh, and they did add a fifth school a little later on, as they looked at agile development and things like TDD. That was obviously a different approach to testing than any of the others, so they made it a separate school as well.

Even as recently as April 2014, I think, STPCon had Cem and Rex Black do a joint keynote: a debate over whether the schools are even a good, useful idea or not. Both Cem and Rex posted that audio earlier this year, I want to say in the June/July time-frame; I can send you a link afterwards for the show notes. So there's a very active debate going on, where some people argue that the idea of schools is divisive. I don't want to go too deep into Rex Black's argument, because I would probably get it wrong (I've listened to the debate, and that's pretty much the main source I've got), but his standpoint is that the schools have done harm to the field. The pro side is that they were intended to surface differences so that we can discuss them and grow as a field.

In the June 2015 issue of Testing Trapeze, I have an article where I bring in the work of Arnold Kling. Kling is an economist at the Cato Institute, and he's done a lot of looking into why US politicians always seem to talk past each other, never really communicating with each other, and really getting in their own way. He has a theory he calls The Three Languages of Politics. He's looked at conservatives, progressives, and libertarians, and he sees each group as operating on a different continuum in the way they frame their issues and their debates.

He's not saying that they only use a particular continuum, but as a group, they tend to use those as their dominant arguments. So he frames the conservative axis as Civilization vs. Barbarism; a lot of their arguments are about preserving civilization, avoiding chaos, and all of the bad things associated with barbarians. Progressives might be looking more at the Oppressed vs. the Oppressors: we've got to help these poor people who are being oppressed and save them from the people oppressing them. And libertarians would use Freedom vs. Coercion as their axis. A particular group can understand arguments from the other axes, but when they're trying to make their best arguments, when they're trying to formulate the point that's going to convince everybody, they tend to think along their own axis, in Kling's model. And the axes are different enough that a person thinking along one axis doesn't see anywhere near the import that an idea on another axis might have. And so they miss these pieces because they're focused in different ways.

And I think the schools of testing might well have those same sorts of continuums. The Factory School might really be valuing consistency; they're arguing against chaos in testing, bringing some order to a process where there's so much that we either can't predict, or that can go wrong, or that we just don't have time to cover. The Analytic School might be looking more at Correctness vs. Randomness, where they're looking to prove that their results are correct, that they've built a solid foundation for their testing. The Context-Driven School could value Context Awareness vs. Mindlessness: really thinking about the situation you're in and adapting your techniques to it, not just mindlessly saying, “Yep, this is a best practice, I'm going to do this.” The Quality School might be focused on process; they see process as a very valuable thing, and they're opposed to cowboy-ism, just going off and doing your own thing without thinking about the impacts on the team. And finally, the Agile School could be viewing it as whole-team Quality Responsibility vs. bolting quality on as an afterthought.

And I don't know whether these continuums are the right ones or not, but the idea really resonates with me.

Joe:         Yeah, but I'm just trying to understand how it actually helps someone. I can picture (I don't know why) a development manager just snickering at me if I tried talking to him about the schools of testing. But I guess there's value in thinking about how you're going to approach testing and having that philosophy; you're able to evangelize it better, or do better testing, if you analyze it more, I guess? I don't know.

Andy:      Well, I think they're useful for the field on an internal basis as well, because they highlight places where we view things differently. Rather than starting from the assumption that we all mean the same thing when we talk about regression testing, or even testing, or quality, there are very different definitions for these terms out there. So I think having these schools helps us find where those differences are so that we can work to resolve them, rather than just assuming that we're all on the same page. I don't know how to use them well with a development manager, honestly.

Joe:         Right. Andy, another term I've been coming across more and more (and I don't know if it's related to these schools of thought) is heuristics. I was reading a book by Daniel Pink, Drive, and it's not a testing book, but he talks about heuristic vs. algorithmic work. Could you tell us a little something about heuristics? Is that something you're familiar with?

Andy:      Yeah! And by the way, I will echo your mention of Drive. I think all of Daniel Pink's books have been very, very good. I enjoy reading them.

Joe:         Awesome.

Andy:      So a heuristic is a method we use internally as we're thinking through things; it's a guideline. It enables our brains to function in a world where there's so much information coming at us all at once. Even our peripheral vision is a heuristic: the things directly in front of us are the most likely to need our focus, so we pay less attention to things coming from the side and don't see the same detail there. Like most heuristics, it doesn't have to be right; a threat could come from the side. But if we had that same level of detail across our whole field of vision, there would be so much going on that we wouldn't be able to process it all.

We use heuristics the same way when we're thinking about things, as we're slicing and dicing information. In testing, an example of a heuristic would be that bugs tend to cluster. In a lot of cases, if the situation was right for a bug to be created in one piece of the software, there's a good chance the situation was right for other bugs to be created there as well. And if there's a place where the situation was such that not many bugs were created, there's a good chance there will be fewer bugs there.

But again, that's not always right. And so there are all sorts of these heuristics that we apply to focus our thinking and make it more manageable.
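As one hypothetical illustration of putting the bugs-cluster heuristic to work (a sketch, not something from the episode), you might weight exploratory-testing time by each module's defect history:

```python
# Hypothetical defect counts per module, pulled from a bug tracker.
defect_history = {"checkout": 14, "reports": 7, "search": 3, "profile": 1}

budget_hours = 20  # exploratory-testing time available this iteration
total_bugs = sum(defect_history.values())

# Spend more time where bugs have clustered before, remembering that the
# heuristic is a guideline that can be wrong, not a guarantee.
for module, bugs in sorted(defect_history.items(), key=lambda kv: -kv[1]):
    print(f"{module}: {budget_hours * bugs / total_bugs:.1f}h")
```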

Joe:         Awesome. What really made me think about testing for some reason is the debate over automation vs. manual testing, and the things you talked about earlier regarding being able to analyze data. I don't think we'll ever get to the point where we can automate that, because it's a heuristic activity where you need to think about it and actually analyze it. But that's what brought it up in my mind.

Andy:      Yeah, you have that heuristic matching of patterns. As humans, we're very good at seeing patterns even where they don't exist. There are entire blogs of pictures of inanimate things that make a face; you often see it on the front of a Mazda, where the headlights make it look like there's a smiling face on the car. That kind of pattern recognition is another heuristic we're using. And in data analysis, you might very well see a pattern, or think you see a pattern, in the data because of that heuristic analysis. Maybe it's right, maybe it's not, and that's where the human judgement comes in. Exactly.

Joe:         Cool. So I'm probably going to hit on one more hot-button topic, and that is: can developers do testing? The reason I ask is, I see more and more companies going towards developers-in-test, and sprint teams usually have more developers than “testers”; testing is becoming more of a role. So how do you see that playing out? Do you think that's beneficial for testing? Can developers test, or is it a completely separate mindset, so different that you want one person thinking in a testing way and another thinking in a developer way? Going back and forth between them may be too much pressure, because they're almost totally different viewpoints.

Andy:      I think the short and controversial answer is: yes, developers can test. It's not like testers are born with a special genetic ability that makes us unique. But there are differences between the testing activity and the testing role. I think we all win when developers take on some of that testing role. There are different questions we're trying to answer, and one of those questions is: does the code work the way the developer intends it to work? The best person to determine that is the developer, who knows how they intended it.

And so taking on some of those early levels of testing that are focused on that question, I think, is great for the team. Now, I don't want to trivialize the years of experience, the skill, and the expertise that go into testing; that's also very important, and you're right: it is a different viewpoint. There are switching costs. The developer spends a lot of time thinking, “This is how the system should work when it's complete.” Even the rest of the team is mostly thinking, “This is how our users are going to use it”; they're all thinking about how it should work. Having the opposite mentality, “How could things go wrong? Let's make sure we plan for it,” is important on most teams. I think there are teams that could get by without it; it's very contextual.

I think there's room for both sides doing the role, and it's a great thing for developers to take more ownership of their quality. A lot of that can free up testers to apply their skills in more detailed and valuable ways than simply, if we're testing a calculator, making sure 2 + 2 = 4, for example.
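Andy's calculator example maps directly onto the kind of developer-facing check that's cheap to automate; a minimal, hypothetical sketch in pytest style:

```python
def add(a: int, b: int) -> int:
    return a + b

def test_add():
    # Answers the developer's question ("does the code work the way I
    # intended?") cheaply, freeing testers for deeper, riskier questions.
    assert add(2, 2) == 4
```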

Joe:         Are there any resources or books that you would recommend to someone to help them start thinking more about testing, or their approaches to testing?

Andy:      Yeah. I know Explore It! by Elisabeth Hendrickson has been brought up on your cast before; I would start there, and with the Lessons Learned in Software Testing book. And if you have a chance to take the BBST (Black Box Software Testing) course, it's online. This goes back to when Cem Kaner became a professor down at Florida Tech. I got laid off in 2000, 2001, somewhere in there, and went down there to go to grad school, to figure out why people loved manual testing. At the time I was an automator, and my standard response was, “I can write a script for this, why do I want to do this by hand?” But I saw that there were people who got very, very passionate about testing who weren't automators. So I went down to school to find out more about that viewpoint. And that first semester I was there, Cem was teaching his Black Box Software Testing course, and I figured, “Oh yeah, I've been in testing for 10 years, I've read books on it, this will be easy.”

But it was absolutely transformative for me. It has so shaped my testing philosophy that I recommend every tester take it. Luckily, you don't have to go to grad school anymore to do this; grad school is way too expensive at this point. Cem has taken the course online, and even at Florida Tech he's inverted it. Rather than the standard model (lecture on Tuesdays and Thursdays, where you sit in the classroom, the instructor talks at you for an hour or an hour-and-a-half, and then you go away and do homework), he has video lectures for all of the course. It's broken up into three sections now, plus some extra add-on modules. The videos were all funded by an NSF grant. He makes them available on his website at testingeducation.org, in little 10-to-20-minute chunks, so you can work through the lessons. There are readings listed there, there are exercises, and there are even study guides for the first two modules now.

So you can either do that as self-study, or the Association for Software Testing offers these courses regularly. You have to be a member, but they offer them online, so you're working with students from around the world and getting some very good perspectives. I would highly recommend that class.

Joe:         Awesome. My very first book I ever bought was Testing Computer Software by Cem.

Andy:      Oh, yep!

Joe:         I bought this book, and I read it before a job interview – I knew nothing about QA – and I got the job. This book has always held a special place in my library. Anything by him I would highly recommend. I didn't even know that the videos were online, so thank you for that resource. I think everyone should check that out.

Andy:      Yeah, absolutely.

Joe:         Okay Andy, before we go, is there one piece of actionable advice you can give someone to improve their testing efforts? And let us know the best way to find or contact you.

Andy:      I would say the one thing to improve your testing effort would be to be conscious of why you're doing that activity. There's a lot of, “It's the best practice, so we're just going to do it.” But that doesn't necessarily mean it's providing the team any value. And even if it is providing some value, is it providing the most value possible for the time you're spending on it?

So really taking that time to periodically look at what you're doing and say, “Is this still doing what we think it's doing? Why are we putting this in the process, why are we feeling like this is worth investing time and money and resources into, when there are so many other things we could be doing?”

And the best way to find me is probably Twitter; I'm @andytinkham on there. I don't post all that often, but I also blog at testerthoughts.com. And for the new podcast stuff, there should be something going out at least on the Testing Bias Twitter feed, and I'll probably make some sort of audio file to say, “Here's the new stuff.” Certainly going back, there are 13 episodes of Testing Bias out there, and I'm pretty happy with them. There'll be more to come, and we'll definitely keep those listeners in the loop.

