
Getting Started with Performance Testing

By Test Guild

[pullquote align="normal" cite="Reinhold Messner"]Mountains are not fair or unfair, they are just dangerous.[/pullquote]

I recently spoke with Mark Tomlinson, the host of the PerfBytes podcast and a 20+ year veteran of performance testing, about how a team can get started with performance testing. Mark has seen a lot, and he shared a bunch of tips to help someone new to performance testing get started with their performance efforts.


You'll also discover how to avoid being featured on the PerfBytes News of the Damned segment.


Here's the full transcript of the performance testing goodness that Mark shared in episode 48 of TestTalks.


For more performance testing awesomeness for your earbuds, listen now:

Performance Testing News of the Damned


Mark: Most people who are doing zero performance testing are almost guaranteed to end up in the kind of performance testing horror stories that PerfBytes features in News of the Damned. But it's interesting, because there are so many shared things you can do to avoid ending up in a disaster like that. Part of it is just getting started with doing some load testing proactively. But a lot of it comes down to operational practices, good practices and things you can do with infrastructure in operations. That's why I think APM, application performance management and monitoring, is becoming so much more popular: you can avoid a lot of these problems with simple configurations and infrastructure, even if you don't have all the load testing practices.

APM


Actually, I got a question recently about how APM grew up from over 15 years ago, when it was just people running monitors in production. Surprisingly, Joe, you might not think this, but people never used to monitor production.


They just would say, "We tested it for the last 18 months and boom, it's live." So it was actually quite a revelation a long time ago that we should monitor CPU, disk, and memory. They would run iostat every few minutes or something like that. I remember it was a revelation for some people to actually monitor response time in production. Like, "Oh, we can do that."


Those practices have come a long, long way. Now we've gone from just monitoring from outside the app to monitoring inside the app, with diagnostics and profilers like New Relic and Dynatrace. CA has some of the old Wily stuff that you can plug into your stack so you can monitor the actual method-level calls and database traces. You can do a lot of that nowadays because computers are so much faster that you can monitor without a lot of overhead. And a lot of the cloud providers now give you the basic stats to do "APM".
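To make that method-level idea concrete, here's a minimal sketch of the kind of timing an APM agent automates under the hood. The checkout() method, the simulated work, and the 200 ms budget are all hypothetical; real agents like the ones Mark mentions instrument methods via bytecode without any code changes.

```java
import java.util.concurrent.TimeUnit;

// Minimal sketch of the method-level timing an APM agent automates.
// The checkout() method and the 200 ms budget are hypothetical.
public class MethodTimingSketch {

    static void checkout() throws InterruptedException {
        Thread.sleep(150); // stand-in for real work (DB calls, rendering, ...)
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.nanoTime();
        checkout();
        long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);

        // A real agent would ship this to a dashboard; here we just log it.
        System.out.printf("checkout took %d ms%n", elapsedMs);
        if (elapsedMs > 200) {
            System.out.println("WARN: checkout exceeded 200 ms budget");
        }
    }
}
```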


So take that practice from production and start thinking, "You know who else is profiling in this whole company building software? Who else does profiling?"


And then you finally run into a developer at the water cooler and say, "Hey, do you do profiling?" They're like, "We do profiling all the time. We do unit testing, we do white box testing, we do, you know, debugging and profiling and all this stuff in development." We're like, "Oh, well, why don't you guys get together with the ops guys and start talking about profiling the code?"


But yeah, doing some APM now extends to performance testers and even functional testers. In those QA environments the same types of profiling tools can be used. Think of what Microsoft did recently: they wanted to eliminate the problem of not being able to repro a bug, because they thought it was a major waste of cycles when a tester would say, "Oh, I found a bug," and then the dev says, "I can't repro that bug, so I'm just going to dismiss it. Not a bug."


So they tried, through Windows event tracing and stack tracing, putting sort of real-time profiling into QA, really making QA more like a white box development discipline behind the scenes. And so when I think of APM or performance management or performance practices moving upstream, it actually gets to look a lot more like white box profiling and debugging. Some testers are having to learn that now; they're using tools like Dynatrace or, you know, Visual Studio's CLR profiler for .NET, or a Java profiler. And that's before you even consider running load tests. You might take your functional regression suite and turn on some profiling behind the scenes, and if you hit a bug, boom, you instantly have all the tracing that you need to look at.

[pullquote align="normal"]Start making your performance metrics available or make people more aware of them.[/pullquote]

Performance Testing: Start with Profiling


Joe: Are you saying that would be a good place to start for someone who doesn't have performance testing in their organization yet and wants to get their feet wet? At a minimum, would you say, let's just start profiling the application and see where we go from there?


Mark: Sure. Think of it in two ways. First, you monitor and make visible the measurements on CPU, disk, memory, network, response time, throughput, and app stats like garbage collection size, requests queued, number of connections used in a pool, number of threads used in a pool. If you start making those metrics available, or make people more aware of them, what you're doing is influencing the thinking of the people who, first of all, would be interested in those numbers at all and, secondarily, would know what to do if those numbers were "good or bad". So let's say, Joe, you're working in operations and I came up and said, "Hey, Joe, something's running at 99%." Well, how do you feel?
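For illustration, here's a small sketch of how you might surface a few of those numbers from a Java service using the JDK's standard MXBeans. Connection pool and request queue stats live in app-specific MBeans, so heap, GC, and thread counts stand in for them here.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

// Sketch: surface a few of the numbers Mark lists using the JDK's
// standard MXBeans. Pool/queue stats would come from app-specific
// MBeans (e.g. your connection pool's JMX bean) and are omitted here.
public class MetricsSnapshot {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        System.out.printf("heap used: %d MB%n",
                mem.getHeapMemoryUsage().getUsed() / (1024 * 1024));
        System.out.printf("live threads: %d%n", threads.getThreadCount());

        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("gc %s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```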


Joe: I feel like it's probably the CPU or something like that . . .


Mark: Right, and that's not good. But what if I said it's 99% uptime?


Joe: That's good.


Mark: Oh, that's good. Right, so you see, if you just make people aware of a number, they may not know what the number means. A second thing is, we're not really sure about the reaction, the follow-through. We may be doing, you know, the sky is falling, Chicken Little. Or it could be, "Wow, let's publish all these great numbers." We haven't really indoctrinated people or come up with an elegant way to know how they're going to react.


So before you even start load testing, if you start making your measurements visible, be very careful about the communication pattern, about your influence on developers or on a product owner, and know the difference with someone who doesn't really know what that number means or what to do about it. You may give them a lot of power. Put them in a management position and they're like, "Oh my gosh, we're going to put together a million dollar project to do all this stuff to go after that cache hit ratio." And let's say they just didn't understand cache hit ratio. There are all these esoteric black arts in the performance world, where everyone is on a team of firefighting specialists, and I think we need to break that down by simplifying some of these numbers and saying, "Hey, Bill, when you see this number, if it's over 85%, what I'd like you to do is get three people together, Mary, Pete and Shirley, and we're going to do a little brainstorming on triage, because if it's over 85% we probably have something to look at. Otherwise, Bill, you can relax and go about your business."


Then we're being compassionate to Bill, because he doesn't need to be a performance guru, but he at least knows, "Hey, I'm part of a functional team and we're making good use of those numbers." And I think that's the reactive model, whereas load testing or performance engineering using preproduction, or what-if scenarios, or production simulation early, that proactive model uses the same numbers, but it communicates something very different when you start switching to doing load testing.

[pullquote align="normal"]Make friends with your ops guys[/pullquote]

Communication – What do these numbers mean?

Joe: I definitely agree, and I think this is actually a big issue. It seems small, but the communication piece: you give a tool to someone who's new to performance testing, and all of a sudden they're throwing 1,500 users at your application with no wait time, you have all these numbers being generated, and they're going to management saying, "Our application's broken." And everyone's running around in a frenzy.


So this is a good way to ease your team into it and slowly teach them what's a good number, what a number means, before you actually dive into throwing a bunch of users at an application and trying to deal with all the output that comes out of it.


Mark: I think also people evolve. Like you say, someone jumps into load testing, and this happens often with teams where somebody listened to one of our podcasts, or listened to this podcast, and then went back to their team and said, "We're not doing enough about performance testing. We have to start doing something about it." And it lands on somebody's backlog in the next sprint. "Oh, I got a story," and the story probably isn't even specific. It says: do something about performance; as a product owner, I'm really scared that we haven't done anything about performance yet. It would be a valid story, but like you say, you'll end up with false positives if somebody just jumps in detached from the real world of production and what the real systems look like.


And so there are a couple of things. If you're getting started in a vacuum and you haven't made friends with your ops guys, or even just, say, the database administrator, somebody who touches production, you haven't, you know, taken them out to lunch to say, "Tell me, how do I even know how much load to throw at a system?" Or maybe the people who know the web logs or load balancer logs, who can say, "Well, we can tell you about traffic. We can tell you how many hits per second there are when the peak two hours is."


That might be the first part of the research before you start running, you know, "real load". Now, that's not to say, Joe, let's say you were a developer and you said, "I'm just interested in running 10 threads. I'm never going to communicate to anyone that this is Chicken Little, production is going to fail. But I do want to do some concurrency tests." That's where NUnitPerf or JUnitPerf will do some concurrency, and you can look at thread contention, and you could look at, you know, connection limitations or something. So you could look at mutexes, at very, very small-scale memory leaks, at other critical sections within the code that you sort of want to blast out. There's nothing wrong with letting a developer do that, and even a tester helping out in doing that. But you're right, it's a problem if we turn that into, "Oh my gosh, my little 10 threads exploded. We need to escalate this to management, production is going to fail." That's probably not the best practice. So somewhere in the middle those first couple of steps are: how do I switch into generating some real load?
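Here's a hand-rolled version of that 10-thread concurrency probe, just to show the shape of the idea. It is not the actual JUnitPerf API; the synchronized counter stands in for whatever critical section you want to blast, and the thread and iteration counts are illustrative.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Hand-rolled 10-thread concurrency probe in the spirit Mark describes
// (JUnitPerf wraps the same idea in a JUnit runner). The "critical
// section" here is a synchronized counter standing in for real code.
public class ConcurrencyProbe {
    private static final Object LOCK = new Object();
    private static long sharedCounter = 0;

    public static void main(String[] args) throws InterruptedException {
        int threads = 10;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch done = new CountDownLatch(threads);
        AtomicLong contendedNanos = new AtomicLong();

        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                for (int n = 0; n < 100_000; n++) {
                    long t0 = System.nanoTime();
                    synchronized (LOCK) {          // the contended critical section
                        sharedCounter++;
                    }
                    contendedNanos.addAndGet(System.nanoTime() - t0);
                }
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        System.out.printf("counter=%d, time in/around lock: %d ms%n",
                sharedCounter, TimeUnit.NANOSECONDS.toMillis(contendedNanos.get()));
    }
}
```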

[pullquote align="normal"]You should think about paired performance.[/pullquote]

Performance testing is a team sport


Joe: Awesome. Another issue I find is that there's one person who gets dubbed the performance guy, and then any time anything goes wrong, you know, the performance guy says, "This number looks bad." And they're like, "So what do we do about it?" So how do we make performance testing a team effort where everyone is engaged, like you said?


Mark: Performance testing is a team sport; it's a collaborative sport, actually, just like any other modern, innovative team or development effort around products. Everyone is agile; we're in the teenage years of real agile or extreme development. We understand there is more benefit, I hate to use the synergy word but it is true, there's a lot of interactive benefit that stems from human beings thinking together on problems and solving problems: paired testing, pair programming.


You should think about paired performance. If you've got maybe one particular app, or a set of components or services, one scrum team where these guys are the mission-critical scalability guys, pull in your specialist performance tester and say, "Will you come and just join a couple of sprints with us? Even if you're not taking on a story, just help us figure out the critical things going on within this code."

[pullquote align="normal"]Building an absolute or isolated ivory tower out of a bucket of bottlenecks only has a life span, it has a half-life, and suddenly you're not really valuable anymore.[/pullquote]

A Bucket full of Bottlenecks


I would say also, years ago I had this story, a metaphor of a guy who walked around with a bucket full of bottlenecks. But it's true. Think about the metaphor of a guy like me who's like, "Well, I've learned these 10 different environments: this storage, this database, this network, this app. And I have a bucket full of bottlenecks that are already known." Take SQL Server contention in tempDB: you have a single tempDB file instead of round-robining across 10 tempDB files. It's just a logical constraint in old SQL Server. So I knew that bottleneck, and with 90% of the customers running SQL Server I could walk in, look at the stats and processes, and tap tap tap.


It's like a doctor: "Yep, you have the same disease, and here's your prescription, and thank you for your money." At some point the SQL Server team fixes the default setting and no longer has that contention. And now my value is shot.


So building an absolute or isolated ivory tower out of a bucket of bottlenecks only has a life span, it has a half-life, and suddenly you're not really valuable anymore.

[pullquote align="normal"]A great way to keep learning is to be collaborative[/pullquote]

Keep Learning


So every performance tester out there needs to know that their current knowledge is going to be cut in half within the next year by new technology, so if you have a career in performance, it's probably a good idea to keep learning. And a great way to keep learning is to be collaborative. So the biggest tip is to respect somebody's past knowledge, but say, "Is there anything similar in IO contention that we might find in Oracle, or in logging, or in JMS? Oh, wow, if we look at RabbitMQ, yeah, there's also some file contention. Maybe we can set our queues up differently. Conceptually that's very similar to that tempDB contention from 15 years ago."


So it's difficult, but inviting your performance folks to transfer their knowledge in a little more abstract way, that's one way. Or just start including them: "What do you think? What should we do with this story?"

[pullquote align="normal"]Make it relevant to the real world.[/pullquote]

Realistic Performance Criteria


Joe: Awesome. Say we have eight sprint teams, and there's a person who says we're now going to focus on performance, and each sprint team has got to work on performance; that's their high-level directive. And they get some number from high up saying, "Thou shalt perform in two seconds." And of course it doesn't work in two seconds, even with one user manually. So how do you get that team to realize that maybe the performance criteria they got were unrealistic, and how can they show, proactively and in a positive way, why that is and how to move forward to a more realistic number?


Mark: A good example of how this works out is e-tailers, online e-commerce. We've done a pretty good job, and I'll say the testing tool and APM vendors out there do a good job, of helping people convert their conversion rate metric into bottom-line impact on revenue, actually connecting performance to revenue. And sometimes the guy in the corner office, he or she, sends down that "Thou shalt have two seconds." There are a couple of different ways that could come about, and we should be aware of that in the scrum and ask, "All right, well, why two seconds?" And sometimes it's, "Don't ask questions, just get it done."


Well, maybe we don't really need to ask the question; for some reason there's a valid reason. It's either financial, meaning a two-second response time: currently our average in production is three seconds, and vendor X or white paper Y told me that if I shave one second of response time off the shopping cart page, I can boost my conversion rate by 15%, and that's my revenue goal.


So you've got this very hard revenue or throughput number attached to performance. The way out of that dysfunction is to say, "Let's bring more clarity." You may not be able to go back up to the executive to talk about that, but you can go parallel, go to your peers and say, "Hey, Jim, you've got the monitoring tool. Tell me, what's the actual response time in production? Is two seconds crazy?" And you might find out response time in production is 2.1 seconds. "Okay, hey, wait a minute, we're only shaving 100 milliseconds off this transaction. Guys, this is not going to be hard at all. Let's start digging around where we can do a little caching or maybe limit some round trips, and it might be very easy to do."


But let's say your response time in production truly is five seconds and you've got to shave three seconds somewhere. That's a more difficult endeavor. So the first thing would be to figure out whether this is a serious, hard-to-solve problem or actually an easy one, by talking to your friends in the real world and looking at your beaconing, your monitoring, your response times in production. And if you don't have any of that, just go home and, from your own internet connection, take some samples on the page.


Use YSlow; you don't need to do a load test, just take some samples of the page load times in five different browsers. So that's a good first step: figure out the scope, whether this is a serious problem or not. I would say also, if this is spread across all of these sprints, you should probably immediately say: look, all of these teams are looking at performance; we might find one team discovers a pattern, or an anti-pattern, that they're solving.
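If you want to script that sampling rather than click through browsers, here's a rough sketch using Java's built-in HttpClient. Note it times the HTTP round trip only, not the full browser render that YSlow analyzes, and the URL is hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: take a handful of response-time samples against one page,
// a rough stand-in for manual browser checks. This times the HTTP
// round trip only, not full page render. The URL is hypothetical.
public class PageSampler {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/shopping-cart")).GET().build();

        for (int i = 1; i <= 5; i++) {
            long start = System.nanoTime();
            HttpResponse<String> resp =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("sample %d: HTTP %d in %d ms%n", i, resp.statusCode(), ms);
        }
    }
}
```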


There should be collaboration across the teams. Maybe every day at 4:00 we all stop, get a cup of coffee, grab a beer or whatever, hop on a Skype group or some collaboration tool and say, "Hey, Susan found this awesome bug and we think it may be a pattern across the data access layers for all the teams." So share what you found and share how you fixed it. Minimizing round trips is a good example, or caching some data that you don't have to retrieve every time from the database.

Start Collaborating


That's the second thing I would do: start collaborating, because you'll find those patterns across all the teams, and share them as soon as possible. And what you find is that shaving 50 milliseconds off adds up. Susan might say, "Look, 50 milliseconds is not that much." Then you're like, "Wait a minute, this is across all 13 teams on different components. Together, all those 50 milliseconds add up to 650 milliseconds." Now we're getting somewhere. So make sure you collaborate on the bugs, on the bottlenecks that you find.


Get your Validation Piece Together

The third thing is: get your validation piece together. If you're not doing load testing to confirm that Susan's fix is working, then you need to start looking at, "All right, I need a way to just start." Build a little lab; pick one of your environments.


Start somewhere and get JMeter, or, well, everyone has free open source performance testing tools now: CloudTest has free tools, LoadRunner has free tools. You can get started with a small level of load and start hammering some load at the fixes, and create that environment for all of these bottlenecks, within that two weeks, every night. Hey, whatever fixes you've got? Let's get them in a build, put them in some production simulation, and start running some load.
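In JMeter that nightly run is typically a test plan executed headless (jmeter -n -t plan.jmx -l results.jtl). As a sketch of what such a run is actually doing, here's a tiny hand-rolled load generator: N virtual users hammering one endpoint for a fixed duration, reporting throughput and average latency. The target URL and the numbers are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

// Tiny hand-rolled load generator showing the shape of a nightly check.
// In practice a JMeter test plan does this with far more fidelity.
public class MiniLoadTest {
    public static void main(String[] args) throws Exception {
        int users = 10;
        long endAt = System.nanoTime() + TimeUnit.SECONDS.toNanos(30);
        AtomicLong requests = new AtomicLong(), totalMs = new AtomicLong();

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder(
                URI.create("https://staging.example.com/checkout")).GET().build();

        ExecutorService pool = Executors.newFixedThreadPool(users);
        for (int i = 0; i < users; i++) {
            pool.submit(() -> {
                while (System.nanoTime() < endAt) {
                    try {
                        long t0 = System.nanoTime();
                        client.send(req, HttpResponse.BodyHandlers.discarding());
                        totalMs.addAndGet((System.nanoTime() - t0) / 1_000_000);
                        requests.incrementAndGet();
                    } catch (Exception e) {
                        // a real harness would count failures separately
                    }
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.printf("%d requests, avg %d ms, %.1f req/s%n",
                requests.get(),
                requests.get() == 0 ? 0 : totalMs.get() / requests.get(),
                requests.get() / 30.0);
    }
}
```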


So those are kind of the three: make it relevant to the real world; collaborate on any fixes you find; and then choose one of your environments for highly repeatable runs, put all the fixes in and rerun it, rerun it, rerun it, even starting with some of the free tools that are out there. A great place to start.

It runs fine on my machine


Joe: How do we avoid the false expectation that because it performs a certain way in my environment, it's going to perform just like that in production?


Mark: Joe, it runs fine on my laptop, so it should run fine in the cloud with thousands of nodes, man, right? And that's true, if you actually run production on 1,000 laptops; you could maybe do the math and your capacity projection would be fairly simple. But you're right. The traditional approach, and I'll say "traditional" in a respectful way, because there were some really very intelligent modeling tools from TeamQuest and HyPerformix back in the day, was to think of capacity projections as an annual or biannual budgeting exercise, because you may only buy hardware that often.


Remember, hardware cycles a long time ago were very slow, and they were actually hardware. So if you put the order in now, it's going to be six months before you actually get that next upgrade.


And so all of those practices came down to, you know, making one estimation, one model. So there was a tremendous amount of value to the business in making a really, really great decision out of that one model, and the model didn't change after that. Those were practices where you would take a run book, or take all the stats from production, do a mathematical or arithmetic calculation to model the future, and set your budget on business growth and what the expectation was.


You come up with that model and then, you know, you'd say goodbye to the TeamQuest guys for a year and say, "Come back with your consultants next year when we do this again." You only really did those modeling calculations once a year, once every two years. You made a $5 million decision and bought your hardware, and things moved slowly.


We don't live in that world anymore. Because of the rapid pace of deployment and everything, I think two innovations have happened. One, production has become more fluid and more open.


The idea that it's more heterogeneous, lots more interfaces, the whole service architecture, has changed everything. So the practice now is to virtualize something in production: flashback databases, wipe out the data. All of the things I learned from Dan Barter, who's at [inaudible 00:20:33], and those guys who really formalized and proceduralized a lot of the consultative ways, I'll say, to get testing done in production.


We used to never do it: "Thou shalt never do that." And it's changed a lot, so it's actually valid to do now. You might choose not to take the risk of doing it every night, though. Development is doing builds a couple of times a day, or definitely every night; you can't test in production every night.

[pullquote align="normal"]"What if" scenarios are harder to do in production because production is production[/pullquote]

What if?

So you can test at 100% scale in production, but there's still a limitation, which is: how do I test a fictitious future scenario? What if I do twice the load? What if I acquire the competitor and run three times the throughput on my current systems, can they handle it? The "what if" scenarios: what if I had twice the machinery?


Those "what if" scenarios are harder to do in production because production is production, and you're probably not going to get permission to change it dramatically. So we move into the test lab, the other part of your question, which is testing at scale. When you build that little environment, and sometimes developers or testers are the least empowered people, you really have to have friends in the ops world. You've got to say, "Look, I'll either bribe you with beer or single malt scotch," or actually be the provider of great data, so they love what you give them; you know, they're your customer. And for testing at scale, I try to advise customers to stay at either 50% scale or 25% scale, because then you're doing a simple 2X or 4X conversion.


For some reason we like twos and fours: two NIC cards, two CPUs, dual core, quad core, so it's easier math. Let's say a single node in production has eight cores, you know, two quad-core processors. Great, so I can run at 50% scale with a single quad core. Or you might throttle it down by half, or whatever.


Making your scaled environment absolutely, perfectly 50% scale is incredibly hard to do: CPU, memory, disk, network, number of connections, throughput. Then you get into the app layer, like the number of connections you allow to the database. If you have 100 in production, you do 50 in test. The number of threads, min and max, you're going to change as well.
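The arithmetic itself is simple; the hard part is applying it to every knob consistently. Here's a sketch of that 2X conversion using the production values from the example above (8 cores, 100 DB connections); the thread cap is a hypothetical addition.

```java
// Sketch of the 2X-conversion arithmetic for a 50%-scale test
// environment. Cores and DB connections match the example in the
// text; the app-server thread cap is hypothetical.
public class ScaleConfig {
    public static void main(String[] args) {
        double scale = 0.5;          // test lab is half of production

        int prodCores = 8;           // two quad-core processors
        int prodDbConnections = 100;
        int prodMaxThreads = 400;    // hypothetical app-server thread cap

        System.out.printf("test cores:          %d%n", (int) (prodCores * scale));
        System.out.printf("test DB connections: %d%n", (int) (prodDbConnections * scale));
        System.out.printf("test max threads:    %d%n", (int) (prodMaxThreads * scale));
        // Load is scaled the same way: run half the production peak, then
        // multiply observed capacity by 2 when projecting back to prod.
    }
}
```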


You can take this exercise to the nth degree, but every other day somebody says, "Well, I think we should bump up the connections. I think we should bump down the connections. I think we should change this because we're more fluid in our releases and configuration." So the truth is, if you're going to try to get load testing put into a pipeline, load testing should become the place where you can run with the new configuration settings before they go to prod, so you're truly getting back to a pre-release model.


Hopefully not blocking, but agile enough that you can say: we're at exactly 50% scale, we ran half the load, and it ran fine. So if we run twice the load with twice the number of connections, we won't blow out the connection pool in prod. And that exercise, I mean, it took me five minutes to explain, and it's really hard to explain to your manager if they don't get what you're doing. So it's incredibly difficult to be in that role.

This is a lot of work – should we just give up?


So the third thing I'll say on this, because I have a lot to say, is this. I still think people see the complexity of that idea and conclude, "We're never going to get it right, let's just give up and not do it." Please don't do that, because there are other kinds of bottlenecks that are not specific to scale: you can find logical contention, deadlocks, blocking.


Maybe it's not physical contention, like IO latency or network latency, but look at the trend. This is what I see with continuous integration: if we're running the same little performance test every night on every build and we look at the trend over time, suddenly after three weeks of running we've consumed four times the number of connections in the connection pool, and no one even knew that was going to happen.


Now that bug is a logical configuration bug: we're using framework components, and all of a sudden two other components, boom, chew up another connection, or hold it too long, or don't release it. So you can find those sorts of logical, code-based defects, those bottlenecks, just by watching something change. And, you know, I find a lot of people let a bug get all the way into production and then say, "If I'd been looking at this on the nightly build, I'd have caught this thing three weeks ago. This is some simple little thing."
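That nightly trend check is easy to automate once the metric is captured. Here's a sketch that compares peak connection-pool usage per build against the previous night and flags the jump; the build names and numbers are made up, and a real pipeline would read them from last night's test results.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a nightly trend check: compare a resource metric (peak
// connection-pool usage) against last night's run and flag the build
// where it jumps. Numbers are made up for illustration.
public class PoolTrendCheck {
    public static void main(String[] args) {
        Map<String, Integer> peakConnectionsByBuild = new LinkedHashMap<>();
        peakConnectionsByBuild.put("build-101", 12);
        peakConnectionsByBuild.put("build-102", 13);
        peakConnectionsByBuild.put("build-103", 12);
        peakConnectionsByBuild.put("build-104", 48); // the regression night

        Integer previous = null;
        for (Map.Entry<String, Integer> e : peakConnectionsByBuild.entrySet()) {
            if (previous != null && e.getValue() > previous * 2) {
                System.out.printf("ALERT %s: pool usage %d vs %d last night%n",
                        e.getKey(), e.getValue(), previous);
            }
            previous = e.getValue();
        }
    }
}
```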


So I'm like, "How can we watch those, just the consumption of a resource relative to last night's run and the night before? Over time, can we see that?" And of course then you get into some other problems. I see people who have shared environments, and it's noisy, and you want to control for the noise.


But if you can at least get something running like that, again, it makes people aware, and once they see the visibility of it they start getting some value out of it.

Set up a physical environment


Now you have some traction to say: let's go set up a physical environment. You know, we work with technology as testers, developers, whatever, and sometimes we discount our credibility as users. We know we're biased, and it's good to know you're biased, but it's also good to know, "Hey, this is a simple page. This thing is taking 40 seconds to load, and I don't care if it's just my environment or not, something is not right here."


Especially developers. Think of how developers play with the code: they compile and run it, compile it and run it. They're debugging as they're building; they do it interactively. They don't just write all the code perfectly and push the compile button once; I don't see that anymore.


That's the old COBOL and C programming. With the interactive, framework-based stuff they're molding it more like clay. They're developing it as they go: "Get this running, now add this. What changed? Add this. Wait, I added this component or this extra method and now it does something, what is that? Oh, I'm using the wrong futures, and the pool's not right, and I don't have enough threads." Things like that.

[pullquote align="normal"]Never talk about any physical properties at all. Just talk about time and volume.[/pullquote]

Don't forget your users

The point is that when we get trapped in the physical thing we're building, the use of threads or physical resources, we lose track of time. And actually, I find we have influence as engineers. We will influence not only our peers in testing, but especially the business users, business analysts, product owners, even a technical product owner.


We think about technology first and users second, and I actually advise engineers, when we're doing sprint planning or investigation or paired work around what the story should be, to never talk about any physical properties at all. Just talk about time and volume. And if your product owner or your business analyst doesn't understand response time, then there must be something abnormal with their brain, because everyone on the planet can relate to time.


Time is something everyone gets. You know, how long did it take you to drive to work? How long did it take you to do these things? We know the difference, in context, between what would be perceived as a long time and a short time, and your business analysts, your product managers, even your sales guys, should understand who they're competing with. If the competition takes 40 seconds to get credit processing done, boy, if you can do that in six seconds, that's tremendous. So going back to the executive who said, "Thou shalt have two seconds": you might be able to ask the business analysts, "Hey guys, do we really need two seconds?" And they may come back and say, "Guys, the competition can't even do it in 30 seconds." I don't know where the two seconds came from, but get your partner in the business to give you that stamp of approval, or some feedback to say, "Under 10 seconds is . . . "


You also know that end user perspective and context matter for what they're doing. If you're on an e-commerce site, you care about how many seconds until the "Buy" button shows up, right? I mean, sometimes we render everything beautifully and then the "Buy" button takes forever. Conversely, I talked with guys at STPCon last week who had trimmed so much content from the page that people weren't spending time on it, so they weren't selling. E-commerce has a whole different dynamic.


Go to the back office and look at reports, the report runs that we cache in Business Objects or pull out of the OLAP cube or the data warehouse. You may wait 20 minutes to get your report, but that report could be worth a couple million dollars to the company, you know? So it's all relative, and your business people should be thinking, "Who am I really serving? What is a long time for them, and what is a short time? Are we competing on this dimension of time in the marketplace, or even competing against the clock with our, you know, more-people-less-time back office kind of stuff?" So I think: time.


You know, ignore CPU, disk, memory, network, app servers, all the technology stuff you ever studied, and just spend time thinking about time and the different contexts for the real value of being fast. Which again goes back to the executive we mentioned saying, "Thou shalt have two seconds."


And if you're able to have as savvy a conversation as I just did, you'll walk into the CEO's office and blow their mind, like, "Oh my God, you're the new VP of performance. I didn't even know you could think about time this way." But there is no one absolute. I will say also there's a challenge with mobile and latency. That was my time at [inaudible 00:30:12], and I've known [inaudible 00:30:14] for so long; they've now been acquired by HP. They really did a lot of the state-of-the-art thinking around network impacts on functionality, on user perception, on customer satisfaction, on conversion rates.


They lived that more than anyone I've ever witnessed in terms of really impacting the business and impacting end users, and then mobile came along and, oh my gosh, latency is everywhere. So that's the other part: you have to think first about the human experience, the end user's context and what they perceive as performance, and only secondarily start digging into the physical implications of your environment for serving that experience to a customer.

We as technologists have power


Joe: Awesome, it's like working backwards: start with the customer and work your way back, almost.


Mark: Yeah, and the danger is we underestimate how much power we have as technologists. Because we can talk infinitely about technology, and it's amazingly fascinating, and we can make it sound like, oh my God, this is innovation, we're going to make millions. I mean, we can spend forever getting people excited, and they forget, "Oh yeah, there's a human interacting with the web page, and they're going to pay us money. That's how we're going to . . ." That's right, let's not lose sight of those guys, and that's why I think the onus is on us as engineers. We can talk about CPU, disk, memory later. In fact, you tell me what we learn about time, response time, performance from the end user, and I'll figure out what we do with CPU. Let me figure that out; that's my job. I'll take care of that, and if we need 100 servers or 100 nodes in the cloud or not, we'll figure that out. But I'm not going to worry about that first; let me worry about that secondarily.


And I would also say, as you know, we're collaborating: there's a wealth of knowledge in the PerfBytes series. We have 70-some episodes and we keep it entertaining, so even if you're just doing this in your leisure time, it is fun. But we do cover CPU, disk, memory, network; we talk about configurations, all the different practices, performance requirements, being on an agile team. There's also a bit of a debate between the historical views that James and I have and our observations of new companies and what their challenges are. So it does give you some context on the history of load testing.

[pullquote align="normal"]"What if?" is a wonderful, contemplative question.[/pullquote]

One piece of actionable advice you would give someone trying to improve their performance testing effort


Joe: Before we go, is there one piece of actionable advice you would give someone trying to improve their performance testing efforts?


Mark: Run a load test. If you're not running a load test today, run a load test. Pick up JMeter, pick up NUnitPerf or JUnitPerf, and get some unit tests running with multiple threads. I don't care where you run it, I don't care how you run it. Make it small; don't try to blow everything up at first. But just take that first step of, "Wow, what if I run multiples of this at the same time?" It's a beautiful question. "What if?" is a wonderful, contemplative question. It starts from a point of exploration, and that feeling, that moment where you get the freedom of "I'm going to go . . . what if I do this?" with multiple, concurrent activities, if that opens your mind and you're excited about it, that's the spark I look for when I'm mentoring people. They're like, "Wow, I never thought about that," and then they go and try to fuel that energy.


But really, the first actionable thing is just to pick up a tool and run a load test. The next actionable thing, if someone's already doing load tests and wants to go further, is, and this is really popular right now, learn the real world. Connect to the APM and see if you can make your load test relate to the real world. Figure out what real-world performance looks like. Those are two actionable things, I think: one for somebody coming from zero, and one for somebody who's been doing load testing but wants to go further.

Mark Tomlinson


For more automation awesomeness for your earbuds, check out the full TestTalks audio interview, episode 48, with Mark Tomlinson.

{"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
