Performance Testing Basics – What is Throughput?

What is Throughput:

Basically, “throughput” is the number of transactions produced over time during a test. It can also be expressed as the amount of capacity that a website or application can handle. Before starting a performance test, it is common to have a throughput goal: a specific number of requests per hour that the application needs to be able to handle.

For example:

Let’s imagine that a gas station attendant fills up a car’s gas tank using a gas pump. Let’s also say that it always takes the gas attendant just one minute to fill up any car, no matter how big it is or how low the car’s gas tank is.

Let’s call this gas station “Joe’s Gas,” and envision that it only has three gas pumps. Naturally, with three gas pumps and a one-minute fill-up time, Joe’s attendants can fill up at most three cars per minute. So, if we were to fill out a performance report for Joe’s gas station, it would show that Joe’s throughput is three cars per minute.

This is Joe’s dilemma: no matter how many cars need gas, the maximum number that can be handled during a specific time frame is always the same – three. This is our maximum throughput; it is a fixed upper bound.
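
If it helps to see the arithmetic, here is a minimal sketch of the same calculation; the numbers are just the ones from the analogy, not from any real test.

```python
# Purely illustrative: the gas-station numbers from the analogy above.
pumps = 3                # three pumps, each acting like a "server"
minutes_per_fill = 1.0   # every fill-up takes exactly one minute

max_throughput = pumps / minutes_per_fill   # cars per minute
print(f"Maximum throughput: {max_throughput:.0f} cars per minute")   # prints 3
```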

As more vehicles enter the gas pump line, they are required to wait, creating a queue.

The same concept applies when we are testing a web application. If a web app receives 50 requests per second but can only handle 30 transactions per second, the other 20 requests end up waiting in a queue. When presenting performance test results, throughput is often expressed as transactions per second, or TPS.
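
To make the queueing effect concrete, here is a rough sketch of that scenario; the 50 and 30 are just the example numbers above, not measurements from any particular tool.

```python
arrival_rate = 50   # requests arriving each second
capacity_tps = 30   # transactions per second the app can actually complete

queue = 0
for second in range(1, 6):
    queue += arrival_rate                  # new requests arrive this second
    processed = min(queue, capacity_tps)   # the app clears at most 30 of them
    queue -= processed
    print(f"second {second}: processed {processed}, still waiting {queue}")
```

Every second the backlog grows by another 20 requests – exactly the line of cars waiting at Joe’s Gas.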

Real performance test throughput results:

I use HP’s LoadRunner (which comes with a throughput monitor) for performance testing. In a typical test scenario, as users begin ramping up and making requests, the throughput will increase as well.
Once all users are logged in and working in a steady state, the throughput will even out, since the load each user generates stays relatively constant. If we wanted to find an environment’s throughput upper bound, we would continue increasing the number of users. Eventually, after a certain number of users are added, the throughput will flatten out and may even drop. When throughput levels off or drops like this, it is usually due to some kind of bottleneck in the application.
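
A toy model of that ramp-up (my own illustration with assumed numbers, not actual LoadRunner output) shows why the curve flattens once the environment’s upper bound is reached.

```python
server_capacity_tps = 40   # assumed upper bound of the system under test
tps_per_user = 2           # assumed load each steady-state user generates

for users in range(5, 45, 5):
    offered = users * tps_per_user                 # load the users try to generate
    achieved = min(offered, server_capacity_tps)   # what the environment can deliver
    print(f"{users:2d} users -> offered {offered:2d} TPS, achieved {achieved:2d} TPS")
```

In a real test, that flattening (or a drop) is the signal to start looking for a bottleneck.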

A look at typical throughput results

Below are the LoadRunner throughput chart results for a 25-user test that I recently ran. Notice that once all 25 concurrent users are logged in and doing work, the throughput stays fairly consistent. This is expected.

Good Throughput Chart

Now notice what throughput looks like in a test that did not perform as well as the last example. All of the users log in and start working; once all users are logged in and making requests, you would expect the throughput to flatten out. Instead, we see it plummet. This is not good.

Bad Throughput Chart

As I mentioned earlier, throughput behavior like the example above usually points to a bottleneck. By overlaying the throughput chart with an HP Diagnostics ‘J2EE – Transaction Time Spent in Element’ chart, we can see that the bottleneck appears to be in the database layer:

Bad Throughput Chart with HP Diagnostics

In this particular test, requests were being processed by the web server, but in the back end, work was being queued up due to a database issue. As additional requests were sent, the back-end queue kept growing, and users’ response times increased. To learn more about HP Diagnostics, check out my video on how I configured LoadRunner to capture these metrics: HP Diagnostics – How to Install and Configure a Java Probe with LoadRunner.

Recap

To recap: throughput is a key concept for good performance testers to understand, and it is one of the top metrics used to measure how well an application is performing. I’ve also written some other posts on concepts that a performance test engineer should know about.

Extra
For more detailed info on performance testing, make sure to grab a copy of Performance Analysis for Java(TM) Websites.

77 comments
TM - August 15, 2011

Excellent. It helped me to understand with the simple examples and the visual graphs.

    Joe Colantonio - August 15, 2011

    TM » Thanks! I really appreciate your feedback. It helps me in planning what types of future posts I should focus on. Cheers~Joe

      Haris - February 2, 2013

      Hi Joe, thank you for the explanation on throughput. I really like the gas station analogy. I have another basics question though. How would you define SLAs when developing a performance test plan? How much throughput is good, what is acceptable CPU on servers etc. I know there is no one size fits all, but if you can give some examples similar to the gas station analogy, would appreciate it. Thanks.

        Joe Colantonio - February 3, 2013

        Haris » Thanks Haris! I’ll try to see what I can put together for an SLA post. Thanks for the idea ~Cheers Joe

shailesh - December 19, 2011

Hi,
I would like to know: what is pacing? Why should pacing be given between iterations, and what happens if pacing is not given?

Thanks in advance.

    Joe Colantonio - December 22, 2011

    shailesh » Hi Shailesh – Good question – I think this guy can explain it better than I can. Hope it helps. Cheers~Joe

    earnest - January 18, 2012

    Shailesh,
    Pacing is the time you wait between iterations.

    BTW, this is a great article Joe.

      Joe Colantonio - January 18, 2012

      Thanks Shailesh! I like your pithy definition for pacing :) Cheers~Joe

Paul - January 11, 2012

Thanks for the article. It’s really helpful!!

    Joe Colantonio - January 11, 2012

    Paul » Thanks Paul – I’m glad it helped you! Cheers~Joe

kumar - February 22, 2012

Nice article. I do have one question. I was going through the LoadRunner training manual and it mentions, “If throughput has flattened out while the number of Vusers increases, there is likely a bandwidth issue.” But I feel your explanation is more correct. Can you please share your views on this?

Mathews - April 4, 2012

Hi Joe, I would like to know what techniques or methods can be used to evaluate throughput in terms of computing? [I am a Computer Science Honours student (South Africa), currently doing something on throughput evaluation.]

    Joe Colantonio - April 9, 2012

    Mathews » Hi – First I would use a performance test tool like LoadRunner, SOASTA, Grinder, or JMeter to place a load against the test system. I would start with one user and increase the user load in small increments until I found the throughput plateau. I consider a plateau to be the point where either the throughput really falls off or the response time is well over what is acceptable for my requirements. That’s it in a nutshell – this may be too basic for your needs.

fazal - April 12, 2012

I want to know what throughput out and throughput in are.
One thing I must tell you: your way of explaining is so good.

    Joe Colantonio - April 12, 2012

    fazal » Thanks Fazal.

Abhinav - May 15, 2012

Thanks Joe for this wonderful post, it helped me a lot.
It is clear to me, but I just wanted to confirm: throughput (which is also a term in the Aggregate Report listener of Apache JMeter) is the same as the processing rate of requests, and a high throughput means high performance, right?

    Joe Colantonio - May 23, 2012

    Abhinav » Hi Abhinav, yes, the throughput in the JMeter Aggregate Report is the same. Not sure I would go as far as to say that high throughput = high performance. It’s one factor among many that need to be looked at when performance testing an application. Cheers~Joe

nikhil - July 2, 2012

Hi everyone…

What is ad hoc testing, and when and why is it done?

Thanks,
Nikhil

    Joe Colantonio - July 7, 2012

    nikhil » Hi Nikhil – Ad hoc testing is a commonly used term for software testing performed without planning and documentation. It is an informal test and is usually only run once.

Abhishek - August 21, 2012

Excellent Stuff. Finally I am able to understand throughput now.

    Joe Colantonio - August 22, 2012

    Abhishek » Thanks Abhishek! Let me know if there are other performance concepts you want me to blog about. Cheers~Joe

Murali - September 20, 2012

Excellent article Joe, it made things easy for me.
How many users should I use if my target is 0.7 TPS?

And is it different if I call a web service via SoapUI?

    Joe Colantonio - September 25, 2012

    Murali » Thanks Murali – it’s hard to tell without knowing how much wait time you have between transactions and what your response time is for one user. You should try to get a handle on how many users the business expects, and then try to create a realistic scenario emulating that to check your TPS. Good Luck~Joe
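
One rough way to ballpark a user count for a TPS target like this is Little’s Law: concurrent users ≈ target TPS × (response time + think time). Here is a minimal sketch with assumed times; the response and think times are illustrative, not numbers from Murali’s system.

```python
target_tps = 0.7        # Murali's target transactions per second
response_time_s = 2.0   # assumed average response time per transaction
think_time_s = 10.0     # assumed wait/think time between transactions

# Little's Law: concurrent users = throughput * time each user spends per cycle
users_needed = target_tps * (response_time_s + think_time_s)
print(f"Roughly {users_needed:.1f} concurrent users to sustain {target_tps} TPS")
```

With those assumptions it works out to roughly eight or nine users; different think or response times change the number accordingly.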

Dzmitry Kashlach - September 27, 2012

Thank you, Joe! I like the comparison with the gas station.
BTW, what do you advise starting from when analyzing results?
Do you use the same approach as in the article below?
http://blazemeter.com/blog/how-analyze-results-large-scale-load-test

waasay - October 5, 2012

Good article. It is very helpful.
Thanks Joe.

Sajid - October 17, 2012

Hi Joe,
Thanks a lot for providing this nice information. It’s the best article I have ever gone through, with very simple and illustrative examples. It’s really helpful. Can you please publish a few more similar articles on the most important LoadRunner monitors, like Hits per Second, Average Transaction Response Time, etc.?

    Joe Colantonio - October 18, 2012

    Sajid » Thanks Sajid! I’ll try to write some more posts similar to this. I usually only write about what I’m currently working on, but I will try to focus more on LR again. Cheers~Joe

Shiv Raj Sharma - December 19, 2012

Hi Joe,

Very nice article with a lot of information. Please provide more information on performance testing with LR.

Anoop - December 26, 2012

JOE>> This is a very good portal for performance testing. Well Done!!!!!!!!

Vinod - January 24, 2013

Appreciative – very informative :-)

I am using LoadRunner 9.1. How do we get the HP Diagnostics ‘J2EE – Transaction Time Spent in Element’ graph, which would show the database/app server in our case?

Shinto Abraham M.A - January 29, 2013

Thanks a lot… Joe’s Gas gave me a good understanding of throughput.

Syed - February 10, 2013

Amazing! You are doing a great job for learners.

Srikanth - February 18, 2013

Thanks Joe for the information pertaining to throughput.
I have one query: after the test is done we can get the total throughput value in bytes, and my question is whether we can get the throughput value for a single request. (For example, for a single login request, how can we measure the throughput value?)
Thanks in advance.

Mukundan - March 6, 2013

Great Article Joe.

Ravi Ranjan - April 26, 2013

Hi Joe,
Appreciate the way you explained Throughput.
I have one question:
What is the difference between think time and pacing? If we give think time before “return 0” in the script, then it will also work like pacing, so what’s the difference?

Sharath - May 15, 2013

Excellent Joe, you have simplified the concept for better understanding.

celine - May 17, 2013

Thanks Joe, it is really useful for understanding what throughput is.

celine - May 17, 2013

Thanks, really useful.

tester - May 28, 2013

Hey,
I wanted to know: if I increase the threads in each run and also the ramp-up time,
i.e. 1st run: threads: 5, ramp-up time: 10 sec
2nd run: threads: 10, ramp-up time: 20 sec
will the throughput and bandwidth also increase?

Shraddha - June 12, 2013

Very informative article!!! Thanks Joe!

I have a query: we are using LR11. Could you please tell me how to calculate pacing time and what its significance is? Should we include think time in the pacing time or not?

Thanks!

Anukrrosh - June 13, 2013

Hi Joe,
It’s an excellent post, and it helped me a lot. Could you please explain the relation between hits/sec, response time, and throughput? I am seeing a lot of people expressing different views about this.

bigb - July 18, 2013

Very nice example… a good way to present the basic concepts.

Preetha - August 26, 2013

Thank you so much. Really useful information.

Ramamurthy Pujari - September 1, 2013

Hi Joe,

I read the article; you explained throughput very well. Thank you so much. But I did not understand what the database issue is, or which database counter you measured for throughput. If you don’t mind, can you elaborate in detail on “back end work was being queued up due to a database issue”?

Regards,
Ramamurthy P

    Joe Colantonio - September 2, 2013

    Hi Ramamurthy – if I remember correctly, the issue was that the web server was sending requests to the DB, but the DB was too busy trying to fulfill previous requests. Because of this, all the requests were building up in a queue. The web server’s Processor Queue Length kept increasing, which was a clue to me that something was happening on the DB side. I had a DB administrator monitor the DB as the test ran to pinpoint what was happening.

      senthil - December 27, 2013

      Hi, I need the full process of performance testing and how to do that for a single user and 5 users. Thanks

anitha - September 25, 2013

Hi Joe,

Nice blog…
I have a question… If the hits per second are high and the throughput is low, what does it mean?

    Ramamurthy - October 24, 2013

    Joe – Thank you so much for the reply.
    Anitha – Below are some points that cause low throughput:

    1. Network bandwidth constraint: hits/sec increases as vusers increase, so throughput should also increase; if throughput decreases instead, it means a shortage of network bandwidth, heavy utilization, or crossing the bandwidth limit.

    2. Thread pools configured with too few threads.
    3. CPU utilization reaching more than 90%.

    Please correct me if I am wrong on the above points regarding throughput decreasing even though hits/sec increases.

    Regards,
    Ramamurthy P

Anandh - December 5, 2013

I have one doubt in LoadRunner:
if we increase the throughput, should the response time take more time?

    Joe Colantonio - December 17, 2013

    Depends – in general I would say eventually yes; as your throughput goes up, response time usually goes up as well once the server’s resources start to get heavily utilized.

Gaurav siwach - February 1, 2014

Hi Joe

Whenever you do performance testing for a website,

which of the following types of testing do you normally perform in every iteration?

Load
Stress
Scalability
Volume
Endurance

BTW, awesome website man.

    Joe Colantonio - February 23, 2014

    Hi Gaurav – great question, but it’s hard to answer other than “it depends.” I think at a minimum you want to do some kind of load test to make sure that performance has not changed from the previous release. Also, you want to do performance testing as early as possible.

Shafee - February 17, 2014

Very good article… thanks Joe. I am a very big fan of all your posts.

Manju - June 19, 2014

Nice post

vijaya - July 10, 2014

Joe… Thank you very much for this excellent article.

Akshay - July 30, 2014

Hi Joe,

That was a nice post on throughput, but one thing I am not clear about is the difference between throughput and hits/sec. Are these the same? If not, what is the difference?

Cheers
Akshay

Pratyusha - August 5, 2014

Hi Joe,
I am doing performance testing for an application where I need to check system behavior based on the number of records generated. For example, I have manually verified that from 1-Apr-2014 to 30-Apr-2014 the number of records generated is 35,000, and from 3-Apr-2014 to 15-Apr-2014 it is 18,000. So for me, increasing the thread count probably won’t solve the purpose. What I am doing is changing the date ranges in my script and noting the performance. Is there a better way to do this? I am using JMeter.

Rahul - August 13, 2014

Hi Joe,

I am new to performance testing. I want to learn scripting in LoadRunner. I have intermediate knowledge of the C language. Could you please let me know what my next step should be (resources or websites I should refer to, any dumps or examples I can refer to)? Also, what should I refer to in order to learn performance tuning?

Thanks in advance for your help! Your posts are really helpful…

Regards,
Rahul

    Joe Colantonio - April 18, 2015

    Hi Rahul – I would first check out the podcast PerfBytes, which is hosted by Mark Tomlinson and James Pulley. Between the two of them they have more than 40 years of performance testing experience.

Vinay - October 29, 2014

I want to know if throughput in JMeter includes network latency, or is it just pure server throughput?
I know ideally we should run perf tests from the LAN to avoid network latency, but I am testing a web app which is hosted on the cloud, so I can’t skip network latency.

Naz110 - December 23, 2014

Hi Joe,

I have been reading through the QAs and finding it very helpful. I just started to learn about performance testing.
I am looking for a good blog/video on correlation and parameterization. Can you suggest some?

Thanks

SNEHA - July 1, 2015

Very specific and easy to understand. Helped me a lot.
Thanks and keep it going!!
