JMeter throughput may be misleading: it includes local processing time on the load generator.

To be clear, I don't normally use JMeter for load testing myself. This story started when I saw a colleague's JMeter test report and noticed, by chance, that the throughput figures were far off from what the other numbers implied. The results were as follows:

Under the classical model, throughput (TPS or QPS) should equal the number of concurrent threads divided by the average response time:

TPS = threads / AVG(t)

or, equivalently, the number of completed requests divided by the total elapsed time:

TPS = COUNT(requests) / T

Take the first case: an average response time of 593 ms at 100 concurrent threads gives a calculated throughput of 168.63, against JMeter's reported 166.4; the error is almost negligible. Now take the third case: 100 concurrent threads with an average response time of 791 ms gives a calculated throughput of 126.42, while JMeter reports only 92.3, which is already a large error.
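The arithmetic is simple enough to check directly. Here is a minimal Groovy sketch of the classical-model calculation, using only the figures quoted above; nothing JMeter-specific is involved:

```groovy
// Classical-model throughput: concurrent threads / average response time (s).
def classicTps = { int threads, double avgResponseSeconds ->
    threads / avgResponseSeconds
}

println classicTps(100, 0.593)   // ≈ 168.6, close to JMeter's reported 166.4
println classicTps(100, 0.791)   // ≈ 126.4, far from JMeter's reported 92.3
```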

To find out why the error was so large, I looked into my colleague's test plan and found that in the third case he used quite a few regular-expression matches to validate the response body. Was JMeter spending so much time processing the response that it skewed the throughput? This reminded me of my earlier articles on correcting load-test results with microbenchmarks and on reducing the load generator's own (native) error in performance testing.

So let's run an experiment to verify this. First I wrote a simple script: a single thread making the request 10 times, to see the result:

Looking at the results: the average response time is 207 ms with a single concurrent thread, so the calculated throughput is 4.83; JMeter reports 4.8, in line with expectations.
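For anyone who wants to reproduce the baseline without JMeter, here is a minimal stand-alone Groovy sketch; the endpoint URL is a placeholder, and the timing logic simply mirrors what the single-threaded script does:

```groovy
// Single thread, 10 sequential requests, no extra local work.
// http://localhost:8080/ping is a hypothetical test endpoint.
def url = new URL('http://localhost:8080/ping')
int requests = 10
long totalResponseMs = 0
long start = System.currentTimeMillis()

requests.times {
    long t0 = System.currentTimeMillis()
    url.openConnection().with {
        connect()
        inputStream.text            // read and discard the response body
    }
    totalResponseMs += System.currentTimeMillis() - t0
}

long wallMs = System.currentTimeMillis() - start
printf('average response: %d ms%n', totalResponseMs.intdiv(requests))
printf('throughput      : %.2f req/s%n', requests * 1000.0 / wallMs)
// With no local work between samples, throughput ≈ 1 / average response time.
```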

Then I added a Groovy post-processor that simply puts the thread to sleep for 500 ms (roughly the sketch below), and again made 10 requests with a single thread.
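A JSR223 (Groovy) PostProcessor along these lines is all it takes; this is a minimal sketch rather than the exact script, and the key point is that it runs on the load generator after each sample completes, outside the measured response time:

```groovy
// JSR223 PostProcessor (Groovy): runs after every sample on the load generator,
// so its cost is not part of the measured response time, only of the loop time.
sleep(500)   // Groovy's sleep(ms): pause this sampler thread for 500 ms

// In my colleague's plan this slot held regex checks against the response,
// e.g. something like: assert prev.getResponseDataAsString() ==~ /.*success.*/
```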

Looking at the results: the average response time is 193 ms, close to the first run, but the throughput JMeter gives is 1.5, a huge error.

So where does the 1.5 come from? Add the 500 ms sleep to the 193 ms response time (only 9 of the 10 sleeps fall inside the measured window, so count 500 × 9/10 = 450 ms per request), and the calculation comes out at roughly 1.55, in line with the 1.5 JMeter gives. The conclusion: JMeter counts the machine's local processing time when it calculates throughput.
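Here is the same arithmetic in Groovy; the 193 ms and 500 ms figures are taken from the run above, and the only assumption is that the sleep after the last response is not counted:

```groovy
// 10 requests at ~193 ms each, plus a 500 ms sleep after each sample; the sleep
// after the final response falls outside the measured window, so only 9 count.
double wallSeconds = 10 * 0.193 + 9 * 0.5    // ≈ 6.43 s of total loop time
println 10 / wallSeconds                     // ≈ 1.55 req/s, matching JMeter's 1.5
```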

In other words, JMeter's average response time is measured the normal way, from the moment a request is sent to the moment its response is received, but its throughput is not: the entire loop time of each thread on the load generator is used as the basis for the throughput calculation.

So if a thread does anything else, such as regex extraction, parameter validation, or variable assignment, the reported throughput drops. Worse, once local processing time grows, the actual pressure applied to the server is lower than the configured pressure. If the load generator is under heavy load, or a wait is hidden somewhere, the reported throughput is effectively fake data.
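To put numbers on that last point, here is a sketch of the pressure a thread pool actually applies once local work is counted; the 100 ms and 500 ms overheads are illustrative assumptions, not measurements:

```groovy
// Effective pressure: each loop iteration costs response time + local work
// (regex extraction, assertions, post-processors, waits) on the load generator.
def effectiveTps = { int threads, double avgRespSec, double localSec ->
    threads / (avgRespSec + localSec)
}

println effectiveTps(100, 0.593, 0.0)   // ≈ 168.6: the pressure you think you apply
println effectiveTps(100, 0.593, 0.1)   // ≈ 144.3: 100 ms of local work per loop
println effectiveTps(100, 0.593, 0.5)   // ≈ 91.5: heavy local work, far below target
```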

For fixing this, see the scheme I described earlier for correcting load-test results with microbenchmarks, and the approach to reducing the load generator's own (native) error in performance testing.
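Independently of that scheme, a rough sanity check can be run on any finished report, assuming the classical model holds; this is my own sketch, not the method from those articles:

```groovy
// Estimate the load generator's overhead from a finished report: per-loop time
// is threads / reported TPS, and anything beyond the average response time was
// spent locally. The figures below are the third case from the report above.
int threads = 100
double avgRespSec = 0.791
double reportedTps = 92.3

double loopSec  = threads / reportedTps      // ≈ 1.083 s spent per thread loop
double localSec = loopSec - avgRespSec       // ≈ 0.292 s of local work per loop
double idealTps = threads / avgRespSec       // ≈ 126.4 without that overhead

printf('local overhead per loop: %.0f ms, corrected throughput: %.1f%n',
        localSec * 1000, idealTps)
```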

Whenever large errors like these show up, the results must be corrected; otherwise the report is built on fake data.

