Recently we received a request from the product team to load balance an interface. When I heard this request, my heart sank: there is only one instance of the interface, so how could we load balance it? Load balancing needs at least two!

Eventually I understood what the product team was actually after: they wanted to make the interface more robust, because it had gone down under the volume of online requests. In short: just don't let the program die.

When working on this requirement, I first ran a simple performance test on the interface, which is recorded here.

Performance testing tools

  • JMeter
  • JProfiler

Test metrics

  • Response time (RT): the time from when the client sends a request until it receives the last byte of the response.
  • Queries per second (QPS): the number of requests the server processes per second.
  • Throughput: the number of user requests the system processes in a unit of time.
  • Others…

JMeter configuration

For this interface performance test, the components and processors I added in JMeter are as follows.

All components and processors
  • Thread group: the number of threads initiating requests, the loop count, and other parameters
  • HTTP request: the request protocol, server, port, path, and request parameters
  • View results tree: records the response information of each request
  • Response time graph: charts the response time of requests
  • Summary report: automatic statistics over the test data, including sample count, average/minimum/maximum response time, error rate, throughput, received data, etc.
  • Generate summary results: mainly used to measure the total request time; its other metrics overlap with the summary report and are not covered here

Performance test for single-threaded requests

To first measure the average response time of the interface, I configured one thread in the thread group, looping 100 times. The ramp-up period sets the time window within which all threads are started; there is no need to change it here.

Thread Group configuration
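As a side note, the ramp-up period spaces out thread starts: with N threads and a ramp-up of R seconds, JMeter starts roughly one thread every R/N seconds. A minimal sketch of that spacing rule (the class and method names here are illustrative, not JMeter APIs):

```java
public class RampUp {
    // Start offset (in milliseconds) of thread i (0-based), given
    // the thread count and the ramp-up period in seconds.
    static long startOffsetMillis(int threadCount, int rampUpSeconds, int i) {
        return (long) i * rampUpSeconds * 1000 / threadCount;
    }

    public static void main(String[] args) {
        // 10 threads over a 5-second ramp-up: one thread every 500 ms
        for (int i = 0; i < 10; i++) {
            System.out.println("thread " + i + " starts at "
                    + startOffsetMillis(10, 5, i) + " ms");
        }
    }
}
```

With a single thread, as in this test, the ramp-up period has no practical effect, which is why it can be left at its default.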

To configure the response time graph, first set an output file path, then choose an appropriate interval (different intervals render the graph differently); I set it to 10 milliseconds.

Response time diagram configuration

The results tree is not important for response-time testing, and the summary report and summary results need no configuration.

After completing the above configuration, click the start button and wait for the test to finish. First, open the chart in the response time graph.

Response time diagram

As you can see from the graph above, the average response time per request is around 40ms.

At the same time, you can see the data results of this test in the summary report interface.

Summary report

As the table shows, the total sample count is 100, the average response time is 41ms, the minimum is 28ms, the maximum is 135ms, the error rate is 0.00%, and the throughput is 23.7 requests per second (ignoring the packet statistics).

Since this test was executed sequentially by a single thread, we can conclude that the average response time of the interface is 41ms when only one user requests it at a time.

The single-user test above is little more than a trial JMeter demo: in practice an online system never has just one user, so the conclusion above does not represent the interface's real performance. We need to find a suitable optimal concurrency level and run concurrent tests.

Performance testing of concurrent requests

Change the loop count in the thread group to 10 (to shorten the test time), then vary the number of threads across multiple runs. The results are shown in the table below.

Threads   Samples   Total request time (s)   Average RT (ms)   Throughput (req/s)   QPS (req/s)
1         10        1                        60                16.6                 10
10        100       2                        145               64.5                 50
20        200       3                        270               67.7                 66.7
30        300       4                        349               79.4                 75
40        400       6                        511               68.7                 66.7
50        500       8                        735               64.4                 62.5
60        600       11                       970               57.6                 54.5

QPS = number of samples / total request time. Since this is a query interface, QPS and TPS can be treated as the same.
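The formula can be sanity-checked against the table above; a quick sketch, with the row values copied from the table and the total time in seconds:

```java
public class QpsCheck {
    // QPS = samples / total request time (time in seconds)
    static double qps(int samples, double totalSeconds) {
        return samples / totalSeconds;
    }

    public static void main(String[] args) {
        System.out.println(qps(500, 8));   // 50-thread row: 62.5
        System.out.println(qps(600, 11));  // 60-thread row: ~54.5
    }
}
```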

As the table shows, at 60 threads the throughput and QPS fall below the thread count, indicating that a bottleneck has been reached; increasing concurrency further will only lower performance. The optimal concurrency for this interface is therefore around 50.

Meanwhile, the throughput first rises and then falls; its peak, 79.4 requests per second, is the maximum throughput.

Finally, I tested the maximum concurrency by continuing to increase the thread count. When the program starts throwing Errors and the error rate in the summary report is no longer 0.00%, the maximum concurrency has been exceeded. Through a rough binary search, I found the maximum concurrency of this program to be 160.
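That rough binary search can be sketched as a standard binary search over thread counts. Here survives() is a hypothetical stand-in for one JMeter run at a given thread count, returning whether the error rate stayed at 0.00%; in practice each probe was a manual run:

```java
public class MaxConcurrency {
    // Mock of one JMeter run: did the program survive this thread count?
    // (Illustrative stand-in: the real check is the summary report's error rate.)
    static boolean survives(int threads) {
        return threads <= 160;
    }

    // Largest thread count in [lo, hi] for which survives() is true
    static int maxConcurrency(int lo, int hi) {
        while (lo < hi) {
            int mid = (lo + hi + 1) / 2;
            if (survives(mid)) lo = mid; else hi = mid - 1;
        }
        return lo;
    }

    public static void main(String[] args) {
        System.out.println(maxConcurrency(60, 400)); // prints 160
    }
}
```

Each probe halves the remaining range, so pinning down 160 takes only a handful of test runs.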

Summary

This is where the JMeter-based performance tests end. The main metrics involved are average response time (RT), throughput, queries per second (QPS), optimal concurrency, maximum throughput, and maximum concurrency.

Going back to the original requirement of identifying the cause of the outage: the log clearly stated that it was due to a heap overflow.

In the test above, the maximum concurrency of this program was only 160 because its initial heap size was only 256MB. Adding the -Xms512m startup flag to raise the initial heap size to 512MB also raises the maximum concurrency; under the same load there will be no heap overflow, and even if the number of concurrent calls doubles, the outage will not recur (unless the caller retries several thousand times and concurrency grows several-fold…).
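Assuming, as the paragraph above suggests, that the sustainable concurrency scales roughly linearly with heap size, a back-of-envelope estimate looks like this (the linear-scaling assumption and the 512MB projection are illustrative, not measured):

```java
public class HeapEstimate {
    // Linear extrapolation from a measured baseline:
    // concurrency ~ heap size (a rough assumption, not a guarantee)
    static int estimatedMaxConcurrency(int heapMb, int baselineHeapMb,
                                       int baselineConcurrency) {
        return heapMb * baselineConcurrency / baselineHeapMb;
    }

    public static void main(String[] args) {
        // Measured baseline from the test above: 256MB heap -> 160 concurrent
        System.out.println(estimatedMaxConcurrency(512, 256, 160)); // prints 320
    }
}
```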

Summary again

After writing up this performance test, I have a few more takeaways to share:

First, before testing, learn the configuration of the online JVM, server, and program environment, so the local setup simulates production as closely as possible.

Second, the concurrency level. Ideally the online environment has PV statistics, so we know the traffic at the time of the outage; that number can then be filled in directly as the JMeter thread count, again to simulate the real online environment.

Third, consider code-level optimizations. For example, configuration items in a config file can be parsed into a configuration class once at program startup, instead of re-parsing the file on every use (I don't know why the project was written that way; I will optimize it in the next few days).
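A minimal sketch of that optimization, assuming a key-value style config file (the class, keys, and values here are hypothetical, not the project's actual code): parse once in a static initializer, then serve all lookups from memory.

```java
import java.util.HashMap;
import java.util.Map;

public class AppConfig {
    // Populated exactly once, when the class is first loaded
    private static final Map<String, String> VALUES = loadFromFile();

    // Stand-in for the project's real parsing code, which would read
    // and parse the configuration file here
    private static Map<String, String> loadFromFile() {
        Map<String, String> m = new HashMap<>();
        m.put("db.pool.size", "20"); // hypothetical entry
        return m;
    }

    public static String get(String key) {
        return VALUES.get(key); // no file I/O on the hot path
    }

    public static void main(String[] args) {
        System.out.println(AppConfig.get("db.pool.size")); // prints 20
    }
}
```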

Fourth, and most importantly: when running performance tests, never connect to the production database (normal companies isolate the production and development environments) or to third-party interfaces, or you may blow up production.


Copyright notice: This article was created by Planeswalker23. Please include a link to the original when reprinting. Thanks.

This article was typeset using MDNICE