Locust: An Open Source Performance Testing Tool

Locust running mode

Before running the Locust script, let’s take a look at the operating modes supported by Locust.

Locust has two running modes: single-process mode and multi-process distributed mode.

In single-process mode, all of Locust's virtual users run in a single Python process; this mode can be run either with the Web UI or in no_web (headless) form. Because it uses only one process, it cannot take advantage of all the processor cores of the load generator, so it is mainly used for debugging scripts and small-scale load tests.

Locust's multi-process distributed mode is used when the required load is high. Your first reaction might be to run multiple load generators at the same time, each contributing part of the load. Locust does indeed support distributed operation across any number of machines (one master, many slaves), but there is another case of multi-process distribution: running multiple slaves on the same machine. Most machines today have multiple processor cores, which a single process cannot fully exploit, but which can be put to work by running multiple slaves on one machine. A good practice is to start one master and N slaves on a load generator with N processor cores. You could start more than N slaves, but in my tests the effect is the same as with N, so N slaves are enough.
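The rule of thumb above (one master plus N slaves on an N-core load generator) can be sketched as a small helper that builds the launch commands. This is a hypothetical helper for illustration only; the flags match the Locust 0.x series used in this article, and the file name and host are placeholders:

```python
import multiprocessing


def build_locust_commands(locustfile="locustfile.py", host="http://example.com"):
    """Build one master command plus one slave command per CPU core.

    Hypothetical helper for illustration; --master/--slave are the
    Locust 0.x distributed-mode flags.
    """
    n = multiprocessing.cpu_count()
    master = ["locust", "-f", locustfile, "-H", host, "--master"]
    slaves = [["locust", "-f", locustfile, "-H", host, "--slave"]
              for _ in range(n)]
    return [master] + slaves


# One master plus N slaves, where N is the core count of this machine
commands = build_locust_commands()
```

Each command list could then be handed to `subprocess.Popen` (or simply run in N+1 terminal windows by hand, as most people do).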

Script debugging

Once the Locust script is written, it usually doesn’t go smoothly, and you need to debug and run it before you can start performance testing.

Although a Locust script is a Python script, it is difficult to run it directly as one. Why? Mainly because the script references the HttpLocust and TaskSet classes, and writing a startup harness to exercise them directly is not straightforward. For this reason, newcomers to Locust may find its scripts hard to debug.
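To make this concrete, here is a minimal locustfile in the Locust 0.x style discussed in this article (the URL path and wait times are illustrative). Running it with `python` directly does nothing useful, because the classes are only driven by the `locust` command:

```python
from locust import HttpLocust, TaskSet, task


class UserBehavior(TaskSet):
    @task
    def index(self):
        # Printing the response is a simple way to debug in no_web mode
        resp = self.client.get("/")
        print(resp.status_code)


class WebsiteUser(HttpLocust):
    task_set = UserBehavior
    min_wait = 1000  # wait time between tasks, in milliseconds
    max_wait = 3000
```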

This problem can be overcome with Locust's single-process no_web mode.

In single-process no_web mode, we can execute the script directly in the terminal by adding the --no-web parameter and specifying the number of concurrent users (-c) and the total number of requests (-n).

On this basis, when we want to debug a Locust script, we can add print or log statements at the points of interest, then set both the concurrency and the total number of requests to 1, as shown below.

$ locust -f locustfile.py --no-web -c 1 -n 1

In this way, we can easily debug the Locust script.

Perform the test

Once the Locust script has been debugged, you’re ready to stress test.

Locust is started by running a command in the terminal. The general parameters are as follows:

  • -H, --host: the host of the system under test; if it is not specified on the command line, the host attribute must be set in the HttpLocust subclass;
  • -f, --locustfile: the Locust script file to execute.

In addition to these two general parameters, we need to choose a Locust running mode based on the actual test scenario; the mode is selected with additional parameters.

Single process running

no_web

To run in no_web form, add the --no-web parameter along with the following parameters:

  • -c, --clients: the number of concurrent users;
  • -n, --num-request: the total number of requests to execute;
  • -r, --hatch-rate: the number of users spawned per second; the default is 1.
$ locust -H http://debugtalk.com -f demo.py --no-web -c 1 -n 2
[2017-02-21 21:27:26,522] Leos-MacBook-Air.local/INFO/locust.main: Starting Locust 0.8a2
[2017-02-21 21:27:26,523] Leos-MacBook-Air.local/INFO/locust.runners: Hatching and swarming 1 clients at the rate 1 clients/s...
 Name                              # reqs      # fails     Avg     Min     Max  |  Median   req/s
--------------------------------------------------------------------------------------------------
 Total                                  0     0(0.00%)                                       0.00

[2017-02-21 21:27:27,526] Leos-MacBook-Air.local/INFO/locust.runners: All locusts hatched: WebsiteUser: 1
[2017-02-21 21:27:27,527] Leos-MacBook-Air.local/INFO/locust.runners: Resetting stats

 Name                              # reqs      # fails     Avg     Min     Max  |  Median   req/s
--------------------------------------------------------------------------------------------------
 GET /about/                            0     0(0.00%)       0       0       0  |       0    0.00
--------------------------------------------------------------------------------------------------
 Total                                  0     0(0.00%)                                       0.00

 Name                              # reqs      # fails     Avg     Min     Max  |  Median   req/s
--------------------------------------------------------------------------------------------------
 GET /about/                            1     0(0.00%)      17      17      17  |      17    0.00
--------------------------------------------------------------------------------------------------
 Total                                  1     0(0.00%)                                       0.00

[2017-02-21 21:27:32,420] Leos-MacBook-Air.local/INFO/locust.runners: All locusts dead
[2017-02-21 21:27:32,421] Leos-MacBook-Air.local/INFO/locust.main: Shutting down (exit code 0), bye.
 Name                              # reqs      # fails     Avg     Min     Max  |  Median   req/s
--------------------------------------------------------------------------------------------------
 GET /                                  1     0(0.00%)      20      20      20  |      20    0.00
 GET /about/                            1     0(0.00%)      17      17      17  |      17    0.00
--------------------------------------------------------------------------------------------------
 Total                                  2     0(0.00%)                                       0.00

Percentage of the requests completed within given times
 Name                              # reqs    50%    66%    75%    80%    90%    95%    98%    99%   100%
--------------------------------------------------------------------------------------------------------
 GET /                                  1     20     20     20     20     20     20     20     20     20
 GET /about/                            1     17     17     17     17     17     17     17     17     17
--------------------------------------------------------------------------------------------------------

web

By default, Locust starts the Web UI on port 8089. If you want to use another port, specify the following parameter:

  • -P, --port: the Web port; the default is 8089.

$ locust -H http://debugtalk.com -f demo.py
[2017-02-21 21:31:26,334] Leos-MacBook-Air.local/INFO/locust.main: Starting web monitor at *:8089
[2017-02-21 21:31:26,334] Leos-MacBook-Air.local/INFO/locust.main: Starting locust 0.8a2

At this point, the Locust test has not yet started; it is started by configuring parameters on the Web page.

If Locust is running on the local computer, you can access the Locust Web management page by visiting http://localhost:8089. If Locust is running on another machine, just visit http://locust_machine_ip:8089 in your browser.

On the Locust Web management page, you need to set only two parameters:

  • Number of users to simulate: the number of concurrent users, corresponding to the -c, --clients parameter in no_web mode;
  • Hatch rate (users spawned/second): the rate at which virtual users are started, corresponding to the -r, --hatch-rate parameter in no_web mode.

After the parameters are configured, click Start swarming to begin the test.

Multi-process distributed running

Whether you use single-machine multi-process mode or multi-machine distributed mode, the procedure is the same: first run a master, then start multiple slaves.

To start the master, use the --master parameter; similarly, to use a port other than 8089 for the Web UI, add the -P, --port parameter.

$ locust -H http://debugtalk.com -f demo.py --master --port=8088
[2017-02-21 22:59:57,308] Leos-MacBook-Air.local/INFO/locust.main: Starting web monitor at *:8088
[2017-02-21 22:59:57,310] Leos-MacBook-Air.local/INFO/locust.main: Starting locust 0.8a2

After the master starts, you need to start the slave to perform the test.

To start a slave, use the --slave parameter; slaves do not need a port.

$ locust -H http://debugtalk.com -f demo.py --slave
[2017-02-21 23:07:58,696] Leos-MacBook-Air.local/INFO/locust.main: Starting Locust 0.8a2
[2017-02-21 23:07:58,696] Leos-MacBook-Air.local/INFO/locust.runners: Client 'Leos-MacBook-Air.local_980ab0eec2bca517d03feb60c31d6a3a' reported as ready. Currently 2 clients ready to swarm.

If the slave and master are not on the same machine, specify the IP address of the master by using the –master-host parameter.

$ locust -H http://debugtalk.com -f demo.py --slave --master-host=<locust_machine_ip>
[2017-02-21 23:07:58,696] Leos-MacBook-Air.local/INFO/locust.main: Starting Locust 0.8a2
[2017-02-21 23:07:58,696] Leos-MacBook-Air.local/INFO/locust.runners: Client 'Leos-MacBook-Air.local_980ab0eec2bca517d03feb60c31d6a3a' reported as ready. Currently 2 clients ready to swarm.

After the master and slaves are started, you can access the Locust Web management page at http://locust_machine_ip:8089. It is used in the same way as single-process web mode, except that the load is now generated by multiple processes. The number of connected slaves is also shown in the Web UI.

Test results presentation

During the Locust test, we can see the results running in real time on the Web interface.

Compared with LoadRunner, Locust's results are very simple, consisting mainly of four metrics: concurrency, RPS, response time, and failure rate. But for most scenarios, these metrics are sufficient.

On the Statistics page, the RPS and average response time values are calculated from the request data of roughly the last 2 seconds, so they can be understood as instantaneous values.
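The "instantaneous" statistic is essentially a sliding-window calculation. Here is a simplified stdlib-only sketch of that idea; it is not Locust's actual implementation, just an illustration of computing RPS over the last 2 seconds of request timestamps:

```python
from collections import deque


class SlidingWindowRPS:
    """Track request timestamps and report requests/second over the last
    `window` seconds. Illustrative only; Locust's real bookkeeping differs."""

    def __init__(self, window=2.0):
        self.window = window
        self.timestamps = deque()

    def record(self, now):
        self.timestamps.append(now)

    def current_rps(self, now):
        # Drop requests older than the window, then average what remains
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) / self.window


stats = SlidingWindowRPS(window=2.0)
for t in [0.1, 0.5, 1.0, 1.5, 2.5, 2.6]:
    stats.record(t)
print(stats.current_rps(3.0))  # only the 4 timestamps in [1.0, 3.0] count -> 2.0
```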

If you want to see how the performance metrics trend over time, check the Charts tab. There you can see how RPS and average response time fluctuate throughout the run. This feature was missing from Locust for a long time; it was contributed by my former colleague at Alibaba Mobile (online ID: myzhan) and has since been merged into Locust, so you can use it by updating to the latest version.

In addition to the above data, Locust also provides percentile statistics for the whole run, such as the 90% response time and the median response time, which can be obtained via Download response time distribution CSV.
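The downloaded CSV can be post-processed with a few lines of stdlib Python. The sample below imitates the distribution export (endpoint name, request count, then percentile columns); the exact header names are an assumption for illustration:

```python
import csv
import io

# Sample imitating Locust's "response time distribution" CSV export;
# the header names are an assumption for illustration.
sample = """\
Name,# requests,50%,66%,75%,80%,90%,95%,98%,99%,100%
GET /,1,20,20,20,20,20,20,20,20,20
GET /about/,1,17,17,17,17,17,17,17,17,17
"""


def percentile_column(csv_text, column="90%"):
    """Return {endpoint name: response time in ms} for one percentile column."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return {row["Name"]: int(row[column]) for row in reader}


print(percentile_column(sample, "90%"))  # {'GET /': 20, 'GET /about/': 17}
```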

Conclusion

Now that you are familiar with Locust's features and functions, you should have no problem using it as a productivity tool in real projects.

Of course, no tool is perfect, and Locust has its shortcomings. Fortunately, it is highly customizable and can easily be extended to meet specific requirements.

To escape CPython's GIL and gevent's monkey patching, myzhan rewrote the Locust slave in Golang, replacing gevent with goroutines. In his tests, the Golang implementation showed a five- to ten-fold performance improvement over the native Python implementation. He has open-sourced this work as myzhan/boomer. If you are interested, you can read his blog post on writing load testing tools in Golang.

If we want to do secondary development based on Locust, how do we get started?

Without a doubt, reading the Locust source code is an essential first step. For many people, reading the source of an open source project is daunting and it is hard to know where to start; you can find plenty of questions about this on Zhihu. In fact, the Locust codebase has a clean structure and a small amount of core code, making it easy to read and learn from. Even if you just want a first taste of reading open source projects, or want to brush up on your Python skills, Locust is a great place to start.

In the next article, I will analyze the Locust source code. Stay tuned!

GitHub project address

Stormer

Shameless plug

Welcome to follow my personal blog and WeChat official account.

  • Personal blog: debugtalk.com
  • WeChat official account: DebugTalk