Author: Roast chicken prince

Source: Hang Seng LIGHT Cloud Community

Preface

This year, the company’s developer conference was held online. As usual, to make sure the services would hold up, we needed to evaluate the performance of the whole system, so I did some last-minute cramming. The most commonly used tools are JMeter and Apache Bench, and I finally chose Apache Bench (ab for short). I have also put together this summary of the ab tool.

Introduction to ab

Open the official website and you will see the following paragraph:

ab is a tool for benchmarking your Apache Hypertext Transfer Protocol (HTTP) server. It is designed to give you an impression of how your current Apache installation performs. This especially shows you how many requests per second your Apache installation is capable of serving.

ApacheBench (ab for short) is the web stress-testing tool that ships with the Apache HTTP server. It is a command-line tool and places very low demands on the machine generating the load: ab can start many concurrent requests to simulate multiple visitors accessing a URL at the same time, so it can be used to put load on a target server. In general, ab is small and simple, quick to learn, and provides the basic performance indicators you need, but it has no graphical output and offers no real-time monitoring.
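
For instance, a minimal invocation looks like this (the URL below is only a placeholder for illustration, not a real service):

ab -n 100 -c 10 http://localhost:8080/api/health

This sends 100 requests in total, keeping 10 in flight at a time, and prints the statistics described later in this article.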

Comparison of JMeter and ab

There are plenty of comparisons online; the following points are collected from an expert’s summary (I will not elaborate; you can easily search for more).

1. JMeter handles the complete request and response, while ab only sends the request and does not process the response; it only records whether each request succeeded or failed. So in terms of accuracy JMeter is more accurate, while ab is faster and can generate more requests with minimal machine resources.

2. JMeter supports assertions, parameterized variables, and CSV data-set input, which allows more flexible test scenarios; ab does not.

3. JMeter can provide more detailed statistics, such as per-interface error information and per-thread request times; ab does not.

4. JMeter does not support load testing for a precise length of time, such as 10 minutes, while ab does (via -t).

5. JMeter supports distributed load-testing clusters and built-in functions; ab does not.

6. Resource consumption of the tool itself: JMeter is heavyweight and collects a lot of result data, so it takes more time and resources than ab. ab is extremely lightweight, which makes it well suited to single-interface load testing during development and testing.

Because this test only targets a single interface, and I happened to have a spare Linux machine on hand, I chose ab after weighing the options. Without further ado, here is how to use ab.

Using ab

The official documentation describes ab’s usage in detail; see:

Httpd.apache.org/docs/2.4/pr…

Below are some excerpts; the English is fairly straightforward, so I will not translate it.

ab parameters

ab [ -A auth-username:password ] [ -b windowsize ] [ -B local-address ] [ -c concurrency ] [ -C cookie-name=value ] [ -d ] [ -e csv-file ] [ -E client-certificate file ] [ -f protocol ] [ -g gnuplot-file ] [ -h ] [ -H custom-header ] [ -i ] [ -k ] [ -l ] [ -m HTTP-method ] [ -n requests ] [ -p POST-file ] [ -P proxy-auth-username:password ] [ -q ] [ -r ] [ -s timeout ] [ -S ] [ -t timelimit ] [ -T content-type ] [ -u PUT-file ] [ -v verbosity] [ -V ] [ -w ] [ -x <table>-attributes ] [ -X proxy[:port] ] [ -y <tr>-attributes ] [ -z <td>-attributes ] [ -Z ciphersuite ] [http[s]://]hostname[:port]/path

Parameter descriptions

  • -A auth-username:password

    Supply BASIC Authentication credentials to the server. The username and password are separated by a single : and sent on the wire base64 encoded. The string is sent regardless of whether the server needs it (i.e., has sent an 401 authentication needed).

  • -b windowsize

    Size of TCP send/receive buffer, in bytes.

  • -B local-address

    Address to bind to when making outgoing connections.

  • -c concurrency

    Number of multiple requests to perform at a time. Default is one request at a time.

  • -C cookie-name=value

    Add a Cookie: line to the request. The argument is typically in the form of a name=value pair. This field is repeatable.

  • -d

    Do not display the “percentage served within XX [ms] table”. (legacy support).

  • -e csv-file

    Write a Comma separated value (CSV) file which contains for each percentage (from 1% to 100%) the time (in milliseconds) it took to serve that percentage of the requests. This is usually more useful than the ‘gnuplot’ file; as the results are already ‘binned’.

  • -E client-certificate-file

    When connecting to an SSL website, use the provided client certificate in PEM format to authenticate with the server. The file is expected to contain the client certificate, followed by intermediate certificates, followed by the private key. Available in 2.4.36 and later.

  • -f protocol

    Specify SSL/TLS protocol (SSL2, SSL3, TLS1, TLS1.1, TLS1.2, or ALL). TLS1.1 and TLS1.2 support available in 2.4.4 and later.

  • -g gnuplot-file

    Write all measured values out as a ‘gnuplot’ or TSV (Tab separate values) file. This file can easily be imported into packages like Gnuplot, IDL, Mathematica, Igor or even Excel. The labels are on the first line of the file.

  • -h

    Display usage information.

  • -H custom-header

    Append extra headers to the request. The argument is typically in the form of a valid header line, containing a colon-separated field-value pair (i.e., “Accept-Encoding: zip/zop; 8bit”).

  • -i

    Do HEAD requests instead of GET.

  • -k

    Enable the HTTP KeepAlive feature, i.e., perform multiple requests within one HTTP session. Default is no KeepAlive.

  • -l

    Do not report errors if the length of the responses is not constant. This can be useful for dynamic pages. Available in 2.4.7 and later.

  • -m HTTP-method

    Custom HTTP method for the requests. Available in 2.4.10 and later.

  • -n requests

    Number of requests to perform for the benchmarking session. The default is to just perform a single request which usually leads to non-representative benchmarking results.

  • -p POST-file

    File containing data to POST. Remember to also set -T (a combined POST example appears after this parameter list).

  • -P proxy-auth-username:password

    Supply BASIC Authentication credentials to a proxy en-route. The username and password are separated by a single : and sent on the wire base64 encoded. The string is sent regardless of whether the proxy needs it (i.e., has sent an 407 proxy authentication needed).

  • -q

    When processing more than 150 requests, ab outputs a progress count on stderr every 10% or 100 requests or so. The -q flag will suppress these messages.

  • -r

    Don’t exit on socket receive errors.

  • -s timeout

    Maximum number of seconds to wait before the socket times out. Default is 30 seconds. Available in 2.4.4 and later.

  • -S

    Do not display the median and standard deviation values, nor display the warning/error messages when the average and median are more than one or two times the standard deviation apart. And default to the min/avg/max values. (legacy support).

  • -t timelimit

    Maximum number of seconds to spend for benchmarking. This implies a -n 50000 internally. Use this to benchmark the server within a fixed total amount of time. Per default there is no timelimit.

  • -T content-type

    Content-type header to use for POST/PUT data, eg. application/x-www-form-urlencoded. Default is text/plain.

  • -u PUT-file

    File containing data to PUT. Remember to also set -T.

  • -v verbosity

    Set verbosity level – 4 and above prints information on headers, 3 and above prints response codes (404, 200, etc.), 2 and above prints warnings and info.

  • -V

    Display version number and exit.

  • -w

    Print out results in HTML tables. Default table is two columns wide, with a white background.

  • -x <table>-attributes

    String to use as attributes for <table>. Attributes are inserted into the <table> tag here.

  • -X proxy[:port]

    Use a proxy server for the requests.

  • -y <tr>-attributes

    String to use as attributes for <tr>.

  • -z <td>-attributes

    String to use as attributes for <td>.

  • -Z ciphersuite

    Specify SSL/TLS cipher suite (See openssl ciphers)
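
To tie a few of these options together, here are some combined invocations of the kind I typically use; the URLs, file names, and header values below are placeholders for illustration, not the actual interface tested in this article:

# POST a JSON body: -p supplies the body file, -T sets the Content-Type
ab -n 1000 -c 50 -p data.json -T application/json http://localhost:8080/api/login

# add a custom header and reuse connections with HTTP keep-alive
ab -n 1000 -c 50 -k -H "Authorization: Bearer TOKEN" http://localhost:8080/api/list

# benchmark for a fixed length of time (60 seconds) instead of a fixed request count
ab -t 60 -c 50 http://localhost:8080/api/list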

Output descriptions

Server Software

The value, if any, returned in the server HTTP header of the first successful response. This includes all characters in the header from beginning to the point a character with decimal value of 32 (most notably: a space or CR/LF) is detected.

Server Hostname

The DNS or IP address given on the command line

Server Port

The port to which ab is connecting. If no port is given on the command line, this will default to 80 for http and 443 for https.

SSL/TLS Protocol

The protocol parameters negotiated between the client and server. This will only be printed if SSL is used.

Document Path

The request URI parsed from the command line string.

Document Length

This is the size in bytes of the first successfully returned document. If the document length changes during testing, the response is considered an error.

Concurrency Level

The number of concurrent clients used during the test

Time taken for tests

This is the time taken from the moment the first socket connection is created to the moment the last response is received

Complete requests

The number of successful responses received

Failed requests

The number of requests that were considered a failure. If the number is greater than zero, another line will be printed showing the number of requests that failed due to connecting, reading, incorrect content length, or exceptions.

Write errors

The number of errors that failed during write (broken pipe).

Non-2xx responses

The number of responses that were not in the 200 series of response codes. If all responses were 200, this field is not printed.

Keep-Alive requests

The number of connections that resulted in Keep-Alive requests

Total body sent

If configured to send data as part of the test, this is the total number of bytes sent during the tests. This field is omitted if the test did not include a body to send.

Total transferred

The total number of bytes received from the server. This number is essentially the number of bytes sent over the wire.

HTML transferred

The total number of document bytes received from the server. This number excludes bytes received in HTTP headers

Requests per second

This is the number of requests per second. This value is the result of dividing the number of requests by the total time taken

Time per request

The average time spent per request. The first value is calculated with the formula concurrency * timetaken * 1000 / done while the second value is calculated with the formula timetaken * 1000 / done

Transfer rate

The rate of transfer as calculated by the formula totalread / 1024 / timetaken
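
As a quick sanity check of these formulas, take the figures from the example run in the next section: 2000 requests at a concurrency of 200, at roughly 722.77 requests per second, give timetaken ≈ 2000 / 722.77 ≈ 2.77 s. The first Time per request value is then 200 * 2.77 * 1000 / 2000 ≈ 277 ms, which matches the reported 276.714 ms, and the second value (across all concurrent requests) is 2.77 * 1000 / 2000 ≈ 1.38 ms.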

Example

As you can see from the above, ab supports many parameters, but in practice we generally only need -n (total number of requests to send) and -c (number of concurrent requests).
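
A run like the one described below would be launched with something along these lines (the real interface address was hidden, so the URL here is just a stand-in):

ab -n 2000 -c 200 http://127.0.0.1:8080/api/xxx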

As can be seen from the example in the figure, this run used 200 concurrent requests and sent 2000 requests in total (the interface address has been hidden to avoid leaking sensitive information).

From the results, we can observe:

Requests per second was 722.77, i.e., 722.77 requests were served per second;

Time per request was 276.714, i.e., the average time per request was 276.714 ms;

In addition, the output also provides the 99th, 98th, 95th and other percentile response-time figures;

In particular, note that Non-2xx responses was 2000 while the total number of requests was also 2000, which means all 2000 requests returned non-2xx status codes; in that case you need to check whether the requests themselves are returning errors.