Pipeline

Below is the effect of running 6,000 SET operations against Redis with and without a pipeline. With the pipeline it takes less than 1 second; without it, about 4 seconds.

Source code address: github.com/qiaomengnan…

Message interaction between the client and the server

When the client interacts with the Redis server, the client sends a command, the server processes it and returns the result, and the client reads that result. One operation is complete, and it costs one network round trip.

Executing multiple commands in a row therefore costs multiple network round trips.

From the client's perspective, two requests mean write -> read, write -> read: four operations in total. The write sends the command; the read waits for the server's response, and that wait is where the time goes. If the first read takes 0.3 seconds and the second read takes another 0.3 seconds, they add up to 0.6 seconds.

Changing the interaction

If we execute the two commands set x1 1 and set x2 2 to store two values, the two operations are clearly unrelated, so we can send both commands together and then read both return values, saving the blocking time of reading the first result in between. In other words, we adjust the order: write -> read, write -> read becomes write, write -> read, read.

The two consecutive writes and two consecutive reads then take only one network round trip: the writes are merged into one, the reads are merged into one, and the total read wait drops to 0.3 seconds. A Redis pipeline really just changes the write/read order, and that is what makes it faster.
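The reordering above can be sketched with plain sockets and a toy line-based server standing in for Redis (the protocol and server here are simplified stand-ins, not the real RESP protocol). The naive client does write -> read per command; the pipelined client sends all commands in one write, then reads all replies. Each recv() on the server models one network round trip:

```python
import socket
import threading

def serve(conn, n_cmds, trip_counter):
    """Reply +OK to n_cmds newline-delimited commands.
    Each recv() call models one network round trip."""
    handled = 0
    buf = b""
    while handled < n_cmds:
        buf += conn.recv(4096)
        trip_counter[0] += 1
        lines = buf.split(b"\n")
        buf = lines.pop()                     # keep any partial command
        for _ in lines:
            conn.sendall(b"+OK\n")
            handled += 1
    conn.close()

def run(pipelined, n_cmds=5):
    client, server = socket.socketpair()
    trips = [0]
    t = threading.Thread(target=serve, args=(server, n_cmds, trips))
    t.start()
    cmds = [b"SET x%d %d\n" % (i, i) for i in range(n_cmds)]
    replies = []
    if pipelined:
        client.sendall(b"".join(cmds))        # one write for all commands
        buf = b""
        while buf.count(b"\n") < n_cmds:      # then one read phase
            buf += client.recv(4096)
        replies = buf.split(b"\n")[:n_cmds]
    else:
        for c in cmds:                        # write -> read, per command
            client.sendall(c)
            replies.append(client.recv(4096).rstrip(b"\n"))
    t.join()
    client.close()
    return replies, trips[0]

naive_replies, naive_trips = run(pipelined=False)   # 5 round trips
piped_replies, piped_trips = run(pipelined=True)    # typically 1
```

Both variants get identical replies; only the number of round trips differs.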

The principle

Changing the write and read order greatly improves speed; the fundamental reason is that it reduces the overhead of multiple network round trips.

The figure above shows the process of initiating a complete operation. The steps are as follows:

  1. The client calls write to copy the message into the send buffer that the operating system kernel allocates for the socket.

  2. The client operating system kernel sends the buffered content to the network adapter, which sends the data to the network adapter on the server.

  3. The server's operating system kernel puts the data from the network card into the recv buffer, the receive buffer allocated by the kernel for the socket.

  4. The server retrieves the message from the receive buffer for processing by calling read.

  5. The server calls write to copy the response message into the send buffer allocated by the kernel for the socket.

  6. The server operating system kernel sends the buffer contents to the network card, which in turn sends the data to the client’s network card.

  7. The client's operating system kernel puts the data from the network card into the recv buffer, the receive buffer prepared by the kernel for the socket.

  8. The client process calls read to fetch the message from the receive buffer and return it to the upper-level business logic.

  9. The operation is complete.

Steps 5 through 8 mirror steps 1 through 4, just in the opposite direction: one side is the request, the other the response.
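The round trip above can be traced with a connected socket pair, where the kernel buffer on each side plays the role of the send/recv buffers from the steps (the command bytes here are illustrative, not real RESP handling):

```python
import socket

# A connected socket pair stands in for the client/server connection;
# the kernel buffer on each side acts as the send/recv buffer.
client, server = socket.socketpair()

client.sendall(b"SET x1 1\r\n")   # steps 1-2: command copied into the kernel send buffer
request = server.recv(4096)       # steps 3-4: server pulls it from its recv buffer

server.sendall(b"+OK\r\n")        # steps 5-6: response goes into the server's send buffer
response = client.recv(4096)      # steps 7-8: client reads the reply from its recv buffer

client.close()
server.close()
```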

The client writes data to a buffer in the operating system kernel and returns immediately, rather than waiting for it to be sent to the server. The operating system kernel sends the data asynchronously to the server.

If the send buffer is full, write blocks until space frees up, and that wait is where write's time goes. If the buffer has room, write returns very quickly and the next write can proceed.

The client reads data from the local kernel’s receive buffer, rather than pulling data from the server. If the server responds quickly, the client reads data quickly.

If there is no data in the receive buffer, read blocks until data arrives, then returns.
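Both behaviors are easy to observe with a socket pair: a write returns as soon as the bytes land in the kernel's send buffer, while a read on an empty receive buffer blocks (a timeout stands in for that wait here):

```python
import socket

a, b = socket.socketpair()

# write returns once the bytes are copied into the kernel send
# buffer -- the peer has not read anything yet
sent = a.send(b"hello")

# the data is already sitting in the peer's receive buffer,
# so this read returns immediately
data = b.recv(5)

# with the receive buffer now empty, read blocks until data arrives;
# a short timeout stands in for that wait
b.settimeout(0.2)
timed_out = False
try:
    b.recv(5)
except socket.timeout:
    timed_out = True

a.close()
b.close()
```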

For the pipeline's consecutive writes and reads, the writes cost essentially nothing: as long as the send buffer is not full, writing can continue, while the single read waits for the server's response.

That response carries the results of all the preceding commands, so the essence of the pipeline's speed is that the client gets a huge performance boost simply by changing the order of reads and writes.

Suitable scenarios

When multiple Redis operations are unrelated, that is, when a command does not depend on the result of the previous one, you can use a pipeline; for example, setting values in a for loop. If you just use ordinary SET commands and the interface has a 3-second timeout, the accumulated write-read waits can push the interface past that limit and cause Redis timeouts.

Since each SET is independent, you can safely use a pipeline to speed things up. Conversely, if later commands depend on earlier results, a plain pipeline cannot handle it (though pipelines do support scripting, so dependent logic can be sent as a script and processed inside the pipeline). Seen this way, a pipeline is similar to a batch operation.
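At the wire level, a pipeline of independent SETs is just the RESP encodings of the commands concatenated into one buffer and sent with a single write. A minimal sketch of that encoding (the `encode_command` helper is illustrative, not a real client-library API):

```python
def encode_command(*args):
    """Encode one command in RESP, the protocol Redis speaks:
    an array header, then each argument as a bulk string."""
    parts = [b"*%d\r\n" % len(args)]
    for arg in args:
        data = arg if isinstance(arg, bytes) else str(arg).encode()
        parts.append(b"$%d\r\n%s\r\n" % (len(data), data))
    return b"".join(parts)

# A pipeline is simply the independent commands concatenated
# into one buffer, to be sent with a single write
pipeline = b"".join(encode_command("SET", "x%d" % i, i) for i in range(1, 3))
```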

Note when coding that while a pipeline is in flight, it monopolizes the connection; other non-pipeline operations cannot run on it, or errors will occur. If you need to do other work at the same time, use a separate connection to isolate it from the one the pipeline is using.

Also note that Redis buffers the responses on the server until all the pipelined commands have been processed; the more responses it buffers, the more memory it occupies.
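A common way to bound that server-side buffering is to split a large pipeline into fixed-size batches and execute them one batch at a time. A minimal sketch of the batching logic (the `batched` helper is illustrative):

```python
def batched(commands, batch_size):
    """Yield commands in fixed-size batches, so the server never
    has to buffer more than batch_size replies at once."""
    for i in range(0, len(commands), batch_size):
        yield commands[i:i + batch_size]

# 10 independent SETs, pipelined 4 at a time
commands = [("SET", "x%d" % i, i) for i in range(10)]
batches = list(batched(commands, 4))
```

Each batch would be sent as its own pipeline, trading a few extra round trips for a hard cap on server memory.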

Pipeline pressure test

Redis ships with a benchmarking tool, redis-benchmark, that lets you test pipelines and see their effect.

The following is a benchmark of the plain SET command, with a QPS of about 145,985/s.

root@f5cd3ecb4cd8:/data# redis-benchmark -t set -q

SET: 145985.41 requests per second, p50=0.167 msec

Adding -P sets the number of commands pipelined per request. The results below show QPS climbing by roughly 100,000 for each increase in pipeline depth; past about 13 or 14 the QPS stops rising and even begins to fall, because CPU consumption has hit its 100% limit.

root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 2 -q
SET: 273224.03 requests per second, p50=0.175 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 3 -q
SET: 378795.47 requests per second, p50=0.183 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 4 -q
SET: 483091.78 requests per second, p50=0.183 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 5 -q
SET: 549450.56 requests per second, p50=0.191 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 6 -q
SET: 653607.88 requests per second, p50=0.199 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 7 -q
SET: 724652.19 requests per second, p50=0.191 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 8 -q
SET: 806451.62 requests per second, p50=0.215 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 9 -q
SET: 877263.19 requests per second, p50=0.255 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 10 -q
SET: 1408450.62 requests per second, p50=0.255 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 11 -q
SET: 917440.38 requests per second, p50=0.375 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 11 -q
SET: 952390.50 requests per second, p50=0.375 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 12 -q
SET: 1020489.81 requests per second, p50=0.399 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 13 -q
SET: 1333453.25 requests per second, p50=0.367 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 14 -q
SET: 1075290.25 requests per second, p50=0.455 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 15 -q
SET: 1020459.19 requests per second, p50=0.479 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 16 -q
SET: 1000000.00 requests per second, p50=0.519 msec
root@f5cd3ecb4cd8:/data# redis-benchmark -t set -P 17 -q
SET: 1020520.44 requests per second, p50=0.551 msec