1. What is Keep Alive?

HTTP persistent connection (also known as HTTP keep-alive or HTTP connection reuse) means using a single TCP connection to send and receive multiple HTTP requests and responses, instead of opening a new connection for every request/response pair.

We know that HTTP follows a request-response model. In the ordinary (non-keep-alive) mode, the client and server set up a connection for every request/response and tear it down as soon as the exchange completes (HTTP itself does not require the connection to persist). In keep-alive mode (also known as persistent connection mode), the connection between client and server is kept alive, so subsequent requests to the same server avoid the cost of establishing, or re-establishing, a connection.
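To make the reuse concrete, here is a minimal sketch in TypeScript for Node.js (example.com is just a placeholder host): two HTTP/1.1 requests are written onto the same TCP socket, and because the requests carry Connection: keep-alive the server answers both over that one connection.

```ts
import * as net from "node:net";

// One TCP connection, two HTTP/1.1 requests (example.com is a placeholder host).
const request =
  "GET / HTTP/1.1\r\n" +
  "Host: example.com\r\n" +
  "Connection: keep-alive\r\n" +
  "\r\n";

const socket = net.connect(80, "example.com", () => {
  socket.write(request); // first request
  // A real client waits for the full first response; a short delay stands in for that here.
  setTimeout(() => socket.write(request), 1000); // second request reuses the same connection
});

socket.on("data", (chunk) => process.stdout.write(chunk.toString()));
socket.on("end", () => console.log("\n-- server closed the connection --"));
```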

2. What is the difference between Keep Alive and a traditional connection?



With Keep Alive, a new TCP connection does not have to be opened for every request; subsequent requests reuse the existing connection.

3. How to configure Keep Alive?

Apache server:

httpd.conf:

KeepAlive On                # whether to enable keep-alive
MaxKeepAliveRequests 300    # maximum number of requests served over one connection
KeepAliveTimeout 3          # seconds an idle connection is kept open, reset after every request

Nginx server:

nginx.conf:

keepalive_timeout 10;       # keep an idle connection open for at most 10 seconds; once that is exceeded the server closes it

Node.js

Nodejs.org/api/http.ht…
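The linked page has the details; as a hedged illustration (the host and agent options are placeholders), Node's built-in http.Agent can be configured to keep sockets alive so that successive requests reuse the same connection:

```ts
import * as http from "node:http";

// keepAlive: true lets the agent keep sockets open between requests
// (example.com is just a placeholder host).
const agent = new http.Agent({ keepAlive: true, maxSockets: 1 });

function get(path: string): void {
  http.get({ host: "example.com", path, agent }, (res) => {
    res.resume(); // drain the body so the socket can be reused
    console.log(`${path} -> ${res.statusCode} via local port ${res.socket.localPort}`);
  });
}

get("/");                        // opens a TCP connection
setTimeout(() => get("/"), 200); // reuses the same connection (same local port)
```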

4. What is multiplexing?



  1. Once the TCP connection for HTTP/2 is established, subsequent requests are sent over it as streams.
  2. The basic unit within each stream is the frame (a binary frame). Frames come in several types, such as HEADERS frames and DATA frames.
  3. A HEADERS frame makes up a request, a HEADERS frame plus DATA frames make up a response, and a request together with its response makes up a stream (a code sketch follows this list).
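As a hedged sketch of how those pieces map onto code, using Node's built-in http2 module (nghttp2.org is merely an example HTTP/2-enabled host), a single request/response exchange, i.e. one stream, looks like this:

```ts
import * as http2 from "node:http2";

const client = http2.connect("https://nghttp2.org"); // example HTTP/2 host

// client.request() opens a new stream; the request is carried by a HEADERS frame.
const req = client.request({ ":method": "GET", ":path": "/" });

req.on("response", (headers) => {
  // The response starts with the server's HEADERS frame.
  console.log("status:", headers[":status"]);
});

req.setEncoding("utf8");
req.on("data", (chunk) => {
  // The response body is carried by DATA frames.
  console.log("body chunk of", chunk.length, "characters");
});

req.on("end", () => client.close()); // this stream is done; close the connection
req.end(); // no request body, so the request side of the stream ends here
```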



  1. HTTP/1.x can reuse a TCP connection with keep-alive, but that connection still suffers from head-of-line blocking; keep-alive does not solve the problem.
  2. Head-of-line blocking means a TCP connection can only carry one request at a time, so subsequent requests must wait until the previous one completes.
  3. The root cause is that HTTP/1.x gives requests no individual identifier and requires them to be sent and answered in order; otherwise the server cannot tell which request a response belongs to.
  4. In HTTP/2, each request is sent as a stream, and each stream has a unique identifier. Once the connection is established, subsequent requests reuse it and can be sent concurrently; the server matches responses to requests by the stream identifier (see the sketch below).
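And a hedged sketch of multiplexing itself (same assumptions; the host and paths are placeholders): several streams, each with its own id, are opened concurrently over one connection, and responses are matched to them by that id.

```ts
import * as http2 from "node:http2";

// One connection, several concurrent streams (host and paths are placeholders).
const client = http2.connect("https://nghttp2.org");

let pending = 3;
for (const path of ["/", "/docs", "/blog"]) {
  const req = client.request({ ":path": path }); // each request opens its own stream
  req.on("response", (headers) => {
    console.log(`${path} -> ${headers[":status"]} on stream ${req.id}`);
  });
  req.resume(); // discard the body; only the stream ids matter here
  req.on("close", () => {
    if (--pending === 0) client.close(); // close the shared connection when all streams finish
  });
  req.end();
}
```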

5. Will the multiplexed connection ever be closed?

Will the single TCP connection used for multiplexing be closed, and if so, when? Here is a passage from the standard:

HTTP/2 connections are persistent. For best performance, it is expected that clients will not close connections until it is determined that no further communication with a server is necessary (for example, when a user navigates away from a particular web page) or until the server closes the connection.

This means there are two occasions on which the connection may be closed:

  1. The user leaves the page.
  2. The server actively closes the connection (a server-side sketch follows below).
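For the second case, here is a hedged sketch of a server actively closing idle connections with Node's http2 module (the certificate paths and the 60-second timeout are made-up placeholders):

```ts
import * as fs from "node:fs";
import * as http2 from "node:http2";

// Placeholder certificate paths; browsers require TLS for HTTP/2.
const server = http2.createSecureServer({
  key: fs.readFileSync("server-key.pem"),
  cert: fs.readFileSync("server-cert.pem"),
});

server.on("stream", (stream) => {
  stream.respond({ ":status": 200, "content-type": "text/plain" });
  stream.end("hello over http/2\n");
});

// Actively close a session (connection) that has been idle for 60 seconds.
server.on("session", (session) => {
  session.setTimeout(60_000, () => session.close()); // close() sends GOAWAY and lets active streams finish
});

server.listen(8443);
```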

But a standard is only a standard; different server implementations follow their own conventions. As with Keep Alive, each server has its own configuration for how long a multiplexed connection is kept open. Nginx, for example:

Syntax:  http2_idle_timeout time;
Default: http2_idle_timeout 3m;
Context: http, server

Syntax:  http2_recv_timeout time;
Default: http2_recv_timeout 30s;
Context: http, server

Nginx.org/en/docs/htt…