Front-end encyclopedia: notes on knowledge from front-end-related fields, for easy later reference and interview preparation

Keywords

Keywords: multiplexing, binary framing layer, header compression, server push, QUIC (built on UDP)

  • How does HTTP/2 transfer multiple files at the same time without errors?
  • What is the difference between multiplexing in HTTP/2 and persistent-connection reuse in HTTP/1.x?

Differences between HTTP/1.0 and HTTP/1.1

Pipelining, persistent connections, head-of-line blocking

In HTTP/1.0, every HTTP exchange goes through three phases: establishing a TCP connection, transmitting the HTTP data, and closing the TCP connection (as shown in the figure below).

Persistent connection

At the time this was not a big problem, because the files being transferred were small and each page referenced few resources. But as browsers became popular, pages contained more and more images, and a single page might reference hundreds of external resources. If downloading each file required establishing a TCP connection, transferring the data, and then disconnecting, that would add a great deal of unnecessary overhead. To address this, HTTP/1.1 introduced persistent connections: multiple HTTP requests can be transferred over a single TCP connection, and the connection stays open as long as neither the browser nor the server explicitly closes it.

Persistent connections are enabled by default in HTTP/1.1, so no special header is needed to use them. If you do not want a persistent connection, add `Connection: close` to the HTTP headers (the default is `Connection: keep-alive`). By default, up to six persistent TCP connections can be established for the same domain name.
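As a small sketch of the headers involved: the `Connection` header values are real, but the helper function itself is invented for illustration.

```python
# Sketch of the request head a client sends. Persistent connections are
# the HTTP/1.1 default, so Connection: close must be sent explicitly to
# opt out. The build_request helper is a made-up illustration.
def build_request(method: str, path: str, host: str, close: bool = False) -> str:
    lines = [
        f"{method} {path} HTTP/1.1",
        f"Host: {host}",
        "Connection: close" if close else "Connection: keep-alive",
    ]
    return "\r\n".join(lines) + "\r\n\r\n"

print(build_request("GET", "/", "example.com"))
print(build_request("GET", "/", "example.com", close=True))
```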

Pipelining

Persistent connections reduce the number of TCP setups and teardowns, but the next request still has to wait for the previous one to return. If for some reason one request in the TCP channel is not returned in time, all subsequent requests are blocked; this is known as head-of-line blocking.

HTTP/1.1 tried to solve head-of-line blocking with pipelining. Pipelining in HTTP/1.1 means submitting multiple HTTP requests to the server in a batch. Although the requests can be sent in a batch, the server must still return the responses in the order in which the requests were made. Both Firefox and Chrome experimented with pipelining, but for various reasons they eventually abandoned it.
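The ordering constraint above can be shown with a toy timing model (the timing numbers are invented for illustration):

```python
# Toy model of HTTP/1.1 pipelining: requests go out in a batch, but
# responses must come back in request order, so a slow early response
# delays everything queued behind it (head-of-line blocking).
def response_send_times(finish_times):
    """finish_times[i] = when the server finishes computing response i.
    Returns when each response can actually be sent."""
    sent, ready = [], 0.0
    for t in finish_times:
        ready = max(ready, t)  # must wait for this response AND all earlier ones
        sent.append(ready)
    return sent

# Response 1 is slow (ready at t=5); responses 2 and 3 were ready at
# t=1 and t=2 but still cannot leave until t=5.
print(response_send_times([5.0, 1.0, 2.0]))  # [5.0, 5.0, 5.0]
```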

Support for virtual hosts

In HTTP/1.0, each domain name was bound to a unique IP address, so a server could serve only one domain name. However, with the development of virtual host technology, it became necessary to bind multiple virtual hosts to one physical host, each with its own domain name, all sharing the same IP address. HTTP/1.1 therefore added a Host field to the request header to carry the current domain name, so that the server can process requests differently based on the Host value.
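A minimal sketch of such Host-based dispatch (the host names and contents are made up for illustration):

```python
# Toy virtual-host dispatcher: one IP address, several sites told apart
# only by the Host request header. Host names here are hypothetical.
SITES = {
    "blog.example.com": "blog content",
    "shop.example.com": "shop content",
}

def dispatch(headers: dict) -> str:
    host = headers.get("Host", "").split(":")[0]  # strip an optional port
    return SITES.get(host, "404: unknown virtual host")

print(dispatch({"Host": "shop.example.com"}))       # shop content
print(dispatch({"Host": "shop.example.com:8080"}))  # port is ignored
```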

Improvement and optimization

We know that HTTP/1.1 has made a number of optimizations for network efficiency, most notably in the following three ways:

1. Persistent connections were added.
2. Browsers maintain up to six persistent TCP connections per domain name.
3. Domain sharding (often combined with a CDN) is used to get around the per-domain connection limit.

http2

HTTP/2's solution can be summed up as: use a single TCP connection per domain name, and eliminate head-of-line blocking at the HTTP level.

http3

QUIC protocol (Quick UDP Internet Connections)

HTTP/2 still has some serious TCP-related defects, but because of TCP's ossification it is almost impossible to fix them by modifying TCP itself. The way out is therefore to bypass TCP and invent a new transport protocol alongside TCP and UDP.

But this runs into the same challenge as modifying TCP: intermediate devices are ossified and recognize only TCP and UDP, so wherever a brand-new protocol is adopted, it would not be well supported either.

HTTP/3 therefore takes a compromise: it builds on UDP, implementing TCP-like features such as multiple data streams and reliable transmission on top of it. This layer is the QUIC protocol. For a comparison of the HTTP/2 and HTTP/3 protocol stacks, refer to the figure below:

As the figure above shows, the QUIC protocol in HTTP/3 has the following features.

  • It implements flow control and reliable transmission similar to TCP's. Although UDP does not provide reliable delivery, QUIC adds a layer on top of UDP that guarantees it, providing packet retransmission, congestion control, and other features found in TCP.
  • It integrates TLS encryption. QUIC currently uses TLS 1.3, which improves on earlier versions, most importantly by reducing the number of RTTs spent on the handshake.
  • It implements the multiplexing of HTTP/2. Unlike TCP, QUIC supports multiple independent logical data streams over the same physical connection (see the figure below). Streams are transmitted separately, which solves TCP's head-of-line blocking problem.

  • It implements a fast handshake. Because QUIC is UDP-based, it can establish a connection with 0-RTT or 1-RTT, which means it can start sending and receiving data almost immediately, greatly improving first-page load times.
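The stream-independence point above can be illustrated with a toy delivery model contrasting one ordered byte stream (TCP-style) with independent per-stream ordering (QUIC-style). The delivery rules here are a deliberate simplification, not the real protocols' logic.

```python
# What can the receiver hand to the application when one packet is lost?
# per_stream=False models TCP: a single ordered byte stream, so every
# packet after the hole must wait. per_stream=True models QUIC: only
# the stream that owns the lost packet stalls.
def deliverable(packets, lost_seq, per_stream):
    out, blocked = [], set()
    for seq, stream in packets:
        if seq == lost_seq:
            if per_stream:
                blocked.add(stream)  # only this stream waits for retransmission
                continue
            break                    # TCP: everything behind the hole waits
        if stream in blocked:
            continue
        out.append((seq, stream))
    return out

pkts = [(1, "A"), (2, "B"), (3, "A"), (4, "B")]
print(deliverable(pkts, lost_seq=2, per_stream=False))  # TCP-style:  [(1, 'A')]
print(deliverable(pkts, lost_seq=2, per_stream=True))   # QUIC-style: [(1, 'A'), (3, 'A')]
```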

The challenges of HTTP/3

From the analysis above, HTTP/3 looks like an excellent protocol on a technical level. However, rolling out HTTP/3 in real-world environments still faces a number of serious challenges, mainly in the following three areas.

First, as of now, neither servers nor browsers fully support HTTP/3. Chrome has supported Google's own version of QUIC for several years, but that version differs significantly from the official QUIC standard.

Second, deploying HTTP/3 is also problematic. Operating-system kernels have optimized the UDP path far less than the TCP path, and this is an important factor holding QUIC back.

Third, the ossification of intermediate devices. Their UDP handling is optimized much less than their TCP handling. According to statistics, the packet loss rate when using QUIC is about 3% to 7%.

How does HTTP/2 transfer multiple files at the same time without errors?

  • Binary transmission
  • Multiplexing

Binary transmission

Where HTTP/1.x transmitted data as plain text, all data in HTTP/2 is split into frames and encoded in binary format.

Multiplexing

HTTP/2 introduces two concepts: streams and frames. The frame is the smallest unit, and each frame carries a Stream Identifier marking which stream it belongs to.

In HTTP/1.x, each HTTP request established its own TCP connection, meaning every request went through a three-way handshake, wasting time and resources. Browsers also limit the number of concurrent requests per domain, so when a page requests many resources under the same domain, requests beyond the limit must wait for earlier ones to complete before they can be sent. (One optimization for this situation was to spread resources across different domains to get around the browser's concurrency limit.)

How can multiple HTTP requests be transmitted over the same TCP connection without errors?

Multiplexing means a single TCP connection can carry multiple streams, that is, multiple requests, and each stream consists of multiple frames. The Stream Identifier marks which stream each frame belongs to, so when frames arrive at the server, the complete requests can be reassembled by Stream Identifier. This improves transmission performance while guaranteeing correctness.
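A sketch of this reassembly (the frame layout here is greatly simplified; real HTTP/2 frames also carry length, type, and flags fields):

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Frame:
    stream_id: int   # which logical request/stream this frame belongs to
    payload: bytes

def reassemble(frames):
    """Group frames that arrived interleaved on one connection back
    into one complete message per stream."""
    streams = defaultdict(bytearray)
    for f in frames:
        streams[f.stream_id] += f.payload
    return {sid: bytes(buf) for sid, buf in streams.items()}

# Frames from two requests arrive interleaved on the same connection:
wire = [Frame(1, b"GET /a"), Frame(3, b"GET"), Frame(3, b" /b"), Frame(1, b" HTTP/2")]
print(reassemble(wire))  # {1: b'GET /a HTTP/2', 3: b'GET /b'}
```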

Header compression

HTTP/1.x transmits headers as plain text and carries them on every request, even though they are largely identical from request to request; with cookies added, this is especially wasteful.

HTTP/2 encodes headers with the HPACK compression format to reduce their size. The principle is that the server and client jointly maintain a dictionary recording header fields that have appeared; afterwards, the sender transmits only the index of a recorded field, and the receiver looks up the full value by that index.
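A greatly simplified sketch of that idea follows. Real HPACK uses a fixed static table plus a dynamic table and Huffman coding; the three-entry table here is invented for illustration.

```python
# Both sides share a table of common header fields; a known field is
# sent as a small integer index instead of the full text.
STATIC_TABLE = [(":method", "GET"), (":path", "/"), ("accept-encoding", "gzip")]
INDEX = {pair: i for i, pair in enumerate(STATIC_TABLE)}

def encode(headers):
    out = []
    for pair in headers:
        if pair in INDEX:
            out.append(("idx", INDEX[pair]))  # one small integer on the wire
        else:
            out.append(("lit", pair))         # unknown field: send the literal
    return out

def decode(encoded):
    return [STATIC_TABLE[v] if kind == "idx" else v for kind, v in encoded]

hdrs = [(":method", "GET"), ("x-custom", "hello")]
print(encode(hdrs))                 # [('idx', 0), ('lit', ('x-custom', 'hello'))]
assert decode(encode(hdrs)) == hdrs  # lossless round trip
```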

Server push

In HTTP/2, the server can proactively push additional resources based on a single request from the client.

For example, for an HTML page that references a CSS file and a JS file, HTTP/1.x needs three requests. In HTTP/2 there is no need to request three times: the server sees that the HTML references the CSS and JS resources and returns all three to the client, so a single exchange fetches everything.
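A toy model of that behavior (the paths and contents are invented; in real HTTP/2 the server announces pushes with PUSH_PROMISE frames rather than a simple map):

```python
# When the HTML is requested, the server also sends the sub-resources
# it knows the page will need, saving extra round trips.
RESOURCES = {
    "/index.html": "<link href=/app.css><script src=/app.js>",
    "/app.css": "body{}",
    "/app.js": "console.log('hi')",
}
PUSH_MAP = {"/index.html": ["/app.css", "/app.js"]}  # what to push alongside

def handle(path):
    responses = {path: RESOURCES[path]}
    for pushed in PUSH_MAP.get(path, []):      # pushed without waiting for
        responses[pushed] = RESOURCES[pushed]  # the client to ask for them
    return responses

print(sorted(handle("/index.html")))  # all three resources in one exchange
```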

What is the difference between multiplexing in HTTP/2 and persistent-connection reuse in HTTP/1.x?

  • HTTP/1.x: persistent connections are enabled by default. Multiple HTTP requests and responses can be sent over one TCP connection, reducing the cost and latency of repeatedly opening and closing connections.

  • HTTP/2.0: supports multiplexing, an upgrade of HTTP/1.x persistent connections. Multiplexing means multiple streams (i.e., multiple requests) can coexist in one TCP connection. Through the identifier carried in each frame, the server knows which stream (request) a frame belongs to and reassembles the request by reordering the frames. Multiplexing lets requests run concurrently: no request or response has to wait for another, which avoids head-of-line blocking. A long-running request therefore does not hold up the others, greatly improving transmission performance.

  • HTTP/1.0: one request-response per connection; a TCP connection is established for every request and closed when it is done.
  • HTTP/1.1: pipelining serializes several requests over one connection, but responses are processed strictly in order, so if any request stalls or times out, everything behind it is blocked (head-of-line blocking).
  • HTTP/2.0: supports multiplexing; multiple streams share one TCP connection, each frame is tagged with the stream it belongs to, and the receiver reassembles requests by stream, so a slow request no longer blocks the others.