Why HTTP/2 was born

Before we look at how HTTP/2 works, why was it introduced in the first place?

In HTTP 1.1, a request proceeds roughly as follows: the browser requests a URL -> the domain name is resolved -> a TCP connection is established -> the server processes the request -> the server returns the response data -> the browser receives the data, parses it, and renders.

The biggest problem with this flow is that each request creates a new TCP connection, which is costly in terms of performance.

We can start with a network diagram to get a quick picture:

HTTP 1.1 – Keep-Alive

We would like the connection established for the first request to stay open for a certain period of time, so that subsequent requests can reuse the same connection to complete their request-response cycles. This saves the cost of re-establishing TCP connections and thus improves performance.

Therefore, HTTP 1.1 provides the keep-alive mechanism to meet this expectation.

This time window can in fact be configured on the server or in proxy middleware.
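As a minimal sketch of what that configuration can look like on a Node.js server (the timeout values here are illustrative, not recommendations):

```ts
// Minimal sketch: keep idle connections open so clients can reuse them.
import http from "node:http";

const server = http.createServer((req, res) => {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("hello over a reused connection\n");
});

// How long an idle keep-alive connection stays open before the server closes it.
server.keepAliveTimeout = 5_000; // 5 seconds (illustrative)
// Kept slightly larger than keepAliveTimeout so in-flight request headers are not cut off.
server.headersTimeout = 6_000;

server.listen(8080);
```

Reverse proxies such as Nginx expose a similar knob (keepalive_timeout) for the same purpose.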

Does keep-alive solve the problem completely, then?

The answer is no!

In HTTP 1.1, even with keep-alive enabled, two problems remain:

  1. HTTP 1.1 transmits data as serial plain text, so the responses on a connection must be sent one after another rather than in parallel; otherwise the receiving side cannot tell which bytes belong to which response, and the data is corrupted.
  2. When the total number of requests from all users reaches the server's concurrency limit, later requests are queued.

Since keep-alive is not enough, and browsers generally allow only 6-8 concurrent TCP connections to a single domain for resource control, HTTP/2 comes into play.
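To make that connection limit concrete, here is a rough Node.js sketch that mimics the browser's behaviour with an http.Agent capped at 6 sockets per host; requests beyond the cap simply wait in a queue until a socket frees up (the host and paths are placeholders):

```ts
// Rough sketch: an agent capped at 6 sockets per host, mimicking a browser.
// Requests beyond the cap queue until one of the 6 connections frees up.
import http from "node:http";

const agent = new http.Agent({ keepAlive: true, maxSockets: 6 });

for (let i = 0; i < 20; i++) {
  http.get({ host: "example.com", path: `/asset-${i}.js`, agent }, (res) => {
    res.resume(); // drain the response body; we only care about completion
    res.on("end", () => console.log(`asset-${i} done`));
  });
}
```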

HTTP/2 – multiplexing

To address the problems exposed by HTTP 1.1, HTTP/2 implements multiplexing to replace its blocking model.

Multiplexing builds on two new concepts: binary frames and streams.

A request or a response can be thought of as a message, and a message consists of one or more frames. Each frame carries an identifier along with a chunk of the data being transmitted, and the receiver reassembles the content according to those identifiers. Because frames from different messages can be interleaved on the wire and still be put back together, data for many messages can be transmitted concurrently; the sequence of frames that makes up one message is called a stream.

All requests to the same domain can share a single TCP connection, and that connection can carry any number of bidirectional streams.
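As a sketch of what this buys us, Node's built-in http2 module lets a client open one session and issue several requests over it at the same time, each on its own stream (the host and paths below are placeholders, and error handling is omitted):

```ts
// Sketch: several requests multiplexed over one HTTP/2 connection.
import http2 from "node:http2";

const session = http2.connect("https://example.com");
const paths = ["/app.js", "/styles.css", "/logo.png"];
let pending = paths.length;

for (const path of paths) {
  // Each request gets its own stream; frames from all streams are
  // interleaved on the single underlying TCP connection.
  const stream = session.request({ ":path": path });
  stream.setEncoding("utf8");
  stream.on("data", () => { /* consume the response body */ });
  stream.on("end", () => {
    console.log(`${path} finished`);
    if (--pending === 0) session.close(); // all streams done, close the session
  });
  stream.end(); // no request body to send
}
```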

Things to watch out for

Once we move to HTTP/2, we can drop some of the resource optimizations we relied on before; keeping them can even be counterproductive. Two examples:

  1. Merging multiple files to reduce the number of requests

Although merging reduces the number of requests, it makes the bundled resource file larger, and the client caches that whole file. When the code of one module changes, the client has to re-download and re-cache the entire bundle, even though the code of the other modules has not changed, which no longer makes sense.

  2. Spreading resources across multiple domain names

When the client needs to request many resources, we used to put the files on different domain names to get around the browser's limit of 6-8 concurrent TCP connections per domain, so that the browser could download resources in parallel. However, this increases the load on the servers, and the extra DNS lookups add up, which hurts performance. With HTTP/2's multiplexing over a single connection, this kind of domain sharding is no longer necessary.