Address: www.sunbohao.com/font

You can use http3check.net to check whether a site supports HTTP/3

Open chrome://flags, find Experimental QUIC Protocol, select Enabled, and restart the browser. Firefox and Safari users can refer to this article
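Programmatically, a server advertises HTTP/3 through the alt-svc response header. As a small illustrative sketch (my own helper, not part of any library), here is how you might check whether an alt-svc header value announces h3 support:

```python
# Check whether an alt-svc header value advertises HTTP/3.
# Servers announce alternative protocols as comma-separated entries,
# e.g. 'h3=":443"; ma=86400, h3-29=":443"; ma=86400'.
def advertises_h3(alt_svc: str) -> bool:
    protocols = [entry.split("=", 1)[0].strip() for entry in alt_svc.split(",")]
    return any(p == "h3" or p.startswith("h3-") for p in protocols)
```

In practice you would read this header from a normal HTTPS response (for example `resp.headers.get("alt-svc", "")`) and then retry the site over QUIC.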

Introduction to HTTP/3

HTTP/3 is the upcoming (currently draft) third major version of the HTTP protocol. Unlike its predecessors HTTP/1.1 and HTTP/2, HTTP/3 deprecates TCP in favor of QUIC, which is built on UDP

To explain why HTTP/3 switched to a QUIC implementation, we need to take a quick look at the history of HTTP

HTTP/0.9

In 1991, Tim Berners-Lee designed a simple prototype of a hypertext transfer protocol, later known as HTTP/0.9

By this time TCP was a reliable and mature protocol, and the document specifically states, “HTTP currently runs on TCP, but can run on any connection-oriented service.”

Of course, HTTP at this point was very simple: no headers, no status codes, just a single request line:

   GET /index.html

The response contained only HTML and ended when the TCP connection was closed

At this point, a single HTML document provided a complete, self-contained page
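As a toy illustration of that wire behavior (a throwaway local simulation, not a real HTTP/0.9 server), the whole exchange can be reproduced with raw sockets: the client sends one line, the server replies with nothing but HTML and closes the connection:

```python
import socket
import threading

HTML = b"<html>Hello</html>"

def serve_once(srv: socket.socket) -> None:
    # Toy HTTP/0.9 behavior: read the request line, send raw HTML,
    # then close. No headers, no status code.
    conn, _ = srv.accept()
    conn.recv(1024)        # e.g. b"GET /index.html\r\n"
    conn.sendall(HTML)     # the HTML is the entire response
    conn.close()           # closing TCP marks the end of the document

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=serve_once, args=(srv,)).start()

cli = socket.create_connection(srv.getsockname())
cli.sendall(b"GET /index.html\r\n")
body = b""
while chunk := cli.recv(1024):  # read until the server closes the socket
    body += chunk
cli.close()
srv.close()
```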

HTTP/1.0

In the years that followed, as the Internet exploded, HTTP evolved into a scalable, flexible, general-purpose protocol, although transporting HTML remained its specialty

Three key changes to HTTP completed this evolution:

1. The POST and HEAD methods were introduced, enriching the means of interaction between the browser and the server.

2. Status codes were introduced, giving clients a way to tell whether the server processed a request successfully and, if not, what went wrong.

3. The request and response formats changed. Each message now carries HTTP headers describing metadata in addition to the data section. For example, Content-Type allows HTTP to transmit not just HTML but any type of payload, and Content-Encoding describes the compression algorithm applied to the data, letting the client and server negotiate compression and thus reduce the amount of data transferred.
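Put together, an HTTP/1.0 exchange might look like this (the path, host, and sizes here are made up for illustration):

```http
GET /logo.png HTTP/1.0
Host: example.com
Accept-Encoding: gzip

HTTP/1.0 200 OK
Content-Type: image/png
Content-Encoding: gzip
Content-Length: 1234

(1234 bytes of gzip-compressed PNG data)
```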

At the same time, HTML gained support for images, stylesheets, and other linked resources. Browsers were forced to perform multiple requests to display a single page, and each request established a new TCP connection. As the number of requests per page grew, so did the latency

To solve this problem, some browsers implemented a non-standard Connection header of their own:

  Connection: keep-alive

This header asks the server not to close the TCP connection so that subsequent requests can reuse it. If the server echoes the header, the connection stays open until either the client or the server actively closes it. However, this was not a standard field, and different implementations could behave differently
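The effect can be sketched with Python's standard library (a throwaway local server, purely illustrative; HTTP/1.1 is used here because it keeps connections alive by default): both requests travel over the same TCP socket:

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"      # HTTP/1.1 keeps connections alive by default
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):      # silence per-request logging
        pass

server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
sockets_used = set()
for path in ("/a", "/b"):
    conn.request("GET", path)
    resp = conn.getresponse()
    resp.read()                        # drain the body before reusing the connection
    sockets_used.add(id(conn.sock))    # same socket object == reused connection
conn.close()
server.shutdown()
```

Here `sockets_used` ends up containing a single socket id, showing that the second request reused the first request's connection.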

HTTP/1.1

HTTP/1.1 fixes the inconsistencies of HTTP/1.0 and tweaks the protocol for better performance in the new Web ecosystem. The two most critical changes it introduced are persistent TCP connections by default and HTTP pipelining.

HTTP pipelining simply means that the client does not need to wait for the server's response to the previous request before sending a subsequent one. This feature uses bandwidth effectively and reduces latency, but pipelining still requires the server to respond in the order the requests were received, so if one response in the pipeline is slow, all subsequent responses to the client are delayed behind it, a problem known as head-of-line blocking
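A toy model of this effect (my own illustration, not from the spec): assume the server processes pipelined requests concurrently, with response i ready at time t_i, but must deliver responses in request order:

```python
def delivery_times(ready_times: list[float]) -> list[float]:
    # Responses must go out in request order, so each response is
    # delayed until every earlier response has been sent.
    sent = []
    clock = 0.0
    for t in ready_times:
        clock = max(clock, t)   # wait for this response AND all earlier ones
        sent.append(clock)
    return sent
```

`delivery_times([5.0, 1.0, 1.0])` yields `[5.0, 5.0, 5.0]`: one slow response at the head of the queue delays everything behind it, even though the later responses were ready almost immediately.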

By this time the Web was gaining more and more interactive capability, and some pages contained hundreds of external resources. To work around head-of-line blocking and reduce page latency, clients opened multiple TCP connections to each host. Of course, the cost of a new connection still existed, and grew with the spread of SSL/TLS, so a new connection could actually make things worse in practice. Most browsers therefore capped the number of connections per host to balance the trade-off (the way to overcome latency in those years).

SPDY and HTTP/2

In 2009, Google unveiled its own SPDY protocol to address the inefficiencies of HTTP/1.1, and after it proved itself in Chrome it was used as the basis for HTTP/2

The HTTP/2 standard is an improvement on SPDY. HTTP/2 addresses head-of-line blocking at the HTTP level by multiplexing requests over a single TCP connection, which allows the server to reply in any order and the client to reassemble the responses

In addition, HTTP/2 allows the server to push data proactively. Besides body compression, request headers can now be compressed as well (via HPACK), further reducing the amount of data transferred

HTTP/2 solves most of these problems, but not all of them; there are still problems left at the TCP protocol level:

With HTTP/2, browsers typically create dozens or even hundreds of parallel transfers within a single TCP connection. If a packet is lost on either side of the HTTP/2 connection, or if either side’s network is interrupted, the entire TCP connection is suspended until the lost packet is retransmitted. Because TCP delivers a single ordered byte stream, if one point is lost, everything behind it has to wait. This blocking caused by a single lost packet is known as TCP head-of-line blocking.

As the packet loss rate increases, HTTP/2 performs worse and worse. At a 2% packet loss rate (poor network quality), test results show that HTTP/1.1 users fare better, because HTTP/1.1 typically opens six TCP connections: even if one of them stalls on a lost packet, the others can keep transmitting.

HTTP/3

Because HTTP/2’s problems cannot be solved at the application layer alone, the next protocol iteration has to replace the transport layer. The UDP protocol is as widely supported as TCP: UDP packets are fire-and-forget, with no handshakes, persistent connections, or error retransmission. The main idea behind HTTP/3 is to abandon TCP in favor of the UDP-based QUIC protocol

Unlike HTTP/2, where encryption is optional, QUIC strictly requires encryption to establish a connection, and the encryption covers all data flowing through the connection, not just the HTTP payload

To solve head-of-line blocking at the transport layer, HTTP/3 uses QUIC to split data into separate streams within each connection. Streams are short-lived “sub-connections”, each handling its own error retransmission. Each HTTP request runs on its own stream, so losing a packet does not stall the transmission of other requests

UDP is a stateless protocol (persistent connections are just an abstraction layered on top), which lets QUIC largely ignore the state of the path below it. For example, a client changing IP address mid-connection (such as a smartphone hopping from a mobile network to home Wi-Fi) should in theory not interrupt the connection, because QUIC allows migration between IP addresses without reconnecting.

HTTP/3 in practice

Nginx also officially offers a preview of HTTP/3 support

But in this case, I’m going straight for the nginx-http3 Docker image.

Add nginx configuration:

  # Enable QUIC and HTTP/3.
  listen 443 quic reuseport;
  # Enable HTTP/2 (optional).
  listen 443 ssl http2;

  ssl_certificate     xxx.com.pem;
  ssl_certificate_key xxx.com.key;

  # QUIC requires TLS 1.3
  ssl_protocols TLSv1.2 TLSv1.3;

  add_header alt-svc 'h3-29=":443"; ma=2592000,h3-27=":443"; ma=2592000,h3-T050=":443"; ma=2592000,h3-Q050=":443"; ma=2592000,h3-Q046=":443"; ma=2592000,h3-Q043=":443"; ma=2592000,quic=":443"; ma=2592000; v="46,43"';

In addition, open UDP port 443 in your cloud provider’s firewall or security group
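For example (the exact commands vary by distribution and cloud provider; treat these as illustrative, and note that `xxx.com` is the same placeholder domain as in the config above):

```shell
# Allow QUIC traffic through a ufw firewall
sudo ufw allow 443/udp

# Then verify, if your curl build includes HTTP/3 support
curl --http3 -sI https://xxx.com
```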

In addition, this optimization is another improvement on top of the previous one.

Fellow workers, keep it up ~~~~~~

Reference resources:

www.sunbohao.com/font/conten…

scorpil.com/post/the-lo…

www.w3.org/Protocols/H…

github.com/cloudflare/…

http3-explained.haxx.se/zh

http2-explained.haxx.se/zh

zh.wikipedia.org/wiki/HTTP/3

blog.cjw.design/blog/devopt…

www.bram.us/2020/04/08/…