Make a little progress every day.

Preface

In interviews, HTTP questions come up with fairly high probability.

I (Kobayashi) have collected five categories of HTTP interview questions. These five categories track the development and evolution of HTTP fairly closely, and this article uses a question-and-answer format with illustrations to help you learn and understand HTTP in more depth.

  1. HTTP basic concepts
  2. GET and POST
  3. HTTP features
  4. HTTP vs HTTPS
  5. Evolution of HTTP/1.1, HTTP/2, and HTTP/3
Outline

Main text

01 Basic Concepts of HTTP

What is HTTP? Describe it.

HTTP is the HyperText Transfer Protocol.

Can you explain “HyperText Transfer Protocol” in detail?

HTTP stands for HyperText Transfer Protocol, which can be broken down into three parts:

  • hypertext
  • transfer
  • protocol
Three parts

1. “Protocol”

In our daily life, “agreements” are everywhere too. For example:

  • Students sign a “tripartite agreement” upon graduation;
  • Tenants sign a “rental agreement” when renting a house.
Tripartite agreement and rental agreement

Protocols in real life are essentially the same as protocols in computers.

  • The “co-” part (协) means there must be two or more participants. For example, the tripartite agreement has three participants: you, the company, and the school; the rental agreement has two: you and the landlord.
  • The “agreement” part (议) means the participants reach conventions and norms of behavior. For example, the tripartite agreement stipulates the probation period, penalties for breach of contract, and so on; the rental agreement stipulates the lease term, the monthly rent, and how defaults are handled.

In the case of HTTP, we can think of it this way.

HTTP is a protocol used in the computer world. In a language computers can understand, it establishes a specification for communication between computers (two or more participants), together with the related controls and error handling (behavioral conventions and norms).

2. “Transfer”

By “transfer”, we mean moving a bunch of stuff from point A to point B, or from point B to point A.

Don't take this simple action lightly; it contains at least two important pieces of information.

HTTP is a two-way protocol.

When we browse the web, the browser is requester A and, say, Baidu's website is responder B. The two sides agree to communicate using the HTTP protocol: the browser sends request data to the website, the website returns some data to the browser, and finally the browser renders it on the screen, so we can see text, pictures, and videos.

Request-reply

Although data is transmitted between A and B, intermediaries (relays) are allowed in between.

Just as a student in the first row passing a note to a student in the last row needs it to go through many classmates (middlemen), the transmission pattern changes from “A <-> B” to “A <-> N <-> M <-> B”.

With HTTP, the middlemen are required to comply with the HTTP protocol, and each of them can add extra functionality as long as it does not interrupt the basic data transfer.

In terms of transport, we can further understand HTTP.

HTTP is a convention and specification for transferring data between two points in the computer world.

3. “Hypertext”

What HTTP transfers is “hypertext.”

Let's first look at “text”. In the early days of the Internet, text was just simple characters, but the meaning of “text” has since expanded to include pictures, videos, compressed archives, and so on; in the eyes of HTTP, these are all “text”.

As for hypertext, it is text that goes beyond ordinary text: a mixture of text, pictures, videos, and so on. The key is the hyperlink, which lets you jump from one piece of hypertext to another.

HTML is the most common hypertext. It is itself a plain text file, but its many internal tags define links to images, videos, and so on; after the browser interprets it, what we see is a web page of text and pictures.

OK, having unpacked these three terms, we can give a more accurate and technical answer than the bare phrase “HyperText Transfer Protocol”:

HTTP is a “convention and specification” for the “transfer” of text, pictures, audio, video and other “hypertext” data between “two points” in the computer world.

Is it true that HTTP is a protocol for transferring hypertext from an Internet server to a local browser?

This is not accurate, because HTTP can also be used “server <-> server”, so it is more precise to describe it as being between two points.

What are the common HTTP status codes?

Five types of HTTP status codes

1xx

The 1XX status code is an intermediate state in protocol processing, and is rarely used.

2xx

The 2XX status code indicates that the server successfully processed the client’s request, which is the most desirable state.

“200 OK” is the most common success status code, indicating that everything is fine. For non-HEAD requests, the server's response carries body data.

“204 No Content” is also a common success status code. It is essentially the same as 200 OK, but the response carries no body data.

“206 Partial Content” is used for HTTP chunked downloads and resumable transfers. It indicates that the body returned in the response is not the whole resource but part of it; it is also a success status.

3xx

The 3XX status codes indicate that the resource requested by the client has changed, and the client must re-send the request with a new URL; that is, redirection.

“301 Moved Permanently” indicates a permanent redirect: the requested resource is no longer at this URL, and the new URL must be used from now on.

“302 Found” indicates a temporary redirect: the requested resource is still available, but for the time being it needs to be accessed with a different URL.

Both 301 and 302 use the Location field in the response header to indicate the URL to jump to next, and the browser automatically redirects to the new URL.

“304 Not Modified” does not indicate a jump. It means the resource has not been modified, redirecting the client to its existing cached file; it is also known as cache redirection and is used for cache control.

4xx

The 4XX status codes indicate that the message sent by the client is erroneous and the server cannot process it; they are client error codes.

“400 Bad Request” indicates that there is an error in the request message sent by the client.

“403 Forbidden” indicates that the server forbids access to the resource; it does not mean the client's request was malformed.

“404 Not Found” indicates that the requested resource does not exist on the server or was not found, so it cannot be provided to the client.

5xx

The 5XX status codes indicate that the client's request was correct but the server encountered an internal error while processing it; they are server error codes.

“500 Internal Server Error” is, like 400, a generic catch-all error code; we do not know what actually went wrong on the server.

“501 Not Implemented” indicates that the functionality requested by the client is not yet supported, similar to “coming soon, stay tuned”.

“502 Bad Gateway” is an error code returned when the server is acting as a gateway or proxy: the server itself works fine, but an error occurred when it accessed the back-end server.

“503 Service Unavailable” indicates that the server is currently busy and temporarily unable to respond, similar to “the network service is busy, please try again later”.
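The five classes above can be summarized in a small sketch (the helper function and its messages are my own illustration, not part of any HTTP library):

```python
# Map an HTTP status code to its class, mirroring the five
# categories described above. The wording is illustrative.
def status_class(code: int) -> str:
    classes = {
        1: "1xx informational (intermediate protocol state)",
        2: "2xx success",
        3: "3xx redirection",
        4: "4xx client error",
        5: "5xx server error",
    }
    return classes.get(code // 100, "unknown")

print(status_class(200))  # 2xx success
print(status_class(404))  # 4xx client error
print(status_class(503))  # 5xx server error
```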

What are the common HTTP fields?

The Host field

When a client sends a request, it uses this field to specify the server's domain name.

Host: www.A.com

With the Host field, you can send requests to different sites on the “same” server.

The Content-Length field

When the server returns data, the Content-Length field indicates the length of the response data.

Content-Length: 1000

This means the length of this response is 1000 bytes; any bytes after that belong to the next response.

Connection field

The most common use of the Connection field is when a client asks the server to use a persistent TCP connection so that other requests can reuse it.

HTTP/1.1 uses persistent connections by default, but in order to be compatible with older HTTP versions, you need to specify keep-alive as the value of the Connection header.

Connection: keep-alive

A reusable TCP connection is then established and stays open until either the client or the server actively closes it. However, this is not a standard field.

The Content-Type field

The Content-Type field is used to tell the client what format the data is in when the server responds.

Content-Type: text/html; charset=utf-8

The type above indicates that the server is sending a web page, encoded in UTF-8.

When a client requests, it can use the Accept field to declare which data formats it accepts.

Accept: */*

Here the client declares that it can accept data in any format.

The Content-Encoding field

The Content-Encoding field specifies the compression method, indicating what compression format the server used for the returned data.

Content-Encoding: gzip

This means the data returned by the server is compressed with gzip, and the client must decompress it the same way.

At request time, the client uses the Accept-Encoding field to indicate which compression methods it can accept.

Accept-Encoding: gzip, deflate
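As a hedged sketch tying these fields together with Python's standard library: a server compresses a body with gzip and builds the matching headers. The header values are illustrative, not from any real exchange.

```python
import gzip

body = "<html><body>hello</body></html>".encode("utf-8")
compressed = gzip.compress(body)

# Content-Length must describe the bytes actually on the wire,
# i.e. the compressed body, not the original one.
headers = {
    "Content-Type": "text/html; charset=utf-8",
    "Content-Encoding": "gzip",
    "Content-Length": str(len(compressed)),
    "Connection": "keep-alive",
}

# A client that sent "Accept-Encoding: gzip" undoes the compression:
assert gzip.decompress(compressed) == body
```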

02 GET and POST

What’s the difference between GET and POST?

The GET method requests a resource from the server. The resource can be static text, a page, an image, a video, and so on.

For example, if you open my article, the browser will send a GET request to the server, and the server will return all the text and resources for the article.

A GET request

The POST method does the opposite. It submits data to the resource specified by the URI, and the data is placed in the body of the message.

For example, if you type a comment at the bottom of my post and click submit, the browser performs a POST request: it puts your comment in the message body, concatenates the POST request headers, and sends it all to the server over TCP.

A POST request

Are both GET and POST methods secure and idempotent?

First, the concept of security and idempotence:

  • In the HTTP protocol, “safe” means that the request method does not “damage” resources on the server.
  • “Idempotent” means that performing the same operation multiple times yields “the same” result.

With that clear, the GET method is safe and idempotent, because it is a “read-only” operation: no matter how many times it is performed, the data on the server is unharmed, and the result is the same each time.

POST is not safe, because it is an “append/submit data” operation that modifies resources on the server; and since multiple submissions create multiple resources, it is not idempotent either.
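A minimal sketch of the safety/idempotence properties just described; the PUT and DELETE rows are my own additions for contrast, following general HTTP semantics rather than this article:

```python
# Safety / idempotence of common HTTP methods.
METHOD_PROPERTIES = {
    # method: (safe, idempotent)
    "GET":    (True,  True),   # read-only
    "HEAD":   (True,  True),   # read-only, no body
    "POST":   (False, False),  # each submission may create a new resource
    "PUT":    (False, True),   # replaying the same full update changes nothing
    "DELETE": (False, True),   # deleting twice leaves the same state
}

safe, idempotent = METHOD_PROPERTIES["GET"]
print(safe, idempotent)  # True True
```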


03 HTTP features

What are the strengths of HTTP/1.1 that you know of, and how are they reflected?

HTTP’s greatest strengths are “simplicity, flexibility and ease of extension, wide application, and cross-platform.”

1. Simple

The basic HTTP message format is header + body, and the header information is simple key-value text, which is easy to understand and lowers the barrier to learning and use.

2. Flexible and easy to expand

The various components of the HTTP protocol, such as request methods, URIs/URLs, status codes, and header fields, are not rigidly fixed, which allows developers to customize and extend them.

Also, because HTTP works at the application layer (OSI layer 7), the layers below it can be changed freely.

HTTPS adds an SSL/TLS security layer between HTTP and TCP, and HTTP/3 even replaces the TCP layer with UDP-based QUIC.

3. Wide application and cross-platform

Over the Internet's entire development, HTTP has been applied extremely widely: from desktop browsers to all kinds of mobile apps, from reading news and browsing forums to shopping, wealth management, and gaming, HTTP applications have blossomed everywhere, and it is naturally cross-platform.

What about its disadvantages?

The double-edged swords of the HTTP protocol, which are both advantages and disadvantages, are “statelessness” and “plaintext transmission”; on top of that, it has one major disadvantage: “insecurity”.

1. Stateless double-edged sword

The benefit of statelessness is that, since the server does not memorize HTTP state, it needs no extra resources to record state information, which reduces the server's burden and leaves more CPU and memory for serving requests.

The downside of statelessness is that, since the server has no memory, operations that span multiple requests become very troublesome.

For example, login -> add to cart -> place order -> checkout -> pay: this whole series of operations needs to know the user's identity. But the server does not know these requests are related, so it asks for identity information every single time.

Would shopping still be pleasant if you had to verify your identity at every step? Of course not.

There are many solutions to the stateless problem; a relatively simple one is Cookie technology.

Cookies track client state by writing cookie information into request and response messages.

After the first request, the server issues a “sticker” carrying the client's information; when the client requests the server again, the server recognizes the “sticker”.

Cookie technology
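The “sticker” idea can be sketched as a toy in-memory server. The session_id cookie name and the hard-coded user are my own illustrative assumptions; real servers manage sessions far more carefully:

```python
import uuid

# In-memory "server": session store keyed by the cookie value.
sessions = {}

def server_handle(request_headers):
    """Return (response_headers, user) for a toy stateful lookup."""
    cookie = request_headers.get("Cookie", "")
    sid = cookie.removeprefix("session_id=") if cookie else None
    if sid in sessions:
        return {}, sessions[sid]          # "sticker" recognized
    sid = uuid.uuid4().hex                # first visit: issue a sticker
    sessions[sid] = "alice"               # hypothetical logged-in user
    return {"Set-Cookie": f"session_id={sid}"}, "alice"

# First request: no cookie, so the server issues Set-Cookie.
resp, user = server_handle({})
cookie = resp["Set-Cookie"]
# Later request: the browser echoes the cookie, the server knows the user.
resp2, user2 = server_handle({"Cookie": cookie})
assert resp2 == {} and user2 == "alice"
```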

2. Plaintext transmission double-edged sword

Plaintext means the information is human-readable in transit: packets captured via the browser's F12 console or Wireshark can be read directly with the naked eye, which is very convenient for debugging.

But that is exactly the problem. All of HTTP's information is exposed, the equivalent of running naked; along the long transmission path, the content has no privacy at all and is easily stolen. If it includes your account and password, your account is as good as gone.

3. Insecure

The serious drawback of HTTP is that it is insecure:

  • Communication uses plaintext (no encryption), so content can be eavesdropped on. For example, your account information can easily leak.
  • The identities of the communicating parties are not verified, so you may encounter impostors. For example, a fake Taobao or Pinduoduo site can steal your money.
  • Message integrity cannot be proved, so content may be tampered with. For example, spam can be injected into web pages, polluting what you see.

HTTP's security problems can be solved with HTTPS, that is, by introducing an SSL/TLS layer to make communication secure.

How is the performance of HTTP/1.1?

The HTTP protocol is based on TCP/IP and uses a “request-reply” communication mode, so the key to performance lies in these two points.

1. Long connections

One big performance problem of early HTTP/1.0 was that every request required a new TCP connection (three-way handshake), and requests were serial, so TCP connections were needlessly established and torn down, increasing communication overhead.

To solve this TCP connection problem, HTTP/1.1 introduced the long-connection (persistent-connection) communication mode, which reduces the overhead of repeatedly establishing and tearing down TCP connections and lightens the load on the server.

The characteristic of a persistent connection is that the TCP connection remains as long as neither end explicitly disconnects.

Short connection and long connection

2. Pipelining

The long connections of HTTP/1.1 make pipelined network transmission possible.

Within the same TCP connection, the client can issue multiple requests: as soon as the first request is sent, the second can be sent without waiting for the first response, which reduces overall response time.

For example, the client needs to request two resources. Previously, within the same TCP connection, it would send request A, wait for the server's response, and only then send request B. Pipelining lets the browser issue requests A and B together.

Pipeline network transmission

But the server still responds to request A first and then request B, in the same order. If the response at the front is particularly slow, the requests behind it queue up and wait. This is called “head-of-line blocking”.

3. Head-of-line blocking

The “request-reply” pattern exacerbates HTTP's performance problems.

When one request in a sequence is blocked for some reason, all subsequent requests are blocked as well, leaving the client unable to get data; this is “head-of-line blocking”, like being stuck in traffic on the way to work.

Head-of-line blocking
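Head-of-line blocking under pipelining can be illustrated with simple arithmetic (the millisecond costs below are made up for illustration):

```python
# With pipelining, requests go out together, but responses come back
# strictly in request order: each response waits for all earlier ones.
def completion_times(service_ms):
    done, t = [], 0
    for cost in service_ms:
        t += cost
        done.append(t)
    return done

# Request A is slow (300 ms); B and C take only 10 ms each, yet they
# finish at 310 ms and 320 ms because they queue behind A.
print(completion_times([300, 10, 10]))  # [300, 310, 320]
```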

HTTP/1.1’s performance was mediocre, and subsequent HTTP/2 and HTTP/3 were designed to optimize HTTP performance.


04 HTTP and HTTPS

What are the differences between HTTP and HTTPS?

  1. HTTP is the HyperText Transfer Protocol, and information is transmitted in plaintext, so it carries security risks. HTTPS addresses HTTP's insecurity by adding SSL/TLS between TCP and HTTP to encrypt messages.
  2. Establishing an HTTP connection is relatively simple: HTTP messages can be transmitted right after the TCP three-way handshake. With HTTPS, after the TCP three-way handshake, an SSL/TLS handshake is also required before encrypted messages can be transmitted.
  3. The HTTP port number is 80, and the HTTPS port number is 443.
  4. The HTTPS protocol requires applying to a Certificate Authority (CA) for a digital certificate, which ensures that the server's identity is trustworthy.

What problems does HTTPS solve with HTTP?

HTTP has the following security risks because it is transmitted in plaintext:

  • Eavesdropping risk: communication content on the link can be captured, so user accounts can be stolen.
  • Tampering risk: spam can be forcibly injected, polluting what users see.
  • Impersonation risk: fake sites such as a counterfeit Taobao can trick users out of their money.
HTTP and HTTPS network layers

HTTPS adds SSL/TLS protocol between HTTP and TCP layer, which can solve the above risks:

  • Message encryption: interaction information cannot be stolen (though you can still lose your account by forgetting your own password).
  • Verification mechanism: communication content cannot be tampered with undetected, because tampered content will not display normally (though Baidu's “paid ranking” can still surface spam in search results).
  • Identity certificates: they prove that Taobao is the real Taobao (though your money may still vanish to your own shopping sprees).

As you can see, as long as you don't do anything foolish yourself, the SSL/TLS protocol can keep communication secure.

How does HTTPS address the above three risks?

  • Hybrid encryption provides confidentiality of information, addressing the eavesdropping risk.
  • A digest algorithm provides integrity: it generates a unique “fingerprint” for the data, which is used to verify data integrity, addressing the tampering risk.
  • Placing the server's public key in a digital certificate addresses the impersonation risk.

1. Hybrid encryption

Hybrid encryption ensures the confidentiality of information and addresses the eavesdropping risk.

Hybrid encryption

HTTPS uses a hybrid encryption mode that combines symmetric and asymmetric encryption:

  • Before communication is established, asymmetric encryption is used to exchange the “session key”; asymmetric encryption is not used after that.
  • During communication, all plaintext data is encrypted with the symmetric “session key”.

Reasons for adopting the “hybrid encryption” approach:

  • Symmetric encryption uses a single key and is fast, but the key must be kept secret, and there is no way to exchange it securely on its own.
  • Asymmetric encryption uses two keys, a public key and a private key. The public key can be distributed freely while the private key stays secret, which solves the key-exchange problem, but it is slow.

2. Digest algorithm

The digest algorithm provides integrity: it generates a unique “fingerprint” for the data, used to verify data integrity and address the tampering risk.

Check integrity

Before sending, the client computes a “fingerprint” of the plaintext with the digest algorithm, then encrypts “fingerprint + plaintext” into ciphertext and sends it to the server. After decrypting, the server computes a fingerprint of the received plaintext with the same algorithm and compares it with the fingerprint the client sent; if the two fingerprints match, the data is intact.
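A minimal sketch of the fingerprint check, using SHA-256 from Python's standard library as a stand-in for “the digest algorithm” (real TLS negotiates the hash as part of the cipher suite); the encryption of “fingerprint + plaintext” described above is omitted here:

```python
import hashlib

def fingerprint(plaintext: bytes) -> str:
    # SHA-256 as an illustrative digest algorithm.
    return hashlib.sha256(plaintext).hexdigest()

message = b"transfer 100 yuan to account 42"
sent = (message, fingerprint(message))           # "plaintext + fingerprint"

# Receiver recomputes the fingerprint and compares.
received_msg, received_fp = sent
assert fingerprint(received_msg) == received_fp  # intact

# A tampered message no longer matches the fingerprint.
assert fingerprint(b"transfer 999 yuan to account 42") != received_fp
```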

3. Digital certificate

The client requests the public key from the server and encrypts the information with the public key. After receiving the ciphertext, the server decrypts it with its own private key.

This raises a question: how do we ensure the public key has not been tampered with and can be trusted?

Therefore, a third-party authority (CA) is required to add the server public key to the digital certificate (issued by the CA). As long as the certificate is trusted, the public key is trusted.

Digital certificate workflow

A digital certificate is used to ensure the identity of the server’s public key to avoid the risk of impersonation.

How does HTTPS establish a connection? What are the interactions?

Basic flow of SSL/TLS protocol:

  • The client requests and verifies the server's public key.
  • The two parties negotiate to produce the “session key”.
  • The two parties use the session key for encrypted communication.

The first two steps constitute the SSL/TLS establishment process, also known as the handshake phase.

The “handshake phase” of SSL/TLS involves four communications, as shown below:

HTTPS connection establishment process

Detailed process for establishing SSL/TLS protocol:

1. ClientHello

First, the client initiates an encrypted communication request to the server, known as a ClientHello request.

In this step, the client sends the following information to the server:

(1) SSL/TLS protocol version supported by the client, for example, TLS 1.2.

(2) A Client Random number generated by the client, later used to produce the “session key”.

(3) The list of cipher suites supported by the client, such as the RSA encryption algorithm.

2. ServerHello

When the server receives the client's request, it sends a response called ServerHello to the client. The server's reply contains the following:

(1) Confirmation of the SSL/TLS protocol version. If the browser's version is not supported, encrypted communication is shut down.

(2) A Server Random number generated by the server, later used to produce the “session key”.

(3) The confirmed cipher suite, such as the RSA encryption algorithm.

(4) The server's digital certificate.

3. The client responds

After receiving the response from the server, the client uses the CA public key in the browser or operating system to verify the authenticity of the digital certificate of the server.

If the certificate checks out, the client extracts the server's public key from the digital certificate, uses it to encrypt its messages, and sends the following to the server:

(1) A random number (pre-master key). The random number is encrypted by the server’s public key.

(2) A change-cipher-spec notification, indicating that subsequent messages will be encrypted with the “session key”.

(3) A client-handshake-finished notification, indicating that the client's handshake phase is over. This item also carries a digest of all the content sent so far, for the server to verify.

The random number in item (1) is the third random number of the whole handshake phase. With it, the server and the client both hold three random numbers, from which each side uses the negotiated encryption algorithm to generate this session's “session key”.

4. Final response from the server

After receiving the client's third random number (pre-master key), the server computes this session's “session key” using the negotiated encryption algorithm. It then sends its final messages to the client:

(1) A change-cipher-spec notification, indicating that subsequent messages will be encrypted with the “session key”.

(2) A server-handshake-finished notification, indicating that the server's handshake phase is over. This item also carries a digest of all the content sent so far, for the client to verify.

At this point, the entire SSL/TLS handshake is complete. From then on, the client and server communicate using the ordinary HTTP protocol, but with the content encrypted under the “session key”.
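As a hedged sketch of why three random numbers yield one shared key: both sides feed the same three inputs into the same derivation function, so they compute the same key without it ever traveling on the wire. HMAC-SHA256 here is an illustrative stand-in, not the real TLS 1.2 PRF:

```python
import hashlib, hmac, os

# The three handshake random values; os.urandom stands in for the
# real ClientHello / ServerHello randoms and the pre-master key
# (which is sent encrypted under the server's public key).
client_random = os.urandom(32)
server_random = os.urandom(32)
pre_master = os.urandom(48)

def derive_session_key(pre_master, client_random, server_random):
    # Illustrative key derivation: mix all three inputs.
    return hmac.new(pre_master,
                    b"master secret" + client_random + server_random,
                    hashlib.sha256).digest()

client_key = derive_session_key(pre_master, client_random, server_random)
server_key = derive_session_key(pre_master, client_random, server_random)
assert client_key == server_key  # both ends hold the same symmetric key
```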


05 Evolution of HTTP/1.1, HTTP/2, and HTTP/3

What is the performance improvement of HTTP/1.1 over HTTP/1.0?

Performance improvements in HTTP/1.1 over HTTP/1.0:

  • Using TCP long connections improves on the performance overhead caused by HTTP/1.0's short connections.
  • Pipelined network transmission is supported: as soon as the first request is sent, the second can be sent without waiting for the first response, reducing overall response time.

However, HTTP/1.1 still has performance bottlenecks:

  • Request/response headers are sent uncompressed; the more headers, the greater the latency. Only the body can be compressed.
  • Lengthy headers are sent in full every time. Sending the same headers repeatedly is wasteful.
  • The server responds strictly in request order; if the server is slow to respond, the client cannot get its data, i.e. head-of-line blocking.
  • There is no request priority control.
  • Requests can only start from the client; the server can only respond passively.

What is HTTP/2 optimized for the performance bottleneck in HTTP/1.1?

The HTTP/2 protocol is based on HTTPS, so HTTP/2 security is also guaranteed.

What are the performance improvements of HTTP/2 over HTTP/1.1?

1. Header compression

HTTP/2 will compress headers. If you make multiple requests at the same time and their headers are the same or similar, the protocol will help you eliminate duplicate headers.

This is the HPACK algorithm: the client and server each maintain a header table, all fields are stored in this table with generated index numbers, and thereafter the same field is not sent again, only its index number, which raises speed.
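The HPACK idea can be sketched as a shared index table. This is a toy model: real HPACK also has a static table, dynamic-table eviction, and Huffman coding, all omitted here.

```python
# First occurrence of a header is sent literally and added to the
# table; repeats are sent as a small integer index instead.
class HeaderTable:
    def __init__(self):
        self.table = {}  # (name, value) -> index
    def encode(self, name, value):
        key = (name, value)
        if key in self.table:
            return ("indexed", self.table[key])  # tiny on the wire
        self.table[key] = len(self.table) + 1
        return ("literal", name, value)          # full text, once

enc = HeaderTable()
print(enc.encode("user-agent", "Mozilla/5.0"))  # ('literal', ...)
print(enc.encode("user-agent", "Mozilla/5.0"))  # ('indexed', 1)
```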

2. Binary format

HTTP/2 is no longer like the plain text packets in HTTP/1.1, but fully adopts the binary format. The header information and data body are both binary, and they are collectively referred to as frames: header information frame and data frame.

Message difference

This is not friendly to humans, but it is friendly to computers, because computers only understand binary. After receiving a packet, they do not need to convert the plaintext packet into binary, but directly parse the binary packet, which increases the data transmission efficiency.

3. Streams

HTTP/2's packets are not sent sequentially; consecutive packets within the same connection may belong to different responses. Therefore, each packet must be marked to indicate which response it belongs to.

All the packets of each request or response are called a Stream. Each stream is marked with a unique number: streams sent by the client are given odd numbers, and streams sent by the server even numbers.

The client can also specify a stream's priority, and the server responds to higher-priority requests first.

HTTP/1 ~ HTTP/2

4. Multiplexing

HTTP/2 allows multiple requests or responses to be concurrent in a single connection, rather than in a one-to-one sequence.

It removes HTTP/1.1's serial requests: there is no need to queue, the “head-of-line blocking” problem is gone, latency drops, and connection utilization improves greatly.

For example, within one TCP connection, the server receives two requests, A and B. If it finds that A is taking too long to process, it can respond with the finished part of A, then respond to B in full, and then send the rest of A's response.

multiplexing
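The stream/multiplexing idea above can be sketched as a toy demultiplexer: frames tagged with stream IDs arrive interleaved on one connection, and the receiver reassembles each response independently (the frame payloads are made-up placeholders):

```python
# Group interleaved (stream_id, payload) frames back into whole messages.
def demultiplex(frames):
    streams = {}
    for stream_id, payload in frames:
        streams.setdefault(stream_id, []).append(payload)
    return {sid: b"".join(parts) for sid, parts in streams.items()}

# Stream 1 (client-initiated, odd ID) is slow, so its frames are
# interleaved with stream 3's; neither blocks the other.
frames = [(1, b"<part1 of A>"), (3, b"<all of B>"), (1, b"<part2 of A>")]
print(demultiplex(frames))
# {1: b'<part1 of A><part2 of A>', 3: b'<all of B>'}
```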

5. Server push

HTTP/2 also improves the traditional “request-reply” working mode to some extent: the server can actively send messages to the client instead of only responding passively.

For example, when the browser requests just the HTML, the server can proactively push static resources such as JS and CSS files that are likely to be needed, reducing latency. This is Server Push (also called Cache Push).

What are the drawbacks of HTTP/2? What optimizations have been made to HTTP/3?

The main problem with HTTP/2 is that multiple HTTP requests multiplex one TCP connection, and the underlying TCP protocol has no idea how many HTTP requests there are. So once packet loss occurs, TCP's retransmission mechanism kicks in, and all the HTTP requests in that TCP connection must wait for the lost packet to be retransmitted.

  • In HTTP/1.1, if a single request in the pipeline is blocked in transit, all the queued requests behind it are blocked too.
  • In HTTP/2, multiple requests reuse one TCP connection, so a single lost packet blocks all the HTTP requests.

All of this is rooted in the TCP transport layer, so HTTP/3 replaces the TCP protocol beneath HTTP with UDP!

HTTP/1 ~ HTTP/3

UDP sends regardless of order and regardless of packet loss, so there is neither HTTP/1.1's head-of-line blocking nor HTTP/2's problem of one lost packet stalling everything.

We all know UDP is not a reliable transport, but the UDP-based QUIC protocol can achieve reliable transmission similar to TCP's.

  • QUIC has its own set of mechanisms to guarantee reliable transmission. When a packet is lost in one stream, only that stream is blocked; other streams are unaffected.
  • TLS is upgraded to the latest version, 1.3, and the header compression algorithm is updated to QPACK.
  • Establishing an HTTPS connection takes six interactions: the TCP three-way handshake first, then the TLS/1.3 three-message handshake. QUIC merges these six interactions into three, reducing the number of round trips.
TCP HTTPS (TLS/1.3) vs QUIC HTTPS

So QUIC is effectively a “pseudo” TCP + TLS + HTTP/2 multiplexing protocol implemented on top of UDP.

QUIC is a new protocol, and many network devices have no idea what QUIC is; they simply treat it as UDP, which creates new problems. As a result, HTTP/3 adoption has been very slow, and whether UDP can overtake TCP remains to be seen.




Closing words

The 30 pictures in this article were drawn line by line. It was hard work, and I now deeply appreciate that drawing is physical labor too.

I'm actually lazy and don't really enjoy drawing, but to help you understand better, after wrestling with myself countless times, I set out on this time-consuming, exhausting road of no return. I hope it helps!

Kobayashi is your tool guy. Goodbye, see you next time!

Reader Q&A

A reader asked: “Is HTTPS just HTTP plus symmetric encryption?”

  1. When establishing a connection: HTTPS has an extra TLS handshake compared with HTTP;

  2. When transmitting content: HTTPS encrypts the data, usually with symmetric encryption.

The reader asked: “I see that TLS and SSL are not distinguished in the article. Do they need to be distinguished?”

These two are actually the same thing.

SSL is short for Secure Sockets Layer. It was designed in the mid-1990s by Netscape.

By 1999, SSL was so widely used that it had become a de facto standard on the Internet. That year, the IETF standardized SSL; after standardization, the name was changed to TLS (short for Transport Layer Security).

Many articles refer to these two together (SSL/TLS) because they can be seen as different phases of the same thing.

Reader asks: “Why are SSL handshakes 4 times?”

SSL/TLS 1.2 requires 4 handshake messages, i.e. 2 RTTs of delay. The diagram in my article draws each interaction separately; in practice the messages are batched into 4 handshake messages:

In addition, SSL/TLS 1.3 optimizes the process so that only 1 RTT of round-trip delay is needed, i.e. only 3 handshake messages: