The basic concept

Question: What is HTTP? Describe it briefly.

HTTP is the HyperText Transfer Protocol.

Question: Can you explain the HyperText Transfer Protocol in detail?

HTTP stands for HyperText Transfer Protocol, and the name can be broken down into three parts:

  • Protocol

    In the case of HTTP, we can think of it this way.

    HTTP is a protocol used in the computer world. A protocol establishes a form of communication between computers (two or more participants) in a language that computers can understand, together with the associated controls and error handling (behavioral conventions and norms).

  • Transfer

    In terms of transfer, we can understand HTTP further.

    HTTP is a convention and specification for transferring data between two points in the computer world.

  • HyperText

    In the early days of the Internet, “text” meant plain character text, but the meaning of “text” has since expanded: images, videos, compressed archives, and so on all count as “text” in the eyes of HTTP.

    “HyperText” goes beyond ordinary text: it is a mixture of text, pictures, video, and so on. Most importantly, it contains hyperlinks that allow jumping from one piece of hypertext to another.

To sum up, HTTP is a set of “conventions and specifications” in the computer world dedicated to transferring “hypertext” data such as text, pictures, audio, and video between “two points”.

Question: Is it true that HTTP is a protocol for transferring hypertext from an Internet server to a local browser?

This is not true. Since HTTP can also be used “server <-> server”, it is more accurate to describe it as transfer between two points.

Q: What are the common HTTP status codes?

  • 1xx

    The 1XX status code is an intermediate state in protocol processing, and is rarely used.

  • 2xx

    The 2XX status code indicates that the server successfully processed the client’s request, which is the most desirable state.

    “200 OK” is the most common success status code, indicating that everything is fine. If the request method is not HEAD, the response returned by the server contains body data.

    “204 No Content” is also a common success status code. It is essentially the same as 200 OK, except that the response carries no body data.

    “206 Partial Content” is used for HTTP chunked downloads and resumable transfers. It indicates that the body returned in the response is not the entire resource but only part of it; it is still a success state.

  • 3xx

    The 3xx status code indicates that the resource requested by the client has changed and the client must send a new request with a new URL to obtain it; in other words, redirection.

    “301 Moved Permanently” indicates permanent redirection: the requested resource no longer exists at the original URL and must be accessed with a new URL.

    “302 Found” indicates a temporary redirect: the requested resource is still available but temporarily needs to be accessed with a different URL.

    Both 301 and 302 use the Location field in the response header to indicate the URL to jump to, and the browser automatically redirects to the new URL.

    “304 Not Modified” does not indicate a jump. It means the resource has not been modified, so the client can use its cached copy. It is also called cache redirection and is used for cache control.

  • 4xx

    The 4xx status code indicates that the message sent by the client is incorrect and cannot be processed by the server.

    “400 Bad Request” indicates that there is an error in the request message sent by the client.

    “403 Forbidden” indicates that the server forbids access to the resource; it does not mean the client’s request is malformed.

    “404 Not Found” indicates that the requested resource does not exist or cannot be found on the server, so it cannot be provided to the client.

  • 5xx

    The 5xx status code indicates that the client’s request was correct but the server encountered an internal error while processing it; these are server-side error codes.

    “500 Internal Server Error”, like 400, is a generic error code: it tells us nothing about what error actually occurred on the server.

    “501 Not Implemented” indicates that the functionality requested by the client is not yet supported, similar to “coming soon, stay tuned.”

    “502 Bad Gateway” is usually returned when the server acts as a gateway or proxy: the gateway itself works properly, but an error occurred on the back-end server.

    “503 Service Unavailable” indicates that the server is busy and temporarily unable to respond, similar to “the service is busy, please try again later.”
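
These codes are easy to observe from any HTTP client. Below is a small sketch using the third-party requests library (assumed to be installed); the URLs are only examples:

    import requests

    # 2xx: a normal successful request; body data is available in r.content
    r = requests.get("https://api.github.com/users/octocat")
    print(r.status_code)                              # e.g. 200

    # 3xx: disable automatic redirects to see the Location header ourselves
    r = requests.get("http://github.com", allow_redirects=False)
    print(r.status_code, r.headers.get("Location"))   # e.g. 301 and the new URL

    # 4xx: a resource that does not exist
    r = requests.get("https://api.github.com/no-such-path")
    print(r.status_code)                              # e.g. 404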

Question: What about the OSI seven-layer model and the TCP/IP four-layer/five-layer model?

  • Application layer: provides data transmission services for specific applications, such as the HTTP and DNS protocols. The unit of data is the message.
  • Transport layer: provides general data transfer services for processes. Since there are many application-layer protocols, defining general transport-layer protocols makes it possible to support the ever-growing number of application-layer protocols. The transport layer consists of two protocols: the Transmission Control Protocol (TCP), which provides a connection-oriented, reliable data transfer service whose data unit is the segment; and the User Datagram Protocol (UDP), which provides a connectionless, best-effort data transfer service whose data unit is the user datagram. TCP mainly provides an integrity service, while UDP mainly provides a timeliness service.
  • Network layer: provides data transmission services between hosts, whereas the transport layer provides data transfer services for the processes in those hosts. The network layer encapsulates the segments or user datagrams passed down from the transport layer into packets.
  • Data link layer: the network layer targets data transmission between hosts, but there may be many links between hosts. The link-layer protocol provides data transmission services between hosts on the same link. The data link layer encapsulates the packets passed down from the network layer into frames.
  • Physical layer: considers how to transfer the data bit stream over the transmission medium, rather than the specific transmission medium itself. The role of the physical layer is to mask, as much as possible, the differences in transmission media and communication methods so that the data link layer does not have to be aware of them.

Question: What is the flow of data transmission?

Data encapsulation process (like packaging a parcel for delivery):

  • Application layer transmission

    This can be understood as a translation process (a computer-level translation, of course): the application layer encodes the data into binary.

  • Transport layer transmission (segments): The transport layer divides the upper-layer data into multiple segments (to limit the impact of transmission errors) and prepends a TCP header to each segment. The TCP header contains a key field, the port number, which ensures the data reaches the right upper-layer application.

  • Network layer transmission (packets): The network layer encapsulates the upper-layer data again with an IP header, which contains a key field, the IP address, identifying the logical network address.

  • Data link layer transmission (frames): The data link layer encapsulates the upper-layer data again with a MAC header, which contains a key field, the MAC address. This can be understood as a physical address hardwired into the hardware, as unique as a personal bank card number. The trailer added in this encapsulation step is not discussed here.

  • Physical layer transmission (bit stream): The physical layer converts the binary data from the upper layers into electrical signals for transmission over the network, as sketched below.
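
A toy sketch of this layering, assuming made-up header contents purely for illustration (real headers are binary structures, not strings):

    # Each layer simply prepends its own "header" to the payload it receives.
    app_data = b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n"    # application layer
    segment  = b"[TCP header: ports, seq]" + app_data            # transport layer
    packet   = b"[IP header: IP addresses]" + segment            # network layer
    frame    = b"[MAC header: MAC addresses]" + packet           # data link layer
    bits     = "".join(f"{byte:08b}" for byte in frame)          # physical layer: bit stream
    print(bits[:64], "...")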

Question: What is the packet structure of HTTP?

Request message:

Response message:

It should be noted that a GET request usually has no body; a body appears in methods such as POST and PUT.
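
As a concrete illustration with hypothetical values: a request message consists of a request line, header fields, a blank line, and an optional body, for example

    POST /users HTTP/1.1
    Host: api.github.com
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 15

    name=mlx&age=10

and a response message consists of a status line, header fields, a blank line, and the body, for example

    HTTP/1.1 201 Created
    Content-Type: application/json
    Content-Length: 23

    {"name":"mlx","age":10}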

Question: What are the HTTP request methods?

  • GET

    Used to obtain resources

    Does not modify server data

    Does not send a body

    GET includes its parameters in the URL

    GET /users/1 HTTP/1.1
    Host: api.github.com
  • POST

    Used to add or modify resources

    The content sent to the server is written inside the Body

    POST /users HTTP/1.1
    Host: api.github.com
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 15

    name=mlx&age=10

  • PUT

    Used to modify resources

    The content sent to the server is written inside the Body

    PUT /users/1 HTTP/1.1
    Host: api.github.com
    Content-Type: application/x-www-form-urlencoded
    Content-Length: 8

    name=mlx

  • DELETE

    Used to delete a resource

    Does not send a body

    DELETE /users/1 HTTP/1.1
    Host: api.github.com

  • HEAD

    Used in the same way as GET

    The only difference from GET is that the response returned has no body
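
A quick sketch of issuing these methods from code, using the third-party requests library (the api.github.com paths are placeholders, as above):

    import requests

    base = "https://api.github.com"
    requests.get(f"{base}/users/1")                                   # read a resource
    requests.post(f"{base}/users", data={"name": "mlx", "age": 10})   # create a resource
    requests.put(f"{base}/users/1", data={"name": "mlx"})             # modify a resource
    requests.delete(f"{base}/users/1")                                # delete a resource
    requests.head(f"{base}/users/1")                                  # like GET, but the response has no body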

Question: What’s the difference between HTTP’s many request methods?

A URL describes a network resource, and POST, GET, PUT, and DELETE are the create, read, update, and delete operations on that resource!

New terms need to be introduced: security and idempotence

Concepts of security and idempotence:

  • In the HTTP protocol, “safe” means that the request method does not “corrupt” resources on the server.
  • “Idempotent” means that performing the same operation multiple times yields “the same” result.

The GET method is secure and idempotent because it is a “read-only” operation, and the data on the server is safe no matter how many times it is operated, and the result is the same each time.

POST is not secure because it is an “add or submit data” operation, modifying resources on the server, and multiple submissions create multiple resources, so it is not idempotent.

PUT and DELETE modify resources on the server, so they are not safe, but they are idempotent.

To highlight the differences between GET and POST:

  • GET is harmless when the browser navigates back, while POST resubmits the request (this relates to the idempotence discussed above).
  • The URL generated by GET can be bookmarked, but a POST cannot.
  • GET requests are actively cached by browsers, whereas POST requests are not, unless set manually.
  • GET requests can only be URL-encoded, while POST supports multiple encoding methods.
  • GET request parameters are retained in browser history, while POST parameters are not.
  • GET passes parameters in the URL, which has length limits, whereas POST does not.
  • GET accepts only ASCII characters in its parameters, while POST has no such restriction.
  • GET is less secure than POST because its parameters are exposed directly in the URL, so it should not be used to pass sensitive information.
  • GET parameters are passed through the URL, while POST parameters are placed in the request body.

However, these distinctions come with caveats. HTTP runs on top of TCP/IP, so the bottom layer of both GET and POST is TCP/IP; that is, both GET and POST use TCP connections, and technically they can do the same things. The differences come from semantics and from server or browser conventions. There is one more notable difference:

For a GET request, the browser sends both HTTP headers and data, and the server responds with 200.

For POST, the browser first sends the headers, the server responds with 100 Continue, the browser then sends the data, and the server responds with 200 OK (returning data).

GET generates a TCP packet; POST generates two TCP packets.

Question: How to establish a connection after creating a packet?

HTTP is based on TCP/IP, so HTTP requests are carried over TCP/IP connections.

What is TCP?

  • TCP provides a connection-oriented, reliable byte-stream service
  • In a TCP connection, only two parties communicate with each other; broadcast and multicast cannot be used with TCP
  • TCP segments the data and uses cumulative acknowledgement to ensure that data arrives in order and without duplication
  • TCP uses a sliding-window mechanism for flow control and dynamically changes the window size for congestion control

TCP does not guarantee that the data will be received by the other party, because that is impossible. What TCP can do is deliver the data to the receiver if at all possible, and otherwise notify the user (by giving up on retransmission and breaking the connection). So TCP is not a 100% reliable protocol in an absolute sense; what it provides is reliable delivery of data, or reliable notification of failure.

Establishing a TCP connection requires a three-way handshake, and closing one requires a four-way wave

Three-way handshake

A TCP connection is established with three packets exchanged between the client and the server.

The purpose of the three-way handshake is to connect to a specified port on the server, establish a TCP connection, synchronize the sequence numbers and acknowledgement numbers of the two parties, and exchange TCP window size information. In socket programming, the three-way handshake is triggered when the client calls connect() (see the sketch after the steps below).

  • First handshake (SYN=1, seq=x):

    The client sends a TCP packet with the SYN flag set to 1, indicating the server port the client intends to connect to, and an initial sequence number x stored in the Sequence Number field of the header.

    After sending the packet, the client enters the SYN_SENT state.

  • Second handshake (SYN=1, ACK=1, seq=y, ACKnum=x+1):

    The server sends back an acknowledgement: both the SYN and ACK flag bits are set to 1. The server chooses its own ISN y, puts it in the Seq field, and sets the Acknowledgement Number to the client’s ISN plus 1, that is, x+1. After sending the packet, the server enters the SYN_RCVD state.

  • Third handshake (ACK=1, ACKnum=y+1)

    The client sends another packet with SYN set to 0 and ACK set to 1. It puts the server’s sequence number plus 1 (y+1) in the acknowledgement field and uses its own ISN plus 1 (x+1) as the sequence number.

    After the packet is sent, the client enters the ESTABLISHED state. When the server receives it, the TCP handshake is complete.
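
A minimal sketch of triggering the handshake from code (hostname and port are placeholders):

    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("example.com", 80))   # the SYN -> SYN+ACK -> ACK exchange happens here
    s.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    print(s.recv(1024).decode(errors="replace"))
    s.close()                        # initiates the four-way wave described next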

Four-way wave

Closing a TCP connection requires four packets, commonly called the four-way wave (sometimes described as an improved three-way handshake). Either the client or the server can actively initiate the wave. In socket programming, calling close() on either side triggers the wave (see the sketch after the steps below).

  • First wave (FIN=1, seq=x)

    Suppose the client wants to close the connection. The client sends a packet with the FIN flag set to 1, indicating that it has no more data to send but can still receive data.

    After sending the packet, the client enters the FIN_WAIT_1 state.

  • Second wave (ACK=1, ACKnum=x+1)

    The server acknowledges the client’s FIN packet, confirming that it has received the client’s request to close the connection but is not yet ready to close its own side.

    After sending the packet, the server enters the CLOSE_WAIT state. After receiving the acknowledgement packet, the client enters the FIN_WAIT_2 state and waits for the server to close the connection.

  • Third wave (FIN=1, seq=y)

    When the server is ready to close the connection, it sends a request to end the connection to the client. FIN is set to 1.

    After sending the packet, the server enters the LAST_ACK state and waits for the last ACK from the client.

  • Fourth wave (ACK=1, ACKnum=y+1)

    The client receives the close request from the server, sends an acknowledgement packet, and enters the TIME_WAIT state, in case the ACK is lost and the server retransmits its FIN.

    After receiving the acknowledgement packet, the server closes the connection and enters the CLOSED state.

    The client waits for a fixed period of time (2MSL, twice the Maximum Segment Lifetime); if it receives no retransmitted FIN from the server during that time, it assumes its ACK arrived, closes the connection, and enters the CLOSED state.
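
The other side of the earlier sketch: a toy server whose close() call takes part in the FIN exchange above (the port number is an arbitrary choice; it handles a single connection):

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 8080))
    srv.listen(1)
    conn, addr = srv.accept()          # completes the three-way handshake with a client
    conn.recv(1024)                    # read (part of) the request
    conn.sendall(b"HTTP/1.1 204 No Content\r\n\r\n")
    conn.close()                       # sends FIN; the peer's close() completes the four-way wave
    srv.close()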

The reason for three handshakes

  • First handshake: The client sends a network packet and the server receives it. In this way, the server can conclude that the sending capability of the client and the receiving capability of the server are normal.
  • Second handshake: The server sends the packet and the client receives it. In this way, the client can conclude that the receiving and sending capabilities of the server and the client are normal. However, the server cannot confirm whether the client’s reception capability is normal.
  • Third handshake: The client sends the packet and the server receives it. In this way, the server can conclude that the receiving and sending capabilities of the client are normal, and the sending and receiving capabilities of the server are also normal.

The reason for the four waves

After the client sends a FIN connection-release packet, the server receives it and enters the CLOSE_WAIT state. This state exists so that the server can finish sending any data it still has to send.

Question: Can you explain it in plain language?

Computer communication is essentially no different from human communication. Sending WeChat messages every day is also communication; once you understand how you chat on WeChat, you understand the TCP three-way handshake.

Xiao Ming owes me money, and I ask him for it on WeChat. The procedure goes like this:

  1. I send: Xiao Ming, are you there? I am mlx. (First handshake: confirm the other party is present and send my sequence number, Seq = mlx)
  2. Xiao Ming replies: I am Xiao Ming, I’m here, what’s up? (Second handshake: the receiver confirms the other party is there and knows the other party can reach him; he receives my Seq and sends his own sequence number, Xiao Ming)
  3. I reply: Great, you’re there; next let’s talk about paying back the money! (Third handshake: confirm the other party is there and has heard you; the third handshake can also carry data)

Could the handshake be done in only two steps?

Obviously not. Why?

Take the example above. Your last message, the whole carefully worded pitch about paying back the money, seems not to go through. You wait a long time, decide something is wrong, and send it again. Xiao Ming sees it and starts discussing the repayment with you, and just as the two of you, being close friends, agree that he will pay you back in a few days, the earlier copy of the message finally arrives. What do you think Xiao Ming feels at that moment? Surely something like: didn’t we just settle this, why are you pressing me again? He decides to pay the money back and stop being your friend.

Four-way wave

You are chatting with your girlfriend, it is 11:00 p.m., and you want to sleep, so you want to end the conversation.

  1. You: Honey, I want to go to sleep. (First wave: you send FIN and enter FIN_WAIT_1, waiting for your girlfriend’s acknowledgement)
  2. Girlfriend: OK, let me finish this episode and then I’ll sleep. (Second wave: she replies with an ACK and enters CLOSE_WAIT; she also knows she should sleep soon, so she watches at double speed. You are waiting for her to finish the show, i.e. FIN_WAIT_2)
  3. A while later, your girlfriend finishes the episode: Dear, I’m done watching, I can sleep now, good night. (Third wave: the other side is now waiting for you to say good night so she can go to sleep)
  4. You: OK dear, good night. (Fourth wave: your girlfriend gets the message, you are going to sleep too, and both of you fall asleep)

Earlier we said the client waits for a fixed period of time (2MSL, twice the Maximum Segment Lifetime) and, hearing nothing more from the server, closes the connection and enters the CLOSED state. So what is the point of waiting 2MSL?

Corresponding to step 4 above, after you send “good night” there are two possibilities:

  1. Your “good night” fails to send. In that case your girlfriend, seeing no message from you, will remind you once more before she goes to sleep.
  2. Your “good night” is sent successfully. Your girlfriend sees it, does not reply, and simply goes to sleep.

In both cases you have to wait. For how long? Suppose a WeChat message that cannot be delivered is given up after five minutes.

In the first case, the message you sent is delivered within 5 minutes at the latest or not at all, and your girlfriend’s reminder reaches you within another 5 minutes, so you only need to wait at most 10 minutes to know whether the message succeeded or failed. Ten minutes is exactly twice the maximum lifetime of a message, i.e. MSL × 2 = 2MSL.

Q: You say TCP is a reliable transmission. How is it guaranteed to be reliable?

TCP uses timeout retransmission to achieve reliable transmission: if a segment that has been sent is not acknowledged within the timeout period, the segment is retransmitted.

Question: What version of HTTP are you using? What’s the difference?

There are several important versions of HTTP: HTTP1.0, HTTP1.1, and HTTP2.0

Currently HTTP1.1 is the most widely used, and HTTP2.0 less so.

HTTP1.0 features

  • Stateless: the server does not track request state
  • Connectionless: the browser establishes a new TCP connection for each request

Stateless

To work around statelessness, the cookie/session mechanism can be used for identity authentication and state tracking

Connectionless

Being connectionless causes two kinds of performance problems:

1. Connections cannot be reused

Each request requires a new TCP connection (three-way handshake and four-way wave), resulting in low network utilization

2. Head-of-line blocking

HTTP1.0 specifies that the next request cannot be sent until the response to the previous request arrives. If the previous request is blocked, all subsequent requests are blocked as well

HTTP1.1 features

To address the performance flaws of HTTP1.0, HTTP1.1 came along

HTTP1.1 features:
  • Long connections: the Connection field is added, and setting it to keep-alive keeps the connection open
  • Pipelining: building on long connections, pipelining lets subsequent requests be sent without waiting for the first response, but responses are still returned in request order
  • Cache handling: the Cache-Control field is added
  • Resumable transfers (breakpoint transmission)

Long connections

HTTP1.1 keeps connections alive by default: while data is being transferred, the TCP connection stays open and continues to be used for subsequent transfers
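
A small sketch of reusing one TCP connection for several requests with Python’s standard http.client (the hostname and paths are placeholders):

    import http.client

    conn = http.client.HTTPConnection("example.com")    # one TCP connection
    for path in ("/", "/index.html"):
        conn.request("GET", path, headers={"Connection": "keep-alive"})
        resp = conn.getresponse()
        resp.read()                                      # drain the body before reusing the connection
        print(path, resp.status)
    conn.close()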

Pipelining

Building on long connections, let’s first look at request/response without pipelining:

TCP is not disconnected; the same channel is used

Request 1 --> Response 1 --> Request 2 --> Response 2 --> Request 3 --> Response 3

Pipelined request/response:

Request 1 --> Request 2 --> Request 3 --> Response 1 --> Response 2 --> Response 3

Even if the server prepares response 2 first, response 1 is still returned first, in request order

Although pipelining lets multiple requests be sent at once, responses are still returned sequentially, so it does not solve head-of-line blocking

Cache handling

When a browser requests a resource, it first checks whether it has a cached copy. If it does, the browser uses the cached resource directly instead of sending another request; if not, it sends the request

This is controlled by setting the Cache-Control field
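
As a hypothetical illustration, a response that lets the browser cache a stylesheet for an hour might carry headers like these; within max-age the browser reuses its local copy without a request, and afterwards it can revalidate (for example with If-None-Match) and receive the 304 Not Modified described earlier:

    HTTP/1.1 200 OK
    Cache-Control: max-age=3600
    ETag: "5e15153d"
    Content-Type: text/css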

Resumable transfers (breakpoint transmission)

When uploading or downloading a resource, it is divided into multiple parts that are transferred separately. If a network failure occurs, the transfer can resume from where it left off instead of starting over from the beginning, which improves efficiency

This is implemented with two header fields: the Range field sent in the client’s request and the Content-Range field returned in the server’s response
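
A hypothetical exchange requesting only the first kilobyte of a file (the file name and sizes are made up):

    GET /video.mp4 HTTP/1.1
    Host: example.com
    Range: bytes=0-1023

    HTTP/1.1 206 Partial Content
    Content-Range: bytes 0-1023/2097152
    Content-Length: 1024

    (first 1024 bytes of the file)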

The Host field

HTTP1.0 does not support it, because at the time it was assumed that each server corresponded to a single address. In HTTP1.1, both request and response messages support the Host field

HTTP2.0 features

  • Binary framing
  • Multiplexing: requests and responses are sent concurrently over a shared TCP connection
  • Header compression
  • Server push: the server can push additional resources to the client without an explicit request from the client

Binary framing

All transmitted information is divided into smaller messages and frames and encoded in binary format

Multiplexing

Building on binary framing, all access under the same domain name goes through the same TCP connection; HTTP messages are broken into independent frames and sent out of order, and the receiving end reassembles them based on the stream identifiers and header information

The differences

  1. The main difference between HTTP1.0 and HTTP1.1 is the move from connectionless to long connections
  2. The main difference between HTTP2.0 and 1.x is multiplexing

Q: What are the advantages and disadvantages of HTTP?

advantages

1. Simple

The basic HTTP packet format is header + body, and the header information is also in the form of key-value text, which is easy to understand and reduces the threshold for learning and using.

2. Flexible and easy to extend

The HTTP protocol does not rigidly fix its components, such as request methods, URI/URL, status codes, and header fields, allowing developers to customize and extend them.

3. Wide application and cross-platform

disadvantages

1. Statelessness is a double-edged sword

The benefit of statelessness is that, because the server does not memorize HTTP state, it needs no extra resources to record state information. This reduces the burden on the server and leaves more CPU and memory for serving requests.

The downside of statelessness is that, since the server has no memory, completing a series of related operations becomes very troublesome. For example, login -> add to shopping cart -> place order -> checkout -> pay is a series of operations that all need to know the user’s identity, but the server does not know these requests are related and asks for identity information every time.

2. Plaintext transmission is a double-edged sword

Plaintext means the information in transit is easy to read, for example through a browser’s F12 console or with Wireshark.

Being able to inspect traffic directly is a great convenience for debugging. But the flip side is that all HTTP information is exposed, the equivalent of the data running naked: along the long transmission path the content has no privacy and can easily be stolen.

3. HTTP is insecure and cannot verify the identities of the communicating parties

Q: How do you address your weaknesses?

For statelessness:

HTTP/1.1 introduced cookies to store state information.

A cookie is a small piece of data that the server sends to the user’s browser and that the browser saves locally. It is carried along when the browser sends further requests to the same server, telling the server that the requests come from the same browser. Since every subsequent request must carry the cookie data, there is some additional performance overhead.
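
A hypothetical exchange (the cookie name and value are made up): the server sets a cookie in its response, and the browser sends it back on later requests:

    HTTP/1.1 200 OK
    Set-Cookie: session_id=abc123; HttpOnly

    GET /cart HTTP/1.1
    Host: shop.example.com
    Cookie: session_id=abc123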

For plaintext transmission and security

HTTPS addresses HTTP’s insecurity by adding SSL/TLS between the TCP and HTTP layers to encrypt packets.

Q: What is HTTPS? What’s the difference from HTTP?

  • HTTP is the HyperText Transfer Protocol, and it transmits information in plaintext, so it carries security risks. HTTPS addresses HTTP’s insecurity by adding SSL/TLS between the TCP and HTTP layers to encrypt packets.
  • Establishing an HTTP connection is relatively simple: HTTP packets can be transmitted right after the TCP three-way handshake. With HTTPS, after the TCP three-way handshake an SSL/TLS handshake is also required before encrypted packets can be transmitted.
  • The HTTP port number is 80 and the HTTPS port number is 443.
  • HTTPS requires applying for a digital certificate from a Certificate Authority (CA) to ensure that the server’s identity is trusted.

HTTPS is not a new protocol. HTTP first talks to SSL (Secure Sockets Layer)/TLS, and SSL/TLS in turn talks to TCP, which means HTTPS communicates through a secure tunnel.

Using SSL/TLS, HTTPS provides encryption (against eavesdropping), authentication (against impersonation), and integrity protection (against tampering).

How does HTTPS address risk?

  • Hybrid encryption provides confidentiality of information, addressing the risk of eavesdropping.
  • A digest algorithm provides integrity: it generates a unique “fingerprint” for the data, which is used to verify that the data has not been altered, addressing the risk of tampering.
  • Putting the server’s public key into a digital certificate addresses the risk of impersonation.

Q: So what is hybrid encryption?

HTTPS uses a hybrid encryption mode that combines symmetric and asymmetric encryption:

  • Asymmetric encryption is used to exchange the “session key” before communication is established; asymmetric encryption is not used afterwards.
  • During communication, all plaintext data is encrypted with the symmetric “session key”.

What are symmetric and asymmetric encryption?

Symmetric key encryption

Symmetric-key encryption uses the same key for encryption and decryption. In short, you and your counterpart prepare the same codebook in advance and decrypt messages against it. The drawback is working out how to hand the codebook to your counterpart securely. Mainstream algorithms include DES, AES-GCM, and so on.

  • Advantages: fast operation speed;
  • Disadvantages: there is no secure way to deliver the key to the other party.

Asymmetric key encryption

Asymmetric key encryption, also known as public-key encryption, uses different keys for encryption and decryption.

The public key is available to everyone. After obtaining the receiver’s public key, the sender can use it to encrypt the message, and the receiver decrypts it with the private key. Mainstream algorithms include RSA, DSA, and so on.

  • Advantages: the public key can be handed to the other party safely, since it does not need to be kept secret.
  • Disadvantages: slow operation speed.

Many people don’t understand what public and private keys mean. Actually, let me rephrase it, and it’ll make sense. Asymmetric encryption is a combination of public lock and private key encryption.

Public lock corresponds to public key, in fact, I personally think public key is not a very good name, use public lock is better.

Let’s say I have a bunch of locks, and those locks are public keys, and those locks correspond to a key, and that key is the private key. Whoever wants to send me encrypted content, all they have to do is take the lock from me, lock up what they’re about to send, and then they can send it to me in the open, and I’m the only one who has the key, which means I’m the only one who can open the lock and see what’s inside.

Symmetric encryption is fast, but it is hard to hand the codebook to the other party securely. Asymmetric encryption can deliver a message to the other party safely, but it is slow. HTTPS pats its head and thinks: why not combine the two? Asymmetric encryption is secure for delivery, right? Then just use it to send the codebook (the session key) to the other party securely.

In fact, the main principle behind asymmetric encryption is the one-way trapdoor function.

A one-way trapdoor function is a special one-way function that has a trapdoor. It has two notable properties: it is one-way, and a trapdoor exists. One-way (irreversible) means that for a function y = f(x), it is easy to compute y given x, but hard to compute x = f^(-1)(y) given y; a one-way function is so named because it can only be evaluated in one direction. The trapdoor, also called a back door, means that there exists some value z such that, knowing z, x = f^(-1)(y) becomes easy to compute, but without z it cannot be computed. A function y = f(x) with this property is called a one-way trapdoor function, and z is called the trapdoor.

Therefore, HTTPS uses asymmetric encryption to deliver the symmetric session key: the codebook is locked with the public lock and sent to the other party. The other party receives the ciphertext, opens the lock with its own private key, obtains the codebook, and from then on the two sides communicate using symmetric encryption.
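
A toy sketch of this idea using the third-party cryptography package (RSA protects the session key, Fernet handles the symmetric traffic). Real TLS is far more involved, so treat this only as an illustration:

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.fernet import Fernet

    # Receiver: generate an RSA key pair and publish the public key (the "public lock").
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Sender: create a symmetric session key and send it locked with the public key.
    session_key = Fernet.generate_key()
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    locked_key = public_key.encrypt(session_key, oaep)

    # Receiver: only the holder of the private key can unlock the session key.
    unlocked_key = private_key.decrypt(locked_key, oaep)

    # From here on, both sides use fast symmetric encryption for the actual data.
    token = Fernet(unlocked_key).encrypt(b"hello over an HTTPS-like channel")
    print(Fernet(session_key).decrypt(token))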

Q: What are the optimizations for HTTP2.0?

The HTTP/2 protocol is based on HTTPS, so HTTP/2 security is also guaranteed

HTTP/2 compresses headers. If you make multiple requests at the same time and their headers are the same or similar, the protocol eliminates the duplicated parts for you. This is the HPACK algorithm

HTTP/2 no longer uses plain-text messages as HTTP/1.1 does; it fully adopts a binary format. Both header information and data bodies are binary, and together they are called frames: header frames and data frames. This is the binary framing mentioned above

HTTP/2 data packets are not sent sequentially, and consecutive packets within the same connection may belong to different responses, so each packet must be marked to indicate which response it belongs to. All the data packets of a single request or response form a Stream. Each stream is marked with a unique number; streams initiated by the client have odd numbers and those initiated by the server have even numbers

HTTP/2 allows multiple requests or responses to be in flight concurrently on a single connection, rather than in strict one-to-one order. Because the serial requests of HTTP/1.1 are removed, there is no need to queue, which eliminates the HTTP-level “head-of-line blocking” problem, reduces latency, and greatly improves connection utilization.

HTTP/2 also improves on the traditional “request-response” working mode to some extent: the server can actively send messages to the client instead of only responding passively. For example, when the browser requests just the HTML, the server can proactively push static resources such as JS and CSS files that are likely to be needed, reducing latency. This is called Server Push (also called Cache Push).

Q: HTTP2.0 you say so well, so what are its flaws?

The main problem with HTTP/2 is that multiple HTTP requests multiplex one TCP connection, and the underlying TCP protocol has no idea how many HTTP requests it is carrying. So once packet loss occurs, TCP’s retransmission mechanism kicks in, and all HTTP requests in that TCP connection must wait for the lost packet to be retransmitted.

In HTTP/1.1, if a single request is blocked in transit, all requests queued behind it are blocked as well

In HTTP/2, multiple requests reuse one TCP connection, so a single lost packet blocks all HTTP requests on that connection.

These problems are all rooted in the TCP transport layer, so HTTP/3 replaces the protocol under HTTP, switching from TCP to UDP! UDP does not care about ordering or packet loss, so it avoids both HTTP/1.1’s head-of-line blocking and HTTP/2’s problem of one lost packet forcing everything to wait for retransmission.

Question: What are the headers in HTTP packets?

General header fields

| Header field name | Description |
| --- | --- |
| Cache-Control | Controls caching behavior |
| Connection | Controls header fields that are not forwarded to proxies; manages persistent connections |
| Date | Date and time the message was created |
| Pragma | Message directives |
| Trailer | Header fields placed at the end of the message |
| Transfer-Encoding | Transfer encoding of the message body |
| Upgrade | Upgrade to another protocol |
| Via | Proxy server information |
| Warning | Error notification |

Request header fields

| Header field name | Description |
| --- | --- |
| Accept | Media types the user agent can handle |
| Accept-Charset | Preferred character sets |
| Accept-Encoding | Preferred content encodings |
| Accept-Language | Preferred (natural) languages |
| Authorization | Web authentication information |
| Expect | Expects particular behavior from the server |
| From | Email address of the user |
| Host | Server on which the requested resource resides |
| If-Match | Compares entity tags (ETag) |
| If-Modified-Since | Compares the resource’s update time |
| If-None-Match | Compares entity tags (opposite of If-Match) |
| If-Range | Sends a range request for the entity if the resource has not been updated |
| If-Unmodified-Since | Compares the resource’s update time (opposite of If-Modified-Since) |
| Max-Forwards | Maximum number of hops |
| Proxy-Authorization | Client authentication information for a proxy server |
| Range | Byte range request for the entity |
| Referer | URI of the page from which the request originated |
| TE | Preferred transfer encodings |
| User-Agent | HTTP client program information |

Response header fields

| Header field name | Description |
| --- | --- |
| Accept-Ranges | Whether byte range requests are accepted |
| Age | Elapsed time since the resource was created |
| ETag | Resource matching information |
| Location | Redirects the client to the specified URI |
| Proxy-Authenticate | Proxy server’s authentication challenge to the client |
| Retry-After | When the client should retry the request |
| Server | HTTP server installation information |
| Vary | Proxy server cache management information |
| WWW-Authenticate | Server’s authentication challenge to the client |

Entity header fields

| Header field name | Description |
| --- | --- |
| Allow | HTTP methods supported by the resource |
| Content-Encoding | Encoding applied to the entity body |
| Content-Language | Natural language of the entity body |
| Content-Length | Size of the entity body |
| Content-Location | Alternative URI for the resource |
| Content-MD5 | Message digest of the entity body |
| Content-Range | Position range of the entity body |
| Content-Type | Media type of the entity body |
| Expires | Date and time when the entity body expires |
| Last-Modified | Date and time the resource was last modified |