As a web developer who works with browsers every day, you should have a solid understanding of HTTP, yet many of us find we don't know it well enough. Some web developers only know a handful of status codes (200, 404, and 500) and only the GET and POST methods. I wanted to learn HTTP in more depth, and that is why I wrote this article.

The article is fairly basic. I hope it gives you a little help, and writing it has also deepened my own understanding of HTTP.

1. URI, URL, and URN

A URI (Uniform Resource Identifier) is the umbrella term covering both URLs and URNs, and is used to identify a unique resource on the Internet. Because there is no mature scheme for URNs, in practice URIs and URLs are effectively the same thing.

A URL (Uniform Resource Locator) is used to describe a specific location of a resource on a specific server.

The full URL path consists of nine sections:

<scheme>://<user>:<password>@<host>:<port>/<path>;<params>?<query>#<frag>

We will explain the meanings of each of the nine parts in turn:

  1. Scheme: the name of the protocol. Common schemes include http, https, and file. It has no default value and must be followed by ://
  2. User: the user name. Some resources require a user name for access, but note that the user name appears in plain text in the URL. When not specified, it defaults to anonymous.
  3. Password: the password for accessing certain resources, used together with the user name and separated from it by a colon. Like the user component, it offers little security and is not recommended. The default depends on the browser: Internet Explorer used IEUser, and Netscape Navigator used Mozilla.
  4. Host: the name of the server hosting the resource on the Internet. It can be a host name or an IP address.
  5. Port: the port number the server listens on. For HTTP the default port is 80.
  6. Path: the location of the resource on the server, separated from the preceding components by /.
  7. Params: some schemes use this component to pass input parameters, written as key-value pairs. Multiple parameters can be given, separated by semicolons (;).
  8. Query: a query string, typically used to narrow down the requested resource. Like params, it is written as key-value pairs; multiple query parameters are separated by an ampersand (&), and the whole query is separated from the path by ?.
  9. Frag: an anchor within the resource that the browser can jump to. For a large text document, for example, the URL points to the whole document, but the frag can point to a single section. It is separated from the rest of the URL by #.
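To make the decomposition concrete, here is a quick sketch using the WHATWG URL API available in browsers and Node.js. The address below is made up for illustration; note that this API does not expose the params section separately, since it is rarely used today.

```javascript
// Decompose a sample URL into the parts described above
const u = new URL("https://user:pass@example.com:8080/docs/page;type=a?lang=en#section2");

console.log(u.protocol); // "https:"            -> scheme
console.log(u.username); // "user"              -> user
console.log(u.password); // "pass"              -> password
console.log(u.hostname); // "example.com"       -> host
console.log(u.port);     // "8080"              -> port
console.log(u.pathname); // "/docs/page;type=a" -> path (params stay inside it)
console.log(u.search);   // "?lang=en"          -> query
console.log(u.hash);     // "#section2"         -> frag
```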

A URN (Uniform Resource Name) can still locate a resource after it has been moved, but since there is no mature implementation, I won't go into much detail.

2. HTTP message

In fact, when I first heard the word, I thought "message" was a kind of leopard print (in Chinese, 报文 "message" and 豹纹 "leopard print" sound alike), which made for a good joke. 😂

HTTP messages are composed of simple strings. Messages sent from the client to the server are called request messages, and messages sent from the server to the client are called response messages; there are no other kinds. The formats of request and response messages are similar except for the start line. A message can be divided into three parts:

  1. Start line: every HTTP message begins with a start line. The start line of a request message says what to do; the start line of a response message says what happened.
    • Request start line format: method + request URL + version
    • Response start line format: version + status code + reason phrase
  2. Header fields: a message can have zero or more headers. Each header consists of a name, a colon (:), an optional space, a value, and a CRLF (carriage return + line feed). The header section is terminated by an empty line (a bare CRLF), which separates the headers from the body.
  3. Body: the entity body contains a block of arbitrary data. Not all messages carry a body; sometimes a message simply ends with the blank line.
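To make the three parts concrete, here is a minimal request message assembled by hand as a sketch (the host and form data are invented):

```javascript
// "\r\n" is the CRLF that terminates each line; an empty line
// (a bare CRLF) separates the headers from the body.
const CRLF = "\r\n";
const request =
  "POST /login HTTP/1.1" + CRLF +                            // start line: method + URL + version
  "Host: example.com" + CRLF +                               // header fields
  "Content-Type: application/x-www-form-urlencoded" + CRLF +
  "Content-Length: 8" + CRLF +
  CRLF +                                                     // blank line ends the header section
  "name=bob";                                                // entity body (8 bytes)

console.log(request);
```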

3. HTTP methods

HTTP methods define operations on resources, but note that not every method is supported by every server; it depends on each server's configuration.

Usually we use the following methods:

  1. GET: asks the server to send a resource. No entity is carried with the request.
  2. HEAD: similar to GET, but the response it returns contains only the headers, not the entity body. It is mainly used to inspect a resource: to check whether it exists, or to test whether it has been modified.
  3. POST: sends data to the server for processing, mainly used for form submission. An entity is carried with the request.
  4. PUT: stores the request body on the server. An entity is carried with the request.
  5. TRACE: traces a request that may pass through proxy servers on its way to the origin server, as a diagnostic. The origin server returns a TRACE response carrying, in its entity body, the original request message it received, so the client can compare them and see whether the data changed along the way.
  6. OPTIONS: asks which methods can be executed on the server. No entity is carried with the request.
  7. DELETE: asks the server to delete a resource, but the client cannot guarantee the deletion will happen, because the HTTP specification allows the server to override the request without notifying the client.
  8. CONNECT: establishes a connection tunnel through a proxy server.
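As a rough summary of the descriptions above, the eight methods can be tabulated by whether they are read-only and whether they normally carry an entity. This is just a sketch reflecting the list above, not a normative table:

```javascript
// "readOnly" = no server-side change expected; "body" = whether a
// request entity is normally carried, per the descriptions above.
const methods = {
  GET:     { readOnly: true,  body: false },
  HEAD:    { readOnly: true,  body: false },
  POST:    { readOnly: false, body: true  },
  PUT:     { readOnly: false, body: true  },
  TRACE:   { readOnly: true,  body: false },
  OPTIONS: { readOnly: true,  body: false },
  DELETE:  { readOnly: false, body: false },
  CONNECT: { readOnly: false, body: false },
};

console.log(Object.keys(methods).length); // 8
```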

Can HTTP methods be customized?

This is also an interview question. At first I said no, but it seems I was wrong: HTTP supports extension methods.

Extension methods usually refer to methods not defined in HTTP/1.1, that is, anything beyond the eight above. Note, however, that extension methods should not be invented arbitrarily; they should be understood by the HTTP applications involved.

4. HTTP status code

The status code defines the server's response to the request.

HTTP status codes are composed of three digits and are divided into five categories:

  1. 1xx: informational status codes
  2. 2xx: success status codes
  3. 3xx: redirection status codes
  4. 4xx: client error status codes
  5. 5xx: server error status codes
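Since the category is just the first digit, classifying a status code is a one-liner; a small sketch:

```javascript
// Map a status code to its category using the first digit
function statusClass(code) {
  const names = {
    1: "informational",
    2: "success",
    3: "redirection",
    4: "client error",
    5: "server error",
  };
  return names[Math.floor(code / 100)] || "unknown";
}

console.log(statusClass(200)); // "success"
console.log(statusClass(404)); // "client error"
```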

Let’s look at some specific typical status codes:


101 Switching Protocols: when HTTP is upgraded, for example to WebSocket, and the server agrees to the change, it sends status code 101.


200 OK: the request succeeded; the entity body contains the requested resource.

201 Created: a request that creates an object on the server succeeded. The entity body of the response should contain the URLs that refer to the created resource.

202 Accepted: the request was accepted, but the server has not yet acted on it.

203 Non-Authoritative Information: the entity headers contain information that comes not from the origin server but from a copy of the resource.

204 No Content: similar to 200, but without an entity body. It is mainly used to update a document (such as refreshing a form page) without the browser navigating away.

205 Reset Content: tells the browser to clear all HTML form elements on the current page.

206 Partial Content: a partial or range request was executed successfully, mainly used for chunked download and resumable transfer. The response must contain Content-Range, Date, and ETag or Content-Location headers.


300 Multiple Choices: returned when a requested URL actually points to multiple resources, such as English and French versions of an HTML document on the server. A selection list is returned along with this code, allowing the client to choose the version it wants.

301 Moved Permanently: used when the requested URL has been moved. The response should include the current URL of the resource; this is a permanent redirect.

302 Found: similar to 301, but a temporary redirect.

303 See Other: tells the client to use another URL to obtain the resource; the new URL is in the headers of the response. Its primary purpose is to let the response to a POST request redirect the client to a resource.

304 Not Modified: when a client makes a conditional GET, the server can determine whether the page has been updated; if it has not, it returns this status code, i.e. the negotiated cache hits.

305 Use Proxy: the resource must be accessed through a proxy.

307 Temporary Redirect: similar to 301, except the response header gives a temporary URL for the resource.


400 Bad Request: tells the client it sent a malformed request.

401 Unauthorized: the client does not have permission to access the resource.

403 Forbidden: the request was refused by the server. If the server wants to state the reason, it can write a description in the entity body, but usually this status code means the server does not want to give a reason.

404 Not Found: the requested URL was not found on the server.

405 Method Not Allowed: the request's method is not allowed by the server for this resource. The Allow header in the response tells the client which methods are supported.

406 Not Acceptable: the client can specify which entity types it is willing to accept; if the server has no entity matching those constraints, it returns this status code. In other words, the resource cannot satisfy the client's conditions.

407 Proxy Authentication Required: similar to 401, but used by proxy servers that require authentication for a resource.

408 Request Timeout: the server waited too long for the request.

409 Conflict: the request conflicts with the current state of the resource.

410 Gone: similar to 404, except the server once held the resource.

411 Length Required: the server requires a Content-Length header in the request.

412 Precondition Failed: the client made a conditional request and one of the preconditions failed.

413 Request Entity Too Large: the entity sent by the client is too large.

414 Request-URI Too Long: the requested URL is too long.

415 Unsupported Media Type: the server cannot support the content type of the entity sent by the client.

416 Requested Range Not Satisfiable: a range of the resource was requested, but the range is invalid or cannot be satisfied.

417 Expectation Failed: the request's Expect header contains an expectation the server cannot meet.

429 Too Many Requests: the client has sent too many requests.

431 Request Header Fields Too Large: the request's header fields are too large.


500 Internal Server Error: the server encountered an error.

501 Not Implemented: the client's request is beyond the server's capabilities.

502 Bad Gateway: the server itself is fine, but an error occurred while it was accessing an upstream server on the client's behalf.

503 Service Unavailable: the server is busy and temporarily cannot respond.

504 Gateway Timeout: similar to 408, except the response comes from a gateway or proxy that timed out while waiting for another server to respond to its request.

505 HTTP Version Not Supported: the server received a request in a protocol version it cannot support.

5. Cross domain

Why cross-domain?

Browsers have an internal security mechanism called the same-origin policy. The same-origin policy is a convention, and it is the core and most basic security feature of browsers; without it, normal browser functionality could be at risk. The web is built on the same-origin policy, and the browser is merely one implementation of it. The policy prevents JavaScript in one origin from interacting with the content of another origin. Two pages are same-origin when they share the same protocol, host, and port.

What is cross-domain?

The above makes it clear: whenever the protocol, domain name, or port of a requested URL differs in any way from that of the current page's URL, the request is cross-origin, and the cross-origin response will be intercepted by the browser. Note that this is the browser's behavior: the request has actually reached the server and the data has been returned successfully, but the browser intercepts the response.
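The same-origin check itself is easy to sketch with the URL API (the sample addresses are made up):

```javascript
// Two URLs are same-origin when scheme, host, and port all match
function sameOrigin(a, b) {
  const ua = new URL(a);
  const ub = new URL(b);
  return ua.protocol === ub.protocol &&
         ua.hostname === ub.hostname &&
         ua.port === ub.port;
}

console.log(sameOrigin("http://a.com/x", "http://a.com/y"));    // true  (only path differs)
console.log(sameOrigin("http://a.com", "https://a.com"));       // false (scheme differs)
console.log(sameOrigin("http://a.com", "http://a.com:8080"));   // false (port differs)
```

In practice, comparing `new URL(a).origin === new URL(b).origin` does the same thing.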

There are several restrictions on cross-domain:

  1. They cannot read or modify each other's DOM
  2. They cannot access each other's Cookies, IndexedDB, or LocalStorage
  3. They cannot send XMLHttpRequest requests to each other

How to solve cross-domain problems?

Usually, "solving cross-origin problems" refers to the XMLHttpRequest restriction; after all, data communication between the front end and back end is the most important part. The common solutions are as follows.


JSONP

The XMLHttpRequest object is subject to the same-origin policy, but the script tag is not, so you can send a GET request by filling its src attribute with the target address. This is the core principle of JSONP.

Using it is easy: create a script tag, fill its src attribute with the URL, and provide a callback function to handle the response.

```javascript
let jsonp = ({ url, params, callback }) => {
  // Fall back to a random global callback name if none was supplied
  callback = callback || `jsonp_${Math.random().toString(36).slice(2)}`;
  // Concatenate the URL with params and the callback name
  const getURL = () => {
    let dataStr = "";
    for (let key in params) {
      dataStr += `${key}=${params[key]}&`;
    }
    dataStr += `callback=${callback}`;
    return `${url}?${dataStr}`;
  };
  // Create the script element and add it to the current document
  let scriptEle = document.createElement("script");
  scriptEle.src = getURL();
  return new Promise((resolve) => {
    // The server responds with `<callback>(<data>)`, which calls this global
    window[callback] = (data) => {
      resolve(data);
      document.body.removeChild(scriptEle);
    };
    document.body.appendChild(scriptEle);
  });
};
```

JSONP was probably one of the first approaches to cross-origin requests. It is very limited and can only be used for GET requests, but the older a technique is, the more compatible it tends to be. 🤣 🤣 🤣


CORS

CORS is a W3C standard that stands for Cross-Origin Resource Sharing. CORS requires support from both browser and server; currently all major browsers support it, though for Internet Explorer the version cannot be lower than IE10. It allows browsers to send XMLHttpRequest requests to cross-origin servers, overcoming the limitation that AJAX could only be used within the same origin.

As described in MDN's "HTTP access control (CORS)", CORS divides requests into two kinds: simple requests and preflighted requests (non-simple requests).

A simple request

For a simple request, the client identifies its source via the Origin field in the request message. After receiving the request, the server checks Origin and, if it is allowed, adds the corresponding Access-Control-Allow-Origin field to the response. If Origin is not within the allowed scope, the server does not add the field, and the browser then intercepts the response. If the server returns Access-Control-Allow-Origin: *, the resource can be accessed from any origin.
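The server-side decision for a simple request can be sketched as a small function (the allowlist and origins here are invented examples):

```javascript
// Compare the request's Origin against an allowlist and produce
// the CORS response header (or nothing, which the browser blocks)
function corsHeader(origin, allowlist) {
  if (allowlist.includes(origin)) {
    return { "Access-Control-Allow-Origin": origin };
  }
  return {}; // no header -> the browser intercepts the response
}

const allowed = ["http://foo.example"];
console.log(corsHeader("http://foo.example", allowed));
// { 'Access-Control-Allow-Origin': 'http://foo.example' }
console.log(corsHeader("http://evil.example", allowed)); // {}
```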

Preflight request (non-simple request)

A preflighted request must first send an OPTIONS request to the server to find out whether the server will allow the actual request. Preflighting prevents cross-origin requests from having unexpected effects on the server's user data. Let's look at the example from MDN:

```javascript
var invocation = new XMLHttpRequest();
var url = "http://bar.other/resources/post-here/";
// The body in MDN's example is an XML document; a short stand-in is used here
var body = "<person><name>Arun</name></person>";

function callOtherDomain() {
  if (invocation) {
    invocation.open("POST", url, true);
    invocation.setRequestHeader("Content-Type", "application/xml");
    invocation.onreadystatechange = handler; // handler is defined elsewhere in MDN's example
    invocation.send(body);
  }
}
```

You can see the preflight exchange:

```
OPTIONS /resources/post-here/ HTTP/1.1
Host: bar.other
User-Agent: Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10.5; en-US; rv:1.9.1b3pre) Gecko/20081130 Minefield/3.1b3pre
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-us,en;q=0.5
Accept-Encoding: gzip,deflate
Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
Connection: keep-alive
Origin: http://foo.example
Access-Control-Request-Method: POST
Access-Control-Request-Headers: X-PINGOTHER, Content-Type

HTTP/1.1 200 OK
Date: Mon, 01 Dec 2008 01:15:39 GMT
Server: Apache/2.0.61 (Unix)
Access-Control-Allow-Origin: http://foo.example
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: X-PINGOTHER, Content-Type
Access-Control-Max-Age: 86400
Vary: Accept-Encoding, Origin
Content-Encoding: gzip
Content-Length: 0
Keep-Alive: timeout=2, max=100
Connection: Keep-Alive
Content-Type: text/plain
```

As you can see above, the preflight uses the OPTIONS method. The Access-Control-Request-Method and Access-Control-Request-Headers fields in the request header state the method the actual request will use and the custom header fields it will carry. After receiving the preflight, the server uses these to decide whether to allow the request.

If it matches, the response to the precheck request is returned:

```
Access-Control-Allow-Origin: http://foo.example
Access-Control-Allow-Methods: POST, GET, OPTIONS
Access-Control-Allow-Headers: X-PINGOTHER, Content-Type
Access-Control-Max-Age: 86400
```

The actual request is sent after the preflight succeeds.


Proxy

Local development now often uses a proxy for cross-origin handling; note that this applies to local development. Webpack's proxy is just a layer in front: requests to specified paths are proxied to the address provided by the back end, with a Node server running behind it. Webpack uses the http-proxy-middleware middleware to forward requests to other servers.


Nginx

Nginx is often used as a reverse proxy, so I won't go into detail here.

6. HTTP proxy

The simplest HTTP involves only a client and a server, but in some cases we do not want the client to access the server directly. A proxy can then act as an intermediary: the client accesses the proxy, and the proxy accesses the server. This reduces the security risk of direct access to the server, and the client's traffic can also be filtered. For example, schools can use proxies to prevent students from accessing "you know" websites.

There are two types of proxy: the client-side proxy (forward proxy) and the server-side proxy (reverse proxy). There are many reverse-proxy implementations, which will not be described in detail below.

Implementation of the client proxy

  1. Manual configuration

This is configured through the browser settings, which vary from browser to browser. Google Chrome can install the Proxy SwitchyOmega extension.

  2. PAC file

The client is given a URI pointing to a proxy auto-configuration file written in JavaScript; it fetches and runs the file to determine whether a proxy should be used and, if so, which proxy server.

  3. WPAD

Some browsers support the Web Proxy Auto-Discovery Protocol (WPAD), which lets the browser automatically detect the "configuration server" from which it can download an auto-configuration file.

Related field

The Via field is especially important if you want to trace the flow of messages through proxies to diagnose problems.

Via is a general field that appears in both request and response headers. When a message passes through a proxy server, the proxy appends its own information to Via, separated by commas.

For example, when there are two proxy servers between the client and the server:

Client -> Proxy 1 -> Proxy 2 -> Server

The Via field in the request message received by the server would be: Via: proxy_service1, proxy_service2

The client receives the Via field in the response packet in the reverse order:

Via: proxy_service2, proxy_service1
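Each proxy's handling of Via can be sketched as a simple append (the proxy names are the placeholders used above):

```javascript
// Each proxy appends its own pseudonym to Via, comma-separated
function appendVia(headers, proxyName) {
  headers.Via = headers.Via ? `${headers.Via}, ${proxyName}` : proxyName;
  return headers;
}

let req = {};
appendVia(req, "proxy_service1"); // first hop
appendVia(req, "proxy_service2"); // second hop
console.log(req.Via); // "proxy_service1, proxy_service2"
```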

Via is useful, but it has a limitation: it only lets the client and the origin server determine that proxies are present; it does not reveal the real information about the other side.

So let me introduce two more fields.


X-Forwarded-For

X-Forwarded-For means "forwarded for whom". It does the same job as Via: information about each passing proxy server is appended to the field. Unlike Via, however, what gets appended is the IP address of the requesting party, so the leftmost entry is the client's IP address.

Seems to be an optimal solution, but there are more or less problems:

  1. To append to X-Forwarded-For, each proxy has to parse and modify the HTTP headers. This increases workload and degrades performance.
  2. X-Forwarded-For requires modifying the original message, which is not allowed in some cases.


X-Real-IP

X-Real-IP, as the field name indicates, carries the real client IP address and does not accumulate information about the intermediaries between client and server.

The agency agreement

As noted above, X-Forwarded-For has some problems. To address them, the PROXY protocol was proposed; it was defined by the proxy software HAProxy.

To use the plaintext version, you simply add a line in this format above the HTTP request line:

```
// PROXY + TCP4/TCP6 + requester address + receiver address + request port + receive port
PROXY TCP4 1111 2222
GET / HTTP/1.1
```

7. Caching

Caching is often asked about in interviews because of its relevance to project optimization.

There are two kinds of caches, one is strong cache, the other is negotiated cache.

Strong cache

Strong caching is controlled by the Cache-Control field. Its directives include:

  1. public: any node along the request/response path, including proxy servers and the client, may cache the resource.
  2. private: only the browser that initiated the request may cache it.
  3. no-cache: the cache must be revalidated with the server before use, i.e. it enters the negotiated-cache phase.

Expiration

That is, when the cache expires.

max-age=&lt;seconds&gt;: the expiration time, i.e. the number of seconds after which the cached response expires.

s-maxage=&lt;seconds&gt;: effective only on proxy servers, where it overrides max-age.

stale-while-revalidate=&lt;seconds&gt;: after max-age expires, if the returned resource carries this directive, the expired resource may continue to be used for the given number of seconds without waiting on a new request to the server.
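The expiration check a cache performs against max-age can be sketched as follows (a simplified model that ignores the Age/Date header arithmetic a real cache does):

```javascript
// Decide whether a cached response is still fresh: compare its age
// (seconds since it was stored) against the max-age directive
function isFresh(responseHeaders, ageSeconds) {
  const cc = responseHeaders["cache-control"] || "";
  const m = cc.match(/max-age=(\d+)/);
  if (!m) return false; // no max-age -> fall through to negotiation
  return ageSeconds < Number(m[1]);
}

console.log(isFresh({ "cache-control": "max-age=600" }, 60));  // true: still fresh
console.log(isFresh({ "cache-control": "max-age=600" }, 601)); // false: expired
```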


must-revalidate: once the cache expires, the data must be fetched from the origin server again to verify whether it has really expired.

proxy-revalidate: after the cache expires, the proxy server must request the origin server again for validation.


no-store: no node may cache the response.

no-transform: tells proxy servers not to modify the resource.

Negotiated cache

Last-Modified: the time of the last modification, usually used with If-Modified-Since or If-Unmodified-Since. When the browser first requests a resource, the server returns a Last-Modified response header giving the file's last modification time. When the browser requests the resource again, it sends that time back in the If-Modified-Since request header. The server compares it with the resource's current modification time; if they differ, it returns the new resource, the same as an ordinary request.

ETag: a data signature. A resource generates a unique signature from its content, and the signature changes as soon as the data is modified. It is used with If-Match or If-None-Match, in the same way Last-Modified is used with If-Modified-Since. When the browser first requests a resource, the server's response carries an ETag header, which is a hash of the resource. When the browser requests the resource again, it sends the hash back in the If-None-Match request header. The server compares it with the resource's current hash: if they match, it returns a 304 status code and the browser uses its cache; if not, the process is the same as a normal request for the resource.
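The server side of ETag negotiation can be sketched like this (the resource and ETag value are invented):

```javascript
// If the client's If-None-Match equals the resource's current ETag,
// answer 304 with no body; otherwise send the resource with its ETag
function handleConditionalGet(requestHeaders, resource) {
  if (requestHeaders["if-none-match"] === resource.etag) {
    return { status: 304, headers: { ETag: resource.etag } };
  }
  return { status: 200, headers: { ETag: resource.etag }, body: resource.body };
}

const res = { etag: '"abc123"', body: "hello" };
console.log(handleConditionalGet({ "if-none-match": '"abc123"' }, res).status); // 304
console.log(handleConditionalGet({}, res).status);                              // 200
```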


The HTTP caching process is as follows:

  1. When a browser requests a resource, the server includes a Cache-Control field in the response header. On a later request, the browser first checks whether a strong cache applies and whether its cache time has expired.

  2. If there is no strong cache, or the cached copy has expired, the browser falls through to the negotiated cache.

  3. For the negotiated cache, the server checks whether the If-Modified-Since or If-None-Match in the request header matches the resource's current modification date or hash. If it matches, the server returns a 304 status code and tells the browser to use its cached data; if not, it returns the resource, the same as a normal request.


8. Cookie

Since HTTP is stateless, the server does not know who is making a request, so in general we can set cookies to tell the server who the user is.

A Cookie can store only about 4 KB and is kept on disk; the small capacity is one of the headaches with cookies.

The second problem is that cookies cannot cross domains; they are only sent under the same domain name. This relates to the cross-origin issues discussed above.

We set a Cookie with Set-Cookie: usually the server sets it via the Set-Cookie response header. Once set, every time the browser requests a resource, it sends the value back to the server via the Cookie field in the request header. Cookies are stored as key-value pairs, and you can set many of them.

Cookie attributes

Max-Age and Expires set the expiration time of a cookie. When both are present, the browser prefers the time defined by Max-Age.

Secure: the cookie is sent only over HTTPS.

HttpOnly: the cookie cannot be accessed through document.cookie. This forbids JS access to the cookie, to help prevent XSS attacks.

Domain: specifies the cookie's domain name. For example, setting a domain lets all subdomains under it use the cookie's information.

Path: if the Path is /, the cookie can be used by all paths under the domain name.
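Putting the attributes together, building a Set-Cookie header value can be sketched as follows (the names and values are made-up examples):

```javascript
// Serialize a cookie name/value plus the attributes described above
function setCookie(name, value, attrs = {}) {
  let parts = [`${name}=${encodeURIComponent(value)}`];
  if (attrs.maxAge !== undefined) parts.push(`Max-Age=${attrs.maxAge}`);
  if (attrs.domain) parts.push(`Domain=${attrs.domain}`);
  if (attrs.path) parts.push(`Path=${attrs.path}`);
  if (attrs.secure) parts.push("Secure");
  if (attrs.httpOnly) parts.push("HttpOnly");
  if (attrs.sameSite) parts.push(`SameSite=${attrs.sameSite}`);
  return parts.join("; ");
}

console.log(setCookie("sid", "abc 123", {
  maxAge: 3600, path: "/", secure: true, httpOnly: true, sameSite: "Lax",
}));
// "sid=abc%20123; Max-Age=3600; Path=/; Secure; HttpOnly; SameSite=Lax"
```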

Cookie security

In addition to HttpOnly, used above to mitigate XSS attacks, cookies also have a defense against CSRF attacks: SameSite.

SameSite is used to limit third-party cookies, thereby reducing security risks.

SameSite can be set to three values:

  1. Strict: strictly prohibits third-party cookies. In cross-site scenarios, the cookie is not sent under any circumstances; in other words, it is only included when the URL of the current page matches the target of the request.

  2. Lax: for the most part, third-party cookies are not sent, except for GET navigations to the destination URL.

  3. None: requests carry the cookie automatically.

How do I clear cookies

  1. Manually clear them in the browser.
  2. Set the cookie's expiration time to -1, making it a session cookie that is deleted when the browser closes; until then it can still record user information.
  3. Set Max-Age to 0, and the browser deletes the cookie immediately.

9. HTTP connections

There are two types of HTTP connections: short connections and persistent (long) connections.

Short connection

In HTTP 0.9/1.0, communication between client and server was very simple, a pure "request-reply": once the client received the response message, the TCP channel was closed.

But we know that opening and closing a TCP channel involves the three-way handshake and the four-way teardown. If the TCP channel is closed every time the client receives a response, then every request has to go through the handshake and teardown again, which greatly increases the time and cost of fetching resources. This is why HTTP/1.1 introduced the concept of persistent connections.

Persistent connection

The idea of a persistent connection is simply to carry multiple request-replies over one connection. Because opening and closing TCP takes time, we reuse the channel.

Associated fields

By default, a client asks for a persistent connection by writing Connection: keep-alive in the request header. But whether or not the client explicitly asks, if the server supports persistent connections it will put a Connection: keep-alive field in the response message to tell the client: "I support persistent connections; I will keep sending and receiving data over this TCP connection."

When the server sees Connection: close, it knows the client wants to close the connection, so it includes this field in the response message and, after sending it, calls the Socket API to close the TCP connection.

It is worth remembering that the server does not actively close persistent connections.

10. Head-of-line blocking

As mentioned in the previous section, the HTTP protocol follows a "request-reply" model. When multiple resources are requested on one connection, the earlier request is processed first and later requests must wait, which degrades performance. This is head-of-line blocking.

So how do you solve the head-of-line blocking problem?

  1. Concurrent connections

With one "request-reply" per connection, the client can instead open several persistent connections at once and process requests concurrently, trading quantity for quality, so the total time is greatly reduced. RFC 2616 explicitly stipulated that a client could have at most 2 concurrent connections, but browsers felt 2 was not enough and ignored the rule; Google Chrome, for example, allows 6 concurrent connections. Later revisions of the spec also removed the limit of 2.

But the performance squeezed by concurrent connections can’t keep up with the insatiable demands of a rapidly evolving Internet

  2. Domain sharding

Browsers limit the number of concurrent connections to ease the strain on servers, but when a large number of resources are requested at once, concurrent connections alone bring only minimal speedup.

Since the amount of concurrency per domain is limited, we can add more domain names, all pointing to the same server, and thus increase the total. This is called domain sharding.

For example,….

That domain has many subdomains pointing to the same server, which allows more concurrent persistent connections and, in practice, mitigates the queueing problem quite well.

11. Data negotiation

When a client sends a request to a server, it tells the server what data formats it accepts and what its constraints are. The server then returns the response based on the header information sent by the client.

MIME type

For the sake of the rest of the article, here are some MIME types.

MIME stands for Multipurpose Internet Mail Extensions. The full name isn't important; after all, it's just a concept. 😂😂😂

MIME is a large classification; HTTP takes only part of it to mark the type of the body data, and that is the MIME type.

The MIME types we usually see are:

  1. Text: data in readable text form, such as text/html, text/plain, and text/css
  2. Image: image files, such as image/gif, image/jpeg, image/png, etc.
  3. Audio/video: audio and video files, such as audio/mpeg, video/mp4, etc.
  4. Application: data whose format is not fixed; it may be text or binary and must be interpreted by the upper-layer application. If you do not know what type the data is, you can use application/octet-stream, i.e. opaque binary data.
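As a quick illustration, Python's standard mimetypes module carries a table of this file-extension-to-MIME-type mapping (a sketch; real servers usually configure their own tables):

```python
import mimetypes

# Ask the stdlib table which MIME type each file name maps to.
for name in ["index.html", "logo.png", "song.mp3", "notes.zzz"]:
    mime, _encoding = mimetypes.guess_type(name)
    # Unknown extensions yield None -- a server would then fall back
    # to application/octet-stream (opaque binary data).
    print(name, "->", mime or "application/octet-stream")
```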



Accept: specifies the data types the client can process, listed as MIME types and separated by ','.

Accept-Encoding: specifies which encodings the client can decode, i.e. which compression formats the server may apply to the data, chosen from the encoding types below and also separated by ','.

The encoding types cover only the following data compression formats:

  1. gzip: the GNU zip compression format, the most popular compression format on the Internet;
  2. deflate: the zlib (deflate) compression format, second only to gzip in popularity;
  3. br: a newer compression algorithm (Brotli) optimized specifically for HTTP.
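Of the three, gzip is the only one in Python's standard library, which is enough to sketch what Content-Encoding does to a response body (the toy body below is my own example):

```python
import gzip

# A repetitive response body, the kind that compresses well.
body = b"<html>" + b"hello tomato " * 200 + b"</html>"

# Server side: the request said Accept-Encoding: gzip, so compress
# the body and reply with Content-Encoding: gzip.
compressed = gzip.compress(body)
print(len(body), "->", len(compressed))

# Client side: decompress before handing the body to the page.
assert gzip.decompress(compressed) == body
```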

Accept-language: specifies the Language in which the message is returned

User-agent: indicates the device information of the browser


On the request side there is only one body-describing field to set, Content-Type, which indicates the type of data the client sends to the server; it mainly applies to POST requests.

The two most common Content-Type values in a request header, used to tell the server in what format the data is being sent, are:

  1. application/x-www-form-urlencoded
  • Data is presented as key-value pairs, with multiple pairs joined by &
  • Characters are escaped using URL encoding
  2. multipart/form-data
  • The data is divided into parts by a boundary string whose value is chosen by the browser
  • Each part carries its own HTTP-style header description, and the body ends with the boundary followed by --
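A sketch of both request formats using only the standard library (the form fields and the boundary value are made up for illustration):

```python
from urllib.parse import urlencode

form = {"name": "tomato", "note": "hello world"}

# application/x-www-form-urlencoded: key=value pairs joined with &,
# special characters escaped with URL encoding.
body = urlencode(form)
print(body)   # name=tomato&note=hello+world

# multipart/form-data: parts separated by a boundary the client picks.
boundary = "----WebKitFormBoundaryXYZ"   # hypothetical value
parts = []
for key, value in form.items():
    parts.append(f"--{boundary}\r\n"
                 f'Content-Disposition: form-data; name="{key}"\r\n\r\n'
                 f"{value}\r\n")
parts.append(f"--{boundary}--\r\n")      # the final boundary ends with --
multipart_body = "".join(parts)
```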

The response


Content-Type: the server selects one of the types listed in Accept and returns it as the data type of the response.

Content-Encoding: indicates which data compression method the server applied to the response.

Content-Language: specifies the language of the returned data.
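The server's side of the negotiation can be sketched with a tiny parser for the q-values in an Accept header (simplified: it ignores wildcard matching and the other details of the real algorithm):

```python
def parse_accept(header: str):
    """Split an Accept header into (type, q) pairs, highest q first.
    A missing q-value defaults to 1.0."""
    result = []
    for item in header.split(","):
        parts = [p.strip() for p in item.split(";")]
        q = 1.0
        for p in parts[1:]:
            if p.startswith("q="):
                q = float(p[2:])
        result.append((parts[0], q))
    return sorted(result, key=lambda t: -t[1])

prefs = parse_accept("text/html,application/xml;q=0.9,image/webp,*/*;q=0.8")
print(prefs[0][0])   # the type the client likes best
```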

12. HTTP redirection and jumps

HTTP redirection means that when a client requests a resource whose URL has been moved elsewhere, the server tells the browser the resource's new URL.


The Location field is a response-only header, written by the server. It takes effect with redirect status codes such as 301 and 302, and its content is the new URL of the resource.


A 301 represents a permanent redirect. When the browser sees a 301, it knows the original URI is obsolete and optimizes accordingly, for instance rewriting history entries, updating bookmarks, and perhaps going straight to the new URI next time, eliminating the cost of another jump. A search-engine crawler that sees a 301 updates its index and stops using the old URI. So avoid the 301 status code unless you are absolutely certain the move will never be reversed.


302 represents a temporary redirect: the original URI is still valid but temporarily unavailable. A browser or crawler that sees a 302 simply jumps to the new page without recording the new URI or taking any further action, and goes back to the original URI next time.
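A minimal sketch with Python's built-in http.server (the paths and their targets are invented): the server's only job in a redirect is to pick the status code and fill in Location:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping of moved resources to their new URLs.
MOVED_FOREVER = {"/old-blog": "/blog"}       # 301: permanent, update bookmarks
MOVED_FOR_NOW = {"/shop": "/maintenance"}    # 302: temporary, keep the old URI

def classify(path):
    """Return (status, location) for a requested path."""
    if path in MOVED_FOREVER:
        return 301, MOVED_FOREVER[path]
    if path in MOVED_FOR_NOW:
        return 302, MOVED_FOR_NOW[path]
    return 200, None

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, location = classify(self.path)
        self.send_response(status)
        if location:
            self.send_header("Location", location)
        self.end_headers()

# To run: HTTPServer(("localhost", 8080), RedirectHandler).serve_forever()
```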

13. Features, advantages and disadvantages of HTTP

The characteristics of HTTP

  1. Flexible and extensible

    HTTP started out as a simple, open protocol with only a basic definition of the message format; the components of the message were not constrained by strict syntax and semantics and could be customized by developers. As Internet technology developed, request methods, status codes, and so on were gradually added, and the body was no longer limited to text or HTML but came to carry arbitrary data such as images, audio, and video.

  2. Reliable transport

    Because the HTTP protocol is based on TCP/IP, which is itself a reliable transport protocol, HTTP is also reliable.

  3. Application layer protocol

    I won’t say much about that.

  4. Request-reply

    The HTTP protocol uses the request-reply mode; in short, one request, one reply.

    The request-reply mode also makes clear the positions of the two communicating parties in the HTTP protocol. The requester always initiates the connection and the request, and is active; the responder can only reply after receiving a request, and is passive. No request, no action.

    Of course, the roles of requestor and responder are not absolute. In a browser-server scenario, the server is usually the responder, but if you use it as a proxy to connect to the back-end server, it can play both the requestor and responder roles.

    The request-reply mode of HTTP fits the traditional C/S (Client/Server) architecture, with the requester as the client and the responder as the server. As the Internet developed, the Browser/Server (B/S) architecture emerged, replacing heavyweight client applications with a lightweight browser, achieving a zero-maintenance "thin" client, while servers dropped private communication protocols in favor of HTTP.

    In addition, the request-reply mode maps naturally onto Remote Procedure Call (RPC): encapsulating HTTP request handling as remote function calls led to WebService, RESTful, and gRPC.

  5. stateless

    How should we understand "stateless"? The HTTP protocol defines no mechanism for remembering state. Client and server know nothing about each other before a connection is established, every message sent and received is independent of the others, sending and receiving leaves no lasting effect on either side, and neither party is required to save any information after the connection.

  6. Other features

    In addition to the five features above, the HTTP protocol can claim many others, such as cacheable and compressible entity data, segmented transfer, support for authentication, internationalized languages, and so on. But these are not basic features of HTTP, because they all derive from the first feature, flexibility and extensibility.

Pros and cons of HTTP

Having listed HTTP's features above, let's examine which of them are strengths and which are weaknesses.

Firstly, the biggest advantage of HTTP is that it is simple, flexible, and easy to extend. Take the message format as an example: it is basically plain and easy to understand, convenient for beginners to remember, and the transmission types it carries are diverse.

The second advantage is that it is widely deployed and its ecosystem is mature, which matters a great deal. New or niche frameworks see little adoption precisely because ecosystem and maturity decide the outcome: the more mature a framework, the more reliably its stability and security have been verified, which is especially important for security on the Internet.

There are two double-edged features. The first is statelessness. Statelessness has many benefits: since every server looks the same, holding no "state", it is easy to form clusters; a load balancer can forward a request to any server without state inconsistencies causing errors, so high concurrency can be reached simply by "piling on machines". The downside of statelessness shows when you log in to an e-commerce site or any site that must know its users, for orders and payments, which demands strong identity authentication. If statelessness is not worked around, how would the server know who you are? Then anyone could get at your information.
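The usual workaround is the cookie: the server hands the client an identifier once, and the client returns it on every request. A sketch with the standard http.cookies module (the session store and field names are my own illustration):

```python
from http.cookies import SimpleCookie
import secrets

SESSIONS = {}   # hypothetical server-side store: session id -> user name

def login_response_headers(user):
    """Server side: create a session and tell the client to remember it."""
    sid = secrets.token_hex(16)
    SESSIONS[sid] = user
    cookie = SimpleCookie()
    cookie["sid"] = sid
    cookie["sid"]["httponly"] = True    # keep the id away from page scripts
    return cookie.output(header="Set-Cookie:")

def identify(cookie_header):
    """Server side: recover 'who you are' from the Cookie request header."""
    cookie = SimpleCookie(cookie_header)
    morsel = cookie.get("sid")
    return SESSIONS.get(morsel.value) if morsel else None
```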

The second double-edged sword is plaintext transmission. Its best point is that you can see exactly what a message contains, which is a great convenience for developers when debugging. The drawback is the security risk it brings: a typical Wi-Fi trap exploits HTTP's plaintext by luring you onto a hotspot and then greedily capturing all your traffic to extract your sensitive information.

A further disadvantage is head-of-line blocking, which was covered earlier, so I will not explain it again here.


14. HTTPS

In the previous section we discussed the features, advantages, and disadvantages of HTTP. Among the disadvantages, statelessness can be worked around with cookies, but plaintext transmission and its insecurity cannot, hence the HTTPS protocol.

The default port number for HTTP is 80 and for HTTPS is 443; the rest of the message transmission works the same way.

What’s safe about HTTPS?

HTTP runs directly on TCP/IP, whereas HTTPS inserts SSL/TLS, a security protocol, between HTTP and TCP/IP; that is what makes HTTPS secure.


SSL, the Secure Sockets Layer, sits at layer 5 (the session layer) of the OSI model. It was invented by Netscape in 1994 and has two published versions, v2 and v3; v1 was never released because of serious flaws.

TLS is the updated version of SSL v3 under a different name; its actual version number is SSL v3.1. TLS is composed of several subprotocols, including the record protocol, handshake protocol, alert protocol, change cipher spec protocol, and extension protocol, and it uses many cutting-edge cryptographic techniques such as symmetric encryption, asymmetric encryption, and identity authentication.

So HTTPS over SSL/TLS is far more secure than HTTP over bare TCP/IP.

15. TLS1.2

Components of the TLS protocol

  1. The Record Protocol defines the basic unit in which TLS sends and receives data: the record. It is somewhat like a segment in TCP, and all the other subprotocols are sent through it. However, multiple records can be packed into one TCP packet, and no ACK needs to be returned the way TCP does.

  2. The Alert Protocol's job is to send an alert message to the other side, somewhat like a status code in HTTP. For example, protocol_version means the other side's old version is not supported, and bad_certificate means there is a problem with the certificate; after receiving an alert, the other party can choose to continue or to terminate the connection immediately.

  3. The Handshake Protocol is the most complex subprotocol in TLS. It is much more complex than TCP’s SYN/ACK. During the Handshake, the browser and server negotiate TLS version numbers, random numbers, cipher suites, and other information, and then exchange certificate and key parameters. Finally, the two parties negotiate to get the session key, which is used in the subsequent hybrid encryption system.

  4. The Change Cipher Spec Protocol is a simple "notification" that all subsequent data will be encrypted; conversely, everything before it is in plaintext.

TLS Handshake Procedure

  1. After a TCP connection is established, the browser first sends a “Client Hello” message, that is, “Hello” to the server. It contains the version number of the Client, the supported cipher suite, and a Random number (Client Random) for subsequent generation of the session key.
What is a cipher suite

One of the security goals of HTTPS is confidentiality, and the most common way to achieve confidentiality is encryption. The encryption algorithm is public but the key is absolutely secret, and by the way the key is used, encryption is usually divided into two kinds: symmetric encryption and asymmetric encryption.

A cipher suite is the combination of algorithms negotiated for a connection: a key exchange algorithm, a signature algorithm, a symmetric cipher with its mode of operation (such as GCM, CCM, or ChaCha20-Poly1305), and a hash algorithm.

  2. After receiving a Client Hello, the server returns a Server Hello: it confirms the version number, supplies its own random number (Server Random), and picks from the client's list the cipher suite to use for this session. The server also sends a certificate to identify itself, followed by a Server Key Exchange message containing its key-exchange parameter (Server Params), signed with its private key for authentication.

  3. After receiving the certificate and signature, the client verifies the certificate. Once it passes, the client generates its own key-exchange parameter (Client Params) according to the cipher suite the server selected and sends it to the server in a Client Key Exchange message. From Server Params and Client Params it then derives a shared secret (Pre Random).

  4. The server derives the same Pre Random from the Client Params the client sent. Now both client and server hold the Pre Random together with the Client Random and Server Random from the Hellos, and from these three values each generates the same master key, the "Master Secret," used to encrypt the session.

  5. Once the master key exists, a pseudo-random function (PRF) expands it into the session keys, the client write key (client_write_key) and the server write key (server_write_key), to avoid the security risk of using a single key for everything.

  6. After the client and server session keys are generated, the client sends a Change Cipher Spec message and a Finished message that summarizes and encrypts all the data sent so far for the server to verify. The server replies with its own Change Cipher Spec and Finished messages; once both sides confirm that encryption and decryption work, the handshake ends.
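The heart of steps 2-4, two sides deriving the same secret while only public parameters cross the wire, can be shown with a toy finite-field Diffie-Hellman (tiny parameters for illustration only; real TLS uses elliptic curves and far larger groups):

```python
import secrets

# Public parameters a real handshake would negotiate (toy-sized here).
P = 2**61 - 1   # a Mersenne prime; real groups are far larger
G = 5

# Each side keeps a private exponent and sends only G**x mod P.
server_priv = secrets.randbelow(P - 2) + 1
client_priv = secrets.randbelow(P - 2) + 1
server_params = pow(G, server_priv, P)   # sent in Server Key Exchange
client_params = pow(G, client_priv, P)   # sent in Client Key Exchange

# Both ends now compute the same Pre Random; it never crosses the wire.
pre_random_server = pow(client_params, server_priv, P)
pre_random_client = pow(server_params, client_priv, P)
assert pre_random_server == pre_random_client
```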

16. Advantages of TLS1.3

Strengthened security

In the decade and more that TLS1.2 has been in use, a great deal of experience has accumulated, and many vulnerabilities and weaknesses in its encryption algorithms have been discovered. TLS1.3 therefore removes these insecure elements from the protocol.

So in TLS1.3 the cipher suites were drastically streamlined, leaving only five:

  • TLS_AES_128_GCM_SHA256
  • TLS_AES_256_GCM_SHA384
  • TLS_CHACHA20_POLY1305_SHA256
  • TLS_AES_128_CCM_SHA256
  • TLS_AES_128_CCM_8_SHA256

The RSA and static DH key exchange algorithms are abolished; only ephemeral Diffie-Hellman (DHE/ECDHE) remains.

Performance improvement

TLS1.3's performance improvement lies mainly in the handshake: as the TLS1.2 section shows, the handshake protocol is quite complex, so streamlining it saves a lot of time.

With the cipher suites vastly simplified, there is no need for the complex negotiation of before. TLS1.3 compresses the earlier "Hello" negotiation, removes the separate Key Exchange messages, and cuts the handshake to 1-RTT, roughly doubling its efficiency.

Concretely, this is done with extensions. In the Client Hello, the client uses "supported_groups" to list the curves it supports, such as P-256 and x25519, "key_share" to carry the client's public key parameters for those curves, and "signature_algorithms" to list the signature algorithms.

After receiving these extensions, the server picks a curve and parameters from them and returns its own public key parameter in a "key_share" extension. With that, the two sides have exchanged keys, and the rest of the process is essentially the same as in 1.2.

17. HTTPS optimization

An HTTPS connection is slower than an HTTP connection because after the TCP three-way handshake comes the TLS handshake, with its cryptographic computation. Let's look at how to speed HTTPS connections up.

1. Hardware optimization

First, choose a faster CPU, preferably one with built-in AES acceleration; that speeds up both the handshake and the transmission.

Secondly, you can use an "SSL accelerator card": encryption and decryption go through its API, letting specialized hardware handle the asymmetric cryptography and relieving the CPU of that load.

But SSL accelerator cards have problems of their own: they are built for particular algorithms and are hard to customize flexibly. So there is a third, faster approach, the "SSL acceleration server," which offloads the TLS handshake and its cryptographic computation entirely to a dedicated server cluster, naturally more powerful than a mere accelerator card.

2. Software optimization

There are two directions for software optimization: the first is upgrading the software itself, the second is upgrading the protocol.

Upgrading the software needs little explanation: just upgrade the software you are using. This is the easiest optimization, because newer releases usually ship with optimizations of their own.

Protocol optimization

Protocol optimization means optimizing the TLS core, because the "slow" part of HTTPS is TLS itself. The first step is to upgrade TLS to version 1.3.

If you cannot upgrade to 1.3, then the key exchange in the handshake should use the elliptic-curve ECDHE algorithm wherever possible. It is not only fast and highly secure but also supports "False Start," which cuts the handshake round trips from 2-RTT to 1-RTT, achieving an effect similar to TLS1.3.

In addition, choose a high-performance elliptic curve, preferably x25519, with P-256 as the runner-up. For the symmetric cipher, AES_128_GCM can be used; it is slightly faster than AES_256_GCM.

Certificate optimization

There are two certificate optimization points: the first is certificate transmission and the second is certificate verification.

The server certificate can be an elliptic curve (ECDSA) certificate rather than an RSA certificate, because the ECDSA certificate is much smaller and can reduce the bandwidth.

Certificate verification on the client is actually a very complicated operation. Besides decrypting with public keys and checking a chain of signatures, the certificate may have been revoked or expired; the client may then contact the CA again to download a CRL or make an OCSP query, producing a series of network operations such as DNS lookups, connection establishment, and data transfer, adding several RTTs.

The Certificate Revocation List (CRL) is published periodically by the CA and contains the serial numbers of all certificates whose trust has been revoked; querying the CRL tells you whether a certificate is still valid. But because the CRL is issued only periodically and can run to several MB, each query costs a lot and hurts efficiency. Nowadays OCSP Stapling is used instead: the server contacts the CA in advance, obtains the OCSP response, and sends it to the client along with the certificate during the handshake, so the client never has to query the CA itself.

Session reuse

Session reuse is similar to caching in that it reuses the master key calculated during the TLS process.

Session reuse comes in two forms. The first is the Session ID: after the first connection, client and server each store a session ID, with the server keeping the master key and related state in memory under that ID. When the client reconnects, it sends the ID; the server looks it up in memory and, on a hit, restores the session state directly from the master key, skipping certificate verification and key exchange and establishing secure communication with a single message round trip.

The second is Session Ticket, which is similar to HTTP Cookie. The storage responsibility is transferred from the server to the client. The server encrypts the Session information and sends the message “New Session Ticket” to the client for the client to save. During reconnection, the client uses the session_ticket extension to send a Ticket instead of Session ID. After decrypting and verifying the validity period, the server can resume the Session and start encrypted communication.

The problem with both schemes is that they still achieve only 1-RTT. TLS1.3 goes further with "0-RTT": similar to the Session Ticket, but the client sends Early Data along with the ticket, eliminating the server verification round trip of 1.2. This mechanism is called "pre-shared key," or "PSK" for short.
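The Session ID variant can be sketched as a plain dictionary on the server (the cache layout and helper names are invented for illustration):

```python
import secrets

SESSION_CACHE = {}   # hypothetical store: session id -> master secret

def full_handshake():
    """The expensive path: certificate checks and key exchange (elided);
    the result is cached under a fresh session id."""
    master_secret = secrets.token_bytes(48)
    session_id = secrets.token_hex(16)
    SESSION_CACHE[session_id] = master_secret
    return session_id, master_secret

def resume(session_id):
    """The cheap path: one round trip, no certificates, no key exchange."""
    return SESSION_CACHE.get(session_id)

sid, secret = full_handshake()
assert resume(sid) == secret          # reconnecting restores the session
assert resume("unknown-id") is None   # cache miss -> do the full handshake
```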


18. HTTP2.0

The main optimization points of HTTP2.0 are the following:

  1. Header compression
  2. Channel multiplexing
  3. Binary transmission in frames
  4. Server push

Header compression

In HTTP1.1, compression exists only as Content-Encoding, i.e. the server compressing the returned resource; the header part of the message is ignored.

During development we can see that the header part of a message also carries a large number of fields, such as Accept, User-Agent, Cookie, and so on, and these fields repeat on every request, which wastes performance.

So HTTP2.0 compresses headers with the HPACK algorithm: client and server each maintain a "dictionary" so that repeated strings can be represented by index numbers, and Huffman coding compresses the integers and strings, reaching compression rates of 50%~90%.
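HPACK itself (a shared dictionary plus Huffman coding) is not in Python's standard library, but zlib makes a convenient stand-in to show why compressing repetitive headers pays off (the header values below are invented):

```python
import zlib

# The same verbose headers repeat on every request of a page load.
headers = (b"accept: text/html,application/xhtml+xml\r\n"
           b"user-agent: Mozilla/5.0 (X11; Linux x86_64) Chrome/120\r\n"
           b"cookie: sid=0123456789abcdef0123456789abcdef\r\n")
ten_requests = headers * 10

compressed = zlib.compress(ten_requests)
saved = 1 - len(compressed) / len(ten_requests)
print(f"saved {saved:.0%} of the header bytes")
```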

Channel multiplexing

In the discussion of HTTP1.1 we mentioned a flaw that badly hurts performance: head-of-line blocking. Long connections, domain sharding, and more concurrent connections all help, but they are workarounds that do not address the root cause.

In HTTP2.0, a channel multiplexing feature has been added, that is, the TCP/IP channel can be multiplexed to send requests in parallel.

In HTTP1.1 requests are sent sequentially: the next request-reply begins only after the previous one completes. A long connection does reuse the TCP/IP channel, but requests still travel through it one after another. In HTTP2.0, requests within the channel are sent in parallel, which removes the serialization and greatly increases efficiency.

Binary transmission in frames

Besides the channel multiplexing above, another improvement is binary frame transmission. In HTTP1.1 the messages are all plain text, easy to develop against and to read, but cumbersome to parse, which drags performance down badly.

HTTP2.0 does not carry this feature forward; it switches entirely to binary, all zeros and ones, unfriendly to humans but perfect for computers. The switch made optimization much easier, and out of it came the idea of transmission in frames.

Frame-based transfer breaks the original header-plus-body message into small binary frames: "HEADERS" frames carry the header data and "DATA" frames carry the entity data. The message is broken into pieces on one side and reassembled on the other. The two-way exchange of frames between client and server for one request-response is called a stream, and each stream is marked with an ID; the small fragments flowing along a stream are the frames, and many streams can be in flight at once, which is exactly the channel multiplexing described above.

At the “flow” level, messages are sequences of “frames” in order, while at the “connection” level, messages are “frames” sent and received out of order. Multiple requests/responses do not need to be queued, so there is no “queue head blocking” problem, which reduces latency and greatly improves connection utilization. To make better use of connections and increase throughput, HTTP/2 also adds several control frames to manage virtual “flows”, enabling priority and flow control.

Server push

HTTP/2 also changes the traditional “request-reply” mode to some extent. Instead of responding to requests passively, servers can create new “streams” that send messages to clients actively. For example, sending possible JS and CSS files to the client in advance of the browser’s request for HTML reduces latency. This is called “Server Push” (also called Cache Push).

Other features

Besides the advantages above, HTTP2.0 also strengthens security. With the rise of HTTPS, browsers only support HTTP2.0 over encryption, so in practice HTTP2.0 today runs over TLS. For backward compatibility, however, HTTP2.0 also permits plaintext transmission and does not force encrypted communication; the format is still binary, just without the encryption layer.

So for compatibility, HTTP2.0 defines two identifier strings to distinguish the plaintext and encrypted forms: "h2" for encrypted HTTP2.0 and "h2c" for plaintext HTTP2.0 (the "c" stands for clear text), which solves the compatibility problem.

Finally, compare the protocol stacks of HTTP1.1, HTTPS, and HTTP2.0; the diagram makes the differences between the protocols visible at a glance.



In closing

After reading this article, do you feel it was dense? I do, because the HTTP protocol is abstract, yet as web engineers we deal with HTTP every day, and not understanding its characteristics really holds us back; understanding them genuinely helps our development. While writing this article I actually ran into an HTTP-related bug. A customer reported that clicking "view original image" only downloaded the image instead of displaying it; it did not block usage, but the customer is the customer!! The feedback landed on the front end, and everyone assumed the problem was on our side; I searched for a long time and found nothing. Testing showed it worked in some cases and downloaded in others. At first I suspected the image format, but repeated verification ruled that out. I brought in the back end and another front-end colleague, and that colleague finally spotted the difference in the HTTP messages: the Server response field, which nobody usually pays attention to, differed between the working case and the buggy one. That turned out to be the key to the bug: the servers were different. I reported it to operations, and the problem was finally solved. 🤣🤣🤣 The incident made me feel the importance of the HTTP protocol all over again.

Postscript

If you found it useful, please leave a like before you go, and let's study and discuss together!

You can also follow my blog, and I hope you'll give me a Star on GitHub. You may notice that almost all my user names are related to tomatoes, because I really like tomatoes ❤️!!

If you want to follow along without getting lost, you can also follow my official account or scan the QR code below 👇👇👇. I am a primary-school student in the world of programming; your encouragement is what keeps me moving forward 😄, let's keep at it together.