This is the third day of my participation in the August Text Challenge.

Thoroughly master “Front-end Performance Optimization” based on HTTP Network layer

  • Product performance optimization scheme

  • HTTP network layer optimization
  • Code compile layer optimization (Webpack)
  • Code runtime layer optimization (HTML/CSS/JavaScript, Vue, React)
  • Security optimization (XSS + CSRF)
  • Data tracking and performance monitoring
  • …

From entering the URL to seeing the page

  • Step 1: URL parsing
  • Address resolution

  • Encoding: Chinese characters, special characters, etc.
  • Step 2: Cache check
  • The cache location
    • Memory cache
    • Disk cache
  • Cache checking mechanism
    • Opening a page: the disk cache is searched for a match; it is used if found, otherwise a network request is sent
    • Normal refresh (F5): since the tab is not closed, the memory cache is available and is used before the disk cache
    • Forced refresh (Ctrl+F5): the browser does not use the cache, so every request is sent with Cache-Control: no-cache in the header, and the server returns 200 with the latest content
  • Cache type
    • Strong cache: Expires / Cache-Control
      • How the browser handles strong caching is determined by the response headers returned the first time the resource is requested
      • Expires: cache expiration time, used to specify an absolute date after which the resource expires (HTTP1.0)
      • Cache-Control: for example, Cache-Control: max-age=2592000 caches the resource for 30 days (HTTP1.1); a minimal server-side sketch of setting these headers follows this list
      • Cache-Control takes precedence over Expires when both are present
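A minimal sketch, not taken from the original article, of how a Node.js server might set the strong-cache headers above; the /static/ path and the 30-day lifetime are illustrative:

const http = require('http');

http.createServer((req, res) => {
  // Treat everything under /static/ as a long-lived asset (illustrative rule)
  if (req.url.startsWith('/static/')) {
    res.setHeader('Cache-Control', 'max-age=2592000'); // HTTP1.1 strong cache, 30 days
    res.setHeader('Expires', new Date(Date.now() + 2592000 * 1000).toUTCString()); // HTTP1.0 fallback
  }
  res.setHeader('Content-Type', 'text/plain');
  res.end('hello');
}).listen(3000);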

  • Negotiated cache: Last-Modified / ETag
    • Negotiated caching means that, once the strong cache has expired, the browser sends a request to the server carrying the cache identifier, and the server decides whether the cache can still be used
    • On the first request, the server returns a cache identifier to the client along with the resource. On subsequent requests, the client sends that identifier back, and the server responds according to whether the resource has changed: if not, it returns 304 and tells the client to read from its cache; if it has, it returns 200 with the latest resource and a new cache identifier (Last-Modified / ETag). A minimal server-side sketch follows this list
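A minimal sketch, not taken from the original article, of the negotiated-cache flow above using ETag / If-None-Match in plain Node.js; the MD5-of-body identifier is just one possible scheme:

const http = require('http');
const crypto = require('crypto');

const body = 'interface data'; // illustrative payload
const etag = crypto.createHash('md5').update(body).digest('hex');

http.createServer((req, res) => {
  // On repeat requests the client sends the identifier back in If-None-Match
  if (req.headers['if-none-match'] === etag) {
    res.writeHead(304); // not modified: tell the client to read its cache
    res.end();
  } else {
    res.writeHead(200, { ETag: etag }); // first request (or resource changed): hand out the identifier
    res.end(body);
  }
}).listen(3000);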

  • Data caching: data caching is done manually; in JS, localStorage / sessionStorage can be used to save data locally, so the next request can read the data directly from there (a minimal sketch follows)
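A minimal sketch of caching interface data in localStorage with a simple expiry check; the key name, TTL and /api/list endpoint are illustrative, not from the original article:

// Read a cached value, ignoring entries older than ttlMs
function getCachedData(key, ttlMs) {
  const raw = localStorage.getItem(key);
  if (!raw) return null;
  const { savedAt, data } = JSON.parse(raw);
  return Date.now() - savedAt < ttlMs ? data : null;
}

// Save a value together with the time it was cached
function setCachedData(key, data) {
  localStorage.setItem(key, JSON.stringify({ savedAt: Date.now(), data }));
}

// Usage: try the local copy first, only hit the network when it is missing or stale
async function loadList() {
  const cached = getCachedData('list', 5 * 60 * 1000);
  if (cached) return cached;
  const data = await (await fetch('/api/list')).json();
  setCachedData('list', data);
  return data;
}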

  • Step 3: DNS resolution
  • During DNS resolution, the browser first sends the domain name to a DNS server, which returns the IP address of the real server for that domain; the client then sends its request to the real server using the obtained IP address
  • Each DNS resolution typically takes about 20 to 120 milliseconds (this can be measured with the Resource Timing API, as sketched after this list)
  • DNS resolution is generally divided into two types: iterative query and recursive query
    • Recursive query: before resolving, the system first checks its caches; a cached result is used directly, otherwise the local hosts file is checked and then the local DNS server resolves the name
    • Iterative query: the root domain servers are queried first, then the top-level domain servers, then the authoritative domain servers
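A small sketch using the browser's Resource Timing API to check how long DNS lookups actually took for the resources on the current page (a cached lookup will simply show 0 ms):

// Log the DNS lookup time, in milliseconds, for each resource already loaded on this page
performance.getEntriesByType('resource').forEach((entry) => {
  const dnsTime = entry.domainLookupEnd - entry.domainLookupStart;
  if (dnsTime > 0) {
    console.log(`${entry.name}: DNS lookup took ${dnsTime.toFixed(1)} ms`);
  }
});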

  • DNS Resolution optimization
    • Reduce the number of DNS requests
    • DNS Prefetch: Domain name resolution is performed at the same time as browser rendering
    • Splitting deployment across multiple servers
      • Makes better use of resources
      • Strengthens the ability to withstand load
      • Improves HTTP concurrency
<meta http-equiv="x-dns-prefetch-control" content="on">
<link rel="dns-prefetch" href="//static.360buyimg.com"/>
<link rel="dns-prefetch" href="//misc.360buyimg.com"/>
<link rel="dns-prefetch" href="/ / img10.360buyimg.com"/>
<link rel="dns-prefetch" href="//d.3.cn"/>
<link rel="dns-prefetch" href="//d.jd.com"/>
  • Step 4: TCP three-way handshake
  • seq (sequence number): identifies the byte stream sent from the TCP source end; the initiator marks it when sending data
  • ack (acknowledgement number): valid only when the ACK flag bit is 1; ack = seq + 1
  • Flags:
    • ACK: indicates that the acknowledgement number is valid
    • RST: resets the connection
    • SYN: Initiates a new connection
    • FIN: Releases a connection
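A worked example of the three-way handshake described in Step 4, with illustrative initial sequence numbers x and y:

// 1. Client → Server: SYN = 1, seq = x                         (client asks to open a connection)
// 2. Server → Client: SYN = 1, ACK = 1, seq = y, ack = x + 1   (server confirms and opens its side)
// 3. Client → Server: ACK = 1, seq = x + 1, ack = y + 1        (client confirms; connection established)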

Why three handshakes, and not two or four?

  • As a reliable transmission control protocol, the core idea of TCP is to guarantee reliable data transmission while keeping transmission efficient
  • With only two handshakes, after the server replies to the client it has no way of knowing whether the client received that reply, so reliable transmission cannot be guaranteed
  • A fourth handshake would just add an extra round trip without providing any additional guarantee, so it is unnecessary
  • Step 5: Data transfer

The HTTP message

  • The request message
  • The response message
  • Response status code
    • 200 OK Return successful
    • 202 Accepted: the server has accepted the request but has not yet processed it (asynchronous)
    • 204 No Content: the server processed the request successfully but has no entity content to return
    • 206 Partial Content: the server has successfully processed a partial GET request (used for resumable downloads / range requests: Range / If-Range / Content-Range / Content-Type: multipart/byteranges / Content-Length ...)
    • 301 Moved Permanently
    • 302 Moved Temporarily
    • 304 Not Modified
    • 305 Use Proxy
    • 400 Bad Request: The Request parameters are incorrect
    • 401 Unauthorized: the request requires authentication
    • 404 Not Found
    • 405 Method Not Allowed
    • 408 Request Timeout
    • 500 Internal Server Error
    • 503 Service Unavailable
    • 505 HTTP Version Not Supported
  • Step 6: TCP four-way wave (closing the connection)

  • Why three handshakes to connect and four waves to close?
    • When establishing a connection, the server can merge its acknowledgement with its own SYN and reply with a single SYN+ACK segment after receiving the client's SYN, so three steps are enough
    • When closing the connection, however, the server that receives a FIN may not be able to close its side immediately, because it may still have data to send; it can only reply with an ACK first ("I received your FIN"), and only after all remaining data has been sent does it send its own FIN. Since the ACK and FIN cannot be sent together, closing takes four steps (a worked example follows this list)
  • Step 7: Render the page
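For completeness, a worked example of the four-step close described above, with illustrative sequence numbers u and v:

// 1. Client → Server: FIN = 1, seq = u                         (client has finished sending)
// 2. Server → Client: ACK = 1, ack = u + 1                     (server acknowledges but may still have data to send)
// 3. Server → Client: FIN = 1, ACK = 1, seq = v, ack = u + 1   (server has finished sending)
// 4. Client → Server: ACK = 1, ack = v + 1                     (client acknowledges; connection closed)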

Web front-end advanced – browser underlying rendering mechanism and performance optimization

Performance Optimization Summary

  • 1. Apply strong caching and negotiated caching to static resource files (extension: making sure updated files take effect promptly) 2. Use local data caching (cookie, localStorage, vuex, redux, etc.) for interface data that is not updated frequently
  • DNS optimization: 1. split deployment across multiple servers to increase HTTP concurrency 2. DNS pre-fetching (dns-prefetch)
  • TCP three-way handshakes and four-way waves
  • Data transmission 1. Reduce the size of data transmission
    • Content or data compression (Webpack, etc.)
    • Enable gzip compression on the server (typically around 60% compression; a server-side sketch follows this list)
    • Batch requests for large quantities of data (such as pull-down refresh or paging to ensure less data for the first load)
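A minimal sketch, not taken from the original article, of serving a gzip-compressed response from plain Node.js; the index.html file name is a placeholder:

const http = require('http');
const zlib = require('zlib');
const fs = require('fs');

http.createServer((req, res) => {
  const raw = fs.createReadStream('index.html'); // placeholder file
  const acceptEncoding = req.headers['accept-encoding'] || '';
  if (acceptEncoding.includes('gzip')) {
    // The client supports gzip: compress on the fly and label the encoding
    res.writeHead(200, { 'Content-Type': 'text/html', 'Content-Encoding': 'gzip' });
    raw.pipe(zlib.createGzip()).pipe(res);
  } else {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    raw.pipe(res);
  }
}).listen(3000);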

    2. Reduce the number of HTTP requests

    • Resource file merge processing
    • Use icon fonts
    • CSS sprites (CSS-Sprite)
    • Base64-encode small images
  • CDN Service (Geographic Distribution)
  • Using HTTP2.0
  • Network optimization is the focus of front-end performance optimization, because most of the time is consumed at the network layer; this matters especially for the first page load, where reducing the waiting time (shortening and softening the blank-screen period) is crucial.
    • A humanized loading experience
    • Skeleton screens (client + server)
    • Image lazy loading (a sketch follows this list)
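A minimal sketch of image lazy loading with IntersectionObserver; the data-src attribute convention and the 200px margin are assumptions, not from the original article:

// The real URL lives in data-src and is swapped in only when the image nears the viewport
const lazyObserver = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;      // start loading the real image
      lazyObserver.unobserve(img);    // each image only needs to be handled once
    }
  });
}, { rootMargin: '200px' });          // start a little before the image scrolls into view

document.querySelectorAll('img[data-src]').forEach((img) => lazyObserver.observe(img));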

The differences between HTTP1.0, HTTP1.1 and HTTP2.0

  • The difference between HTTP1.0 and HTTP1.1
    • Cache handling: HTTP1.0 mainly uses Last-Modified and Expires as the caching criteria, whereas HTTP1.1 introduces more cache-control policies: ETag and Cache-Control
    • Bandwidth optimization and use of network connections: HTTP1.1 supports range requests and the 206 (Partial Content) status code, so only part of a resource needs to be transferred
    • Error notification management: HTTP1.1 adds 24 new error status codes; for example, 409 (Conflict) indicates that the request conflicts with the current state of the resource, and 410 (Gone) indicates that a resource on the server has been permanently removed
    • Host header handling: HTTP1.0 assumes that each server is bound to a unique IP address, so the URL in the request message does not pass the hostname. However, with the development of virtual hosting technology, there can be multiple virtual hosts (multi-homed Web Servers) on a physical server, and they share the same IP address. HTTP1.1 supports the Host header for both Request and response messages, and raises an error if there is no Host header in the Request message.
    • Long connections: Connection: keep-alive is enabled by default in HTTP1.1, making up for HTTP1.0 having to establish a new connection for every request
  • Some new features of HTTP2.0 compared with HTTP1.x
    • New binary format: HTTP1.x parsing is text-based, and text formats have inherent weaknesses when it comes to parsing, with many scenarios to consider for robustness. Binary is different: it only needs to recognise combinations of 0 and 1, so the HTTP2.0 protocol chose a binary format, which makes the implementation both convenient and robust
    • Header compression: HTTP1.x headers carry a large amount of information and have to be sent repeatedly with every request; HTTP2.0 uses an encoder to reduce the size of the headers that need to be transmitted, with both sides keeping a header table so repeated fields do not have to be resent in full
    • Server push: for example, when a page requests style.css, the server can push style.js to the client along with the style.css data; when the client later needs style.js, it can read it straight from the cache without sending another request (a sketch using Node's http2 module follows this list)
    • Multiplexing
      • HTTP1.0: a TCP connection is established and closed for every single request/response
      • HTTP1.1: long connections (keep-alive); requests are queued and processed serially on the connection, and a later request only gets to run after the earlier ones return, so once one request times out the requests behind it can only be blocked (head-of-line blocking)
      • HTTP2.0: multiplexing; multiple requests run in parallel over one connection, and a time-consuming request does not affect the normal execution of the other requests
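A minimal sketch of the server-push behaviour described above using Node's built-in http2 module; the certificate and file paths are placeholders:

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('server.key'),   // placeholder certificate paths
  cert: fs.readFileSync('server.crt'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // Push style.js proactively so the client finds it in cache when the HTML references it
    stream.pushStream({ ':path': '/style.js' }, (err, pushStream) => {
      if (err) return;
      pushStream.respondWithFile('style.js', { 'content-type': 'application/javascript' });
    });
    stream.respondWithFile('index.html', { 'content-type': 'text/html' });
  }
});

server.listen(8443);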