This is the 10th day of my participation in the Gengwen (article-updating) Challenge.

What happens from the time you enter the URL to the time the page loads?

Refer back to my previous article: Front-end HTTP summary

Performance optimization can probably be done around the following four processes:

  1. DNS resolution
  2. TCP connection
  3. HTTP request/response
  4. Browser rendering

For the DNS resolution and TCP connection steps, the front end can do fairly little. By comparison, optimizing the HTTP request/response and browser rendering stages is the core of front-end performance work.

DNS resolution

DNS resolution takes time, so we should minimize the number of lookups or resolve in advance, for example via the browser's DNS cache and DNS prefetch.
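A minimal sketch of a DNS prefetch resource hint (the host name below is a placeholder for any cross-origin domain the page actually uses):

```html
<!-- Ask the browser to resolve a third-party domain ahead of time -->
<link rel="dns-prefetch" href="//cdn.example.com">
```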

TCP connection

Keep-alive (long) connections, preconnection, and similar techniques save us from performing the TCP three-way handshake on every request.
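A preconnect resource hint warms up the whole connection, not just DNS; a sketch, again with a placeholder host:

```html
<!-- DNS lookup + TCP handshake (+ TLS negotiation) performed in advance -->
<link rel="preconnect" href="https://cdn.example.com" crossorigin>
```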

HTTP Request Response

The front-end code we write is ultimately packaged into resource bundles the browser can understand. The size of those bundles largely determines the number and duration of HTTP requests, and we can also put some resources in a cache instead of requesting them every time.

gzip

When the request header carries Accept-Encoding: gzip, the server knows the client accepts gzip and spends its own CPU to compress the response. This can be understood as trading some server time and CPU cost for shorter HTTP transfer time. Webpack can also produce gzip-compressed assets at build time (e.g. via a compression plugin), doing part of the server's work during the build and taking load off the server.

Webpack optimization

  1. Loaders: for babel-loader, the most common optimization is to use include or exclude to avoid unnecessary transpilation. You can also enable caching so that transpilation results are stored on the file system.
  2. Plugins: Webpack builds are slowed down by the sheer size of third-party libraries. A plugin such as DllPlugin lets us prebuild them separately, so they are only repackaged when their own versions change.
  3. Tree-shaking: based on the import/export syntax, tree-shaking can determine at compile time which modules are not actually used; that code is removed from the final bundle.
  4. Multi-process: Webpack runs on a single thread, processing tasks one after another even though our CPUs are multi-core. A tool such as HappyPack splits the work into multiple child processes that run concurrently, exploiting the CPU's multi-core parallelism and greatly improving packaging speed.

Caching

  1. HTTP Cache (the key one)

HTTP caching is divided into strong caching and negotiated caching.

Strong caching: controlled by the Expires and Cache-Control fields. The browser checks these fields for a match; on a hit it takes the resource straight from the cache, returns HTTP status code 200, and sends no request to the server. With Expires, the server writes an absolute expiration timestamp into the response headers; the browser compares local time against it and, if local time is earlier than the Expires time the server set, serves the resource from cache. HTTP/1.1 added the Cache-Control field, which expresses validity as a relative lifetime in seconds (max-age), avoiding the problems of absolute timestamps such as client/server clock skew. When Cache-Control and Expires are both present, Cache-Control takes precedence.
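The freshness check can be sketched as a small function (a simplification: real browsers also honor no-cache, no-store, heuristic expiry, and more):

```javascript
// Sketch of a strong-cache freshness check: Cache-Control max-age wins
// over Expires; Expires is an absolute time compared with local time.
function isFresh(headers, storedAtMs, nowMs) {
  const cc = headers['cache-control'];
  const m = cc && /max-age=(\d+)/.exec(cc);
  if (m) {
    return (nowMs - storedAtMs) / 1000 < Number(m[1]); // relative lifetime in seconds
  }
  if (headers['expires']) {
    return nowMs < Date.parse(headers['expires']);     // absolute expiry timestamp
  }
  return false; // no strong-cache headers: fall through to negotiated caching
}

const storedAt = Date.parse('2024-01-01T00:00:00Z');
console.log(isFresh({ 'cache-control': 'max-age=600' }, storedAt, storedAt + 300 * 1000)); // true
console.log(isFresh({ 'cache-control': 'max-age=600' }, storedAt, storedAt + 900 * 1000)); // false
```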

Negotiated caching: when enabled, the first response carries a Last-Modified timestamp in its headers. On every subsequent request the browser sends an If-Modified-Since header carrying the Last-Modified value from the previous response. The server compares this timestamp against the resource's last modification time to decide whether the resource has changed. If it has changed, the server returns the full response body and a new Last-Modified value in the response headers; otherwise it returns 304 Not Modified with no body and no new Last-Modified field, redirecting the browser to its cached copy.

Last-Modified has a blind spot: if we edit a file too quickly (say, the change completes within 100 ms), If-Modified-Since can only detect differences at one-second granularity, so it misses the change and fails to re-request when it should. An ETag is a unique identifier string the server generates for each resource, computed from the file's contents: different contents yield different ETags, and vice versa, so an ETag senses file changes precisely. On the next request the browser sends the value back in an If-None-Match header for the server to match. When both ETag and Last-Modified are present, ETag takes precedence.

Strong caching has the higher priority; negotiated caching is attempted only when no strong-cache match is found.

1. Strong cache: no request is sent to the server; the resource is read directly from the cache. In the Network panel of the Chrome console, the request shows a status code of 200.

2. Negotiated cache: a request is sent to the server, which decides whether the cache matches based on certain request headers.

What they have in common is that both read resources from the client cache; the difference is that a strong-cache hit sends no request, while negotiated caching always does.

  2. Memory Cache

Memory Cache is the cache that lives in memory. In terms of priority it is the first cache the browser tries to hit, and in terms of efficiency it is the fastest to respond; but once the page is closed, the data in memory is gone. Base64 images and small JS and CSS files may be written to the memory cache.

  3. Service Worker Cache

A Service Worker is a JavaScript thread independent of the main thread. It runs detached from the page itself and can implement offline caching, message push, and network-proxy features.

  4. Push Cache (new in HTTP/2)

Push Cache is consulted only when the Memory Cache, HTTP Cache, and Service Worker Cache have all missed.

Local storage

  1. Cookie: HTTP is a stateless protocol; on its own, the server does not recognize the client from one request to the next. Cookies are attached to every HTTP request so the server can know the client's state. Their capacity is small (about 4 KB).

  2. Web Storage: makes up for Cookies' limitations with a much larger capacity, 5-10 MB. It comes in two forms: Local Storage and Session Storage.

    Local Storage: persistent local storage; the only way to make it disappear is to delete it manually.
    Session Storage: temporary local storage; its contents are released when the session ends (the page is closed).

    Local Storage, Session Storage, and Cookies all follow the same-origin policy. Session Storage is special, though: even two pages under the same domain cannot share Session Storage content unless they are opened in the same browser window.

  3. IndexedDB: a non-relational database that runs in the browser, with far larger capacity than Web Storage.
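Since Web Storage only stores strings, a small JSON wrapper is common. The sketch below uses a Map-backed stand-in for window.localStorage so the example runs outside a browser; in a real page you would pass window.localStorage or window.sessionStorage instead:

```javascript
// Sketch: JSON values on top of the Web Storage API, with an in-memory
// shim standing in for window.localStorage.
const memoryStorage = {
  map: new Map(),
  setItem(key, value) { this.map.set(key, String(value)); },
  getItem(key) { return this.map.has(key) ? this.map.get(key) : null; },
};

function saveJSON(storage, key, value) {
  storage.setItem(key, JSON.stringify(value)); // Web Storage holds strings only
}

function loadJSON(storage, key) {
  const raw = storage.getItem(key);
  return raw === null ? null : JSON.parse(raw);
}

saveJSON(memoryStorage, 'prefs', { theme: 'dark', fontSize: 14 });
console.log(loadJSON(memoryStorage, 'prefs')); // { theme: 'dark', fontSize: 14 }
```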

CDN optimization

A CDN is a group of servers distributed across different regions that serve data requests from whichever server is closest to the user. Copying resources from the origin (root) server to the CDN servers is called caching; when a CDN server does not have a resource and asks the origin server for it, that is called back-to-origin. At many top-tier Internet companies, "static resources go through the CDN" is not a suggestion but a rule.

Content presentation

The browser kernel can be divided into two parts: the rendering engine and the JS engine. As JS engines became more independent, "kernel" came to be shorthand for the rendering engine. Common rendering engines fall into four families: Trident (IE), Gecko (Firefox), Blink (Chrome, Opera), and WebKit (Safari). Chrome's engine used to be WebKit, but it has since been replaced by Blink.

Browser rendering process: first, the HTML parser interprets the HTML and builds the DOM tree, while the browser's CSS parser recognizes and loads all CSS style information to generate the CSSOM tree. The two are merged into the render tree. The browser then computes the relative position, size, and other geometry of every element on the page, producing the laid-out render tree. Finally, the browser walks the laid-out render tree and paints it, and the first render of the page is complete.

Avoid CSS blocking

  1. Put CSS in the head tag so styles start downloading as early as possible
  2. Use a CDN to speed up loading of static resources

Avoid JS blocking

JS can be loaded in three modes: default, async, and defer. Use async when the script does not depend heavily on DOM elements or on other scripts; choose defer when it does.
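In markup, the three modes look like this (the file names are placeholders):

```html
<!-- default: parsing pauses while the script downloads and runs -->
<script src="app.js"></script>

<!-- async: downloads in parallel, runs as soon as it arrives; order not guaranteed -->
<script async src="analytics.js"></script>

<!-- defer: downloads in parallel, runs in document order after parsing finishes -->
<script defer src="ui.js"></script>
```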

Avoid reflow and repaint

Reflow: triggered when a change to the DOM alters its geometry, such as changing width or height, or hiding an element.

Repaint: triggered when a change to the DOM alters an element's style without affecting its geometry, such as changing its color or background color.
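One common mitigation is to batch geometric style writes so they cost at most one reflow; a sketch (the element id is a placeholder):

```html
<script>
  const el = document.getElementById('box'); // placeholder id

  // Costly: each geometric write below may trigger its own reflow.
  // el.style.width = '200px';
  // el.style.height = '100px';
  // el.style.margin = '10px';

  // Cheaper: apply all changes at once (or toggle a prepared CSS class).
  el.style.cssText = 'width: 200px; height: 100px; margin: 10px;';
</script>
```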

Asynchronous update strategy

The macro-task queue holds one and only one script (the code as a whole). The global context is pushed onto the call stack and the synchronous code runs. When it finishes, the script is removed from the macro-task queue and the micro-task queue is processed. Note the difference: macro-tasks are dequeued and executed one at a time, while micro-tasks are drained as a team: we execute and dequeue the tasks in the micro queue one by one until the queue is empty.

Given this model, if I want to update the DOM in an asynchronous task, should I wrap it as a micro-task or a macro-task?

If the DOM update in the asynchronous task is a macro-task: the task is pushed into the macro queue, but since the script itself is a macro-task, the next step is to process the micro queue and then render. At render time our target asynchronous task has not actually executed, so the DOM we wanted to modify has not been modified yet.

If the DOM update in the asynchronous task is a micro-task: after the script finishes, we process the micro-task queue, by the end of which our asynchronous task has already modified the DOM. We then move on to render, presenting the most up-to-date result to the user right away.

Therefore, we should update the DOM as close to the render step as possible, i.e. in a micro-task. When we update data through the interfaces Vue or React provide, the update does not take effect immediately; it is pushed into a micro-task queue, and at the right moment the queued update tasks are triggered in a batch. This is called asynchronous updating.
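The ordering is easy to observe in plain JavaScript: Promise callbacks (micro-tasks) run before the next setTimeout callback (macro-task), which is why micro-task updates land before the next render:

```javascript
// Sketch: synchronous code runs first, then the micro-task queue drains,
// and only then does the next macro-task run.
const order = [];

setTimeout(() => order.push('macro: setTimeout'), 0);
Promise.resolve().then(() => order.push('micro: promise'));
order.push('sync: script');

setTimeout(() => {
  console.log(order); // ['sync: script', 'micro: promise', 'macro: setTimeout']
}, 10);
```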

Server-side rendering

Server-side rendering solves a critical performance problem: the first screen loading too slowly. In client-side rendering mode, besides loading the HTML we must wait for the JS required for rendering to download, and then wait again for that JS to run in the browser. In server-side rendering mode, by contrast, the server hands the client a page that can be displayed directly; the intermediate steps have already happened on the server. Server-side rendering turns the virtual DOM into the real DOM, which effectively means running framework code such as Vue or React on Node first. But concentrating rendering pressure that used to be spread across many browsers onto a comparatively small number of servers can easily overwhelm them. So treat server-side rendering as a fallback: use the cheaper optimizations first, and consider SSR only if performance is still unsatisfactory.
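Stripped of any framework, the essence of SSR is simply that the server turns data into ready-to-display HTML. A minimal sketch (a real app would use React's renderToString or Vue's server renderer instead of string building):

```javascript
// Sketch: the server renders a complete page from data, so the browser
// can paint the first screen without running any JS.
function renderPage(user) {
  return [
    '<!doctype html>',
    '<html><body>',
    `<h1>Hello, ${user.name}</h1>`,
    `<p>You have ${user.unread} unread messages.</p>`,
    '</body></html>',
  ].join('\n');
}

const html = renderPage({ name: 'alice', unread: 3 });
console.log(html.includes('<h1>Hello, alice</h1>')); // true
```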

Front-end performance problems cannot be solved with one or two lines of code, and no optimization result is final or "best". To pursue the extreme, you must be able to pinpoint where the performance problems come from, weigh all the trade-offs, and choose the optimization approach that suits your project.