Preface

After the project's first deployment the first screen loaded too slowly, so I recently did some simple optimization and took the chance to organize project-optimization ideas from a beginner's perspective. Because of my limited level, for many approaches I can only write down the idea without applying it to my own project, and quite a few of the schemes are probably slightly out of date. Mainly I want to survey optimization knowledge; after all, optimization is an eternal topic. The project is built on the Vue framework, but the optimization ideas are not limited to any framework.

Code level optimization

This part of code-level optimization is honestly a bit scattered. Put simply, the question is: how do you write high-performance code? It is mainly about avoiding performance-harming operations, from a novice's perspective.

Lazy loading of routes and modules

This is very common lazy-loading code: instead of loading all routes or modules at once, load them only when they are needed, by using a function rather than an object to import the module or route. This is a basic technique in the Vue framework.

// Route lazy loading
export default new Router({
  routes: [
    {
      path: '/',
      name: 'HelloWorld',
      // Method 1: Vue async component
      // component: resolve => require(['@/components/HelloWorld'], resolve)
      // Method 2: dynamic import()
      component: () => import('@/components/HelloWorld')
    }
  ]
})

// Module lazy loading
export default {
  components: {
    // Method 1
    'HelloWorld': () => import('./HelloWorld'),
    // Method 2
    // 'HelloWorld': resolve => require(['./HelloWorld'], resolve)
  }
}

Lazy loading of images

Image lazy loading usually matters most on image-heavy sites. In my own project, because there were few images, I did not implement full long-page lazy loading. I simply show a loading effect while the ajax request for an image or component has not yet returned: a div whose z-index places it on top of the images and components, and whose CSS is switched to hidden in the ajax callback. Open-source component libraries such as Element UI ship a similar loading component.

For long pages with many images, an empty div stands in for each image while it is off screen. Once scrolling brings the div into view, its content is swapped in and rendered; that is image lazy loading.

The key is to obtain two values:

1. The height of the current viewport, usually read from the window.innerHeight property.

2. The distance from the element to the top of the viewport. We use the getBoundingClientRect() method, which returns the element's size and its position relative to the viewport as a DOMRect. The DOMRect object contains a set of read-only properties (left, top, right, bottom) describing the border box in pixels. All properties except width and height are relative to the top-left corner of the viewport. The top property gives the element's distance from the top of the visible area, which is what the function below uses.

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <meta http-equiv="X-UA-Compatible" content="ie=edge">
  <title>Lazy-Load</title>
  <style>
    .img {
      width: 200px;
      height: 200px;
      background-color: gray;
    }
    .pic {
      /* necessary img styles */
    }
  </style>
</head>
<body>
  <div class="container">
    <div class="img">
      <!-- Note that we don't give it a real src -->
      <img class="pic" alt="Loading" data-src="./images/1.png">
    </div>
    </div>
    <div class="img">
      <img class="pic" alt="Loading" data-src="./images/2.png">
    </div>
    <div class="img">
      <img class="pic" alt="Loading" data-src="./images/3.png">
    </div>
    <div class="img">
      <img class="pic" alt="Loading" data-src="./images/4.png">
    </div>
    <div class="img">
      <img class="pic" alt="Loading" data-src="./images/5.png">
    </div>
     <div class="img">
      <img class="pic" alt="Loading" data-src="./images/6.png">
    </div>
     <div class="img">
      <img class="pic" alt="Loading" data-src="./images/7.png">
    </div>
     <div class="img">
      <img class="pic" alt="Loading" data-src="./images/8.png">
    </div>
     <div class="img">
      <img class="pic" alt="Loading" data-src="./images/9.png">
    </div>
     <div class="img">
      <img class="pic" alt="Loading" data-src="./images/10.png">
    </div>
  </div>
</body>
</html>
<script>
    // Get all image tags
    const imgs = document.getElementsByTagName('img')
    // Get the height of the viewport
    const viewHeight = window.innerHeight || document.documentElement.clientHeight
    // num counts how many images are already shown, so we don't re-check them
    let num = 0
    function lazyload() {
        for (let i = num; i < imgs.length; i++) {
            // Subtract the element's distance from the top of the viewport from the viewport height
            let distance = viewHeight - imgs[i].getBoundingClientRect().top
            // If distance >= 0, the element has entered the viewport
            if (distance >= 0) {
                // Write the real src to the element so the image loads
                imgs[i].src = imgs[i].getAttribute('data-src')
                // The first i images are loaded; next time start checking from i + 1
                num = i + 1
            }
        }
    }
    // Listen for the scroll event
    window.addEventListener('scroll', lazyload, false);
</script>
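For browsers that support it, a more modern alternative to a scroll handler is IntersectionObserver, which fires a callback when an element enters the viewport. Below is a minimal sketch of the idea; the handler only reads plain properties from the entries it receives, so it can be exercised with mock objects, and the `pic` class matches the markup above.

```javascript
// Swap in the real src once an image enters the viewport.
// The callback only reads plain properties, so it also works with mock entries.
function handleIntersections(entries, observer) {
  entries.forEach(entry => {
    if (!entry.isIntersecting) return
    const img = entry.target
    img.src = img.getAttribute('data-src') // load the real image
    observer.unobserve(img)                // stop watching once loaded
  })
}

// Browser-only wiring, guarded so the snippet also loads outside a browser
if (typeof IntersectionObserver !== 'undefined') {
  const io = new IntersectionObserver(handleIntersections)
  document.querySelectorAll('img.pic').forEach(img => io.observe(img))
}
```

Unlike the scroll handler, no work is done at all while the user is not near an unloaded image, and there is no need to track a `num` counter by hand.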

Ajax requests

Writing native Ajax by hand is rare in projects these days, mostly because of asynchronous callback hell. My own project uses axios, which is usually described as follows: axios is a lightweight HTTP client that performs HTTP requests on top of the XMLHttpRequest service, supports rich configuration and Promises, and works both in the browser and in Node.js. In practice we usually wrap axios, because repeating things like setting the timeout, setting request headers, choosing the request address by environment, and error handling on every HTTP request would be too cumbersome. Here is my own very simple axios wrapper.

import Vue from 'vue'
import axios from 'axios'
var service = axios.create({
    baseURL: '', // e.g. http://qinghai.free.idcfengye.com/
    timeout: 400000
});
// Add a request interceptor
service.interceptors.request.use(function (config) {
    // What to do before sending the request
    return config;
}, function (error) {
    // What to do about the request error
    return Promise.reject(error);
});
// Add a response interceptor
service.interceptors.response.use(function (response) {
    // What to do with the response data
    return response;
}, function (error) {
    // Do something about the response error
    return Promise.reject(error);
});

Vue.prototype.$http = service;
// Mount it on the Vue prototype so that this.$http is available in any .vue file.

Of course, ajax optimization does not really mean a simple axios wrapper; that is just a common operation. This optimization problem also came from the project's home page, where too many ajax requests fired at the same time, and running too many asynchronous tasks at once made the page stall. I started from the axios wrapper above. To solve the stalling, my first thought was to use fetch() instead of Ajax requests. fetch() is defined as follows:

The Fetch API is a newer alternative to Ajax in which fetch returns a Promise. Fetch is not a further encapsulation of Ajax but native JS that does not use the XMLHttpRequest object; the basic usage is fetch(url, options).then().

In practice I see three advantages: 1. it uses Promises and also supports async/await, which makes asynchronous code more convenient to write; 2. whether cookies are carried can be configured; 3. fetch can be used inside a Service Worker. In real projects, though, Ajax is usually already wrapped, like axios above, so the first two hardly matter. The key one is the third: Service Workers are built on Web Workers.

As we all know, JavaScript is single-threaded. As web business grows more complex, developers gradually do many resource-hungry operations in JS, which makes the drawbacks of a single thread even more obvious. The Web Worker was created on this basis: it lives off the main thread, and we can hand complex, time-consuming tasks over to it. As an independent thread, though, the Web Worker can do more than that; the Service Worker adds offline caching on top of the Web Worker.

Characteristics: 1. it requires an HTTPS environment (localhost or 127.0.0.1 is also acceptable for local debugging); 2. it relies on the Cache API; 3. it relies on the HTML5 Fetch API; 4. it relies on Promises. But I did not need such a complicated optimization scheme here, so I will not go into it.

What I actually used were the following basic techniques. 1. I used axios to handle multiple concurrent requests: when a piece of page data comes from several unrelated requests, the results need to be processed and presented together. axios.all(iterable) takes an array of requests; axios.spread(callback) receives the corresponding return values as arguments. An example of the API:

  methods: {
    getAllTask() {
      return axios.get('/data.json', {
        params: { id: 10 }
      })
    },
    getAllCity() {
      return axios.get('/city.json', {
        params: { id: 11 }
      })
    }
  },
  mounted() {
    axios.all([this.getAllTask(), this.getAllCity()])
      .then(axios.spread(function(allTask, allCity) {
        console.log('Request 1 result', allTask)
        console.log('Request 2 result', allCity)
      }))
  },
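axios.all is essentially a thin wrapper over the native Promise.all, so the same pattern can also be written with plain Promises and async/await. A minimal sketch, with the two getters stubbed out since the real endpoints above are project-specific:

```javascript
// Stand-ins for the axios calls above (the real endpoints are project-specific)
function getAllTask() { return Promise.resolve({ data: 'tasks' }) }
function getAllCity() { return Promise.resolve({ data: 'cities' }) }

async function loadHomePage() {
  // Both requests run concurrently; await resolves when the slower one finishes
  const [allTask, allCity] = await Promise.all([getAllTask(), getAllCity()])
  return { allTask, allCity }
}
```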

2. Try to reuse ajax requests: when different modules can share the same information from the same interface, do not request it twice in two modules; use communication between components to share the information instead.
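One way to avoid duplicate requests when component communication is awkward is to cache the in-flight promise itself, so two modules asking for the same data at the same time trigger only one request. A minimal sketch; the `getShared` helper and its keys are made up for illustration:

```javascript
// Cache of in-flight requests, keyed by an arbitrary string
const pending = new Map()

function getShared(key, fetcher) {
  if (!pending.has(key)) {
    // Store the promise itself; concurrent callers all get the same one.
    // Drop it once settled so later calls can refresh the data.
    pending.set(key, fetcher().finally(() => pending.delete(key)))
  }
  return pending.get(key)
}
```

Two concurrent calls with the same key share one request; a call made after the promise settles starts a fresh one.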

3. Set up HTTP caching. HTTP caching is the caching mechanism we are most familiar with in daily development. It is divided into the strong cache and the negotiated cache.

Strong cache

The strong cache has the higher priority; the negotiated cache is consulted only when the strong cache fails to match. The strong cache is controlled with the Expires and Cache-Control fields in the HTTP headers. Under the strong cache, when a request is made again the browser decides from expires and cache-control whether the target resource "hits" the cache; if so, the browser fetches the resource straight from its own cache without talking to the server.

When the server returns a response, the expiry time is written into the Expires field of the Response Headers. If we then request the resource again, the browser compares the local time with the Expires timestamp and, if the local time is earlier, serves the resource straight from the cache. Expires is an absolute timestamp, such as year x, month x, day x. In Cache-Control, the max-age directive controls the resource's lifetime instead; max-age is not a timestamp but a duration. Because max-age is relative, it avoids the clock-skew problems Expires can cause. Also, Cache-Control takes precedence over Expires. Cache-Control additionally offers s-maxage, which indicates how long the resource stays valid on a cache server (such as a CDN) and applies only to public caches. (public and private are opposing directives saying whether a resource may be cached by a proxy server.)
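The difference between the two headers can be sketched as two tiny freshness checks (the function names and the epoch-second convention are mine, not from any spec): Expires compares against an absolute clock, while max-age only needs the cached copy's age.

```javascript
// Expires: absolute timestamp, so a skewed local clock gives wrong answers
function isFreshByExpires(expiresSec, localNowSec) {
  return localNowSec < expiresSec
}

// max-age: relative duration, immune to clock skew between client and server
function isFreshByMaxAge(cachedAtSec, maxAgeSec, nowSec) {
  return nowSec - cachedAtSec < maxAgeSec
}
```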

Negotiated cache

The negotiated cache depends on communication between the server and the browser. Under the negotiated cache mechanism, the browser asks the server for cache information to decide whether to re-initiate the request and download the complete response, or to take the cached resource locally. When the server answers that the cached resource is Not Modified, the browser is redirected to its own cache, and the network request's status code is 304.

Implementation: from Last-Modified to ETag. Last-Modified is a timestamp that comes back in the Response Headers of the first request when negotiated caching is enabled. Each subsequent request then carries a field called If-Modified-Since, whose value is the Last-Modified value from the previous response. On receiving the timestamp, the server compares it with the resource's last modification time to decide whether the resource has changed. If it has changed, a complete response is returned with a fresh Last-Modified value in the Response Headers; otherwise, a 304 response is returned and the Last-Modified field is not added.

But there can be problems: if we touch the file without changing its content, the server may think we modified it; and if a file changes faster than the timestamp's resolution, the server may not notice at all. In other words, the server does not always perceive file changes correctly. This leads to the ETag, a unique identification string the server generates for each resource, computed from the file's content, so files with different content always get different ETags. The downside of ETag generation is the server overhead it adds, which can affect server performance. As before, ETag has higher priority than Last-Modified.

The HTTP cache decision flow: when a resource's content cannot be reused at all, set Cache-Control to no-store directly, rejecting every form of caching; otherwise, consider whether the resource must be validated with the server on every request, and if so set Cache-Control to no-cache; otherwise, consider whether the resource may be cached by proxy servers and set private or public accordingly; then consider the resource's expiry and set suitable max-age and s-maxage values; finally, configure the ETag and Last-Modified parameters that the negotiated cache needs.

Component libraries are introduced on demand

This one is easy to understand: when you use component libraries such as Element UI or ECharts, you usually do not use every component they provide, so instead of importing the whole library, import only the parts you need.
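As a sketch (assuming Element UI with babel-plugin-component configured, as its documentation describes), the difference looks like this:

```javascript
// Full import: the entire library lands in the bundle
// import ElementUI from 'element-ui'
// Vue.use(ElementUI)

// On-demand import: with babel-plugin-component configured, only the
// Button and Select code (and their styles) end up in the bundle
import Vue from 'vue'
import { Button, Select } from 'element-ui'
Vue.component(Button.name, Button)
Vue.component(Select.name, Select)
```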

JS code for V8 engines

No doubt this is another big pit. My understanding of it also comes from other people's blogs; I cannot guarantee the conclusions are correct and am only recording my own understanding. First we need to know the following two underlying characteristics of the V8 engine.

Hidden classes

V8 accesses properties with a technique completely different from dynamic dictionary lookup: it dynamically creates hidden classes for objects. Every time a new property is added to an object, the object's corresponding hidden class changes. At first glance it may seem inefficient to create a new hidden class for every added property, but in fact hidden classes can be reused thanks to the transition information they carry: the next time a Point object is created, it can directly share the hidden classes created for the first Point object.

In this way, all the properties set in a constructor are linked together by a chain of hidden classes, and objects created by that constructor share the chain directly. The main advantages: 1. property access no longer requires a dynamic dictionary lookup; 2. V8 can apply classic class-based optimizations and inline caching.

Inline caching: the first time code that accesses an object's property runs, V8 finds the object's hidden class; it then assumes that all other objects whose properties are accessed in the same snippet use this hidden class too. Only when that prediction fails does V8 patch the inline code and remove the inline optimization it just added. When many objects share the same hidden class, property access under this scheme approaches the speed of most dynamic languages. Inline-cached property access and hidden classes, combined with dynamic code generation and optimization, speed up property access when many instances are built from a single constructor.

Lessons for V8-friendly code from hidden classes: 1. initialize all object members in the constructor (so instances do not change their hidden class later); 2. always initialize object members in the same order; 3. never delete a property from an object; 4. methods: code that runs the same method repeatedly runs faster than code that runs many different methods once each (because of inline caching); 5. arrays: avoid sparse arrays.
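Points 1 through 3 can be illustrated with a small sketch (Point is the same example object used above):

```javascript
// Good: every member initialized in the constructor, always in the same order,
// so all Point instances share one hidden-class chain
function Point(x, y) {
  this.x = x
  this.y = y
}
const p1 = new Point(1, 2)
const p2 = new Point(3, 4)   // reuses p1's hidden classes

// Bad: late additions or deletions fork the hidden class
const p3 = new Point(5, 6)
p3.z = 7        // p3 now has a different hidden class from p1/p2
delete p2.x     // degrades p2 toward slow dictionary-mode property storage
```

The "bad" lines still run correctly; the cost is invisible at this scale and only shows up as slower property access in hot code.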

Two compilers

V8 has two different runtime compilers: 1. the "full" compiler (unoptimized). At first, all V8 code runs in the unoptimized state; the benefit is that it compiles very fast, so code starts running quickly the first time. 2. the optimizing compiler. When V8 finds that a piece of code is hot, it optimizes the code for the usual execution path and generates optimized code, which executes very fast.

The compiler can also fall back from the "optimized" state to the "full" state; this is deoptimization. It is an unfortunate process: the optimized code could not execute correctly and had to fall back to the unoptimized version. The worst case is code that is optimized and then deoptimized over and over, which brings a significant performance penalty. Typical triggers are for…in loops over an object's properties and code inside try…catch, both of which keep the compiler from reaching the optimized state.

Lessons learned: 1. extract the body of a for…in loop into a separate function so that the V8 engine can optimize it; 2. use try…catch sparingly in hot code.
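Lesson 1 in code: a sketch where the for…in body is extracted into its own small function, which V8 can then optimize independently of the loop (the serialize example is mine):

```javascript
// The hot inner work lives in a small, optimizable function...
function encodePair(obj, key, out) {
  out.push(key + '=' + obj[key])
}

// ...while the for…in loop, which blocks optimization, stays thin
function serialize(obj) {
  const out = []
  for (const key in obj) {
    encodePair(obj, key, out)
  }
  return out.join('&')
}
```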

Closures

Closures can complicate a program's logic, and sometimes it is unclear whether an object's memory has been released, so take care to release large objects captured in closures, or you will cause a memory leak. Use closures with care; improper use can lead to a large memory footprint.
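A sketch of the point about releasing large objects: keep only what the closure actually needs and null out the rest (the names are illustrative):

```javascript
function makeHandler() {
  let bigData = new Array(1000000).fill(0) // large temporary, captured by scope
  const count = bigData.length             // keep only the derived value we need
  bigData = null                           // release the big array explicitly
  return function handler() {
    return count // the closure now retains a number, not a million-element array
  }
}
```

Without the `bigData = null` line, every handler returned would keep the whole array alive for as long as the handler itself is referenced.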

Storage layer optimization

Actually, the ajax section above already covered a lot about caching; the layout feels a bit off, but I do not plan to change it. This part is mainly about webpack packaging changes, since that is, after all, optimizing what gets stored. I built the project with Vue CLI, though, where optimizations such as tree shaking are already set up by default.

When you build a project with Vue CLI, the webpack configuration is hidden inside the CLI's framework, but it is easy to add your own webpack configuration. According to the Vue CLI documentation, the easiest way to tweak the webpack config is to provide an object in the configureWebpack option of vue.config.js; the object is merged into the final webpack config by webpack-merge. If you need to configure behavior conditionally on the environment, or want to modify the config directly, use a function instead, which is evaluated lazily after the environment variables are set. The function receives the resolved config as its first argument; inside it you can either mutate the config directly or return an object to be merged. I do not understand the detailed options, so I will not go further.
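The two forms described above look like this in vue.config.js (a sketch; the devtool tweak is just an example of a direct mutation):

```javascript
// vue.config.js — object form: merged into the final config by webpack-merge
module.exports = {
  configureWebpack: {
    plugins: []
  }
}

// Function form (alternative), evaluated after environment variables are set:
// module.exports = {
//   configureWebpack: config => {
//     if (process.env.NODE_ENV === 'production') {
//       config.devtool = false  // mutate the resolved config directly...
//     }
//     return {}                 // ...or return an object to be merged
//   }
// }
```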

webpack-bundle-analyzer

If you build the project with vue-cli 3, vue-cli-service build --report generates a report.html; open it to see how webpack modules and dependencies are loaded. If the project was not built with Vue CLI, install the webpack-bundle-analyzer module directly with npm install (too high a version can produce unexpected errors; 5.0.0 is recommended), then import the plugin in the webpack configuration and add a custom command; see the official documentation for details. You can then see the packaging analysis of your own project: change global imports to targeted imports, and use a lighter component library for modules that are rarely used. In general, use the bundle analysis diagram to minimize the size of the project.

Gzip compression

Gzip compression is one of the most common ways to optimize first-screen loading speed. The idea behind gzip is to find recurring strings in a text file and replace them temporarily, making the whole file smaller. By this principle, the higher the repetition rate in a file, the more efficient the compression and the greater the benefit of gzip, and vice versa. There are two main ways to implement it:

1. Package and deploy the project normally and modify the nginx configuration directly on the server. Set up this way, when you make a request the server compresses the file into .gz format and sends it; the client receives the .gz file and decompresses it before use. This trades compression time for transfer time, which is usually a net win unless the project is very small.

http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    #tcp_nopush on;
    #keepalive_timeout 0;
    keepalive_timeout 65;
    # Enable gzip
    gzip on;
    gzip_buffers 4 16k;
    gzip_comp_level 6;
    gzip_types text/plain application/javascript text/css application/xml text/javascript application/x-httpd-php;
    server {
        listen 8462;
        server_name localhost;
        location / {
            root dist;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

2. Configure webpack specially and install compression-webpack-plugin, so that packaging produces two files at once: the normal file and a gzip-compressed one. Both are deployed to the server. The following references the webpack configuration of a Vue CLI project; without Vue CLI you can modify the webpack configuration directly.

const CompressionPlugin = require('compression-webpack-plugin');
module.exports = {
    configureWebpack: {
        plugins: [
            new CompressionPlugin({
                algorithm: 'gzip', // Use gzip compression
                test: /\.js$|\.html$|\.css$/, // Match file names
                filename: '[path].gz[query]', // Compressed file name (keep the original name, add a .gz suffix)
                minRatio: 1, // Only compress when the ratio is below 1
                threshold: 10240, // Only compress data over 10K
                deleteOriginalAssets: false // Set with care: keep false if you still need to serve non-gzip resources
            })
        ]
    }
}

Then enable the gzip_static on directive in the nginx configuration to serve the local .gz files statically and complete the gzip setup. Compared with the previous scheme, the deployed project is larger, but it avoids the server compressing on the fly and is therefore faster.

CDN cache optimization

Definition: a Content Delivery Network (CDN) is a group of servers distributed across different regions. These servers store copies of the data, so requests can be served by whichever server is closest to the user. A CDN provides fast service and is less affected by heavy traffic. Whereas the other caches above mostly optimize repeat traffic, the CDN cache is more about optimizing first-screen loading speed.

CDN has two core points, one is cache, one is back source. Both of these concepts are very easy to understand. “Caching” means that we copy a copy of the resource to the CDN server, and “back source” means that the CDN finds that it does not have the resource (usually the cached data expires) and turns to the root server (or its upper layer server) for the resource.

CDN is usually used to store static resources. The “root server” mentioned above is essentially a business server whose core task is to generate dynamic pages or return non-static pages, both of which require computation. The business server is like a workshop where machines hum to produce the resources we need; In contrast, a CDN server is like a warehouse, which only acts as a “habitat” and “porter” for resources.

Static resources are resources such as JS, CSS, and images that require no computation by the business server. "Dynamic resources", as the name implies, must be generated by the back end in real time; common examples are JSP, ASP, or server-rendered HTML pages. What then is an "impure static resource"? It is an HTML page that requires the server to do extra computation beyond the page itself. Concretely: before I open a site, the site needs to verify my identity in various ways, such as permission authentication, to decide whether to show me the HTML page. The HTML itself is static, but it is coupled to the business server's operations, and it is not appropriate to dump it on the CDN.

In conclusion, static resources can be optimized through a CDN. Moreover, static resources usually need no cookies carrying authentication information, so putting static resources and the main page under different domains neatly avoids unnecessary cookies.

So much for the theory. In my own project I did not deploy all static resources on a CDN; after all, my technical capacity is limited. I only replaced the imported common framework code with the free resources provided by BootCDN. For this project, I switched Vue, Vuex, axios, ECharts, and Element UI to CDN imports. There are two main benefits: 1. with the public libraries separated out, the packaged bundle is smaller and packaging is faster; 2. the CDN loads faster and the server's load drops.

The implementation is as follows. 1. Add the CDN references in index.html:

  ...
  <link href="https://cdn.bootcss.com/element-ui/2.7.2/theme-chalk/index.css" rel="stylesheet">
  </head>
  <body>
    <div id="app"></div>
    <script src="https://cdn.bootcdn.net/ajax/libs/vue/3.0.2/vue.cjs.js"></script>
    <script src="https://cdn.bootcdn.net/ajax/libs/vuex/4.0.0-rc.1/vuex.cjs.js"></script>
    <script src="https://cdn.bootcdn.net/ajax/libs/vue-router/3.4.8/vue-router.common.js"></script>
    <script src="https://cdn.bootcss.com/axios/0.18.0/axios.min.js"></script>
    <script src="https://cdn.bootcdn.net/ajax/libs/element-ui/2.15.0/index.js"></script>
    <script src="https://cdn.bootcdn.net/ajax/libs/vue-echarts/5.0.0-beta.0/vue-echarts.js"></script>
  </body>
</html>

2. Add the webpack configuration code in vue.config.js; for the externals option in the webpack configuration, see the webpack documentation.
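For the CDN setup above, the externals map in vue.config.js tells webpack not to bundle these imports and to use the globals the CDN bundles expose instead. A sketch; the global names on the right are the conventional ones for each library (for example, Element UI exposes ELEMENT) and should be checked against the exact bundles you load:

```javascript
// vue.config.js — map import names to the globals provided by the CDN scripts
module.exports = {
  configureWebpack: {
    externals: {
      'vue': 'Vue',
      'vuex': 'Vuex',
      'vue-router': 'VueRouter',
      'axios': 'axios',
      'element-ui': 'ELEMENT',
      'echarts': 'echarts'
    }
  }
}
```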

Optimization of rendering layers

Server-side rendering techniques

When it comes to render-level optimization, SSR has to be mentioned; the technique is especially popular now. It is a relative concept whose opposite is client-side rendering. Client-side rendering is the common, normal case: the server sends the client the static files needed for rendering, and after loading them the client runs the JS in the browser and generates the corresponding DOM from the results. This is why client-rendered pages have such concise source: the content the page presents cannot be found in the HTML source file. That is its nature.

In server-side rendering mode, when the user first requests a page, the server renders the needed component or page into an HTML string and returns it to the client. What the client receives is HTML that can be displayed to the user directly, without running JS to generate the DOM first. A server-rendered site is "what you see is what you get": the content rendered on the page can be found in the HTML source. Frameworks such as Nuxt.js already exist for server-side rendering in practice, but for my own technical reasons I have not tried this newer technology. Here I will only mention the advantages and disadvantages of SSR, which many places also cover.

Advantages: 1. mainly commercial: with SSR, search engines and crawlers of all kinds can crawl the site's content, which helps promote the site.

2. Server-side rendering also solves a critical performance problem: slow first-screen loading. In client-side rendering mode, besides loading the HTML we must wait for the JS needed for rendering to load, and then wait for that JS to run in the browser. All of this happens after the user clicks our link, and until it finishes the user does not see the page's real face; in other words, the user is waiting. In server-side rendering mode, by contrast, the server hands the client a page that can be shown to the user directly, with the intermediate steps already done on the server.

Cons: server-side rendering essentially moves the browser's work onto the server, so that the resource arrives at the browser ready to render. This may look reasonable at first glance, but it multiplies the load on the server and incurs considerable cost, which may well outweigh the benefit.

CSS selector optimization

The CSS engine matches each rule against the stylesheet from right to left, the opposite of our usual writing habit, so without realizing it you can write expensive selectors while believing they are high-performance. Example: #myList li {}. Written like this, the browser must iterate over every li element on the page and check each time whether its ancestor's id is myList, which can cost a lot of performance; a single class such as .myList_li {} is much cheaper. Similarly, the * wildcard in CSS matches all elements, so using it makes the browser visit every element on the page.

CSS performance guidelines: 1. avoid the wildcard and select only the elements you need; 2. rely on inheritable properties to avoid repeated matching and definition; 3. use fewer tag selectors and more class selectors; 4. do not gild the lily: do not weigh down id and class selectors with redundant tag selectors; 5. reduce nesting: descendant selectors have the highest overhead, so keep selector depth low (at most three levels) and use classes to target elements wherever possible.

DOM optimization

Reduce backflow and redraw

A redraw does not necessarily involve reflow, but a reflow always causes a redraw. By comparison, reflow does more work and costs more. The definitions are as follows:

Backflow: When we make changes to the DOM that result in a change in the DOM’s geometry (such as changing the width or height of an element, or hiding an element), the browser recalculates the element’s geometry (which also affects the geometry and position of other elements), and then draws the calculated results. This process is called backflow (also known as rearrangement).

Redraw: When we make changes to the DOM that result in a style change without affecting its geometry (such as changing the color or background color), the browser doesn’t have to recalculate the element’s geometry and simply draw a new style for the element (skipping the backflow shown above). This process is called redrawing.

1. Use variables as much as possible to cache DOM related data to avoid DOM changes;

2. Avoid changing styles line by line and use class names to merge styles.

3. Take the DOM offline: when we "take" an element off the page by setting display: none, further operations on it cannot trigger reflow or redraw; this removal is called taking the DOM offline. Removing the element and putting it back triggers one reflow, but anything done to it in the meantime barely affects performance.

Reduce DOM retrieval times

When you need to manipulate and modify a DOM multiple times, you can save the performance cost of retrieving the DOM by only retrieving it once and storing it in a variable.

Reduce the number of DOM modifications

Changes to the DOM cause the render tree to change, which leads to a (possible) reflow or redraw. Because JS runs much faster than DOM operations, the core idea of reducing DOM manipulation is to let JS take pressure off the DOM, which leads to the DOM Fragment approach.

The DocumentFragment interface represents a minimal document object that has no parent. It is used as a lightweight version of Document to store well-formed or potentially non-well-formed fragments of XML. Because a DocumentFragment is not part of the real DOM tree, changes to it cause no reflow of the DOM tree and no performance issues.

let container = document.getElementById('container')
// Create a DOM Fragment object as a container
let content = document.createDocumentFragment()
for (let count = 0; count < 10000; count++) {
  // Spans can be created with the usual DOM API
  let oSpan = document.createElement("span")
  oSpan.innerHTML = "I'm a little test."
  // Manipulate the DOM Fragment object as if it were the real DOM
  content.appendChild(oSpan)
}
// Once the content is ready, trigger the real DOM change just once
container.appendChild(content)

The DOM Fragment object lets us call the usual DOM APIs as if we were operating on the real DOM, keeping the code clean. And its role is very pure: when we append it into the real DOM, it hands over all its cached descendants and steps aside, perfectly playing the part of a container without entering the real DOM structure. This clean, structured behavior has made the DOM Fragment a classic performance-optimization technique, reflected in the source code of jQuery, Vue, and other excellent front-end frameworks. Using the microtask queue to batch asynchronous updates and avoid excessive rendering is likewise a way of letting JS take pressure off the DOM.