Preface

Websites are generally divided into two parts: the front end and the back end. The back end implements the website's functionality, such as user registration or letting users comment on articles. And the front end? It is the presentation of that functionality, and the vast majority of what shapes the user's experience comes from the front-end page.

And what is the purpose of building a website in the first place? Isn't the whole point to get people to visit? So the front end is what the user actually faces. Beyond the performance optimization needed on the back end, the front-end page deserves at least as much optimization work, because only then can we give users a better experience. Many people ask whether a man only cares about looks when looking for a girlfriend; one tongue-in-cheek answer goes: looks determine whether I want to get to know her mind, and her mind determines whether I will overlook her looks. The same is true of websites: the front-end experience determines whether users want to use the site's features, and the features determine whether users will put up with the front-end experience.

Not only that: a well-optimized front end not only saves money for the business, it also brings in more users thanks to the improved experience. Having said that, how can we optimize the performance of our front-end pages?

Generally speaking, the web front end refers to everything that sits in front of the website's business logic: browser loading, the site's view layer, image services, CDN services and so on. The main optimization levers are browser access, CDNs, server-side rendering, and HTTP-related tuning.

One thing that matters greatly in performance work is balancing the various techniques. We cannot assume that applying a single technique will automatically make the experience better or the loading faster; every performance improvement should be judged against the actual user experience.

1. Reduce HTTP requests and optimize DNS

Communication between the browser (client) and the server takes up a lot of time, especially when the network is bad.

A brief description of the flow of a normal HTTP request: when you type "www.xxxxxx.com" into your browser and press Enter, the browser resolves the URL and establishes a connection to the server, then sends a request; the server receives the request and returns the corresponding response; the browser receives the response and interprets the data.

When the requested page contains a lot of images, CSS, JS, and even audio, the browser repeatedly establishes and releases connections to the server, which inevitably wastes resources, and every HTTP request places a performance burden on both the server and the browser.

Downloading one 100KB image takes fewer network resources than downloading two 50KB images at the same network speed, because each extra request carries its own overhead. So: reduce HTTP requests.

Solutions:

  1. Consolidation and compression of resources

Merge CSS and JS files, and compress both CSS and JS. In the current front-end ecosystem we can automate this with build tools such as webpack, minifiers such as UglifyJS, and techniques such as tree shaking to strip unused code automatically. But don't over-merge resources, or the home page itself will become slow to load.
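As an illustration only, here is a minimal webpack configuration sketch (the entry point and output paths are hypothetical) that enables production minification and splits shared code into separate chunks instead of one huge bundle:

```js
// webpack.config.js: a minimal sketch, not a complete build setup
const path = require('path');

module.exports = {
  mode: 'production',                       // enables built-in minification and tree shaking
  entry: './src/index.js',                  // hypothetical entry point
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[contenthash].js',    // hashed names keep long-lived caches valid
  },
  optimization: {
    splitChunks: { chunks: 'all' },         // avoid over-merging: shared code gets its own chunk
  },
};
```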

  2. Image processing
    1. Choose image formats carefully. PNG-24 can be used where needed, and WebP is preferred where it is supported (Android supports WebP well, while iOS support is poor).
    2. Small images can be inlined as base64 data URIs, embedded directly in the code, which reduces HTTP requests. The trade-off is that the code referencing them grows larger and takes longer to download, so a balance is needed. CSS sprites are no longer especially popular; among the large Internet sites, Facebook is about the only one still found using them.
    3. Use image preloading and lazy loading. For waterfall layouts or image-heavy e-commerce sites, using them sensibly lets the home page open faster and reduces bandwidth consumption (see the lazy-loading sketch after this list).
    4. For small icons, consider icon fonts, which are lighter and easier to control than images.
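As a rough sketch of the lazy-loading idea above (the data-src attribute and selectors are assumptions, not part of any particular library), an IntersectionObserver can swap in the real image URL only when the image scrolls near the viewport:

```js
// Lazy-load images: a minimal sketch assuming <img data-src="..."> placeholders in the page
const lazyImages = document.querySelectorAll('img[data-src]');

const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src;   // start the real download only when needed
      img.removeAttribute('data-src');
      obs.unobserve(img);          // each image only needs to be handled once
    }
  }
}, { rootMargin: '200px' });       // start loading slightly before the image becomes visible

lazyImages.forEach((img) => observer.observe(img));
```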

2. Understand the concepts of Repaint, Reflow, and layers

  • Repaint occurs when an element's appearance changes without changing its layout (width and height), for example changes to visibility, outline, or background color.

  • Reflow occurs when a DOM change affects an element's geometry (width and height). The browser recalculates the element's geometry and invalidates the affected part of the render tree, and it may also have to re-check the geometry and visibility of many other nodes in the DOM tree, which is why Reflow is expensive. Triggers include resizing the window, changing the text size, changing content, or changing style properties. If Reflow happens too frequently, CPU usage will soar, so front-end developers need to understand Repaint and Reflow.

A reflow always triggers a repaint, but a repaint does not necessarily cause a reflow. Therefore, if the same effect can be achieved with a repaint alone, we save the cost of a reflow.

For example, changing a node's appearance by setting individual style properties triggers reflows, so it is better to toggle a class instead. For animated elements, position should be fixed or absolute so that the animation does not affect the layout of other elements. If the requirements do not allow position: fixed or absolute, weigh smoothness against speed: for instance, animate with translate instead of repositioning, and use rgba for transparency instead of opacity. Some inadvertent operations also lead to massive reflows, such as reading offsetWidth inside a loop.
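A minimal sketch of those two points, assuming a hypothetical highlighted CSS class and a set of .item elements (the class name and selectors are illustrative only):

```js
// 1. Prefer toggling a class over setting individual inline styles:
// each inline style assignment can trigger a separate reflow.
const box = document.querySelector('.box');        // hypothetical element
box.classList.add('highlighted');                  // one change, at most one reflow

// 2. Avoid forced layout in loops: reading offsetWidth inside the loop
// forces the browser to recalculate layout on every iteration.
const items = document.querySelectorAll('.item');

// Bad: read-then-write in every iteration forces repeated reflows.
// items.forEach((item) => { item.style.width = item.offsetWidth + 10 + 'px'; });

// Better: read all measurements first, then write all styles.
const widths = Array.from(items, (item) => item.offsetWidth);
items.forEach((item, i) => { item.style.width = widths[i] + 10 + 'px'; });
```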

Layers: the browser creates a separate layer for parts of the page that change frequently, such as video. You can use the Layers panel in Chrome DevTools to inspect layers, and promoting a frequently changing element to its own layer means the browser only has to repaint the part that changes. For example, will-change can create a new layer for an element, and we can consider a separate layer for GIFs and other frequently changing content. But be careful: too many layers means a lot of time spent compositing them.
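As an illustration (the element and the trigger events are assumptions), will-change can be set from JavaScript just before an element is likely to animate and cleared afterwards, so it only occupies its own layer while it actually needs one:

```js
// Promote an element to its own layer just before it is likely to animate,
// and release the layer afterwards to avoid unnecessary compositing cost.
const panel = document.querySelector('.sliding-panel'); // hypothetical element

panel.addEventListener('mouseenter', () => {
  panel.style.willChange = 'transform';   // hint the browser to prepare a layer
});

panel.addEventListener('transitionend', () => {
  panel.style.willChange = 'auto';        // drop the hint once the animation is done
});
```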

3. Reduce DOM manipulation

Rationale: DOM manipulation is expensive, which is often a performance bottleneck in web applications.

It is slow by nature. "Think of the DOM as an island and JavaScript (ECMAScript) as another island, connected by a toll bridge," goes the analogy in High Performance JavaScript. Every time you access the DOM you pay a toll, and the more often you cross, the more you pay. So it is generally recommended to minimize the number of bridge crossings.

Solutions:

  1. Modifying and accessing DOM elements can cause Repaint and Reflow, and querying the DOM inside a loop is a sin. Use JavaScript variables to store content, be aware of the overhead of looping over a large number of DOM elements, and write back to the DOM once at the end of the loop.

  2. Reduce repeated querying and modification of DOM elements by caching them in local variables and inserting the markup in one go using innerHTML (see the sketch after this list).

  3. Consider using a front-end MV* framework.
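A minimal sketch of the batching idea, assuming a hypothetical container with id "list" and some in-memory data; building the string first means one DOM write instead of one per item:

```js
// Build the markup in a JavaScript variable, then touch the DOM once
const list = document.getElementById('list');   // hypothetical container
const items = ['apple', 'banana', 'cherry'];    // hypothetical data

// Bad: one DOM write (and potential reflow) per iteration
// items.forEach((item) => { list.innerHTML += `<li>${item}</li>`; });

// Better: accumulate in a string, write once at the end of the loop
let html = '';
for (const item of items) {
  html += `<li>${item}</li>`;
}
list.innerHTML = html;
```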

4. CDN acceleration (Content Delivery Network)

Basic Principles:

The full name of CDN is Content Delivery Network.

To put it simply, its job is to distribute our content to nodes around the world, so that users everywhere can fetch what they want from the nearest network node; shortening the transmission distance is what produces the speed-up.

When a user requests content, the CDN vendor's intelligent DNS resolves the domain name to the IP of an edge node server (CDN vendors register these nodes with the network operators). The browser then sends the request to that edge node, which checks whether it already has the data. If not, it asks its parent node (which may in turn have its own parent; the exact strategy varies with the network environment), and if the content still cannot be found, it is fetched from the origin server and returned back down the chain. If the edge node does have the content, it checks whether the cached copy is still within its validity period and, if so, returns it to the user.

However, note that browsers limit the number of concurrent requests to the same domain, so if there are many CDN resources, we can consider spreading them across multiple CDN domain names.
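A rough sketch of that idea, with entirely hypothetical CDN hostnames: a small helper that deterministically spreads asset URLs across several CDN domains, so each domain stays under the browser's per-domain connection limit while the same asset always maps to the same host:

```js
// Spread static assets across several CDN domains (hostnames are hypothetical)
const CDN_HOSTS = [
  'https://static1.example-cdn.com',
  'https://static2.example-cdn.com',
  'https://static3.example-cdn.com',
];

// Hash the path so the same asset always resolves to the same domain (cache friendly)
function cdnUrl(path) {
  let hash = 0;
  for (const ch of path) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return CDN_HOSTS[hash % CDN_HOSTS.length] + path;
}

// Usage in image sources, script URLs, etc.
console.log(cdnUrl('/img/logo.png'));
```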

5. Put CSS and JS in external files, with CSS at the top and JS at the bottom

Basic Principles:

The benefits of external files are obvious, and they become necessary once the project grows a little more complex: they are easier to maintain, extend, manage, and reuse.

The right way:

JavaScript is the king of the browser: while the browser executes JavaScript it cannot do anything else, so every time a script appears (whether inline or linked externally) the page waits for it to be parsed and executed, and rendering continues only after the JavaScript completes. This is JavaScript's blocking nature. Because of it, the recommendation is to put JavaScript just before the closing </body> tag, which keeps scripts from blocking the rest of the page and lets the HTML structure render sooner.

The HTML specification clearly states that CSS should be placed within the <head> area of the page, which I won't expand on here.

However, note one subtlety: JavaScript blocks HTML rendering, so why can resources below a script still be fetched? The browser has an internal pre-scanning (speculative parsing) mechanism that lets it request resources concurrently even while JavaScript is executing.
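Beyond placement, a non-critical script can also be injected dynamically so it never blocks parsing at all; a minimal sketch, assuming a hypothetical analytics.js that is not needed for the initial render:

```js
// Load a non-critical script without blocking HTML parsing
function loadScript(src) {
  const script = document.createElement('script');
  script.src = src;
  script.async = true;                 // execute whenever it arrives, do not block
  document.body.appendChild(script);
}

// Defer loading until the page itself has finished loading
window.addEventListener('load', () => {
  loadScript('/js/analytics.js');      // hypothetical non-critical script
});
```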

6. Use asynchronous loading and server-side rendering with front-end frameworks

  1. Why asynchronous loading

It is now popular to build front-end projects with webpack, which can merge our JS, CSS, images, and other resources into a single file. Does reducing the number of HTTP requests this way automatically make the site better optimized? No: a site may have a lot of static resources, and if they are all rolled into one JS file, that file becomes huge and the home page loads very slowly. What is the way out? Asynchronous loading: instead of packing every component together, we split them up and load each component only when it is used, typically with webpack's dynamic import('a.js') syntax.
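A minimal sketch of that idea, with hypothetical file and function names: the chart module is downloaded only when the user clicks the button, and webpack emits it as a separate chunk because of the dynamic import:

```js
// main.js: only the code needed for the first screen sits in the initial bundle
const button = document.getElementById('show-chart'); // hypothetical button

button.addEventListener('click', async () => {
  // webpack turns this dynamic import into a separate, lazily loaded chunk
  const { renderChart } = await import('./chart.js'); // hypothetical module
  renderChart(document.getElementById('chart-container'));
});
```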

  2. Server-side rendering

The most important difference between client-side and server-side rendering is who assembles the complete HTML. If the HTML is assembled on the server and then returned to the client, that is server-side rendering; if the front end does most of the work of assembling the HTML, that is client-side rendering. In React, for example, we can use renderToString to render components into HTML on the server.
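A minimal server-side rendering sketch with React and Express (the App component, route, and client bundle name are assumptions): the server turns the component tree into an HTML string and ships it to the browser.

```js
// server.js: a minimal SSR sketch, assuming an existing App component
const express = require('express');
const React = require('react');
const { renderToString } = require('react-dom/server');
const App = require('./App');            // hypothetical React component

const server = express();

server.get('/', (req, res) => {
  // Render the component tree to an HTML string on the server
  const html = renderToString(React.createElement(App));
  res.send(`<!DOCTYPE html>
<html>
  <head><title>SSR demo</title></head>
  <body>
    <div id="root">${html}</div>
    <script src="/client.js"></script>
  </body>
</html>`);
});

server.listen(3000);
```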

Advantages:

  • The first screen loads fast

Unlike a single-page application, only the content of the current page needs to be loaded; there is no need to download the entire JS bundle first, as in a typical React or Vue app.

  • SEO optimization

For single-page applications, search engines have trouble indexing pages whose content is fetched with Ajax and then rendered dynamically with JS.

7. Use caching and local storage on the client

For static resources that rarely change, we can use caching with a suitable expiration time, which noticeably improves page loading speed and saves bandwidth.
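A minimal client-side sketch of the local-storage half of this idea (the cache key, URL, and one-day lifetime are arbitrary choices for illustration): a rarely changing resource is kept in localStorage together with a timestamp and refetched only after it expires.

```js
// Cache a rarely changing resource in localStorage with a simple expiration time
const ONE_DAY_MS = 24 * 60 * 60 * 1000;          // arbitrary lifetime for the example

async function getWithLocalCache(key, url, maxAgeMs = ONE_DAY_MS) {
  const cached = localStorage.getItem(key);
  if (cached) {
    const { savedAt, data } = JSON.parse(cached);
    if (Date.now() - savedAt < maxAgeMs) {
      return data;                               // still fresh: no network request at all
    }
  }
  const response = await fetch(url);             // expired or missing: fetch and re-cache
  const data = await response.json();
  localStorage.setItem(key, JSON.stringify({ savedAt: Date.now(), data }));
  return data;
}

// Usage (hypothetical endpoint):
// const config = await getWithLocalCache('site-config', '/api/site-config.json');
```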