Website performance optimization is a necessary skill, and one that takes long-term accumulation to master. Below is my summary of some performance optimization strategies, divided into several areas:

  1. Network Request optimization
  2. Page rendering optimization
  3. JS blocking performance and memory leaks
  4. Load balancing

1 Network request optimization

1.1 Browser Cache

Before the browser sends a request to the server, it first checks whether the same file already exists locally. If it does, it pulls the local cache directly. This is similar to Redis or Memcache deployed on the backend, which act as an intermediate buffer.

By default, the browser cache lives in memory and is cleared when the process ends or the browser closes, while a cache stored on disk is preserved long term. In the Network panel we often see two different states: from memory cache and from disk cache; the former is served from memory, the latter from disk. Which one a resource lands in is decided by the browser itself (based on factors such as resource size and how recently it was used). What we do control are the caching headers: when the browser receives a response, it checks the response headers and, if fields such as Etag and Expires are present, caches the resource and uses those fields to expire or revalidate it on later visits.

Using Nginx as an example, set the ETag and expiration time:

etag on;        # enable ETag validation
expires 14d;    # set the cache expiration time to 14 days

Open the website and inspect the requested resources in the Network panel of Chrome DevTools. If you see the Etag and Expires fields in the response headers, the cache configuration was successful.

One thing to bear in mind when configuring the cache: when the browser handles a request and hits a still-fresh cache entry, it pulls the local copy directly, without any communication with the server. That means if we update a file on the server, the browser has no way of knowing and will keep serving the stale cache until it expires. Therefore, at build time we need to add MD5 hash suffixes to our static resource filenames, so that every resource update produces a new URL and avoids this front-end/back-end file synchronization problem.
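
With webpack, for instance, this hashing can be done at build time by putting a chunk hash into the output filename. A minimal config sketch, using option names from the webpack 2/3-era API that appears elsewhere in this article:

```javascript
// webpack.config.js (fragment) — the filename changes whenever the content
// changes, so browsers fetch the new file instead of a stale cached copy
module.exports = {
  output: {
    path: __dirname + '/dist',
    filename: '[name].[chunkhash:8].js'   // e.g. main.3b2f1c9a.js
  }
}
```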

1.2 Resource packaging and compression

The browser caching we set up above only helps the second time a user visits the page; to make the first visit fast, the resources themselves must be optimized. Network performance optimization measures usually boil down to three things: reducing the number of requests, reducing the size of requested resources, and improving the network transfer rate. Let's break them down one by one:

Take webpack as an example:

  • Compress JS

new webpack.optimize.UglifyJsPlugin()
  • Compress HTML
new HtmlWebpackPlugin({
            // create an instance of the plugin and pass in the relevant arguments
            template: __dirname + '/views/index.html',
            filename: '../index.html',
            minify: {
                removeComments: true,
                collapseWhitespace: true,
                removeRedundantAttributes: true,
                useShortDoctype: true,
                removeEmptyAttributes: true,
                removeStyleLinkTypeAttributes: true,
                keepClosingSlash: true,
                minifyJS: true,
                minifyCSS: true,
                minifyURLs: true,
            },
            chunksSortMode: 'dependency'
        })

When we use html-webpack-plugin to automatically inject JS and CSS and package HTML files, we rarely bother adding configuration items, so here is a full example you can copy directly.

PS: Here’s a trick: when writing the src or href attributes of HTML elements, you can omit the protocol part (a protocol-relative URL), which saves a few bytes as well.

  • Compress CSS

When using webpack, we usually import CSS files as modules (webpack's philosophy is that everything is a module). When we go live, however, we need to extract and compress these CSS files. That sounds like a complicated process, but it only takes a few lines of configuration:

const ExtractTextPlugin = require('extract-text-webpack-plugin')

module: {
    rules: [..., {
        test: /\.css$/,
        use: ExtractTextPlugin.extract({
            fallback: 'style-loader',
            use: {
                loader: 'css-loader',
                options: {
                    minimize: true
                }
            }
        })
    }]
}
  • Use webpack 3's new scope-hoisting feature: ModuleConcatenationPlugin

new webpack.optimize.ModuleConcatenationPlugin()
  • Set shouldUseSourceMap to false in the production environment, removing the map files generated by the build

devtool: shouldUseSourceMap ? 'source-map' : false,

Finally, we should also enable Gzip compression on the server, which can reduce text-based files to roughly a quarter of their original size; the effect is immediate. Switch back to our nginx configuration file and add the following two configuration items:

gzip on;
gzip_types text/plain application/javascript application/x-javascript text/css application/xml text/javascript application/x-httpd-php application/vnd.ms-fontobject font/ttf font/opentype font/x-woff image/svg+xml;

If you see the Content-Encoding: gzip field in the response header of a request to the site, then Gzip compression has been configured successfully.

【Special attention!!】 Do not Gzip image files! Do not Gzip image files! Compressing them is counterproductive: both the server CPU usage during compression and the compression ratio have to be considered, and compressing images consumes a lot of backend resources for a negligible size reduction, since image formats are already compressed. It is a clear case of "more harm than good", so remove the image-related entries from gzip_types. We will cover image handling in more detail next.

1.3 Image resource optimization

  • Do not scale images in HTML; serve them at the size they are displayed
  • Use CSS sprites to merge small images into a single request
  • Use icon fonts (iconfont) instead of small images

1.4 Use a CDN

A CDN is used to serve static resources, avoiding bandwidth bottlenecks on the origin server and speeding up resource downloads.

2 Page rendering optimization

2.1 Reduce repaints and reflows

  • Separate CSS reads from writes: reading a layout property forces the browser to flush pending style changes and re-render (reflow + repaint), so when using JS to read and write element styles, batch the reads and then the writes instead of interleaving them. The safest solution, which I recommend, is not to manipulate element styles with JS at all.
  • Batch style changes by toggling a class or using the element's style.cssText property.
  • Update DOM elements offline: for batches of DOM operations (such as repeated appendChild calls), assemble the elements in a DocumentFragment and insert them into the page once they are ready, or hide the element with display: none, operate on it while it is out of the render tree, then show it again.
  • Set temporarily unused elements to visibility: hidden to reduce repaint pressure, and show them only when needed.
  • Compress DOM depth: avoid deeply nested children within a render layer; build page styling with fewer DOM nodes by using pseudo-elements or box-shadow where possible.
  • Specify image sizes before rendering: since img is an inline element, its width and height change once the image loads, which in severe cases reflows the entire page. Specify the dimensions up front, or take the image out of the document flow.
  • Promote elements likely to be heavily reflowed and repainted to their own rendering layer, letting the GPU share the CPU's load. (Use this strategy with caution: weigh the predictable performance gain against GPU usage, since too many rendering layers put unnecessary strain on the GPU; usually we only hardware-accelerate animated elements.)

2.2 Reduce page re-rendering and DOM nesting

  • In React, use the shouldComponentUpdate lifecycle hook to skip unnecessary re-renders, or use PureComponent instead of Component. Note that PureComponent only performs a shallow comparison, so it will not re-render when a reference-type value is mutated in place. Also, use fragments instead of wrapper elements to reduce DOM nesting.
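
The "reference type" caveat comes from the shallow comparison PureComponent performs on props and state. A minimal sketch of that comparison (a hypothetical helper for illustration, not React's actual source):

```javascript
// Shallow comparison: only the first level of keys, compared by reference
function shallowEqual(objA, objB) {
  if (objA === objB) return true;
  var keysA = Object.keys(objA);
  var keysB = Object.keys(objB);
  if (keysA.length !== keysB.length) return false;
  return keysA.every(function (key) {
    return Object.prototype.hasOwnProperty.call(objB, key) &&
           objA[key] === objB[key];
  });
}

var list = [1, 2];
shallowEqual({ list: list }, { list: list });      // true -> skips re-render
list.push(3);                                      // mutated in place...
shallowEqual({ list: list }, { list: list });      // ...still true: change invisible
shallowEqual({ list: [1, 2] }, { list: [1, 2] });  // false: new reference re-renders
```

This is why mutating an array or object in place can make a PureComponent silently skip an update: produce a new reference instead.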

3 JS blocking performance and memory leaks

3.1 Use debounce and throttle wisely

Function debouncing: merges a burst of calls into one. The idea is to maintain a timer that fires the function after the delay; if the function is triggered again within the delay, the timer is cancelled and reset. This way only the last call in the burst actually fires.

function debounce(fn, wait) {
    var timeout = null;
    return function() {
        if (timeout !== null) clearTimeout(timeout);
        timeout = setTimeout(fn, wait);
    }
}

Function throttling: makes a function fire at most once in a given period. The principle is to check whether enough time has elapsed before triggering the function again.

var throttle = function(func, delay) {
    var prev = Date.now();
    return function() {
        var context = this;
        var args = arguments;
        var now = Date.now();
        if (now - prev >= delay) {
            func.apply(context, args);
            prev = Date.now();
        }
    }
}

Difference: throttling guarantees the real event handler runs at regular intervals within the specified period, no matter how frequently the event fires, whereas debouncing only fires once, after the last event in a burst. For example, in an infinite-loading scenario we want Ajax requests to go out every so often while the page is scrolling, rather than only requesting data once the user stops scrolling; that scenario is a good fit for throttling.
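
The difference can be made concrete with a small simulation. Here the clock is injected as a function so a burst of "scroll events" can be replayed synchronously; throttleWith is a hypothetical, test-friendly variant of the throttle above, not part of any library:

```javascript
// Throttle with an injectable clock, so time can be simulated
function throttleWith(now, func, delay) {
  var prev = now();
  return function () {
    var t = now();
    if (t - prev >= delay) {
      func();
      prev = t;
    }
  };
}

var clock = 0;     // simulated time in milliseconds
var requests = 0;  // how many "Ajax requests" actually fired

var onScroll = throttleWith(function () { return clock; },
                            function () { requests++; },
                            100);

// Simulate a scroll event every 10ms for one second
for (clock = 10; clock <= 1000; clock += 10) onScroll();
// requests -> 10: roughly one request per 100ms of scrolling,
// not one per event (100 events), and not a single trailing call
```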

3.2 Memory Leakage

  • Beware the classic closure memory-leak pattern, where a closure keeps objects reachable longer than needed
  • When a page (SPA) component is about to unmount, remember to release its resources: disconnect WebSockets, null out ECharts instances, and so on

4 Load Balancing

  1. Use PM2 to manage multiple Node processes
  2. Use Nginx as a reverse proxy
  3. Use Docker to manage multiple containers
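
As a sketch of point 2, an nginx reverse proxy can spread traffic across several local Node processes. The ports and server name below are hypothetical placeholders:

```nginx
upstream node_backend {
    # hypothetical: three Node.js processes, e.g. started by PM2
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
    server 127.0.0.1:3002;
}

server {
    listen 80;
    server_name example.com;   # placeholder domain

    location / {
        proxy_pass http://node_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```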