Specifically, this summary covers two aspects:

I. Network communication (reduce the number of requests and the amount of data transmitted)


1. Resource cache

1.1 Use a CDN

Separate the website's static resources (static HTML, images, CSS stylesheets, JS scripts, etc.) and deploy them to a CDN; this noticeably speeds up the loading of these resources.
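As an illustration, here is a minimal sketch assuming the project is bundled with webpack (the original does not name a bundler, and the CDN domain is a placeholder): pointing output.publicPath at the CDN origin makes the bundler emit asset URLs that resolve to the CDN.

```js
// webpack.config.js — a minimal sketch; https://cdn.example.com is a placeholder.
module.exports = {
  entry: './src/index.js',
  output: {
    filename: 'bundle.[contenthash].js',
    // All emitted asset URLs (JS, CSS, images) are prefixed with the CDN origin,
    // so the HTML served by your own server pulls static files from the CDN.
    publicPath: 'https://cdn.example.com/assets/',
  },
};
```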

1.2 Use HTTP caching

The HTTP cache stores resources the browser has loaded on the local machine. If a cached resource has not expired, the browser can use it directly on the next load, reducing the number of HTTP requests and speeding up resource loading. To enable this, set the Cache-Control field in the HTTP response header. HTTP/1.0 used the Pragma and Expires headers for caching, but they are no longer recommended.
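A minimal sketch of setting Cache-Control, assuming a Node.js server with the Express framework (the original does not specify a server):

```js
// A sketch assuming Express; install with `npm install express`.
const express = require('express');
const app = express();

// Serve static files with a one-year cache lifetime. express.static sets
// Cache-Control: public, max-age=31536000, immutable on each response,
// so repeat visits within that window skip the HTTP request entirely.
app.use(express.static('public', { maxAge: '1y', immutable: true }));

app.listen(3000);
```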

2. Merge and compress resources

2.1 Reducing HTTP Requests

Is it faster to load a 10 MB file with one HTTP request, or to split it into ten 1 MB files and load them in parallel with ten HTTP requests? Since fewer HTTP requests supposedly make a site more responsive, the conclusion seems to be that one HTTP request is faster. The answer is: not necessarily!

I ran a small experiment with two pages, index1.html and index2.html. index1.html loads a 2 MB JS file, bundle.js, with a single <script> tag. index2.html uses six <script> tags to load bundle1.js, bundle2.js, ..., bundle6.js, which split bundle.js evenly into six parts. I requested index1.html and index2.html ten times each, recording the loading time of bundle.js and the loading time from bundle1.js through bundle6.js (the end time being when the last JS file finished loading). The average loading times came out to 1.07 s and 1.87 s respectively.

The result suggests that loading one combined resource file with a single HTTP request is more efficient than loading multiple resource files concurrently with multiple HTTP requests. But this conclusion only holds for the average loading time; any single comparison may go the other way. In my experiment, for example, the longest single-request load took 2.36 s, exceeding the 1.87 s average of the second approach.

Some people might wonder: why is parallel less efficient than serial? The reason is that the bottleneck of HTTP requests is bandwidth, not the number of requests. When one request is already using the bandwidth well, adding more requests does not reduce the overall resource loading time. Reducing HTTP requests improves website performance mainly for the following two reasons:

1) Establishing an HTTP connection is time-consuming, generally taking hundreds of milliseconds, and each HTTP request also incurs a certain network latency; the more HTTP requests there are, the more time these two parts cost. Of course, HTTP/1.1 supports Keep-Alive by default, which allows connections to be reused and greatly mitigates this problem.

2) Each HTTP request carries additional data, such as the request and response headers and Cookie information. When the requested resource is small, this additional data may be larger than the actual resource.
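For readers who want to reproduce the comparison, here is a minimal measurement sketch using the browser's Resource Timing API. The bundle file names follow the experiment above; the measurement code itself is my own addition, not the original script.

```js
// Run in index2.html after all scripts have loaded (assumed setup).
const names = ['bundle1.js', 'bundle2.js', 'bundle3.js',
               'bundle4.js', 'bundle5.js', 'bundle6.js'];

const entries = performance.getEntriesByType('resource')
  .filter((e) => names.some((n) => e.name.endsWith(n)));

// Total loading time: from the first request starting to the last response ending.
const start = Math.min(...entries.map((e) => e.startTime));
const end = Math.max(...entries.map((e) => e.responseEnd));
console.log(`bundle1.js–bundle6.js loaded in ${(end - start).toFixed(0)} ms`);
```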

3. Enable gzip on the server

Enable gzip compression on the server to reduce the size of resource files during network transmission. With gzip enabled, the amount of data transmitted over the network is typically reduced to about one third of the original; the browser decompresses it on arrival.
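A minimal sketch assuming a Node.js server with Express and the compression middleware (one of several ways to enable gzip; Nginx and Apache have equivalent settings):

```js
// A sketch assuming Express; install with `npm install express compression`.
const express = require('express');
const compression = require('compression');

const app = express();
// gzip-compress response bodies for clients that send Accept-Encoding: gzip.
app.use(compression());
app.use(express.static('public'));

app.listen(3000);
```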

4. Image compression

1) Use WebP images to reduce file size. WebP is an image format that supports both lossy and lossless compression, derived from the VP8 video coding format. According to Google's tests, lossless WebP is 45% smaller than PNG, and WebP can still reduce file size by 28% even against PNGs already run through other compression tools.

2) Use the font icon IconFont. You can set the size and color of an icon arbitrarily (monochrome only, because you are essentially setting the color of a font).

3) Use CSS Sprites to combine multiple images into one, reducing the number of HTTP requests.

4) Use Base64 to encode images directly into strings written into CSS files, also to reduce the number of HTTP requests (see the sketch after this list). Note, however, that Base64-encoded images should be small (preferably tens of bytes), because a file generally becomes larger after Base64 encoding, and overly long Base64 strings hurt the readability of the CSS.

5) For sites that need many images, deploy image resources separately and access them through a different domain name. If images and other resources are deployed on the same server or cluster, the server's egress bandwidth is heavily taxed. Loading images from a different domain also takes better advantage of the browser's parallel downloading, because browsers limit the maximum number of concurrent requests per domain.
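As an illustration of point 4, a minimal Node.js sketch (my own, with a hypothetical icon.png) that turns a small image into a data URI suitable for embedding in CSS:

```js
// A sketch (assumed file name icon.png) that builds a Base64 data URI for CSS.
const fs = require('fs');

const base64 = fs.readFileSync('icon.png').toString('base64');
const dataUri = `data:image/png;base64,${base64}`;

// Emit a CSS rule embedding the image; no extra HTTP request is needed,
// but the string is roughly 4/3 the size of the original file.
console.log(`.icon { background-image: url("${dataUri}"); }`);
```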

II. The code layer

Save memory and save CPU:

1. Try not to use closures (saves memory).
2. Eliminate invalid loops.
3. Optimize recursion by adding a cache (see the sketch below).
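A minimal sketch of point 3, using the classic Fibonacci function as my own illustration: caching intermediate results turns an exponential recursion into a linear one.

```js
// Naive recursion: fibSlow(40) recomputes the same subproblems
// exponentially many times.
function fibSlow(n) {
  return n < 2 ? n : fibSlow(n - 1) + fibSlow(n - 2);
}

// Memoized version: each fib(n) is computed once and then read from the cache.
const cache = new Map();
function fib(n) {
  if (n < 2) return n;
  if (!cache.has(n)) cache.set(n, fib(n - 1) + fib(n - 2));
  return cache.get(n);
}

console.log(fib(40)); // 102334155, returned almost instantly
```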