1. Why performance tuning?

With the rapid development of the Internet, users expect ever faster page response times, especially on mobile devices, where performance directly affects the user experience. For a front-end engineer, making pages load faster, improving the user experience, reducing the bandwidth consumed by requests, and easing the load on servers are questions we must think about in our daily work.

2. From entering URL to page loading

First of all, it is important to understand the whole chain of events that happens from the moment you type a URL into the browser until the page finishes loading.

1. DNS (Domain Name System) resolution

After entering the URL, the browser needs to find the server that hosts the page in order to load the HTML. The first step is to use DNS to resolve the domain name into an IP address. During resolution, the browser checks its own cache, the system cache, and the router cache in turn; if none of them has a record, it looks up the local hosts file; if the address is still not found, it sends a resolution request to the DNS server.

2. A TCP connection

When establishing a connection, the hosts at both ends must synchronize their initial sequence numbers and exchange ACK acknowledgements. This process is the TCP three-way handshake, which looks roughly like the simple exchange below.

Client ->> Server: Hey, I want to establish a connection; here is my initial sequence number. (SYN)
Server -->> Client: OK, here is my sequence number in return; shall we confirm the connection? (SYN + ACK)
Client ->> Server: Yes, let's start chatting! (ACK)

Establishing a TCP connection adds a significant amount of network latency.

3. The HTTP request

The client constructs HTTP request packets, encapsulates them in TCP packets, and sends them to a specified port on the server over TCP.

4. The server responds to the request

After receiving the HTTP request, the server searches for the resource requested by the client and returns the corresponding HTML file.

5. Page rendering

  • The HTML document is parsed into a DOM tree with the document as the root; if a script is encountered during parsing, parsing pauses (is blocked) while the corresponding file is downloaded and executed.
  • The browser parses the CSS and builds the CSSOM tree.
  • The DOM tree and the CSSOM tree are merged into a render tree, and the browser confirms the location of each element on the page.
  • The browser draws and optimizes the page based on the layout results.

3. The overall approach

With the whole process above in mind, it is clear that performance optimization can be approached from two angles: HTTP-layer optimization and rendering-layer optimization. A number of proven tools and methods are available to help with tuning, and it is worth noting that in actual development, writing code in a way that contributes to performance should come first. Let's go through the techniques in detail.

4. Optimize the HTTP layer

1. DNS pre-resolution

DNS pre-resolution (dns-prefetch) lets the browser resolve the marked domain names in the background and cache the result before the user clicks a link. Since domain name resolution and content loading are normally serial operations, the user no longer has to wait for DNS resolution when clicking the link, which reduces waiting time and improves the user experience.

<!-- Use meta information to tell the browser to enable DNS pre-resolution for this page -->
<meta http-equiv="x-dns-prefetch-control" content="on">
<!-- Pre-resolve a specific domain (only some browsers support this) -->
<link rel="dns-prefetch" href="//www.baidu.com" />

2. Use HTTP/2

  • HTTP/2 transmits data in binary format rather than HTTP/1.x's text format, and binary protocols are more efficient to parse.
  • HTTP/2 multiplexing replaces HTTP/1.x's serialized, blocking mechanism: all requests are made concurrently over a single TCP connection. In HTTP/1.x, concurrent requests require multiple TCP connections, and browsers limit the number of TCP connections per domain to 6-8 to control resource usage.
  • HTTP/2 can proactively push other resources while sending the page HTML, instead of waiting for the browser to parse to the relevant position and issue a request before responding.
  • HTTP/2 compresses message headers to save network traffic, whereas every HTTP/1.x request carries a large amount of redundant header data, wasting bandwidth.

The comparison above makes the advantages of HTTP/2 clear: resources are loaded at the same time, and resources loaded later do not have to wait in a queue.
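As a rough sketch (the certificate files and the pushed stylesheet are assumptions), here is what an HTTP/2 server with server push can look like using Node's built-in http2 module:

const http2 = require('http2');
const fs = require('fs');

const server = http2.createSecureServer({
  key: fs.readFileSync('key.pem'),    // browsers require TLS for HTTP/2
  cert: fs.readFileSync('cert.pem'),
});

server.on('stream', (stream, headers) => {
  if (headers[':path'] === '/') {
    // push style.css before the browser has even parsed the HTML
    stream.pushStream({ ':path': '/style.css' }, (err, pushStream) => {
      if (!err) pushStream.respondWithFile('style.css', { 'content-type': 'text/css' });
    });
    stream.respond({ ':status': 200, 'content-type': 'text/html' });
    stream.end('<link rel="stylesheet" href="/style.css"><h1>Hello HTTP/2</h1>');
  }
});

server.listen(8443);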

3. Reduce the number of HTTP requests

Let’s look at an example of an HTTP request.

As can be seen from the figure above, the time spent actually downloading the data accounts for only 2.24 / 138.45 ≈ 1.62% of the request; the larger the file, the larger that proportion becomes. If several small files are merged into one larger file to reduce the number of HTTP requests, the per-request overhead drops dramatically.

Webpack can merge, bundle, and compress files using plug-ins such as the following (a configuration sketch follows this list); the size reduction from compression is typically more than 50%.

  • JavaScript: UglifyPlugin
  • CSS: MiniCssExtractPlugin
  • HTML: HtmlWebpackPlugin
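A minimal webpack configuration sketch wiring these three together (paths and filenames are illustrative, webpack 4 style):

const UglifyJsPlugin = require('uglifyjs-webpack-plugin');
const MiniCssExtractPlugin = require('mini-css-extract-plugin');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  mode: 'production',
  optimization: {
    minimizer: [new UglifyJsPlugin()],   // minify the JavaScript bundles
  },
  module: {
    rules: [
      { test: /\.css$/, use: [MiniCssExtractPlugin.loader, 'css-loader'] },
    ],
  },
  plugins: [
    new MiniCssExtractPlugin({ filename: '[name].[contenthash].css' }),  // extract CSS into its own files
    new HtmlWebpackPlugin({ template: './src/index.html', minify: { collapseWhitespace: true } }),  // generate and minify HTML
  ],
};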

4. Gzip compressed files

Compressing files reduces download time and gives a better user experience. Webpack plugin compression was introduced above, but we can do better still with gzip, one of the most effective compression methods currently in use. For example, the app.js generated by building a Vue project was 1.4 MB; after gzip compression it was 573 KB, a reduction of nearly 60%.

Enabling gzip requires support from both the client and the server. Accept-Encoding: gzip in the request header indicates that the client supports gzip compression, and Content-Encoding: gzip in the response header indicates that the server has compressed the response with gzip.

Front-end configuration:

// install the plugin
npm install compression-webpack-plugin --save-dev

// webpack configuration
const CompressionWebpackPlugin = require('compression-webpack-plugin');
plugins.push(
    new CompressionWebpackPlugin({
        asset: '[path].gz[query]',        // name of the generated file
        algorithm: 'gzip',                // use gzip compression
        test: new RegExp('\\.(js|css)$'), // compress js and css files
        threshold: 10240,                 // only compress resources larger than 10240 B (10 kB)
        minRatio: 0.8                     // only keep results that compress to at most 0.8 of the original size
    })
);
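For completeness, here is a minimal sketch of the server side (an Express app and a dist output directory are assumptions), using the compression middleware to gzip responses on the fly; the pre-compressed .gz files produced by the build are usually served directly by the web server (for example nginx) instead.

const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression({ threshold: 10240 })); // only compress responses larger than ~10 kB
app.use(express.static('dist'));            // serve the built front-end assets
app.listen(3000);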

5. Image optimization

Use a font icon (iconfont) instead of an image where possible. Font icons are vector graphics, so they do not distort when scaled, and the generated files are extremely small.
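As a rough illustration (the font file, class names, and code point are all assumptions), an icon font is just a @font-face declaration plus one class per glyph, replacing many small images with a single vector font:

/* declare the icon font */
@font-face {
  font-family: 'iconfont';
  src: url('iconfont.woff2') format('woff2');
}
.iconfont {
  font-family: 'iconfont';
  font-style: normal;
}
/* one class per glyph; the code point is assigned by the icon-font generator */
.icon-home::before {
  content: '\e600';
}

It is then used as <i class="iconfont icon-home"></i> instead of an <img>.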

6. Browser cache

Browser caches are divided into strong cache and negotiated cache.

1. Strong cache

Strong caching means using the Expires (HTTP/1.0) and Cache-Control (HTTP/1.1) header fields to cache the requested content and avoid unnecessary HTTP requests.

The Expires header contains a time value that indicates when the resource expires locally. Within the expiration time, the cached resource is used directly and no HTTP request is sent. Cache-Control (the more commonly used of the two) works in a similar way, using a relative max-age instead of an absolute date.
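Below is a minimal sketch of strong caching with Node's built-in http module (the content and port are illustrative): within max-age the browser reuses its cached copy without contacting the server.

const http = require('http');

http.createServer((req, res) => {
  res.writeHead(200, {
    'Content-Type': 'application/javascript',
    'Cache-Control': 'max-age=31536000',                              // HTTP/1.1, takes precedence
    'Expires': new Date(Date.now() + 31536000 * 1000).toUTCString(),  // HTTP/1.0 fallback
  });
  res.end('console.log("cached resource");');
}).listen(3000);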

2. Negotiated cache

Negotiated caching means asking the server whether the cached copy is still usable and then deciding whether to use it.

  • On the browser's first request, the server adds a Last-Modified field to the response header.
  • When the browser requests the resource again, it adds an If-Modified-Since header whose value is the Last-Modified value it received earlier.
  • The server compares the two: if the resource has been modified more recently than If-Modified-Since, it returns the new resource with status 200 and an updated Last-Modified; otherwise it returns 304 Not Modified and the browser uses its cached copy (a minimal server sketch follows this list).
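A minimal sketch of negotiated caching with the same http module (the file path is an assumption): unchanged resources are answered with 304 and no body.

const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  // hypothetical resource; its modification time drives Last-Modified
  const lastModified = fs.statSync('./app.js').mtime.toUTCString();

  if (req.headers['if-modified-since'] === lastModified) {
    res.writeHead(304);   // not modified: the browser uses its cached copy
    res.end();
  } else {
    res.writeHead(200, { 'Last-Modified': lastModified });
    res.end(fs.readFileSync('./app.js'));
  }
}).listen(3000);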

Understand HTTP browser caching

5. Browser rendering layer optimization

1. Avoid JS blocking

Because JS blocks browser rendering, scripts are usually placed at the bottom of the <body> tag and stylesheets inside the <head> tag to prevent a white screen.
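A minimal skeleton of that placement:

<!-- CSS in <head> so the page can be styled as soon as it is parsed -->
<head>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <div id="app"></div>
  <!-- JS at the bottom of <body> so parsing and rendering are not blocked -->
  <script src="app.js"></script>
</body>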

2. Reduce redrawing and reflow

What operations can cause a reflow?

  • Add or remove visible DOM elements
  • Element position change
  • Element size changes (margins, padding, borders, width, and height)
  • Content changes that alter an element's computed width or height, such as text changes or image resizing
  • Page render initialization
  • Browser window size changes – resize event

Reflow forces the browser to recalculate styles, re-lay out, and re-render nodes, which is time-consuming, so reflows should be minimized.

What would cause a redraw?

  • A change to an element's attributes or style that does not affect the layout

How to reduce redraws and reflows?

  1. Do not use table layout: a small change can cause the entire table to be re-laid out, and tables render slowly.
  2. Give elements a height or min-height; otherwise their dynamically loaded content will shift the page layout and cause a reflow.
  3. When changing styles with JS, do not write individual style properties directly; switch a class instead so the changes are applied in one batch (see the sketch after the code below).
  4. Merge DOM insertions using a DocumentFragment (also shown in that sketch).
  5. Cache DOM objects:
// Without caching: every loop iteration queries the DOM for <p> elements again
for (let i = 0; i < document.getElementsByTagName('p').length; i++) {
  // each pass re-runs the lookup
}

// With caching: the collection is looked up once, improving search efficiency
var p = document.getElementsByTagName('p');
for (let i = 0; i < p.length; i++) {
  // work with p[i] here
}
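And a minimal sketch of points 3 and 4 from the list above (element selectors are illustrative): style changes are batched by toggling a class, and DOM insertions are batched with a DocumentFragment so the page reflows only once.

// 3. switch a class instead of setting styles one by one
const box = document.querySelector('.box');
box.classList.add('active');   // .active bundles all the style changes in the CSS

// 4. build the nodes off-DOM, then insert them in a single operation
const fragment = document.createDocumentFragment();
for (let i = 0; i < 100; i++) {
  const li = document.createElement('li');
  li.textContent = 'item ' + i;
  fragment.appendChild(li);
}
document.querySelector('ul').appendChild(fragment); // one reflow instead of one hundred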

3. Lazy loading of images

For a website with many images, loading all images at one time will have a great impact on user experience. Lazy loading of images can not only improve user experience, but also save user traffic.

Create a custom data-src attribute to store the real image path, and put the path of a 1 × 1 px placeholder image in the img src. When the image scrolls into the visible area, use JS to read the image's data-src value and assign it to src.

You can use getBoundingClientRect, or offsetTop - scrollTop <= clientHeight, to determine whether an element is in the visible area.

//html
<img src="img/loading.gif" alt="1" data-src="img/g1.jpg">
<img src="img/loading.gif" alt="2" data-src="img/g2.jpg">
<img src="img/loading.gif" alt="3" data-src="img/g3.jpg">

//js
<script>
var imgs = document.querySelectorAll('img');

// Check if it is in viewable area
function isInner(el) {
    var bound = el.getBoundingClientRect();
    var clientHeight = window.innerHeight;
    return bound.top <= clientHeight;
} 
function check() {
  imgs.forEach(function (el) {
    // only download images that are inside the viewable area and not loaded yet
    if (!el.dataset.isLoaded && isInner(el)) {
      loadImg(el);
    }
  });
}

function loadImg(el) {
    var source = el.dataset.src;
    el.src = source;
    el.dataset.isLoaded = 1;
}
window.onload = window.onscroll = function () {
   // Scroll trigger
    check();
}
</script>

4. Thumbnails

For large images that are viewed infrequently, show a thumbnail and display the full-size image only when the user hovers over it.

5. Responsive images

The browser displays images of different sizes according to different resolutions, which not only ensures the display effect, but also saves bandwidth and improves loading speed

<picture>
    <source srcset="banner_w1000.jpg" media="(min-width: 801px)">
    <source srcset="banner_w800.jpg" media="(max-width: 800px)">
    <img src="banner_w800.jpg" alt="">
</picture>

Ruan Yifeng responsive picture tutorial

6. Event delegation

Event delegation means registering the event listener on a parent element. Because events on child elements bubble up to the parent node, a single listener on the parent can handle events from many child elements in a unified way. Event delegation reduces memory usage, improves performance, and lowers code complexity.
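A minimal sketch (the list id is an assumption): one listener on the parent <ul> handles clicks on every <li>, including items added later.

const list = document.getElementById('list');
list.addEventListener('click', function (event) {
  const item = event.target.closest('li');   // the <li> the click bubbled up from
  if (item && list.contains(item)) {
    console.log('clicked:', item.textContent);
  }
});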

7. Event throttling

Use function throttle or debounce to limit how often events fire.

  • Function throttling:

Function throttling is typically used for handlers that fire very frequently, such as onresize and onscroll. For example, if you want to read the scroll bar position and then act on it, performing DOM operations on every event can hurt browser performance and may even freeze the browser. Instead, we can make the handler run at most once every N milliseconds; this technique is called function throttling.

// Limit execution to at most once every 500 ms

// Approach 1
var type = false;
window.onscroll = function () {
    // once the handler has run, scroll events within the next 500 ms are ignored;
    // the work is done when the 500 ms timer fires, then the flag is reset
    if (type === true) return;
    type = true;
    setTimeout(() => {
        console.log("Things to be done.");
        type = false;
    }, 500);
};

// Approach 2
var time = null;
window.onscroll = function () {
    let curTime = new Date();
    if (time == null) {
        time = curTime;
    }
    // only act when more than 500 ms have passed since the last run
    if (curTime - time > 500) {
        console.log("Things to be done.");
        time = curTime;
    }
};
  • Function debouncing:

Debouncing means the function runs only once, after the triggering has stopped for a given period of time. Typical scenarios: rapid repeated like/unlike clicks or repeated list-search input, where only the result of the last operation needs to be sent to the server.

var timer = null;
function click() {
    // if clicks keep arriving within 500 ms, the pending timer is cleared each time
    // and a new one is set, so the request fires only 500 ms after the last click
    clearTimeout(timer);
    timer = setTimeout(() => {
        ajax(/* ... */);
    }, 500);
}

Function debouncing and throttling

8. Paging loading

For lists with large data volumes, use pagination loading to reduce server stress.
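A minimal sketch of paged loading (the URL and parameter names are assumptions, and renderItems stands for whatever appends items to the list): each request only fetches one page of data.

let page = 1;
const pageSize = 20;

async function loadNextPage() {
  const res = await fetch(`/api/list?page=${page}&pageSize=${pageSize}`);
  const items = await res.json();
  renderItems(items);   // hypothetical helper that appends the items to the DOM
  page += 1;
}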

9. Reference modules as needed

In a system with complex business logic, such as an SPA, loading the business modules required by the current page based on the route and referencing them on demand is a good way to optimize. Loading a module only when it is needed improves first-screen loading speed and reduces the overall application size.

Webpack provides two approaches: the import() syntax, which should be preferred, and require.ensure.
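A minimal sketch of route-level on-demand loading with import() (paths and chunk names are illustrative): webpack splits each view into its own chunk, which is downloaded only when the route is first visited.

const routes = [
  {
    path: '/home',
    component: () => import(/* webpackChunkName: "home" */ './views/Home.vue'),
  },
  {
    path: '/report',
    component: () => import(/* webpackChunkName: "report" */ './views/Report.vue'),
  },
];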

10. Clean up your environment

Eliminate object references, prevent memory leaks, clear timers, etc.
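A minimal sketch of such clean-up (the handlers and the destroy hook are illustrative): timers are cleared, listeners removed, and cached references dropped when the page or component is torn down.

let cachedNodes = document.querySelectorAll('li');          // cached DOM references
const onResize = () => console.log('resize');
const timerId = setInterval(() => console.log('poll'), 1000);
window.addEventListener('resize', onResize);

function destroy() {
  clearInterval(timerId);                          // stop the timer
  window.removeEventListener('resize', onResize);  // detach the listener
  cachedNodes = null;                              // drop references so they can be garbage collected
}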

6. Check performance

Website performance falls into two categories: loading performance and runtime performance.

  1. Loading performance is measured mainly by the white screen time and the first screen time
  • White screen time: the time from entering the URL until content first appears on the page. Placing the following script just before </head> lets you measure the white screen time early on.
<script>
    new Date() - performance.timing.navigationStart
</script>
  • First screen time: the time from entering the URL until the first screen of the page is fully rendered.

Executing new Date() - performance.timing.navigationStart inside the window.onload event gives the first screen time.
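A minimal sketch of that measurement:

window.onload = function () {
  const firstScreenTime = new Date() - performance.timing.navigationStart;
  console.log('first screen time:', firstScreenTime, 'ms');
};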

  2. Check runtime performance

In a real project, you need to choose the appropriate optimization methods based on its actual situation. With the Performance panel in Chrome DevTools, you can examine how the site performs at runtime.

Click the grey record button in the upper-left corner, then use the site the way a user would. When finished, click Stop to get a performance report for that period. If any stage takes a long time, that is where to consider tuning.