Preface

Front-end performance optimization is, for a front-end developer, like "an actor's self-cultivation": if you want to be a good front-end developer, you must know how to optimize performance. On top of that, performance optimization is practically a guaranteed interview question, and answering it well is undoubtedly a plus.

Perceived performance

For users, perceived performance is what matters most. Simply put, it means making users feel that your site is fast; there is no objective way to measure perceived performance.

If a page takes a long time to load, there are ways to make it feel less slow.

Loading

The most basic spinner, the classic "chrysanthemum" loading animation ~

Skeleton screen

You can refer to Ant Design's Skeleton component

Objective performance

For developers, performance metrics can be measured objectively, and there are ways to optimize web performance to meet the standards developers set.

Objective performance measures the time from when the user enters the URL to when all resources are downloaded, parsed, executed, and finally painted.

The process by which a browser opens a web page

1. The browser resolves the domain name via DNS

2. The browser connects to the server through TCP

3. The browser sends an HTTP request

4. The server returns an HTTP response

5. The browser renders the page

Performance indicators

Google’s three core metrics for web user experience

LCP, FID, CLS

LCP (Largest Contentful Paint) measures loading speed: when the largest content element in the viewport finishes rendering.

FID (First Input Delay) measures interactivity: the delay between the user's first interaction and the moment the browser can begin handling it.

CLS (Cumulative Layout Shift) measures visual stability: how much the page layout shifts unexpectedly.
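These metrics can be observed in the browser with the standard PerformanceObserver API. A minimal sketch follows; in practice, Google's web-vitals library is the usual choice, since it handles background tabs, back/forward cache, and other edge cases:

// Largest Contentful Paint: render time of the largest element so far
new PerformanceObserver((list) => {
    const entries = list.getEntries()
    console.log('LCP:', entries[entries.length - 1].startTime)
}).observe({ type: 'largest-contentful-paint', buffered: true })

// First Input Delay: gap between the first interaction and its handler
new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
        console.log('FID:', entry.processingStart - entry.startTime)
    }
}).observe({ type: 'first-input', buffered: true })

// Cumulative Layout Shift: sum of unexpected layout shift scores
let cls = 0
new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
        if (!entry.hadRecentInput) cls += entry.value
    }
    console.log('CLS so far:', cls)
}).observe({ type: 'layout-shift', buffered: true })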

Common performance optimization methods

Reduce the number of requests

Combine resources

Use a bundler to combine JS and CSS resources and avoid too many separate files

Use CSS sprites for small images

Serve images from a CDN, and so on

Caching

HTTP Cache

Strong cache

Expires is a product of HTTP/1.0 and is essentially no longer used

Cache-Control

Because Expires depends on the browser's clock agreeing with the server's, HTTP/1.1 added a new caching scheme. Instead of telling the browser an absolute expiration date, the server gives a relative time, e.g. Cache-Control: max-age=10, meaning that within 10 seconds the browser uses its cache directly

const express = require('express')
const path = require('path')
const fs = require('fs')
const app = express()

app.get('/demo.js', (req, res) => {
    let jsPath = path.resolve(__dirname, './static/js/demo.js')
    let cont = fs.readFileSync(jsPath)
    // Cache for 2 minutes: within that window the browser never asks the server
    res.setHeader('Cache-Control', 'public,max-age=120')
    res.end(cont)
})

The defining feature of strong caching is that the browser does not need to ask the server at all

Negotiated cache

The downside of strong caching is that the cache always expires eventually. If the file has not actually changed by the time it expires, downloading it again in full is a waste of server resources

With a negotiated cache, the browser still sends a request, and the server decides whether to return a new resource or tell the browser to keep using the old one

There are two negotiation mechanisms: one based on the file's last modification time, and one based on whether the file's content has changed

Last-Modified and If-Modified-Since

It is only accurate to the second; if the file is modified several times within one second, the change will not be detected

Also, if the file is touched but its content has not actually changed, the full resource will still be requested again

ETag and If-None-Match

ETag changes only when the contents of the file change

The server reads the disk file demo.js and returns it to the browser together with Last-Modified (in GMT format):

app.get('/demo.js', (req, res) => {
    let jsPath = path.resolve(__dirname, './static/js/demo.js')
    let cont = fs.readFileSync(jsPath)
    let status = fs.statSync(jsPath)

    // The file's last modification time, in GMT format
    let lastModified = status.mtime.toUTCString()
    if (lastModified === req.headers['if-modified-since']) {
        // Unchanged since the browser's copy: 304, no body
        res.writeHead(304, 'Not Modified')
        res.end()
    } else {
        // Changed (or first request): send the file plus fresh cache headers
        res.setHeader('Cache-Control', 'public,max-age=5')
        res.setHeader('Last-Modified', lastModified)
        res.writeHead(200, 'OK')
        res.end(cont)
    }
})

The server reads the disk file demo.js and sends it back to the browser together with an ETag that uniquely identifies the file's content
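A sketch of the ETag counterpart, reusing app, path, and fs from the example above. Here the ETag is an MD5 hash of the file content (one common choice; the HTTP spec does not dictate how the tag is computed):

const crypto = require('crypto')

app.get('/demo.js', (req, res) => {
    let jsPath = path.resolve(__dirname, './static/js/demo.js')
    let cont = fs.readFileSync(jsPath)
    // Hash the content: the ETag changes only when the content changes
    let etag = crypto.createHash('md5').update(cont).digest('hex')
    if (etag === req.headers['if-none-match']) {
        res.writeHead(304, 'Not Modified')
        res.end()
    } else {
        res.setHeader('ETag', etag)
        res.writeHead(200, 'OK')
        res.end(cont)
    }
})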

Memory Cache & Disk Cache

These work together with the HTTP cache. Memory cache hits are the fastest, but the memory cache is short-lived. Base64 images and smaller JS and CSS files have a better chance of being written to memory; larger JS, CSS, and image files are written straight to the disk cache.

Storage

Cookie

At most about 4 KB; typically used to store things like the user's login state

Web Storage

sessionStorage and localStorage: roughly 5-10 MB in size, key-value storage. The difference is the lifecycle: sessionStorage is gone once the tab closes, while localStorage persists until it is actively deleted.
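Both expose the same simple key-value API, for example:

// localStorage persists until explicitly removed
localStorage.setItem('theme', 'dark')
localStorage.getItem('theme') // 'dark'
localStorage.removeItem('theme')

// sessionStorage is gone once the tab closes
sessionStorage.setItem('draft', JSON.stringify({ text: 'hello' }))
JSON.parse(sessionStorage.getItem('draft')) // { text: 'hello' }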

Reduce request volume

Resource compression

Gzip

You can enable gzip compression on the server to shrink transferred files; when it is active you will see Content-Encoding: gzip in the response headers.
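On a Node/Express server such as the examples above, one way is the compression middleware (nginx and other servers have their own gzip switches):

const compression = require('compression')

// Compresses response bodies and sets Content-Encoding: gzip
// for clients whose Accept-Encoding allows it
app.use(compression())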

Code compression

Use code minification tools to reduce file size by removing unnecessary comments and blank lines and shortening identifiers.
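With webpack, for example, minification is handled by a minimizer such as terser (enabled by default in production mode); a sketch of the explicit config:

const TerserPlugin = require('terser-webpack-plugin')

module.exports = {
    mode: 'production',
    optimization: {
        minimize: true,
        // Strips comments and whitespace and shortens identifiers
        minimizer: [new TerserPlugin()]
    }
}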

Image compression

Images consume a large share of a web page's traffic. If losing some color depth and pixels will not noticeably hurt the user experience, compress the images.

Image formats

PNG: lossless format, medium compression ratio, supports transparent backgrounds; often used for images that need transparency, and for icons.

JPG: lossy format, good compression ratio, often used for large complex images; does not support transparent backgrounds.

SVG: vector format, programmable, no distortion at any resolution, but rendering complex graphics costs performance; often used for simple graphics.

WEBP: supports both lossy and lossless compression, compresses better than PNG and JPG, and supports transparent backgrounds. Its only real downside is weaker browser compatibility; a JPG/PNG fallback can be used on browsers that do not support it.
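One common way to implement the fallback is the picture element: the browser uses the first source it supports, and older browsers fall back to the plain img (file names here are illustrative):

<picture>
    <!-- Used only if the browser supports WebP -->
    <source srcset="hero.webp" type="image/webp">
    <!-- Fallback for browsers without WebP support -->
    <img src="hero.jpg" alt="hero">
</picture>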

Server response

Reduce response time

Use a CDN (for high-traffic, high-concurrency scenarios)

CDN (Content Delivery Network) relies on edge servers deployed across regions so that users fetch content from nearby nodes, reducing network congestion and improving access speed and hit rate. Its key technologies are content storage and content distribution.

Reduce the initial rendering time of the page

Pre-rendering

The work of parsing JavaScript to render the page, normally done by the browser, is moved into the packaging phase; in other words, webpack generates statically structured HTML during the build, e.g. with prerender-spa-plugin.
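A sketch of wiring it into a webpack config (the routes and output directory are illustrative):

const path = require('path')
const PrerenderSPAPlugin = require('prerender-spa-plugin')

module.exports = {
    // ...the rest of the build config
    plugins: [
        new PrerenderSPAPlugin({
            // Where the built files are written
            staticDir: path.join(__dirname, 'dist'),
            // Routes to render to static HTML at build time
            routes: ['/', '/about']
        })
    ]
}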

Server-Side Rendering (SSR)

A CSR (client-side rendered) project has a long TTFP (Time To First Page). In the CSR rendering flow, the HTML file is loaded first, then the JavaScript files the page needs are downloaded, and only then does the JavaScript render the page. This means at least two HTTP request cycles before anything is shown

With Node as a middle layer, the React code runs on the server first, so the HTML the user downloads already contains all of the page's visible content; the first paint then needs only one HTTP request cycle

And because the HTML already contains all of the page's content, SEO improves dramatically as well. The React code then runs again on the client, attaching data and event bindings to the existing HTML, giving the page React's full interactivity.
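A minimal sketch of the idea with React and Express: renderToString produces the full HTML on the server, and hydration on the client attaches the event bindings (App stands in for the root component):

// server.js: run the React code on the server first
const express = require('express')
const React = require('react')
const { renderToString } = require('react-dom/server')
const App = require('./App') // placeholder: your root component

const app = express()
app.get('/', (req, res) => {
    // The HTML already contains the full page content
    const content = renderToString(React.createElement(App))
    res.send(`<!DOCTYPE html><html><body>
        <div id="root">${content}</div>
        <script src="/client.js"></script>
    </body></html>`)
})
app.listen(3000)

// client.js: the same code runs again in the browser ("hydration"),
// attaching event handlers to the server-rendered markup:
// ReactDOM.hydrate(React.createElement(App), document.getElementById('root'))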

Page rendering

Reduce blocking

JS blocking

When the HTML parser meets a script, it downloads and executes the JS file first; this is to guard against JS manipulating the DOM mid-parse. But we, as developers, can explicitly mark which scripts can be deferred.

Specify async or defer for the script tag to defer the script.

async means the script does not block: it downloads in parallel but executes as soon as the download completes, so execution order is not guaranteed

defer means the script downloads in parallel and executes, in document order, after parsing finishes and before the DOMContentLoaded event fires
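In markup, that looks like this (file names illustrative):

<!-- Downloads in parallel, runs as soon as it arrives (order not guaranteed) -->
<script src="analytics.js" async></script>

<!-- Downloads in parallel, runs in order after the document is parsed -->
<script src="app.js" defer></script>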

CSS blocking

CSS blocks HTML rendering. To avoid presenting an unstyled interface to the user, CSS should be moved up and loaded as early as possible in the head

Reduce rendering times

Avoid backflow and repainting

Reflow, also known as rearrangement, is when an element's size, position, or similar geometry changes in some way, causing the browser to recalculate layout and render again. Repaint only changes appearance, such as background or color.

Either one costs performance, so we want to avoid triggering them repeatedly, especially in loops.
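One everyday example: batch style changes through a class instead of mutating style properties one at a time, so the browser has one reflow to do instead of several (the selector and class name here are illustrative):

const box = document.querySelector('.box')

// Worse: each assignment can invalidate layout separately
box.style.width = '100px'
box.style.height = '100px'
box.style.margin = '10px'

// Better: one class change batches all three declarations
box.classList.add('box--expanded')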

Reduce the number of render nodes

Lazy loading

For elements that are not yet in the user's viewport, we can skip rendering them until they scroll into view.

Lazy loading applies both to images and to the loading and rendering of DOM elements
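A sketch of image lazy loading with IntersectionObserver: the real URL sits in a data-src attribute and is only assigned when the image approaches the viewport (modern browsers can also simply use loading="lazy" on the img tag):

const observer = new IntersectionObserver((entries, obs) => {
    for (const entry of entries) {
        if (entry.isIntersecting) {
            const img = entry.target
            img.src = img.dataset.src // start the real download
            obs.unobserve(img)        // each image only needs this once
        }
    }
})

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img))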

Improve rendering efficiency

Reduce DOM node operations

The browser's rendering engine and JS engine are separate. As you can imagine, "cross-boundary communication" between the two is not cheap, so we should keep DOM operations to a minimum.
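Batching is the usual remedy. For instance, building list items inside a DocumentFragment touches the live DOM once rather than once per item:

const list = document.querySelector('ul')
const fragment = document.createDocumentFragment()

for (let i = 0; i < 100; i++) {
    const li = document.createElement('li')
    li.textContent = `item ${i}`
    fragment.appendChild(li) // off-DOM: no rendering work yet
}

list.appendChild(fragment) // one insertion, one render update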

Reduce the complexity of the selector

.box:nth-last-child(-n+1) .title { /* styles */ }

The browser may spend a long time working out which elements match this selector. We can instead express the intended result directly as a class:

.final-box-title { /* styles */ }

Avoid forced synchronous layout and layout thrashing

A layout calculation almost always covers the entire DOM; if there are many elements, working out the sizes and positions of all of them can take a long time.

So we should avoid dynamically changing geometry properties (width and height) at run time; if that cannot be avoided, prefer Flexbox.
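Layout thrashing is the related trap: interleaving reads and writes of layout properties forces the browser to recalculate layout inside the loop. Grouping all the reads before all the writes avoids the forced synchronous layouts:

const boxes = document.querySelectorAll('.box')

// Worse: each iteration reads offsetWidth (forcing layout) and then writes
// boxes.forEach((box) => { box.style.width = box.offsetWidth / 2 + 'px' })

// Better: read everything first, then write everything
const widths = [...boxes].map((box) => box.offsetWidth)
boxes.forEach((box, i) => { box.style.width = widths[i] / 2 + 'px' })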

Inline First Screen Critical CSS

The browser must download and parse the CSS files a page references before it can render the page to the user, so a linked stylesheet is a render-blocking resource. If the CSS file is large or the network is poor, render-blocking CSS can seriously hurt the user experience. One of the best solutions is to put the Critical CSS directly in a style tag in the head and load the rest asynchronously, so the browser can render the page as soon as it has parsed the HTML.

You can install an extraction tool directly, e.g. npm i criticalcss

So how do you decide what counts as Critical CSS? The less CSS in the head the better, since too much inline content bloats the HTML, so we generally extract the minimum CSS the user needs to see the first screen (above the fold) as the Critical CSS.

Because pages display differently on different devices, the corresponding Critical CSS differs as well, so extracting it is a genuinely complex process; the community has many tools for it, but the results are often unsatisfying.

CSS-in-JS, by contrast, supports Critical CSS generation well. In CSS-in-JS, styles are bound to components, and a component's styles are inserted into the page's style tag only when the component mounts, so it is easy to know exactly which CSS has to be sent to the client for the first render. Combined with a bundler's Code Splitting, this minimizes the code loaded onto the page, achieving the Critical CSS effect.

In other words, CSS-in-JS trades a small increase in the size of the loaded JS for not having to make a separate request for a CSS file. And some CSS-in-JS implementations (such as styled-components) support Critical CSS automatically.
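For instance, a minimal sketch of server-side style collection with styled-components (App is a placeholder root component):

const React = require('react')
const { renderToString } = require('react-dom/server')
const { ServerStyleSheet } = require('styled-components')
const App = require('./App') // placeholder root component

const sheet = new ServerStyleSheet()
try {
    // Only the styles of components that actually render are collected
    const html = renderToString(sheet.collectStyles(React.createElement(App)))
    const styleTags = sheet.getStyleTags() // <style> tags holding the critical CSS
    // ...inject styleTags into the <head> and html into the <body>
} finally {
    sheet.seal()
}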