1. Reduce the number of requests

If files are not merged, the following three risks exist:

1. Extra requests are inserted between files, adding N-1 round trips of network latency

2. Loading is more severely affected by packet loss

3. Connections may be dropped when passing through proxy servers

However, file merging has its own problems:

1. First-screen rendering is delayed

2. Cache invalidation

Therefore, the following improvements to file merging are suggested:

1. Merge common libraries

2. Merge different pages separately

[Image processing]

1. CSS sprites

CSS sprites were a very popular technique in the past: combining several images into one reduces the number of HTTP requests a site makes. However, when the combined image is large, it is slow to load all at once. With the rise of icon fonts and SVG graphics, this technique has gradually faded from use

2. Base64

Embedding image content in the HTML as a Base64 data URI reduces the number of HTTP requests. However, because Base64 encoding uses 8-bit characters to represent 6 bits of information, the encoded size is roughly 33% larger than the original. Bundlers can automate this for small images, as sketched below
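As an illustration only, a minimal webpack sketch that inlines small images as Base64, assuming the url-loader package is installed (the 8 KB threshold and file types are arbitrary choices):

```js
// webpack.config.js - inline images under 8 KB as Base64 data URIs;
// larger images are emitted as separate files via the file-loader fallback
module.exports = {
  module: {
    rules: [
      {
        test: /\.(png|jpe?g|gif)$/i,
        use: [{ loader: 'url-loader', options: { limit: 8192 } }],
      },
    ],
  },
};
```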

3. Use icon fonts instead of images

[Reduce redirects]

Avoid redirects as much as possible. When a page is redirected, delivery of the entire HTML document is delayed. Nothing is rendered and no components are downloaded until the HTML document arrives, which degrades the user experience

If you must redirect, for example from HTTP to HTTPS, use a 301 permanent redirect instead of a 302 temporary redirect. With a 302, the browser is redirected to the HTTPS page on every visit to the HTTP address. With a 301 permanent redirect, after the first redirect from HTTP to HTTPS, subsequent visits to the HTTP address go directly to the HTTPS page

[Using cache]

When a strong cache such as Cache-Control or Expires is in effect, no request is sent to the server until the cache expires. Once the strong cache expires, a request is sent to the server with the Last-Modified or ETag headers for negotiated caching. If the resource has not changed, the server returns a 304 response and the browser continues to load the resource from the local cache. If the resource has been updated, the server returns the updated resource with a 200 response
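As an illustration only, a minimal sketch of serving static assets with these cache headers from a Node.js server, assuming Express is used (the directory name and max-age value are arbitrary; express.static also emits Last-Modified/ETag headers for negotiated caching):

```js
const express = require('express');
const app = express();

// Strong cache: assets under /public are served with a max-age of one year;
// Last-Modified and ETag headers are added automatically for negotiated caching
app.use(express.static('public', { maxAge: '365d' }));

app.listen(3000);
```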

[Not using CSS @import]

CSS @import causes additional requests

[Avoid empty src and href]

An a tag with an empty href points to the current page address

A form with an empty action submits the form to the current page address

2. Reduce resource size [compression]

1. HTML compression

HTML compression removes characters that are meaningful in the text file but are not displayed in the rendered page, such as spaces, tabs, and newlines

2. CSS compression

CSS compression includes invalid code removal and CSS semantic consolidation

3. JS compression and obfuscation

JS compression and obfuscation include removing invalid characters and comments, shortening and optimizing code semantics, and reducing code readability for code protection

4. Image compression

For photographic images, lossy compression can discard some relatively insignificant color information

[WebP]

WebP images can be used on Android and in most modern browsers. WebP has a better image compression algorithm and produces smaller files: at the same image quality, it is more than 25% smaller than JPG or PNG, and it supports lossless and lossy compression modes, alpha transparency, and animation

[Enable Gzip]

Gzip encoding over HTTP is a technique for improving the performance of web applications, and high-traffic sites commonly use it to make pages feel faster. It usually refers to a web server feature that compresses page content before sending it to the visitor's browser. Plain-text content can generally be compressed to about 40% of its original size
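As an illustration only, a minimal sketch of enabling Gzip in a Node.js server, assuming the Express compression middleware is installed (Gzip is more often enabled in Nginx or Apache configuration):

```js
const express = require('express');
const compression = require('compression');

const app = express();
app.use(compression());             // gzip-compress compressible responses
app.use(express.static('public'));  // the directory name is a placeholder
app.listen(3000);
```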

3. Optimize network connections [Using a CDN]

CDN stands for Content Delivery Network. It redirects a user's request in real time to the service node closest to that user, based on comprehensive information such as network traffic, the connectivity and load of each node, the distance to the user, and response time. Its purpose is to let users fetch the content they need from a nearby node, relieving network congestion and improving the response speed of the website

[Using DNS pre-resolution]

When the browser accesses a domain name, it performs a DNS lookup to obtain the domain's IP address. During resolution, the lookup goes through the browser cache, system cache, router cache, ISP (carrier) DNS cache, root DNS server, top-level DNS server, and authoritative DNS server in turn until the IP address is obtained

DNS prefetch means resolving, according to rules defined by the browser, domain names that may be used later, so that the results are cached in the system cache in advance. This shortens DNS resolution time and improves the site's access speed

The method is to write link tags with rel="dns-prefetch" inside the head tag, as sketched below
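As an illustration only, a minimal sketch of adding the hint from JavaScript (the CDN domain is hypothetical; the equivalent link tag can also be written directly in the head):

```js
// Ask the browser to resolve the CDN's domain name before any resource on it is requested
const hint = document.createElement('link');
hint.rel = 'dns-prefetch';
hint.href = '//cdn.example.com';
document.head.appendChild(hint);
```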

[Parallel connection]

Under HTTP/1.1, Chrome allows at most six concurrent requests per domain name, so serving resources from multiple domain names increases the number of concurrent requests

[Persistent connections]

Persistent connections are established with keep-alive (or persistent) connections, which reduce latency and connection-establishment overhead, keep connections in a tuned state, and reduce the potential number of open connections

[Multiplexed connections]

With the HTTP/2 protocol, a single connection is multiplexed: each connection can transfer multiple resources concurrently, so there is no need to add domain names to increase the number of concurrent connections

4. Optimize resource loading

Display page content and make functionality available as quickly as possible by optimizing where resources are placed in the document and when they are loaded

1. Put CSS files in the head, external stylesheets first and page-specific styles after

2. Put the JS file at the bottom of the body

3. Put JS files that affect page parsing or layout in the head, such as babel-polyfill.js or flexible.js

4. Try not to write style or script tags in the middle of the body

[Resource loading time]

1. Asynchronous script tags

defer: loads asynchronously and executes after HTML parsing is complete. Its effect is similar to placing the script at the bottom of the body

async: loads asynchronously and executes as soon as it finishes loading

2. Modules are loaded on demand

In a system with complex business logic, such as an SPA, the business modules required by the current page should be loaded according to the route

Loading on demand is a great way to optimize a web page or application. The code is split at logical breakpoints, and new code blocks are loaded only when (or just before) they are needed. This speeds up the application's initial load and reduces its overall weight, since some blocks may never be loaded at all

Webpack provides two similar techniques for this. The preferred approach is the import() syntax that conforms to the ECMAScript proposal; the second is webpack-specific require.ensure. A sketch with import() follows
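As an illustration only, a minimal sketch of on-demand loading with dynamic import() (the element ids, module path, and exported function are hypothetical):

```js
// Load the chart module only when the user actually asks for it
document.getElementById('show-chart').addEventListener('click', () => {
  import('./chart')
    .then(({ renderChart }) => renderChart(document.getElementById('chart')))
    .catch((err) => console.error('Failed to load chart module', err));
});
```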

3. Use resource preload and resource prefetch

Preload tells the browser to fetch specified resources in advance so they can be executed as soon as they are needed, which speeds up loading of the current page

Prefetch tells the browser which resources may be used on the next page, so they can be fetched ahead of time and the next page loads faster. A sketch of emitting these hints from webpack follows
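As an illustration only, webpack (4.6+) can emit these hints through magic comments on dynamic imports; a minimal sketch, where the module path and export are hypothetical:

```js
// webpackPrefetch makes webpack emit <link rel="prefetch"> for this chunk,
// so the browser downloads it during idle time before the route is visited;
// webpackPreload: true works the same way for a chunk needed by the current navigation
function openProfilePage() {
  return import(/* webpackPrefetch: true */ './pages/ProfilePage')
    .then(({ default: render }) => render());
}
```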

[Lazy resource loading and resource preloading]

Lazy loading of resources is also called deferred loading: resources are loaded later, or only when certain conditions are met

Resource preloading is used to load resources required by users in advance to ensure good user experience

Lazy resource loading and resource preloading are both off-peak operations: they are not performed while the browser is busy, and resources are loaded when the browser is idle, which optimizes network performance

  

5. Reduce repaint and reflow

1. Avoid deep selectors or other complex selectors, to improve CSS rendering efficiency

2. Avoid CSS expressions. CSS expressions are a powerful but dangerous way to set CSS properties dynamically. They are re-evaluated not only when the page is rendered and resized, but also when the page is scrolled and even when the mouse moves

3. Define the height or minimum height of elements where appropriate; otherwise, page elements will shift and cause reflow when their dynamic content loads

4. Give images explicit dimensions. If an image has no dimensions, on first load the space it occupies grows from zero to its full size, pushing surrounding content up, down, left, and right and causing reflow

5. Do not use table layout, because a small change can cause the entire table to be re-laid out, and rendering a table usually takes about three times as long as rendering equivalent elements

6. For effects that can be achieved with CSS, prefer CSS over JS

[Rendering layer]

1. Separate elements that need to be repainted repeatedly into their own render layer, for example with absolute positioning, to reduce the area that is repainted

2. For animated elements, use hardware (GPU) rendering to avoid repaint and reflow

[DOM optimization]

1. Cache DOM

const div = document.getElementById('div'). Because querying the DOM is time-consuming, cache DOM references instead of querying the same node more than once

2. Reduce DOM depth and DOM number

The more tag elements the HTML contains and the deeper the tag hierarchy, the longer the browser takes to parse the DOM and draw it, so keep DOM elements as lean and as flat as possible

3. Batch manipulate DOM

Because DOM manipulation is time-consuming and can cause reflow, avoid frequent DOM operations by batching them: build the markup as a string first, then update the DOM once with innerHTML, as sketched below
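A minimal sketch of batching DOM updates (the element id and data are hypothetical):

```js
// Build all the markup in a string, then touch the DOM only once
const list = document.getElementById('list');
const items = ['a', 'b', 'c'];

let html = '';
for (const item of items) {
  html += `<li>${item}</li>`;
}
list.innerHTML = html; // a single DOM update instead of one per item
```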

4. Batch CSS styles

Batch style changes by toggling a class on the element, or by setting its style.cssText property
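A minimal sketch of both approaches (the element id and class name are hypothetical):

```js
const box = document.getElementById('box');

// Preferred: switch a predefined class so several style changes apply together
box.classList.add('highlighted');

// Alternative: write several inline styles in a single assignment
box.style.cssText += '; width: 200px; height: 100px; background: #eee;';
```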

5. Manipulate DOM in memory

Use the DocumentFragment object to let DOM operations take place in memory rather than on the page

6. DOM elements are updated offline

For operations such as appendChild, a DocumentFragment can be used to work on the DOM offline, inserting the result into the page only after the element is "assembled"; alternatively, hide the element with display: none and perform the operations while it is out of the render tree
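A minimal sketch using a DocumentFragment (the target list is hypothetical):

```js
const list = document.getElementById('list');
const fragment = document.createDocumentFragment();

// All appends happen on the in-memory fragment, not the live DOM
for (let i = 0; i < 100; i++) {
  const li = document.createElement('li');
  li.textContent = `Item ${i}`;
  fragment.appendChild(li);
}

list.appendChild(fragment); // one insertion into the page, one reflow
```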

7. DOM read and write separation

The browser has a lazy (batched) rendering mechanism, so several consecutive DOM modifications may trigger only one render. However, if you read from the DOM immediately after modifying it, the browser must render in order to return the correct value. Therefore, separate DOM writes from DOM reads
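A minimal sketch of read/write separation (the element ids are hypothetical):

```js
const a = document.getElementById('a');
const b = document.getElementById('b');

// Batch all writes first...
a.style.width = '100px';
b.style.width = '100px';

// ...then do all reads; layout is forced only once here,
// instead of once per write/read pair when they are interleaved
const widthA = a.offsetWidth;
const widthB = b.offsetWidth;
```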

8. Event delegation

Event delegation registers the event listener on a parent element. Because events on child elements bubble up to the parent node, the parent's listener can handle the events of many child elements in one place

With event delegation, you can reduce memory usage, improve performance, and reduce code complexity, as sketched below
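A minimal sketch of event delegation on a list (the element id is hypothetical):

```js
const list = document.getElementById('list');

// One listener on the parent handles clicks from every <li>, present or future
list.addEventListener('click', (event) => {
  const item = event.target.closest('li');
  if (item && list.contains(item)) {
    console.log('Clicked item:', item.textContent);
  }
});
```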

9. Debounce and throttle

Use function throttling or debouncing to limit how often a handler fires, as sketched below
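A minimal sketch of a debounce helper applied to a resize handler (the 200 ms delay is arbitrary):

```js
// Collapse a burst of calls into one call after `delay` ms of silence
function debounce(fn, delay) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

window.addEventListener('resize', debounce(() => {
  console.log('Resize settled at', window.innerWidth);
}, 200));
```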

10. Clean up the environment in time

Remove object references, clear timers, remove event listeners, keep variables in the smallest scope possible, and reclaim memory promptly

6. Better-performing APIs

1. Use the right selectors

The selectors, ordered from best to worst performance, are listed below. Try to choose the selectors with the best performance

ID selector (#myId) > class selector (.myClassName) > tag selector (div, h1, p) > adjacent sibling selector (h1 + p) > child selector (ul > li) > descendant selector (li a) > wildcard selector (*) > attribute selector (a[rel="external"]) > pseudo-class selector (a:hover, li:nth-child)

2. Use requestAnimationFrame instead of setTimeout and setInterval

To make changes to the page at the beginning of each frame, requestAnimationFrame is currently the only option. If setTimeout or setInterval is used to trigger the function that updates the page, that function may be called in the middle or at the end of a frame, causing the following frame to be dropped
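A minimal sketch of driving an animation with requestAnimationFrame (the element id and distances are hypothetical):

```js
const box = document.getElementById('box');
let x = 0;

function step() {
  x += 2;
  box.style.transform = `translateX(${x}px)`;
  if (x < 300) {
    requestAnimationFrame(step); // schedule the next update at the start of the next frame
  }
}

requestAnimationFrame(step);
```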

3. Use IntersectionObserver to lazy-load images in the visible area

The traditional approach listens to the scroll event and calls getBoundingClientRect to determine whether an element is in the visible area; even with function throttling, this causes reflow. IntersectionObserver does not have this problem
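A minimal sketch of image lazy loading with IntersectionObserver, assuming images carry their real URL in a data-src attribute:

```js
const observer = new IntersectionObserver((entries, obs) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      const img = entry.target;
      img.src = img.dataset.src; // swap in the real image once it enters the viewport
      obs.unobserve(img);
    }
  }
});

document.querySelectorAll('img[data-src]').forEach((img) => observer.observe(img));
```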

4. Use web workers

A basic feature of client-side JavaScript is that it is single-threaded: the browser cannot run two event handlers at the same time, nor can it fire a timer while an event handler is running. Web Workers are the multithreading solution HTML5 provides for JavaScript. Computation-heavy code can be moved into a Web Worker so that it does not block the user interface. This API is very useful for complex calculations and data processing
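A minimal sketch of offloading a computation to a worker (worker.js is a hypothetical file name; its contents are shown as comments):

```js
// main.js - hand the heavy work to a background thread
const worker = new Worker('worker.js');
worker.postMessage({ numbers: [1, 2, 3, 4, 5] });
worker.onmessage = (event) => console.log('Sum from worker:', event.data);

// worker.js - runs off the main thread
// onmessage = (event) => {
//   const sum = event.data.numbers.reduce((a, b) => a + b, 0);
//   postMessage(sum);
// };
```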

However, keep browser compatibility in mind when using some of these newer APIs

Webpack optimization [Packaging common code]

With the CommonsChunkPlugin, the resulting shared bundle can be loaded once up front and kept in the cache for later use. This speeds things up because the browser quickly pulls the common code from the cache instead of loading a larger file every time a new page is visited

Webpack 4 removes CommonsChunkPlugin and replaces it with two new configuration options: optimization.splitChunks and optimization.runtimeChunk

Setting optimization.splitChunks.chunks to "all" enables the default code-splitting configuration, as sketched below
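As an illustration only, a minimal webpack 4 sketch of these options (runtimeChunk: 'single' additionally extracts the runtime/manifest into its own chunk):

```js
// webpack.config.js
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',        // split shared code out of both sync and async chunks
    },
    runtimeChunk: 'single', // put the webpack runtime (manifest) into its own chunk
  },
};
```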

[Dynamic import and loading on demand]

Webpack provides two techniques for separating code through dynamic calls to modules. The preferred approach is the import() syntax that conforms to the ECMAScript proposal; the second is webpack-specific require.ensure

[Eliminate useless code]

Tree shaking is a term used to describe removing dead-code from a JavaScript context. It relies on static structural features in the ES2015 module system, such as import and export. The term and concept actually arose from the ES2015 module packaging tool rollup

JS tree shaking is done through UglifyJS and CSS tree shaking is done through Purify CSS

[Long cache optimization]

1. Use chunkhash instead of hash, so that a chunk's filename, and therefore its cache, changes only when the chunk itself changes

2. Use Name instead of id

Each module.id is incremented based on the default resolution order, so when the resolution order changes, the ids change with it

Two plugins solve this problem. The first, NamedModulesPlugin, uses the module's path instead of a numeric identifier. While it helps make output readable during development, it takes longer to execute. The second option, HashedModuleIdsPlugin, is recommended for production builds, as sketched below
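As an illustration only, a minimal webpack 4-style sketch combining chunkhash filenames with hashed module ids (output path and entries are omitted):

```js
// webpack.config.js
const webpack = require('webpack');

module.exports = {
  output: {
    filename: '[name].[chunkhash].js', // filename changes only when the chunk's content changes
  },
  plugins: [
    new webpack.HashedModuleIdsPlugin(), // stable module ids derived from module paths
  ],
};
```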

[Public code inline]

Inline manifest.js into the HTML file using the html-webpack-inline-chunk-plugin