Preface

Performance optimization is usually treated as a scattered collection of scenarios, but in my opinion it is an orderly one: many optimizations pave the way for one another, or even follow from a common thread. From the perspective of process, performance optimization can be divided into the network level and the rendering level; from the perspective of results, it can be divided into the time dimension and the volume dimension. To put it simply: when someone visits a website, make it fast, accurate, and immediately present to the user.

All performance optimization happens at two major levels and two minor dimensions. The core levels are the network level and the rendering level; the secondary dimensions are time and volume, and the secondary dimensions run all through the core levels. Through this article the author has therefore sorted front-end performance optimization into nine strategies and six indicators. Of course, these strategies and indicators are the author's own definitions, a convenient way to give performance optimization some structure.

Therefore, combining these characteristics on the job or in an interview is a great way to present the knowledge that performance optimization extends into. High energy ahead: even if you cannot read it all now, bookmark it and go!

All code examples show only the core configuration in order to highlight the topic; the other configuration is omitted, so please fill it in yourself when copying the code.

Nine strategies

The network level

Performance optimization at the network level is undoubtedly about how to make resources smaller and load faster. The author therefore makes suggestions from the following four aspects.

  • Build strategy: based on build tools (Webpack/Rollup/Parcel/Esbuild/Vite/Gulp)
  • Image strategy: based on image types (JPG/PNG/SVG/WebP/Base64)
  • Distribution strategy: based on the content delivery network (CDN)
  • Caching strategy: based on the browser cache (strong cache/negotiated cache)

The above four aspects are completed step by step and run through the whole project flow. The build and image strategies belong to development, and the distribution and caching strategies belong to production, so at each stage you can check whether the strategies above have been plugged in, in order. This way you cover performance optimization's application scenarios as fully as possible.

Build strategy

This strategy mainly revolves around Webpack, and it is also the most common entry point for performance optimization. Other build tools handle things in much the same way; only the configuration may differ. When it comes to Webpack performance optimization, it undoubtedly starts from the time dimension and the volume dimension.

The author has found that Webpack v5's overall ecosystem compatibility is not great yet, and some features may clash with third-party tools, so we have not upgraded to v5 and continue to use v4 as the production tool. The configurations below are therefore based on v4, but overall they do not differ much from v5.

The author offers six optimization suggestions for each of the two dimensions, twelve in total, each condensed into a short phrase for easy memorization and digestion: ⏱ marks suggestions that reduce build time, 📦 marks those that reduce bundle size.

  • Reduce build time ⏱: shrink range, cache copies, directed search, build ahead, parallel build, visualize structure
  • Reduce bundle size 📦: split code, tree shaking, dynamic polyfill, load on demand, scope hoisting, compress resources

⏱ Shrink range

Configuring include/exclude narrows the range of files a Loader searches, avoiding unnecessary transpilation. node_modules is such a huge directory; how much extra time would it take to scan every file in it?

include/exclude is usually configured per Loader. With src as the source directory, you can do the following; of course, include/exclude can be adjusted as needed.

```js
export default {
  // ...
  module: {
    rules: [{
      exclude: /node_modules/,
      include: /src/,
      test: /\.js$/,
      use: "babel-loader"
    }]
  }
};
```

⏱ Cache copies

Configure cache to have the Loader cache a compiled copy of each file, so that on recompilation only modified files are compiled. Why should unmodified files be recompiled along with the modified ones?

Most Loaders/Plugins provide an option for using a compilation cache, usually with the word cache in its name. Take babel-loader and eslint-webpack-plugin as examples.

```js
import EslintPlugin from "eslint-webpack-plugin";

export default {
  // ...
  module: {
    rules: [{
      // ...
      test: /\.js$/,
      use: [{
        loader: "babel-loader",
        options: { cacheDirectory: true }
      }]
    }]
  },
  plugins: [
    new EslintPlugin({ cache: true })
  ]
};
```

⏱ Directed search

Configure resolve to improve file search speed by pointing to the required file paths. Use it when third-party libraries imported in the normal way report errors, or when you want the program to automatically resolve certain file types.

alias maps module paths, extensions lists file suffixes that may be omitted in import paths, and noParse skips files that have no dependencies. Configuring alias and extensions is usually enough.

```js
export default {
  // ...
  resolve: {
    alias: {
      "#": AbsPath(""), // shortcut for the root directory
      "@": AbsPath("src"), // shortcut for the src directory
      swiper: "swiper/js/swiper.min.js" // module import shortcut
    },
    extensions: ["js", "ts", "jsx", "tsx", "json", "vue"] // suffixes that may be omitted in import paths
  }
};
```

⏱ Build ahead

Configuring DllPlugin packages third-party dependencies ahead of time, completely separating the DLL from business code so that each build compiles only business code. This is an ancient configuration dating back to Webpack v2, but Webpack v4+ no longer recommends it, because the performance gains of its own version iterations are enough to make the DllPlugin's benefits negligible.

DLL stands for dynamic link library, a code library that can be used by multiple programs at the same time. In the front-end field it can be seen as an alternative kind of cache: common code is packaged into DLL files and stored on disk, and subsequent builds dynamically link those DLL files instead of packaging the common code again, improving build speed and reducing packing time.

In general, configuring a DLL is more complicated than the other configurations. The process can be roughly divided into three steps.

First, tell the build script which dependencies to turn into DLLs, and generate the DLL files and the DLL manifest file.

```js
import { DefinePlugin, DllPlugin } from "webpack";

export default {
  // ...
  entry: {
    vendor: ["react", "react-dom", "react-router-dom"]
  },
  mode: "production",
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendor: {
          chunks: "all",
          name: "vendor",
          test: /node_modules/
        }
      }
    }
  },
  output: {
    filename: "[name].dll.js", // output file name
    library: "[name]", // global variable name: other modules obtain module paths from this variable
    path: AbsPath("dist/static") // output directory path
  },
  plugins: [
    new DefinePlugin({
      "process.env.NODE_ENV": JSON.stringify("development") // in DLL mode override production with development (enable debugging of third-party dependencies)
    }),
    new DllPlugin({
      name: "[name]", // global variable name: must match output.library
      path: AbsPath("dist/static/[name]-manifest.json") // manifest file output path
    })
  ]
};
```

Then configure an execution script in package.json and run it before each build, so the DLL files are packaged first.

{"scripts": {" DLL ": "webpack --config webpack.dll. Js "}} Copy the codeCopy the code

Finally, link the DLL files and tell Webpack which DLL files can be hit. Use html-webpack-tags-plugin to insert the DLL file automatically at build time.

```js
import { DllReferencePlugin } from "webpack";
import HtmlTagsPlugin from "html-webpack-tags-plugin";

export default {
  // ...
  plugins: [
    // ...
    new DllReferencePlugin({
      manifest: AbsPath("dist/static/vendor-manifest.json") // manifest file path
    }),
    new HtmlTagsPlugin({
      append: false, // insert before generated resources
      publicPath: "/", // use publicPath
      tags: ["static/vendor.dll.js"] // resource path
    })
  ]
};
```

For a saving of just a few seconds, the author suggests weighing whether this configuration is worth it. Of course, you can also use autodll-webpack-plugin instead of configuring it by hand.

⏱ Parallel build

Configuring thread-loader turns single-process Loader work into multi-process work, releasing the concurrency advantage of a multi-core CPU. When building a project with Webpack there are many files to parse and process; the build is a compute-intensive operation that gets slower as the number of files grows.

Webpack runs on Node with a single-threaded model; simply put, the tasks Webpack has to process are handled one by one, not several at a time.

File reads/writes and computation are unavoidable, so can Webpack handle multiple tasks at the same moment and harness the power of a multi-core CPU to speed up the build? thread-loader helps you spawn workers based on the number of CPUs.

One caveat: if the project does not have many files, do not use this suggestion, because spawning worker threads has its own performance overhead.

```js
import Os from "os";

export default {
  // ...
  module: {
    rules: [{
      // ...
      test: /\.js$/,
      use: [{
        loader: "thread-loader",
        options: { workers: Os.cpus().length } // one worker per CPU core
      }, {
        loader: "babel-loader",
        options: { cacheDirectory: true }
      }]
    }]
  }
};
```

⏱ Visualize structure

Configuring BundleAnalyzer analyzes the structure of the bundled files, with the benefit of finding the causes of excessive volume; analyzing those causes yields an optimization plan and can also reduce build time. BundleAnalyzer visualizes the bundle's module composition, each module's share of the volume, inclusion and dependency relationships between modules, duplicated files, and compressed-size comparisons.

With webpack-bundle-analyzer configured, related problems can be located quickly.

```js
import { BundleAnalyzerPlugin } from "webpack-bundle-analyzer";

export default {
  // ...
  plugins: [
    // ...
    new BundleAnalyzerPlugin()
  ]
};
```

📦 Split code

Split the code of each module and extract the common parts; the benefit is reducing how often duplicate code appears in the bundle. Webpack v4 uses splitChunks in place of CommonsChunkPlugin for code splitting.

splitChunks can be configured in many ways; for details, refer to the official documentation.

```js
export default {
  // ...
  optimization: {
    runtimeChunk: { name: "manifest" }, // split out the webpack runtime
    splitChunks: {
      cacheGroups: {
        common: {
          minChunks: 2,
          name: "common",
          priority: 5,
          reuseExistingChunk: true, // reuse existing chunks
          test: AbsPath("src")
        },
        vendor: {
          chunks: "initial", // chunk type
          name: "vendor", // chunk name
          priority: 10, // priority
          test: /node_modules/ // regular expression matching files
        }
      }, // cache groups
      chunks: "all" // code-splitting type: all modules, async modules, or initial modules
    } // code splitting
  }
};
```

📦 Tree shaking

Removing code that is never referenced in the project has the benefit of eliminating duplicate and unused code. Tree shaking first appeared in Rollup and is one of Rollup's core concepts; Webpack v2 later borrowed it.

Tree shaking only works with the ESM specification and has no effect on other module specifications, because it relies on static structure analysis, and only import/export provide static import/export semantics. Therefore, business code must be written with the ESM specification so that tree shaking can remove duplicate and unused code.
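
As a tiny illustration (file names assumed), ESM's static structure is what lets the bundler prove an export is unused and drop it:

```js
// math.js: both functions are exported, but only one is ever imported
export const add = (a, b) => a + b;
export const sub = (a, b) => a - b; // imported nowhere, so tree shaking removes it

// index.js: the static import tells the bundler exactly what is used
import { add } from "./math";
console.log(add(1, 2)); // only `add` survives in the production bundle
```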

In Webpack, you only need to set the mode to production for tree shaking to take effect; meanwhile, write business code with the ESM specification, importing modules with import and exporting them with export.

```js
export default {
  // ...
  mode: "production"
};
```

📦 Dynamic polyfill

A polyfill service returns a shim tailored to the current browser based on its UA, with the benefit of not bundling bulky polyfills. Builds configured with @babel/preset-env and core-js pack polyfills in to meet certain requirements, which undoubtedly adds to the code size again.

The useBuiltIns option provided by @babel/preset-env can import polyfills on demand (a minimal config sketch follows the list below).

  • false: ignore target.browsers and load all polyfills
  • entry: load a subset of polyfills according to target.browsers (imports only the polyfills the browser does not support; requires import "core-js/stable" in the entry file)
  • usage: load a subset of polyfills according to target.browsers and the ES6+ usage detected in the code (no import needed in the entry file)
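
A minimal babel.config.js sketch of the usage mode (the exact options shown are the author's assumptions, not prescriptions):

```js
module.exports = {
  presets: [
    ["@babel/preset-env", {
      useBuiltIns: "usage", // inject only the polyfills needed by the code and target browsers
      corejs: 3 // major version of core-js that provides the polyfills
    }]
  ]
};
```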

Dynamic polyfills are recommended. A dynamic polyfill service returns polyfills for the current browser based on its UserAgent. The idea is to work out, from the UserAgent and browserslist, which features the current browser does not support, and return polyfills for just those features. Those interested can dig into polyfill-library and polyfill-service.

Two dynamic polyfill services are listed here; open the links below in different browsers to see the different polyfill output. IE surely receives the most polyfills, proudly declaring: "I am me, a different kind of firework."

  • Official CDN service: polyfill.io/v3/polyfill…
  • Ali CDN service: polyfill.alicdn.com/polyfill.mi…

Use html-webpack-tags-plugin to insert the dynamic polyfill automatically at build time.

```js
import HtmlTagsPlugin from "html-webpack-tags-plugin";

export default {
  // ...
  plugins: [
    // ...
    new HtmlTagsPlugin({
      append: false, // insert before generated resources
      publicPath: false, // do not use publicPath
      tags: ["https://polyfill.alicdn.com/polyfill.min.js"] // resource path
    })
  ]
};
```

📦 Load on demand

Package each routing page / triggered feature as a separate file and load it only when it is used, reducing the burden on first-screen rendering. The more features a project has, the larger the bundle and the slower the first screen renders.

First-screen rendering needs only its own JS code and none of the rest, so load on demand. Webpack v4 provides module splitting and on-demand loading; combined with import(), it keeps the first-screen bundle lean and thus speeds up first-screen rendering. The JS code for a feature is loaded only when that feature is triggered.

Webpack v4 provides magic comments to name split chunks; without the comment, split chunks cannot be attributed to a business module, so the modules of one business usually share one chunk name.

```js
const Login = () => import(/* webpackChunkName: "login" */ "../../views/login");
const Logon = () => import(/* webpackChunkName: "logon" */ "../../views/logon");
```

If the console reports an error at runtime, add @babel/plugin-syntax-dynamic-import to the Babel configuration in package.json.

```json
{
  // ...
  "babel": {
    // ...
    "plugins": [
      // ...
      "@babel/plugin-syntax-dynamic-import"
    ]
  }
}
```

📦 Scope hoisting

Scope hoisting analyzes module dependencies and merges the packaged modules into one function where possible, with the benefit of reducing function declarations and memory overhead. Scope hoisting first appeared in Rollup and is one of Rollup's core concepts; Webpack v3 later borrowed it.

Before it is enabled, built code contains a large number of function closures. Because of module dependencies, Webpack converts each module into an IIFE when bundling; the many closures wrapping the code increase the bundle size (the more modules, the more noticeable), and more scoped functions are created at runtime, increasing the memory overhead.

After it is enabled, the built code is placed into a single function scope in the order the modules were imported, and some variables are renamed appropriately to prevent name conflicts, thereby reducing function declarations and memory costs.

In Webpack, simply setting the mode to production makes scope hoisting take effect; alternatively, set concatenateModules explicitly.

```js
export default {
  // ...
  mode: "production"
};

// or enable it explicitly:
export default {
  // ...
  optimization: {
    // ...
    concatenateModules: true
  }
};
```

📦 Compress resources

Compress HTML/CSS/JS code and compress font/image/audio/video files; the benefit is cutting the bundle size more effectively. Optimizing code to the extreme may be less effective than optimizing the size of a single resource file.

For HTML code, use html-webpack-plugin to enable compression.

```js
import HtmlPlugin from "html-webpack-plugin";

export default {
  // ...
  plugins: [
    // ...
    new HtmlPlugin({
      // ...
      minify: {
        collapseWhitespace: true,
        removeComments: true
      }
    })
  ]
};
```

For CSS/JS code, use the following plugins to enable compression. OptimizeCssAssetsPlugin is built on top of cssnano, while UglifyjsPlugin and TerserPlugin come from the webpack organization. Note that compressing JS code must distinguish ES5 from ES6.

  • optimize-css-assets-webpack-plugin: compress CSS code
  • uglifyjs-webpack-plugin: compress ES5 JS code
  • terser-webpack-plugin: compress ES6 JS code

```js
import OptimizeCssAssetsPlugin from "optimize-css-assets-webpack-plugin";
import TerserPlugin from "terser-webpack-plugin";
import UglifyjsPlugin from "uglifyjs-webpack-plugin";

const compressOpts = type => ({
  cache: true, // cache files
  parallel: true, // process in parallel
  [`${type}Options`]: {
    beautify: false,
    compress: { drop_console: true }
  } // compression options
});
const compressCss = new OptimizeCssAssetsPlugin({
  cssProcessorOptions: {
    autoprefixer: { remove: false }, // keep outdated prefixes
    safe: true // avoid cssnano recalculating z-index
  }
});
const compressJs = USE_ES6
  ? new TerserPlugin(compressOpts("terser"))
  : new UglifyjsPlugin(compressOpts("uglify"));

export default {
  // ...
  optimization: {
    // ...
    minimizer: [compressCss, compressJs] // code compression
  }
};
```

For font/audio/video files there is really no plugin we can use; you can only compress them with the appropriate tools before releasing the project to production. For image files, most loaders/plugins wrap image-processing tools whose binaries are hosted on servers abroad, so installation frequently fails. See the author's article "Talking about the Dangerous Pits of NPM Mirroring" for answers.

In view of this, the author put some effort into developing a webpack plugin for compressing images; see tinyimg-webpack-plugin for details.

```js
import TinyimgPlugin from "tinyimg-webpack-plugin";

export default {
  // ...
  plugins: [
    // ...
    TinyimgPlugin()
  ]
};
```

The above build strategies are integrated into the author's open-source bruce-cli, an automated build scaffold for React/Vue applications. It works with zero configuration out of the box, making it very suitable for beginners, intermediate users, and rapid development. You can also override its default configuration by creating a brucerc.js file, so you focus on writing business code rather than build code, keeping the project structure cleaner. For details please click here, remember to read the documentation when using it, and support it with a Star!

Image strategy

This strategy mainly revolves around image types and is also a performance optimization strategy with a low access cost. Just do the following two things.

  • Image selection: know the characteristics of every image type and the application scenarios each suits best
  • Image compression: compress images with tools or scripts before deploying to production

For image selection, you must know the relative volume/quality/compatibility/request/compression/transparency/scenario characteristics of each image type, so you can quickly judge which type to use in which scenario.

| Type | Volume | Quality | Compatibility | Request | Compression | Transparency | Scenario |
| --- | --- | --- | --- | --- | --- | --- | --- |
| JPG | Small | Medium | High | Yes | Lossy | Not supported | Backgrounds, carousels, color-rich images |
| PNG | Large | High | High | Yes | Lossless | Supported | Icons, transparent images |
| SVG | Small | High | High | Yes | Lossless | Supported | Icons, vector images |
| WebP | Small | Medium | Low | Yes | Both | Supported | Depends on compatibility |
| Base64 | Depends | Medium | High | No | Lossless | Supported | Icons |
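
Since WebP's weak point is compatibility, a runtime check lets you serve WebP where it is supported and fall back elsewhere. A minimal sketch, assuming each image is published in both formats and declared with a data-src stem:

```js
// Feature-detect WebP: browsers without WebP support encode the canvas as PNG instead
function supportsWebP() {
  const canvas = document.createElement("canvas");
  canvas.width = canvas.height = 1;
  return canvas.toDataURL("image/webp").startsWith("data:image/webp");
}

// Swap the best available format into every image declared with a data-src stem
const ext = supportsWebP() ? "webp" : "jpg";
document.querySelectorAll("img[data-src]").forEach(img => {
  img.src = `${img.dataset.src}.${ext}`; // e.g. banner.webp or banner.jpg
});
```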

Image compression can be done in the build strategy above (compress resources), or with your own tools. Since most webpack image-compression tools fail to install or run into various environment problems (you know what I mean), the author recommends compressing images with a dedicated tool before releasing the project to production: stable, and it adds nothing to the build time.

Good image compression tools boil down to the following; if you know a better one, please add it in the comments!

| Tool | Open source | Paid | API | Free-tier notes |
| --- | --- | --- | --- | --- |
| QuickPicture | ✖️ | ✔️ | ✖️ | More supported types, better compression quality, volume limit, quantity limit |
| ShrinkMe | ✖️ | ✖️ | ✖️ | More supported types, average compression quality, no quantity limit, volume limit |
| Squoosh | ✔️ | ✖️ | ✔️ | Fewer supported types, average compression quality, no quantity limit, volume limit |
| TinyJpg | ✖️ | ✔️ | ✔️ | Fewer supported types, very good compression quality, quantity limit, volume limit |
| TinyPng | ✖️ | ✔️ | ✔️ | Fewer supported types, very good compression quality, quantity limit, volume limit |
| Zhitu | ✖️ | ✖️ | ✖️ | Average supported types, average compression quality, quantity limit, volume limit |

If you don't want to drag image files back and forth on a website, you can use the author's open-source batch image-processing tool img-master instead, which offers not only compression but also grouping, watermarking, and conversion. All projects the author is responsible for are currently processed with this tool, and it has been smooth sailing!

The image strategy can be a very cheap but very effective performance optimization strategy; after all, a single image may well outweigh everything the build strategy saves.

Distribution strategy

This strategy mainly revolves around the content delivery network and is also the performance optimization strategy with the highest access cost, requiring sufficient financial support.

Although the access cost is high, most companies buy CDN servers anyway, so there is no need to worry much about deployment: just use it. The strategy gets the most out of the CDN by following two points as far as possible.

  • Route all static resources through the CDN: during development, determine which files count as static resources
  • Put static resources and the main page under different domain names: avoid requests carrying cookies

A content delivery network (CDN) is a group of geographically distributed servers that store copies of data and serve data requests based on proximity. Its core features are caching and back-to-source: caching copies resources onto the CDN servers, and back-to-source requests the upper-layer server and copies the resource to the CDN server when the resource has expired or does not exist there.

A CDN can reduce network congestion and improve users' response speed and hit rate. It is an intelligent virtual network built on top of the existing network; relying on servers deployed in many places and on the central platform's scheduling, load balancing, and content distribution modules, it lets users fetch the resources they need from nearby. That is the CDN's ultimate mission.

Given the advantage of the CDN's proximity principle, all static resources of a site can be deployed to CDN servers. Which files are static resources? Typically, resources the server can return without computation: style files, script files, and multimedia files (fonts/images/audio/video) that do not change constantly.
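
In the build, this usually just means pointing Webpack's output.publicPath at the CDN domain so every emitted asset URL is served from it. A minimal sketch, with cdn.example.com standing in for a real CDN domain (kept separate from the page's own domain so asset requests do not carry cookies):

```js
export default {
  // ...
  output: {
    // ...
    publicPath: process.env.NODE_ENV === "production"
      ? "https://cdn.example.com/assets/" // assumed CDN domain, distinct from the page's domain
      : "/" // serve assets locally during development
  }
};
```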

If you need to set up CDN-backed storage independently, consider Alibaba Cloud OSS, NetEase Shufan NOS, or Qiniu Cloud Kodo, together with the CDN service of the corresponding product. For reasons of length the configuration is not covered here; each product ships tutorials after purchase that you can follow yourself.

The author recommends NetEase Shufan NOS as a first choice; after all, one should be quite confident in one's own product. An accidental little ad, haha!

Caching strategies

This strategy mainly revolves around the browser cache and is also the performance optimization strategy with the lowest access cost. It significantly reduces the losses caused by network transmission and improves page access speed, making it a strategy well worth using.

To squeeze the most out of the browser cache, the strategy tries to follow these five steps in order.

  • Consider whether to reject all caching: Cache-Control: no-store
  • Consider whether the resource is re-requested from the server every time: Cache-Control: no-cache
  • Consider whether the resource may be cached by proxy servers: Cache-Control: public/private
  • Consider the resource's expiration time: Expires: t / Cache-Control: max-age=t, s-maxage=t
  • Consider negotiated caching: Last-Modified/ETag

Browser caching is also one of the highest-frequency interview questions. In the author's view, only when you can explain all of the terms above clearly, in any order, do you really understand the role browser caching plays in performance optimization.

The caching policy is implemented by setting HTTP headers and is divided into strong cache (also called mandatory cache) and negotiated cache (also called comparison cache).

The overall mechanism is clear: try the strong cache first, and fall back to the negotiated cache only when it misses. If the strong cache hits, the local copy is used directly. If it misses, a request is sent to the server to check the negotiated cache; if the negotiated cache hits, the server returns 304 and the browser uses its local copy, otherwise the server returns the latest resource.

Two common application scenarios are worth covering with a caching strategy (a server-side sketch follows the list below); more can be tailored to project requirements.

  • Frequently changing resources: set Cache-Control: no-cache so the browser sends a request to the server every time, combined with Last-Modified/ETag to verify whether the resource is still valid
  • Infrequently changing resources: set Cache-Control: max-age=31536000 and hash the file names; when the code changes, a new file name is generated, and the HTML downloads the latest file as soon as the referenced file name changes
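
To make the two scenarios concrete, here is a minimal Node sketch (the paths and the md5-based ETag scheme are assumptions for illustration): the HTML is served with no-cache plus an ETag for negotiation, while hash-named static assets get a one-year strong cache.

```js
import Fs from "fs";
import Http from "http";
import Crypto from "crypto";

Http.createServer((req, res) => {
  if (req.url === "/") {
    // frequently changing resource: make the browser re-validate every time
    const html = Fs.readFileSync("dist/index.html");
    const etag = Crypto.createHash("md5").update(html).digest("hex");
    if (req.headers["if-none-match"] === etag) {
      res.writeHead(304); // negotiated cache hit: the browser reuses its local copy
      return res.end();
    }
    res.writeHead(200, { "Cache-Control": "no-cache", "ETag": etag });
    return res.end(html);
  }
  // hash-named assets never change content under the same name: strong-cache for a year
  res.writeHead(200, { "Cache-Control": "max-age=31536000" });
  res.end(Fs.readFileSync(`dist${req.url}`));
}).listen(3000);
```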

Rendering level

Performance optimization at the rendering level is undoubtedly about how to make code parse better and execute faster. The author therefore makes suggestions from the following five aspects.

  • CSS strategy: based on CSS rules
  • DOM strategy: based on DOM operations
  • Blocking strategy: based on script loading
  • Reflow/repaint strategy: based on reflow and repaint
  • Asynchronous update strategy: based on asynchronous updates

The above five aspects are applied while writing code and run through the development phase of the whole project. Pay attention to each of the points below during development, cultivate good habits, and performance optimization will follow naturally.

Performance optimization at the rendering level emphasizes coding details rather than hard code. Simply put, follow certain coding rules and you maximize rendering-level performance.

The reflow/repaint strategy is the most common rendering-level performance optimization. Last year the author published a Juejin booklet, "Playing with the Beauty of CSS Art", which spends an entire chapter on reflow and repaint; that chapter is open for free reading. For more details, please click here.

CSS strategy
  • Avoid nested rules more than three levels deep
  • Avoid adding extra selectors to an ID selector
  • Avoid using tag selectors instead of class selectors
  • Avoid using the wildcard selector; declare rules only for target nodes
  • Avoid repeated matching and repeated definitions; pay attention to inheritable properties
DOM strategy (a combined sketch follows these lists)
  • Cache DOM computed properties
  • Avoid excessive DOM manipulation
  • Use DocumentFragment to cache and batch DOM operations
Blocking strategy
  • Script has a strong dependency on the DOM/other scripts: set defer on <script>
  • Script has no strong dependency on the DOM/other scripts: set async on <script>
Reflow/repaint strategy
  • Cache DOM computed properties
  • Merge style changes through classes; avoid changing styles line by line
  • Use display to control DOM show/hide, taking the DOM offline
Asynchronous update strategy
  • When changing the DOM in an asynchronous task, wrap the change into a microtask
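
A combined sketch of several rules above (the node id and item count are assumed for illustration): cache a computed property, batch insertions with a DocumentFragment, merge style changes into one class, and defer follow-up DOM work into a microtask.

```js
const list = document.querySelector("#list"); // assumed container node
const width = list.offsetWidth; // cache the computed property: read layout once, not inside the loop

const fragment = document.createDocumentFragment();
for (let i = 0; i < 100; i++) {
  const item = document.createElement("li");
  item.textContent = `Item ${i} of a ${width}px list`;
  fragment.appendChild(item); // built off-document: no reflow per item
}
list.appendChild(fragment); // one DOM operation, one reflow

list.classList.add("highlight"); // one class toggle instead of line-by-line style writes

Promise.resolve().then(() => {
  list.scrollTop = list.scrollHeight; // wrap the follow-up DOM change into a microtask
});
```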

The six indicators

According to importance and practicality, the author has divided performance optimization into nine strategies and six indicators, which are really just living performance optimization suggestions. Some suggestions have little impact whether applied or not, so the author ranks the nine strategies above the six indicators. The nine strategies are recommended for the development and production stages; the six indicators can be added during project review according to the actual application scenarios.

The six indicators cover most of the remaining details of performance optimization and complement the nine strategies. Based on the characteristics of each suggestion, the author divides the indicators into the following six aspects.

  • Load optimization: performance optimizations that can be made while a resource is loading
  • Execution optimization: performance optimizations that can be made while a resource is executing
  • Rendering optimization: performance optimizations that can be made while a resource is rendering
  • Style optimization: performance optimizations that can be made while coding styles
  • Script optimization: performance optimizations that can be made while coding scripts
  • V8 engine optimization: performance optimizations targeted at V8 engine characteristics

Conclusion

Performance optimization is a cliché that comes up on the job and in interviews. Most of the time it is not about doing or reciting whatever optimization suggestion comes to mind, but about having an overall grasp of why you are optimizing and what the optimization is for.

Performance optimization cannot be covered in a single article; covering it in detail would take two books. What this article can give you is a direction and an attitude: learn it and apply it. I hope reading it helps you.

Original link: juejin.cn/post/698167…
