• Original title: Front-End Performance Checklist 2019 – Part 4
  • Vitaly Friedman
  • The Nuggets Translation Project
  • Permanent link to this article: github.com/xitu/gold-m…
  • Translator: Ivocin
  • Proofreaders: Ziyin Feng, Weibinzhu

Let’s make 2019 fast! You are reading the 2019 annual review of front-end performance optimization, a series that began in 2016.

  • Front-end Performance Optimization Year 2019 – Part 1
  • Front-end Performance Optimization Year 2019 – Part 2
  • Front-end Performance Optimization Year 2019 – Part 3
  • Front-end Performance Optimization Year 2019 – Part 4 (this article)
  • Front-end Performance Optimization Year 2019 – Part 5
  • Front-end Performance Optimization Year 2019 – Part 6

Table of Contents

  • Build optimization
    • 22. Prioritize
    • 23. Revisit excellent “meet the minimum requirements” techniques
    • 24. Parsing JavaScript is time-consuming, so keep it small
    • 25. Did you use tree shaking, scope hoisting, and code splitting?
    • 26. Can you offload JavaScript into a Web Worker?
    • 27. Can you offload JavaScript into WebAssembly?
    • 28. Is AOT compilation used?
    • 29. Only provide legacy code to older browsers
    • 30. Is JavaScript differential serving used?
    • 31. Identify and rewrite legacy code through incremental decoupling
    • 32. Identify and remove unused CSS/JS
    • 33. Reduce the size of JavaScript packages
    • 34. Is predictive prefetch of JavaScript code blocks used?
    • 35. Benefit from optimizing for your target JavaScript engine
    • 36. Client-side rendering or server-side rendering?
    • 37. Restrict the influence of third-party scripts
    • 38. Set the HTTP cache header

Build optimization

22. Prioritize

Know what you’re dealing with first. Make an inventory of all of your static assets (JavaScript, images, fonts, third-party scripts, and expensive modules on your page, such as carousels, complex infographics, and multimedia content) and group them.

Create a spreadsheet. Define the basic core experience for legacy browsers (i.e. fully accessible core content), the enhanced experience for modern browsers (i.e. the richer, full experience), and the extras (assets that aren’t strictly necessary and can be lazy-loaded: web fonts, non-essential styles, carousel scripts, video players, social media buttons, large images). Not long ago, we published an article on “Improving The Performance of Smashing Magazine’s Website,” which describes this approach in detail.

When optimizing performance, we need to determine our priorities. Load the core experience immediately, then load the enhanced experience, and finally load the extra features.

23. Revisit excellent “meet the minimum requirements” techniques

Today, we can still use the cut-the-mustard technique to deliver the core experience to legacy browsers and an enhanced experience to modern browsers. (See this article for examples of cutting the mustard.) An updated variant of this technique would use ES2015+ syntax via script type="module", also known as the module/nomodule pattern.

Keep in mind, though, that feature detection alone is not enough to make an informed decision about which payload to send to a browser. On its own, cutting the mustard by inferring a device’s capability from the browser version is no longer valid today.

For example, cheap Android phones in developing countries mostly run Chrome and will cut the mustard despite their limited memory and CPU capabilities. Eventually, using the Device Memory Client Hints header, we will be able to target low-end devices more reliably. At the time of writing, the header is supported only in Blink (the same goes for Client Hints in general). Since Device Memory also has a JavaScript API available in Chrome, one option could be to feature-detect based on that API and fall back to cutting the mustard only if it is not supported (thanks, Yoav!).
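To illustrate the idea, here is a minimal sketch that prefers the Device Memory API and falls back to a classic cut-the-mustard check (the 1 GiB threshold and the exact fallback checks are arbitrary examples, not prescriptions from the article):

```js
// Prefer the Device Memory API where available, otherwise fall back
// to a crude capability check.
function isLowEndDevice() {
  // navigator.deviceMemory reports approximate RAM in GiB (Blink-based browsers only).
  if ('deviceMemory' in navigator) {
    return navigator.deviceMemory <= 1; // threshold chosen arbitrarily for illustration
  }
  // Fallback: a classic cut-the-mustard check for browsers without the API.
  const cutsTheMustard = 'querySelector' in document && 'addEventListener' in window;
  return !cutsTheMustard;
}

if (isLowEndDevice()) {
  // load the core experience only
} else {
  // load the enhanced experience
}
```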

24. Parsing JavaScript is time-consuming, so keep it small

When working with single-page applications, we need some time to initialize the app before we can render the page. Your setup will require its own custom solution, but keep an eye out for modules and techniques that speed up the first render. For example, here is how to debug React performance and eliminate common React performance issues, and here is how to improve performance in Angular. In general, most performance problems come from the initial time spent bootstrapping the app.

JavaScript has a parsing cost, and file size is rarely the only factor affecting performance. Parsing and execution times vary significantly depending on the hardware of the device. On an average phone (Moto G4), parsing 1MB of (uncompressed) JavaScript takes around 1.3–1.4s, with 15–20% of all time on mobile spent on parsing. With compilation in play, just preparing the JavaScript takes 4s on average, and it is around 11s before First Meaningful Paint on a mobile device. Reason: parse and execution times can easily be 2–5 times higher on low-end mobile devices.

To ensure high performance, we as developers need to find ways to write and deploy less JavaScript. That’s why it’s important to examine every JavaScript dependency in detail.

There are many tools available to help you make informed decisions about dependencies and the impact of viable alternatives:

  • webpack-bundle-analyzer
  • Source Map Explorer
  • Bundle Buddy
  • Bundlephobia
  • Webpack size-plugin
  • Import Cost for Visual Studio Code

An interesting way of avoiding parsing costs is to use the binary templates that Ember introduced in 2017. With them, Ember replaces JavaScript parsing with JSON parsing, which is likely faster. (Thanks, Leonardo, Yoav!)

Measure JavaScript parse and compile times. We can use synthetic testing tools and browser traces to track parsing times, and browser implementers are talking about exposing RUM-based processing times in the future. Also consider using DeviceTiming from Etsy, a little tool that lets you instrument your JavaScript to measure parse and execution times on any device or browser.

Bottom line: While script size is important, it’s not everything. Parsing and compilation times do not necessarily increase linearly as the size of a script increases.

25. Did you use tree shaking, scope hoisting, and code splitting?

Tree-shaking is a way of cleaning up your build process by only including code that is actually used in production and excluding unused imports in Webpack. With Webpack and Rollup we also have scope hoisting, which allows both tools to detect where the import chain can be flattened and converted into one inlined function without compromising the code. With Webpack, we can also use JSON Tree Shaking.

In addition, you may want to consider learning how to write efficient CSS selectors and how to avoid bloated and expensive styles. If you want to go one step further, you can also use Webpack to shorten class names and use scope isolation to rename CSS class names dynamically at compile time.

Code-splitting is another Webpack feature that splits your code base into “chunks” that are loaded on demand. Not all of the JavaScript has to be downloaded, parsed and compiled right away. Once you define split points in your code, Webpack takes care of the dependencies and output files. It keeps the initial download small and requests code on demand as the application asks for it. Alexander Kondrov has a great introduction to code splitting with Webpack and React.

Consider using preload-webpack-plugin, which takes the routes you code-split and prompts the browser to preload them using link rel="preload" or link rel="prefetch". Webpack inline directives also give some control over preload/prefetch, as shown in the sketch below.
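Here is a minimal sketch of code splitting combined with an inline directive, assuming a hypothetical chart module (the chunk name, module path and element ids are placeholders):

```js
// Split a heavy chart module into its own chunk, hint Webpack to prefetch it
// during idle time, and only load/parse/execute it when the user asks for it.
document.querySelector('#show-chart').addEventListener('click', async () => {
  const { renderChart } = await import(
    /* webpackChunkName: "chart" */
    /* webpackPrefetch: true */
    './chart.js'
  );
  renderChart(document.querySelector('#chart'));
});
```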

Where do you define split points? By tracking which chunks of CSS/JavaScript are used and which are not. Umar Hansa explains how to do this with DevTools’ Code Coverage tool.

If you are not using Webpack, note that Rollup shows significantly better results than Browserify exports. While we’re at it, you might want to check out rollup-plugin-closure-compiler and Rollupify, which converts ECMAScript 2015 modules into one big CommonJS module, because small modules can have a surprisingly high performance cost depending on your choice of bundler and module system.

26. Can you offload JavaScript into a Web Worker?

To reduce the negative impact on Time-To-Interactive, consider offloading expensive JavaScript into a Web Worker or caching it via a Service Worker.

As the code base grows, UI performance bottlenecks will show up and degrade the user experience. The main reason is that DOM operations run alongside your JavaScript on the main thread. With Web Workers, we can move these expensive operations to a background process running on a different thread. Typical use cases for Web Workers are prefetching data and Progressive Web Apps that preload and store some data ahead of time so you can use it later when you need it. And you can use Comlink to streamline the communication between the main page and the worker. There is still some work to do, but we are getting there.

Workerize lets you move modules into Web workers, automatically mapping exported functions to asynchronous proxies. If you are using Webpack, you can use workerize-loader. Alternatively, try the worker-plugin.

Note that Web workers do not have access to the DOM because the DOM is not “thread-safe” and the code they execute needs to be contained in a separate file.
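For reference, here is a bare-bones sketch of the underlying pattern that libraries like Comlink and Workerize abstract away (the data set and function names are hypothetical):

```js
// main.js: hand the heavy work to a worker so the main thread stays responsive.
const worker = new Worker('worker.js');
worker.postMessage({ items: largeDataSet }); // largeDataSet: hypothetical payload
worker.addEventListener('message', (event) => {
  render(event.data.result); // render(): hypothetical UI update on the main thread
});
```

```js
// worker.js: no DOM access here, only computation and messaging.
self.addEventListener('message', (event) => {
  const result = expensiveTransform(event.data.items); // hypothetical CPU-heavy function
  self.postMessage({ result });
});
```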

27. Can you offload JavaScript into WebAssembly?

We can also convert JavaScript into WebAssembly, a binary instruction format that can be compiled from high-level languages such as C/C++/Rust. Its browser support is remarkable, and it has recently become viable as function calls between JavaScript and WASM are getting faster, at least in Firefox.

In real-world scenarios, JavaScript seems to perform better than WebAssembly on smaller array sizes, while WebAssembly performs better than JavaScript on larger array sizes. For most web applications, JavaScript is a better fit, and WebAssembly is best suited for computationally intensive web apps, such as web games. Still, it might be worth investigating whether a switch to WebAssembly yields noticeable performance improvements.
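As a reference point, here is a minimal sketch of calling into a WebAssembly module from JavaScript; the .wasm path, import object and export name are hypothetical, and the server needs to serve the file with the application/wasm MIME type for streaming instantiation:

```js
// Compile and instantiate a WebAssembly module as it streams in,
// then call its exports from JavaScript like regular functions.
async function loadWasm() {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch('/wasm/module.wasm'),
    { env: {} } // imports the module expects from JavaScript, if any
  );
  return instance.exports;
}

loadWasm().then((exports) => {
  console.log(exports.sum(2, 3)); // hypothetical exported WASM function
});
```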

If you want to learn more about WebAssembly:

  • Lin Clark has written a comprehensive series of articles on WebAssembly, and Milica Mihajlija outlines how to run native code in the browser, why you would do that, and what it means for the future of JavaScript and web development.

  • Google Codelabs provides an introduction to WebAssembly, a 60-minute course in which you will learn how to take native code (in C) and compile it into WebAssembly, then call it directly from JavaScript.

  • Alex Danilo explains WebAssembly and how it works in his Google I/O 2017 talk. In addition, Benedek Gagyi shares a practical case study of WebAssembly, specifically how the team uses it as an output format for the C++ codebase for iOS, Android, and websites.

Milica Mihajlija provides an overview of how WebAssembly works and why it’s useful. (Preview larger image)

28. Is AOT compilation used?

Use AOT (ahead-of-time) compilation to offload some of the client-side rendering to the server, outputting usable results quickly. Finally, consider using Optimize.js to speed up initial load time by wrapping functions that need to be called immediately (although this may no longer be necessary).

From Fast By Default: Modern Loading Best Practices, by the one and only Addy Osmani. Slide 76.

29. Only provide legacy code to older browsers

Since ES2015 is so well supported in modern browsers, we can use babel-preset-env to only transpile the ES2015+ features that are not yet supported by the browsers we target. Then set up two builds, one in ES6 and one in ES5. As mentioned above, JavaScript modules are now supported in all major browsers, so use script type="module" to let browsers with ES module support load the ES6 build, while older browsers load the ES5 build via script nomodule. We can automate the whole process with the Webpack ESNext Boilerplate.
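In markup, this pattern boils down to something like the following minimal sketch (the bundle file names are placeholders):

```html
<!-- Modern browsers load the ES2015+ build and ignore the nomodule fallback -->
<script type="module" src="/js/app.es2015.js"></script>
<!-- Legacy browsers don't understand type="module" and load the ES5 build instead -->
<script nomodule src="/js/app.es5.js"></script>
```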

Note that we can now write module-based JavaScript that runs natively in the browser, without compilers or bundlers. The link rel="modulepreload" header provides a way to initiate early (and high-priority) loading of module scripts. Basically, it helps maximize bandwidth usage by telling the browser what it needs to fetch, so that it is not stuck with nothing to do during those long round trips. In addition, Jake Archibald has published a detailed article on gotchas and things to keep in mind about ES modules that is worth reading.

For Lodash, use babel-plugin-lodash, which loads only the modules you actually use in your source code. Your other dependencies may also depend on other versions of Lodash, so transform generic lodash requires into cherry-picked ones to avoid code duplication. This may save you quite a bit of JavaScript payload.

Shubham Kanodia has written a detailed low-maintenance guide on smart bundling: how to ship legacy code only to legacy browsers in production, with code snippets you can use directly.

Jake Archibald has published a detailed article on gotchas and things to keep in mind about ES modules, e.g. the fact that inline scripts are deferred until blocking external scripts and inline scripts are executed. (Preview larger image)

30. Is JavaScript differential serving used?

We want to send only the necessary JavaScript over the network, but that means slightly more focused and fine-grained delivery of these assets. A while back, Philip Walton introduced the idea of differential serving. The idea is to compile and serve two separate JavaScript bundles: the “regular” build, the one with Babel transforms and polyfills, served only to legacy browsers that actually need them, and another bundle with the same functionality but without the transforms and polyfills.

As a result, it helps reduce blocking of the main thread by reducing the amount of script the browser needs to process. Jeremy Wagner published a comprehensive article on differential serving in 2019, covering how to set it up in your build pipeline, from setting up Babel to the tweaks you will need to make in Webpack, as well as the benefits of doing all this work.
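As a rough illustration, the two builds could be driven by a Babel configuration along these lines; this is a minimal sketch, and the env names, browser targets and core-js options are illustrative rather than taken from Jeremy's article:

```js
// babel.config.js: a sketch of two build targets for differential serving.
module.exports = {
  env: {
    // "modern" bundle: only transpile what ES-module-capable browsers are missing.
    modern: {
      presets: [['@babel/preset-env', { targets: { esmodules: true } }]],
    },
    // "legacy" bundle: full transforms plus polyfills for older browsers.
    legacy: {
      presets: [
        ['@babel/preset-env', {
          targets: '> 0.25%, not dead',
          useBuiltIns: 'usage',
          corejs: 3,
        }],
      ],
    },
  },
};
```

The bundler then runs once per environment (e.g. with BABEL_ENV=modern and BABEL_ENV=legacy), and the resulting bundles are wired up with the module/nomodule pattern shown earlier.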

31. Identify and rewrite legacy code through incremental decoupling

Old projects are plagued with old and outdated code. Revisit your dependencies and assess how much time would be required to refactor or rewrite the legacy code that has been causing problems lately. Of course, it is always a big undertaking, but once you understand the impact of the legacy code, you can start with incremental decoupling.

First, set up metrics that track whether the ratio of legacy code calls is staying constant or going down, not up. Publicly discourage the team from using the library, and make sure your CI alerts developers if it is used in pull requests. Polyfills can help transition from legacy code to a rewritten code base that uses standard browser features.

32. Identify and remove unused CSS/JS

CSS and JavaScript code coverage in Chrome lets you see which code has been executed/applied and which has not. You can start recording coverage, perform actions on the page, and then explore the code coverage results. Once you have detected unused code, find those modules and lazy-load them with import() (see the entire thread). Then repeat the coverage profile and verify that it now ships less code on initial load.

You can use Puppeteer to programmatically collect code coverage, and Canary already lets you export code coverage results, too. As Andy Davies noted, you may want to collect code coverage for both modern and legacy browsers. Puppeteer has many other use cases as well, such as automated visual diffing or monitoring unused CSS with every build.
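A minimal sketch of collecting JavaScript coverage with Puppeteer might look like this (the URL is a placeholder, and the percentage is only a rough per-file indication):

```js
// Collect JS coverage with Puppeteer and report how many bytes were
// actually used on initial load.
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.coverage.startJSCoverage();
  await page.goto('https://example.com');
  const coverage = await page.coverage.stopJSCoverage();

  for (const entry of coverage) {
    const total = entry.text.length;
    const used = entry.ranges.reduce((sum, range) => sum + (range.end - range.start), 0);
    console.log(`${entry.url}: ${Math.round((used / total) * 100)}% used`);
  }
  await browser.close();
})();
```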

In addition, Purgecss, UnCSS and Helium can help you remove unused styles from your CSS. And if you are not certain whether a suspicious piece of code is used somewhere, you can follow Harry Roberts’ advice: create a 1×1px transparent GIF for a particular class and drop it into a dead/ directory, e.g. /assets/img/dead/comments.gif. After that, set that specific image as a background on the corresponding selector in your CSS, and wait a few months to see if the file ever shows up in your logs. If no entries appear, nobody had that legacy component rendered on their screen, and you can probably go ahead and delete it all.

For the more adventurous, you could even automate gathering unused CSS across a set of pages by monitoring DevTools using DevTools.

33. Reduce the size of JavaScript packages

As Addy Osmani points out, there is a high chance you are shipping full JavaScript libraries when you only need a fraction, along with outdated polyfills for browsers that do not need them, or just duplicate code. To avoid the extra overhead, consider using webpack-libs-optimizations, which removes unused methods and polyfills during the build process.

Add bundle auditing to your regular workflow as well. There might be some lightweight alternatives to heavy libraries you added years ago, e.g. Moment.js could be replaced with date-fns or Luxon. Research by Benedikt Rötsch showed that a switch from Moment.js to date-fns could shave around 300ms off first paint on 3G and a low-end mobile phone.
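As a purely illustrative sketch of that kind of swap (not code from the research itself), date-fns exposes small, tree-shakeable functions instead of one monolithic object:

```js
// Cherry-pick only the date helpers you actually use.
import { addDays, format } from 'date-fns';

// Moment.js equivalent: moment().add(7, 'days').format('YYYY-MM-DD')
const nextWeek = format(addDays(new Date(), 7), 'yyyy-MM-dd'); // date-fns v2 tokens
console.log(nextWeek);
```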

This is where a tool like Bundlephobia can help you find the cost of adding an npm package to your bundle. You can even combine these costs with a Lighthouse Custom Audit. This also applies to frameworks: by removing or trimming the Vue MDC Adapter (Material Components for Vue), styles drop from 194KB to 10KB.

Feeling adventurous? You could look into Prepack. It compiles JavaScript to equivalent JavaScript code, but unlike Babel or Uglify, it lets you write normal JavaScript code and outputs equivalent JavaScript that runs faster.

Instead of shipping the entire framework, you can even trim the framework and compile it into a raw JavaScript bundle that requires no additional code. Svelte does this, and so does the Rawact Babel plugin, which transpiles React.js components into native DOM operations at build time. Why? Well, as the maintainers explain: “react-dom includes code for every possible component/HTMLElement that can be rendered, including code for incremental rendering, scheduling, event handling, and so on. But some applications do not need all of these features (at initial page load). For such applications, it might make sense to use native DOM operations to build the interactive user interface.”

In Benedikt Rötsch’s article, he shows that a switch from Moment.js to date-fns could shave around 300ms off first paint on 3G and a low-end mobile phone. (Preview larger image)

34. Is predictive prefetch of JavaScript code blocks used?

We can use heuristics to decide when to preload JavaScript chunks. Guess.js is a set of tools and libraries that use Google Analytics data to determine which page a user is most likely to visit next from a given page. Based on user navigation patterns collected from Google Analytics or other sources, Guess.js builds a machine-learning model to predict and prefetch the JavaScript that will be required on each subsequent page.

Hence, every interactive element receives a probability score for engagement, and based on that score, a client-side script decides to prefetch the resource ahead of time. You can integrate the technique into your Next.js application, Angular and React, and there is a Webpack plugin that automates the setup process as well.

Obviously, you might be prompting the browser to consume unneeded data to prefetch pages that are never visited, so it is best to be quite conservative in the number of prefetch requests. A good use case is prefetching validation scripts required in the checkout, or speculative prefetching when a critical call-to-action (CTA) comes into the viewport.

Need something less sophisticated? Quicklink is a small library that automatically prefetches links in the viewport during idle time to speed up next-page navigations. However, it is also considerate of data traffic, so it does not prefetch on a 2G network or when Data-Saver is turned on.
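Conceptually, what such a library automates looks roughly like the following sketch, using only standard Web APIs (a simplified illustration, not Quicklink's actual implementation):

```js
// Prefetch in-viewport links during idle time, skipping slow or data-saving connections.
const idle = window.requestIdleCallback || ((cb) => setTimeout(cb, 1));
const prefetched = new Set();

function prefetch(url) {
  if (prefetched.has(url)) return;
  prefetched.add(url);
  const link = document.createElement('link');
  link.rel = 'prefetch';
  link.href = url;
  document.head.appendChild(link);
}

// Skip prefetching entirely on 2G or when Data-Saver is enabled.
const conn = navigator.connection;
if (!conn || (!conn.saveData && !/2g/.test(conn.effectiveType || ''))) {
  const observer = new IntersectionObserver((entries) => {
    entries.forEach((entry) => {
      if (!entry.isIntersecting) return;
      observer.unobserve(entry.target);
      idle(() => prefetch(entry.target.href)); // wait for idle time
    });
  });
  document.querySelectorAll('a[href]').forEach((a) => observer.observe(a));
}
```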

35. Benefit from optimizing for your target JavaScript engine

Research which JavaScript engines dominate your user base, then explore ways to optimize for them. For example, when optimizing for V8, which is used in Blink-based browsers, the Node.js runtime and Electron, make use of script streaming for monolithic scripts. It allows async or defer scripts to be parsed on a separate background thread once downloading begins, in some cases improving page loading times by up to 10%. In practice, use script defer in the head, so that the browser can discover the resource early and then parse it on the background thread.

Caveat: Opera Mini does not support script deferment, so if you are developing for India or Africa, defer will be ignored, and rendering will be blocked until the script has been evaluated (thanks, Jeremy!).

Progressive booting means using server-side rendering to get a quick First Meaningful Paint, but also including some minimal JavaScript to keep the Time-To-Interactive close to the First Meaningful Paint.

36. Client-side rendering or server-side rendering?

In both cases, our goal should be to set up progressive booting: use server-side rendering to get a quick First Meaningful Paint, but also include some minimal, necessary JavaScript to keep the Time-To-Interactive close to the First Meaningful Paint. If the JavaScript comes in too late after the First Meaningful Paint, the browser may lock up the main thread while parsing, compiling and executing late-discovered JavaScript, thereby handcuffing the site’s or application’s interactivity.

To avoid that, always break up the execution of functions into separate, asynchronous tasks, and use requestIdleCallback wherever possible. Consider lazy-loading parts of the UI using Webpack’s dynamic import() support, avoiding the load, parse and compile cost until users actually need them (thanks, Addy!).
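Combining those two ideas might look like this minimal sketch (the module path, function names and the two-second timeout are illustrative):

```js
// Defer non-essential UI work until the main thread is idle, and only then
// pay the cost of loading, parsing and executing the extra code.
requestIdleCallback(
  async (deadline) => {
    if (deadline.timeRemaining() > 0 || deadline.didTimeout) {
      const { initCommentsWidget } = await import('./comments-widget.js');
      initCommentsWidget(document.querySelector('#comments'));
    }
  },
  { timeout: 2000 } // make sure it eventually runs even on busy pages
);
```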

In essence, Time-To-Interactive (TTI) tells us the time between navigation and interactivity. The metric is defined by looking at the first five-second window after the initial content has rendered in which no JavaScript task takes longer than 50 milliseconds. If a task longer than 50 milliseconds occurs, the search for a five-second window starts over. As a result, the browser will first assume that it has reached the interactive state, then switch to frozen, and eventually switch back to interactive.

Once we reach the interactive state, we can boot non-essential parts of the app on demand or as time allows. Unfortunately, as Paul Lewis has noted, frameworks typically do not expose a simple concept of priority to developers, so progressive booting is not easy to implement with most libraries and frameworks. If you have the time and resources, use this strategy to ultimately boost performance.

So, client-side or server-side? If there is no visible benefit to the user, client-side rendering might not really be necessary; in fact, server-side-rendered HTML could be faster. Perhaps you could even pre-render some of your content with a static site generator and push it directly to a CDN, with some JavaScript on top.

Limit the use of client-side frameworks to pages that absolutely require them. Server-rendering and client-rendering are a disaster if done poorly. Consider pre-rendering at build time and inlining critical CSS dynamically to produce production-ready static files. Addy Osmani has given an excellent talk on the cost of JavaScript that might be worth watching.

37. Restrict the influence of third-party scripts

With all the performance optimizations under control, we often cannot control third-party scripts that come from business requirements. Third-party-script metrics are not influenced by the end-user experience, so all too often one single script ends up calling an obnoxious, long tail of third-party scripts, undermining a dedicated performance effort. To contain and mitigate the performance penalties these scripts bring along, it is not enough to just load them asynchronously (probably via defer) and accelerate them through resource hints such as dns-prefetch or preconnect.

As Yoav Weiss explains in his must-watch talk on third-party scripts, in many cases these scripts download resources that are dynamic. The resources change between page loads, so we do not necessarily know which hosts the resources will be downloaded from, or what those resources will be.

What are your options then? Consider using service workers to race resource downloads against a timeout: if a resource has not responded within the given timeout, return an empty response to tell the browser to carry on parsing the page. You can also log or block third-party requests that are unsuccessful or do not fulfill certain criteria. If you can, load the third-party script from your own server rather than from the vendor’s.

Casper.com published a detailed case study of how they reduced their response time by 1.7 seconds with self-hosted Optimizely. It might be worth it. (Image source) (Preview larger image)
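The service-worker timeout race mentioned above could look roughly like this minimal sketch (the hostname and the two-second timeout are illustrative):

```js
// sw.js: race a third-party request against a timeout and fall back to an
// empty response, so a slow tag can't hold up the rest of the page.
self.addEventListener('fetch', (event) => {
  const url = new URL(event.request.url);
  if (url.hostname !== 'third-party.example.com') return;

  const timeout = new Promise((resolve) =>
    setTimeout(() => {
      resolve(new Response('', { status: 200, headers: { 'Content-Type': 'text/javascript' } }));
    }, 2000)
  );
  event.respondWith(Promise.race([fetch(event.request), timeout]));
});
```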

Another option is to establish a Content Security Policy (CSP) to limit the impact of third-party scripts, such as disallowing the download of audio or video. The best option is to embed scripts via an iframe, so that they run in the context of the iframe and therefore do not have access to the DOM of the page, and, although they can run arbitrary code, they are restricted.

For example, you might have to allow a script to run via the iframe’s sandbox="allow-scripts" attribute while keeping the iframe’s other capabilities locked down.

Consider using an Intersection Observer; that would keep the ad inside the iframe while still letting you dispatch events or get the information you need from the DOM (for example, ad visibility). New policies, such as Feature Policy, resource size limits and CPU/bandwidth priorities, are worth watching to limit harmful web features and scripts that slow down the browser, such as synchronous scripts, synchronous XHR requests, document.write and outdated implementations.

To stress-test third parties, examine the bottom-up summary in the Performance profile panel in DevTools, and test what happens if a request is blocked or has timed out. For the latter, you can use WebPageTest’s Blackhole server, blackhole.webpagetest.org, which you can point specific domains to in your hosts file. Preferably, self-host and use a single hostname, but also generate a request map that exposes fourth-party calls and detects when a script changes. You can use Harry Roberts’ approach to auditing third parties and produce spreadsheets like this one. Harry also explains the auditing workflow in his talk on third-party performance and auditing.

Image credit: Harry Roberts

38. Set the HTTP cache header

Double-check that Expires, Max-Age, Cache-Control and other HTTP cache headers are set correctly. In general, resources should be cacheable either for a very short time (if they are likely to change) or indefinitely (if they are static); you can just change their version in the URL when needed. Disable the Last-Modified header, because any static resource carrying it will result in a conditional request with an If-Modified-Since header, even if the resource is in the cache. The same goes for ETag.

Use Cache-Control: immutable, designed for fingerprinted static resources, to avoid revalidation (as of December 2018, Firefox, Edge and Safari all support it; Firefox supports it only on https:// transactions). In fact, “across all the pages in the HTTP Archive, 2% of requests and 30% of sites appear to include at least one immutable response. Additionally, most of the sites that use it set the directive on static assets with a long freshness lifetime.”

Remember stale-while-revalidate? As you probably know, we specify the caching time with the Cache-Control response header, for example: Cache-Control: max-age=604800. After 604,800 seconds have passed, the cache will re-fetch the requested content, causing the page to load slower. This slowdown can be avoided with stale-while-revalidate; it essentially defines an extra window of time during which the cache can use a stale asset as long as it revalidates it asynchronously in the background. Thus, it “hides” latency (both in the network and on the server) from clients.

In October 2018, Chrome published an intent to ship handling of stale-while-revalidate in the HTTP Cache-Control header, which should improve subsequent page-load latencies, since stale assets are no longer on the critical path. Result: zero RTT for repeat page views.
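To make the directives above concrete, here is a minimal sketch of how such headers could be set in a Node/Express static server (the paths and lifetimes are illustrative, not prescriptive):

```js
// A sketch of the caching policies discussed above.
const express = require('express');
const app = express();

// Fingerprinted assets (e.g. app.3f9a1c.js): cache "forever" and skip revalidation.
app.use('/static', express.static('dist/static', {
  setHeaders: (res) => {
    res.setHeader('Cache-Control', 'public, max-age=31536000, immutable');
  },
}));

// HTML: cache briefly, then serve stale while revalidating in the background.
app.get('*', (req, res) => {
  res.setHeader('Cache-Control', 'max-age=60, stale-while-revalidate=604800');
  res.sendFile('dist/index.html', { root: __dirname });
});

app.listen(3000);
```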

You can use Heroku’s primer on HTTP caching headers, Jake Archibald’s “Caching Best Practices” and Ilya Grigorik’s HTTP caching primer as guides. Also, be wary of the Vary header, especially in relation to CDNs, and watch out for the Key header, which helps avoid an additional round trip for validation whenever a new request differs slightly (but not significantly) from previous requests (thanks, Guy!).

Also, double-check that you are not sending unnecessary headers (such as X-Powered-By, Pragma, X-UA-Compatible, Expires and others), and that you include useful security and performance headers (such as Content-Security-Policy, X-XSS-Protection, X-Content-Type-Options and others). Finally, keep in mind the performance cost of CORS requests in single-page applications.

  • Front-end Performance Optimization Year 2019 – Part 1
  • Front-end Performance Optimization Year 2019 – Part 2
  • Front-end Performance Optimization Year 2019 – Part 3
  • Front-end Performance Optimization Year 2019 – Part 4 (this article)
  • Front-end Performance Optimization Year 2019 – Part 5
  • Front-end Performance Optimization Year 2019 – Part 6

If you find any mistakes in the translation or other areas that could be improved, you are welcome to revise the translation and submit a PR through the Nuggets Translation Project, for which you can also earn corresponding reward points. The permanent link at the beginning of this article is the Markdown link to this article on GitHub.


The Nuggets Translation Project is a community that translates high-quality technical articles from across the Internet, sharing the translated articles on Nuggets (Juejin). The content covers Android, iOS, front-end, back-end, blockchain, product, design, artificial intelligence and other fields. If you want to see more high-quality translations, please keep following the Nuggets Translation Project, its official Weibo account, and its Zhihu column.