🔝 General idea of static resource optimization

With the development of the Web, JavaScript has come a long way from a simple scripting language to the foundation of large, complex front-end applications. That makes it a central part of today's front-end applications, so in this section we'll start with the familiar topic of JavaScript.

1. Cut down on unnecessary requests

When optimizing JavaScript, we stick to the general idea introduced earlier: first of all, cut down on unnecessary requests.

1.1. Code split vs. loading on demand

If you are familiar with webpack, you will know this feature well.

Although an application's overall code base may be large, in many cases the page we are visiting does not need the components of every other page. We can wait until those pages are actually visited and load their code dynamically on demand. The core idea looks like this:

document.getElementById('btn').addEventListener('click', e => {
    // Load the chat component's resource chat.js here
    const script = document.createElement('script');
    script.src = '/static/js/chat.js';
    document.getElementsByTagName('head')[0].appendChild(script);
});

Here, when the button is clicked, a <script> element is dynamically added to the page, so chat.js is fetched and executed only when it is actually needed.

Code splitting is typically used in conjunction with build tools. Taking webpack as an example, the most common way to tell it to split code in daily use is through dynamic import [1]. When webpack encounters a dynamic import while parsing the code at compile time, it assumes the module needs to be loaded dynamically and splits it into a separate chunk; its child resources are treated the same way (unless they are also referenced by other, non-dynamic modules).
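As a concrete illustration, dynamic import() returns a Promise that resolves to the module's namespace object. The sketch below is runnable as-is only because Node's built-in path module stands in for an application chunk like './chat.js'; in webpack, each such call site becomes its own split chunk.

```javascript
// Hedged sketch of loading a module on demand with dynamic import().
// 'path' stands in here for an application chunk such as './chat.js'.
async function loadOnDemand(specifier) {
  // Resolves to the module namespace object (its exports).
  const mod = await import(specifier);
  return mod;
}

// The module is fetched and evaluated only when this call actually runs,
// e.g. inside a click handler.
loadOnDemand('path').then((mod) => {
  console.log(typeof mod.join); // → function
});
```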

One of the most common scenarios for code splitting in webpack is route-based splitting. At present, many front-end applications take the form of an SPA (single-page application), or a combination of SPA and MPA (multi-page application), which involves front-end routing. Since different pages serve different business needs, route-based code splitting is a best practice. To learn how to implement route-level code splitting in React-Router v4, see this article [2].

Of course, if you don't use a build tool like webpack, you can also choose an AMD module loader (such as RequireJS) to load dependencies asynchronously at runtime.

1.2. Code merge

As mentioned in the general idea, one way to reduce requests is to merge resources. Imagine an extreme scenario: if we didn't bundle the code in node_modules, a page might have to request dozens or even hundreds of dependency scripts concurrently before its own script could run. With too many concurrent requests under the same domain, requests get queued, and transfers may also be held back by TCP slow start.

Of course, many popular build tools (webpack/Rollup/Parcel) bundle dependencies together by default. But be careful when using other tools: with FIS3, for example, some common libraries or npm dependencies need to be merged through explicit configuration; and if you use a task runner like Gulp, you also need to handle bundling yourself.

In short, don't leave your code scattered around in fragments.

2. Reduce package size

2.1. Code compression

A common way to compress JavaScript code is source-level minification with UglifyJS (or its ES6-capable successor, Terser). It shrinks the code without changing its logic, for example by replacing variables with shorter names and removing redundant whitespace and newlines, and it has basically become standard in front-end development. Webpack enables it by default in production mode; for the task runner Gulp there is the gulp-uglify plugin.

Another common method of code compression is to apply a generic text compression algorithm to the transferred resource; gzip is the most common choice.

(The original post includes a DevTools screenshot here: the Content-Encoding response header shows gzip, the transferred size is 22.0 KB, and the uncompressed size is 91.9 KB, so the saving is substantial.) Most servers provide a module for gzip, so we don't need to write a compression module ourselves. Nginx, for example, includes ngx_http_gzip_module [3], which can be enabled with a simple configuration:

gzip            on;
gzip_min_length 1000;
gzip_comp_level 6;
gzip_types      application/javascript application/x-javascript text/javascript;

2.2. Tree Shaking

Tree Shaking first reached the front end mainly through Rollup and was later implemented in webpack as well. Its essence is to reduce code size by statically detecting parts of the source that will never be used and removing them. For example:

// module A
export function add(a, b) {
    return a + b;
}

export function minus(a, b) {
    return a - b;
}

// module B
import { add } from 'module.A.js';
console.log(add(1, 2));

As you can see, module B imports module A but uses only the add method. The minus method therefore becomes dead code: there is no point bundling it, since it will never be used.

Note that the code above uses ESM module syntax, not CommonJS. This matters because Tree Shaking is a form of static analysis, and ESM is itself a static module specification, with all dependencies determined at compile time. For how to make the most of it in webpack, check out this section of the webpack documentation [4]; more on Tree Shaking can be found here [5].

Note that Tree Shaking depends heavily on ESM. For example, the popular front-end utility library Lodash ships a non-ESM build when installed directly; to support Tree Shaking, you need to install its ESM version, lodash-es, instead [6].

In addition, Chrome DevTools can also help you see the usage coverage of loaded JavaScript code [7].

2.3. Optimize the use of polyfill

A perennial topic in front-end development is browser compatibility. To make new browser features easier to adopt, developers created polyfills: implementations of a new API for browsers that don't support it natively. When those browsers are later dropped, the business code doesn't need to change; you simply remove the corresponding polyfill.

This comfortable development experience has made polyfills an integral part of many projects. Polyfills come at a cost, however: they increase code size. After all, a polyfill is just JavaScript, not something built into the browser, and the more you include, the larger your code gets. So loading only the polyfills you really need helps reduce your code size.
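As an illustration of what a polyfill is (String.prototype.padStart is used here purely as a familiar example; in practice you would take implementations from core-js or MDN rather than hand-write them), note the feature check: browsers that already have the API skip the assignment entirely and pay no runtime cost, only the download cost.

```javascript
// Illustrative polyfill pattern: define the API only if it is missing.
if (!String.prototype.padStart) {
  String.prototype.padStart = function (targetLength, padString) {
    padString = padString === undefined ? ' ' : String(padString);
    let s = String(this);
    while (s.length < targetLength && padString.length > 0) {
      // Prepend as much of the pad as still fits.
      s = padString.slice(0, targetLength - s.length) + s;
    }
    return s;
  };
}

console.log('5'.padStart(3, '0')); // → 005
```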

First, not every project has the same compatibility requirements, so determine which polyfills are appropriate based on your own business scenario. However, with so many features involved, importing polyfills or adding Babel transforms by hand is costly. For this, browserslist can help; many front-end tools (babel-preset-env/autoprefixer/eslint-plugin-compat) rely on it. You can see how it works here.
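For instance, here is a hedged sketch of how this fits together with Babel (the option names are real @babel/preset-env options, but the specific targets and corejs values are only illustrative):

```javascript
// babel.config.js — sketch: with useBuiltIns: 'usage', preset-env injects
// only the core-js polyfills that your code uses AND your targets lack.
module.exports = {
  presets: [
    ['@babel/preset-env', {
      // targets may also come from a shared .browserslistrc file
      targets: '> 0.5%, last 2 versions, not dead',
      useBuiltIns: 'usage',
      corejs: 3
    }]
  ]
};
```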

Second, a technique called Differential Serving [8] was presented at Chrome Dev Summit 2018: it uses the browser's native module support to avoid loading unnecessary polyfills.

<script type="module" src="main.mjs"></script>
<script nomodule src="legacy.js"></script>

This way, browsers that understand the module type (and therefore support many modern features) load only main.mjs (without polyfills), while older browsers load legacy.js (with polyfills).

Finally, the ideal is to serve polyfills according to browser characteristics, so the same project delivers different polyfill files to different browsers. For example, polyfill.io returns the required set of polyfills based on the client characteristics in the request headers and the API features you ask for.
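A hedged example of what such a request looks like (the feature list is illustrative; the service reads the User-Agent and returns only what that particular browser is missing):

```html
<!-- polyfill.io v3: list the features your code needs, comma-separated -->
<script src="https://polyfill.io/v3/polyfill.min.js?features=Promise,Array.prototype.includes"></script>
```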

2.4. webpack

Webpack is now the build tool for a great many front-end applications, so it is worth singling out here. You can use webpack-bundle-analyzer to inspect the size of each module in the bundled code.

Most of the time, an oversized bundle is mainly due to importing unsuitable packages. For optimizing dependency imports, here are some suggestions for reducing bundle size [9].

3. Parse and execute

In addition to the time spent downloading JavaScript, script parsing and execution can also be time-consuming.

3.1. Parsing time of JavaScript

In many cases, we ignore the cost of parsing JavaScript files. A JavaScript file must be parsed and compiled by the JavaScript engine even if it contains no immediately invoked function.

(The original post shows a DevTools profile here in which parsing and compiling alone take several hundred milliseconds.) Looked at from this angle, removing unnecessary code also helps reduce the cost of Parse and Compile.

Also, as we saw in the previous section, parsing, compiling, and executing JavaScript block page parsing and delay user interaction. So, byte for byte, loading JavaScript can have a greater impact on performance than loading images, which can be processed in parallel on other threads.

3.2. Avoid Long tasks

For some single-page applications, a lot of logic may need to run once the core JavaScript resources have loaded. If this is handled poorly, a JavaScript task can run too long and block the main thread.
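A common mitigation, shown here as a framework-agnostic sketch, is to split one long computation into many short slices and yield back to the event loop between slices (via setTimeout, or requestIdleCallback in browsers that support it):

```javascript
// Sketch: break one Long Task into many short ones.
function* chunked(items, size) {
  for (let i = 0; i < items.length; i += size) {
    yield items.slice(i, i + size);
  }
}

async function processAll(items, size, handle) {
  for (const slice of chunked(items, size)) {
    slice.forEach(handle);
    // Yield so the main thread can render and respond to input.
    await new Promise((resolve) => setTimeout(resolve, 0));
  }
}

// Usage: ten items in slices of three → four short tasks instead of one long one.
processAll([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], 3, (n) => n * 2);
```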

For example, in the profile from the original post, a Long Task appears exactly where the frame rate drops sharply, along with a script execution of over 700 ms, and FCP and DCL are pushed back. To some extent, this Long Task blocks the main thread and slows down page loading, seriously hurting front-end performance and experience.

To learn more about Long Task, take a look at the standards for Long Task [10].

3.3. Is a framework really needed

If we asked people today whether we need React, Vue, Angular, or some other front-end framework (or library), the answer would probably be yes.

But there's another way to think about it. One of the problems libraries and frameworks solve for us is rapid development and maintainable code. Library and framework authors often have to trade off maintainability, ease of use, and performance. For a complex, site-wide application, adopting the established programming paradigm a framework provides will improve the quality of your work at every level. But for some pages, can we do the opposite?

For example, suppose the product manager reports that our landing page loads too slowly and users are bouncing. You would start optimizing performance, applying the measures from this performance tour. But have you ever considered whether a static page like a landing page could simply go back to basics?

Maybe you use the React stack: you load React, Redux, react-redux, a pile of reducers… the JavaScript may well approach 1 MB. More importantly, if the page is used for user acquisition, most visitors will arrive with a cold cache. For a static page (or one with very simple form interaction), users end up paying a great deal for what could be fewer than 50 lines of code. So sometimes it is a valid strategy to implement such pages in plain JavaScript. Netflix has an article about how they dramatically reduced load and interaction response times this way [11].

Again, I’m not saying don’t use frameworks/libraries, just don’t get stuck in a certain mindset. Be the master of your tools, not the slave of them.

3.4. Optimization for code

Please note that as of this writing (2019.08), the following is not recommended for use in a production environment.

Another way to optimize code is to get it into an optimal state ahead of time. This is essentially compile-time optimization, which is common in statically compiled languages such as C++.

One such tool is Facebook's Prepack. Take the following code, for example:

(function () {
    function hello() { return 'hello'; }
    function world() { return 'world'; }
    global.s = hello() + ' ' + world();
})();

Can be optimized as:

s = 'hello world';

Many times, however, there is a conflict between code size and performance. Prepack is also not mature enough to be recommended for use in production environments.

4. Caching

Caching for JavaScript is basically the same as the caching we covered in Part 1; if you can't remember it, go back to our first stop.

4.1. Release and deployment

A quick note: in most cases, HTTP caching is enabled for static resources such as JavaScript and CSS, using either strong caching or negotiated caching. When we release an update to a strongly cached resource, how do we get browsers to discard their cache and request the new resource?

A common, coordinated approach works as follows: first, each file name includes a hash of the file's content, so when the content changes, the file name changes with it; second, the HTML page itself is not strongly cached. As a result, static resources with updated content are fetched fresh because their URIs have changed, while unchanged resources can still be served from cache.

The above mainly involves the release and deployment of front-end resources. For details, please refer to this article [12], which will not be expanded here.

4.2. Package and merge the base library code

To make better use of the cache, we typically separate out the parts that are not easily changed. For example, a React stack project might package base libraries like React, Redux, and React-Router into a single file.

This has the advantage of not invalidating the entire cache, even if the business code changes frequently, because the base library is packaged separately. The infrastructure/libraries, common and util in the project can still take advantage of caching, and users won’t have to spend unnecessary bandwidth to re-download the base library every time a new version is released.

Therefore, a common strategy is to package the base-library code separately and serve it with a long cache lifetime, so that the cache keeps access fast even after a new release. This approach essentially separates content with different change cadences and isolates the impact of changes.

In webpack v3.x and earlier, common libraries could be extracted with the CommonsChunkPlugin. After upgrading to v4.x, this is replaced by the new optimization.splitChunks configuration:

// webpack.config.js
module.exports = {
    // ...
    optimization: {
        splitChunks: {
            chunks: 'all',
            minChunks: 1,
            cacheGroups: {
                commons: {
                    minChunks: 1,
                    automaticNamePrefix: 'commons',
                    test: /[\\/]node_modules[\\/](react|redux|react-redux)[\\/]/,
                    chunks: 'all'
                }
            }
        }
    }
};

4.3. Reduce cache invalidation caused by improper compilation of Webpack

Since webpack has become the dominant build tool on the front end, here are some considerations for using it in ways that avoid unnecessary cache invalidation.

As we know, webpack assigns each module a unique module ID, and by default it uses incrementing numeric IDs. This leads to a problem: a module's own code may be unchanged, but because other modules were added or removed, the IDs of all subsequent modules shift, and the MD5 hashes of their files change with them. Another problem is that webpack's entry file contains its runtime and business module code, plus a small manifest for asynchronous loading, so a change to any module inevitably propagates to the entry file. Both issues mean that after a release, resources whose source did not change can still lose their cache.

There are some common ways to get around these problems.

4.3.1. Use a hash instead of incremental IDs

You can use the HashedModuleIdsPlugin, which derives module IDs from a hash of each module's relative path. Alternatively, use the optimization.moduleIds option provided by webpack, set it to 'hashed', or choose any other method that suits you.
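A minimal sketch (both approaches ship with webpack itself; pick one):

```javascript
// webpack.config.js — stabilize module IDs across builds
const webpack = require('webpack');

module.exports = {
  // ...
  // Option A: the plugin, which hashes each module's relative path
  plugins: [new webpack.HashedModuleIdsPlugin()],
  // Option B (recent webpack 4 versions): the equivalent built-in setting
  // optimization: { moduleIds: 'hashed' }
};
```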

4.3.2. Separate Runtime Chunk

The optimization.runtimeChunk option tells webpack to extract the runtime (including the manifest) into a separate chunk, limiting as much as possible the range of files a change can affect.

// webpack.config.js
module.exports = {
    // ...
    optimization: {
        runtimeChunk: {
            name: 'runtime'
        }
    }
};

If you are not familiar with how the WebPack Modular Runtime works, check out this article [13].

4.3.3. Use records

You can use the recordsPath option to have webpack emit a JSON file recording module information, including module identifiers, for use in later compilations. This lets webpack reuse the recorded information in subsequent builds and avoid invalidating the cache of bundles that have already been split off.

// webpack.config.js
const path = require('path');

module.exports = {
  // ...
  recordsPath: path.join(__dirname, 'records.json')
};

If you are interested in the above ways of avoiding or reducing cache invalidation, you can also read this article [14]. Webpack v5.x also has plans around module and chunk IDs to further improve long-term caching.

If you like this series, please follow my blog or subscribe to the RSS feed.


“Performance Tuning” series

  1. Bring you a full grasp of front-end performance optimization 🚀
  2. How can caching be used to reduce remote requests?
  3. How can I speed up requests?
  4. How to speed up page parsing and processing?
  5. What is the general idea of static resource optimization?
  1. How to optimize performance for JavaScript? (this article)
    2. 🔜 How to optimize the CSS performance?
    3. Graphics are great, but they can also cause performance problems
    4. Do fonts also need performance optimization?
    5. How to optimize performance for video?
  6. How can you avoid performance problems at run time?
  7. How can preloading improve performance?
  8. The end of the journey

At present, all the content has been published to the ✨ fe-performance-journey ✨ repository, and it will be synced to Juejin over time. If you want to read it as soon as possible, you can go straight to the repository.


The resources

  1. Proposal Dynamic Import
  2. Code split in React-Router4
  3. Module ngx_http_gzip_module
  4. Tree Shaking – webpack
  5. Tree Shaking Performance Optimization Practices – Principles
  6. Tree Shaking for Lodash
  7. CSS and JS code coverage – Chrome DevTools
  8. Chrome Dev Summit 2018
  9. Optimize your libraries with webpack
  10. Long Tasks API 1
  11. A Netflix Web Performance Case Study
  12. How do you develop and deploy front-end code in large companies?
  13. Advanced Webpack: Modular design and implementation of the front-end runtime
  14. Separating a Manifest
  15. The cost of JavaScript in 2019
  16. JavaScript performance in 2019
  17. webpack 4: Code Splitting, chunk graph and the splitChunks optimization
  18. Comparison and selection of text compression algorithms
  19. Briefly talk about GZIP compression principle and daily application
  20. Text Compression
  21. Better tree shaking with deep scope analysis
  22. How we reduced our initial JS/CSS size by 67%