Front-end optimization is a big topic that takes a long time to understand, so this article gives a systematic summary of rendering optimization, packaging optimization and code optimization, and branches out into a few related questions worth paying attention to. The article may be a little long, but please read it to the end. Writing is not easy, so if you find it helpful, a like would be appreciated, thanks in advance. Of course, if anything is wrong, please point it out; I will actively improve, and we can grow together.

Rendering optimization

Rendering optimization is an important part of front-end optimization; a good first-screen time gives users a good experience. Note that different teams define "first-screen time" differently. Some teams define it as the time from when the page starts loading to when the first image is rendered. From a user-experience point of view that alone doesn't cut it, so other front-end teams define first-screen time as the time from when the page starts loading to when the user can actually operate the page normally. Let's go with the latter definition.

JS/CSS loading order

Before we talk about rendering optimization, we need to cover one more thing: the classic question "what happens when you enter a URL in the browser address bar?" Once we understand that, we can understand how the loading order of JS and CSS affects rendering.

Question 1: What happens when you enter a URL in the address bar

This question comes up often; some people answer it concisely, others in more detail. Here is the main process

  • First, the URL is parsed and a DNS lookup is performed to resolve it to an IP address
  • The server is located by that IP address, and the browser and server establish a connection with a TCP three-way handshake. In the case of HTTPS, a TLS connection is also established and the encryption algorithms are negotiated. Another issue worth noting here is "the difference between HTTPS and HTTP" (discussed below).
  • Once the connection is established, the browser starts requesting files. At this point caching comes into play: whether the file is read from the cache after the connection is established or fetched directly from the server depends on how the back end is configured, which raises another question worth caring about, "the browser caching mechanism", discussed below. For now assume there is no cache and the file is fetched directly
  • The HTML file is fetched first and the DOM tree is built. Downloading and parsing happen together: the browser does not wait for the whole HTML file to download before parsing it, which would waste time, but parses it bit by bit as it downloads
  • When the parser reaches the head of the HTML, another question appears: where are the CSS and JS? Different positions lead to different rendering behavior, which brings up another concern: "Where should CSS and JS be placed, and why?" For now, let's continue with the correct placement (CSS in the head, JS at the end of the body).
  • The CSS file is likewise parsed while it downloads, and the CSSOM tree is built. Once the DOM tree and the CSSOM tree are both ready, the browser combines them into a render tree.
  • Style calculation. The sentence above, "the DOM tree and the CSSOM tree are combined into a render tree", is a bit general; there are actually a few more detailed steps, although the general answer is usually enough. The first of these steps is style calculation: once the DOM tree and the CSSOM tree are available, the browser calculates styles, mainly finding the matching styles for the nodes in the DOM tree
  • Build the layout tree. Once the styles are calculated, the browser builds the layout tree, mainly finding the position on the page for each node in the DOM tree and excluding hidden elements such as those with "display: none"
  • After the layout tree is complete, the browser also builds a layer tree, mainly to handle scrollbars, z-index, position and other operations that create stacking contexts
  • The layer tree is divided into tiles, and rasterization produces the bitmaps for the tiles under the viewport. The main reason is that a page can be several screens long; rendering all of it at once would be wasteful, so the browser finds the tiles that fall inside the viewport and renders those first
  • The final step draws the whole page, and during rendering, rearrangements (reflows) and redraws (repaints) may occur. This is also a popular question: "Why do rearrangements and redraws affect rendering, and how can they be avoided?"
  • The process above roughly covers everything from entering a URL to the page being rendered; it touches on several questions worth caring about, which are detailed below

Question 2: How the order of JS and CSS affects rendering

We described the whole rendering process above, but not the effect of CSS and JS on rendering. The render tree can only be built from the DOM tree plus the CSSOM tree, so building the CSSOM tree as early as possible is an important optimization. If the CSS file were placed at the end, the whole process would become serial: first parse the DOM, then parse the CSS. That is why CSS always goes in the head, so that the DOM tree and the CSSOM tree are built at the same time.

Now look at JS: executing JS blocks the construction of the DOM tree, so if there is JS in the head and it is not loaded asynchronously, the DOM tree cannot be built while that JS runs and the page stays blank. That is why we usually put JS files at the end of the body. Putting them at the end is not entirely problem-free either, just less problematic: if the JS file is very large, it runs for a long time after loading, and the page may not respond to interaction during that time. That is a separate problem, but it is still better than a white screen: a white page has nothing at all, whereas this page is merely not smooth to operate.

There’s another reason why you put your JS script at the end, because sometimes your JS code has to manipulate the DOM node, and if you put it in the head, the DOM tree hasn’t been built yet, and you can’t get the DOM node, and you try to use it, you’ll get an error, and if you don’t handle the error properly, the page will crash

Question 3: Why do rearrangements and redraws affect rendering and how can they be avoided?

Why rearrangement and repainting affect rendering, which one is more expensive, and how to avoid them is a frequently asked question. Let's talk about repainting first

  • redraw

    Redraw (repaint) refers to operations that do not affect the layout of the interface, such as changing a color. As the rendering process above shows, after a redraw the browser only needs to redo the style calculation and can then paint directly, so the impact on rendering is relatively small

  • rearrangement

    Rearrangement (reflow) refers to operations that do affect the layout of the interface, such as changing width and height or hiding a node. A rearrangement is not as simple as recalculating styles, because the layout has changed; according to the rendering process above, it involves style calculation, layout tree reconstruction and layer tree reconstruction, so it has a relatively high impact on browser rendering

  • How to avoid them

    • Use CSS to do what you can with CSS
    • Do as little DOM manipulation as possible, and use createDocumentFragment wherever you can
    • If you must manipulate styles with JS, merge the changes into as few operations as possible (a sketch follows this list)
    • Debounce resize event handlers so they fire as little as possible
    • Specify the width and height of images in advance so loading them does not shift the layout
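
Here is a minimal sketch of what "merging style changes" means (my own example; the element and class names are hypothetical):

// Sketch: batch style writes so they trigger at most one rearrangement
const box = document.querySelector('.box'); // hypothetical element

// Bad: three separate writes, each of which can trigger a rearrangement
// box.style.width = '100px';
// box.style.height = '200px';
// box.style.margin = '10px';

// Better: merge the writes into a single operation
box.style.cssText = 'width: 100px; height: 200px; margin: 10px;';

// Or toggle a pre-defined CSS class so the browser recalculates layout only once
box.classList.add('box--expanded');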

Question 4: The browser caching mechanism

Browser caching is a common question. I will explain it from three angles: the types of caching, how caching is implemented, and where the cache is stored

Caching types

We usually talk about two kinds of browser caching: the mandatory (strong) cache and the negotiated cache. The implementation follows below; here are the concepts first

  • Negotiated cache

    A negotiated cache means the file has been cached, but whether it can be read from the cache has to be negotiated with the server, depending on the request/response header fields described below. It is important to note that a negotiated cache still sends a request

  • Mandatory cache

    A mandatory cache means the file is fetched directly from the cache, without sending a request

Cache implementation

  • Mandatory cache

In HTTP/1.0, mandatory caching is controlled with Expires, a response header field that indicates the expiration time of the file. It is an absolute time, so when the server's time zone differs from the browser's, the cache can be invalidated incorrectly. To solve this problem, HTTP/1.1 introduced a new response header, Cache-Control, with the following optional values

Cache-Control
  • max-age: the relative time after which the cache expires
  • public: both the client and proxy servers may cache the response
  • private: the response is cached only on the client
  • no-cache: the negotiated-cache marker; the file is cached but must be validated with the server before use
  • no-store: the file is not cached at all

In HTTP/1.1 you force caching with something like Cache-Control: max-age=600; because the time is relative, the Expires time-zone issue goes away
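
As a minimal sketch (my own example, not from the original article) of what setting a mandatory cache looks like on the server side, using Node's built-in http module:

const http = require('http');

http.createServer((req, res) => {
  // cache on the client and intermediate proxies for 600 seconds
  res.setHeader('Cache-Control', 'public, max-age=600');
  res.end('cached response body');
}).listen(3000);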

  • Negotiated cache

    The negotiated cache uses the Last-Modified/If-Modified-Since and ETag/If-None-Match pairs of response and request headers.

    Last-Modified / If-Modified-Since
    • The first time the browser requests a file, the server's response header returns Last-Modified, recording when the file was last changed
    • On the next request, the browser sends an If-Modified-Since header containing the time returned by Last-Modified. The server compares this value with the file's modification time on its side; if they are the same, the file has not been modified, so it returns 304 and the browser reads the file from the cache
    ETag / If-None-Match

    Because Last-Modified has a time granularity of one second, a file may be modified several times within 1s and the mechanism above will not notice. To cover this case, HTTP/1.1 introduced the ETag field. The server generates a marker based on the contents of the file, such as "W/"5f9583bd-10a8"", and on the next request compares it with the If-None-Match header to tell whether the file has changed
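
To make the negotiation concrete, here is a minimal sketch (my own example; the ETag is hard-coded where a real server would derive it from the file contents) of answering If-None-Match with 304, again using Node's http module:

const http = require('http');

const body = 'file contents';
const etag = 'W/"5f9583bd-10a8"'; // in practice derived from the file contents

http.createServer((req, res) => {
  res.setHeader('ETag', etag);
  if (req.headers['if-none-match'] === etag) {
    // unchanged: tell the browser to reuse its cached copy
    res.statusCode = 304;
    res.end();
  } else {
    res.statusCode = 200;
    res.end(body);
  }
}).listen(3000);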

Where to cache

Now that we know the types of caching and how they are implemented, where is the cache actually stored? Open the Nuggets (Juejin) site and you can see the following information in the Network panel: there are two cache sources, from disk cache and from memory cache

from memory cache

This cache lives in memory. The advantage is that it is fast, but it is short-lived: when you close the tab, the cache is gone.

from disk cache

This cache lives on disk. It is slower than memory, but still much faster than a network request. The advantage is that it persists, even after the tab is closed

When is the memory cache used and when is the disk cache used?

It is hard to find a standard answer to this online. The consensus is that the browser automatically stores JS and image files in memory, while CSS files, which rarely change, are stored on disk. If you open the Nuggets site, most files follow this rule, but a few JS files are cached on disk. So what exactly is the storage mechanism? With this question in mind I read many articles; although I never found a definitive answer, one Zhihu answer offers some ideas. Here is a quote from that Zhihu respondent

  • The first phenomenon (using an image as an example): visit -> 200 -> quit the browser and come back -> 200 (from disk cache) -> refresh -> 200 (from memory cache). Conclusion: Chrome is smart enough to reason that since the file was already read from disk, the second time it should come from memory, which is faster. (laughs)

  • The second phenomenon (an image as an example): whenever the image is base64, I see it come from memory cache. Conclusion: parsing and rendering the image is so laborious that it should be done once, kept in memory, and simply reused when needed

  • The third phenomenon (JS and CSS as an example): when testing static resources, I find that large JS/CSS files go straight to the disk cache. Conclusion: Chrome decides files that big would take too much memory, so they can stay on the hard drive and just load a little more slowly.

  • The fourth phenomenon: in private (incognito) mode, almost everything comes from memory cache. Conclusion: in private mode nothing of yours should be exposed, so it is kept in memory; when you close the window, it dies with it.

Although the points above are humorous, we can find part of the answer in them. But there is another Zhihu answer I agree with more

The browser itself is made up of several processes, and to save memory the operating system may swap part of the memory out to the disk swap area; the swap follows a policy, the most common being LRU.

When to use disk and when to use memory is entirely under the browser's control; when memory is running low, it may decide to use disk. So my summary is as follows

  • Larger files are cached on disk, because memory is limited and disk space is larger
  • Smaller files, such as JS and images, are stored in memory
  • CSS files are usually stored on disk
  • As a special case, since memory is limited, the browser will move some JS files to disk according to its own built-in algorithm

Question 5: The difference between HTTPS and HTTP

For the differences between HTTPS and HTTP, you can talk about how the HTTPS client and server establish a connection, how the encryption algorithms are negotiated, and even symmetric encryption, asymmetric encryption, certificates and so on; it is quite a long story. See the HTTPS article I wrote separately earlier for the details.

Request optimization

To render faster, we put JS at the end and CSS in the head, we take care to minimize rearrangements and redraws when writing JS, and we keep HTML and CSS as concise and non-redundant as possible so the DOM tree and the CSSOM tree can be built faster. Now let's talk about request optimization, which can start from two angles: the number of requests and the time each request takes

Reduce the number of requests

  • Inline small images as Base64 (see the url-loader sketch after this list)
  • Use Sprite images to merge multiple small images
  • Use caching, as discussed above
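
A minimal webpack sketch of the Base64 point (my own example; the 8 KB limit is just an assumption to tune per project): url-loader inlines images below the limit as Base64 data URIs, so they cost no extra request.

module: {
  rules: [
    {
      test: /\.(png|jpe?g|gif)$/,
      use: [
        {
          loader: 'url-loader',
          options: {
            limit: 8192 // images under ~8KB are inlined as Base64
          }
        }
      ]
    }
  ]
}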

Reduce request time

  • Compress JS, CSS, HTML and other files as much as possible to reduce file size and speed up downloads
  • With webpack, lazy-load by route instead of loading everything up front, since the full bundle would be large (see the sketch after this list)
  • If you can upgrade to a higher version of HTTP, do so (this is a generic answer; it is worth thinking about why a higher HTTP version improves speed)
  • Set up an internal CDN to fetch files more quickly
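
A minimal sketch of route-level lazy loading (my own example in vue-router style; the route path and page file are hypothetical): webpack splits a dynamic import() into its own chunk and only requests it when the route is actually visited.

const routes = [
  {
    path: '/report',
    // instead of: import ReportPage from './pages/ReportPage'
    component: () => import(/* webpackChunkName: "report" */ './pages/ReportPage')
  }
];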

Webpack optimization

Having covered rendering optimization, let's look at webpack optimization. The demos I write for team training usually come with a hand-written webpack configuration; even though it is only a few dozen lines, it lets me consolidate the basics of webpack configuration every time. Below, straight to the point, are the webpack optimization techniques

Optimization of basic configuration

  • extensions

This option lives inside resolve and tells webpack which file extensions to try when an import omits the extension

resolve: {
    extensions: ['.ts', '.tsx', '.js']
}

This configuration means webpack will try the listed extensions in order when resolving files, so if our project is written mainly in TS we can put .tsx and .ts at the front, letting webpack resolve modules faster

  • alias

alias sets up path aliases; it reduces packaging time mainly because webpack can resolve the file path directly and find the corresponding file quickly. The configuration is as follows

resolve: {
  alias: {
    Components: path.resolve(__dirname, './src/components')
  }
}
  • noParse

noParse lists files that do not need to be parsed. Some third-party files may be imported through ProvidePlugin to be used as variables on window; such files are relatively large and already bundled, so it makes sense to exclude them from parsing

module: {
  noParse: [/proj4\.js/]
}
  • exclude

Some loaders have an exclude property that leaves certain files out, for example excluding node_modules from babel-loader. The smaller the scope a loader has to process, the faster packaging will be

{
    test: /\.js$/,
    loader: "babel-loader".exclude: path.resolve(__dirname, 'node_modules')}Copy the code
  • devtool

This option controls source maps for debugging; different values produce different effects, bundle sizes and packaging speeds. For example, in a development environment cheap-source-map is definitely faster than source-map. As for why, I highly recommend the article I wrote earlier on devtool: the webpack devtool section there goes into great detail.

{
    devtool: 'cheap-source-map'
}

.eslintignore

Strictly speaking this is not a webpack option, but it matters a lot for packaging speed. In my experience, ESLint checks have a significant impact on packaging speed, yet in many cases we cannot do without them. Relying only on the ESLint check inside VS Code is not entirely safe.

Your ESLint plugin might suddenly stop working, or for some reason fail to check, so you still have to rely on the webpack build to stop wrong code from being committed. Even that is not guaranteed, because in a hurry you might commit code without ever starting the build. The last barrier that protects you is CI: for example, we also run ESLint checks while Jenkins builds. With these three barriers, the image we finally ship will not have problems.

So ESLint is very important, don't remove it. Given that we can't remove it, how do we reduce the checking time? By ignoring files: disable ESLint for files that don't need it and only check the files that do, which greatly improves packaging speed
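
A minimal example of what such an ignore file might contain (the entries are assumptions; adjust them to your project). Anything listed in .eslintignore is skipped by ESLint:

node_modules/
dist/
public/
*.min.js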

Loader and plugin optimization

Those were some basic configuration optimizations; other basic options will be added here in the future. Next are examples of using specific loaders and plugins to improve packaging speed

  • cache-loader

This loader caches the results of the first build and reads from that cache on subsequent builds, improving packaging efficiency. Keep in mind that every loader or plugin you add introduces extra packaging time of its own; if that extra time exceeds the time it saves, adding the loader is pointless, so it is best to use cache-loader only in front of the more time-consuming loaders

{
  rules: [
    {
      test: /\.vue$/,
      use: ['cache-loader', 'vue-loader'],
      include: path.resolve(__dirname, './src')
    }
  ]
}
  • webpack-parallel-uglify-plugin, uglifyjs-webpack-plugin, terser-webpack-plugin

From the rendering optimization above we already know that the smaller the file, the faster the rendering. We almost always enable compression when configuring webpack, but compression itself takes time, so we often use one of the three plugins above to compress in parallel and reduce that time. Let's take terser-webpack-plugin, recommended for webpack 4, as the example

optimization: {
  minimizer: [
      new TerserPlugin({
        parallel: true,
        cache: true
      })
  ]
}
  • happypack, parallel-webpack, thread-loader

These loaders/plugins also speed things up by running the build in parallel. Since the happypack author has said he is no longer interested in JS and no longer maintains it, and recommends thread-loader if you are on webpack 4, the basic configuration is as follows

     {
        test: /\.js$/,
        use: [
          {
            loader: "thread-loader".options: threadLoaderOptions
          },
          "babel-loader",].exclude: /node_modules/,}Copy the code
  • DllPlugin, webpack.DllReferencePlugin

The parallel plugins above can theoretically increase build speed, and many articles online say so, but in actual use I found that sometimes they not only failed to reduce packaging time but made it slower. The reason may be that your machine has few CPU cores to begin with, or the CPU is already running high at the time, so starting multiple processes causes the build to slow down.

The parallel plugins above may not give you the effect you want in some situations, but from our team's experience optimizing webpack performance, the two plugins in this section make an obvious difference and improve packaging speed every time. The principle is to package the third-party dependencies once in advance into a separate JS file; then, when the project code is really being packaged, the required objects are taken from that prebuilt JS file according to a mapping file, and the third-party modules are not packaged again. The configuration is a little more involved: you need to write a webpack.dll.js, roughly as follows

webpack.dll.js

const path = require('path');
const webpack = require('webpack');

module.exports = {
    entry: {
        library: ["vue"."moment"]},output: {
        filename: '[name].dll.js'.path: path.resolve(__dirname, 'json-dll'),
        library: '[name]'
    },

    plugins: [
        new webpack.DllPlugin({
            path: './json-dll/library.json',
            // name should match output.library so DllReferencePlugin can find the dll's global variable
            name: '[name]'
        })
    ]
}

webpack.dev.js

  new AddAssetHtmlWebpack({
     filepath: path.resolve(__dirname, './json-dll/library.dll.js')
  }),
  new webpack.DllReferencePlugin({
      manifest: require("./json-dll/library.json")
  })

Other optimization plugins

These plugins are only briefly introduced here. I have used them in personal projects and they feel fine; for specific usage, see npm or GitHub

  • webpack-bundle-analyzer

This plugin visualizes the size of the bundle output, so we can see where the volume goes and tune the webpack configuration accordingly

  • speed-measure-webpack-plugin

This plugin reports how much time each loader and plugin takes during packaging, so the slowest ones can be targeted for optimization

  • friendly-errors-webpack-plugin

This plugin tidies up the build log. Packaging often prints a long wall of messages that we don't always need to read; this plugin simplifies the output
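
As a rough usage sketch (my own example; check each plugin's README for its current options), the three plugins above are typically wired in like this:

const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');
const SpeedMeasurePlugin = require('speed-measure-webpack-plugin');
const FriendlyErrorsWebpackPlugin = require('friendly-errors-webpack-plugin');

const smp = new SpeedMeasurePlugin();

// smp.wrap() reports how long every loader and plugin in the wrapped config takes
module.exports = smp.wrap({
  // ...the rest of the webpack configuration
  plugins: [
    new BundleAnalyzerPlugin(),        // visual report of bundle volume
    new FriendlyErrorsWebpackPlugin()  // cleaner build log output
  ]
});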

Code optimization

This is the last part, code optimization. I will only talk about the performance optimizations from my own work; for other smaller optimization points, such as using createDocumentFragment, check out other articles

Don't manipulate the DOM if you can avoid it, even if it means asking to change the design

In many cases we can reproduce the design draft with CSS alone, but sometimes it can't be done with CSS, especially when the components are not written by you. For example, our team uses antd; sometimes the product design can't be achieved with CSS alone and can only be done with JS, deleting and adding nodes and then applying styles.

Since we are also a big-data company, performance problems show up quickly in that case. When I first wrote this code, whatever the product asked for, I would find a way to do it, no matter the method. Later, on a customer site with a large data volume, the performance shortcomings were exposed immediately.

So I think one principle of code optimization is: code that shouldn't be written should not be written, purely from a performance point of view. Use performance analysis to give the product team the reasons, and preferably offer a better alternative; that is what we need to consider.

If you're using React, make sure you write the shouldComponentUpdate lifecycle method; otherwise, when you add a log to render, you will wonder why it executes so many times

Take a complex comparison and turn it into a simple comparison

What does that mean? Let's continue with shouldComponentUpdate. Using it as below is fine, but it can be done better, and we do this a lot in our work

shouldComponentUpdate(nextProps) {
  return JSON.stringify(nextProps.data) !== JSON.stringify(this.props.data)
}

If this is a paginated table and data is the data for the current page, the table re-renders whenever data changes. In a small-data scenario this is perfectly fine; with big data, though, problems may appear. Some people may wonder: since we already paginate, where does the big data come from? Our product analyzes logs, ten logs per page, and a single log can be very long, which means that even comparing 10 records this way is time-consuming. So the idea was: can we find a simpler variable that tells us the data has changed? Like this

shouldComponentUpdate(nextProps) {
  return nextProps.data[0].id !== this.props.data[0].id
}

If the first record's id is different, the data has changed, right? Obviously that doesn't hold in some cases; someone might point out that the ids could be the same. Then what about the following?

shouldComponentUpdate(nextProps) {
  return nextProps.current !== this.props.current
}

Here the comparison of data is turned into a comparison of current, because when the page number changes, the data almost certainly changes. For our own log display there are basically no two pages with exactly the same data; if there were, it would probably be a back-end problem. And think it through: even if two pages did have identical data, the worst that happens is that the table doesn't re-render, which isn't a problem, because the data is the same anyway. If you're still worried, how about this one

this.setState({
  data,
  requestId: guid()
})

shouldComponentUpdate(nextProps) {
  return nextProps.requestId !== this.props.requestId
}

Attach a requestId alongside data and only compare requestId. Any of these approaches may have issues in some situation; the main point is that when writing code, think about whether a complex comparison can be replaced by a simple one.

Learn data structures and algorithms; they are sure to come in handy in your work

We often hear that learning data structures and algorithms is of little use because there is little opportunity to apply them at work. I used to think so too, but now I think that is very wrong: every skill we learn will come in handy sooner or later. I used to write code, ship it, and never optimize it, so algorithms felt meaningless, hard to learn and easy to forget. But once you finish a feature, open the mock service, generate a large data set, and test it under the Performance panel, watching the flame chart marked red and the stutter visible to the naked eye, you understand the importance of algorithms and data structures, because at this stage they are the only place left to squeeze optimization from. Let me give an example from code I wrote before; I've changed the variables because the company code is confidential. The pseudo-code is roughly as follows

data.filter(({id}) => {
  return selectedIds.includes(id);
})

These are just a few lines of code; the logic filters out the records that have been selected. A lot of people would probably write it this way, because I see it throughout our team's code. The product had already limited the data to at most 200 records, so it was written without pressure and had no performance impact. However, in keeping with the performance-optimization principle (mainly because of the customer's production environment), I started the mock service, set the data to 20,000 records, and tested. The code's weakness was exposed: the interface stuttered when re-selecting. So I started optimizing; the train of thought at the time was as follows

In the current code this is effectively a two-level loop, a brute-force search with O(n^2) time complexity. So I thought the complexity should be brought down to at least O(n log n). Looking at the code, the only place to start was selectedIds.includes(id). I considered binary search, but rejected it immediately because binary search requires sorted data.

After thinking quietly for a while, I recalled from the algorithm courses, books and problems I had done that the usual ways to improve on a brute-force search are

1: two pointers

2: add a dimension (an auxiliary array)

3: use a hash table

I rejected the first two because I didn't think the problem was that complicated, so I used the hash-table idea, since JS has a natural hash-table structure: the plain object. We know a hash-table lookup is O(1), so I rewrote the code as follows

const ids = {};
selectedIds.forEach(id => ids[id] = 1);

data.filter(({id}) => {
  return !!ids[id];
})

The lookup now goes through ids instead of selectedIds, so the time complexity drops from O(n^2) to O(n); the only code added is

const ids = {};
selectedIds.forEach(id => ids[id] = 1);

Building ids adds one traversal of selectedIds, which is also O(n), so overall it is O(2n); in terms of asymptotic complexity that is still O(n), at the cost of one extra object. This is a typical space-for-time trade-off, and there's no need to worry about the extra object: once ids is no longer used, the garbage collector will reclaim it.
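
As an aside (my own note, not part of the original optimization), the built-in Set gives the same O(1) membership check and reads a little more directly than an object used as a hash table:

const idSet = new Set(selectedIds); // built once, O(n)

const selected = data.filter(({id}) => idSet.has(id)); // O(1) lookup per record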

Final words

Writing this article was actually very helpful to me: it let me systematically sort out my own understanding of front-end optimization. I hope it helps you too.