
1 Knowledge system

1.1 From URL input to page loading


First, DNS (the Domain Name System) resolves the URL's domain to the corresponding IP address. The browser then establishes a TCP connection with the server at that IP and sends our HTTP request. After the server processes the request, it returns the target data to the client in an HTTP response. Once the browser receives the response data, it can begin the rendering process, and the rendered page is presented to the user.

  1. DNS resolution
  2. TCP connection
  3. Sending the HTTP request
  4. The server processes the request and returns an HTTP response
  5. The browser receives the response data, parses it, and displays the result to the user

1.2 Performance optimization mind mapping


2 Web Pages (HTTP)

Between entering a URL and the page being displayed, three main processes happen at the network level:

  1. DNS resolution
  2. TCP connection
  3. HTTP request/response

2.1 DNS


In general, front-end optimization touches DNS in two ways:

  1. Reducing the number of DNS requests
  2. DNS prefetching (DNS Prefetch)

DNS is a foundational protocol of the Internet, and its resolution speed is easily overlooked by website optimizers. Most modern browsers have optimized DNS resolution. A typical DNS resolution takes 20-120 milliseconds, so reducing the time and frequency of DNS resolution is a worthwhile optimization.

Because domain name resolution and resource loading are serial network operations, DNS prefetching reduces waiting time and improves the user experience.

By default, the browser prefetches domains that appear on the page but differ from the current domain (the domain of the page being viewed) and caches the results; this is implicit DNS prefetch. If you want to prefetch domains that do not appear on the page, you use explicit DNS prefetch.

How to use DNS preresolution:

<!-- Use meta information to tell the browser to perform DNS preresolution for the current page -->
<meta http-equiv="x-dns-prefetch-control" content="on">
<!-- Use a link tag in the page head to force DNS preresolution of a domain: -->
<link rel="dns-prefetch" href="//www.zhix.net">

Note: dns-prefetch should be used with caution. Repeating DNS prefetches across many pages increases the number of duplicate DNS queries; some developers have pointed out that disabling DNS prefetch saved as many as 10 billion DNS queries per month.

<!-- Disable implicit DNS prefetch if desired -->
<meta http-equiv="x-dns-prefetch-control" content="off">

2.2 HTTP


There is little the front end can do about the DNS resolution and TCP connection steps. By contrast, optimization at the HTTP level is the heart of our network optimization.

HTTP optimization has two broad directions:

  1. Reduce the number of requests
  2. Reduce the time spent on a single request

2.2.1 Reducing the Number of HTTP Requests

  • Images: sprite sheets, icon font files
  1. Merge several small images into one image and use CSS background-position to control which part of the merged image is displayed
  2. Icon font files, such as Alibaba's iconfont

  • Merge JS and CSS files, using a build tool:
  1. webpack
  2. gulp
  3. grunt

  • Browser cache: if images, scripts, and style files are fixed and rarely modified, use caching to reduce the number of HTTP requests and file downloads
  1. Strong cache
  2. Negotiated cache
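The sprite technique from the list above can be sketched as follows. This is a minimal illustration; the file name icons.png, the class names, and the coordinates are all hypothetical:

```css
/* A minimal CSS sprite sketch: all icons live in one (hypothetical) icons.png,
   so the browser makes a single image request instead of one per icon. */
.icon {
  background-image: url("icons.png"); /* hypothetical merged image */
  background-repeat: no-repeat;
  width: 32px;
  height: 32px;
}
/* Each icon selects its region of the sprite via background-position */
.icon-search { background-position: 0 0; }
.icon-cart   { background-position: -32px 0; }
.icon-user   { background-position: -64px 0; }
```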

2.2.2 Reduce the time spent on a single request

  • Images

    Compress images (for example with an online batch compression tool) before shipping them
  • Gzip: in a Vue project served by nginx, enable Gzip compression both for the build output and on the nginx server
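As a sketch, enabling Gzip on the nginx side might look like the fragment below; the directive values are illustrative, not mandatory:

```nginx
# Enable Gzip compression for text-based assets (illustrative values)
gzip            on;
gzip_min_length 1k;     # do not compress tiny responses
gzip_comp_level 5;      # 1 (fastest) .. 9 (smallest)
gzip_types      text/css application/javascript application/json image/svg+xml;
gzip_vary       on;     # add "Vary: Accept-Encoding" for intermediate caches
```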

3. Web Pages (image optimization)

3.1 Image scheme selection under different business scenarios


In computers, pixels are represented by binary numbers. The corresponding relationship between pixels and binary bits is different in different image formats. The more binary bits a pixel corresponds to, the more color types it can represent, the finer the image effect, and the larger the file size accordingly.

3.2 JPEG/JPG


Keywords: lossy compression, small size, fast loading, not transparent

  • The advantages of JPG

    The biggest feature of JPG is lossy compression. This efficient compression algorithm makes it a very lightweight image format. On the other hand, even though it is called "lossy," JPG compression is still high-quality compression: when we compress an image to less than 50% of its original size, JPG still retains about 60% of its quality. In addition, the JPG format stores images in 24 bits and can present as many as 16 million colors, enough for the color requirements of most scenarios, which means the quality loss before and after compression is not easily noticed by the human eye, assuming you use the right business scenario.

  • Usage scenarios

    JPG is good for rendering colorful images, and in daily development, JPG images often serve as large backgrounds, carousels, or banner images.

    The handling of large images on major e-commerce sites is the best illustration of JPG's application scenario: open the Taobao home page and the most eye-catching, largest images on the page invariably carry the .jpg suffix. Using JPG to render large images preserves image quality without a headache-inducing file size, and it is one of the most popular solutions today.

  • The defect of JPG

    Lossy compression is hard to spot in photographs like those shown above, but for images with strong lines and sharp color contrast, such as vector graphics and logos, the resulting blur can be quite noticeable. In addition, JPEG does not support transparency; transparent images need to be rendered in PNG.

3.3 PNG


Keywords: lossless compression, high quality, large volume, transparent support

  • The advantages of PNG

    PNG (Portable Network Graphics) is a lossless, high-fidelity image format. The 8 and 24 in PNG-8 and PNG-24 are the numbers of binary bits per pixel. Following the correspondence described earlier, 8-bit PNG supports up to 256 colors, while 24-bit PNG can present about 16 million colors.

    PNG images have stronger color expression than JPG, more delicate processing of lines, and good support for transparency. It makes up for the limitations of JPG mentioned above, the only problem is that it is too big.

  • PNG-8 vs. PNG-24: which to choose

    When to use PNG-8 and when to use PNG-24, that is the question.

    In theory, PNG-24 is recommended when you want the best display quality and don't care about file size.

    In practice, however, PNG is not used for complex images at all, precisely to avoid size problems. When we do come across a scene suited to PNG, we prefer the smaller PNG-8.

    How do we determine whether an image should be PNG-8 or PNG-24? A good approach is to export the image in both formats first, check whether the PNG-8 output shows a visually perceptible loss of quality, and decide from the comparison whether that loss is acceptable to us (and especially to your UI designer).

  • Application scenarios

    As mentioned above, processing complex, colorful images with PNG comes at a relatively high cost, so we usually hand those over to JPG.

    Considering PNG’s advantages in handling lines and color contrast, we mainly use it for small logos, simple and contrasting images or backgrounds, etc.

    At this point, we turn our attention to the home page of Taobao, which is the industry’s model in terms of performance, and we find that the Logo on its page, no matter the size, is really PNG format:

3.4 SVG


Key words: text file, small size, no distortion, good compatibility

  • The usage and application scenarios of SVG

    Write SVG directly into the HTML as inline markup

    Keep SVG in a standalone .svg file and reference it from the HTML
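Both usages above can be sketched in a few lines; the file path and sizes here are illustrative:

```html
<!-- SVG written directly into HTML: the shape is part of the DOM and styleable via CSS -->
<svg width="100" height="100" viewBox="0 0 100 100">
  <circle cx="50" cy="50" r="40" fill="currentColor" />
</svg>

<!-- SVG kept in a standalone file (hypothetical path) and referenced like any image -->
<img src="./icons/logo.svg" alt="logo" width="100" height="100" />
```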

3.5 Base64


Keywords: text file, dependent coding, small icon solution

  • Application scenarios of Base64

    The actual size of the image is very small (look at the Base64 images on the Juejin page; they are almost never larger than 2KB)

    The image cannot be combined with other small images into a sprite (sprite sheets remain the main way to reduce HTTP requests; Base64 is a complement to them)

    The image is updated very rarely (so we do not need to re-encode and modify the file contents repeatedly, and the maintenance cost stays low)
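A small sketch of how an asset ends up as a Base64 data URI that the browser can render without an extra HTTP request. The MIME type and the input bytes here are illustrative; in practice the buffer would come from reading the image file:

```javascript
// Build a data URI from raw bytes: the browser renders it inline,
// with no separate HTTP request for the image.
function toDataURI(buf, mime) {
  return `data:${mime};base64,${buf.toString("base64")}`;
}

// Illustrative use: in practice buf would come from fs.readFileSync("icon.png")
const uri = toDataURI(Buffer.from("hello"), "image/png");
console.log(uri); // data:image/png;base64,aGVsbG8=
```

Note the trade-off: Base64 text is roughly a third larger than the raw bytes, which is why this only pays off for very small images.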

3.6 WebP


Keywords: young all-rounder. WebP is an image format designed by Google for the Web to speed up image loading; it supports both lossy and lossless compression.

  • The advantages of WebP

    WebP handles detailed images as well as JPEG, supports transparency like PNG, and can display animated images like GIF: it combines the strengths of many image file formats.

  • Limitations of WebP

    Compatibility: not all browsers support WebP

3.7 Summary


Picture scheme selection under different business scenarios

4. Storage (browser cache)

4.1 What is cache


A data request can be divided into three steps: initiating the network request, back-end processing, and the browser's handling of the response

Browser caching helps us optimize performance in the first and third steps. For example, if the cache can be used directly, no request is made at all; or, if a request is made but the back end finds it holds the same data as the front end, it does not need to send the data back, which shrinks the response.

Cache mind maps

4.2 Cache Location


4.2.1 Cache Priority

From the point of view of cache location, there are four kinds, each with its own priority. The browser checks them in order, and only when none of them is hit does it request the network.

  1. Service Worker
  2. Memory Cache
  3. Disk Cache
  4. Push Cache

4.2.2 Service Worker

Not yet explored in depth here; see the Service Worker documentation on MDN

4.2.3 Memory Cache

A MemoryCache is a cache that exists in memory. In terms of priority, it is the first cache that the browser tries to hit. It is the fastest type of cache in terms of efficiency.

Memory caching is fast and short-lived. It “lives and dies” with the rendering process, and when the process ends, that is, when the TAB is closed, the data in memory ceases to exist.

So what files are going into memory?

In fact, there is no definitive rule for this division. Bear in mind that memory is limited: the browser often has to consider how much memory is available in real time before deciding what proportion of resources goes to memory versus disk, so the placement of a resource is somewhat random

Although the partitioning rules are not settled, based on what we observe in daily development, including the Network screenshots shown at the beginning, we can at least summarize one rule: in deciding whether a resource goes into memory, browsers follow a principle of thrift. We find that Base64 images can almost always be stuffed into the memory cache, which can be seen as the browser "saving itself" rendering cost. Small JS and CSS files also have a good chance of being written to memory; by contrast, large JS and CSS files get no such treatment, as memory is limited, and they tend to be thrown directly onto disk.

4.2.4 Disk Cache

A Disk Cache is a cache stored on the hard disk. It is slower to read, but almost anything can be stored on disk, making it better than the Memory Cache in both capacity and storage duration.

Disk Cache coverage is by far the largest of all the browser caches. Based on the fields in the HTTP header, it determines which resources need to be cached, which can be used without a new request, and which have expired and must be re-requested. Even across sites, a resource with the same address, once cached on disk, will not be requested again. Most of the cache comes from the Disk Cache, which we discuss in more detail under the HTTP protocol headers

Which files does the browser put into memory, and which onto disk? Large files are most likely not stored in memory; and when current system memory usage is high, files are preferentially stored on disk

4.2.5 Push Cache

Not yet explored in depth here

Push Cache is part of HTTP/2 and is used only when all three caches above miss

4.3 Cache process analysis


The communication between browser and server follows a request/response model: the browser initiates an HTTP request, and the server responds to it. How does the browser decide whether and how a resource should be cached? After the browser sends a request for the first time and receives the result, it stores the result together with the cache identifier in the browser cache. How the browser handles the cache is determined by the response headers returned with that first request. The specific process is as follows:

  1. Each time the browser initiates a request, it first looks up the result of the request and the cache identifier in the browser cache
  2. Each time the browser receives the result of the returned request, it stores the result and the cache id in the browser cache

4.4 HTTP cache

HTTP caching is one of the caching mechanisms we are most familiar with in our daily development. It is divided into strong cache and negotiated cache. The strong cache has a higher priority. The negotiation cache is enabled only when the strong cache fails to be matched.

4.5 strong cache

Strong cache: the browser does not send a request to the server, but reads the resource directly from the cache. In the Network tab of the Chrome devtools, such a request returns a status code of 200, and the Size column shows "from disk cache" or "from memory cache". Strong caching is implemented by setting two HTTP headers: Expires and Cache-Control.

4.5.1 Expires

Expires is the cache expiration time: a specific point in time on the server that specifies when the resource expires. That is, Expires = max-age + request time, and it needs to be used in combination with Last-Modified. Expires is a Web server response header field that tells the browser it may read the data directly from the cache before the expiration time, without requesting it again.

Expires is a product of HTTP/1 and is constrained by the local time: changing the client's local time can invalidate the cache. For example, Expires: Wed, 22 Oct 2018 08:41:00 GMT indicates that the resource expires after that time and must then be requested again.

  1. The cache expiration time, a specific point in time on the server, used to specify when a resource expires
  2. Tells the browser that before the expiration time it may read the data directly from the cache without requesting again
  3. max-age has a higher priority than Expires; when max-age is present, Expires is ignored
  4. If the file on the server changes during the validity period, the browser is unaware of it
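The precedence rule above can be sketched as a tiny freshness check. This is a simplified model that considers only max-age and Expires, ignoring all other Cache-Control directives:

```javascript
// Decide whether a cached response is still fresh (simplified model).
// max-age (seconds, relative to the request time) wins over Expires (absolute time).
function isFresh({ requestTime, maxAge, expires }, now) {
  if (maxAge !== undefined) {
    return (now - requestTime) / 1000 < maxAge; // Cache-Control: max-age
  }
  if (expires !== undefined) {
    return now < Date.parse(expires);           // Expires header (HTTP/1.0)
  }
  return false; // no strong-cache header: fall through to negotiated cache
}

const t0 = Date.parse("2018-10-22T08:40:00Z");
// Cached at t0 with max-age=60: still fresh 30s later, stale 120s later
console.log(isFresh({ requestTime: t0, maxAge: 60 }, t0 + 30000));  // true
console.log(isFresh({ requestTime: t0, maxAge: 60 }, t0 + 120000)); // false
```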

4.5.2 Cache-Control

In HTTP/1.1, Cache-Control is the most important rule, used mainly to control the web cache. Cache-Control can be set in either the request header or the response header, and multiple directives can be combined

  1. max-age
  2. s-maxage
  3. private
  4. public
  5. no-cache
  6. no-store

  • max-age

    max-age=xxx (xxx is a number) means the cached content expires after xxx seconds

    1. Sets the maximum period, in seconds, for which the cache is stored; beyond it the cache is considered expired. Unlike Expires, the time is relative to the moment of the request
    2. Takes precedence over Expires

  • s-maxage

    1. Overrides the max-age or Expires headers, but only applies to shared caches (such as proxies); it is ignored by private caches
    2. Can be used with public, such as CDN
    3. The priority is higher than max-age

  • private

    All content can only be cached by the client

    Indicates that intermediate nodes are not allowed to cache the response. For Browser <– proxy1 <– proxy2 <– Server, the proxies faithfully pass the data returned by the Server on to the browser without caching any of it themselves. The next time the browser makes the request, the proxy forwards it instead of serving its own cached data

    1. Indicates that the response can only be cached by a single user, not as a shared cache (that is, the proxy server cannot cache it), and that the response content can be cached
    2. Own server
  • public

    All content will be cached (both client and proxy can be cached)

    Specifically, the response can be cached by any intermediate node. For Browser <– proxy1 <– proxy2 <– Server, the proxies in the middle can cache the resource; for example, the next time proxy1 is asked for the same resource, it serves its own cache directly to the browser instead of asking proxy2 for it.

  • no-store

    Nothing is cached; neither strong caching nor negotiated caching is used

    1. The cache should not store anything about client requests or server responses.
    2. No caching strategy is used
  • no-cache

    The client may cache the content, but whether that cache is used must be verified through the negotiated cache. It does not disable caching outright; instead, the ETag or Last-Modified fields control cache validation. Note that the name no-cache is a bit misleading: it does not mean the browser stops caching data, but that the browser must confirm with the server that the data is still current before using the cache

    1. Forces the cache to submit the request to the origin server for validation before releasing a cached copy
    2. In other words, every access to this file sends a request to the server, and the server decides whether the cached copy may still be used

4.5.3 Strong Cache mind mapping

4.6 Negotiating Cache


4.6.1 What is Negotiated Cache

Negotiation cache is a process in which the browser sends a request to the server with the cache ID after the cache is invalid, and the server decides whether to use the cache based on the cache ID. There are two main situations:

The negotiated cache takes effect, returning 304 and Not Modified

Negotiation cache invalid, return 200 and request result

4.6.2 Last-Modified and If-Modified-Since

Last-Modified is a response header containing the date and time at which the origin server believes the resource was last modified. It is often used as a validator to determine whether a received or stored resource is still consistent. Because it is less accurate than ETag, it serves as a fallback mechanism. Conditional requests containing an If-Modified-Since or If-Unmodified-Since header use this field.

  1. A caching mechanism negotiated between client and server
  2. Last-Modified: response header
  3. If-Modified-Since: request header
  4. Needs to be used together with Cache-Control
  5. max-age takes precedence over Last-Modified
  • Disadvantages
  1. Some servers cannot obtain the exact modification time
  2. The file's modification time may change while its content does not
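The server side of this negotiation can be sketched as a small comparison function. This is a simplified model, not any particular server's implementation; note that HTTP dates carry one-second precision, so the file's mtime is truncated to whole seconds before comparing:

```javascript
// Negotiated caching with Last-Modified (server-side sketch):
// compare the client's If-Modified-Since against the file's mtime.
function checkModifiedSince(mtimeMs, ifModifiedSince) {
  const clientTime = Date.parse(ifModifiedSince);
  if (Number.isNaN(clientTime)) return 200;         // no/invalid validator: full response
  const mtimeSec = Math.floor(mtimeMs / 1000) * 1000; // HTTP dates have 1s precision
  return mtimeSec > clientTime ? 200 : 304;         // 304 = use the cached copy
}

const stamp = "Mon, 22 Oct 2018 08:41:00 GMT";
console.log(checkModifiedSince(Date.parse(stamp), stamp));        // 304 (unchanged)
console.log(checkModifiedSince(Date.parse(stamp) + 5000, stamp)); // 200 (file is newer)
```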

4.6.3 ETag / If-None-Match

The ETag HTTP response header is an identifier for a specific version of a resource. It makes caching more efficient and saves bandwidth, because the Web server does not need to send a full response if the content has not changed. If the content does change, ETag also helps prevent simultaneous updates of a resource from overwriting each other (a "mid-air collision").

  1. A hash of the file content
  2. ETag: response header
  3. If-None-Match: request header
  4. Should be used together with Cache-Control

4.6.4 Comparison between the two

  1. First, in accuracy, ETag is superior to Last-Modified.
  2. Second, in performance, ETag is inferior to Last-Modified: Last-Modified only records a timestamp, whereas ETag requires the server to compute a hash of the content.
  3. Third, in server-side validation, ETag takes priority

4.6.5 Mind mapping

4.7 References


  • Introduction to the browser cache mechanism and analysis of cache policies
  • Understanding the front-end cache
  • In-depth understanding of the browser cache mechanism

5. Storage (Local Storage)

5.1 Cookie


  1. The job of a Cookie is not to store data locally, but to "maintain state"
  2. A Cookie is small (capped at around 4KB)
  3. All requests under the same domain name carry cookies

5.2 Local Storage and Session Storage


5.2.1 overview

Large storage capacity: Web Storage capacity can reach 5-10 MB depending on the browser. The data lives only in the browser and is never sent to the server.

5.2.2 Differences between Local Storage and Session Storage

  • Life cycle: Local Storage is persistent local storage; data stored in it never expires, and the only way to make it disappear is to delete it manually. Session Storage is temporary, session-level storage: when the session ends (the page is closed), its contents are released.

  • Scope: Local Storage, Session Storage, and Cookie all follow the same-origin policy. Session Storage is special in that even two pages under the same domain cannot share Session Storage content unless they are opened in the same browser window.
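A hedged sketch of typical Web Storage usage. The storage argument is passed in so the same helpers work for either localStorage or sessionStorage; since the real objects exist only in browsers, a minimal in-memory stand-in is used here to demonstrate the getItem/setItem interface:

```javascript
// Store and read JSON values through any Storage-like object
// (localStorage and sessionStorage share this getItem/setItem interface).
function saveJSON(storage, key, value) {
  storage.setItem(key, JSON.stringify(value));
}

function loadJSON(storage, key, fallback) {
  const raw = storage.getItem(key);
  if (raw === null) return fallback; // key absent
  try {
    return JSON.parse(raw);
  } catch {
    return fallback; // corrupted entry: fail soft
  }
}

// Minimal in-memory stand-in for the browser's Storage API (for illustration)
function memoryStorage() {
  const map = new Map();
  return {
    setItem: (k, v) => map.set(k, String(v)),
    getItem: (k) => (map.has(k) ? map.get(k) : null),
  };
}

const store = memoryStorage();
saveJSON(store, "user", { name: "zhix", theme: "dark" });
console.log(loadJSON(store, "user", {}).theme);     // dark
console.log(loadJSON(store, "missing", "default")); // default
```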

5.2.3 Application Scenarios

  • Local Storage

    1. The Local Storage can do all the data Storage tasks that cookies cannot do in theory and can be accessed with simple key-value pairs.
    2. Store some stable resources. For example, image-rich e-commerce sites will use it to store Base64 image strings
    3. Stores static resources such as CSS and JS that are not updated frequently
  • Session Storage

    Weibo, for example, uses Session Storage mainly to store the browsing footprint of the current session

5.3 IndexedDB


  1. IndexedDB has no storage upper limit (generally no less than 250MB)
  2. IndexedDB can be seen as an upgrade to LocalStorage, and when the complexity and size of the data becomes too large for LocalStorage to handle, IndexedDB can certainly be used to help.

6. CDN

6.1 What is CDN


A Content Delivery Network (CDN) is a group of servers distributed in various regions. These servers store copies of the data, so the server can fulfill requests for data based on which servers are closest to the user. CDN provides fast service and is less affected by high traffic.

6.2 Why to use CDN


Caching and local storage provide performance gains only after the "fetch the resource and store it" step has happened once. In other words, none of these tactics helps the first time we request a resource. To improve the responsiveness of that first request, we also need the capabilities of a CDN

6.3 How does CDN work


Suppose my root server is in Hangzhou, and I have my own computer rooms available in each of the five cities in the diagram

Now a user in Beijing requests a resource from me. With limited network bandwidth and heavy traffic, the server in Hangzhou may not be powerful enough to respond quickly. So I have an idea: copy this batch of resources to the machine room in Beijing. When the user requests the resource, the nearest server, the one in Beijing, answers; it checks, finds the resource stored locally, and being so close, the response is naturally fast. What if the Beijing server has not copied this batch of resources? It asks the root server in Hangzhou for them. In this process, the Beijing server plays the role of the CDN.

6.4 Core functions of CDN


A CDN has two core functions: one is caching, the other is back-to-origin.

  • Caching

    "Caching" is the process of copying resources onto the CDN server

  • Back-to-origin

    This is the process by which a CDN server, finding that it does not have the resource (usually because the cached copy has expired), turns to the root server (or an upper-layer server) to request it.

6.5 CDN and front-end performance optimization


CDN is usually used to store static resources

The root server is essentially a business server whose core task is to generate dynamic pages or return non-purely static pages, both of which require computation. The business server is like a workshop where machines hum to produce the resources we need; In contrast, a CDN server is like a warehouse, which only acts as a “habitat” and “porter” for resources.

Static resources refer to resources such as JS, CSS and images that do not need to be calculated by a business server. And “dynamic resources”, as the name implies, is the need for back-end real-time dynamic generation of resources, the more common is JSP, ASP or rely on the server render HTML pages.

What is an “impure static resource”? It is an HTML page that requires the server to do additional calculations outside of the page. Specifically, before I open a site, the site needs to verify my identity through a number of means, such as permission authentication, to decide whether to present the HTML page to me. The HTML is static in this case, but it is coupled to the operations of the business server, and it is not appropriate for us to dump it on the CDN.

6.6 Practical application of CDN


Static resources have the characteristics of high access frequency and large incoming traffic, so the loading speed of static resources is always a key indicator of front-end performance. CDN is an important means to speed up static resources. In many front-line Internet companies, “static resources using CDN” is not a suggestion, but a regulation

www.taobao.com/

You can see that the business server really does return a simple HTML page that carries no static resources itself

Click on any static resource and you can see that it is requested from the CDN server

6.7 CDN and cookies


Cookies follow the domain name. All requests under the same domain name carry cookies. Imagine if we were just asking for an image or a CSS file at the moment, and we were also carrying around a Cookie (the key is that the Cookie holds information I don’t need right now). Although cookies are small, there can be many requests. As the requests pile up, the overhead of such unnecessary cookies will be unimaginable

Requests under the same domain name carry cookies indiscriminately, yet static resources usually do not need the authentication information cookies carry. Putting static resources and the main page under different domain names neatly avoids these unnecessary cookies!

It may seem like a small detail, but the utility is amazing. Due to the huge flow of static resources of e-commerce websites, if this redundant Cookie is not removed, not only the user experience will be greatly reduced, but also the annual economic cost caused by performance waste will be a very horrible number.

7. Rendering (Server-side rendering)

7.1 Client Rendering


In client rendering mode, the server will send the static file required for rendering to the client. After the client is loaded, it will run JS in the browser and generate the corresponding DOM according to the running results of JS


      
<html>
  <head>
    <title>A client-rendered page</title>
  </head>
  <body>
    <div id='root'></div>
    <script src='index.js'></script>
  </body>
</html>

What is underneath the root node? You don't know and neither do I, until the browser has run index.js. That is typical client-side rendering.

The page presents content that you wouldn’t find in an HTML source file — that’s the nature of it.

7.2 Server Rendering


In server-side rendering mode, when the user first requests a page, the server renders the required component or page as an HTML string and returns it to the client. What the client gets is HTML content that can be rendered and presented to the user without having to run through the JS code to generate the DOM content.

A web site rendered on the server is “WHAT you see is what you get.” The content rendered on the page can be found in the HTML source file
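A minimal sketch of the idea: on the server, a render function produces the complete HTML string up front, so the content is already in the document when it reaches the browser. The function names here are illustrative, not any particular framework's API:

```javascript
// Server-side rendering in miniature: the "component" runs on the server
// and its output is embedded in the HTML before it ever reaches the browser.
function renderComponent(data) {
  return `<ul>${data.map((item) => `<li>${item}</li>`).join("")}</ul>`;
}

function renderPage(title, data) {
  return [
    "<html>",
    `  <head><title>${title}</title></head>`,
    "  <body>",
    `    <div id="root">${renderComponent(data)}</div>`,
    "  </body>",
    "</html>",
  ].join("\n");
}

const html = renderPage("server-rendered page", ["a", "b"]);
// The content is right there in the HTML source: "what you see is what you get"
console.log(html.includes("<li>a</li>")); // true
```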

7.3 What performance problems are solved by Server-side rendering


In fact, many sites adopt server-side rendering for SEO reasons rather than for performance.

Assume there is a keyword "front-end performance optimization" on website A's page, and that this keyword is added to the HTML only after the JS code has run. In client-side rendering mode, we cannot find website A when we search for this keyword: search engines only look at ready-made content and will not run your JS for you. Seeing this, the operator of website A is understandably upset: if search engines cannot find us, users cannot find us either, so who will use my website? To present "ready-made content" to search engines, website A has to enable server-side rendering.

But performance coming second does not make it unimportant. Server-side rendering solves a key performance issue: slow first-screen loading. In client-side rendering mode, besides loading the HTML, we must wait for the JS required for rendering to load, and then wait for that JS to run in the browser. All of this happens after the user clicks our link, and until the process finishes, the user sees nothing of the real page; in other words, the user is kept waiting. In server-side rendering mode, by contrast, the server hands the client a page that can be presented to the user directly; the intermediate steps have already been done on the server. Isn't the user happier?

7.4 Application Scenarios Rendered by the Server


Server-side rendering is essentially what the browser does, and what the server does. This will render the resource faster when it arrives at the browser

But think about it: in this era, there are almost as many browsers as there are users, far too many to count, while a company has only so many servers. Server-side rendering takes the rendering load that was spread across countless browsers and concentrates it onto a comparatively small number of servers, which can hardly bear it

Only when a page's performance requirements are so demanding that, with every other optimization exhausted, performance is still unsatisfactory, should we consider asking the boss for a few more servers and turning on server-side rendering

8. Rendering (browser rendering)

8.1 Browser Kernel


The browser kernel can be divided into two parts: the Layout Engine (Rendering Engine) and the JS Engine

The rendering engine includes HTML interpreter, CSS interpreter, layout, network, storage, graphics, audio and video, image decoder and other components.

8.2 Browser rendering Process Analysis


8.3 Suggestions for CSS optimization based on the rendering process


8.3.1 CSS selectors are matched from right to left

#myList li {}

The browser must iterate over every li element on the page and, for each one, check whether it has an ancestor whose id is myList

8.3.2 Specific optimization

  1. Avoid wildcards; select only the elements you need
  2. Prefer properties that can be inherited, to avoid repeated matching and repeated definitions
  3. Use tag selectors sparingly; if possible, use a class selector instead
  4. Don't gild the lily: ID and class selectors should not be qualified by redundant tag selectors
  5. Reduce nesting. Descendant selectors have the highest overhead, so keep selector depth to a minimum (no more than three levels) and use classes to target elements whenever possible
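Applied to the #myList example above, the contrast might look like this; the class name is illustrative:

```css
/* Costly: matched right-to-left, so every li on the page is checked
   against the ancestor chain looking for #myList */
#myList li { color: #333; }

/* Cheaper: a flat class selector matches each target element directly */
.mylist-item { color: #333; }
```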

8.4 Say goodbye to blocking: load order optimization of CSS and JS


HTML, CSS, and JS can all block rendering. That HTML blocks is self-evident: without HTML there is no DOM, and without the DOM, rendering and optimization are all moot.

8.4.1 CSS Blocking

In the previous section, we mentioned that DOM and CSSOM work together to build the render tree. This can have a serious impact on performance: by default, CSS is a blocking resource. The browser does not render any processed content while it builds CSSOM. Even if the DOM has already been parsed, rendering cannot proceed until CSSOM is ready (mainly to avoid the ugly "streaking" of an HTML page displayed without its CSS).

CSS only comes into play when the HTML parser reaches a link tag or a style tag; that is when CSSOM construction begins. Much of the time, the DOM has to wait for CSSOM. So we can sum it up like this:

CSS is a render-blocking resource, so it needs to reach the client as early and as fast as possible to reduce the time to first render: early by putting CSS in the head tag, and fast by serving static resources through a CDN.

  1. CSS files are downloaded in parallel.
  2. CSS downloads block the execution of subsequent JS.
  3. CSS downloads do not block subsequent HTML parsing, but they do block DOM rendering.
  4. CSS downloads block the load events of the DOM elements that follow them.
  5. CSS downloads do not block subsequent JS downloads, but they block that JS's execution once it has downloaded.
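A minimal head sketch of "early and fast": the stylesheet is linked in the head, served from a CDN host (the URL is illustrative):

```html
<head>
  <!-- CSS linked in the head so CSSOM construction starts as early as possible -->
  <!-- Served from a CDN to shorten the download -->
  <link rel="stylesheet" href="https://cdn.example.com/styles/main.css">
</head>
```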

8.4.2 JS blocking

JS is about modification: it lets us change every aspect of a web page, including content, style, and response to user interaction. All of these changes are, at bottom, changes to the DOM and CSSOM. So JS execution blocks CSSOM, and unless we explicitly declare otherwise, it blocks the DOM as well.

The JS engine exists independently of the rendering engine. Our JS code is executed wherever it is inserted in the document. When the HTML parser encounters a Script tag, it pauses the rendering process and gives control to the JS engine. The JS engine will execute the inline JS code directly, and the external JS file must first get the script, and then execute. Once the JS engine is finished running, the browser will return control to the rendering engine and continue to build CSSOM and DOM. So rather than JS blocking CSS and HTML, JS engines are taking control of rendering engines.

  1. Modern browsers load JS files in parallel.

  2. Loading or executing JS blocks the parsing of tags, which blocks construction of the DOM tree; the browser will not resume parsing until the script has finished executing. Without the DOM tree the browser cannot render, so loading a large JS file can leave the page blank for a long time.

Tag parsing is blocked because the loaded JS may create or delete nodes, which changes the DOM tree. If parsing were not blocked and the script modified nodes after the browser had already built the DOM tree, the browser would have to parse again and rebuild the tree, which performs poorly.

8.4.3 Three loading methods of JS

Normal mode

In this case, JS blocks the browser and the browser must wait for index.js to load and execute before it can do anything else.

<script src="index.js"></script>

Async mode

In async mode, JS does not block the browser from doing anything else. It loads asynchronously, and when it finishes loading, the JS script executes immediately.

<script async src="index.js"></script>

Defer the pattern

In defer mode, JS loads asynchronously and its execution is deferred: when the entire document has been parsed and the DOMContentLoaded event is about to fire, the scripts tagged defer execute in order.

<script defer src="index.js"></script>

From an application point of view, async suits scripts that do not depend much on DOM elements or on other scripts; when a script depends on DOM elements or on the execution results of other scripts, choose defer.

9. Rendering (DOM Optimization)

10. Render (Event Loop and Asynchronous Update Strategy (VUE))

10.1 What Is Asynchronous Update?


When we update data using the interface provided by Vue or React, the update does not take effect immediately. Instead, it is pushed into a queue. When the time is right, update tasks in the queue will be triggered in batches. This is called asynchronous updating.

Asynchronous updates help us avoid over-rendering and are a good example of "letting JS take the pressure off the DOM", as mentioned in the previous section.

10.2 Advantages of asynchronous Update


The nature of asynchronous updates is that they only care about the final result, so the rendering engine doesn't have to pay for the intermediate steps.

Sometimes we have situations like this

// task 1
this.content = 'First test'
// task 2
this.content = 'Second test'
// task 3
this.content = 'Third Test'

We changed the same state three times across three update tasks. Under a traditional synchronous update strategy, that would mean manipulating the DOM three times. But only the third result is what actually needs to be presented to the user, meaning only the third update is meaningful: the first two computations were wasted.

But if we shove these three tasks into an asynchronous update queue, they will be executed in batches at the JS level first. When the process gets to render, it only needs to manipulate the DOM once for meaningful results — that’s the beauty of asynchronous updates.
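A simplified sketch of this idea (not Vue's actual implementation): state changes are queued, and a single flush is scheduled on the microtask queue, so three synchronous writes cost only one simulated "render". All names here (`setState`, `flush`, `renderCount`) are illustrative:

```javascript
const queue = new Set();   // pending update tasks for this tick
let pending = false;       // has a flush already been scheduled?
let renderCount = 0;       // how many simulated DOM updates actually happened

function setState(update) {
  queue.add(update);
  if (!pending) {
    pending = true;
    // Flush once, after all synchronous writes in this tick have been queued
    Promise.resolve().then(flush);
  }
}

function flush() {
  queue.forEach((update) => update());
  queue.clear();
  pending = false;
  renderCount++; // one simulated DOM update for the whole batch
}
```

Three calls to `setState` in the same tick result in `renderCount` ending at 1: the flush runs after the synchronous code finishes, and the last write wins.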

10.3 Vue status Update method: nextTick


Instead of updating the DOM directly when Vue observes data changes, it opens a queue and buffers all data changes that occur in the same event loop. Duplicate data is removed during buffering to avoid unnecessary computation and DOM manipulation. Then, in the next event loop tick, Vue refreshes the queue and performs the actual work

$nextTick lets us run a callback once the DOM update has completed

11. Render (Backflow and Redraw)

11.1 Reflow and repaint


Based on the render tree layout, the browser computes the geometry (size and position) of every node on the page. HTML follows a flowing layout by default; CSS and JS break this flow by changing the look, size, and position of DOM nodes. Two important concepts come up here: repaint and reflow.

Repaint: part of the screen is redrawn without affecting the overall layout; for example, an element's CSS background color changes, but its geometry and position stay the same.

Reflow: the geometry of an element has changed, so the browser must re-validate and recompute the render tree; part or all of the render tree is rebuilt. This is a reflow, also called layout.

Reflow and repaint should be kept to a minimum, which I think is one of the reasons table layouts are rarely used today.

display: none triggers reflow, because the element is removed from the layout entirely. visibility: hidden merely hides the element: it still occupies layout space and is rendered as an empty box, so changing it triggers only repaint, since no position change takes place.

In some cases, such as changing an element's style, the browser does not reflow or repaint immediately. Instead, it accumulates a batch of such operations and performs a single reflow. This is known as asynchronous or incremental reflow.

In other cases, such as resizing the window or changing the page's default font, the browser reflows immediately.

12.2 The nature of throttling and debouncing


Both of these things exist as closures.

They control the event firing frequency by wrapping the corresponding callback function, caching the time information in the form of free variables, and finally using setTimeout.

12.3 the throttle


No matter how many callbacks are triggered within a given interval, only the first one counts; the rest are ignored until the interval ends.

Imagine a passenger has just gotten off a plane and needs a ride, so he calls the airport's only shuttle bus. The driver arrives but figures it would be worth waiting to pick up a few more people, so he starts a ten-minute timer while waving the following passengers aboard. During those ten minutes, passengers from the next flight can only take this one bus, and when the ten minutes are up, the bus must leave no matter how many passengers are left behind.
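A minimal leading-edge throttle sketch under that description: only the first call in each `wait` window runs, and later calls inside the window are ignored. This is a simplified illustration, not any particular library's implementation:

```javascript
// Throttle: at most one invocation of fn per `wait` milliseconds
function throttle(fn, wait) {
  let last = 0; // timestamp of the last accepted call (free variable in the closure)
  return function (...args) {
    const now = Date.now();
    if (now - last >= wait) {
      last = now;
      return fn.apply(this, args);
    }
    // calls inside the window are dropped
  };
}
```

Attached to a scroll or resize handler, this caps the callback rate regardless of how fast the events fire.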

12.4 Debounce


No matter how many times you trigger the callback within a given period, only the last one counts.

After the first passenger boards, the driver starts a ten-minute timer. If another passenger arrives within those ten minutes, the driver resets the timer and waits another ten minutes (the delay). Only when ten minutes pass with no new passenger does the driver decide that nobody else really needs the ride, and drive off.
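A minimal trailing-edge debounce sketch of that behavior: every call resets the timer, so `fn` runs only after `wait` milliseconds of silence. Again, a simplified illustration rather than a library implementation:

```javascript
// Debounce: fn runs once, `wait` ms after the *last* call
function debounce(fn, wait) {
  let timer = null; // the pending timer, cached as a free variable
  return function (...args) {
    clearTimeout(timer); // a new call cancels the previous wait
    timer = setTimeout(() => fn.apply(this, args), wait);
  };
}
```

This suits input validation or search-as-you-type, where only the state after the user stops typing matters.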

12.5 Shortcomings of debounce


The problem with debounce is that it is "too patient". Imagine the user triggering the event so frequently that the delay never gets a chance to finish: the timer is regenerated every time, and the callback is postponed indefinitely. Such endless postponement means delayed responses and the same "this page is stuck" feeling. To avoid this self-defeating outcome, we borrow the idea of throttling to give debounce a bottom line: during the delay, the timer may be regenerated for the user, but once the maximum wait is up, the user must get a response. Many mature front-end libraries combine throttling and debouncing this way to implement their enhanced throttle functions.
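A sketch of that "debounce with a bottom line" (the function name and structure are illustrative): calls keep resetting the timer like debounce, but once `wait` milliseconds have passed since the last actual invocation, the callback fires no matter what.

```javascript
// Debounce with a guaranteed response every `wait` ms at most
function throttledDebounce(fn, wait) {
  let last = 0;     // timestamp of the last actual invocation
  let timer = null; // the pending debounce timer
  return function (...args) {
    const now = Date.now();
    clearTimeout(timer);
    if (now - last >= wait) {
      // Past the bottom line: respond immediately
      last = now;
      fn.apply(this, args);
    } else {
      // Still inside the window: behave like plain debounce
      timer = setTimeout(() => {
        last = Date.now();
        fn.apply(this, args);
      }, wait);
    }
  };
}
```

The first call responds immediately; a burst of follow-up calls collapses into one trailing invocation, so the user is never left waiting indefinitely.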

13 Performance Monitoring

performance

function getsec(time) {
  return time / 1000 + 's'
}
window.onload = function () {
  // Note: performance.timing is deprecated in favor of PerformanceNavigationTiming,
  // but it is still widely available in browsers
  var per = window.performance.timing;
  console.log('DNS query time: ' + getsec(per.domainLookupEnd - per.domainLookupStart))
  console.log('TCP connection time: ' + getsec(per.connectEnd - per.connectStart))
  console.log('Request/response time: ' + getsec(per.responseEnd - per.responseStart))
  console.log('DOM rendering time: ' + getsec(per.domComplete - per.domInteractive))
  // Approximate white-screen time: from navigation start until the response begins arriving
  console.log('White screen time: ' + getsec(per.responseStart - per.navigationStart))
  console.log('DomReady time: ' + getsec(per.domContentLoadedEventEnd - per.fetchStart))
}