As a qualified front-end engineer, understanding how the browser works is the foundation of performance optimization. I have stressed the importance of a knowledge system before: a solid grasp of the underlying principles is what lets you map ever-changing real-world scenarios to targeted, practical solutions. Without it, reciting development checklists and performance-optimization rules makes it hard to even find the real problem, let alone solve it.
This series covers how browsers work, browser security, and performance monitoring and analysis. It will span two posts; today's post is the first.
Can you say something about the browser cache?
Caching is an important part of performance optimization, and the browser’s caching mechanism is an important part of development. The browser caching mechanism is clarified in three parts:
- Strong cache
- Negotiated cache
- Cache location
Strong cache
Browser caching comes into play in two situations: when an HTTP request does not need to be sent at all, and when it does.
The browser first checks the strong cache; if it hits, no HTTP request is sent.
How does it check? Through specific header fields, and this is where it gets a little more involved.
The field differs between HTTP versions: early HTTP/1.0 used Expires, while HTTP/1.1 uses Cache-Control. Let's look at Expires first.
Expires
Expires is an expiration time that exists in the response header returned by the server and tells the browser to retrieve data directly from the cache before the expiration time without having to request it again. Like this:
Expires: Wed, 22 Nov 2019 08:41:00 GMT
Indicates that the resource will expire at 8:41 on November 22, 2019. If the resource expires, you must send a request to the server.
This seems fine and reasonable, but there is a hidden problem: the server time and the browser time may differ, so the expiration time the server returns may be inaccurate. This approach was therefore abandoned in HTTP/1.1.
Cache-Control
In HTTP/1.1, a key field is used instead: Cache-Control. Like Expires, it lives in the response header; the difference is that it does not specify an absolute expiration point but a relative duration, via the max-age directive. Take this example:
Cache-Control:max-age=3600
This means that after the response is returned, the cache can be used directly within 3600 seconds, that is, an hour.
If you think it has only max-age, you’re wrong.
In fact, it can be combined with many other directives to handle more caching scenarios. Some key ones are listed below:
- public: both the client and proxy servers can cache the response. A request may pass through several proxies on its way to the origin server, so the result can be cached not only by the browser but by any intermediate proxy node.
- private: only the browser can cache the response; intermediate proxy servers cannot.
- no-cache: skip the strong cache and send an HTTP request, i.e. go straight to the negotiated-cache phase.
- no-store: very blunt, no caching of any kind.
- s-maxage: similar to max-age, but it sets the cache lifetime for proxy servers.
It's worth noting that when both Expires and Cache-Control exist, Cache-Control takes precedence.
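For instance, a response that sets both mechanisms might carry headers like the following (a hypothetical example; the browser honors Cache-Control and ignores Expires here):

```
Cache-Control: public, max-age=3600
Expires: Wed, 22 Nov 2019 08:41:00 GMT
```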
Of course, there is another case, when the resource cache time out, that is, the strong cache is invalid, what happens next? Yes, that brings us to the second barrier, the negotiated cache.
Negotiated cache
When the strong cache is invalid, the browser sends a request to the server with the corresponding cache tag in the request header. The server decides whether to use the cache based on the tag, which is called the negotiated cache.
Specifically, there are two kinds of such cache tags: Last-Modified and ETag. Each has its pros and cons; unlike the two strong-cache fields above, neither has an absolute advantage over the other.
Last-Modified
That is, the last modification time. After the browser sends a request to the server for the first time, the server adds this field to the response header.
When the browser requests the resource again, it carries the If-Modified-Since field in the request header, whose value is the Last-Modified value previously returned by the server.
When the server receives If-Modified-Since, it compares it with the last modification time of the resource on the server:
- If the value in the request header is earlier than the resource's last modification time, the resource has been updated: return the new resource, just like a normal HTTP response.
- Otherwise, return 304 and tell the browser to use the cache directly.
ETag
The ETag is a unique identifier generated by the server for a file based on the content of the current file. This value changes whenever the content of the file changes. The server gives this value to the browser through the response header.
The browser receives the ETag value, puts it in the If-None-Match field, and sends it with the next request.
When the server receives If-None-Match, it compares it with the resource's current ETag on the server:
- If they’re different, it’s time to update. Return the new resource, just like a normal HTTP request response.
- Otherwise, return 304 and tell the browser to use the cache directly.
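As a concrete illustration, a negotiated-cache exchange might look like this (the path and values are hypothetical):

```
# First response from the server
HTTP/1.1 200 OK
Last-Modified: Wed, 04 Dec 2019 12:29:13 GMT
ETag: "33a64df551425fcc55e4d42a148795d9"

# A later request from the browser
GET /logo.png HTTP/1.1
If-Modified-Since: Wed, 04 Dec 2019 12:29:13 GMT
If-None-Match: "33a64df551425fcc55e4d42a148795d9"

# If nothing changed, the server answers with no body
HTTP/1.1 304 Not Modified
```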
Comparing the two:
- In precision, ETag is better than Last-Modified. ETag identifies a resource by its content, so it accurately reflects changes, while Last-Modified can fail to notice changes in two special cases:
  - Editing a resource file without changing its contents still updates the modification time and needlessly invalidates the cache.
  - Last-Modified has one-second granularity, so if a file changes several times within a second, Last-Modified does not reflect it.
- In performance, Last-Modified is better than ETag, which is easy to understand: Last-Modified is just a timestamp, whereas an ETag hash has to be generated from the file's contents.
In addition, if both methods are supported, the server will give preference to ETag.
Cache location
As mentioned earlier, when the strong cache hits, or when the server returns 304 in the negotiated cache, we fetch the resource directly from the cache. So where are these resources cached?
There are four types of cache locations in the browser, in descending order of priority:
- Service Worker
- Memory Cache
- Disk Cache
- Push Cache
Service Worker
The Service Worker borrows its idea from the Web Worker: it runs JS off the main thread, and because it is detached from the page, it cannot directly access the DOM. Still, it enables many useful capabilities, such as offline caching, message push, and network proxying. The offline cache here is the Service Worker Cache.
Service Worker is also an important implementation mechanism of PWA. Details and features of Service Worker will be introduced in detail in the PWA sharing later.
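As a rough illustration of the offline-cache idea, a minimal setup might look like the sketch below (the file name sw.js and the asset list are assumptions made for this example):

```js
// In page code: register the service worker if the browser supports it.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker.register("/sw.js");
}

// In sw.js: pre-cache a few assets on install, then serve them cache-first.
const CACHE_NAME = "demo-cache-v1";

self.addEventListener("install", (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(["/", "/logo.png"]))
  );
});

self.addEventListener("fetch", (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```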
Memory Cache and Disk Cache
Memory Cache is the in-memory cache. It is the fastest in terms of access, but also the shortest-lived: once the rendering process ends, the memory cache is gone.
Disk Cache is a Cache stored on disks. It is slower than memory Cache in terms of access efficiency, but it has advantages in storage capacity and storage duration. It should be easy to understand if you have some basic computer skills, so I won’t go into it.
Ok, so the question is, given the pros and cons of each, how does the browser decide whether to put resources in memory or hard drive? The main strategies are as follows:
- Large JS and CSS files will be directly thrown into the disk, otherwise thrown into memory
- When the memory usage is high, files are preferentially transferred to the disk
Push Cache
Push Cache is the last line of defense in the browser cache. It is part of HTTP/2; although it is not widely used yet, it will become more common as HTTP/2 spreads. There is a lot to dig into about the Push Cache, but it is not the focus of this article; see the extended reading for more.
conclusion
A quick summary of browser caching mechanisms:
- First, check whether the strong cache is available via Cache-Control
- If the strong cache is available, use it directly
- Otherwise, enter the negotiated cache: send an HTTP request, and the server uses the If-Modified-Since or If-None-Match field in the request header to check whether the resource has been updated
  - If the resource has been updated, return the resource with a 200 status code
  - Otherwise, return 304, telling the browser to fetch the resource directly from the cache
Can you say something about the browser’s local storage? What are their strengths and weaknesses?
Local storage in the browser mainly includes Cookie, Web Storage, and IndexedDB, where Web Storage is further divided into localStorage and sessionStorage. Let's look at the application scenarios of each.
Cookie
Cookies were not originally designed for local storage, but to make up for HTTP’s shortcomings in state management.
The HTTP protocol is a stateless protocol. The client sends a request to the server, the server sends a response, and the story ends. But how do you tell the server who the client is next time you send a request?
In this context, cookies are created.
A Cookie is essentially a small text file stored in the browser as key-value pairs (you can see it in the Application panel of Chrome DevTools). Every request to the same domain carries the same Cookie; the server parses it and can recover the client's state.
The Cookie’s use as a state store is easy to understand, but it also has a number of fatal drawbacks:
- Capacity defects. Cookies have a maximum size of about 4KB and can only store a small amount of information.
- Performance defects. Cookies follow the domain: every request to any address under that domain carries the full Cookie, whether it is needed or not. As the number of requests grows, this wastes performance because requests carry a lot of unnecessary content.
- Security defects. Cookies are passed between browser and server in plain text, so they are easy to intercept, tamper with, and replay to the server within their validity period, which is quite dangerous. In addition, when HttpOnly is false, the Cookie can be read directly by JS scripts.
localStorage
Similarities and differences with Cookie
localStorage is somewhat like Cookie in that it is also scoped to a domain: pages under the same domain share the same localStorage.
However, it differs from Cookie in quite a few ways:
- Capacity. localStorage has a cap of about 5M per domain, a big increase over the Cookie's 4K, and the data persists for that domain unless it is cleared manually.
- It exists only on the client and does not communicate with the server by default, which nicely avoids the performance and security problems of Cookies.
- Interface encapsulation. localStorage is exposed globally and is easy to use through its setItem and getItem methods.
Mode of operation
Let’s take a look at how to operate localStorage.
let obj = { name: "sanyuan", age: 18 };
localStorage.setItem("name", "sanyuan");
localStorage.setItem("info", JSON.stringify(obj));
Then enter the same domain name and get the corresponding value:
let name = localStorage.getItem("name");
let info = JSON.parse(localStorage.getItem("info"));
Note that localStorage can only store strings, so an object like info above must be serialized with JSON.stringify before being stored and parsed with JSON.parse when read back.
Application scenarios
Given localStorage's large capacity and persistence, you can use it to store stable static resources, such as the official site logo or Base64-encoded image data.
sessionStorage
Characteristics
SessionStorage is consistent with localStorage in the following aspects:
- Capacity. The capacity is also capped at 5M.
- Only a client exists and does not communicate with the server by default.
- Interface encapsulation. Apart from the name sessionStorage, the storage format and operation methods are the same as localStorage.
But there is a fundamental difference between sessionStorage and localStorage, that is, the former is only session level storage, not persistent storage. The session ends, the page closes, and this part of the sessionStorage no longer exists.
Application scenarios
- You can use it to maintain form data: store the form contents in it so that they are not lost even if the page is refreshed.
- You can use it to store the current session's browsing history. If those records are no longer needed once the page is closed, sessionStorage is a very good fit; in fact, this is how Weibo stores its history.
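A minimal sketch of the form-preservation idea (the input id and storage key are made up for this example):

```js
const input = document.querySelector("#username");

// Restore any draft saved earlier in this session, e.g. before an accidental refresh.
input.value = sessionStorage.getItem("draft-username") || "";

// Save the draft as the user types.
input.addEventListener("input", () => {
  sessionStorage.setItem("draft-username", input.value);
});
```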
IndexedDB
IndexedDB is a non-relational database that runs in the browser. Since it is essentially a database, its storage capacity is not on the same order as Web Storage's 5M limit.
For its use, this article focuses on the principles, and the tutorial documentation on MDN is already very detailed, so I won’t go into detail here, but if you are interested, you can check out the documentation.
In addition to the database features, such as support for transactions and the storage of binary data, there are a few features that need to be noted:
- Key-value storage. Data is stored internally in object stores, where everything is kept as key/value pairs.
- Asynchronous operation. Database reads and writes are I/O operations, and the browser performs them asynchronously so they do not block the page.
- Restricted by the same-origin policy: a cross-origin database cannot be accessed.
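A minimal usage sketch (the database name, store name, and record are made up for illustration):

```js
// Open (or create) a database; the object store is created on first use.
const request = indexedDB.open("demo-db", 1);

request.onupgradeneeded = (event) => {
  const db = event.target.result;
  db.createObjectStore("users", { keyPath: "id" });
};

request.onsuccess = (event) => {
  const db = event.target.result;
  // All reads and writes go through asynchronous transactions.
  const tx = db.transaction("users", "readwrite");
  tx.objectStore("users").put({ id: 1, name: "sanyuan" });
  tx.oncomplete = () => console.log("write finished");
};
```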
conclusion
The development of various local storage and caching technologies in browsers has created a lot of opportunities for front-end applications, and PWA has developed on the back of these excellent storage solutions. To recap these local storage solutions:
- cookie: not really suitable for storage, with many defects.
- Web Storage: includes localStorage and sessionStorage; by default it does not communicate with the server.
- IndexedDB: a non-relational database running in the browser that provides an interface for storing large amounts of data.
What happens from the input URL to the page rendering? — Network
This question can be answered in essentially unlimited depth; its purpose is to probe how solid your networking fundamentals are. Given limited space and ability, I will walk through the important parts of the process, which should be enough to give an impressive answer in most situations.
Let me say up front: since this is a very comprehensive question, I may dig into details at certain points. Personally, I think learning is a step-by-step process: once you understand the whole flow, studying those details on your own will give you a deeper grasp of the knowledge system. I also provide references for the extended details, so after reading this article feel free to study further and broaden your knowledge.
All right, let’s get started.
At this moment, you type Baidu's URL into the browser address bar:
https://www.baidu.com/
Network request
1. Build the request
The browser builds the request line:
// The request method is GET, the path is the root path, and the HTTP protocol version is 1.1
GET / HTTP/1.1
2. Check the strong cache
First check strong cache, if hit directly use, otherwise go to the next step. Refer to the previous article for more information on strong caching.
3. DNS resolution
Because we entered a domain name while data packets are sent to an IP address, we need to obtain the IP address corresponding to the domain name. This relies on a service system that maps domain names to IP addresses, called DNS (Domain Name System). Obtaining the concrete IP address is the process of DNS resolution.
It is also worth noting that browsers provide DNS caching: if a domain has been resolved before, the result is cached, and the next lookup uses the cache directly without doing DNS resolution again.
If the URL does not specify a port, the protocol's default port on that IP address is used: 80 for HTTP and 443 for HTTPS.
4. Set up a TCP connection
One caveat here: Chrome allows at most six TCP connections to the same domain at the same time; beyond that, the remaining requests have to wait.
Assuming there is no need to wait now, we enter the setup phase of the TCP connection. First, let’s explain what TCP is:
Transmission Control Protocol (TCP) is a connection-oriented, reliable, and byte stream-based transport layer communication Protocol.
Establishing a TCP connection goes through the following three phases:
- The connection between client and server is established through a three-way handshake (three packets are exchanged to confirm the connection).
- Data transfer takes place. An important mechanism is that the receiver must return an acknowledgment for every packet it receives; if the sender does not receive the acknowledgment, it treats the packet as lost and resends it. There is also an optimization when sending: a large packet is split into smaller ones, and the receiver reassembles them into the complete packet in order.
- Disconnection phase. When data transfer is complete, the connection is closed with a four-way handshake.
By this point, you should see what TCP does to ensure reliable data transmission: first, a three-way handshake to confirm the connection; second, per-packet acknowledgment to guarantee the data reaches the receiver; third, a four-way handshake to tear the connection down.
Of course, you can go deeper: why three handshakes rather than two? What if the third handshake fails? Why four waves to close? These questions touch on fundamental, fairly low-level but very important computer-networking details, and they are well worth studying; the recommended article linked here should give you some inspiration.
5. Send an HTTP request
Now that the TCP connection is established, the browser can start communicating with the server, that is, sending HTTP requests. Browsers carry three things when making HTTP requests: a request line, a request header, and a request body.
First, the browser sends a request line to the server. For the request line, we build it in the first step of this section. Post the following:
// The request method is GET, the path is the root path, and the HTTP protocol version is 1.1
GET / HTTP/1.1
The structure is simple and consists of the request method, the request URI, and the HTTP version protocol.
The request header is also sent. Fields such as Cache-Control, If-Modified-Since, and If-None-Match may appear in it as cache information, along with a number of other properties, for example:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
Accept-Encoding: gzip, deflate, br
Accept-Language: zh-CN,zh;q=0.9
Cache-Control: no-cache
Connection: keep-alive
Cookie: /* Cookie information omitted */
Host: www.baidu.com
Pragma: no-cache
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (iPhone; CPU iPhone OS 11_0 like Mac OS X) AppleWebKit/604.1.38 (KHTML, like Gecko) Version/11.0 Mobile/15A372 Safari/604.1
Finally, there is the request body, which only exists under the POST method, and the common scenario is form submission.
The network response
The HTTP request reaches the server, the server processes it, and finally sends data back to the browser: that is, it returns a network response.
Like the request part, the network response has three parts: the response line, the response header, and the response body.
The response line looks like this:
HTTP/1.1 200 OK
It consists of the HTTP protocol version, status code, and status description.
The response header contains information about the server and the returned data, such as when the data was generated, its type, and the Cookie the server wants to set.
Here are some examples:
Cache-Control: no-cache
Connection: keep-alive
Content-Encoding: gzip
Content-Type: text/html; charset=utf-8
Date: Wed, 04 Dec 2019 12:29:13 GMT
Server: apache
Set-Cookie: rsv_i=f9a0SIItKqzv7kqgAAgphbGyRts3RwTg%2FLyU3Y5Eh5LwyfOOrAsvdezbay0QqkDqFZ0DfQXby4wXKT8Au8O7ZT9UuMsBq2k; path=/; domain=.baidu.com
What happens when the response is complete? Is the TCP connection disconnected?
Not necessarily. If the request header or the response header contains Connection: keep-alive, a persistent connection has been established: the TCP connection is kept open and reused by later requests to the same site.
Otherwise, disconnect the TCP connection, and the request-response process ends.
conclusion
To summarize the main content, which is the web request process on the browser side:
What happens from the input URL to the page rendering? — Parsing algorithm
The network request and response are complete. If the Content-Type value in the response header is text/html, it is the browser's turn to parse and render.
Let's start with the parsing part, which mainly involves the following steps:
- Building the DOM tree
- Style calculation
- Generating the layout tree (Layout Tree)
Build a DOM tree
Because HTML strings cannot be understood directly by the browser, the byte stream is transformed into a meaningful, easy-to-manipulate data structure: the DOM tree. The DOM tree is essentially a multi-way tree with document as its root node.
So what’s the way to parse it?
The nature of HTML grammar
First, it should be clear that the grammar of HTML is not context-free.
Here, it is worth discussing what a context-free grammar is.
In the discipline of compiler principles in computer science, there is a very clear definition:
If all production rules of a formal grammar G = (N, Σ, P, S) have the form V -> w, the grammar is called context-free, where V ∈ N and w ∈ (N ∪ Σ)*.
The parameters in G = (N, Σ, P, S) mean the following:
- N is the set of non-terminal symbols (as the name implies, symbols that are not final and can be expanded further).
- Σ is the set of terminal symbols.
- P is the set of production rules, such as S -> aSb.
- S is the start symbol; it must belong to N, i.e. it is a non-terminal.
In plain terms, a context-free grammar is one in which the left-hand side of every production is a single non-terminal.
Now, if you’re a little confused, let me give you an example.
Such as:
A -> B
In this grammar, the left side of the production is a single non-terminal, so it is context-free: wherever A appears, xAy can be rewritten to xBy regardless of the surrounding x and y.
Let’s take a look at a counterexample:
aA -> B
Aa -> B
This is not a context-free grammar. When we see A, we cannot tell whether it can be rewritten to B without checking whether there is an a to its left or right; in other words, the rewrite depends on the context.
One thing to note: standard, well-formed HTML syntax does conform to a context-free grammar; what makes HTML parsing context-sensitive is its non-standard, error-tolerant syntax. A single counterexample is enough to show this.
For example, when the parser scans a form tag, a context-free grammar would create the corresponding form DOM object directly. That is not what happens in a real HTML5 scenario: the parser looks at the context of the form tag; if its parent is also a form, the current form tag is skipped, otherwise the DOM object is created.
Ordinary programming languages are context-free, but HTML is not, and it is exactly this context-sensitivity that makes a conventional language parser unusable for HTML, so a new approach is needed.
Parsing algorithm
The HTML5 specification describes the parsing algorithm in detail. It is divided into two stages:
- Tokenization
- Tree construction
These correspond to lexical analysis and syntactic analysis, respectively.
Tokenization algorithm
This algorithm takes HTML text as input and HTML tokens as output, so it is also called the tokenizer. It is implemented as a finite state machine: on receiving one or more characters in the current state, it transitions to the next state.
<html>
<body>
Hello sanyuan
</body>
</html>
A simple example demonstrates the tokenization process.
When < is encountered, the state switches to tag open.
When a character in [a-z] is received, the tag name state is entered.
The parser stays in this state until > is encountered, meaning the tag name has been fully recorded, and the state then switches to data.
Do the same next time you encounter the body tag.
At this point the HTML and body tags are recorded.
Next comes the text Hello sanyuan: the parser enters the data state and stays there while receiving these characters.
Then it receives the < of </body>, returning to the tag open state; the next character is /, at which point an end-tag token is created.
It then enters the tag name state and returns to the data state when > is encountered.
The </html> that follows is processed in the same way.
Tree-building algorithm
As mentioned earlier, the DOM tree is a multi-way tree with document as its root, so the parser first creates a document object. The tokenizer sends information about each token to the tree builder, which creates the corresponding DOM object when it receives a tag token. Creating this DOM object does two things:
- Add the DOM object to the DOM tree.
- Push the corresponding token onto the stack of open elements (tags whose closing tag has not yet been seen).
Here’s another example:
<html>
<body>
Hello sanyuan
</body>
</html>
First, the state is the initialized state.
The state changes to before html when the html token is received from the tokenizer. At the same time, an HTMLHtmlElement DOM element is created, added under the document root object, and pushed onto the stack.
Then the state changes to before head. The tokenizer next sends body, which means there is no head element; the tree builder nevertheless automatically creates an HTMLHeadElement and adds it to the DOM tree.
Now go to the in head state and jump straight to after head.
Now the tag generator passes the body tag, creates the HTMLBodyElement, inserts it into the DOM tree, and pushes it into the open tag stack.
Then the state changes to in body, and the subsequent characters Hello sanyuan are received. On the first character, a Text node is created and inserted below the body element in the DOM tree; the remaining characters are appended to that Text node.
Now, the tag generator passes a closing tag for the body and enters the After body state.
The tag generator finally passes in an HTML closing tag and enters the After After Body state, indicating that the parsing process is over.
Fault-tolerant mechanism
Speaking of the HTML5 specification, one has to mention its strong error-tolerance strategy. Opinions about it are mixed, but as a senior front-end engineer it is worth knowing what the HTML Parser does for fault tolerance.
Here are some classic examples of fault tolerance in WebKit, and you are welcome to add others.
- Using </br> instead of <br>
if (t->isCloseTag(brTag) && m_document->inCompatMode()) {
reportError(MalformedBRError);
t->beginTag = true;
}
In other words, WebKit reports the error but treats the stray </br> as a normal <br>.
- Stray tables. A table nested inside another table, but not inside a table cell:
<table>
<table>
<tr><td>inner table</td></tr>
</table>
<tr><td>outer table</td></tr>
</table>
WebKit will automatically convert to:
<table>
<tr><td>outer table</td></tr>
</table>
<table>
<tr><td>inner table</td></tr>
</table>
- Nested form elements
If a form appears inside another form, the inner form tag is ignored.
Style calculation
CSS styles generally come from three sources:
- Stylesheets referenced by link tags
- Styles inside style tags
- Inline style attributes on elements
Format style sheet
First of all, CSS style text is not directly recognized by browsers, so the first thing the rendering engine does when it receives CSS text is to convert it into a structured object called styleSheets.
The formatting process is too complex, and there are different optimization strategies for different browsers, so I won’t go into it here.
The final structure can be viewed in the browser console via document.stylesheets. Of course, this structure incorporates all three of the above CSS sources, providing the basis for subsequent styling operations.
Standardized style attributes
Some CSS style values are not easily understood by the rendering engine, so they need to be normalized before computing styles, such as em -> px, red -> #ff0000, bold -> 700, etc.
Calculate the specific style of each node
Now that the styles have been formatted and normalized, you can calculate the specific style information for each node.
In fact, the calculation method is not complicated, mainly two rules: inheritance and cascade.
Each child node inherits the style properties of the parent node by default, and if not found in the parent node, the browser default style, also known as the UserAgent style, is adopted. That’s the inheritance rule, and it’s pretty easy to understand.
Then there are the cascade rules. The biggest characteristic of the CSS cascade is that the final style depends on the combined effect of individual properties, and there are even many strange cascading phenomena; readers of "CSS World" will know this well. The specific cascade rules are a deep topic within CSS itself, so I won't expand on them here.
Incidentally, once the styles are computed, you can inspect the final computed style of any node in the console via window.getComputedStyle.
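A quick illustration in the console (the selector is made up for this example):

```js
const el = document.querySelector("h1");
const computed = window.getComputedStyle(el);

// Values come back normalized: colors as rgb() and lengths in px.
console.log(computed.color);    // e.g. "rgb(255, 0, 0)" rather than "red"
console.log(computed.fontSize); // e.g. "32px" even if it was authored in em
```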
Generate layout tree
Now that the DOM Tree and DOM styles have been generated, the next step is to use the browser’s Layout system to determine the location of the elements, which is to generate a Layout Tree.
Layout tree generation generally works as follows:
- Traverse the nodes of the generated DOM tree and add them to the layout tree.
- Calculate the coordinate positions of the nodes in the layout tree.
Note that the layout tree contains only visible elements; the contents of the head tag and elements with display: none are not included.
Some people say a Render Tree is generated first, but that was the case several years ago; the Chrome team has since done a lot of refactoring and the Render Tree no longer exists as such. The layout tree's information is now complete enough to fully take over the Render Tree's role.
I won't go into the details of layout itself because it is too complex to cover exhaustively; most of the time you only need to know what it does. If you want to dig into its internal principles, I strongly recommend the Renrenfed team's article analysing how browser layout works from the Chrome source code.
conclusion
Let’s take a look at the main thread of this section:
Part 5: What happens from the input URL to the page rendering? — Rendering process
The previous section described the process of browser parsing, which includes building the DOM, styling calculations, and building a layout tree.
Now it’s time to break down the next process — rendering. There are several steps:
- Build the layer tree (Layer Tree)
- Generate the drawing list
- Generate tiles and bitmaps
- Display the content
1. Create a layer tree
If you think that now that you have DOM nodes, style and location information, you’re ready to start drawing your page, you’re wrong.
That's because other complex scenarios have not been considered yet, for example how 3D animations render transitions, and how elements are shown and hidden when stacking contexts are involved.
To solve the problem described above, after the browser builds the layout Tree, it also layers specific nodes to build a Layer Tree.
So what is this layer tree based on?
In general, a node's layer belongs to its parent node's layer by default (these layers are also called compositing layers). So when is a node promoted into a separate compositing layer?
There are two cases to discuss: explicit compositing and implicit compositing.
Explicit compositing
The following situations trigger explicit compositing:
First, nodes that have a stacking context.
A stacking context is created by certain CSS properties; the common conditions are:
- The html root element itself has a stacking context.
- An ordinary element whose position is not static and which has a z-index set creates a stacking context.
- The element's opacity is not 1.
- The element's transform is not none.
- The element's filter is not none.
- The element's isolation is isolate.
- will-change specifies any of the properties above. (The effect of will-change is discussed in more detail later.)
Second, places that need clipping.
For example, if a div is only 100 by 100 pixels and you put a lot of text into it, the overflowing text needs to be clipped. If a scrollbar appears, the scrollbar is promoted to a separate layer as well.
Implicit compositing
Next is implicit compositing: simply put, once a lower-level node is promoted to a separate layer, all nodes stacked above it are promoted to separate layers as well.
This implicit compositing hides a big risk: in a large application, if an element with a low z-index is promoted to its own layer, every element stacked above it gets promoted too, which can add thousands of layers, greatly increasing memory pressure and even crashing the page outright. That is how a layer explosion happens. A concrete example is linked here.
It is worth noting that when a layer needs to be repainted, only that layer itself is repainted; the other layers are not affected.
2. Generate a drawing list
The rendering engine then breaks the layer drawing into separate drawing commands, such as draw the background first, then draw the border…… These instructions are then combined in order into a list to be drawn, which is equivalent to a wave of planning for subsequent draw operations.
In Chrome developer Tools, you can expand more Tools in the Settings bar and select Layers to see the following drawing list:
3. Generate tiles and bitmaps
Now start drawing, which is actually done by a special thread in the render process, called the Compositing thread.
Once the draw list is ready, the main thread of the rendering process sends a COMMIT message to the compositing thread, submitting the draw list to the compositing thread. Now it’s time for compositing threads to take on the big picture.
First, consider that the viewport is only so big. When a page is very long, it may take a while before the user scrolls to the bottom, so painting everything at once wastes performance. Therefore, the first thing the compositing thread does is divide the layers into tiles, usually 256 x 256 or 512 x 512 in size, which greatly speeds up the first-screen display.
Because tile data has to be uploaded into GPU memory, and uploading from browser memory to GPU memory is relatively slow, even drawing part of the tiles could take a long time. To address this, Chrome adopts a strategy: use low-resolution images for the first composite, so the first screen shows a low-resolution picture while compositing continues; when the normal tile content finishes rendering, it replaces the low-resolution tiles. This is another of Chrome's optimizations for first-screen loading speed.
As a side note, the rendering process maintains a rasterized thread pool dedicated to converting blocks into bitmap data.
The composite thread then selects the block near the viewport and gives it to the rasterized thread pool to generate the bitmap.
The bitmap generation process is actually accelerated using the GPU, and the resulting bitmap is sent to the compositing thread.
4. Display the content
When rasterization is complete, the compositing thread generates a drawing command called “DrawQuad” and sends it to the browser process.
The Viz component in the browser process receives this command and, following it, draws the page contents into memory, that is, generates the page, and then sends that memory to the graphics card. Why the graphics card? It is worth briefly explaining how the display works.
Whether it is a PC monitor or a phone screen, there is a fixed refresh rate, typically 60 Hz, i.e. 60 frames: the picture updates 60 times per second, and each picture stays on screen for about 16.7 ms. Each update comes from the graphics card's front buffer. When the graphics card receives a page from the browser process, it composites the corresponding image and stores it in the back buffer; the system then swaps the front and back buffers, and the cycle repeats.
As you can see, when an animation consumes a lot of resources, the browser produces frames slowly and images are not delivered to the graphics card in time, while the display keeps refreshing at a constant rate; the result is stuttering and visibly dropped frames.
conclusion
Now that we’ve got the whole process out of the way, let’s go over the rendering process again.
Talk about your understanding of repaint and reflow.
Let’s first review the flow of the render pipeline:
We will use it as a basis to talk about reflow and repaint, and about another way of updating the view: compositing.
Reflow
First, reflow, which is also called re-layout (rearrangement).
The trigger condition
Simply put, reflow happens when we change the DOM in a way that causes the DOM's geometry to change.
Specifically, the following operations trigger reflow:
- A DOM element's geometric properties change. Common geometric properties include width, height, padding, margin, left, top, border, and so on.
- DOM nodes are added, removed, or moved.
- Offset-family, scroll-family, or client-family properties are read or written; the browser needs to perform a reflow to obtain up-to-date values for them.
- window.getComputedStyle is called.
Reflow process
Following the rendering pipeline above, when reflow is triggered, if the DOM structure changed, the DOM tree is rebuilt and every subsequent step of the pipeline (including tasks outside the main thread) runs again.
It is effectively a full replay of the parsing and compositing process, so the cost is very high.
Repaint
The trigger condition
When a DOM modification results in a style change that does not affect geometric properties, it results in a repaint.
Repaint process
Since the DOM geometry has not changed, the elements' position information does not need to be updated, so the layout step is skipped. The process is as follows:
Generation of the layout tree and the layer tree is skipped; the drawing list is generated directly, followed by tiling, bitmap generation, and so on.
As you can see, a repaint does not necessarily imply a reflow, but a reflow always causes a repaint.
Compositing
There is a third case: skipping straight to compositing. For example, CSS3's transform, opacity, and filter properties can achieve this compositing effect, which is commonly called GPU acceleration.
Reason for GPU acceleration
In the compositing case, the layout and paint stages are skipped and the remaining work is handed directly to the compositing thread. This has two main benefits:
- It makes full use of the GPU. Generating bitmaps in the compositing thread uses the rasterization thread pool and GPU acceleration, and GPUs are good at handling bitmap data.
- It does not take resources away from the main thread: even if the main thread is blocked, the effect still runs smoothly.
Practical significance
Knowing this, what guidance does it give for everyday development?
- Avoid frequently manipulating style directly; modify a class instead.
- Use createDocumentFragment to batch DOM operations (see the sketch after this list).
- Debounce or throttle resize and scroll handlers.
- Add will-change: transform to let the rendering engine create a separate layer for the element; when the transform changes, only the compositing thread is involved instead of the main thread, which greatly improves rendering efficiency. This is not limited to transform: any CSS property that can achieve the compositing effect can be declared with will-change. There is a real-world case where a single line of will-change: transform saved a project; see the linked example.
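A minimal sketch of the first two points (the list element and class name are made up for this example):

```js
// Batch DOM insertions: build nodes in a fragment, then touch the live DOM once.
const list = document.querySelector("#list");
const fragment = document.createDocumentFragment();

for (let i = 0; i < 100; i++) {
  const li = document.createElement("li");
  li.textContent = "item " + i;
  fragment.appendChild(li);
}
list.appendChild(fragment); // one reflow instead of one hundred

// Batch style changes: toggle a class instead of setting style properties one by one.
list.classList.add("highlighted");
```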
Can you tell us something about XSS attacks?
What is an XSS attack?
XSS stands for Cross-Site Scripting; it is abbreviated XSS to avoid confusion with CSS. An XSS attack means a malicious script is executed in the browser (whether cross-domain or same-domain) in order to obtain the user's information and act on it.
These operations generally do the following:
- Stealing Cookies.
- Monitoring user behavior, for example capturing entered account names and passwords and sending them straight to the attacker's server.
- Modifying the DOM to forge a login form.
- Generating floating ad windows in the page.
XSS attacks are usually carried out in one of three ways: stored, reflected, and document-based. The principles are fairly simple; let's go through them one by one.
Stored
Stored XSS, as the name implies, stores the malicious script: the script is saved to the database on the server side and later executed on the client, achieving the attack.
A common scenario is submitting a script in a comment section. If neither the front end nor the back end escapes it properly, the comment is saved to the database and executed directly when the page renders, which amounts to running arbitrary JS of unknown origin. This is a stored XSS attack, and it is very scary.
Reflected
Reflective XSS refers to malicious scripts as part of a network request.
For example, IF I type:
http://sanyuan.com?q=<script>alert("You are finished")</script>
The browser parses the content as part of the HTML, finds out it’s a script, executes it directly, and is attacked.
It’s called reflective because the malicious script is parsed by passing through the server as a parameter to a network request and then reflected back into an HTML document. Unlike the storage type, the server does not store these malicious scripts.
Document-based
A document-based XSS attack does not go through the server; instead, an attacker acting as a man in the middle hijacks the network packet during transmission and modifies the HTML document inside it!
Such hijackings can include WIFI router hijacking or local malware.
Measures to prevent
After understanding the principles of the three XSS attacks, we can see one thing in common: they all allow malicious scripts to be executed directly in the browser.
To guard against it, avoid the execution of the script code.
To accomplish this, you need to hold one belief and make use of two things.
A belief
Never trust any user input!
Transcoding or filtering of user input should be carried out on both the front-end and the server.
Such as:
<script>alert('You are finished')</script>
After transcoding, it becomes:
&lt;script&gt;alert(&#39;You are finished&#39;)&lt;/script&gt;
Such code cannot be executed during HTML parsing.
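A minimal escaping helper as a sketch of the idea (the replaced character set is deliberately small; real projects usually rely on a vetted library or the template engine's built-in escaping):

```js
function escapeHTML(str) {
  return String(str)
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

// escapeHTML('<script>alert("hi")</script>')
// -> '&lt;script&gt;alert(&quot;hi&quot;)&lt;/script&gt;'
```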
Of course, you can also use keyword filtering to delete script tags. So all that’s left now is:
Nothing 🙂
Using the CSP
CSP is the browser's Content Security Policy. Its core idea is that the server decides which resources the browser is allowed to load. Specifically, CSP can do the following:
- Restrict resource loading from other domains.
- Forbid submitting data to other domains.
- Provide a reporting mechanism that helps us detect XSS attacks in time.
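For example, a response header along these lines (the directive values are only an illustration) restricts scripts to the site's own origin and reports violations:

```
Content-Security-Policy: default-src 'self'; script-src 'self'; report-uri /csp-report
```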
Using the HttpOnly
Many XSS attack scripts are designed to steal Cookies. When a Cookie is set with the HttpOnly attribute, JavaScript cannot read its value, which is also a good defense against XSS attacks.
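A sketch of such a response header (cookie name and value are made up):

```
Set-Cookie: session_id=abc123; Path=/; Secure; HttpOnly
```

With HttpOnly set, document.cookie in page scripts will not expose session_id.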
conclusion
An XSS attack is when a malicious script is executed in a browser and the user’s information is taken and acted upon. Mainly divided into storage type, reflection type and document type. Preventive measures include:
- A belief: Don’t trust user input, transcode or filter it to make it unenforceable.
- Two uses: using CSP, using the Cookie HttpOnly attribute.
Can you say something about CSRF attacks?
What is a CSRF attack?
Cross-Site Request Forgery (CSRF) means a hacker lures a user into clicking a link that opens the hacker's site, and then exploits the user's current login state to initiate cross-site requests.
For example, if you click on an image of your little sister that was carefully selected by the hacker in a forum, you click on it and it takes you to a new page.
So congratulations, you were attacked 🙂
You may be wondering, how can you suddenly be attacked? Here’s a breakdown of what the hacker did behind the scenes when you clicked on the link.
There are three things the hacker might do, listed below:
1. Automatically sends GET requests
The hacker’s web page might contain a piece of code like this:
<img src="https://xxx.com/info?user=hhh&count=100">
When you enter the page, you automatically send a GET request. It’s worth noting that this request automatically includes cookie information about xxx.com (assuming you’re already logged in to xxx.com).
If the server has no corresponding verification mechanism, it may take this request for a normal user request because it carries the corresponding Cookie, and then carry out the requested operations, which could be malicious ones such as transfers and payments.
2. Automatically sends a POST request
The hacker may have filled out a form and written a script that automatically submitted it.
<form id='hacker-form' action="https://xxx.com/info" method="POST">
<input type="hidden" name="user" value="hhh" />
<input type="hidden" name="count" value="100" />
</form>
<script>document.getElementById('hacker-form').submit();</script>
It also carries the corresponding user cookie information, making the server mistakenly think that it is a normal user operating, making all kinds of malicious operations possible.
3. Induce click to send GET request
On the hacker’s website, there might be a link that drives you to click:
<a href="https://xxx/info?user=hhh&count=100" target="_blank">Click to enter the world of xiuxian</a>
After clicking, the GET request is sent automatically, and the rest is the same as in the automatic GET request case.
This is how a CSRF attack works. Compared with XSS, CSRF does not need to inject malicious code into the HTML document of the user's current page; instead it opens a new page and exploits the server's verification loopholes and the user's previous login state to simulate the user's operations.
Measures to prevent
1. Use the SameSite attribute of the Cookie
A key part of a CSRF attack is automatically sending the Cookie of the target site, and this Cookie then impersonates the user's identity. So working on the Cookie itself is the natural place to start defending.
As it happens, there is a key field in cookies that can restrict the portability of cookies in requests, and that field is SameSite.
SameSite can be set to three values: Strict, Lax, and None.
A. In Strict mode, the browser completely forbids carrying the Cookie in third-party requests. For example, the Cookie of sanyuan.com is carried only on requests made from pages under sanyuan.com, never on requests initiated from other sites.
B. Lax mode is more relaxed: the Cookie is carried only for top-level navigations using GET, such as a GET form submission or clicking an a-tag link, and not in other cases.
C. In None mode, which is the default, the Cookie is attached to requests in all cases.
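A sketch of setting this on the server (name and value are made up):

```
Set-Cookie: session_id=abc123; Path=/; Secure; HttpOnly; SameSite=Strict
```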
2. Verify the source site
This requires the use of two fields in the request header: Origin and Referer.
Origin contains only the domain name information, while Referer contains the specific URL path.
Of course, both of these headers can be forged by requests that do not come from a real browser, so this check is slightly less secure on its own.
3. CSRF Token
Django is a backend framework for Python. If you’ve used Django to develop forms, you’ll often find a line of code in its template that reads:
{% csrf_token %}
This is a typical application of CSRF tokens. So how does it work?
First, when the browser sends a request to the server, the server generates a string and inserts it into the returned page.
Then the browser must send a request with this string, and the server will verify that it is valid, and not respond if it is not. This string is also known as the CSRF Token, which is usually not available to third-party sites and is therefore rejected by the server.
conclusion
To recap: Cross-Site Request Forgery (CSRF) means a hacker lures a user into clicking a link that opens the hacker's site, and then exploits the user's current login state to initiate cross-site requests.
CSRF attacks can be carried out in three ways:
- Automatic GET request
- Automatic POST request
- Induce a click to send a GET request.
Precautions: Use the SameSite attribute of Cookie, verification source site and CSRF Token.
Why does HTTPS make data transfer more secure?
When talking about HTTPS, we have to mention its counterpart, HTTP. Because HTTP transmits data in plain text, the data can be stolen or tampered with by third parties at every hop of its journey. Specifically, HTTP data passes through the TCP layer and then through Wi-Fi routers, carriers, and finally the target server; at any of these points a middleman can obtain and tamper with the data. This is the classic man-in-the-middle attack.
To protect against such attacks, we have to introduce a new encryption scheme, namely HTTPS.
HTTPS is not a new protocol but an enhanced version of HTTP. The principle is to insert an intermediate layer between HTTP and TCP: HTTP and TCP no longer talk to each other directly; instead, data first passes through this layer, which encrypts it before handing the packet to TCP, and correspondingly decrypts packets received from TCP before passing them up to HTTP. This intermediate layer is called the security layer, and its core job is to encrypt and decrypt data.
Let’s take a look at how HTTPS encryption and decryption is implemented.
Symmetric encryption and asymmetric encryption
concept
First, you need to understand the concepts of symmetric and asymmetric encryption, and then discuss how effective they are when applied.
Symmetric encryption is the simplest, where the same key is used for encryption and decryption.
With asymmetric encryption there are two keys, A and B: data encrypted with A can only be decrypted with B, and conversely data encrypted with B can only be decrypted with A.
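To make the two concepts concrete, here is a minimal Node.js sketch (it assumes Node's built-in crypto module and only contrasts the two kinds of encryption; it is not how TLS actually negotiates keys):

```js
const crypto = require("crypto");

// Symmetric: the same key encrypts and decrypts.
const key = crypto.randomBytes(32);
const iv = crypto.randomBytes(16);
const cipher = crypto.createCipheriv("aes-256-cbc", key, iv);
const encrypted = Buffer.concat([cipher.update("hello", "utf8"), cipher.final()]);
const decipher = crypto.createDecipheriv("aes-256-cbc", key, iv);
console.log(Buffer.concat([decipher.update(encrypted), decipher.final()]).toString("utf8")); // "hello"

// Asymmetric: data encrypted with the public key can only be decrypted with the private key.
const { publicKey, privateKey } = crypto.generateKeyPairSync("rsa", { modulusLength: 2048 });
const secret = crypto.publicEncrypt(publicKey, Buffer.from("pre_random"));
console.log(crypto.privateDecrypt(privateKey, secret).toString("utf8")); // "pre_random"
```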
Encryption and decryption process
Let’s talk about how the browser and server negotiate encryption and decryption.
First, the browser sends the server a random number client_random and a list of supported encryption methods.
The server returns another random number server_random and the chosen encryption method to the browser.
They now share three identical pieces of information: client_random, server_random, and the encryption method.
This encryption method is then used to mix the two random numbers into a key, which is the secret code for communication between the browser and the server.
The effect of each application
If only symmetric encryption is used, a middleman can obtain client_random, server_random, and the encryption method along the way. Since the same method both encrypts and decrypts, the middleman can derive the same key, successfully decrypt the traffic, and read the data, so this scheme is easily broken.
Since symmetric encryption alone is so vulnerable, let's try asymmetric encryption. Here the server holds two keys: a public key, which everyone, including the browser, can obtain, and a private key, which only the server knows.
All right, let’s start the transmission.
The browser sends client_random and the list of encryption methods; the server receives them and returns server_random, the encryption method, and its public key.
Both sides now share client_random, server_random, and the encryption method. The browser then encrypts client_random and server_random with the public key, producing the secret for communicating with the server.
Because of the asymmetric encryption, data encrypted with the public key can only be decrypted with the private key, so even if a middleman obtains the data the browser sends, he cannot decrypt it without the private key, and the data stays safe.
Is this necessarily safe, though? Sharp-eyed readers will have spotted the problem. Go back to the definition of asymmetric encryption: data encrypted with the public key can be decrypted with the private key, and data encrypted with the private key can also be decrypted with the public key!
The server's outgoing data can only be encrypted with the private key (if it used the public key, the browser could not decrypt it). Once the middleman has the public key, he can decrypt everything coming from the server, so the scheme is broken again. Moreover, asymmetric encryption alone also imposes a heavy performance cost on the server, so this scheme is not used either.
A combination of symmetric and asymmetric encryption
As we have seen, using either symmetric or asymmetric encryption alone leaves security holes. Can we combine the two and make things more secure?
Yes, we can. Here is how it works:
- The browser sends client_random and a list of encryption methods to the server.
- The server receives them and returns server_random, the encryption method, and its public key.
- The browser receives those, generates another random number pre_random, encrypts it with the public key, and sends it to the server. (Knock on the blackboard: this is the key step!)
- The server uses its private key to decrypt the encrypted pre_random.
The browser and server now have three identical credentials: client_random, server_random, and pre_random. Each side then mixes the three random numbers with the same encryption method to generate the final key.
The browser and server then communicate with each other using the same key, that is, symmetric encryption.
This final key is hard for a middleman to obtain. Why? Because the middleman does not have the private key, he cannot get pre_random, and therefore cannot generate the final key.
Going back to pure asymmetric encryption, what has improved? In essence, the combined scheme avoids ever transmitting data encrypted with the private key. With asymmetric encryption alone, the fatal weakness is that data sent from the server to the browser can only be encrypted with the private key, which is the root of the danger; the combination of symmetric and asymmetric encryption avoids this and thereby ensures security.
Adding a Digital Certificate
Although combining the two encryption methods achieves good encrypted transmission, there is still a problem. If a hacker uses DNS hijacking to replace the target address with the hacker's own server, and then forges his own public/private key pair, data transmission can still proceed, while the browser user has no idea he is talking to a dangerous server.
So on top of the symmetric-plus-asymmetric scheme above, HTTPS adds a digital-certificate verification step. Its purpose is to have the server prove its identity.
Transfer process
To obtain a certificate, the server’s operator applies to a third-party Certificate Authority (CA). Once the application passes review, the CA issues a digital certificate to the server.
This digital certificate serves two purposes:
- It proves the server’s identity to the browser.
- It delivers the server’s public key to the browser.
When does this validation process take place?
When the server sends server_random and the encryption method, it now also sends the digital certificate (which contains the public key). The browser receives the certificate and starts to verify it; if verification succeeds, the rest of the handshake continues, otherwise it is aborted.
With that, we have walked through the complete HTTPS encryption and decryption process.
The authentication process
After the browser receives the digital certificate, how does it verify it?
First, the browser reads the certificate’s plaintext. The CA specifies a hash function when signing the certificate; the browser applies that hash to the plaintext to get digest A. It then uses the CA’s public key to decrypt the certificate’s signature to get digest B. If the two digests match, the certificate is valid.
Sometimes, of course, the browser does not directly trust the issuing CA. In that case it looks for the parent CA and verifies that CA’s certificate with the same digest comparison, climbing the chain upward. Root CAs are generally built into the operating system; if no trusted root CA is found, the certificate is considered invalid.
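Here is a minimal sketch of that digest comparison, assuming the certificate’s plaintext, the CA’s signature, and the CA’s public key have already been extracted (the X.509 parsing itself is omitted, and all three parameters are hypothetical inputs):

```js
const crypto = require("crypto");

// certPlaintext, signature and caPublicKey are hypothetical, pre-extracted inputs;
// real browsers parse the X.509 certificate structure themselves.
function verifyCertificate(certPlaintext, signature, caPublicKey) {
  // crypto.verify hashes the plaintext (digest A) and compares it with the digest
  // recovered from the signature using the CA's public key (digest B).
  return crypto.verify("sha256", Buffer.from(certPlaintext), caPublicKey, signature);
}
```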
Conclusion
HTTPS is not a new protocol. It inserts a security layer between HTTP and TCP, and uses symmetric encryption, asymmetric encryption, and digital certificate verification together, which greatly improves the security of the transport.
Chapter 10: Can you implement debouncing and throttling of events?
The throttle
The core idea of throttling: within a fixed interval, the handler runs at most once, no matter how many times the event fires; a new timer can only start after the current one has finished. It is like a bus that departs every 10 minutes: no matter how many people are waiting at the stop during those 10 minutes, when the 10 minutes are up, it leaves!
The code is as follows:
function throttle(fn, interval) {
  let flag = true;
  return function (...args) {
    let context = this;
    // A timer is already pending: ignore this call
    if (!flag) return;
    flag = false;
    setTimeout(() => {
      fn.apply(context, args);
      flag = true;
    }, interval);
  };
}
The same idea can be expressed in the following way:
const throttle = function (fn, interval) {
  let last = 0;
  return function (...args) {
    let context = this;
    let now = +new Date();
    // It's not time yet
    if (now - last < interval) return;
    last = now;
    fn.apply(context, args);
  };
};
Debounce
Core idea: every time the event fires, the existing timer is cleared and a new one is created. It is like the recall (back to base) skill in Honor of Kings: if you trigger it repeatedly, only the last trigger counts, and the countdown restarts from that last trigger.
function debounce(fn, delay) {
  let timer = null;
  return function (...args) {
    let context = this;
    // Each new trigger cancels the previous timer and starts a fresh one
    if (timer) clearTimeout(timer);
    timer = setTimeout(function () {
      fn.apply(context, args);
    }, delay);
  };
}
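A typical usage, assuming a hypothetical search box with id "search" where the request should only fire 300ms after the user stops typing:

```js
const input = document.querySelector("#search"); // hypothetical search box
input.addEventListener("input", debounce(function () {
  // Runs only once the user has been idle for 300ms
  console.log("search for:", this.value);
}, 300));
```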
Combining the two: an enhanced throttle
Now we can combine debouncing and throttling. Why? Because with debouncing alone, events that keep firing can postpone the handler forever, so the user never gets any response. We want to guarantee the user a response at least once within a fixed interval, and in fact many front-end libraries take this approach.
function throttle(fn, delay) {
  let last = 0, timer = null;
  return function (...args) {
    let context = this;
    let now = +new Date();
    if (now - last < delay) {
      // Not time yet: behave like debounce and reset the timer
      clearTimeout(timer);
      timer = setTimeout(function () {
        last = now;
        fn.apply(context, args);
      }, delay);
    } else {
      // Time is up: a response must be given
      last = now;
      fn.apply(context, args);
    }
  };
}
Can images be lazy loaded?
Solution 1: clientHeight, scrollTop and offsetTop
First give the image a placeholder resource:
<img src="default.jpg" data-src="http://www.xxx.com/target.jpg" />
Then we listen to the scroll event to judge whether each image has reached the viewport:
let img = document.getElementsByTagName("img");
let num = img.length;
let count = 0; // Count from the first picture
lazyload(); // Don't forget to show the images already in view on first load
window.addEventListener('scroll', lazyload);
function lazyload() {
  let viewHeight = document.documentElement.clientHeight; // Viewport height
  let scrollTop = document.documentElement.scrollTop || document.body.scrollTop; // Scroll distance
  for (let i = count; i < num; i++) {
    // The element has entered the viewport
    if (img[i].offsetTop < scrollTop + viewHeight) {
      if (img[i].getAttribute("src") !== "default.jpg") continue;
      img[i].src = img[i].getAttribute("data-src");
      count++;
    }
  }
}
Of course, it is better to throttle the scroll event so the handler does not fire too frequently:
// The throttle function was implemented in the previous section
window.addEventListener('scroll', throttle(lazyload, 200));
Solution 2: getBoundingClientRect
Now let’s use another way to determine whether an image appears in the current viewport: the DOM element’s getBoundingClientRect API.
The lazyload function is changed to look like this:
function lazyload() {
  for (let i = count; i < num; i++) {
    // The element has entered the viewport
    if (img[i].getBoundingClientRect().top < document.documentElement.clientHeight) {
      if (img[i].getAttribute("src") !== "default.jpg") continue;
      img[i].src = img[i].getAttribute("data-src");
      count++;
    }
  }
}
Solution 3: IntersectionObserver
This is a built-in browser API that takes care of three things for us: listening to window scroll events, judging whether an element is in the viewport, and throttling.
Let’s try it out:
let img = document.getElementsByTagName("img");
const observer = new IntersectionObserver(changes => {
  // changes is the collection of observed entries
  for (let i = 0, len = changes.length; i < len; i++) {
    let change = changes[i];
    // This property tells us whether the element is in the viewport
    if (change.isIntersecting) {
      const imgElement = change.target;
      imgElement.src = imgElement.getAttribute("data-src");
      observer.unobserve(imgElement);
    }
  }
});
Array.from(img).forEach(item => observer.observe(item));
In this way, lazy image loading is implemented very conveniently. The IntersectionObserver can also be used to preload other resources, which makes it even more powerful.
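For example, here is a rough sketch of preloading with IntersectionObserver; the .sentinel elements and the data-prefetch attribute are made up for illustration:

```js
// <div class="sentinel" data-prefetch="http://www.xxx.com/next-page-data.json"></div>
const prefetchObserver = new IntersectionObserver(entries => {
  entries.forEach(entry => {
    if (!entry.isIntersecting) return;
    // Hint the browser to fetch the resource ahead of time
    const link = document.createElement("link");
    link.rel = "prefetch";
    link.href = entry.target.getAttribute("data-prefetch");
    document.head.appendChild(link);
    prefetchObserver.unobserve(entry.target);
  });
});
document.querySelectorAll(".sentinel").forEach(el => prefetchObserver.observe(el));
```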
One last thing
The articles above first appeared on the open source project God three yuan blog, which aims to build a complete front-end knowledge system. If it has helped you, please give the project a star. Thank you very much!
In addition, my booklet on React Hooks and Immutable data flow has recently been released. It is a hands-on tutorial covering 36 sections, including many Hooks and performance optimization practices. I also plan to add the Hooks source-code analysis series to the booklet and keep adding value to it. I believe it can help you advance, and I would really appreciate your support! Booklet link