
A video commentary for this article has been uploaded

As a platform, modern browsers are designed to deliver Web applications quickly, efficiently, and securely. In fact, beneath its surface, the modern browser is nothing more than an operating system with hundreds of components, including process management, security sandboxes, layered optimized caches, JavaScript virtual machines, graphics rendering and GPU pipelines, storage systems, sensors, audio and video, networking mechanisms, and more.

Obviously, the performance of a browser, and even the applications running within it, depends on several components: parsing, layout, HTML and CSS style calculations, JavaScript execution speed, rendering pipelines, and, of course, the coordination of protocols across the various layers of the network. Each of these components has an important role to play, so ensuring that they work together in a reasonable and efficient manner is a critical part of the browser’s work.

First of all, I would like to thank Li Bing for his excellent technical course. This article takes Li Bing’s “Browser Working Principle and Practice” as its main source of technical material, and analyzes how the browser works from the perspective of the processes and threads inside it.

Processes and threads

Processes and threads are basic operating-system concepts, but they are abstract and not easy to grasp.

To keep the concepts from getting too dry, I will borrow the factory analogy from Ruan Yifeng’s article on processes and threads.

First, the core of the computer is the CPU, which undertakes all the computing tasks. It’s like a factory, always running.

Assume the factory’s power supply is limited and can power only one workshop at a time. That is, when one workshop starts, all the others must stop. The implication is that a single CPU can only run one task at a time.

Processes are like workshops: each represents a single task the CPU can handle. At any given moment, the CPU is running one process while the others sit idle.

There can be many workers in a workshop, working together on a task. Threads are like those workers: a process can contain multiple threads.

The workshop’s space is shared by its workers; every worker can enter each room. Likewise, a process’s memory space is shared, and every thread can use that shared memory (the coordination mechanisms around shared memory are not covered here).

Ok, now that we have roughly defined the relationship between processes and threads, let’s use more professional language to explain:

A process is a running instance of a program. When a program is started, the operating system creates a block of memory for it, which holds the code and running data, and starts a main thread to execute tasks. We call such a running environment a process.

Threads execute specific tasks, and multiple threads can process tasks in parallel. But a thread cannot exist on its own; it is started and managed by a process.

As can be seen from the figure, threads are attached to processes, and multi-threaded parallel processing in processes can improve computing efficiency.

Characteristics of processes and threads

In summary, the relationship between processes and threads has the following four characteristics.

  1. The failure of any thread in the process will cause the entire process to crash.

  2. Threads share data in a process.

  3. When a process is shut down, the operating system reclaims the memory occupied by the process.

    When a process exits, the operating system reclaims all the resources applied for by the process. Even if any of these threads leak memory due to improper operation, the memory will be properly reclaimed when the process exits.

  4. The contents of the processes are isolated from each other.

    Process isolation is a technique the operating system uses to keep processes from interfering with one another. Each process can access only the data it owns, which prevents, say, process A from writing data into process B. Because data between processes is strictly isolated, a crash or hang of one process does not affect the others. If processes need to exchange data, a mechanism for interprocess communication (IPC) is required.

Single-process browser

As the name suggests, a single-process browser means that all the functional modules of the browser run in the same process, including the network, plug-ins, JavaScript runtime environment, rendering engine and pages. Until 2007, all browsers were single-process.

We can now use the characteristics of processes and threads above to analyze the problems of a single-process browser.

First, an execution error in any thread can crash the entire process. This makes single-process browsers unstable.

Early browsers needed plug-ins to implement powerful features such as Web video and Web games, but plug-ins were the most problematic modules, and they too ran in the browser process, so the accidental crash of a single plug-in could bring down the entire browser.

Besides plug-ins, the rendering engine module was also unstable; a complex piece of JavaScript could often crash it. And just like a plug-in crash, a rendering engine crash would crash the entire browser.

Second, the operating system reclaims a process’s memory only when the process is shut down. This means the OS cannot fully reclaim the memory a single-process browser occupies until the browser itself closes; before that, reclamation depends entirely on the browser’s own memory-management mechanisms.

However, the browser kernel is very complex, and after opening and then closing a complicated page, some memory may never be fully reclaimed. The result is that the longer the browser runs, the higher its memory footprint and the slower it becomes.

Also, all of a page’s rendering modules, JavaScript execution environment, and plug-ins run in the same thread, which means only one module can execute at a time. When a looping JavaScript script runs, it monopolizes the thread, so no other task on that thread gets a chance to execute. And because page rendering runs on the same thread, the page stops responding and freezes. This is why, in early browsers, one blocked page would block the whole browser.

All of this leads to the second problem with single-process browsers: they are not smooth.

Finally, threads share data within a process, while content between processes is isolated. In a single-process browser, a page can, by various means, obtain all of the browser’s privileges and then attack the operating system through the browser. This makes single-process browsers unsafe.

Because browser plug-ins can be written in C/C++ and can access any resource of the operating system, running a plug-in on a page means the plug-in can fully operate your computer. If it is a malicious plug-in, it can release a virus or steal your passwords, creating security problems.

As for page scripts, they can gain access to the system through browser vulnerabilities and likewise do malicious things to your computer, which also causes security problems. In short, the browser provided no isolated environment for scripts and plug-ins to run in, leading to all kinds of security issues.

To summarize, single-process browsers have three major problems:

  1. Unstable
  2. Not smooth
  3. Unsafe

Multiprocess browser

The good news is that modern browsers have solved these problems, but how? This brings us to our “multi-process browser era.”

Early multi-process architecture

Take a look at the process architecture of Chrome when it was released in 2008.

As you can see, Chrome’s pages run in separate rendering processes and its plug-ins run in separate plug-in processes, and the processes communicate with one another through an IPC mechanism (the dotted lines in the figure).

Let’s see how this solves the instability problem. Processes are isolated from one another, so when a page or plug-in crashes, only its own process is affected, not the browser or other pages. This completely solves the problem of a single page or plug-in crash bringing down the whole browser, i.e. the instability problem.

Next, let’s see how the smoothness problem is solved. JavaScript runs in the rendering process, so even if a script blocks that process, only the current page is affected; the browser and other pages, whose scripts run in their own rendering processes, keep working. So when we run a blocking script in Chrome, the only thing that stops responding is the current page.

The memory-leak problem is even easier to solve: when a page is closed, its entire rendering process is closed, and the system then reclaims the memory that process occupied, which neatly solves the browser’s page memory-leak problem.

Finally, let’s look at how the two security problems above are solved. An added benefit of the multi-process architecture is that it enables a security sandbox, which you can think of as a lock the operating system places on a process. Programs in the sandbox can run, but they cannot write any data to your hard drive or read data from sensitive locations such as your documents and desktop. Chrome locks the plug-in and rendering processes inside a sandbox, so even if a malicious program executes in a plug-in or rendering process, it cannot break out of the sandbox to obtain system privileges.

Well, having analyzed the early days of Chrome, you have seen the need for a multi-process architecture.

Current multi-process architecture

But Chrome is moving forward, and there are a lot of new changes to the current architecture. Let’s take a look at the latest Chrome process architecture, as shown below:

As can be seen from the figure, the latest Chrome browser includes one main browser process, one GPU process, one network process, multiple rendering processes, and multiple plug-in processes.

Let’s take a look at each of these processes one by one.

  • Browser process. It is mainly responsible for interface display, user interaction, sub-process management, and storage.
  • Render process. Its core task is to turn HTML, CSS, and JavaScript into web pages the user can interact with. Both the layout engine Blink and the JavaScript engine V8 run in this process. By default, Chrome creates a rendering process for each tab. For security reasons, rendering processes run in sandbox mode.
  • GPU process. Chrome actually had no GPU process when it was first released. The GPU was originally introduced to achieve 3D CSS effects, but later both web pages and Chrome’s own UI came to be drawn on the GPU, making it a universal browser requirement. Eventually, Chrome added a GPU process to its multi-process architecture.
  • Network process. It is responsible for loading web resources on the page. It used to run as a module in the browser process until recently, when it became a separate process.
  • Plug-in process. It is mainly responsible for running plug-ins. Plug-ins are prone to crashing, so they are isolated in their own process to ensure that a plug-in crash does not affect the browser or its pages.

At this point, you should know that opening a page takes at least four processes: one network process, one browser process, one GPU process, and one rendering process. If the page runs a plug-in, add a plug-in process as well. However, while the multi-process model improves stability, smoothness, and security, it also brings new problems:

  • Higher resource usage. Because each process contains a copy of the common infrastructure (such as the JavaScript runtime environment), this means that the browser consumes more memory resources.
  • More complex architecture. Problems such as high coupling between browser modules and poor scalability mean the current architecture can hardly adapt to new requirements.

For both of these issues, the Chrome team has been looking for an elastic solution that can address both high resource usage and complex architecture issues.

The future of service-oriented architecture

In order to solve these problems, in 2016 the Chrome team designed a new Chrome architecture based on the idea of a service-oriented architecture (SOA). In other words, Chrome as a whole is moving toward the “service-oriented architecture” of modern operating systems: the various modules are reorganized into separate services, each of which can run in its own process. Services must be accessed through defined interfaces and communicate via IPC, yielding a more cohesive, loosely coupled system that is easy to maintain and extend and better meets Chrome’s goals of simplicity, stability, speed, and security.

Chrome eventually reconstructs UI, database, files, devices, and network modules into basic services, similar to operating system underlying services. Here is a diagram of Chrome’s “Service-oriented Architecture” process model:

Chrome also offers a flexible architecture that allows basic services to run in multi-process mode on powerful devices, but on resource-constrained devices (see figure below), Chrome can consolidate many services into a single process to save memory footprint.

Page loading process

Now that we have an overview of how browser processes evolve, let’s take a closer look at interprocess coordination in page loading in conjunction with my previous article, “Inside computer Networks in Browser Page loading.”

First let’s review the page loading process:

Then take a look at the chart below:

Here, looking back at the main responsibilities of each process in the multi-process architecture, one point deserves attention: everything the rendering process handles is obtained from the network, and some of that content may be malicious code that exploits browser vulnerabilities to attack the system, so code running inside the rendering process cannot be trusted. This is why Chrome runs the rendering process inside a security sandbox: to keep the system safe.

To give you an overview of the above process:

  1. The user enters a URL and presses Enter. The browser process determines from the input whether this is a search query or an address. If it is search content, the browser combines it with the default search engine into a new URL; if the input follows URL rules, the browser process adds what is missing (such as the protocol) to synthesize a complete, valid URL.
  2. The navigation bar shows a loading state, but the page displayed is still the previous one, because the response data for the new page has not yet arrived.
  3. The browser process constructs the request line information and sends the URL request to the network process via interprocess communication (IPC).
  4. After receiving the URL request, the network process checks whether the requested resource is in the local cache; if so, it returns the resource to the browser process.
  5. If not, the network process makes an HTTP request (a network request) to the Web server. The request process is as follows:
    1. Perform DNS resolution to obtain the server IP address (search the DNS cache first, and then initiate a DNS network request)
    2. Establish a TCP connection with the server at that IP address (the TCP three-way handshake creates a virtual connection, not a physical one; in essence, the “connection” is the set of resources, memory, processes, and so on, that the client and server allocate for it)
    3. Finish building the request information and send the request (once the socket has been established over the three-way-handshake TCP connection, the prepared HTTP request packets are placed in the send queue and handed to TCP for the rest of the process)
    4. After the server responds, the network process receives the response header and response information and parses the response content
  6. The network process parses the response:
    1. Check the status code. If it is 301/302, a redirect is needed: read the new address from the Location field and repeat from step 3. If it is 200, continue processing the request.
    2. Check the Content-Type of the response. If it is a byte stream, hand the request to the download manager and end the navigation without further rendering; if it is a resource such as HTML, forward it to the browser process.
  7. After the browser process receives the response header data from the network process, it checks whether the current URL belongs to the same site as a previously opened page. If so, it reuses that rendering process; if not, it starts a new one (more on this later in this article).
  8. Once the rendering process is ready, the browser process sends it a CommitNavigation message carrying basic information such as the response headers. On receiving the message, the rendering process establishes a “pipe” with the network process to transfer the data.
  9. After receiving the data, the renderer sends a “confirm submit” message to the browser process.
  10. After receiving the confirmation message, the browser process updates the browser interface status, including security, URL, forward and backward historical status, and Web page update.

So let’s explain the above process in detail.

User input

When the user enters a keyword and presses Enter, the current page is about to be replaced with a new one, but before continuing, the browser gives the current page one chance to execute the beforeunload event.

Beforeunload events let the page perform some data cleanup before exiting, and can also ask the user whether they really want to leave, for example when the page contains an unfinished form. The user can thus cancel the navigation through the beforeunload event, and the browser then skips all subsequent work.
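A minimal sketch of such a guard is below. This is a browser API; `win` stands for the page’s global window, and `hasUnsavedChanges` is a hypothetical callback supplied by the application:

```javascript
// Sketch: install a beforeunload guard so the browser asks the user
// before leaving a page with unsaved work. `win` is the page's global
// window in a real browser; `hasUnsavedChanges` is a hypothetical
// application-supplied callback.
function installUnloadGuard(win, hasUnsavedChanges) {
  win.addEventListener('beforeunload', (event) => {
    if (!hasUnsavedChanges()) return;   // nothing to lose: navigate freely
    event.preventDefault();             // standard way to request the prompt
    event.returnValue = '';             // legacy requirement in some browsers
  });
}
```

Note that browsers show only a generic confirmation dialog here; custom message strings are ignored by modern browsers.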

If the current page does not listen for the beforeunload event, or agrees to continue, the browser enters the state shown below:

As you can see from the figure, the icon on the tab enters the loading state as soon as the browser starts loading an address. However, the page still shows the previously opened content; it is not replaced immediately. The content is replaced only at the commit stage.

URL Request process

Next, you enter the page resource request process. In this case, the browser process sends the URL request to the network process through interprocess communication (IPC). After receiving the URL request, the network process initiates the actual URL request process here. So what’s the process?

First, the network process checks, according to the strong-cache rules, whether the current URL has a locally cached copy. If a valid cached resource exists, it is returned directly to the browser process. If not, the process checks whether there is negotiated-cache information: if so, that information is written into the request header; otherwise the flow proceeds straight to a network request. For details on strong and negotiated caching, see here.
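That decision can be roughly sketched as follows. The cache-entry shape used here (`expiresAt`, `etag`, `lastModified`) is invented for illustration, not Chrome’s actual data structure:

```javascript
// Sketch: how the network process might choose between strong cache,
// negotiated cache, and a plain network request. The cache-entry shape
// (expiresAt, etag, lastModified) is invented for illustration.
function decideCacheAction(cached, now = Date.now()) {
  if (!cached) return { action: 'network' };              // nothing cached
  if (cached.expiresAt !== undefined && now < cached.expiresAt) {
    return { action: 'use-cache' };                       // strong cache hit
  }
  // Stale copy: try to revalidate via the negotiated cache.
  const headers = {};
  if (cached.etag) headers['If-None-Match'] = cached.etag;
  if (cached.lastModified) headers['If-Modified-Since'] = cached.lastModified;
  if (Object.keys(headers).length > 0) return { action: 'revalidate', headers };
  return { action: 'network' };                           // no validators either
}
```

If the `revalidate` request comes back 304, the browser serves the stale copy from cache; a 200 replaces it.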

Then, the first step before the request is DNS resolution, to obtain the server IP address for the requested domain name. DNS is easy to understand: IP addresses are hard to remember and reveal nothing about the name or nature of the organization behind them, so people designed domain names, and the Domain Name System (DNS) maps domain names and IP addresses to each other. This makes the Internet far easier to use than memorizing machine-readable numeric addresses. Domain-name resolution, then, finds the IP address behind the URL so routers know where to find the target server. For details about the DNS resolution process, see here.
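The cache-first flavor of that lookup can be sketched like this. The resolver is injected so the example stays offline; a real browser consults its own cache, the OS, and then DNS servers on the network:

```javascript
// Sketch of cache-first name resolution. The resolver function is
// injected so the example stays offline; a real browser consults its
// own DNS cache, then the OS, then DNS servers over the network.
function resolveHost(host, cache, resolver) {
  if (cache.has(host)) return cache.get(host);  // DNS cache hit: no network
  const ip = resolver(host);                    // cache miss: DNS query
  cache.set(host, ip);                          // remember for next time
  return ip;
}
```

The point of the cache is visible in use: repeated lookups of the same host trigger only one query.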

If the request protocol is HTTPS, a TLS connection must also be established. TLS sits between HTTP and TCP; its core job is to encrypt HTTP packets before handing them to TCP, and to decrypt incoming packets for the upper-layer application. For details about HTTPS, see here.

The next step is to establish a TCP connection with the server using the IP address. After the connection is established, the browser side will construct the request line, request information, etc., and attach the data related to the domain name, such as cookies, to the request header, and then send the constructed request information to the server.

After receiving the request information, the server generates response data (including response line, response header, and response body) based on the request information and sends it to the network process. After the network process receives the response line and header, it parses the contents of the header.

Redirect

Upon receiving the response header from the server, the network process begins to parse the response header. If the status code returned is 301 or 302, the server needs the browser to redirect to another URL. The network process reads the redirected address from the Location field in the response header, then initiates a new HTTP or HTTPS request and starts all over again.
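A rough sketch of that status-line handling follows (simplified; real browsers also handle 303/307/308 and many other codes):

```javascript
// Sketch: how the network process might act on the response status line.
// Simplified; real browsers also handle 303/307/308, redirect limits, etc.
function nextNavigationStep(status, headers = {}) {
  if (status === 301 || status === 302) {
    // Redirect: restart navigation at the address in the Location field.
    return { step: 'redirect', url: headers['location'] };
  }
  if (status === 304) return { step: 'use-cache' };    // negotiated cache valid
  if (status === 200) return { step: 'process-body' }; // continue the request
  return { step: 'error', status };
}
```

A redirect loops back to the request step with the new URL, which is why browsers cap the number of redirects they will follow.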

For example, we enter the following command in the terminal:

curl -I http://bilibili.com

We get the following response message:

HTTP/1.1 301 Moved Permanently
Server: Tengine
Date: Sat, 01 Jan 2022 12:30:21 GMT
Content-Type: text/html
Content-Length: 239
Connection: keep-alive
Location: https://www.bilibili.com/

Bilibili’s server converts all HTTP requests to HTTPS requests through redirection. When you make an HTTP request to it, the server returns a response with a 301 or 302 status code and puts an HTTPS address in the Location field of the response header, which tells the browser to navigate to the new address.

If we request the HTTPS address from the Location field directly, we get the following response:

HTTP/2 200 
date: Sat, 01 Jan 2022 12:33:38 GMT
content-type: text/html; charset=utf-8
support: nantianmen
set-cookie: innersign=0; path=/; domain=.bilibili.com
set-cookie: buvid3=4BC19AF-7335-E979-74C3-AA00D1411DC017954infc; path=/; expires=Sun, 01 Jan 2023 12:33:37 GMT; domain=.bilibili.com
cache-control: no-cache
gear: 1
vary: Origin,Accept-Encoding
idc: shjd
expires: Sat, 01 Jan 2022 12:33:37 GMT
x-cache-webcdn: MISS from blzone07
x-cache-time: 0
x-origin-time: no-cache, must-revalidate, max-age=0, no-store
x-save-date: Sat, 01 Jan 2022 12:33:38 GMT

As you can see, the server returns a response with status code 200, which tells the browser that everything is fine and it can proceed with the request.

Ok, so that’s redirection. You should now understand that during navigation, if the server’s response line carries a 301 or 302 status code, the browser navigates to the new address and starts over; if the response line is 200, the browser continues processing the request.

Negotiated cache

As mentioned earlier, if the browser finds negotiated-cache information while searching the cache for the URL, the request carries the relevant headers; a returned response with status code 304 means the negotiated cache is still valid, and the browser fetches the resource from its cache. The specific process is shown in the figure below:

Response data type processing

The data returned by a URL request is sometimes a download and sometimes a normal HTML page, so how does the browser tell them apart?

The answer is Content-Type. Content-Type is a very important field in the HTTP header that tells the browser what type of data the server is returning in the response body. The browser then uses the value of Content-Type to decide how to handle the response body.

If the Content-Type field in the response header is text/html, the browser knows the server returned HTML. Many of the back-end interfaces we request return application/json, and such data is passed by the network process to the renderer as ordinary data. Another example is application/octet-stream, which denotes byte-stream data; the browser normally treats such a request as a download.

Therefore, the subsequent processing differs greatly by Content-Type. If the browser judges the value of the Content-Type field to be a download type, the request is handed to the browser’s download manager and the navigation of this URL request ends. But if it is HTML, the browser continues with the navigation process. Since Chrome renders pages in a rendering process, the next step is to prepare the rendering process.
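A sketch of that routing decision (the return labels are invented for illustration):

```javascript
// Sketch: routing the response by Content-Type. MIME parameters such as
// "; charset=utf-8" are stripped before comparison. Return labels are
// invented for illustration.
function routeByContentType(contentType) {
  const mime = String(contentType || '').split(';')[0].trim().toLowerCase();
  if (mime === 'text/html') return 'continue-navigation';     // hand to a renderer
  if (mime === 'application/octet-stream') return 'download';  // download manager
  return 'plain-data'; // e.g. application/json: forwarded as ordinary data
}
```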

Prepare the render process

By default, Chrome assigns a render process to each page, meaning that a new render process is created for each new page opened. However, there are some exceptions, in some cases the browser will allow multiple pages to run directly in the same render process.

Here I compared task-manager screenshots of a couple of sites, LeetCode’s and Bilibili’s:

Among them, LeetCode’s three pages all belong to the same process, 66661, while Bilibili’s three pages each have a process of their own.

When can multiple pages be running in a render process at the same time?

Chrome’s default strategy is one rendering process per tab. However, if a new page is opened from an existing page and belongs to the same site, the new page reuses the parent page’s rendering process. Officially, this default policy is called process-per-site-instance.

This strategy is based on the following two points:

  1. If two tabs are in the same browsing context group and belong to the same site, they are assigned to the same rendering process by the browser.
  2. If either condition is not met, the two tabs are rendered in separate rendering processes.
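The two-condition rule can be sketched directly; the tab shape used here (`site`, `browsingContextGroup`) is invented for illustration:

```javascript
// Sketch of the process-per-site-instance decision: reuse a rendering
// process only when both conditions hold. The tab shape (site,
// browsingContextGroup) is invented for illustration.
function shouldShareRendererProcess(tabA, tabB) {
  return tabA.browsingContextGroup === tabB.browsingContextGroup
      && tabA.site === tabB.site;
}
```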

The same site

Let’s start by understanding what the same site is. Specifically, we define “same site” as the root domain plus protocol, including all subdomains and different ports under the root domain, such as the following three:

https://space.bilibili.com
https://www.bilibili.com
https://www.bilibili.com:443

They all belong to the same site because their protocol is HTTPS and the root domain name is bilibili.com.
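A naive sketch of that same-site check follows. Real browsers consult the Public Suffix List to find the registrable domain; this version just takes the last two labels of the hostname, which would be wrong for suffixes like .co.uk:

```javascript
// Sketch of the "same site" check: protocol + root domain, ignoring
// subdomains and ports. Naive: real browsers use the Public Suffix List
// to find the registrable domain; taking the last two labels fails for
// suffixes like .co.uk.
function siteOf(urlString) {
  const { protocol, hostname } = new URL(urlString); // hostname excludes the port
  const root = hostname.split('.').slice(-2).join('.');
  return `${protocol}//${root}`;
}

function sameSite(a, b) {
  return siteOf(a) === siteOf(b);
}
```

Note that ports are ignored (URL’s `hostname` drops them), while a protocol mismatch makes two URLs different sites.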

Browsing context group

Turning to the browsing context group: to understand it, we need to look at how browser tabs can be linked to one another.

As we know, browser tabs can be connected using JavaScript scripts, usually in one of the following ways:

The first is to use an a tag to open a connected new tab:

<a  href="https://baidu.com" target="_blank" class="">baidu</a>

Clicking this link opens a new Baidu tab. In the new tab, window.opener points to the window of the original tab, so the new tab can operate on the original one through opener. In this sense, the two tabs are connected.

Alternatively, you can connect to the new TAB using the window.open method in JavaScript, as shown below:

new_window = window.open("https://baidu.com")

In this way, the current tab can control the new tab through new_window, and the new tab can control the current one through window.opener. So we can also say that if tab A opens tab B via window.open, then tab A and tab B are connected.

In fact, no matter whether tabs opened in these two ways belong to the same site or not, they can reach each other through opener, so they are connected. In the WHATWG specification, such a set of interconnected tabs is called a browsing context group.

Now that we are talking about browsing context groups, let’s define the browsing context itself. We usually call the contents of a tab a browsing context: the window object, the history, the scrollbar position, and so on. Browsing contexts linked together by script form a browsing context group.

That is to say, if multiple new tabs are opened through links on tab A, then no matter whether those new tabs are the same site as A or not, they all form a browsing context group with tab A, because opener in each of them points to tab A.

Noopener and noreferrer

With that in mind, let’s look at LeetCode’s processes in the task manager. Its three pages do share one process, consistent with the analysis above. But the three Bilibili tabs use three different processes, completely contrary to what we expected.

In fact, this is easy to explain. With a default a-tag link, the new window can obtain a reference to the original tab through the opener attribute of its global object, which opens the door to hacker attacks and other risks.

To be specific:

  • Your own page A contains a link to a third-party address B
  • Page B obtains page A’s window object through window.opener and redirects page A to a phishing page, e.g. window.opener.location.href = "abc.com". The user does not notice the address change, and leaks information after entering a username and password on that page

The solution is also very simple: add the rel attribute values noopener and noreferrer on the a tag.

With rel="noopener", the newly opened page cannot get the source page’s window object; its window.opener is null.

Similar to noopener, once rel="noreferrer" is set, the new page cannot obtain the source page’s window for an attack; in addition, the new page cannot read document.referrer, which contains the address of the source page.

Usually noopener and noreferrer are set together: rel="noopener noreferrer". This is for compatibility, since some older browsers do not support noopener.
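A small helper can make sure a link’s rel value carries both tokens while preserving whatever is already there (a sketch, not a browser API):

```javascript
// Sketch: ensure a link's rel attribute carries both noopener and
// noreferrer, preserving any tokens already present.
function hardenRel(rel = '') {
  const tokens = new Set(rel.split(/\s+/).filter(Boolean));
  tokens.add('noopener');
  tokens.add('noreferrer');
  return [...tokens].join(' ');
}
```

In a page you could run this over every `a[target="_blank"]` element’s rel attribute.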

iframe

The most special case we can all think of is the iframe tag. In short, if an iframe and the tab that contains it belong to the same site and are connected, the iframe runs in the same rendering process as the tab; if they are not the same site, the iframe runs in a separate rendering process.

The analysis is as follows:

Once the renderer process is ready, it cannot immediately enter the document parsing state because the document data is still in the network process and has not been submitted to the renderer process, so the next step is to submit the document.

Submit the document

Submitting a document means that the browser process submits the HTML data received by the web process to the renderer process. The process looks like this:

  • First, when the browser process receives the response header data from the network process, it sends a “submit document” message to the renderer process.
  • After receiving the “submit document” message, the renderer process establishes a “pipeline” with the network process to transfer data.
  • Once the pipe is set up, the network process receives the data and places it in the pipe, while the renderer process continuously reads the data from the other end of the pipe and passes it to an HTML parser, which dynamically takes the byte stream and parses it into the DOM.
  • After the document data transfer is complete, the renderer process returns a “confirm submission” message to the browser process.
  • After receiving the “confirm submission” message, the browser process updates the browser interface state, including the security state, the URL in the address bar, the forward/backward history state, and the Web page itself.

This explains why, when you type an address into your browser’s address bar, the previous page doesn’t disappear immediately, but instead takes a while to load before the page is updated.

Rendering phase

Once the document is submitted, the renderer process begins page parsing and child resource loading. To get a better idea of what’s going on here, you can quickly grasp the implications of HTML, CSS, and JavaScript with the following:

Due to the complexity of the rendering mechanism, the rendering module is divided into many sub-stages during execution, and the input HTML passes through these sub-stages and finally outputs pixels. We call such a process a rendering pipeline, and its general flow is shown in the following figure:

According to the chronological order of rendering, the pipeline can be divided into the following sub-stages: DOM tree building, style calculation, layout stage, layering, drawing, chunking, rasterization, and composition.

The pipeline has many sub-stages, but each one can be understood in terms of three parts:

  • At the beginning each sub-stage has its input
  • Then each sub-stage has its own process
  • Eventually each sub-stage generates output

Understanding these three parts will give you a clearer understanding of each sub-stage.

Build a DOM tree

Because browsers can’t understand and use HTML directly, you need to transform the HTML into a structure that browsers can understand — a DOM tree. To understand what a DOM tree is, see the following figure:

As you can see from the figure, the input for building a DOM tree is a very simple HTML file, which is then parsed by an HTML parser; the output is the DOM, a tree structure. We can also type document in the Console tab of the developer tools and press Enter to see a complete DOM tree.

DOM is almost identical to HTML content, but unlike HTML, DOM is stored in an in-memory tree structure that can be queried or modified using JavaScript.

Style calculation

The purpose of style calculation is to calculate the specific style of each element in the DOM node, which can be roughly divided into three steps.

Convert CSS to a structure that browsers can understand

As you can see from the figure, there are three main sources of CSS styles:

  • External CSS files referenced by link
  • CSS inside the &lt;style&gt; tag
  • CSS embedded in an element’s style attribute

Just like HTML files, browsers don’t understand the CSS styles of plain text directly, so when the rendering engine receives CSS text, it performs a conversion operation that transforms the CSS text into styleSheets that browsers can understand.

To see this structure, you can look in the Chrome console: just type document.styleSheets and press Enter, and you’ll see a structure like this:

As you can see from the figure, the stylesheet contains a wide variety of styles, including styles from all three sources, and the structure has both query and modification capabilities, which provide the basis for future style manipulation.

Transform property values in the stylesheet to standardize them

Now that we’ve converted the existing CSS text into a structure that browsers can understand, it’s time to standardize the property values.

To understand what attribute value normalization is, you can look at CSS text like this:

body { font-size: 2em }
p { color: blue; }
span { display: none }
div { font-weight: bold }
div p { color: green; }
div { color: red; }

As you can see in the CSS text above, there are many attribute values, such as 2em, blue, and bold. These values are not easily understood by the rendering engine, so all of them need to be converted into standardized computed values that the rendering engine can easily understand. This process is called attribute value standardization. Here is the result of standardization:
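Since the original figure is not reproduced here, the standardized output would look roughly like this (a sketch, assuming the default root font size of 16px, so 2em computes to 32px):

```css
/* Rough sketch of the standardized (computed) values */
body { font-size: 32px }
p { color: rgb(0, 0, 255); }
span { display: none }
div { font-weight: 700 }
div p { color: rgb(0, 128, 0); }
div { color: rgb(255, 0, 0); }
```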

Figure out the specific style of each node in the DOM tree

Now that the style properties have been standardized, it’s time to calculate the style properties for each node in the DOM tree, which involves CSS inheritance and cascading rules.

The first is CSS inheritance. CSS inheritance is when each DOM node contains the style of its parent node. While this may seem a bit abstract, let’s see how a stylesheet like this is applied to a DOM node using a concrete example.

body { font-size: 20px }
p { color: blue; }
span { display: none }
div { font-weight: bold; color: red }
div p { color: green; }

The final effect of the stylesheet applied to the DOM node is shown below:

As you can see from the figure, all child nodes inherit the parent node’s style. For example, if the font-size property of the body node is 20px, then all nodes under body will have a font-size of 20px.

A special mention is made of the UserAgent style, which is a set of default styles provided by the browser. If you do not provide any styles, the UserAgent style is used by default.

The second rule in style calculation is style cascade. Cascading is a fundamental feature of CSS. It is an algorithm that defines how to combine property values from multiple sources. It is at the heart of CSS, which is highlighted by its full name, cascading style sheets.

In short, the purpose of the style calculation stage is to calculate the specific style of each element in the DOM node. In the calculation process, two rules of CSS inheritance and cascading need to be observed. The final output of this phase is the style of each DOM node, stored in the ComputedStyle structure.

The layout phase

Now, we have the DOM tree and the styles of the elements in the DOM tree, but that’s not enough to display the page because we don’t yet know the geometry of the DOM elements. The next step is to figure out the geometry of the visible elements in the DOM tree, a process we call layout.

Chrome performs two tasks in the layout phase: creating a layout tree and calculating the layout.

Creating a layout tree

The DOM tree also contains many invisible elements, such as the head tag and elements with the display: none property. So, before display, the browser builds an additional layout tree that contains only the visible elements.

Let’s look at the layout tree construction process with the following figure:

As you can see from the figure above, all invisible nodes in the DOM tree are not included in the layout tree.

To build the layout tree, the browser basically does the following:

  • Walk through all the visible nodes in the DOM tree and add them to the layout tree
  • Invisible nodes are ignored by the layout tree, such as the entire content under the head tag; likewise the body.p.span element, whose style contains display: none, is not included in the layout tree
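As a rough sketch (a toy model, far simpler than Blink's real implementation), the traversal described above might look like this, using plain objects to stand in for DOM nodes:

```javascript
// Toy model: walk a DOM-like tree of plain objects and keep only visible
// nodes. Any node whose style contains display: none is dropped with its subtree.
function buildLayoutTree(node) {
  if (node.style && node.style.display === 'none') return null; // invisible: skip subtree
  const layoutNode = { tag: node.tag, children: [] };
  for (const child of node.children || []) {
    const childLayout = buildLayoutTree(child);
    if (childLayout !== null) layoutNode.children.push(childLayout);
  }
  return layoutNode;
}

// The body.p.span case from the text: the span is dropped from the layout tree.
const dom = {
  tag: 'body',
  children: [
    { tag: 'p', children: [{ tag: 'span', style: { display: 'none' }, children: [] }] },
    { tag: 'div', children: [] },
  ],
};
const layoutTree = buildLayoutTree(dom);
```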
Layout calculation

Now we have a complete layout tree. The next step is to calculate the coordinate positions of the nodes in the layout tree.

When a layout operation is performed, its result is written back into the layout tree, so the layout tree serves as both the input and the output. This is an unreasonable aspect of the layout phase: input and output are not clearly separated.

To address this problem, the Chrome team is refactoring the layout code. The next generation of layout system, called LayoutNG, attempts to separate input and output more clearly, thus making the newly designed layout algorithm simpler.

Layering

Here we summarize the first three stages:

After the HTML page content is submitted to the rendering engine, the rendering engine first parses the HTML into a DOM the browser can understand; then, according to the CSS stylesheets, it calculates the styles of all nodes in the DOM tree; finally, it calculates the geometric position of each element and stores this information in the layout tree.

Now that we have the layout tree and the exact location of each element calculated, is it time to start drawing the page?

Again, the answer is no.

Because there are many complex effects on the page, such as complex 3D transformations, page scrolling, or z-index sorting, the rendering engine also needs to generate a LayerTree for a specific node to make it easier to achieve these effects. If you are familiar with Photoshop, you will easily understand the concept of layers, which are added together to form the final page image.

To visualize what layers are, open Chrome’s Developer Tools and select the Layers tab to see the layers of your page.

Now you know that the browser page is actually divided into layers, which are superimposed to create the final page. Let’s look at the relationship between these layers and the nodes in the layout tree, as shown in the figure below:

In general, not every node in the layout tree has its own layer; if a node has no corresponding layer, it belongs to the layer of its parent node. For example, if the span tags in the image above do not have their own layer, they belong to their parent’s layer. Eventually, though, every node belongs, directly or indirectly, to some layer.

So what criteria does the rendering engine need to meet to create a new layer for a particular node? Generally, elements that satisfy either of the following two points can be promoted to a separate layer.

First, elements with stacking context properties are promoted to separate layers.

A page is a two-dimensional plane, but a stacking context gives HTML elements a three-dimensional notion: according to their attribute priorities, elements are distributed along the z-axis perpendicular to that plane. You can use the following image to get a feel for it:

As you can see from the figure, elements with explicit positioning properties, elements with opacity defined, elements with CSS filters, and so on, all have stacking context properties.

Secondly, nodes whose content needs to be clipped will also be promoted to separate layers.

But first you need to understand clipping, combined with the following HTML code:


<style>
      div {
            width: 200px;
            height: 200px;
            overflow: auto;
            background: gray;
        }
</style>
<body>
    <div >
        <p>So elements that have the properties of a cascading context or need to be clipped can be promoted to a separate layer, as you can see below:</p>
        <p>From the figure above, we can see that the Document layer has A and B layers, and the B layer has two more layers. These layers are organized into a tree structure.</p>
        <p>The LayerTree is created based on the layout tree. To find out which elements need to be in which layers, the rendering engine iterates through the layout tree to create the Update LayerTree.</p> 
    </div>
</body>

In this case, we limit the size of the div to 200 pixels by 200 pixels. The div contains a lot of text, and the text must be displayed in more than 200 pixels by 200 pixels. At this point, clipping occurs. The following is the run-time result:

When this clipping happens, the rendering engine creates a separate layer for the text section, and if the scroll bar appears, the scroll bar is promoted to a separate layer. You can refer to the following image:

In short, an element that satisfies either of the two conditions, having stacking context properties or needing to be clipped, can be promoted to a separate layer.

Layer drawing

After building the layer tree, the rendering engine will draw each layer in the tree, so let’s look at how the rendering engine does this.

Imagine if you were given a piece of paper and told to color the background blue, then draw a red circle in the middle, and then a green triangle on top of that circle. How would you do that?

Normally, you would break your drawing operation into three steps:

  1. Draw a blue background
  2. Draw a red circle in the middle
  3. Draw a green triangle on the circle

The rendering engine implements layer drawing in a similar way, breaking a layer’s drawing into smaller instructions, which are then sequentially assembled into a list of instructions to draw, as shown below:

As can be seen from the figure, the instructions in the draw list are actually very simple: each one performs a simple drawing operation, such as drawing a pink rectangle or a black line. Drawing an element usually requires several drawing instructions, because each element’s background, foreground, and borders need separate instructions. So in the layer drawing phase, the output is these draw lists.

You can also go to the Layers tab of the Developer Tools and select the Document layer to see a real draw list, as shown below:

In this figure, area 1 is the drawing list of Document, and dragging the progress bar in area 2 can reproduce the drawing process of the list.

Rasterization operation

A draw list only records the drawing order and the drawing instructions; the actual drawing operations are performed by the compositing thread in the rendering engine. You can see the relationship between the render main thread and the compositing thread in the following image:

As shown above, when the drawing list of layers is ready, the main thread commits the drawing list to the composition thread. How does the composition thread work next?

The area of the page visible on the screen is called the viewport. Generally speaking, a page may be very large, but the user can only see part of it at a time; the part the user can see is the viewport.

In some cases, a layer can be very large. For example, on some pages you need to scroll for a long time to reach the bottom, yet through the viewport the user only sees a small portion of the page. In such cases, drawing out the entire layer’s content would incur too much overhead and is unnecessary.

For this reason, the composition thread will divide the layer into tiles, which are usually 256×256 or 512×512 in size, as shown below:

The compositing thread then prioritizes bitmap generation for the tiles near the viewport, and the actual bitmap generation is performed by rasterization. Rasterization refers to converting a tile into a bitmap; the tile is the smallest unit of rasterization. The renderer process maintains a rasterization thread pool, where all tile rasterization is performed, as shown below:

Generally, the GPU is used to accelerate bitmap generation during rasterization. The process of generating bitmaps with the GPU is called fast rasterization, or GPU rasterization, and the generated bitmaps are stored in GPU memory.

As you may recall, GPU operations are run in the GPU process, and if rasterization operations use the GPU, then the final bitmap generation is done in the GPU, which involves cross-process operations. You can refer to the picture below for the specific form:

As can be seen from the figure, the renderer process sends a tile-generation instruction to the GPU process, which generates the tile bitmaps and saves them in GPU memory.

Composition and display

Once all the tiles have been rasterized, the compositing thread generates a draw-tile command, DrawQuad, and submits it to the browser process.

The browser process has a component called viz that receives DrawQuad commands from the compositing thread, draws its page contents into memory, and displays them on the screen.

At this point, through this series of stages, the HTML, CSS, and JavaScript you wrote is displayed by the browser as a beautiful page.

Rendering pipeline summary

Ok, we have now analyzed the entire rendering process, from HTML to DOM, style calculation, layout, layering, drawing, rasterization, composition, and display. Here’s a diagram to summarize the entire rendering process:

Combined with the above image, a complete rendering process can be summarized as follows:

  1. The renderer process transforms the HTML content into a DOM tree structure it can read.
  2. The rendering engine converts the CSS text into styleSheets that the browser can understand and calculates the styles of the DOM nodes.
  3. It creates a layout tree and calculates the layout information of each element.
  4. It layers the layout tree and generates a layer tree.
  5. It generates a draw list for each layer and submits it to the compositing thread.
  6. The compositing thread divides each layer into tiles and converts the tiles into bitmaps in the rasterization thread pool.
  7. The compositing thread sends the draw-tile command DrawQuad to the browser process.
  8. The browser process generates the page based on the DrawQuad message and displays it on the monitor.

Rendering stage with rearrangement, redrawing, composition

With the foundation of the rendering pipeline introduced above, let’s look at three concepts related to the rendering pipeline — “rearrangement”, “redraw” and “composition”. Understanding these three concepts will help you optimize your Web performance later on.

rearrangement

If you change the geometry of an element using JavaScript or CSS, such as its width or height, the browser triggers a relayout and re-executes the series of sub-stages that follow parsing; this process is called rearrangement (reflow). Rearrangement updates the entire rendering pipeline, so it is also the most expensive. For details, please refer to the following figure:

redraw

If you change an element’s background color, the layout phase is not executed, because no geometry has changed; the pipeline goes straight to the paint phase and then performs the subsequent sub-stages. This process is called redraw (repaint). Redraw skips the layout and layering stages, so it is more efficient than rearrangement. For details, please refer to the following figure:

composition

If you change a property that requires neither layout nor paint, the rendering engine skips the layout and paint stages and performs only the subsequent compositing operations; we call this process composition. Please refer to the following figure for the specific process:

In the image above, CSS transform is used to animate the element, which avoids the rearrangement and redraw phases and executes the compositing animation directly on a non-main thread. This is the most efficient path: the work is composited off the main thread, occupies no main-thread resources, and skips the layout and paint sub-stages, so compared with redraw and rearrangement, composition can greatly improve drawing efficiency.
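As a rough illustration (assuming a hypothetical .box element, not from the original article), here is which path each kind of change takes:

```css
/* Hypothetical .box element, for illustration only */

/* Geometry change: triggers rearrangement (the whole pipeline re-runs) */
.box { width: 300px; }

/* Color-only change: triggers redraw (layout and layering are skipped) */
.box { background: red; }

/* transform change: composition only (handled off the main thread) */
.box { transform: translateX(100px); }
```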

Conclusion

In summary, we can conclude that reducing redrawing and reordering is a good way to optimize Web performance. Here are some ways to reduce rearranging and redrawing:

  1. Use class to manipulate styles instead of frequently changing the inline style attribute
  2. Avoid table layout
  3. Batch DOM operations, for example with createDocumentFragment, or use a framework such as React
  4. Debounce (or throttle) window resize events
  5. Separate reads and writes of DOM properties
  6. Use will-change: transform for optimization
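For point 4, “debounce” means collapsing a burst of events into a single call. A minimal sketch (the helper name and delay value are our own choices, not a standard API):

```javascript
// Minimal debounce helper: fn runs only after `delay` ms have passed
// with no further calls, so a storm of resize events causes one reflow.
function debounce(fn, delay) {
  let timer = null;
  return function (...args) {
    clearTimeout(timer);                               // cancel the pending call
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

// In a browser you would use it like this:
// window.addEventListener('resize', debounce(() => {
//   console.log(document.body.clientWidth); // read layout once per burst
// }, 200));
```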

Threads in the renderer process

As we mentioned earlier, every renderer process has a main thread, and the main thread is very busy dealing with the DOM, style calculation, layout, JavaScript tasks, and various input events. For so many different types of tasks to be executed in an orderly way on the main thread, a system is needed to coordinate their scheduling, and that coordinating system is the message queue and event loop system.

Use a single thread to process scheduled tasks

Starting with the simplest scenario, if there are a series of tasks that need to be executed, we need to write all the tasks into the main thread in order. When the thread executes, the tasks will be executed in order in the thread. When all tasks are completed, the thread exits automatically. As shown in the figure:

New tasks are processed while the thread is running

But not all tasks are uniformly scheduled before execution, and in most cases, new tasks are created while the thread is running. For example, during thread execution, a new task is received to compute 10+2, and the above method does not handle this situation.

In order to be able to receive and execute new tasks while the thread is running, it needs to adopt the event loop mechanism. The main changes are as follows:

  • The first change is the introduction of a loop: a for loop is added at the end of the thread’s statements, so the thread keeps executing.
  • The second change is the introduction of events: while running, the thread can wait for the user to input a number, suspending itself during the wait; once the user’s input arrives, the thread is activated, performs the addition, and finally outputs the result.

Process tasks sent by other threads

Above we improved the way threads execute by introducing an event loop that allows them to accept new tasks during execution. However, in the second version of the threading model, all tasks come from within the thread. If another thread wants the main thread to perform a task, it cannot be done with the second version of the threading model.

Let’s take a look at how other threads send messages to the render main thread. For details, see the following figure:

As can be seen from the figure above, the render main thread frequently receives tasks from the IO thread and needs to handle them after receiving them. For example, after receiving the message that a resource has finished loading, the main thread starts DOM parsing; after receiving a mouse click message, it executes the corresponding JavaScript to handle the click event.

So how do you design a thread model so that it can receive messages sent by other threads?

A common pattern is to use message queues. Before explaining how to implement it, let’s talk about what a message queue is, as shown below:

As you can see from the figure, a message queue is a data structure that holds tasks to be executed. It conforms to the “first in, first out” nature of the queue, that is, to add tasks to the end of the queue; To fetch a task, fetch it from the head of the queue.

With queues in place, we can continue to transform the threading model as shown below:

As can be seen from the above figure, our transformation can be divided into the following three steps:

  1. Add a message queue;
  2. New tasks generated in the IO thread are added to the end of the message queue;
  3. The render main thread executes the task by reading it in a loop from the message queue header.
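The three steps above can be sketched as a toy model (an illustration of the pattern only, not Chrome's actual code):

```javascript
// Toy message queue: the "IO thread" pushes tasks onto the tail,
// the "main thread" loop reads tasks from the head (first in, first out).
class MessageQueue {
  constructor() { this.tasks = []; }
  enqueue(task) { this.tasks.push(task); }   // step 2: add to the tail
  dequeue() { return this.tasks.shift(); }   // step 3: read from the head
  get size() { return this.tasks.length; }
}

const queue = new MessageQueue();
queue.enqueue(() => 1 + 1);    // tasks produced by the "IO thread"
queue.enqueue(() => 10 + 2);

const results = [];
while (queue.size > 0) {       // the "main thread" event loop
  const task = queue.dequeue();
  results.push(task());
}
// Tasks ran in the order they were queued
```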

Process tasks sent by other processes

By using message queues, we achieve message communication between threads. Cross-process tasks occur frequently in Chrome, so how do you handle tasks sent by other processes? You can refer to the following image:

As can be seen from the diagram, the renderer process has an IO thread that receives messages coming from other processes. After receiving them, the IO thread assembles these messages into tasks and sends them to the render main thread; the subsequent steps are the same as in “Process tasks sent by other threads” above, so they are not repeated here.

Type of task in the message queue

There are many types of tasks in message queues, which contain many internal message types, such as input events (mouse scroll, click, move), microtasks, file reads and writes, Websockets, JavaScript timers, and so on. In addition, the message queue contains many page-related events, such as JavaScript execution, DOM parsing, style calculation, layout calculation, CSS animation, and so on.

All of these events are executed in the main thread, so when writing a Web application, we also need to measure how long these events take and find ways to solve the problem of individual tasks taking too long on the main thread.

Disadvantages of single threading and their solutions

All tasks executed by the page’s main thread come from the message queue. The message queue has a “first in, first out” property, which means a task placed in the queue is not executed until all the tasks before it have finished. Given this property, there are two problems that need to be solved.

The first problem is how to handle high-priority tasks.

A typical scenario, for example, is to monitor DOM node changes (node insertions, modifications, deletions, etc.) and then process the corresponding business logic based on those changes. A common design is to use JavaScript to design a set of listening interfaces that the rendering engine calls synchronously when changes occur, a typical observer pattern.

There is a problem with this pattern, however, because the DOM changes very frequently, and if the corresponding JavaScript interface is called directly every time a change occurs, the current task will take longer to execute, resulting in less efficient execution.

If these DOM changes are made into asynchronous message events and added to the end of the message queue, then the real-time monitoring will be affected because many tasks may be queued before being added to the message queue.

That is to say, if DOM changes and synchronous notification is adopted, the execution efficiency of the current task will be affected. If the asynchronous mode is adopted, the real-time monitoring will be affected.

In response to this situation, microtasks came into being. A task placed in the message queue is usually called a macrotask, and each macrotask contains its own microtask queue. While a macrotask is executing, if the DOM changes, the change is added to the microtask queue; this does not interrupt the macrotask’s execution, which solves the efficiency problem, and because the microtask queue is drained before the current macrotask finishes, real-time responsiveness is preserved as well. See this article for a detailed introduction to macrotasks and microtasks.
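The ordering can be observed with a small script, using a resolved Promise as a stand-in for a microtask such as a DOM-change notification: the microtask runs after the current macrotask's synchronous code, but before the next macrotask:

```javascript
// Macrotask vs microtask ordering in the event loop.
const order = [];

setTimeout(() => order.push('macrotask'), 0);          // next macrotask
Promise.resolve().then(() => order.push('microtask')); // microtask queue
order.push('sync');                                    // current macrotask

// After the event loop drains: ['sync', 'microtask', 'macrotask']
```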

The second problem is how to solve the problem that a single task takes too long to execute.

Because all tasks are executed in a single thread, only one task can be executed at a time, leaving all other tasks in a wait state. If one of the tasks takes too long to execute, then the next task has to wait a long time. Please refer to the following figure:

As you can see from the figure, if one JavaScript task takes too long during an animation and eats into the time budget of a single animation frame, the user will perceive lag, which is of course a very bad experience. In this case, JavaScript can circumvent the problem with callback functions, allowing the long JavaScript task to be delayed and executed later.

Timers and AJAX

Earlier we looked at events and message queues in pages, and we saw that browser pages are driven by message queues and the event loop. Now let’s talk about two special Web APIs: setTimeout and XMLHttpRequest. They are two typical, very frequently used APIs in JavaScript, and at first glance they don’t seem to fit the message queue model we just described.

Let’s take a quick look at how they work.

setTimeout

A quick introduction: setTimeout is a timer that specifies that a function should be executed after a given number of milliseconds. It returns an integer, the timer’s ID, which can be used to cancel the timer.

To understand how timers work, we need to review the event loop. We know that all tasks running on the main thread in the renderer process need to be added to the message queue first, and the event loop executes the tasks in the message queue in sequence.

So to execute an asynchronous task, you need to add the task to the message queue. Timer callbacks are a bit special though. They need to be called at a specified time interval, but the tasks in the message queue are executed in order, so to ensure that the callbacks can be executed at a specified time, we cannot add the timer callbacks directly to the message queue.

In Chrome, in addition to the normal message queue, there is a delay queue, which maintains the list of tasks that need to be delayed, including timers and some Chromium-internal tasks that need to be deferred. So when a timer is created in JavaScript, the renderer adds the timer’s callback task to the delay queue.

To be precise, the delay queue mentioned here is a hash map structure. On each turn of the event loop, the system checks whether the tasks in the hash map are due and executes the expired ones; it then enters the next turn of the loop, until all expired tasks have been executed.
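A toy model of this hash-map-based delay queue might look like the following (our own simplification, not Chromium's real code; the function names are made up):

```javascript
// Toy delay queue: timer tasks are stored with their due time; on each turn
// of the loop, expired tasks are executed and then removed.
const delayedTasks = new Map(); // timerId -> { callback, dueTime }
let nextTimerId = 1;

function addDelayedTask(callback, delay, now) {
  delayedTasks.set(nextTimerId, { callback, dueTime: now + delay });
  return nextTimerId++; // the integer ID, usable for cancellation
}

function processDelayedTasks(now) {
  for (const [id, task] of delayedTasks) {
    if (task.dueTime <= now) { // the task has expired: run it, then drop it
      task.callback();
      delayedTasks.delete(id);
    }
  }
}
```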

Some problems with timers

  1. If the current task is executed for a long time, the timer task execution will be affected

    A callback scheduled with setTimeout and a delay of 0 is placed in the message queue and waits for the next turn; it is not executed immediately. Before the next task in the message queue can run, the current task must complete, and if the current task contains a long loop, it will take longer to execute, which inevitably delays the timer callback.

  2. If setTimeout has nested calls, the minimum interval is set to 4 milliseconds

    In Chrome, if timer calls are nested more than five levels deep, the system judges the function to be blocked, and if the requested interval is less than 4 milliseconds, the browser forces the interval of each such call to 4 milliseconds.

  3. For inactive pages, setTimeout is executed with a minimum interval of 1000 milliseconds

    Besides the 4 millisecond rule above, there is another easily overlooked one: for an inactive page, the minimum timer interval is 1000 milliseconds. That is, if the tab is not the currently active one, the minimum interval of its timers is 1000 milliseconds. The purpose is to reduce the loading cost and power consumption of background pages.

  4. The delay has a maximum value

    Chrome, Safari, and Firefox all store the delay value in 32 bits, which can hold at most 2,147,483,647 milliseconds. This means that if the delay passed to setTimeout exceeds 2,147,483,647 milliseconds (about 24.8 days), the value overflows and the delay is treated as 0, causing the timer to execute immediately.

  5. The this in a callback set with setTimeout is counterintuitive

    If the callback delayed by setTimeout is a method of an object, the this keyword in that method points to the global environment, not the object at which it was defined.
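Point 5 can be seen in a short script (the object and method names here are made up for illustration):

```javascript
// A method passed directly to setTimeout loses its receiver,
// so `this` no longer points at the object it was defined on.
const obj = {
  name: 'demo',
  getName() { return this ? this.name : undefined; }
};

const detached = obj.getName; // same situation as setTimeout(obj.getName, 0)
obj.getName();                // 'demo' — called as a method, `this` is obj
detached();                   // undefined — `this` is the global object (or undefined in strict mode)

// Common fixes: wrap in an arrow function, or bind the receiver.
setTimeout(() => obj.getName(), 0);
setTimeout(obj.getName.bind(obj), 0);
```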

XMLHttpRequest

Before XMLHttpRequest, you still had to refresh the entire page if the server data was updated. XMLHttpRequest provides the ability to get data from a Web server, so if you want to update a piece of data, you can just use XMLHttpRequest to request the interface provided by the server, get the data from the server, and then manipulate the DOM to update the content of the page. Instead of having to refresh the entire page, you only need to update a part of the page, which is efficient and doesn’t bother users.

Before we dive into XMLHttpRequest, we need to introduce the concepts of synchronous and asynchronous callbacks.

First, you pass a function as an argument to another function, which is called a callback.

The callback function is executed before the main function returns. We call this callback a synchronous callback.

Asynchronous callbacks occur when a callback function is not executed before the return of the main function, but is executed outside the main function.
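A small sketch makes the difference concrete (function names are our own, for illustration):

```javascript
// Synchronous callback: executed before the outer function returns.
const log = [];
function doWorkSync(onDone) {
  log.push('start');
  onDone();              // runs right here, inside doWorkSync
  log.push('end');
}
doWorkSync(() => log.push('sync callback'));
// log is now ['start', 'sync callback', 'end']

// Asynchronous callback: executed after the outer function has returned,
// via the message queue.
const log2 = [];
function doWorkAsync(onDone) {
  log2.push('start');
  setTimeout(onDone, 0); // queued; runs later, outside doWorkAsync
  log2.push('end');
}
doWorkAsync(() => log2.push('async callback'));
// log2 here is ['start', 'end'] — the callback arrives later
```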

Now that you understand what synchronous and asynchronous callbacks are, let’s take a look at the implementation mechanism behind XMLHttpRequest. Here’s how it works:

This is the overall execution flow of XMLHttpRequest, so let’s take a look at the complete process from request initiation to data receipt.

Let’s start with the use of XMLHttpRequest. Let’s look at the following request code:


function GetWebData(URL) {
    /** 1: Create an XMLHttpRequest request object */
    let xhr = new XMLHttpRequest()

    /** 2: Register the related event callback handlers */
    xhr.onreadystatechange = function () {
        switch (xhr.readyState) {
            case 0: // UNSENT: the request is not initialized
                console.log("Request not initialized")
                break;
            case 1: // OPENED
                console.log("OPENED")
                break;
            case 2: // HEADERS_RECEIVED
                console.log("HEADERS_RECEIVED")
                break;
            case 3: // LOADING
                console.log("LOADING")
                break;
            case 4: // DONE
                if (this.status == 200 || this.status == 304) {
                    console.log(this.responseText);
                }
                console.log("DONE")
                break;
        }
    }

    xhr.ontimeout = function (e) { console.log('ontimeout') }
    xhr.onerror = function (e) { console.log('onerror') }

    /** 3: Open the request */
    xhr.open('GET', URL, true); // Create a GET request, asynchronous

    /** 4: Set parameters */
    xhr.timeout = 3000 // Set the timeout period for the xhr request
    xhr.responseType = "text" // Format of the data returned in the response
    xhr.setRequestHeader("X_TEST", "time.geekbang")

    /** 5: Send the request */
    xhr.send();
}

Above is the code that uses XMLHttpRequest to request data. With the above flowchart, we can examine how this code executes.

Step 1: Create the XMLHttpRequest object.

When let xhr = new XMLHttpRequest() is executed, JavaScript creates an XMLHttpRequest object, xhr, which will perform the actual network request operations.

Step 2: Register the callback function for the XHR object.

Because network requests are time-consuming, callback functions are registered in advance, so that when the background task finishes, it can report the result of its execution by invoking them.

The XMLHttpRequest callback functions are as follows:

  • ontimeout: monitors timed-out requests; it is called if the background request takes longer than the timeout period
  • onerror: monitors error conditions; it is called if the background request fails
  • onreadystatechange: monitors the state of the background request, for example when the HTTP headers have been received, when the HTTP response body is loading, and when the data has finished loading

Step 3: Configure the basic request information.

After registering the callback events, it is time to configure the basic request information through the open interface, including the request address, the request method (GET or POST), and the request mode (synchronous or asynchronous).

Step 4: Initiate the request.

With everything in place, you can call xhr.send to initiate the network request. If you look at the request flow chart above, you can see:

The renderer process sends the request to the network process, which is then responsible for downloading the resource. Once the data has been received, the network process notifies the renderer process via IPC; the renderer then wraps the xhr callback function in a task and adds it to the message queue. When the main thread’s loop system executes this task, it calls the corresponding callback function according to the current request state.
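As a rough mental model only (not actual browser internals), the wrap-as-task step can be sketched like this:

```javascript
// Toy model of the renderer's message queue (not real browser code).
const messageQueue = [];

// The "network process" finishes downloading and notifies the renderer
// via IPC; the renderer wraps the xhr callback as a task and queues it.
function onNetworkDataReceived(responseText, xhrCallback) {
  messageQueue.push(() => xhrCallback(responseText));
}

onNetworkDataReceived('{"ok":true}', (body) => {
  console.log("xhr callback ran with:", body);
});

// The main thread's loop system later drains the queue and runs the task,
// which is the moment the registered callback actually executes.
while (messageQueue.length > 0) {
  const task = messageQueue.shift();
  task();
}
```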

Conclusion

Well, this is the end of this article. A quick review of what it covered:

  • Introduced the concepts of process and thread
  • Introduced single-process and multi-process browsers
  • Analyzed the coordination between the various processes during page loading, and what the rendering process does
  • Analyzed the differences between reflow, repaint, and compositing from the perspective of the rendering process
  • Analyzed the threads inside the rendering process
  • Briefly described the implementation logic of timers and AJAX

Finally, I would like to thank Teacher Li Bing again for his course “Browser Working Principle and Practice”, which I strongly recommend. If you want to learn more about how browsers work under the hood, his course is a great place to start. So, I’ll see you in the next article.