The spring breeze plays matchmaker, turning a pomegranate tree red. The rains come often now; my hut, newly patched yet leaky, gives me reason to don a bamboo hat and visit the old man next door. Along the river bank the plums, heavy with sleep, show a first blush of red among the green; with no one about, I pick them clean for a bag of new wine. Worldly fame cannot be hung on the wall, so I ask heaven and earth for a little spare money to live on. I was still wondering about the old man's ancient urn when a fast horse galloped past and splashed mud all over me. So I sigh at the spring breeze — glad, at least, for the recent rain.

The main function of a browser is to request resources from a server and display them in the browser window: HTML documents, PDFs, images, videos, and other web content. The location of these network resources is specified by the user with a URI (Uniform Resource Identifier).

For most people, the browser looks like this:

A display on the front, an unknown middle layer connecting it to the network world — or, leaving the network out of it entirely: a monitor, and a mysterious black box behind the scenes.

If you’re a front-end developer, you probably spend more time with your browser than with your girlfriend. Think of every not-quite-welcome morning, every evening racing the clock to finish a task, with only the browser and the editor as your faithful companions. Even VS Code, the editor you can’t live without, has its roots in the browser.

Friends in front of the screen: how well do you really know this companion that stays with you day and night? Maybe well, maybe not — but would you like to spend some time getting familiar with the inner world of the browser you’ve given so much of your time to?

Today, we’re going to take a close look at this middle layer that connects us to the Internet. The structure of the article is as follows:

A brief history of the browser

The birth and development of the browser

As you probably know, the first browser, WorldWideWeb, was created in 1990 — but the story of the modern browser begins in the 1980s.

In the early 1980s, a British scientist named Tim Berners-Lee, then working at the European Organization for Nuclear Research (CERN, from its French name) in Switzerland, created a computer program called ENQUIRE. Its aim was to make it easier for the many different individuals working at CERN to share information.

The first browser came out in 1990, while Tim Berners-Lee was working at CERN. You may be wondering what a web browser really is. In short, it is a computer program whose purpose is to retrieve and display data. It does this using the URL assigned to each data set (web page) stored on a web server. So when you type something into the browser, you’re actually typing an address, and the browser uses that address to fetch the information you want to see. Another key function of the browser is to interpret computer code and present it to you in an easy-to-understand way.

Here is a brief history of browser development up to 2020:

Important early browsers include Erwise, ViolaWWW, Mosaic, and Netscape Navigator:

You may have heard the story of what happened after the browser was born in 1990:

  • NCSA Mosaic, or simply Mosaic, was the first widely used web browser in the history of the Internet capable of displaying images. Released in 1993 by the NCSA at the University of Illinois at Urbana-Champaign, it was hugely popular in its day; development and support officially ended on January 7, 1997. Mosaic was one of the sparks that ignited the later dot-com boom. Netscape Navigator’s subsequent development employed many of the original Mosaic engineers, but used none of the Mosaic code; Netscape’s own code lives on in its descendant, the Firefox browser.
  • Marc Andreessen and colleague Jim Clark started a company in 1994, when Mosaic was still the most popular browser, with a plan to build a better browser, take over the market, get rich, and change history. Their first browser was called Mosaic Netscape 0.9, soon renamed Netscape. Thanks to JavaScript (born in 1995, designed and implemented by Netscape’s Brendan Eich in just ten days) and partial screen loading — a new concept that greatly enriched the online experience by letting users start reading a page before it had fully loaded — Netscape quickly became the market leader, at its peak capturing close to 90 percent of the browser market.

On August 9, 1995, Netscape went public. The shares were initially priced at $14, but the offering price was doubled to $28 at the last minute, and the stock touched $75 a share on its first day of trading. Netscape became one of the world’s most valuable Internet companies, and its IPO also helped inflate the growing dot-com bubble.

  • Netscape’s initial success proved to those working in computing and the Internet that times had changed forever, and it shocked the most powerful players in the industry — including a Seattle-based company called Microsoft. Computing, some speculated, would soon run through browsers that worked on any machine, democratizing the software industry and lowering its considerable barriers to entry; many concluded that the days of the operating system were numbered. Because Microsoft had built an empire selling Windows, its proprietary operating system, it saw this development, spearheaded by companies such as Netscape, as a threat. Microsoft created its own browser, Internet Explorer, in the late 1990s — at first widely regarded as a shoddy product — then turned the industry around quickly by investing heavily until its browser was as good as Netscape’s. Windows PCs shipped with Internet Explorer preinstalled, which let it gain a foothold in the market and grow, culminating in what became known as the First Browser War.

The rapid decline in market share led to Netscape’s sale to AOL, and to its dissolution in July 2003 — the same day the Mozilla Foundation was formed. In 2004, Firefox, based on the Mozilla source code, made its debut, starting the Second Browser War. When Netscape was finally discontinued in 2008, the browser empire was officially gone.

By 2003, Microsoft’s Internet Explorer controlled more than 92 percent of the market — a complete reversal from 1995. But while Microsoft had managed to take over the browser market entirely in less than a decade, new competition would soon emerge to reshape the history of the web browser once again.

  • After Microsoft rose to power in the late 1990s and brought companies like Netscape to their knees, the browser’s history seemed to have come to an end. But, as had been the pattern since its initial release, Internet Explorer was turning into a shoddy product. Google launched its own browser, Chrome, in 2008. By the end of 2012, just four years after launch, Chrome had overtaken Internet Explorer as the most popular browser, thanks to its ease of use, cross-platform support, speed, and features around tabs and bookmarks.
  • In the early 2000s, not long after Microsoft had bundled a browser with its operating system, Apple released Safari, a browser designed for the Mac; it is now the second-largest browser on the market.
  • Internet Explorer’s popularity waned in the late 2000s, mainly because it had become slow and outdated, and Microsoft found itself on the outside of the browser world looking in. Not wanting to miss out, the company set about fixing the problem, but found a key obstacle: the name “Internet Explorer” had become synonymous with a bad browser. So, to re-enter the game, Microsoft renamed it — and Edge was born. Edge, the latest incarnation of Microsoft’s browser, has received plenty of praise, but it may be too late for Microsoft.
  • Internet Explorer has finally become a relic of its era, with Microsoft Edge the default browser of Windows 11 — the first Windows release in two decades to ship without IE. As early as Windows 10, Microsoft had said it would stop updating IE in favor of a new browser, Microsoft Edge. It’s time to say goodbye to IE on the desktop once and for all: Internet Explorer disappears from Windows 11 and is retired in 2022.

Browser Market Share

As of early July 2021, the browser market share is as follows.

Browser usage trends:

Browser Market Share:

Domestic browser market share:

If you are interested in the browser market share data above, you can check it out via the following link:

  • Domestic browser market share

    • Browser Market Share
  • Global browser market share

    • Global browser market share
    • w3counter

Browser architecture

The heart of a computer

Three-tier computer architecture: machine hardware at the bottom, operating system in the middle, and applications at the top.

When you launch an app on your computer or phone, it’s the CPU and GPU that power the app. Typically applications run on the CPU and GPU through mechanisms provided by the operating system.


The Central Processing Unit, or CPU for short, can be thought of as the brain of a computer. A CPU core, like the office worker shown here, can handle many different tasks one by one — everything from math to art — while also knowing how to respond to customer requests. In the past, CPUs were mostly single-core. On modern hardware you often get multiple cores, giving phones and laptops more computing power.

The 4 CPU cores act as office workers, sitting at their desks and doing their own work:


The Graphics Processing Unit (GPU) is another part of the computer. Unlike CPUs, GPUs excel at handling simple tasks simultaneously across many cores. As the name suggests, they were originally developed to handle graphics; this is why, in graphics contexts, “using the GPU” or “GPU-accelerated” is associated with fast rendering and smooth interaction. With the rise of GPU-accelerated computing in recent years, more and more computation is possible on the GPU alone.

In the figure below, the many GPU cores each hold only a few kinds of wrench, meaning each can handle only a limited range of tasks.

Process and Thread

A process can be described as an application in execution. A thread lives inside a process and executes some part of the process’s program; a process may contain one or many threads.

A process is created when the application is launched, and the program may create one or more threads to help it work. The operating system gives the process a “block” of memory to use, and all application state is held in that private memory space. When the application is closed, the process disappears and the operating system frees the memory (in the figure below, the bounding box is the process, and the threads swim inside it like abstract fish).

A process can ask the operating system to start another process to perform a different task. At this point, different parts of memory are allocated to the new process. If two processes need to talk, they can do so through interprocess communication (IPC). Many applications are designed this way, so that if a worker process becomes unresponsive, that process can be restarted without stopping other processes running in different parts of the application.

The process/threading architecture model of the browser

Browser Process Classification

There is no standard specification for how to build a Web browser, and one browser can be built differently from another. The process/thread architecture of different browsers generally consists of the following sections:

Chrome Multiprocess Architecture

The architecture of Chrome, the current “king of the browser world”, is shown below. The multiple layers under the renderer process indicate that Chrome runs a separate renderer for each tab.

In the figure above, the browser process at the top coordinates with the processes handling the application’s other modules. For rendering, multiple renderer processes are created, one assigned to each tab. Whenever possible, Chrome gives each tab its own process — and now it even tries to give each site its own process, including iframes.

  • Browser process: controls the “chrome” of the application — the address bar, bookmarks, back and forward buttons — and handles the invisible, privileged parts of the browser, such as network requests and file access.
  • Renderer process: controls how the site is displayed inside the tab.
  • Plugin process: controls any plugins used by the site, such as Flash.
  • GPU process: handles GPU tasks in isolation from other processes. The GPU gets its own process because it handles requests from several different applications and draws them onto the same surface.

Simply put, different parts of the browser UI belong to different processes:

Chrome increasingly resembles an operating system, with web pages and extensions acting as its programs. You’ll even notice that Chrome comes with its own Task Manager panel, listing the processes currently running and how much CPU/memory they’re using.

You can usually open Chrome Task Manager in one of two ways:

  • Right-click the browser’s title bar (tab strip) and select Task Manager;
  • Click the “Options” menu in the upper-right corner of Chrome (the three-dot icon), open the “More Tools” submenu, and click “Task Manager”.

I mentioned earlier that Chrome uses multiple renderer processes — so what are the advantages?

  • Stability: in the simplest case, imagine each tab having its own renderer process. Suppose you have three tabs open, each with its own separate renderer. If one tab becomes unresponsive, you can close it while the rest keep running and stay usable. If all tabs ran in a single process, then when one hung, they would all hang — obviously a bad experience. Below is a GIF comparing the multi-process and single-process architectures for reference.

  • Security and sandboxing: Another benefit of splitting your browser work into multiple processes is security and sandboxing. Because the operating system provides a way to limit process permissions, the browser can sandbox processes with certain functions. For example, Chrome can restrict file access to processes that handle user input, such as renderers.

Since processes have their own private memory space, they often contain duplicate copies of common infrastructure (such as the Chrome V8 engine). That means more memory use: these copies could be shared if they were threads within one process (threads share their process’s memory, though each keeps its own stack so it can run independently). To save memory, Chrome limits how many processes it will start. The limit depends on the device’s memory and CPU power, and once Chrome reaches it, it starts running tabs from the same site in the same process.

Chrome is undergoing an architectural change to run each module of the browser as a service, making it easy to split or merge processes. Concretely, when Chrome runs on powerful hardware, it splits each service into its own process to improve stability; on resource-constrained devices, it aggregates services into a single process to save memory. Before this change, a similar approach of consolidating processes to reduce memory usage was already in use on platforms like Android.

With Chrome 67, the desktop version of Chrome enabled site isolation by default, giving each cross-site iframe in a tab its own renderer process. Enabling site isolation was the result of years of engineering effort: it is not as simple as assigning more renderers — it fundamentally changes how iframes talk to each other. Opening DevTools on a page whose iframes run in different processes means DevTools must work behind the scenes to appear seamless; even a simple Ctrl + F to find a word on the page means searching across different renderer processes. You can see why browser engineers treated the release of site isolation as a major milestone!

Further reading:
Why is Chrome multi-process rather than multi-threaded?

Overall browser architecture

If you’re a front-end engineer, chances are you’ve been asked in an interview: what happens between typing a URL and the page being rendered? If you’re not familiar with the process, I suggest the following two articles; it won’t be repeated here:

  • Classic interview question: What happens from URL input to page presentation?
  • What happens after you press Enter on a URL in the browser (hyper-detailed version)

One of the main tasks of a browser is to render and present pages. The rendering process is not exactly the same across browser engines, but the general flow is similar. The image below comes from the Firefox developer documentation (you can think of Firefox as Netscape reborn).

The image above gives a rough idea of how a browser renders, but when it comes to the overall architecture of the browser, the image above is probably just the tip of the iceberg.

In general, the browser architecture looks something like this:

The user interface

Includes the address bar, forward/back buttons, bookmarks menu, and so on. Every part of the display except the main window showing the requested page belongs to the user interface.

Browser engine

A bridge between the user interface and the rendering engine, transmitting commands between the user interface and the rendering engine. The browser engine provides some advanced methods to start loading the URL resources, such as reloading, forward and backward actions, error messages, loading progress, and so on.

Rendering engine

Responsible for displaying the requested content. If the requested content is HTML, it parses the HTML and CSS and displays the parsed result on the screen.

The browser kernel is the most important — or core — part of the browser: the rendering engine. It is responsible for parsing web-page syntax, such as HTML and JavaScript, and rendering it as the page. So “browser kernel” usually means the rendering engine a browser uses, and the rendering engine determines how the browser displays a page’s content and formatting. Different kernels interpret the same syntax differently, so the same page may look different on different kernels (browser compatibility). This is why web developers test their pages in browsers with different kernels.

An aside: the domestic Redcore browser claimed its own intelligent authentication engine, rendering engine and control engine, plus a powerful “national secret communication protocol” supporting unified and remote control. On August 15, 2018, Redcore was exposed: opening its installation directory revealed a large number of files with the same names as Google Chrome’s, and the installer’s file properties showed an original file name of chrome.exe. Redcore’s official website then removed the browser’s download link. On August 16, Redcore co-founder Gao Jing responded that the browser “does contain Chrome inside”, but that this was not plagiarism — it was “standing on the shoulders of giants and innovating”.

Getting back to the point: the browser kernel consists of three main technical branches — the layout/rendering engine, the JavaScript engine, and everything else.

Layout engines:

  • KHTML: one of the HTML layout engines, developed by KDE. KHTML is fast, but less tolerant of incorrect syntax than the Gecko engine used in Mozilla products. Apple adopted KHTML in 2002 for the development of its Safari browser, and released the source code of its current and past modifications. The open-source WebCore and WebKit engines it published later are both KHTML derivatives.
  • WebCore: a layout engine developed by Apple, based on KHTML. The main browser using WebCore is Safari.

Browser kernel engines basically split the world four ways:

  • Trident: the core engine of IE;
  • Gecko: the engine behind Firefox;
  • WebKit: begun in 1998 (as KHTML) and open-sourced by Apple in 2005; Safari, Google Chrome (originally), Maxthon 3, the Cheetah browser and Opera (later) are based on WebKit;
  • Presto: the former core of Opera; squeezed by the market, it survived mainly in the mobile Opera Mini. (Opera announced a switch to WebKit in February 2013, then in April 2013 announced it would instead follow Google to the new Blink engine.)

In addition, we often hear about Chromium, WebKit2, Blink engines.

  • Chromium: based on WebKit, started in 2008 as the open-source project behind Chrome. The Chromium browser is effectively an experimental version of Chrome where new features are tried out first. Put simply: Chromium is the experimental build with many new features; Chrome is the stable one.

Image Source:
Ten thousand words: in-depth understanding of browser principles

  • WebKit2: launched with OS X Lion in 2010; it implements process isolation at the WebCore level, which conflicted with Google’s sandbox design.
  • Blink: a fork of the WebCore component of WebKit2, which Google began integrating into Chromium in 2013 as the engine of Chrome 28. Android’s WebView is likewise based on Blink. It is the kernel with the best support for new features, and is also used by Opera (version 15 and later) and the Yandex browser.
  • Mobile is almost entirely WebKit or Blink (except Tencent’s X5 on Android). Both kernels have strong support for new features, so new features tend to work well on mobile.

Each kernel diagram:

Let’s take WebKit as an example for a brief introduction, to get a better feel for a rendering engine. WebKit consists of a number of important modules; the figure below gives an overall picture:

WebKit is a page-rendering and logic-processing engine: front-end engineers feed it the “troika” of HTML, JavaScript and CSS, and out of WebKit’s processing comes the web page we can see and interact with. As the figure shows, WebKit consists of the four framed parts, the main ones being WebCore and JSCore (or another JS engine). In addition, the WebKit Embedding API handles interaction between the browser UI and WebKit, while WebKit Ports expose interfaces to native libraries, making it easier to port WebKit across operating systems and platforms — for rendering, for example, Safari relies on CoreGraphics on iOS, while WebKit on Android uses Skia.

WebKit rendering process:

First, the browser resolves a URL to a bundle of HTML, CSS and JS resource files and hands them to WebCore through the loader. The HTML Parser then parses the HTML into a DOM tree, and the CSS Parser parses the CSS into a CSSOM tree. The two trees are combined to produce the final render tree. After layout, the render tree is drawn to the screen through the platform-specific WebKit Ports rendering interface, becoming the web page presented to the user.
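As a toy illustration of the flow just described — HTML to DOM tree, CSS to CSSOM, then a combined render tree — here is a deliberately naive sketch. All names and the regex-based “parsing” are invented for illustration; real parsers are enormously more complex:

```javascript
// Toy sketch of the rendering pipeline stages. Illustrative only.
function parseHTML(html) { // -> simplified "DOM tree"
  const tags = [...html.matchAll(/<(\w+)[^>]*>/g)].map(m => m[1]);
  return { nodes: tags };
}

function parseCSS(css) { // -> simplified "CSSOM"
  const rules = [...css.matchAll(/(\w+)\s*\{([^}]*)\}/g)]
    .map(m => ({ selector: m[1], declarations: m[2].trim() }));
  return { rules };
}

function buildRenderTree(dom, cssom) {
  // Attach matching style rules to each DOM node.
  return dom.nodes.map(tag => ({
    tag,
    style: cssom.rules.find(r => r.selector === tag)?.declarations ?? '',
  }));
}

const dom = parseHTML('<html><body><p>Hi</p></body></html>');
const cssom = parseCSS('p { color: red }');
const renderTree = buildRenderTree(dom, cssom);
// renderTree: html and body unstyled, p styled with "color: red"
```

Layout and painting — computing geometry for each render-tree node and rasterizing it — would follow from here, which is exactly where WebKit hands off to its platform-specific ports.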


Networking

Used for network calls, such as HTTP requests. Its interface is platform-independent, with per-platform implementations underneath; it is responsible for network communication and security.

JavaScript interpreter

Used to parse and execute JavaScript code, the results of which are passed to the rendering engine for display.

User interface backend

Used to draw basic widgets, such as combo boxes and windows. It exposes a common, platform-independent interface and uses the operating system’s own user-interface methods underneath.

Data storage

This is the persistence layer: the browser needs to store all kinds of data, such as cookies, on disk. The HTML5 specification also defines a “web database” — a complete, portable in-browser database.
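For illustration, here is a toy key-value store with the same setItem/getItem shape as the browser’s localStorage. The class is invented for this sketch and persists nothing to disk; it only mirrors the API surface:

```javascript
// A tiny in-memory stand-in for the browser's localStorage API.
// Real browser storage persists to disk and is scoped per origin.
class TinyStorage {
  constructor() { this.map = new Map(); }
  setItem(key, value) { this.map.set(String(key), String(value)); }
  getItem(key) {
    // Like localStorage, a missing key yields null, not undefined.
    return this.map.has(String(key)) ? this.map.get(String(key)) : null;
  }
  removeItem(key) { this.map.delete(String(key)); }
}

const store = new TinyStorage();
store.setItem('theme', 'dark');
console.log(store.getItem('theme'));   // prints "dark"
console.log(store.getItem('missing')); // prints null
```

Note that, as with localStorage, values are coerced to strings — structured data would need JSON.stringify/JSON.parse on the way in and out.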

Browser architectures: same goal, different designs

Some of the browser architectures are listed below. Maybe some of them have changed. If you are interested, you can take a look at them.

Mosaic architecture:

Firefox architecture:

Chrome architecture:

Safari architecture:

IE architecture:

Browser Fundamentals

Chrome V8

The name “V8” originally comes from the V8 car engine, common in mid- to high-end vehicles: eight cylinders arranged in two banks of four in a V shape. It is the most common engine layout in top-level motorsport, especially in the United States, where IRL, CHAMP CAR and NASCAR all require V8 engines.

I have a separate note introducing Chrome V8 in more detail; its outline is as follows, for those interested.

V8 was built for Chrome, but it is by no means limited to the browser kernel. It has since been used in many other settings — the popular Node.js, Weex, quick apps, early React Native, and more. V8 has gone through one major architectural overhaul, whose theme was essentially “abandoning bytecode, then coming back to it”.

An early architecture for V8

The V8 engine was born with a mission: revolutionize speed and memory management. JavaScriptCore worked by generating bytecode and then executing it. Google felt this was the wrong approach — that generating bytecode first was a waste of time, and it would be better to generate machine code directly. So early V8 was radical in its design: it compiled JavaScript straight to machine code. Practice later showed that Google’s architecture did improve speed, but at the cost of serious memory consumption.

The early V8 had two compilers, Full-Codegen and Crankshaft. V8 first compiled all code once with Full-Codegen, generating the corresponding machine code. During JS execution, V8’s built-in Profiler identified hot functions and recorded the feedback types of their parameters, then handed them to Crankshaft for optimization. So Full-Codegen essentially generated unoptimized machine code, while Crankshaft generated optimized machine code.

As web pages become more complex, V8 also gradually reveals its own architectural flaws:

  • Full-Codegen compiled directly to machine code, resulting in a large memory footprint;
  • Full-Codegen compiled directly to machine code, which meant long compile times and slow startup;
  • Crankshaft could not optimize code blocks using keywords such as try, catch and finally;
  • Adding support for new syntax to Crankshaft was difficult, requiring adapter code for every supported CPU architecture.

The current architecture of V8

To address these shortcomings, V8 borrows from the JavaScriptCore architecture to generate bytecode. After V8 adopts bytecode generation, the overall flow is shown as follows:

V8 is now a very complex project, with over a million lines of C++ code. It is made up of many submodules, of which these four are the most important:

  • Parser: responsible for converting JavaScript source code into Abstract Syntax Tree (AST)

    To be precise, before Parser converts JavaScript source code into an AST, there is a “Scanner” process as follows:

  • Ignition: the interpreter, which converts the AST to bytecode and executes it; it also collects the information TurboFan needs for optimizing compilation, such as the types of function arguments. The interpreter’s execution involves four main components: the bytecode, registers, and the stack and heap in memory. The original motivation for Ignition was to reduce memory consumption on mobile devices: before Ignition, the code generated by V8’s Full-Codegen baseline compiler typically accounted for nearly a third of Chrome’s overall JavaScript heap, leaving less room for a web application’s actual data. Ignition’s bytecode can be fed directly to TurboFan to generate optimized machine code, rather than recompiling from source as Crankshaft did. The bytecode also provides a cleaner, less error-prone baseline execution model and simplifies deoptimization, a key feature of V8’s adaptive optimization. Finally, because generating bytecode is faster than generating Full-Codegen’s baseline machine code, activating Ignition generally improves script startup time, and with it page load.
  • TurboFan: the optimizing compiler, which turns the bytecode into optimized machine code using the type information gathered by Ignition. The TurboFan project was launched in late 2013 to address Crankshaft’s shortcomings. Crankshaft could optimize only a subset of JavaScript — for example, it was never designed to optimize code using structured exception handling, i.e. blocks delimited by JavaScript’s try, catch and finally keywords. And adding support for new language features to Crankshaft was difficult, because they almost always required architecture-specific code for the nine supported platforms.
  • Orinoco: Garbage Collector, a garbage collection module that is responsible for recycling memory space that is no longer needed by a program.

With the new Ignition + TurboFan architecture, memory use fell by more than half compared with the Full-Codegen + Crankshaft architecture, and web-page speed improved by around 70%.
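The type feedback Ignition collects pays off most when code is type-stable. This sketch only shows the shape of such code — the optimization itself is invisible from JavaScript, and the function and counts are arbitrary examples:

```javascript
// A function called repeatedly with the same argument types ("monomorphic")
// is an easy target for TurboFan's speculative optimization.
function add(a, b) { return a + b; }

let sum = 0;
for (let i = 0; i < 100000; i++) sum = add(sum, 1); // always numbers

// Mixing argument types forces the engine to keep generic fallback paths
// (and can deoptimize previously optimized code).
const mixed = add('10', 5); // string concatenation: "105"
console.log(sum, mixed); // prints 100000 "105"
```

This is also why style guides often advise keeping a function’s argument types consistent on hot paths: the engine rewards monomorphic call sites.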

C, C++ and Java programs must be compiled before they run; you cannot execute the source directly. With JavaScript, though, we can run the source directly (for example, node test.js): it is compiled at run time, just before execution. This approach is called just-in-time compilation, or JIT, so V8 is also a JIT compiler.


Before V8 came along, the mainstream JavaScript engine was JavaScriptCore. JavaScriptCore (JSCore) mainly serves the WebKit browser kernel and was developed and open-sourced by Apple. JSCore is the JS engine embedded in WebKit by default — “by default” because many browser engines forked from WebKit have developed their own JS engines, the most famous being the Chrome V8 discussed above. The mission of these JS engines is to interpret and execute JS scripts. During rendering, JS interacts with the DOM tree, because JavaScript’s main job in the browser is to manipulate the DOM. Here’s how it works:

JavaScriptCore main modules:
  • Lexer: breaks the script source into a sequence of tokens;
  • Parser: processes the tokens and generates the corresponding syntax tree;
  • LLInt: the low-level interpreter, which executes the bytecode produced from the parse;
  • Baseline JIT: the baseline just-in-time compiler;
  • DFG: the low-latency optimizing JIT;
  • FTL: the high-throughput optimizing JIT.

As you can see, an interpreted language’s pipeline is much simpler than that of a statically compiled language, which after generating the syntax tree still has to link, load and produce an executable. The JSCore components appear in the frames on the right of the flowchart: Lexer, Parser, LLInt, and the JITs (the JIT parts are orange because not every JSCore build includes them).

  • Lexical analysis is easy to understand: it is the process of breaking the source code we have written into a sequence of tokens, also called tokenization. In JSCore, lexing is done by the Lexer (some compilers or interpreters call this component a Scanner, as Chrome V8 does).
  • Syntax analysis works like human language: when we speak, we follow a convention, stringing words together according to a grammar. The same applies to computer languages: to understand a program, the machine must understand the syntax of each statement. The Parser takes the token sequence produced by the Lexer and generates an abstract syntax tree (AST). After that, the BytecodeGenerator completes the parsing step by generating JSCore bytecode from the AST.
  • After these two steps of lexical and syntax analysis, the JS source code has been turned into bytecode; this is, in effect, the compilation step that every programming language must go through. However, unlike the Objective-C code we compile and run, JS does not produce object code or an executable file stored in memory or on disk. The generated bytecode is immediately interpreted, instruction by instruction, by the JSCore virtual machine. Running this bytecode is the core part of a JS engine, and it is where each engine concentrates its optimization work.
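To make the lexing step concrete, here is a toy tokenizer in JavaScript. It only illustrates how source text is split into a token sequence — it is not JavaScriptCore's actual Lexer, which handles the full grammar, string escapes, keywords and much more:

```javascript
// Toy lexer: splits a tiny expression into a token stream.
// A simplified illustration of the lexing step, not JSCore's Lexer.
function tokenize(source) {
  const tokens = [];
  // Numbers, identifiers, or single-character punctuators.
  const re = /\s*(\d+|[A-Za-z_$][\w$]*|[+\-*\/=();])/g;
  let match;
  while ((match = re.exec(source)) !== null) {
    const text = match[1];
    let type;
    if (/^\d+$/.test(text)) type = 'Number';
    else if (/^[A-Za-z_$]/.test(text)) type = 'Identifier';
    else type = 'Punctuator';
    tokens.push({ type, text });
  }
  return tokens;
}

console.log(tokenize('sum = a + 42'));
// → [ { type: 'Identifier', text: 'sum' },
//     { type: 'Punctuator', text: '=' },
//     { type: 'Identifier', text: 'a' },
//     { type: 'Punctuator', text: '+' },
//     { type: 'Number', text: '42' } ]
```

A real lexer would also track source positions for error messages and distinguish keywords from identifiers; the Parser then consumes this token stream to build the AST.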

PS: Strictly speaking, a language itself is neither compiled nor interpreted, because a language is just a set of abstract definitions and constraints and does not mandate a particular implementation. We call JS an "interpreted language" because JS is usually interpreted dynamically by a JS engine, not because interpretation is a property of the language itself.

If you are interested in JavaScriptCore and want more detail, I recommend the following blog posts:

  • Deep understanding of JScore
  • Dig deep into JavaScriptCore
  • JavaScriptCore is fully parsed
  • JavaScriptCore

Browsers and JavaScript

In this section, I will use Chrome V8 as an example to briefly explain the relationship between browsers and JavaScript.

Prior to V8, JavaScript virtual machines all used pure interpretation, which was one of the main causes of slow JavaScript execution. V8 pioneered a two-track, just-in-time (JIT) design that mixes compiler and interpreter techniques; this tradeoff of combining compiled and interpreted execution gave JavaScript a huge speed boost. Since the advent of V8, other vendors have introduced JIT into their JavaScript VMs as well, so the JavaScript VMs on the market today share a similar architecture. In addition, V8 introduced lazy compilation, inline caching, hidden classes and other mechanisms earlier than other virtual machines, further improving how efficiently JavaScript code is compiled and executed.

V8 executes a JavaScript flow

The process of V8 executing a JavaScript snippet is shown below:

With the Chrome V8 architecture introduced above in mind, let's focus on JavaScript. The browser obtains the JavaScript source code, and Parser, Ignition and TurboFan compile it down to machine code. The flow chart is as follows:

Simply put, the Parser converts the JS source code to an AST, Ignition converts the AST to bytecode, and finally TurboFan converts the bytecode to optimized machine code (in effect, assembly code).

  • If a function is never called, V8 does not compile it.
  • If a function is called only once, Ignition generates its bytecode and interprets it directly. TurboFan does not do optimizing compilation, because it relies on type information collected while the function executes; the function must run at least once before TurboFan can optimize it.
  • If a function is called many times, it may be flagged as a hot function, and if the type information collected by Ignition supports optimization, TurboFan compiles the bytecode down to optimized machine code to improve execution performance.

The red dotted line in the image points backward: the machine code is discarded and execution falls back to bytecode, a process called deoptimization. This happens because the type information Ignition gathers can turn out to be wrong — for example, when the arguments to the add function start out as integers and then become strings. The generated machine code assumed that add takes integers, an assumption that is now wrong, so the code must be deoptimized.

function add(x, y) {
  return x + y;
}

add(1, 2);     // integer arguments: the optimized code's assumption holds
add('1', '2'); // string arguments: the type guard fails, triggering deoptimization

V8 is essentially a virtual machine. Since a computer can only execute binary instructions, there are generally two ways to make it run a high-level language:

  • The first is to compile the high-level code into binary code ahead of time and let the computer execute that.
  • The other is to install an interpreter on the computer and let the interpreter execute the source directly.

Both approaches have trade-offs: interpreted execution starts fast but runs slowly, while compiled execution starts slowly but runs fast. To exploit the advantages of each while avoiding their shortcomings, V8 adopts a mixed strategy: it interprets code during startup, but once a piece of code is executed more often than a certain threshold, V8 has the optimizing compiler compile it into more efficient machine code.

To summarize, executing a piece of JavaScript code in V8 involves these main steps:

  • initialize the base environment;
  • parse the source code to generate the AST and scopes;
  • generate bytecode from the AST and scopes;
  • interpret and execute the bytecode;
  • monitor for hot code;
  • compile hot code into optimized binary machine code;
  • deoptimize the generated machine code when its assumptions break.
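The tiering behavior in the steps above — interpret first, optimize hot code, deoptimize when a type assumption breaks — can be caricatured in plain JavaScript. Everything here (`makeTieredAdd`, the threshold, the type guard) is invented for illustration; real V8 does all of this inside the engine, not in user code:

```javascript
// Caricature of V8's tiering. All names are made up for illustration.
const HOT_THRESHOLD = 3;

function makeTieredAdd() {
  let calls = 0;
  let optimized = null;

  // "Interpreter" path: generic, handles any operand types.
  function interpretedAdd(x, y) {
    return x + y;
  }

  // "Optimized" path: specialized for integers, with a type guard
  // that triggers "deoptimization" when its assumption breaks.
  function optimizedAdd(x, y) {
    if (!Number.isInteger(x) || !Number.isInteger(y)) {
      optimized = null; // deoptimize: fall back to the generic path
      return interpretedAdd(x, y);
    }
    return x + y;
  }

  return function add(x, y) {
    calls += 1;
    if (optimized) return optimized(x, y);
    if (calls >= HOT_THRESHOLD) optimized = optimizedAdd; // now "hot"
    return interpretedAdd(x, y);
  };
}

const add = makeTieredAdd();
add(1, 2); add(1, 2); add(1, 2);  // third call marks the function hot
console.log(add(3, 4));      // 7, via the "optimized" path
console.log(add('1', '2'));  // '12', triggers the "deoptimization"
```

The real engine also discards the collected type feedback on deoptimization and may re-optimize later with broader assumptions; this sketch only shows the control flow.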

Chrome V8’s event mechanism

As for asynchronous programming and message queues: the UI thread owns a message queue, events to be handled are added to that queue, and the UI thread continuously loops, taking events off the queue and executing them. The macro architecture of a typical UI thread is shown in the figure below:
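A minimal sketch of such a message queue and its processing loop, in JavaScript (the class and method names are made up; a real browser runs this loop natively and interleaves input, timer and rendering tasks):

```javascript
// Minimal sketch of a UI-thread message queue and its loop.
class MessageQueue {
  constructor() {
    this.tasks = [];
  }
  post(task) {
    this.tasks.push(task); // producers enqueue work here
  }
  // One turn of the loop: drain whatever is currently queued, FIFO.
  runLoopOnce() {
    while (this.tasks.length > 0) {
      const task = this.tasks.shift(); // oldest task first
      task();
    }
  }
}

const queue = new MessageQueue();
const log = [];
queue.post(() => log.push('parse HTML'));
queue.post(() => log.push('run timer callback'));
queue.post(() => log.push('paint'));
queue.runLoopOnce();
console.log(log); // tasks ran in the order they were posted
```

The key property this models is that tasks run to completion one at a time in posting order, which is why a long-running task blocks everything queued behind it.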

Different forms of browsers


WebView is an embedded browser that native applications can use to display Web content. A WebView is just a visual component/control/widget, so we can use it as a visual part of a native app. When you use a native app, the WebView may be hidden among ordinary native UI elements, and you won't even notice it.

Think of the browser as two parts: one is the UI (address bar, navigation buttons, etc.), and the other is the engine that turns markup and code into the view we can see and interact with.
A WebView is the browser-engine part: you can insert a WebView into your native application much like an iframe into a web page, and programmatically tell it what Web content to load.

JavaScript running in your WebView has the ability to call native system APIs. This means you don’t have to be constrained by the traditional browser security sandbox that Web code typically has to adhere to. The following diagram illustrates the architectural differences using this technique:

By default, any Web code that runs in a WebView or Web browser is kept isolated from the rest of the application. This is done for security reasons, primarily to minimize the damage malicious JavaScript code can do to the system. For arbitrary Web content this level of security makes sense, because you can never fully trust loaded content. With a WebView, however, the developer usually has complete control over what is loaded, so the likelihood of malicious code getting in and wreaking havoc on the device is very low.

This is why, with WebView, developers can use a variety of supported ways to override the default security behavior and let Web code and native application code communicate with each other. This communication is often referred to as a bridge. You can see the bridge visualization in the image above as part of the Native Bridge and JavaScript Bridge.
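The bridge pattern can be simulated in plain JavaScript. In this sketch both "sides" are ordinary JS objects so the message-passing pattern itself is visible — on a real platform the native side would use APIs such as Android's `addJavascriptInterface` or iOS's `WKScriptMessageHandler`, and every name below is hypothetical:

```javascript
// Simulated JavaScript <-> native bridge (all names are made up).
const nativeBridge = {
  handlers: new Map(),
  // "Native" side registers the capabilities it exposes to web code.
  register(name, handler) {
    this.handlers.set(name, handler);
  },
  // Web side posts a message across the bridge and gets a result back.
  postMessage(name, payload) {
    const handler = this.handlers.get(name);
    if (!handler) throw new Error(`no native handler for "${name}"`);
    return handler(payload);
  },
};

// Native side: expose a (fake) battery API to the WebView's JS.
nativeBridge.register('getBatteryLevel', () => 0.83);

// Web side: call through the bridge instead of a browser API.
console.log(nativeBridge.postMessage('getBatteryLevel')); // → 0.83
```

Real bridges additionally serialize payloads (usually as JSON strings) and are asynchronous, returning results via callbacks or promises rather than direct return values.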

WebViews are great, and while they may seem totally special and unique, remember that a WebView is essentially just a browser, positioned and sized inside an application, without any fancy UI — that is its essence. In most cases you don't have to test your Web application specifically in a WebView unless you call native APIs. What you see in the WebView is what you see in the browser, especially when they use the same rendering engine:

  • On iOS, the Web rendering engine is always WebKit, the same as Safari and Chrome. Yes, you read that right. Chrome on iOS actually uses WebKit.
  • The rendering engine on Android is usually Blink, the same as Chrome.
  • On Windows, Linux, and macOS, these more permissive desktop platforms allow a lot of flexibility in choosing a WebView flavor and rendering engine. The rendering engines you will most often see are Blink (Chrome) and Trident (Internet Explorer), but there is no single default; it all depends on the application and which WebView engine it uses.

Application of WebView:

  • One of the most common uses of a WebView is to display the content of a link;
  • Ads are still one of the most popular ways to make money for native apps, with the majority of ads being delivered through Web content provided by WebViews;
  • Hybrid apps. Hybrid applications are popular for several reasons, the biggest being developer productivity: if you have a responsive Web application that runs in a browser, it is fairly simple to run the same application as a hybrid app on a variety of devices. When you update the Web application, every device using it gets the change immediately, because the content comes from a centralized server; with a pure native application, each deployment and update must go through a build and review process for every platform.
  • Native application extensions, such as Web-based extensions in Microsoft Office such as Wikipedia, are implemented through a WebView.

If you are interested in WebView, read the following articles:

  • 7.5.1 Basic usage of WebView
  • Android: This is a comprehensive and detailed guide to how to use WebView

Headless browser

A headless browser is a Web browser without a graphical user interface (GUI), usually controlled through a command line or network communication. It is used primarily by software test engineers; because they do not have to draw visual content, browsers without GUIs run faster. One of the biggest benefits of headless browsers is that they can run on servers with no GUI support.

Headless browsers are especially useful for testing Web pages because they are able to render and understand HTML like a browser, including style elements such as page layout, color, font selection, and JavaScript and Ajax execution that are not normally available with other testing methods.

The Headless browser has two main deliverables:

  • Headless libraries, which allow embedded applications to control the browser and interact with Web pages.
  • A headless shell, which is a sample application that performs various functions of the headless API.

Puppeteer is a Node library that provides a set of APIs for controlling Chrome. It is generally used as a headless Chrome browser (although you can configure it to show a UI, by default it does not). Because it drives a real browser, almost anything we can do manually in a browser can be automated with Puppeteer. Typical uses include:

  • generate screenshots or PDFs of web pages;
  • build advanced crawlers that can crawl pages with large amounts of asynchronously rendered content;
  • implement automated UI testing: simulate keyboard input, auto-submit forms, click, log in to pages, etc.;
  • capture a site's timeline to trace it and help analyze performance issues;
  • simulate different devices;
  • …
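The first two uses can be sketched with Puppeteer's documented API. This is a minimal sketch that assumes `puppeteer` has been installed via npm; the URL and file name are placeholders:

```javascript
// Screenshot + scrape sketch using Puppeteer (requires `npm install puppeteer`).
const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch(); // headless by default
  const page = await browser.newPage();
  await page.goto('https://example.com', { waitUntil: 'networkidle0' });

  // 1) Screenshot generation (page.pdf() works similarly for PDFs).
  await page.screenshot({ path: 'example.png', fullPage: true });

  // 2) Scrape content, including content rendered asynchronously by JS.
  const title = await page.evaluate(() => document.title);
  console.log(title);

  await browser.close();
})();
```

Waiting for `networkidle0` before scraping is what lets this approach capture asynchronously rendered content that a plain HTTP crawler would miss.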

The biggest difference between Puppeteer and WebDriver or PhantomJS, which were originally designed for automated testing, is the perspective from which each views the browser, so they follow different design philosophies.

Further reading:

  • Headless Chrome architecture
  • puppeteer
  • Puppeteer tutorial
  • Talk about Puppeteer in terms of projects


Electron (formerly known as Atom Shell) is an open source framework developed by GitHub. It uses Node.js (as the back end) and Chromium’s rendering engine (as the front end) to develop cross-platform desktop GUI applications. It has been used for front-end and back-end development by several open source Web applications, notably GitHub’s Atom and Microsoft’s Visual Studio Code.

The Electron architecture consists of multiple Render Processes and one Main Process. The Main Process starts the Render Processes, and communication between them goes through IPC (Inter-Process Communication), as shown in the figure below.
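A minimal sketch of that split, assuming `electron` has been installed via npm; the channel name and file layout are invented for illustration:

```javascript
// main.js — the Main Process: creates windows and answers IPC calls.
// (Requires `npm install electron`; run with the electron binary.)
const { app, BrowserWindow, ipcMain } = require('electron');

app.whenReady().then(() => {
  // Main Process handles requests arriving on the 'read-version' channel.
  ipcMain.handle('read-version', () => app.getVersion());

  const win = new BrowserWindow({ width: 800, height: 600 });
  win.loadFile('index.html'); // each window gets its own Render Process
});

// In the renderer (inside the Chromium page), code would call back into
// the Main Process over IPC, for example:
//   const version = await ipcRenderer.invoke('read-version');
```

The `handle`/`invoke` pair gives request/response semantics over IPC; in practice the renderer side is usually exposed through a preload script rather than using `ipcRenderer` directly.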

Electron (formerly known as Atom Shell) is also the foundation of VS Code, the IDE many of us use daily. This is visible in the figure below (click "Toggle Developer Tools" under VS Code's [Help] menu to open this panel):

The other major components of VS Code are:

  • Monaco Editor (the code editor)
  • Language Server Protocol
  • Debug Adapter Protocol
  • Xterm.js

Further reading:
Electron | Build cross-platform desktop apps with JavaScript, HTML, and CSS

Browser code compatibility testing

  • caniuse
  • browseemall
  • html5test


  • A Brief History of Browsers
  • Some concepts related to Web browsers
  • How Browsers Work: Behind the scenes of the new web browser
  • From browser multi-process to JS single thread, JS running mechanism of the most comprehensive comb
  • 🤔 Which is the best mobile JS engine?
  • What you didn’t know about the interview from the perspective of V8
  • High performance JavaScript engine V8 – Garbage collection
  • Inside Look at Modern Web Browser

Resources

  • Inside look at modern web browser
  • How Browsers Work: Chrome V8 Makes You Better at Javascript
  • Deep understanding of JScore
  • The Story of the Web: A History Of Internet Browsers
  • PPT – Browser Architecture
  • The JavaScript engine V8 performs a process overview
  • Understanding WebViews
  • Quantum Up Close: What is a browser engine?

This article first appeared on my blog — corrections and stars are welcome.