About the author

Ziyi (interested readers can visit Ziyi's Juejin homepage, which hosts many excellent articles) is a member of the DingTalk front-end team, responsible for engineering infrastructure, client-side applications, and client-side module plugins for the DingTalk PC client.

Introduction

This article is an introduction to V8's compilation principles, designed to give you a feel for how V8 parses JavaScript. It is organized as follows:

  • Interpreters and compilers: an introduction to the fundamentals of compilation
  • V8 compilation principles: building on those fundamentals, how V8 parses JavaScript
  • Runtime performance of V8: using V8's compilation principles to observe its concrete behavior while parsing and running code

A friendly reminder: some topics in this article are mentioned without a full introduction or a reference link. Interested readers can look them up on their own, or follow our team account; more detailed articles in the "simple JavaScript" series will be published in the future.

Interpreters and compilers

A question you may have wondered about: is JavaScript an interpreted language? To answer it, you first need to understand what interpreters and compilers are and how they differ.

The interpreter

An interpreter takes a source program written in some language as input, and produces the result of executing that program as output. Perl, Scheme, APL, and others execute code this way:
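To make the idea concrete, here is a minimal sketch of a tree-walking interpreter for arithmetic expressions. It is illustrative only, not how Perl or Scheme engines are actually built; the node shapes are invented for this example.

```javascript
// A tiny tree-walking interpreter: the source program (here, an AST)
// goes in, and the execution result comes out. No target program is
// ever produced.
function interpret(node) {
  switch (node.type) {
    case 'num':
      return node.value;
    case 'add':
      return interpret(node.left) + interpret(node.right);
    case 'mul':
      return interpret(node.left) * interpret(node.right);
    default:
      throw new Error(`unknown node type: ${node.type}`);
  }
}

// Represents (1 + 2) * 3
const program = {
  type: 'mul',
  left: {
    type: 'add',
    left: { type: 'num', value: 1 },
    right: { type: 'num', value: 2 },
  },
  right: { type: 'num', value: 3 },
};

console.log(interpret(program)); // 9
```

Note that compilation and execution are interleaved here: nothing is cached, so re-running the program walks the whole tree again every time.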

The compiler

Compiler design is a very large and complex software engineering problem. A real design has to solve two particularly important problems:

  • How to analyze source programs written in different high-level programming languages
  • How to map the source program's behavior, equivalently, onto target machines with different instruction sets

Intermediate representation (IR)

An Intermediate Representation (IR) is a representation of program structure that is closer to assembly language or an instruction set than an Abstract Syntax Tree (AST), while still retaining some high-level information from the source program. Its benefits include:

  • Easier compiler debugging: problems can be localized to the front end (before the IR is produced) or the back end (after it)
  • Better separation of the compiler's responsibilities: the source-language front end focuses on translating to IR rather than adapting to different instruction sets
  • The IR is closer to the instruction set, and therefore takes less memory than the source representation
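The bullet points above can be illustrated with a hypothetical sketch that lowers a small expression AST into a linear, three-address-style IR. Real compiler IRs (LLVM IR, V8's graphs) are far richer; this only shows why an IR sits "closer to the instruction set" than an AST.

```javascript
// Lower an expression AST into flat three-address instructions.
// Each inner node becomes one instruction writing a fresh temporary.
let tempId = 0;

function lower(node, out) {
  if (node.type === 'num') return String(node.value);
  const lhs = lower(node.left, out);
  const rhs = lower(node.right, out);
  const dst = `t${tempId++}`;
  out.push(`${dst} = ${lhs} ${node.type === 'add' ? '+' : '*'} ${rhs}`);
  return dst;
}

// Represents 1 + (2 * 3)
const ast = {
  type: 'add',
  left: { type: 'num', value: 1 },
  right: {
    type: 'mul',
    left: { type: 'num', value: 2 },
    right: { type: 'num', value: 3 },
  },
};

const ir = [];
lower(ast, ir);
console.log(ir); // [ 't0 = 2 * 3', 't1 = 1 + t0' ]
```

The tree structure is gone; what remains is a flat sequence of operations that maps almost one-to-one onto machine instructions, and that an optimizer can scan linearly.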

Optimizing compilers

The IR can be optimized over multiple passes. Each pass can record details of what it optimized, so that later passes can find and reuse that information; the final output is a more efficient target program:

The optimizer can process the IR one or more times to generate a target program that executes faster or is smaller (for example, by finding loop-invariant computations and hoisting them out to reduce the number of operations), or to generate code that raises fewer exceptions or consumes less power. The front end and back end can themselves be split into multiple passes, as shown in the following figure:
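The loop-invariant example mentioned above can be shown by hand in source form. A real optimizer performs this transformation on the IR automatically; writing it out just makes the saving visible.

```javascript
// Before: `factor * 2` does not change across iterations, yet it is
// recomputed on every pass through the loop.
function sumBefore(arr, factor) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i] * (factor * 2); // loop-invariant computation
  }
  return total;
}

// After: the invariant computation is hoisted out and done once.
function sumAfter(arr, factor) {
  const scaled = factor * 2; // hoisted out of the loop
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i] * scaled;
  }
  return total;
}

console.log(sumBefore([1, 2, 3], 5)); // 60
console.log(sumAfter([1, 2, 3], 5));  // 60
```

Both versions compute the same result; the second simply performs fewer multiplications per iteration.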

Comparing the two

The characteristics of interpreters and compilers compare as follows:

| Aspect | Interpreter | Compiler |
| --- | --- | --- |
| Working mechanism | Compilation and execution are interleaved | Compilation and execution are separate |
| Startup | Relatively fast | Relatively slow |
| Running performance | Relatively low | Relatively high |
| Error detection | At runtime | At compile time |

It should be noted that the early Web required pages to start quickly, so front-end code was interpreted; as a result, pages ran relatively slowly. To solve this problem, JavaScript code needed to be optimized at runtime, which is why JIT technology was introduced into JavaScript engines.

JIT compilation technique

A JIT (Just-In-Time) compiler is a form of dynamic compilation. Compared with a traditional compiler, the biggest difference is that compile time and run time are not separated: code is compiled dynamically while the program runs.

| Aspect | Interpreter | Compiler | JIT compiler |
| --- | --- | --- | --- |
| Working mechanism | Compilation and execution are interleaved | Compilation and execution are separate | Compilation and execution are interleaved |
| Startup | Fast | Slow | In between |
| Running performance | Relatively low | Relatively high | Generally better than an interpreter, depending on the optimizations |
| Error detection | At runtime | At compile time | At runtime |

Hybrid dynamic compilation techniques

To address JavaScript's slow runtime performance, we can introduce JIT technology and adopt hybrid dynamic compilation. The idea is as follows:

With the compilation framework above, JavaScript gets:

  • Fast startup: at startup, JavaScript is interpreted, taking advantage of the interpreter's fast startup
  • High performance: the code is monitored while it runs, so hot code can be compiled and optimized with JIT techniques
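The two bullets above can be caricatured in a toy sketch: interpret by default, count invocations, and once a function becomes "hot" switch to a compiled version. The threshold and the "compiler" here are invented for illustration; V8's real pipeline is Ignition plus TurboFan.

```javascript
// Toy hybrid pipeline: interpretation for fast startup, "compilation"
// once a call site is hot. The Function constructor stands in for a
// JIT: it builds a reusable compiled function once.
const HOT_THRESHOLD = 3; // invented threshold, for illustration only

function makeHybrid(expr) {
  let calls = 0;
  let compiled = null;

  // Stand-in for slow interpretation: re-"parse" the source every call.
  const interpret = (x, y) => Function('x', 'y', `return ${expr};`)(x, y);

  return (x, y) => {
    if (compiled) return compiled(x, y); // fast path after warm-up
    if (++calls >= HOT_THRESHOLD) {
      compiled = Function('x', 'y', `return ${expr};`); // "JIT compile"
    }
    return interpret(x, y);
  };
}

const add = makeHybrid('x + y');
add(1, 2); add(1, 2); add(1, 2); // becomes hot, gets "compiled"
console.log(add(3, 4)); // 7
```

The first call is cheap to start (no up-front compilation of the whole program), and repeated calls eventually run the cached compiled function instead of re-interpreting.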

V8 compilation principles

V8 is an open source JavaScript virtual machine, currently used mainly in the Chrome browser (and the open source Chromium) and in Node.js; its core job is to parse and execute JavaScript. To solve JavaScript's early performance problems, V8 went through several generations of compilation framework (interested readers can look into the early V8 designs) before introducing hybrid dynamic compilation. The current framework looks like this:

The Ignition interpreter

Ignition's main job is to convert the AST into bytecode. While the bytecode runs, Ignition applies type feedback and identifies HotSpots (code that runs repeatedly, whether a whole method or a loop body), which are eventually handed to TurboFan for dynamic optimizing compilation. Ignition's workflow is as follows:

While the bytecode is being interpreted, runtime information useful for optimization is written into the corresponding Feedback Vector (previously known as the Type Feedback Vector). The Feedback Vector stores various kinds of Feedback Vector Slot information, organized according to Inline Caches (ICs): for example, the BinaryOp slot (the result type of a binary operation), the Invocation Count (how many times the function has been called), and Optimized Code information.

Tips: the details of each execution step are not covered here; they will appear in later articles in this series. The Feedback Vector shown above is printed in the V8 runtime demo below.

The TurboFan optimizing compiler

TurboFan uses JIT compilation to optimize JavaScript code at runtime, as shown below:

An Introduction to Speculative Optimization in V8

Note the Profiling Feedback part of the diagram: it feeds in the Feedback Vector of runtime information generated while Ignition executes. TurboFan combines the bytecode with the Feedback Vector to generate a graph (a graph data structure), and the code is then optimized, or de-optimized, according to the feedback vector information.

Tips: de-optimization means pulling the code back to Ignition for interpreted execution. Essentially it happens because the optimized machine code can no longer do what is asked of it: for example, if a variable goes from string to number but the machine code was compiled for strings, the code can no longer behave correctly, so V8 de-optimizes and sends execution back to Ignition.

Runtime performance of V8

Now that we know how V8 compiles code, we can use V8's debugging tools to see in detail how JavaScript is compiled and run, and deepen our understanding of the V8 compilation process.

D8 debugging tool

If you want to inspect JavaScript's compile-time and runtime information in V8, you can use the debugging tool D8. D8 is the V8 engine's command-line shell; with it you can view the generated AST, the intermediate bytecode, optimized code, de-optimized code, optimizing-compiler statistics, code GC, and so on. There are several ways to install D8:

  • Method 1: download and build the toolchain yourself, following the official V8 documents Using D8 and Building V8 with GN
  • Method 2: use a D8 binary someone else has compiled (the version may lag behind), such as the Mac build
  • Method 3: use a JavaScript engine version manager, such as JSVU, to download the latest compiled JavaScript engine

After installing v8-debug, run v8-debug --help to view the available commands:

```shell
v8-debug --help
Synopsis:
  shell [options] [--shell] [<file>...]
  d8 [options] [-e <string>] [--shell] [[--module|--web-snapshot] <file>...]

  -e             execute a string in V8
  --shell        run an interactive JavaScript shell
  --module       execute a file as a JavaScript module
  --web-snapshot execute a file as a web snapshot

SSE3=1 SSSE3=1 SSE4_1=1 SSE4_2=1 SAHF=1 AVX=1 AVX2=1 FMA3=1 BMI1=1 BMI2=1 LZCNT=1 POPCNT=1 ATOM=0

The following syntax for options is accepted (both '-' and '--' are ok):
  --flag        (bool flags only)
  --no-flag     (bool flags only)
  --flag=value  (non-bool flags only, no spaces around '=')
  --flag value  (non-bool flags only)
  --            (captures all remaining args in JavaScript)

Options:
  # Print bytecode generated by the Ignition interpreter
  --print-bytecode (print bytecode generated by ignition interpreter)
        type: bool  default: --noprint-bytecode
  # Trace optimized compilation
  --trace-opt (trace optimized compilation)
        type: bool  default: --notrace-opt
  --trace-opt-verbose (extra verbose optimized compilation tracing)
        type: bool  default: --notrace-opt-verbose
  --trace-opt-stats (trace optimized compilation statistics)
        type: bool  default: --notrace-opt-stats
  # Trace deoptimization
  --trace-deopt (trace deoptimization)
        type: bool  default: --notrace-deopt
  --log-deopt (log deoptimization)
        type: bool  default: --nolog-deopt
  --trace-deopt-verbose (extra verbose deoptimization tracing)
        type: bool  default: --notrace-deopt-verbose
  --print-deopt-stress (print number of possible deopt points)
  # Print the source AST
  --print-ast (print source AST)
        type: bool  default: --noprint-ast
  # Inspect generated code
  --print-code (print generated code)
        type: bool  default: --noprint-code
  --print-opt-code (print optimized code)
        type: bool  default: --noprint-opt-code
  # Allow the native API syntax provided by V8 in source
  --allow-natives-syntax (allow natives syntax)
        type: bool  default: --noallow-natives-syntax
```

Generating the AST

We'll write an index.js file containing a simple add function:

```javascript
function add(x, y) {
    return x + y
}

console.log(add(1, 2));
```

Use the --print-ast argument to print the AST information for the add function:

```shell
v8-debug --print-ast ./index.js
```

```
[generating bytecode for function: ]
--- AST ---
FUNC at 0
. KIND 0
. LITERAL ID 0
. SUSPEND COUNT 0
. NAME ""
. INFERRED NAME ""
. DECLS
. . FUNCTION "add" = function add
. EXPRESSION STATEMENT at 41
. . ASSIGN at -1
. . . VAR PROXY local[0] (0x7fb8c080e630) (mode = TEMPORARY, assigned = true) ".result"
. . . CALL
. . . . PROPERTY at 49
. . . . . VAR PROXY unallocated (0x7fb8c080e6f0) (mode = DYNAMIC_GLOBAL, assigned = false) "console"
. . . . . NAME log
. . . . CALL
. . . . . VAR PROXY unallocated (0x7fb8c080e470) (mode = VAR, assigned = true) "add"
. . . . . LITERAL 1
. . . . . LITERAL 2
. RETURN at -1
. . VAR PROXY local[0] (0x7fb8c080e630) (mode = TEMPORARY, assigned = true) ".result"

[generating bytecode for function: add]
--- AST ---
FUNC at 12
. KIND 0
. LITERAL ID 1
. SUSPEND COUNT 0
. NAME "add"
. PARAMS
. . VAR (0x7fb8c080e4d8) (mode = VAR, assigned = false) "x"
. . VAR (0x7fb8c080e580) (mode = VAR, assigned = false) "y"
. DECLS
. . VARIABLE (0x7fb8c080e4d8) (mode = VAR, assigned = false) "x"
. . VARIABLE (0x7fb8c080e580) (mode = VAR, assigned = false) "y"
. RETURN at 25
. . ADD at 34
. . . VAR PROXY parameter[0] (0x7fb8c080e4d8) (mode = VAR, assigned = false) "x"
. . . VAR PROXY parameter[1] (0x7fb8c080e580) (mode = VAR, assigned = false) "y"
```

The generated AST tree can be depicted graphically:

During the actual analysis phase, each VAR PROXY node is connected to the VAR node at the corresponding address.

Generating bytecode

Ignition's BytecodeGenerator turns the AST into bytecode. We can print the bytecode information with the --print-bytecode argument:

```shell
v8-debug --print-bytecode ./index.js
```

```
[generated bytecode for function: (0x3ab2082933f5 <SharedFunctionInfo>)]
Bytecode length: 43
Parameter count 1
Register count 6
Frame size 48
OSR nesting level: 0
Bytecode age: 0
0x3ab2082934be @    0 : 13 00             LdaConstant [0]
0x3ab2082934c0 @    2 : c3                Star1
0x3ab2082934c1 @    3 : 19 fe f8          Mov <closure>, r2
0x3ab2082934c4 @    6 : 65 52 01 f9 02    CallRuntime [DeclareGlobals], r1-r2
0x3ab2082934c9 @   11 : 21 01 00          LdaGlobal [1], [0]
0x3ab2082934cc @   14 : c2                Star2
0x3ab2082934cd @   15 : 2d f8 02 02       LdaNamedProperty r2, [2], [2]
0x3ab2082934d1 @   19 : c3                Star1
0x3ab2082934d2 @   20 : 21 03 04          LdaGlobal [3], [4]
0x3ab2082934d5 @   23 : c1                Star3
0x3ab2082934d6 @   24 : 0d 01             LdaSmi [1]
0x3ab2082934d8 @   26 : c0                Star4
0x3ab2082934d9 @   27 : 0d 02             LdaSmi [2]
0x3ab2082934db @   29 : bf                Star5
0x3ab2082934dc @   30 : 63 f7 f6 f5 06    CallUndefinedReceiver2 r3, r4, r5, [6]
0x3ab2082934e1 @   35 : c1                Star3
0x3ab2082934e2 @   36 : 5e f9 f8 f7 08    CallProperty1 r1, r2, r3, [8]
0x3ab2082934e7 @   41 : c4                Star0
0x3ab2082934e8 @   42 : a9                Return
Constant pool (size = 4)
0x3ab208293485: [FixedArray] in OldSpace
 - map: 0x3ab208002205 <Map>
 - length: 4
           0: 0x3ab20829343d <FixedArray[2]>
           1: 0x3ab208202741 <String[7]: #console>
           2: 0x3ab20820278d <String[3]: #log>
           3: 0x3ab208003f09 <String[3]: #add>
Handler Table (size = 0)
Source Position Table (size = 0)

[generated bytecode for function: add (0x3ab20829344d <SharedFunctionInfo add>)]
Bytecode length: 6
// Takes 3 parameters: the implicit this plus the explicit x and y
Parameter count 3
// No registers are used, so the frame size is 0
Register count 0
Frame size 0
OSR nesting level: 0
Bytecode age: 0
0x3ab2082935f6 @    0 : 0b 04             Ldar a1
0x3ab2082935f8 @    2 : 39 03 00          Add a0, [0]
0x3ab2082935fb @    5 : a9                Return
Constant pool (size = 0)
Handler Table (size = 0)
Source Position Table (size = 0)
```

The add function consists of the following three bytecodes:

```
// Load the value of register a1 into the accumulator
Ldar a1
// Add register a0 to the accumulator; [0] points to the Feedback Vector slot,
// where Ignition collects value profiling information for later TurboFan optimization
Add a0, [0]
// Transfer control back to the caller, returning the value in the accumulator
Return
```

Ignition executes these bytecodes on a register machine with a dedicated accumulator; its bytecodes are one-address instructions.

Note: for more information about bytecode, see Understanding V8's Bytecode. The register architecture will be explained in detail in a future article in this series.
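The accumulator machine above can be mimicked in a few lines. This is a hypothetical mini-simulation invented for illustration: the opcode names mirror V8's, but the machine itself is not V8's implementation.

```javascript
// A toy accumulator machine: one implicit accumulator plus named
// parameter registers (a0, a1). Each bytecode is [opcode, operand].
function run(bytecodes, registers) {
  let acc; // the accumulator register
  for (const [op, operand] of bytecodes) {
    switch (op) {
      case 'Ldar':   // load register into accumulator
        acc = registers[operand];
        break;
      case 'Add':    // accumulator = register + accumulator
        acc = registers[operand] + acc;
        break;
      case 'Return': // hand the accumulator value back to the caller
        return acc;
      default:
        throw new Error(`unknown bytecode: ${op}`);
    }
  }
}

// The three bytecodes of `add`, run with x = 1 (a0) and y = 2 (a1):
const result = run(
  [['Ldar', 'a1'], ['Add', 'a0'], ['Return']],
  { a0: 1, a1: 2 }
);
console.log(result); // 3
```

Because one operand is always the implicit accumulator, each instruction only needs to name a single register, which keeps the bytecode compact.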

Optimization and de-optimization

JavaScript is a weakly typed language. Unlike a strongly typed language, it does not require the parameter types of a function call to be declared; arguments of any type can be passed in, very flexibly, as shown below:

```javascript
function add(x, y) { return x + y }

add(1, 2);
add('1', 2);
add(null, 2);
add(undefined, 2);
add([], 2);
add({}, 2);
add([], {});
```

For the + operator to work, the engine has to invoke a number of low-level operations during execution, such as ToPrimitive (to determine whether an operand is an object), ToString, and ToNumber, converting the arguments into values the + operator can combine.
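Those conversions explain what each of the calls above actually produces. Running them shows the results of ToPrimitive followed by ToString or ToNumber:

```javascript
function add(x, y) { return x + y }

console.log(add(1, 2));         // 3                  (number + number)
console.log(add('1', 2));       // "12"               (2 is converted with ToString; string concatenation)
console.log(add(null, 2));      // 2                  (ToNumber(null) is 0)
console.log(add(undefined, 2)); // NaN                (ToNumber(undefined) is NaN)
console.log(add([], 2));        // "2"                (ToPrimitive([]) is "", then concatenation)
console.log(add({}, 2));        // "[object Object]2" (ToPrimitive({}) is "[object Object]")
console.log(add([], {}));       // "[object Object]"  ("" + "[object Object]")
```

Every distinct combination of argument types can take a different conversion path, which is exactly why the engine cannot emit one simple machine instruction for + without making assumptions.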

This is where V8 makes assumptions about the x and y parameters, the way a strongly typed language would, so that branches with side effects can be excluded from the running code. It also predicts that the code will not throw exceptions, allowing it to optimize for maximum performance. In Ignition, the bytecode's Feedback Vector is collected as follows:

To view the add function's runtime feedback, we can print its runtime information through the Native API provided by V8, as shown below:

```javascript
function add(x, y) { return x + y }

// Feedback vectors are allocated lazily; see "A lighter V8":
// https://v8.dev/blog/v8-lite
%EnsureFeedbackVectorForFunction(add);
add(1, 2);
%DebugPrint(add);
```

With the --allow-natives-syntax parameter, you can call the %DebugPrint low-level Native API from JavaScript (see the runtime.h header for more APIs):

```shell
v8-debug --allow-natives-syntax ./index.js
```

```
DebugPrint: 0x1d22082935b9: [Function] in OldSpace
 - map: 0x1d22082c2281 <Map(HOLEY_ELEMENTS)> [FastProperties]
 - prototype: 0x1d2208283b79 <JSFunction (sfi = 0x1d220820abbd)>
 - elements: 0x1d220800222d <FixedArray[0]> [HOLEY_ELEMENTS]
 - function prototype:
 - initial_map:
 - shared_info: 0x1d2208293491 <SharedFunctionInfo add>
 - name: 0x1d2208003f09 <String[3]: #add>
 // Holds the trampoline pointer into the Ignition interpreter
 - builtin: InterpreterEntryTrampoline
 - formal_parameter_count: 2
 - kind: NormalFunction
 - context: 0x1d2208283649 <NativeContext[263]>
 - code: 0x1d2200005181 <Code BUILTIN InterpreterEntryTrampoline>
 - interpreted
 - bytecode: 0x1d2208293649 <BytecodeArray[6]>
 - source code: (x, y) { return x + y }
 - properties: 0x1d220800222d <FixedArray[0]>
 - All own properties (excluding elements): {
    0x1d2208004bb5: [String] in ReadOnlySpace: #length: 0x1d2208204431 <AccessorInfo> (const accessor descriptor), location: descriptor
    0x1d2208004dfd: [String] in ReadOnlySpace: #name: 0x1d22082043ed <AccessorInfo> (const accessor descriptor), location: descriptor
    0x1d2208003fad: [String] in ReadOnlySpace: #arguments: 0x1d2208204365 <AccessorInfo> (const accessor descriptor), location: descriptor
    0x1d22080041f1: [String] in ReadOnlySpace: #caller: 0x1d22082043a9 <AccessorInfo> (const accessor descriptor), location: descriptor
    0x1d22080050b1: [String] in ReadOnlySpace: #prototype: 0x1d2208204475 <AccessorInfo> (const accessor descriptor), location: descriptor
 }
0x1d2208293691: [FeedbackVector] in OldSpace
 - map: 0x1d2208002711 <Map>
 - length: 1
 - shared function info: 0x1d2208293491 <SharedFunctionInfo add>
 - no optimized code
 - optimization marker: OptimizationMarker::kNone
 - optimization tier: OptimizationTier::kNone
 - invocation count: 0
 - profiler ticks: 0
 - closure feedback cell array: 0x1d22080032b5: [ClosureFeedbackCellArray] in ReadOnlySpace
   - map: 0x1d2208002955 <Map>
   - length: 0
 - slot #0 BinaryOp BinaryOp:None {
     [0]: 0
   }
0x1d22082c2281: [Map]
 - type: JS_FUNCTION_TYPE
 - instance size: 32
 - inobject properties: 0
 - elements kind: HOLEY_ELEMENTS
 - unused property fields: 0
 - enum length: invalid
 - stable_map
 - callable
 - constructor
 - has_prototype_slot
 - back pointer: 0x1d22080023b5 <undefined>
 - prototype_validity cell: 0x1d22082044fd <Cell value= 1>
 - instance descriptors (own) #5: 0x1d2208283c29 <DescriptorArray[5]>
 - prototype: 0x1d2208283b79 <JSFunction (sfi = 0x1d220820abbd)>
 - constructor: 0x1d2208283bf5 <JSFunction Function (sfi = 0x1d220820acb9)>
 - dependent code: 0x1d22080021b9 <Other heap object (WEAK_FIXED_ARRAY_TYPE)>
 - construction counter: 0
```

Tips: the SharedFunctionInfo (SFI) shown here keeps an InterpreterEntryTrampoline pointer. Every function has such a pointer into the Ignition interpreter; whenever V8 needs to de-optimize, it uses this pointer to pull execution back to the corresponding position in the interpreter.

To get the add function optimized the way HotSpot code is, we force function optimization here:

```javascript
function add(x, y) { return x + y }

add(1, 2);
// Force optimization of the function
%OptimizeFunctionOnNextCall(add);
%EnsureFeedbackVectorForFunction(add);
add(1, 2);
%DebugPrint(add);
```

The --trace-opt argument traces the compilation optimizations applied to the add function:

```shell
v8-debug --allow-natives-syntax --trace-opt ./index.js
```

```
[manually marking 0x3872082935bd <JSFunction add (sfi = 0x3872082934b9)> for non-concurrent optimization]
[compiling method 0x3872082935bd <JSFunction add (sfi = 0x3872082934b9)> (target TURBOFAN) using TurboFan]
[optimizing 0x3872082935bd <JSFunction add (sfi = 0x3872082934b9)> (target TURBOFAN) - took 0.097, 2.003, 0.273 ms]
DebugPrint: 0x3872082935bd: [Function] in OldSpace
 - map: 0x3872082c2281 <Map(HOLEY_ELEMENTS)> [FastProperties]
 - prototype: 0x387208283b79 <JSFunction (sfi = 0x38720820abbd)>
 - elements: 0x38720800222d <FixedArray[0]> [HOLEY_ELEMENTS]
 - function prototype:
 - initial_map:
 - shared_info: 0x3872082934b9 <SharedFunctionInfo add>
 - name: 0x387208003f09 <String[3]: #add>
 - formal_parameter_count: 2
 - kind: NormalFunction
 - context: 0x387208283649 <NativeContext[263]>
 - code: 0x387200044001 <Code TURBOFAN>
 - source code: (x, y) { return x + y }
 - properties: 0x38720800222d <FixedArray[0]>
 - All own properties (excluding elements): {
    0x387208004bb5: [String] in ReadOnlySpace: #length: 0x387208204431 <AccessorInfo> (const accessor descriptor), location: descriptor
    0x387208004dfd: [String] in ReadOnlySpace: #name: 0x3872082043ed <AccessorInfo> (const accessor descriptor), location: descriptor
    0x387208003fad: [String] in ReadOnlySpace: #arguments: 0x387208204365 <AccessorInfo> (const accessor descriptor), location: descriptor
    0x3872080041f1: [String] in ReadOnlySpace: #caller: 0x3872082043a9 <AccessorInfo> (const accessor descriptor), location: descriptor
    0x3872080050b1: [String] in ReadOnlySpace: #prototype: 0x387208204475 <AccessorInfo> (const accessor descriptor), location: descriptor
 }
 - feedback vector: 0x387208293685: [FeedbackVector] in OldSpace
 - map: 0x387208002711 <Map>
 - length: 1
 - shared function info: 0x3872082934b9 <SharedFunctionInfo add>
 - no optimized code
 - optimization marker: OptimizationMarker::kNone
 - optimization tier: OptimizationTier::kNone
 // The invocation count has increased to 1
 - invocation count: 1
 - profiler ticks: 0
 - closure feedback cell array: 0x3872080032b5: [ClosureFeedbackCellArray] in ReadOnlySpace
   - map: 0x387208002955 <Map>
   - length: 0
 - slot #0 BinaryOp BinaryOp:SignedSmall {
     [0]: 1
   }
0x3872082c2281: [Map]
 - type: JS_FUNCTION_TYPE
 - instance size: 32
 - inobject properties: 0
 - elements kind: HOLEY_ELEMENTS
 - unused property fields: 0
 - enum length: invalid
 - stable_map
 - callable
 - constructor
 - has_prototype_slot
 - back pointer: 0x3872080023b5 <undefined>
 - prototype_validity cell: 0x3872082044fd <Cell value= 1>
 - instance descriptors (own) #5: 0x387208283c29 <DescriptorArray[5]>
 - prototype: 0x387208283b79 <JSFunction (sfi = 0x38720820abbd)>
 - constructor: 0x387208283bf5 <JSFunction Function (sfi = 0x38720820acb9)>
 - dependent code: 0x3872080021b9 <Other heap object (WEAK_FIXED_ARRAY_TYPE)>
 - construction counter: 0
```

Note that V8 automatically detects structural changes in the code and de-optimizes. For example:

```javascript
function add(x, y) { return x + y }

%EnsureFeedbackVectorForFunction(add);
add(1, 2);
%OptimizeFunctionOnNextCall(add);
add(1, 2);
// The argument type changes from number to string, triggering de-optimization
add(1, '2');
%DebugPrint(add);
```

We can trace the add function's de-optimization information with the --trace-deopt parameter:

```shell
v8-debug --allow-natives-syntax --trace-deopt ./index.js
```

```
// De-optimization happens; reason: not a Smi
[bailout (kind: deopt-eager, reason: not a Smi): begin. deoptimizing 0x08f70829363d <JSFunction add (sfi = 0x8f7082934c9)>, opt id 0, node id 58, bytecode offset 2, deopt exit 1, FP to SP delta 32, caller SP 0x7ffee9ce7d70, pc 0x08f700044162]
DebugPrint: 0x8f70829363d: [Function] in OldSpace
 - map: 0x08f7082c2281 <Map(HOLEY_ELEMENTS)> [FastProperties]
 - prototype: 0x08f708283b79 <JSFunction (sfi = 0x8f70820abbd)>
 - elements: 0x08f70800222d <FixedArray[0]> [HOLEY_ELEMENTS]
 - function prototype:
 - initial_map:
 - shared_info: 0x08f7082934c9 <SharedFunctionInfo add>
 - name: 0x08f708003f09 <String[3]: #add>
 - formal_parameter_count: 2
 - kind: NormalFunction
 - context: 0x08f708283649 <NativeContext[263]>
 - code: 0x08f700044001 <Code TURBOFAN>
 - interpreted
 - bytecode: 0x08f7082936cd <BytecodeArray[6]>
 - source code: (x, y) { return x + y }
 - properties: 0x08f70800222d <FixedArray[0]>
 - All own properties (excluding elements): {
    0x8f708004bb5: [String] in ReadOnlySpace: #length: 0x08f708204431 <AccessorInfo> (const accessor descriptor), location: descriptor
    0x8f708004dfd: [String] in ReadOnlySpace: #name: 0x08f7082043ed <AccessorInfo> (const accessor descriptor), location: descriptor
    0x8f708003fad: [String] in ReadOnlySpace: #arguments: 0x08f708204365 <AccessorInfo> (const accessor descriptor), location: descriptor
    0x8f7080041f1: [String] in ReadOnlySpace: #caller: 0x08f7082043a9 <AccessorInfo> (const accessor descriptor), location: descriptor
    0x8f7080050b1: [String] in ReadOnlySpace: #prototype: 0x08f708204475 <AccessorInfo> (const accessor descriptor), location: descriptor
 }
 - feedback vector: 0x8f708293715: [FeedbackVector] in OldSpace
 - map: 0x08f708002711 <Map>
 - length: 1
 - shared function info: 0x08f7082934c9 <SharedFunctionInfo add>
 - no optimized code
 - optimization marker: OptimizationMarker::kNone
 - optimization tier: OptimizationTier::kNone
 - invocation count: 1
 - profiler ticks: 0
 - closure feedback cell array: 0x8f7080032b5: [ClosureFeedbackCellArray] in ReadOnlySpace
   - map: 0x08f708002955 <Map>
   - length: 0
 - slot #0 BinaryOp BinaryOp:Any {
     [0]: 127
   }
0x8f7082c2281: [Map]
 - type: JS_FUNCTION_TYPE
 - instance size: 32
 - inobject properties: 0
 - elements kind: HOLEY_ELEMENTS
 - unused property fields: 0
 - enum length: invalid
 - stable_map
 - callable
 - constructor
 - has_prototype_slot
 - back pointer: 0x08f7080023b5 <undefined>
 - prototype_validity cell: 0x08f7082044fd <Cell value= 1>
 - instance descriptors (own) #5: 0x08f708283c29 <DescriptorArray[5]>
 - prototype: 0x08f708283b79 <JSFunction (sfi = 0x8f70820abbd)>
 - constructor: 0x08f708283bf5 <JSFunction Function (sfi = 0x8f70820acb9)>
 - dependent code: 0x08f7080021b9 <Other heap object (WEAK_FIXED_ARRAY_TYPE)>
 - construction counter: 0
```

It is important to note that de-optimization incurs a performance cost. It is therefore a good idea to use TypeScript type declarations, which help keep argument types stable and improve the performance of your code.
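One practical way to act on that advice, even in plain JavaScript, is to keep hot call sites monomorphic: normalize argument types before the hot function rather than letting types change inside it. The helper below is a sketch of that idea; the claim about BinaryOp feedback reflects the SignedSmall-to-Any transition seen in the traces above.

```javascript
// The hot function only ever sees numbers, so its type feedback stays
// stable (SignedSmall in the traces above) and any optimized code for
// it is not thrown away by a surprise string argument.
function add(x, y) { return x + y }

// Loosely typed inputs are normalized *before* the hot call instead
// of being allowed to change the types seen inside `add`.
function addLoose(x, y) {
  return add(Number(x), Number(y));
}

console.log(addLoose('1', 2)); // 3 -- add itself still only sees numbers
console.log(addLoose(3, 4));   // 7
```

TypeScript enforces this discipline at compile time; the runtime pattern above is what that discipline looks like to the engine.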

Conclusion

This article's look at V8 stays at the level of intuition and does not dig into V8's underlying source code. It should give you a feel for how V8 works, and it also recommends TypeScript, which to some extent genuinely guides JavaScript code toward better performance. You can follow our team account; we will publish a series of more detailed JavaScript articles in the future.