DartVM introduction

Original article: mrale.ph/dartvm/

The purpose of this article

This article is intended as a reference for new members of the Dart VM team, potential external contributors, or anyone interested in the VM's internals. It provides a high-level overview of the Dart VM together with more detailed descriptions of various internal components.

The Dart VM is a collection of components for executing Dart code in a native environment. Its main components include:

  • Runtime system
    • Object model
    • Garbage collection
    • Snapshots
  • Native methods for the core libraries
  • Development-experience components such as debugging, profiling, and hot reload, exposed through the service protocol
  • JIT and AOT compilation pipelines
  • An interpreter
  • ARM simulators

“Dart VM” is, in a sense, a historical name. The Dart VM does provide a runtime environment for a high-level programming language, but Dart is not always interpreted or JIT-compiled when running on it. For example, code compiled through the AOT pipeline is compiled to machine code and then executed inside a stripped-down version of the Dart VM, called the precompiled runtime, which does not contain any compiler components and cannot compile Dart source code dynamically.


The Dart runtime is unlike traditional VMs in that it comes in different configurations, with different trade-offs and goals for different environments; in its precompiled configuration, the Dart VM executes machine code directly.

How does DartVM run your code?

The Dart VM has a number of ways to run code, such as:

  • Running source code or kernel binaries in JIT mode
  • Running from snapshots (AppAOT and AppJIT)

The main difference is when and how the VM converts Dart source into executable code, but the runtime system is always involved in execution.

In the Dart VM, all code runs inside an isolate: an isolated Dart universe with its own memory (heap) and its own thread of control (the mutator thread). Dart code can execute on many isolates concurrently, but isolates cannot share any state directly with each other; they can only communicate by passing messages through ports (not ports in the networking sense).

The relationship between system threads and isolates is a bit murky and highly dependent on how the Dart VM is embedded into an application. Only two things are guaranteed:

  • A system thread can enter only one isolate at a time. If it wants to enter another isolate, it must leave the current one first
  • Only one mutator thread can be associated with an isolate at a time. The mutator thread is the thread that executes Dart code and uses the VM's public C API

Thus, the same native thread can enter one isolate, execute Dart code, then leave that isolate and enter another one. Alternatively, many different native threads can enter the same isolate and execute Dart code in it, just not simultaneously.
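For example, the following minimal sketch (using only dart:isolate, nothing VM-specific) spawns a second isolate that shares no state with the first and communicates purely through ports:

// A minimal sketch of message passing between isolates.
import 'dart:isolate';

// Runs in a separate isolate with its own heap; the only link back to the
// spawning isolate is the SendPort it receives as an argument.
void worker(SendPort replyTo) {
  replyTo.send('Hello from another isolate!');
}

Future<void> main() async {
  final receivePort = ReceivePort();
  await Isolate.spawn(worker, receivePort.sendPort);
  print(await receivePort.first); // Hello from another isolate!
}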

In addition to the mutator thread it is associated with, an isolate is also associated with several helper threads, such as:

  • A background JIT compiler thread
  • GC sweeper threads
  • Concurrent GC marker threads


A Dart VM may have only one thread executing Dart code, reused across isolates, or it may have several. This resembles the thread-pool tuning we do in our own applications, where we look for the most cost-effective configuration. A later analysis of how Flutter uses the Dart VM could show how a mature framework chooses the number of threads for optimal performance.

Internally, the VM uses dart::ThreadPool to manage native threads, and work is described in terms of dart::ThreadPool::Task rather than in terms of native threads.


My understanding is that the designers of the Dart VM introduced the Task concept to abstract away the heavyweight notion of a thread. Developers only need to describe a task and post it; behind the scenes, an efficient thread-reuse mechanism maximizes runtime performance. This lets developers focus on the task itself rather than on creating, reusing, or recycling native threads, which are performance-sensitive operations.

For example, instead of spawning a dedicated thread to perform background sweeping after a GC, the VM posts a dart::ConcurrentSweeperTask to the VM's global thread pool, and the thread pool decides whether to run it on an idle thread or to spawn a new one because no threads are available. Similarly, the default implementation of an isolate's message-handling event loop does not create a dedicated thread when a new message arrives; it posts a dart::MessageHandlerTask to the thread pool instead.
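As a rough analogy in Dart (the real implementation is C++: dart::ThreadPool and dart::ThreadPool::Task; all names in this sketch are illustrative), the task-posting pattern looks like this:

// A toy analogy of the VM's task-based pattern: callers describe *what*
// should run; the pool decides *where* it runs.
abstract class Task {
  void run();
}

class ConcurrentSweeperTask implements Task {
  @override
  void run() => print('sweeping unreachable memory in the background...');
}

class ToyThreadPool {
  // A real pool would dispatch to an idle worker thread or spawn a new one
  // when none is available; this sketch simply runs tasks inline.
  void post(Task task) => task.run();
}

void main() {
  ToyThreadPool().post(ConcurrentSweeperTask());
}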

Introduction to the source code

  • dart::Isolate represents an isolate
  • dart::Heap represents an isolate's heap
  • dart::Thread describes the state associated with a thread attached to an isolate. These are not native threads: all native threads attached to the same isolate as a mutator reuse the same Thread instance. You can see this in the isolate's default message-handling code: Dart_RunLoop and dart::MessageHandler

Running source code through the JIT

This section explains what happens when you execute Dart from the command line:

// hello.dart
main() => print('Hello, World!');

$ dart hello.dart
Hello, World!

Since Dart 2, the VM no longer supports executing raw Dart source directly; instead, it executes kernel binaries (DILL files) containing a serialized kernel abstract syntax tree. The task of compiling Dart source into a kernel AST is handled by the common front-end (CFE), a tool written in Dart and shared across different Dart tools (the VM, dart2js, the Dart Dev Compiler).

To preserve the convenience of running directly from source, the dart command-line tool hosts an auxiliary isolate called Kernel Service, which compiles Dart source into a kernel binary that the VM can then execute.

This is not the only way to combine the CFE and the VM to run Dart code. For example, Flutter completely separates compilation (producing the kernel) from execution (running the kernel) and puts them on different devices:

  • Compilation happens on the developer's machine (the host)
  • The flutter command-line tool pushes the kernel binary to the target mobile device for execution

Of course, Flutter can also package kernel binaries into the app itself, not just push them through the command-line tool.

Note that the flutter tool does not handle the parsing of Dart itself. Instead, it spawns another resident process, frontend_server, which is essentially a thin wrapper around the CFE plus some Flutter-specific kernel-to-kernel transformations. It compiles Dart source into kernel files, which the flutter tool then pushes to the device. The process stays resident so that when the developer requests a hot reload, it can reuse the CFE's compilation state from the previous run and recompile only the libraries that actually changed, speeding up compilation.

Once the kernel binary is loaded into the VM, it is parsed to create objects representing the various program entities. This happens lazily: at first, only basic information about libraries and classes is loaded. Each entity keeps a pointer back into the kernel binary so that more information can be loaded on demand.

Whenever we refer to objects allocated internally by the VM, we prefix them with Untagged. This follows the VM's own naming convention: the layout of internal VM objects is defined by C++ classes whose names start with Untagged in the runtime/vm/raw_object.h header. For example, dart::UntaggedClass is the VM object representing a Dart class, dart::UntaggedField represents a field of a class, and so on. In the diagrams, however, the Untagged prefix is omitted for brevity.

Information about a class is fully deserialized only when the runtime actually needs it later (for example, to look up a class member or to allocate an instance). At that point, the class's members are read from the kernel binary, but full method bodies are not yet deserialized, only their signatures.

At this point, the runtime has loaded enough information from the kernel binary to start interpreting and executing methods; for example, it can look up the main function in a library and execute it.

Introduction to the source code

  • package:kernel/ast.dart defines the classes that describe the kernel AST. package:front_end handles parsing Dart source code and building a kernel AST from it

  • dart::kernel::KernelLoader::LoadEntireProgram is the entry point for deserializing a kernel AST into the corresponding VM objects

  • The Kernel Service isolate is implemented in Dart, and runtime/vm/kernel_isolate.cc acts as glue connecting this Dart implementation to the rest of the VM


The Kernel Service has two functions:

  1. It compiles kernel files itself
  2. It can also be used in JIT mode as a helper to improve JIT performance

  • package:vm contains most of the VM-specific kernel transformations (kernel-to-kernel transforms), although some transformations still live under package:kernel for historical reasons. For example, the desugaring of async, async*, and sync* is implemented in package:kernel/transformations/continuation.dart

Trying it out

If you are interested in the kernel format and how the VM uses it, you can use pkg/vm/bin/gen_kernel.dart to produce a kernel binary from Dart source. The resulting binary can then be dumped in textual form with pkg/vm/bin/dump_kernel.dart:

# Compile hello.dart into a kernel binary hello.dill using the CFE
$ dart pkg/vm/bin/gen_kernel.dart \
    --platform out/ReleaseX64/vm_platform_strong.dill \
    -o hello.dill \
    hello.dart

# Dump the kernel AST in textual form
$ dart pkg/vm/bin/dump_kernel.dart hello.dill hello.kernel.txt

gen_kernel.dart takes a --platform option, which points to a kernel binary containing the AST of the core libraries. If you have a Dart SDK build environment configured, you can simply point it at the out/ReleaseX64/vm_platform_strong.dill file in the out directory. Otherwise, you can generate one with pkg/front_end/tool/_fasta/compile_platform.dart:

# Generate the platform and outline files for a given list of core libraries
$ dart pkg/front_end/tool/_fasta/compile_platform.dart \
    dart:core \
    sdk/lib/libraries.json \
    vm_outline.dill vm_platform.dill vm_outline.dill

Initially, every function's code field points not to an actual executable body but to a placeholder called LazyCompileStub, which asks the runtime system to generate executable code for the current function and then calls the newly generated code.

In practice, not every function has an actual Dart/Kernel AST body, for example native functions defined in C++ or synthetic functions generated by the Dart VM itself; for those, the IL (intermediate language) is created out of thin air rather than built from an AST.
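The stub's behavior can be pictured with a small Dart sketch (all names here are illustrative; the real stub is a piece of machine code inside the VM):

// Toy model of lazy compilation: a function starts out pointing at a
// "compile me" placeholder that is replaced by real code on the first call.
typedef Code = int Function(int);

class LazilyCompiledFunction {
  Code? _code; // null means "still pointing at the lazy-compile stub"
  final Code Function() _compile;
  LazilyCompiledFunction(this._compile);

  int call(int arg) {
    // LazyCompileStub analogue: generate code on the first invocation...
    _code ??= _compile();
    // ...then execute the freshly generated code.
    return _code!(arg);
  }
}

void main() {
  final square = LazilyCompiledFunction(() {
    print('compiling square...'); // happens only once
    return (x) => x * x;
  });
  print(square(3)); // prints "compiling square..." and then 9
  print(square(4)); // prints 16; no recompilation
}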

When a function is compiled for the first time, it is handled by the “unoptimized compiler”:

The “unoptimized compiler” generates machine code in two passes:

  1. The serialized AST of the function body is walked to build a control flow graph (CFG). The CFG consists of basic blocks filled with IL instructions. The IL instructions used at this stage resemble those of a stack-based virtual machine: they pop operands from the stack and push results back onto it
  2. The CFG is then compiled directly to machine code using one-to-many lowering of IL: each IL instruction expands into several machine instructions

No optimizations are performed at this stage; the main goal of the “unoptimized compiler” is to produce executable code as quickly as possible.

This also means the “unoptimized compiler” does not attempt to statically resolve any calls that were not already resolved in the kernel binary, so AST call nodes such as MethodInvocation or PropertyGet are compiled as if they were completely dynamic. The VM currently does not use virtual-table or interface-table based dispatch; instead, it implements dynamic calls with inline caching.

From Wikipedia:

Call site

  • The location (which line of code) at which a method is called
  • The point at which zero or more arguments are passed to the method, and zero or more return values are received

In short, a call site identifies the location of a particular method invocation in the code.
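For instance, each invocation in the snippet below is a distinct call site:

void main() {
  final animals = ['cat', 'dog'];
  print(animals.length);  // one call site of print
  animals.add('fish');    // a call site of List.add
  print(animals.first);   // another, distinct call site of print
}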

Inline caching

Inline caching was first introduced for Smalltalk and is an optimization technique used by many language runtimes. Its goal is to speed up runtime method binding by remembering the result of the previous method lookup at each call site. This scheme is especially effective for dynamically typed languages, where most method binding happens at run time and vtables cannot be used

The core idea of inline caching is to cache the results of method resolution per call site. The VM's implementation of this mechanism consists of:

  • dart::UntaggedICData objects, which act as call-site-specific caches. Each maps a receiver's class to the method that should be invoked when the receiver has that class. The cache also stores auxiliary information, such as invocation frequency counters that track how often a given class was seen at this call site
  • A shared lookup stub, which implements the fast path of a method invocation. The stub searches the given cache for an entry matching the receiver's class. If it finds one, it increments the entry's frequency counter and tail-calls the cached method. Otherwise, it invokes a runtime helper that performs the method resolution logic; if resolution succeeds, the cache is updated so that subsequent invocations do not need to enter the runtime system

The figure above illustrates the structure and state of the inline cache associated with the animal.toFace() call site after it has been invoked once with an instance of Dog and once with an instance of Cat.
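Conceptually, a linear inline cache behaves roughly like the following Dart sketch (ICEntry, InlineCache, and resolve are illustrative names, not the VM's actual data structures):

// Toy model of a call-site inline cache: it maps receiver class -> target
// method and keeps a frequency counter per entry.
class ICEntry {
  final Type receiverClass;
  final Function target;
  int count = 0;
  ICEntry(this.receiverClass, this.target);
}

class InlineCache {
  final List<ICEntry> entries = [];

  // `resolve` stands in for the runtime's method-resolution helper.
  Function lookup(Object receiver, Function Function(Object) resolve) {
    for (final entry in entries) {
      if (receiver.runtimeType == entry.receiverClass) {
        entry.count++;          // hit: bump the counter, return the target
        return entry.target;
      }
    }
    final target = resolve(receiver); // miss: enter the "runtime system"
    entries.add(ICEntry(receiver.runtimeType, target));
    return target;
  }
}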

The “unoptimized compiler” alone is perfectly capable of executing any Dart code, but the code it produces is rather slow. This is why the VM also implements an adaptive optimizing compilation pipeline. The idea of adaptive optimization is to use execution profiles of the running program to drive optimization decisions.

The following information is collected during the execution of unoptimized code:

  • Inline caches collect the receiver classes observed at each call site
  • Code hotness is tracked through execution counters on functions and on basic blocks within functions

When a function's execution counter reaches a certain threshold, the function is submitted to a background “optimizing compiler” for optimization.

Optimized compilation starts the same way as unoptimized compilation:

The serialized kernel AST is walked to build unoptimized IL for the function being optimized. Instead of lowering that IL directly to machine code, however, the optimizing compiler translates the unoptimized IL into static single assignment (SSA) form. The SSA-form IL is then speculatively specialized based on the collected type feedback and run through a sequence of classical and Dart-specific optimizations, such as inlining, range analysis, type propagation, representation selection, store-to-load and load-to-load forwarding, global value numbering, allocation sinking, and more. (Translator: I am not deeply familiar with some of these compiler terms; corrections from knowledgeable readers are welcome.) Finally, the optimized IL is lowered to machine code using a linear scan register allocator and a simple one-to-many lowering of IL instructions.

Once background compilation finishes, the compiler requests that the mutator thread enter a safepoint and then attaches the optimized code to the function.

About safepoints

A thread in a managed environment (the virtual machine) is considered to be at a safepoint when the state associated with it (stack frames, heap memory, etc.) is consistent and can be accessed or modified without interruption from the thread itself. Usually this means the thread is either paused or executing code outside the managed environment, such as unmanaged native code.

The next time the function is called, the optimized code is used. Some functions contain very long-running loops, so for those it makes sense to switch from unoptimized to optimized code while the function is still running. This process is called on-stack replacement (OSR): as the name suggests, a stack frame for one version of the function is transparently replaced with a stack frame for another version of the same function.
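A typical OSR candidate looks like the function below (a plain illustrative example, not VM code): its single invocation runs long enough that waiting for the next call would mean the optimized code is never used.

// The hot loop below can be optimized while it is still running: the VM can
// swap the unoptimized frame for an optimized one mid-execution (OSR).
int sum() {
  var total = 0;
  for (var i = 0; i < 100000000; i++) {
    total += i;
  }
  return total;
}

void main() => print(sum());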


With this new understanding, the earlier “unoptimized compiler” diagram can be updated to a more complete version:

Introduction to the source code

  • The compiler sources live in the runtime/vm/compiler directory
  • The entry point of the compilation pipeline is dart::CompileParsedFunctionHelper::Compile
  • runtime/vm/compiler/backend/il.h defines the IL
  • Kernel-to-IL translation starts at dart::kernel::StreamingFlowGraphBuilder::BuildGraph; this method also handles the construction of IL for various synthetic functions
  • dart::compiler::StubCodeCompiler::GenerateNArgsCheckInlineCacheStub generates machine code for inline cache stubs, and InlineCacheMissHandler handles inline cache misses
  • runtime/vm/compiler/compiler_pass.cc defines the optimizing compiler's passes and their order
  • dart::JitCallSpecializer performs most of the type-feedback-based specialization

Trying it out

The VM provides flags to control the JIT and to make it print the IL and machine code of the functions it compiles.

Flags and descriptions:

  • --print-flow-graph[-optimized] prints the IL of all (or only optimized) compilations
  • --disassemble[-optimized] disassembles all (or only optimized) compiled functions
  • --print-flow-graph-filter=xyz,abc,... restricts the flags above to functions whose names contain one of the given substrings
  • --compiler-passes=... gives fine-grained control over compiler passes: force printing of IL before and after a given pass, or disable a pass by name; pass help for more usage details
  • --no-background-compilation disables background compilation, so hot functions are compiled on the main thread. This is useful for experimentation, since a short-lived program might otherwise finish before the background compiler gets to the hot function

For example:

# Run test.dart, printing the optimized IL and machine code of
# functions whose names contain "myFunction",
# with background compilation disabled so compilation happens on the main thread
$ dart --print-flow-graph-optimized \
    --disassemble-optimized \
    --print-flow-graph-filter=myFunction \
    --no-background-compilation \
    test.dart

It is important to emphasize that the code generated by the optimizing compiler is specialized using speculative assumptions derived from the application's runtime profile. For example, a dynamic call site that has only ever been observed with instances of class C is converted into a direct call preceded by a check that the receiver is indeed an instance of C. Such assumptions can be broken later in the program's execution:

// Minimal Cat and Dog definitions added here so the example is self-contained.
class Cat { @override String toString() => 'Cat'; }
class Dog { @override String toString() => 'Dog'; }

void printAnimal(obj) {
  print('Animal {');
  print('  ${obj.toString()}');
  print('}');
}

void main() {
  // Call printAnimal many times with a Cat instance; as a result printAnimal
  // gets optimized under the assumption that obj is always a Cat.
  for (var i = 0; i < 50000; i++) printAnimal(Cat());

  // Now call printAnimal with a Dog instance. The optimized version cannot
  // handle this type, because it was optimized under the assumption that the
  // receiver is always a Cat. This causes a "deoptimization".
  printAnimal(Dog());
}

Whenever optimized code makes optimistic assumptions, those assumptions can still be violated during execution, so the VM needs a mechanism that can recover when this happens and keep the program going.

That recovery mechanism is deoptimization: when an optimized version hits a case it cannot handle, execution is switched to the matching unoptimized code. The unoptimized version of a function makes no assumptions and can handle all possible inputs.

Entering the unoptimized function at exactly the right position is critical, because the code has side effects (in the example above, the first print has already executed by the time the deoptimization happens).

In the VM, matching positions in optimized code to positions in unoptimized code is done through deoptimization IDs (deopt ids), mentioned in the figure above.

The VM typically discards the optimized version of a function after a deoptimization and re-optimizes it later using updated type feedback.

The VM guards the optimistic assumptions made by the compiler in two ways:

  • Inline checks (e.g. the CheckSmi and CheckClass IL instructions) verify an assumption at the point of use. For example, when a dynamic call is converted into a direct call, these check instructions are inserted right before the direct call. Deoptimizations that occur at such checks are called eager deoptimizations, because they happen as soon as the check fails
  • Global guards instruct the runtime to discard optimized code when something it depends on changes. For example, the optimizing compiler might observe that class C is never extended and use this information during type propagation. However, a subclass of C could appear later, through dynamic code loading or class finalization, invalidating the assumption. At that point, the runtime must find all optimized code compiled under this assumption and discard it. The runtime may find some of the now-invalid code on the execution stack; in that case, the affected stack frames are marked for deoptimization and are deoptimized when execution returns to them. This kind of deoptimization is called lazy deoptimization, because it is delayed until control returns to the optimized code

Introduction to the source code

The deoptimizer lives in runtime/vm/deopt_instructions.cc. It is essentially a small interpreter for deoptimization instructions, which describe how to reconstruct the state of the unoptimized code from the state of the optimized code. The deoptimization instructions are generated at compile time by dart::CompilerDeoptInfo::CreateDeoptInfo, for every potential deoptimization location in the optimized code.

Trying it out

  • The --trace-deoptimization flag makes the VM print information about the cause and location of every deoptimization
  • --trace-deoptimization-verbose additionally prints every deoptimization instruction executed during a deoptimization

Running through snapshots

The VM has the ability to serialize an isolate's heap, or more precisely the object graph residing in the heap, into a binary snapshot file. The snapshot can then be used to recreate the same state when starting a VM isolate.

The snapshot format is low level and optimized for fast startup: it is essentially a list of objects to create plus instructions on how to wire them together. The original idea behind snapshots was that, instead of parsing Dart source and gradually creating the VM's internal data structures, the VM could quickly spin up an isolate with all the necessary data structures by unpacking a snapshot.

The idea of snapshots is rooted in Smalltalk images, which were in turn inspired by Alan Kay's work. The Dart VM uses a clustered serialization format, similar to the techniques described in the papers “Parcels: a Fast and Feature-Rich Binary Deployment Technology” and “Clustered Serialization with Fuel”.

Initially snapshots did not contain machine code; that capability was added later, when the AOT compiler was developed. The AOT compiler and snapshots with code were created to make the VM usable on platforms where JIT compilation is impossible due to platform restrictions.

A snapshot with code works basically like a normal snapshot, with one minor difference: it contains a code section that, unlike the rest of the snapshot, does not need to be deserialized. This code section is special in that it becomes part of the heap directly after being memory-mapped.


As the figure above illustrates, the rest of the snapshot data is deserialized into the heap of the new runtime, while the code section can be memory-mapped as-is and used directly.

Introduction to the source code

runtime/vm/clustered_snapshot.cc handles serialization and deserialization of snapshots. The Dart_CreateXyzSnapshot[AsAssembly] family of APIs is responsible for writing heap data into snapshots (e.g. Dart_CreateAppJITSnapshotAsBlobs and Dart_CreateAppAOTSnapshotAsAssembly).

In addition, Dart_CreateIsolateGroup can optionally receive snapshot data from which to start an isolate.

Running through AppJIT snapshots

AppJIT snapshots were introduced to reduce JIT warm-up time for large Dart applications such as the Dart analyzer or dart2js. When these tools are used on small projects, they spend roughly as much time doing actual work as the VM spends JIT-compiling them.

AppJIT snapshots solve this problem: an application is run on the VM with some mock training data, after which all the generated code and the VM's internal data structures are serialized into an AppJIT snapshot file. This snapshot, rather than source code or a kernel binary, is then distributed, and the VM starts from it. If the real data diverges from the data seen during training, the VM can still fall back to JIT compilation.

Trying it out

If you pass --snapshot-kind=app-jit --snapshot=path-to-snapshot to dart, it generates an AppJIT snapshot after running the application. Here is an example of creating an AppJIT snapshot for dart2js:

# Run from source in JIT mode
$ dart pkg/compiler/lib/src/dart2js.dart -o hello.js hello.dart
Compiled 7,359,592 characters Dart to 10,620 characters JavaScript in 2.07 seconds
Dart file (hello.dart) compiled to JavaScript: hello.js

# Training run to produce the AppJIT snapshot
$ dart --snapshot-kind=app-jit --snapshot=dart2js.snapshot \
    pkg/compiler/lib/src/dart2js.dart -o hello.js hello.dart
Compiled 7,359,592 characters Dart to 10,620 characters JavaScript in 2.05 seconds
Dart file (hello.dart) compiled to JavaScript: hello.js

# Run from the snapshot
$ dart dart2js.snapshot -o hello.js hello.dart
Compiled 7,359,592 characters Dart to 10,620 characters JavaScript in 0.73 seconds
Dart file (hello.dart) compiled to JavaScript: hello.js

Running through AppAOT snapshots

AOT snapshots were originally introduced for platforms that cannot use JIT compilation, but they are also useful in situations where fast startup and consistent performance are worth a potential peak-performance penalty.

Not being able to JIT implies that:

  1. An AOT snapshot must contain executable code for every function that might be invoked during the application's execution
  2. The executable code must not rely on any optimistic assumptions that could be broken at run time

To satisfy these requirements, AOT compilation performs a global static analysis called type flow analysis (TFA) to determine, starting from known entry points, which parts of the program are reachable, which class instances are allocated, and how types flow through the program. All of these analyses are conservative: they err on the side of correctness. This is in stark contrast to the JIT, which can err on the side of performance, because it can always deoptimize into unoptimized code to restore correct behavior.

All reachable functions are then compiled to native code without any speculative optimizations. The type flow information is still used to specialize the code, for example to devirtualize calls (see the note and sketch below).


Devirtualization: calls in the code are written against an abstract class or a superclass, but at run time they are actually invoked on a concrete or derived type. Resolving such an abstract call into a direct call to a concrete method is called devirtualization.
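For example (a hedged sketch; whether TFA devirtualizes this exact case depends on the whole program), if the analysis can prove that only one implementation of a method is ever reachable, the dynamic call collapses into a direct one:

abstract class Animal {
  String speak();
}

class Dog extends Animal {
  @override
  String speak() => 'Woof';
}

void main() {
  // Global analysis can see that Dog is the only Animal subtype ever
  // allocated in this program, so a.speak() can be compiled as a direct
  // call to Dog.speak.
  final Animal a = Dog();
  print(a.speak());
}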

When all methods are compiled, a snapshot of the heap can be taken.

The resulting snapshot can then be executed with the precompiled runtime, a special variant of the Dart VM that excludes components such as the JIT and dynamic code-loading facilities.

Introduction to the source code

  • package:vm/transformations/type_flow/transformer.dart is the entry point for type flow analysis and for the transformation based on its results
  • dart::Precompiler::DoCompileAll is the entry point of AOT compilation in the VM

Trying it out

The AOT compilation pipeline is currently packaged into a Dart SDK script called dart2native:

$ dart2native hello.dart -o hello
$ ./hello
Hello, World!

Flags such as --print-flow-graph-optimized and --disassemble-optimized are not forwarded by the dart2native script, so if you want to inspect the generated AOT code you need to build the compiler from source:

# Build the runtime able to execute AOT code
$ tool/build.py -m release -a x64 runtime dart_precompiled_runtime

# Then compile the application with the AOT pipeline
$ pkg/vm/tool/precompiler2 hello.dart hello.aot

# Run the AOT snapshot with the runtime
$ out/ReleaseX64/dart_precompiled_runtime hello.aot
Hello, World!

Despite the global and local analyses, the resulting AOT-compiled code may still contain call sites that could not be devirtualized (that is, could not be resolved statically). To compensate, the runtime uses an extension of the inline caching technique from the JIT, called switchable calls.

As described in the JIT section, the inline cache associated with a call site consists of two pieces of data:

  1. A cache object (an instance of dart::UntaggedICData)
  2. A chunk of native code to invoke (e.g. an InlineCacheStub)

In JIT mode, the runtime only updates the cache itself; in AOT mode, it can replace both the cache and the native code being invoked, depending on the state of the inline cache.

Initially, all dynamic calls start in the unlinked state. When such a call site is reached for the first time, the SwitchableCallMissStub is invoked, which simply calls into the runtime helper DRT_SwitchableCallMiss to link the call site.

If possible, DRT_SwitchableCallMiss transitions the call site into the monomorphic state. In this state, the call site becomes a direct call that enters the method through a special entry point verifying that the receiver has the expected class.

In the example above, obj was an instance of C when obj.method() was executed for the first time, so obj.method resolved to C.method.

The next time this call site executes, it calls C.method directly, skipping any method lookup. However, this "direct" call is not entirely direct: it enters C.method through a special entry point that verifies obj is still an instance of C. If it is not, DRT_SwitchableCallMiss is invoked to try to select the next call-site state.

C.method might still be a valid call target, for example if obj is an instance of class D that extends C but does not override method. In this case, we check whether the call site can transition into the single target state, implemented by SingleTargetCallStub (see also dart::UntaggedSingleTargetCache).

During AOT compilation, most classes are assigned integer IDs by a depth-first traversal of the inheritance hierarchy. For example, if class C has subclasses D0 through Dn, none of which override method, then:

C.:cid <= classId(obj) <= max(D0.:cid, ..., Dn.:cid)

This condition implies that obj.method resolves to C.method. In that case, instead of comparing against a single class (as in the monomorphic state), the call can be guarded by a class-ID range check covering C and all of its subclasses.
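In illustrative Dart terms (the names are made up; the real check is a couple of machine instructions in a stub):

// Single-target check: "receiver is C or any subclass of C" collapses into
// a single range comparison over depth-first class IDs.
bool receiverInRange(int receiverCid, int cCid, int maxSubclassCid) =>
    cCid <= receiverCid && receiverCid <= maxSubclassCid;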

Otherwise, the call site may be switched to use a linear-search inline cache, like the one used in JIT mode (relevant code: ICCallThroughCodeStub, dart::UntaggedICData and dart::PatchableCallHandler::DoMegamorphicMiss).

Finally, if the number of entries in the linear cache grows beyond a threshold, the call site switches to using a dictionary-like data structure (relevant code: MegamorphicCallStub, dart::UntaggedMegamorphicCache and dart::PatchableCallHandler::DoMegamorphicMiss).
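Putting the states together, here is a hedged Dart model of the progression described above (unlinked, monomorphic, linear inline cache, megamorphic). All names and the threshold are illustrative; the real transitions live in VM stubs and dart::PatchableCallHandler, and the single-target state is omitted for brevity:

enum CallState { unlinked, monomorphic, icCall, megamorphic }

class SwitchableCallSite {
  CallState state = CallState.unlinked;
  Type? expectedClass;                  // used by the monomorphic state
  final Map<Type, Function> cache = {}; // linear IC / megamorphic cache

  // `resolve` stands in for the lookup logic of DRT_SwitchableCallMiss.
  Function dispatch(Object receiver, Function Function(Object) resolve) {
    final cls = receiver.runtimeType;
    switch (state) {
      case CallState.unlinked:
        state = CallState.monomorphic; // first call: link to a single class
        expectedClass = cls;
        return _miss(receiver, resolve);
      case CallState.monomorphic:
        if (cls == expectedClass) return cache[cls]!;
        state = CallState.icCall;      // class check failed: widen to an IC
        return _miss(receiver, resolve);
      default:                         // icCall / megamorphic: cache lookup
        return cache[cls] ?? _miss(receiver, resolve);
    }
  }

  Function _miss(Object receiver, Function Function(Object) resolve) {
    final target = resolve(receiver);  // "enter the runtime", then cache
    cache[receiver.runtimeType] = target;
    if (cache.length > 8) state = CallState.megamorphic; // illustrative limit
    return target;
  }
}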

Run-time system


In the original article, the author indicates that this chapter is still to be written; however, the article was last updated on January 29, 2020. (Translator: I have contacted the author by email, hoping to obtain material to continue this section, but have not yet received a reply.)

The original article simply lists the heading “Object model” and leaves two TODO items:

  1. Documentation explaining the difference between AppJIT and CoreJIT snapshots (CoreJIT appears to be a newer concept; only AppJIT is mentioned above)
  2. How switchable calls work in unoptimized code