JIT

What is the JIT?

The JIT (Just-In-Time compiler) lets the JVM compile hot code (frequently executed methods or blocks of code) into native machine code at run time and optimize it, so that subsequent executions run more efficiently than interpretation.

The compilers

The HotSpot virtual machine has two built-in just-in-time compilers, the Client Compiler and the Server Compiler, or C1 and C2 for short; you can see their compiler threads after startup. Mainstream HotSpot builds pair the interpreter with one of these compilers by default, and which compiler an application uses depends on the mode the virtual machine runs in. HotSpot selects the mode automatically based on its own version and the hardware of the host machine, but you can also force client or server mode with the -client or -server flag.

Interpreters and compilers

Mainstream commercial virtual machines, such as HotSpot and J9, contain both an interpreter and a compiler. Interpreted execution saves memory, while compiled execution improves efficiency.

By default, Java code runs in mixed mode on these VMs. With -Xint, the interpreter executes all Java code; with -Xcomp, the just-in-time compiler is preferred, although the interpreter still steps in if compilation fails.
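As a quick check, the mode a HotSpot JVM is actually running in can be read from inside a program via the java.vm.info system property (a minimal sketch; the exact property text varies between vendors and versions):

```java
public class VmModeCheck {
    public static void main(String[] args) {
        // HotSpot typically reports "mixed mode", "interpreted mode" (-Xint),
        // or "compiled mode" (-Xcomp) in this property.
        String info = System.getProperty("java.vm.info");
        System.out.println("java.vm.info = " + info);
    }
}
```

Running it as `java -Xint VmModeCheck` versus a plain `java VmModeCheck` shows the mode switch take effect.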

Tiered compilation

Because a just-in-time compiler uses program run time to compile native code, producing more highly optimized code takes longer; and to enable those optimizations, the interpreter may have to collect performance monitoring (profiling) information for the compiler, which slows down interpreted execution as well. To balance startup responsiveness against peak running efficiency, the HotSpot virtual machine adopts a tiered compilation strategy, dividing compilation into levels according to the scale and cost of the compiler's optimizations, including:

  • At level 0, the program is interpreted; the interpreter collects no profiling information, and this level can trigger level 1 compilation;

  • Level 1, also known as C1 compilation, compiles bytecode into native code with simple, reliable optimizations, inserting profiling logic where necessary;

  • Level 2, also known as C2 compilation, also compiles bytecode into native code, but enables optimizations that take longer to compile, and even aggressive optimizations that rely on the profiling information and may prove unreliable.

With tiered compilation in place, the Client Compiler and Server Compiler work at the same time, and much of the code may be compiled more than once. The Client Compiler provides higher compilation speed, the Server Compiler provides better code quality, and the interpreter no longer has to carry the whole burden of collecting profiling information, since C1-compiled code can collect it too.
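The work of the tiered compilers can be observed through the standard management API. The sketch below (assuming a HotSpot JVM with a compilation system; under -Xint the bean may be absent) warms up a small method and then reports the JIT's name and accumulated compilation time:

```java
import java.lang.management.CompilationMXBean;
import java.lang.management.ManagementFactory;

public class JitStats {
    // Candidate hot method: called often enough to be picked up by the compilers.
    static long work(long n) {
        long sum = 0;
        for (long i = 0; i < n; i++) sum += i;
        return sum;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 200_000; i++) work(100);
        CompilationMXBean jit = ManagementFactory.getCompilationMXBean();
        if (jit == null) {
            System.out.println("no compilation system (interpreted-only mode)");
            return;
        }
        System.out.println("JIT compiler: " + jit.getName());
        if (jit.isCompilationTimeMonitoringSupported()) {
            System.out.println("total compilation time: "
                    + jit.getTotalCompilationTime() + " ms");
        }
    }
}
```

On a tiered HotSpot build the reported name mentions the tiered compilers; adding -XX:TieredStopAtLevel=1 restricts compilation to C1 and typically shortens the reported compilation time.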

Hot object

There are two types of “hot code” that are compiled by the just-in-time compiler at run time:

1) methods that are called many times;

2) loop bodies that are executed many times. In the second case, compilation happens while the method is still executing, so it is vividly called On-Stack Replacement (OSR): the compiled code replaces the method while its stack frame is still on the stack.
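A minimal sketch of code that triggers OSR: the method below is entered only once, but its loop body runs millions of times, so HotSpot compiles the loop mid-execution. Running it with -XX:+PrintCompilation shows OSR compilations marked with a % and the bytecode index (@ n) where the replacement happened:

```java
public class OsrDemo {
    public static void main(String[] args) {
        // main is called exactly once, so its invocation counter never gets
        // hot -- but the loop's back-edge counter does, which is what makes
        // the JIT compile the loop and swap it in via on-stack replacement.
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += i;
        }
        System.out.println("sum = " + sum);
    }
}
```

Try `java -XX:+PrintCompilation OsrDemo` and look for lines containing `%` against OsrDemo::main.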

Detection methods

Sample-based hot spot detection periodically samples the top of each thread's stack and treats methods that frequently appear there as hot methods.

Counter-based hot spot detection establishes a counter for each method (and loop body); once the execution count exceeds a certain threshold, the method is treated as hot. HotSpot takes this approach.
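The invocation-counter threshold is tunable with -XX:CompileThreshold (it takes full effect when tiered compilation is disabled with -XX:-TieredCompilation; with tiering on, per-level thresholds apply instead). A sketch of a method that comfortably crosses the classic server-mode threshold of 10,000 invocations:

```java
public class HotMethod {
    // Each call increments this method's invocation counter; once the
    // counter crosses the compile threshold, the method is queued for JIT
    // compilation while the program keeps running.
    static int square(int x) {
        return x * x;
    }

    public static void main(String[] args) {
        long total = 0;
        // 100,000 invocations -- far beyond the default threshold.
        for (int i = 0; i < 100_000; i++) {
            total += square(i % 100);
        }
        System.out.println("total = " + total);
    }
}
```

Running with -XX:+PrintCompilation shows HotMethod::square appearing in the compilation log once it becomes hot.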

Code Cache

What is the code cache?

The Code Cache is used to store hot code compiled by the JIT. Its default size is 32 MB in client mode and 48 MB in server mode, and this value can be changed.

The just-in-time (JIT) compiler is the largest consumer of the code cache, so this area is also known as the JIT code cache by developers.

The corresponding JVM parameters are ReservedCodeCacheSize and InitialCodeCacheSize, which can be set for a Java program as follows:

  • InitialCodeCacheSize - the initial size (default: 160 KB)

  • ReservedCodeCacheSize - the size reserved for the code cache, i.e. its maximum size (default: 48 MB)

  • CodeCacheExpansionSize - the step by which the cache grows (typically 32 or 64 KB)
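On a HotSpot JVM, the effective values of these flags can be read back at run time through the HotSpot-specific diagnostic MXBean (a sketch; com.sun.management is not part of the standard Java API):

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

public class CodeCacheFlags {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Both values are reported in bytes.
        for (String flag : new String[] {"InitialCodeCacheSize", "ReservedCodeCacheSize"}) {
            System.out.println(flag + " = " + bean.getVMOption(flag).getValue());
        }
    }
}
```

Launching with, say, -XX:ReservedCodeCacheSize=128m changes the second value accordingly.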

One solution is to increase ReservedCodeCacheSize to a reasonable value, since many applications now carry a lot of code through dependent libraries. But we cannot grow this region indefinitely.

Fortunately, the JVM provides a startup parameter, UseCodeCacheFlushing, that controls flushing of the code cache. Its default value is false. If it is turned on (-XX:+UseCodeCacheFlushing), occupied space is released when the following conditions are met:

  • The code cache is full: when usage of the area exceeds a certain threshold, it is flushed.

  • A certain amount of time has passed since the last cleanup.

  • Precompiled code is not hot enough. For each JIT-compiled method, the JVM has a heat tracking counter. If the counter value is less than the dynamic threshold, the JVM releases the precompiled code.

Viewing the code cache

To monitor code cache usage, we can track the amount of memory currently in use.

Specify the JVM startup parameter -XX:+PrintCodeCache to print the usage of the code cache. During program execution, we can see output similar to the following:

CodeCache: size=32768Kb used=542Kb max_used=542Kb free=32226Kb

size indicates the maximum size of this memory region, equal to ReservedCodeCacheSize; used is the amount of memory currently in use; max_used is the high-water mark since the program started; free is the remaining unused space in this area. The PrintCodeCache option helps us:

  • Look at when flushing happens

  • Determine whether memory usage has reached a critical point
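Besides -XX:+PrintCodeCache, usage can also be polled from inside a running program through the memory pool MXBeans (a sketch; pool names differ between Java 8, which has a single "Code Cache" pool, and Java 9+, which exposes three "CodeHeap" pools):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class CodeCacheUsage {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // Match both the pre-9 "Code Cache" pool and the segmented
            // "CodeHeap '...'" pools of Java 9+.
            if (pool.getName().contains("Code")) {
                System.out.printf("%s: used=%dKB max=%dKB%n",
                        pool.getName(),
                        pool.getUsage().getUsed() / 1024,
                        pool.getUsage().getMax() / 1024);
            }
        }
    }
}
```

Polling this periodically from a monitoring thread gives the same used/max numbers as the PrintCodeCache summary, without waiting for VM exit.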

Code cache segments

Starting with Java 9, the JVM subdivides the code cache into three distinct segments, each containing one type of compiled code. Specifically:

  • The non-method segment holds JVM-internal code such as the bytecode interpreter. By default this segment is about 5 MB; it can be adjusted with the -XX:NonNMethodCodeHeapSize parameter.

  • The profiled-code segment contains lightly optimized code with a potentially short lifetime. Its default size is 122 MB, adjustable with the -XX:ProfiledCodeHeapSize parameter.

  • The non-profiled segment holds fully optimized native code with a potentially long lifetime. Its default size is also 122 MB, adjustable with the -XX:NonProfiledCodeHeapSize parameter.

This new segmented structure, which handles the different types of compiled code differently, provides better overall performance.

For example, separating short-lived compiled code from long-lived code improves the performance of the method sweeper, since there is a smaller area of memory to scan.