In previous stress tests I often ran into this scenario: a Java application had just started when we began applying load, and performance was very poor at first, so the test result did not meet the target. On a second run, however, performance reached a steady high watermark. At the time I only knew that the hot code had been "cached" and no longer needed to be recompiled, but I did not understand the underlying principle. After looking into it further……

CodeCache

The CodeCache is the staging area for hot code: code produced by the just-in-time (JIT) compiler lives here, in off-heap memory. Besides JIT-compiled code (which makes up the majority), the native method stubs Java uses for JNI also reside in the CodeCache.
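You can inspect this region at runtime through the standard memory-pool MXBeans. A minimal sketch (the pool is named "Code Cache" on JDK 8; on JDK 9+ it is split into several "CodeHeap ..." pools, so the sketch matches on both):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryUsage;

public class CodeCacheUsage {
    public static void main(String[] args) {
        // Iterate over all memory pools and print the ones backing compiled code.
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            if (name.contains("Code Cache") || name.contains("CodeHeap")) {
                MemoryUsage usage = pool.getUsage();
                // Note: getMax() may be -1 if no limit is defined for the pool.
                System.out.printf("%s: used=%d KB, committed=%d KB, max=%d KB%n",
                        name,
                        usage.getUsed() / 1024,
                        usage.getCommitted() / 1024,
                        usage.getMax() / 1024);
            }
        }
    }
}
```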

The compiler

Client compiler (C1): aimed at client/GUI programs where startup performance matters; its optimizations are relatively simple, so compilation is fast.

Server compiler (C2): aimed at the peak performance of server programs; its optimizations are more complex, so compilation takes longer, but the generated code runs faster.

In short, C1's advantage is that there is no long wait for compilation, while C2's advantage is higher efficiency once the program is actually running.

Tiered compilation: introduced in Java 7 and enabled by default since Java 8. Hot methods are first compiled by the C1 compiler, and the hottest paths within those methods are then recompiled by C2 (think of it as a second pass that applies better optimizations based on profiling data gathered during earlier runs). So as not to interfere with the normal running of the program, JIT compilation happens on separate background threads, and HotSpot allocates C1 and C2 compiler threads in roughly a 1:2 ratio based on available CPU resources. When resources allow, interpreted execution and compilation proceed at the same time; once compilation finishes, the compiled machine code is used the next time the method is called, replacing interpreted execution (in other words, the method has been translated into more efficient machine code that replaces the inherently slower interpreted form). A small example of how to observe this follows.
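A minimal sketch for watching tiered compilation happen: the hot method below is called often enough to be compiled, and running the program with -XX:+PrintCompilation typically shows it appearing first at a C1 tier and later at the C2 tier (the exact output format and tier numbers vary by JDK version; the class and method names here are just examples):

```java
public class TieredCompilationDemo {
    // A small hot method: called often enough that it is first compiled by C1
    // and, once profiling data accumulates, recompiled by C2.
    static long accumulate(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i * 31L;
        }
        return sum;
    }

    public static void main(String[] args) {
        // Run with: java -XX:+PrintCompilation TieredCompilationDemo
        // and look for lines mentioning "accumulate" at increasing compilation levels.
        long total = 0;
        for (int i = 0; i < 200_000; i++) {
            total += accumulate(1_000);
        }
        System.out.println(total); // keep the result live so the work is not eliminated
    }
}
```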

GC for the CodeCache region

Internally, the JVM starts out by interpreting Java bytecode. When a method's invocation count or loop back-edge count reaches a certain threshold (C1: 1,500; C2: 10,000), just-in-time compilation is triggered, which compiles the Java bytecode to native machine code to improve execution efficiency.
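This is exactly the warm-up effect from the stress test in the introduction. A rough illustration (timings vary by machine and JDK, and this is not a proper benchmark): the first batch typically runs mostly interpreted and is noticeably slower than later batches, which run as compiled machine code once the counters cross the threshold.

```java
public class WarmupDemo {
    // The same work is timed in several batches; early batches run interpreted,
    // later batches run as JIT-compiled machine code.
    static double work(double x) {
        double acc = 0;
        for (int i = 1; i <= 10_000; i++) {
            acc += Math.sqrt(x + i);
        }
        return acc;
    }

    public static void main(String[] args) {
        double sink = 0;
        for (int batch = 0; batch < 5; batch++) {
            long start = System.nanoTime();
            for (int i = 0; i < 2_000; i++) {
                sink += work(i);
            }
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println("batch " + batch + ": " + elapsedMs + " ms");
        }
        System.out.println(sink); // keep the result live so the loop is not eliminated
    }
}
```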

The compiled native machine code is cached in the CodeCache, and if too much code triggers just-in-time compilation and the region is not collected in time, the CodeCache fills up. Once the CodeCache is full, code that has already been compiled still runs as native code, but code that has not yet been compiled can only run interpreted, and the system's processing capacity drops significantly.
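If you want an early warning before this happens, one option is to set a usage threshold on the CodeCache memory pool and listen for the threshold-exceeded notification. This is a sketch under the assumption that the pool supports usage thresholds (the code checks isUsageThresholdSupported first); the 90% threshold and the class name are arbitrary choices for illustration:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryNotificationInfo;
import java.lang.management.MemoryPoolMXBean;
import javax.management.NotificationEmitter;
import javax.management.NotificationListener;

public class CodeCacheAlarm {
    public static void install() {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            String name = pool.getName();
            boolean isCodeCache = name.contains("Code Cache") || name.contains("CodeHeap");
            if (isCodeCache && pool.isUsageThresholdSupported() && pool.getUsage().getMax() > 0) {
                // Alert when the pool is 90% full, well before the JIT would be disabled.
                pool.setUsageThreshold((long) (pool.getUsage().getMax() * 0.9));
            }
        }
        // The platform MemoryMXBean emits a notification when a pool with a usage
        // threshold crosses it.
        NotificationEmitter emitter = (NotificationEmitter) ManagementFactory.getMemoryMXBean();
        NotificationListener listener = (notification, handback) -> {
            if (MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.equals(notification.getType())) {
                System.err.println("WARNING: memory pool crossed its usage threshold: "
                        + notification.getMessage());
            }
        };
        emitter.addNotificationListener(listener, null, null);
    }

    public static void main(String[] args) {
        install();
        // ... the application would run here; the listener fires if the CodeCache
        // (or any other pool with a threshold set) crosses 90% usage.
    }
}
```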

The JVM provides a GC mechanism for the CodeCache: -XX:+UseCodeCacheFlushing

This parameter has been on by default since JDK 1.7.0_4; flushing is attempted when the CodeCache is about to fill up. JDK 7 does not collect this region very effectively and the returns from its GC are low; JDK 8 brings a big improvement.

Java 8 provides a JVM startup parameter, -XX:+PrintCodeCache, which prints CodeCache usage when the JVM exits. You can observe this value each time the application shuts down and gradually tune the CodeCache to the most appropriate size. Flushing itself is enabled by adding -XX:+UseCodeCacheFlushing to the startup parameters.
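Besides reading the report printed at exit, you can also read the configured CodeCache cap at runtime and compare it against the actual usage shown earlier. This is a sketch using the HotSpot-specific diagnostic MXBean (not part of the standard Java API, so it only works on HotSpot-based JVMs); the cap itself is controlled by the -XX:ReservedCodeCacheSize flag, mentioned here only as a supporting detail:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class PrintReservedCodeCacheSize {
    public static void main(String[] args) {
        // HotSpot-specific diagnostic bean; unavailable on non-HotSpot JVMs.
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // ReservedCodeCacheSize is the upper bound of the CodeCache, in bytes.
        String reserved = bean.getVMOption("ReservedCodeCacheSize").getValue();
        System.out.println("ReservedCodeCacheSize = "
                + Long.parseLong(reserved) / (1024 * 1024) + " MB");
    }
}
```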

When this option is turned on:

  1. Before the JIT is turned off, that is, before the CodeCache fills up, a cleanup is attempted to evict some code from the CodeCache. If there is still not enough space after the cleanup, the JIT is turned off anyway.
  2. When the CodeCache is about to run out, the older half of the compiled methods is placed on an old list for collection; if a method on the old list is not called again within a certain amount of time, it is flushed from the CodeCache; when the CodeCache actually fills up, an emergency cleanup discards half of the oldest compiled code.
  3. Even after the CodeCache has been halved, method compilation may still not restart.
  4. Flushing can cause high CPU usage, which itself degrades performance.

JDK 8 significantly improves CodeCache collection. Not only does CodeCache growth stay flat, but collection effort increases markedly once CodeCache usage reaches about 75%, after which usage fluctuates around that level and grows only slowly. Best of all, JIT compilation keeps running and system throughput is not affected.

For the JVM's other GC mechanisms, please read the full explanation of the JVM garbage collection mechanism.

To learn more about JVM parameters, read Key Parameters for the JVM.