This is the 11th day of my participation in the Gwen Challenge.

The previous article covered the first batch of Java JVM performance optimization cases, and this one continues with the next batch. Before getting into them, let's first explore the ratio between the young generation and the old generation.

1. The ratio of the young generation to the old generation

Is the ratio of Eden, S0, and S1 really 8:1:1?

Parameter Settings

JVM parameters are set to:

# Initial heap 300M, maximum heap 300M, GC log path
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xms300m -Xmx300m -Xloggc:log/gc.log

With a young-to-old ratio of 1:2, the memory split should be 100M for the young generation and 200M for the old generation.
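To make that arithmetic concrete, here is a minimal sketch (plain Java, with the heap size and ratios from the configuration above hard-coded; the class name is just for illustration) that prints the expected region sizes:

```java
public class GenerationSizes {
    public static void main(String[] args) {
        long heapMb = 300;      // -Xms300m -Xmx300m
        long newRatio = 2;      // old : young = 2 : 1
        long survivorRatio = 8; // Eden : S0 : S1 = 8 : 1 : 1

        long youngMb = heapMb / (newRatio + 1);                       // 100M
        long oldMb   = heapMb - youngMb;                              // 200M
        long edenMb  = youngMb * survivorRatio / (survivorRatio + 2); // 80M
        long survMb  = (youngMb - edenMb) / 2;                        // 10M each

        System.out.printf("young=%dM old=%dM eden=%dM s0=s1=%dM%n",
                youngMb, oldMb, edenMb, survMb);
    }
}
```

So inside the 100M young generation, SurvivorRatio=8 should give roughly 80M / 10M / 10M.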

We can first check how the heap allocation works with the following command:

# Find the Java process id
jps -l
# Inspect the heap configuration of that process
jmap -heap 14160

From the output we can see that the maximum heap is MaxHeapSize = 300M, the young-to-old ratio is NewRatio = 2, and the Eden, S0, S1 ratio is SurvivorRatio = 8, i.e. Eden:S0:S1 = 8:1:1.

But is the actual split really Eden:S0:S1 = 8:1:1? Look at the figure below.

Result: even though SurvivorRatio = 8, the actual allocation is not 8:1:1. Eden:S0 = 75M : 12.5M = 6:1. Surprised? Don't worry, there is an explanation.
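Besides jmap, you can also read the committed region sizes from inside the running JVM. Here is a minimal sketch using the standard MemoryPoolMXBean API (the pool names depend on the collector; under the default Parallel collector they are typically "PS Eden Space", "PS Survivor Space" and "PS Old Gen"):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class PoolSizes {
    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // "Committed" is the amount of memory currently reserved for this pool
            long committedMb = pool.getUsage().getCommitted() / (1024 * 1024);
            System.out.printf("%-22s %5dM%n", pool.getName(), committedMb);
        }
    }
}
```

Run it with the same flags as above to compare against the jmap output.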

The AdaptiveSizePolicy parameter

This is because JDK 1.8 uses the Parallel collector (UseParallelGC) by default, and that collector enables AdaptiveSizePolicy by default, which automatically recalculates the sizes of the Eden, From, and To regions based on GC statistics. So the 6:1 split is caused by JDK 1.8's adaptive sizing strategy; it also explains why the GC log contains many Full GC (Ergonomics) entries.

We can turn this configuration on and off in JVM parameters:

# enable
-XX:+UseAdaptiveSizePolicy
# disable
-XX:-UseAdaptiveSizePolicy

With the adaptive policy turned off, do we now get Eden:S0:S1 = 8:1:1?

If you don't want the JVM to resize the regions dynamically, there are two options (a sketch for verifying the resulting startup flags follows the list):

  1. Keep UseParallelGC and explicitly set -XX:SurvivorRatio=8.

  2. Use the CMS collector, which disables UseAdaptiveSizePolicy by default; configure it with -XX:+UseConcMarkSweepGC.
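Whichever option you choose, it can be worth double-checking which flags the JVM actually started with. Here is a small sketch using the standard RuntimeMXBean API; it simply prints the startup options of its own process (jinfo -flags <pid> gives similar information from outside the process):

```java
import java.lang.management.ManagementFactory;

public class ShowJvmArgs {
    public static void main(String[] args) {
        // Prints the JVM options this process was launched with,
        // e.g. -XX:-UseAdaptiveSizePolicy or -XX:SurvivorRatio=8
        ManagementFactory.getRuntimeMXBean().getInputArguments()
                .forEach(System.out::println);
    }
}
```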

Notes

  1. In JDK 1.8, if CMS is used, UseAdaptiveSizePolicy is forced to false no matter how you set it; behavior does differ between JDK versions, though.
  2. Do not use UseAdaptiveSizePolicy together with an explicit SurvivorRatio setting.
  3. UseAdaptiveSizePolicy dynamically resizes Eden and the Survivor spaces. In some cases a Survivor space gets shrunk very small, to a few tens of MB or even just a few MB. When that happens, objects that survive a Young GC but no longer fit into Survivor are promoted straight to the old generation, the old generation fills up faster, and Full GC is triggered. If a Full GC takes a long time (say, hundreds of milliseconds), that is unacceptable for systems with strict response-time requirements.
  4. For external-facing systems with heavy traffic and low-latency requirements, it is recommended not to enable this parameter.

2. Troubleshooting high CPU usage

Case

Deadlocks need no introduction, so I won't write out a full case here; picture it yourself: two threads each hold a lock the other one needs. I have yours, you have mine, neither will let go, and they just stare at each other. A minimal sketch follows.
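For readers who do want a concrete picture, here is a minimal, hypothetical sketch of that situation (the class, lock, and thread names are made up for illustration): each thread grabs one lock and then blocks forever waiting for the other.

```java
public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    public static void main(String[] args) {
        // thread-1 takes A then wants B; thread-2 takes B then wants A
        new Thread(() -> {
            synchronized (LOCK_A) {
                sleep(100); // give thread-2 time to grab LOCK_B
                synchronized (LOCK_B) {
                    System.out.println("thread-1 got both locks");
                }
            }
        }, "thread-1").start();

        new Thread(() -> {
            synchronized (LOCK_B) {
                sleep(100); // give thread-1 time to grab LOCK_A
                synchronized (LOCK_A) {
                    System.out.println("thread-2 got both locks");
                }
            }
        }, "thread-2").start();
    }

    private static void sleep(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```

Neither thread ever reaches its println; running jstack against such a process will normally report a Java-level deadlock and show which lock each thread holds and waits for.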

Problem analysis

When threads deadlock (or keep spinning trying to acquire a lock), CPU usage can stay abnormally high. The troubleshooting approach is as follows:

  1. First find the id of the Java process.
  2. Using that process id, find the PID of the thread with the abnormal CPU usage.
  3. Convert the thread PID to hexadecimal, e.g. 31695 -> 7bcf, i.e. 0x7bcf (see the one-liner after this list).
  4. Run jstack <process pid> | grep -A20 0x7bcf to locate the code the thread is executing.
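Step 3 can also be done with a one-line Java sketch instead of a calculator (31695 is just the example id from the list; on Linux, printf '%x\n' 31695 does the same):

```java
public class ThreadIdToHex {
    public static void main(String[] args) {
        int tid = 31695; // decimal thread id taken from top -Hp
        // jstack prints thread ids in hex as nid=0x...
        System.out.println("0x" + Integer.toHexString(tid)); // prints 0x7bcf
    }
}
```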

Now let's walk through these steps:

# Find the Java process id
jps -l

The results are as follows:

# Show per-thread CPU usage for process 1456
top -Hp 1456

The results are as follows:

As can be seen from the figure above, the current thread ID with high CPU usage is 1465

Next, convert the thread PID to hexadecimal

# Convert the decimal thread id to hexadecimal
# 1465 -> 5b9, shown in the jstack output as 0x5b9

# jstack takes the process id; grep for the thread's hex id (nid)
jstack 1456 | grep -A20 0x5b9

The solution

  1. Make every thread acquire the locks in the same order.
  2. Or use a timed lock: if a thread cannot acquire the lock within a certain time, it releases all the locks it already holds and retries (see the sketch below).
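For option 2, here is a hedged sketch using java.util.concurrent.locks.ReentrantLock (the class and method names are illustrative, not from the original case): if the second lock cannot be acquired within the timeout, the thread releases what it already holds, backs off briefly, and retries instead of deadlocking.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TimedLockDemo {
    private static final ReentrantLock LOCK_A = new ReentrantLock();
    private static final ReentrantLock LOCK_B = new ReentrantLock();

    /** Acquires both locks or gives up cleanly; returns true if the work was done. */
    static boolean transfer() throws InterruptedException {
        if (LOCK_A.tryLock(50, TimeUnit.MILLISECONDS)) {
            try {
                if (LOCK_B.tryLock(50, TimeUnit.MILLISECONDS)) {
                    try {
                        // ... do the work that needs both locks ...
                        return true;
                    } finally {
                        LOCK_B.unlock();
                    }
                }
            } finally {
                LOCK_A.unlock();
            }
        }
        return false; // could not get both locks; caller can back off and retry
    }

    public static void main(String[] args) throws InterruptedException {
        while (!transfer()) {
            // Back off for a random interval before retrying, to avoid livelock
            Thread.sleep(ThreadLocalRandom.current().nextInt(10, 50));
        }
        System.out.println("done");
    }
}
```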

3. The impact of the number of G1 concurrent GC threads on performance

Configuration information

Hardware configuration: 8 cores

JVM Parameter Settings

export CATALINA_OPTS="$CATALINA_OPTS -XX:+UseG1GC"
export CATALINA_OPTS="$CATALINA_OPTS -Xms30m"
export CATALINA_OPTS="$CATALINA_OPTS -Xmx30m"
export CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCDetails"
export CATALINA_OPTS="$CATALINA_OPTS -XX:MetaspaceSize=64m"
export CATALINA_OPTS="$CATALINA_OPTS -XX:+PrintGCDateStamps"
export CATALINA_OPTS="$CATALINA_OPTS -Xloggc:/opt/tomcat8.5/logs/gc.log"
export CATALINA_OPTS="$CATALINA_OPTS -XX:ConcGCThreads=1"

The initial and maximum heap sizes are deliberately set small (30M) so that Full GC is triggered easily; what we care about is the GC time.

The metrics to watch are: GC count, GC time, and the JMeter average response time.

Initial state

Start Tomcat and check the number of concurrent GC threads: jinfo -flag ConcGCThreads <pid>

The output is -XX:ConcGCThreads=1, i.e. the number of concurrent GC threads is 1, matching the configuration above.

Check the GC status: jstat -gc <pid>

From the output:

YGC (young GC count): 1259
FGC (full GC count):  6
GCT (total GC time):  5.556 s

GC status after the JMeter load test:

From the output:

YGC (young GC count): 1600
FGC (full GC count):  18
GCT (total GC time):  7.919 s

From this we can calculate the GC counts and GC time consumed during the load test:

YGC during the test: 1600 - 1259 = 341
FGC during the test: 18 - 6 = 12
GCT during the test: 7.919 - 5.556 = 2.363 s
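If you would rather compute these deltas in code than by diffing jstat snapshots, here is a minimal sketch using the standard GarbageCollectorMXBean API (shown as a standalone main for brevity; in practice you would run the measurement inside the application under test, and under G1 the beans are typically named "G1 Young Generation" and "G1 Old Generation"):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcDelta {
    public static void main(String[] args) throws InterruptedException {
        long countBefore = 0, timeBefore = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            countBefore += gc.getCollectionCount();
            timeBefore  += gc.getCollectionTime();   // milliseconds
        }

        Thread.sleep(60_000);   // ... run the load test during this window ...

        long countAfter = 0, timeAfter = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            countAfter += gc.getCollectionCount();
            timeAfter  += gc.getCollectionTime();
        }

        System.out.printf("GCs during window: %d, GC time: %d ms%n",
                countAfter - countBefore, timeAfter - timeBefore);
    }
}
```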
The JMeter results are as follows:

95% of requests responded within 16 ms
99% of requests responded within 28 ms

After the optimization

Add the thread configuration:

export CATALINA_OPTS="$CATALINA_OPTS -XX:ConcGCThreads=8"

Check the GC status: jstat -gc <pid>

Initial GC state after Tomcat is started:

Summary:

YGC (young GC count): 1134
FGC (full GC count):  5
GCT (total GC time):  5.234 s

GC status after the JMeter load test:

Summary:

YGC (young GC count): 1347
FGC (full GC count):  16
GCT (total GC time):  7.149 s

From this we can calculate the GC counts and GC time consumed during the load test:

YGC during the test: 1347 - 1134 = 213
FGC during the test: 16 - 5 = 11
GCT during the test: 7.149 - 5.234 = 1.915 s

Increasing the number of concurrent GC threads reduced the GC time and the user response time. The JMeter results are as follows:

95% of requests responded within 15 ms
99% of requests responded within 22 ms

Advice

After increasing the number of concurrent GC threads, both the average request response time and the GC time dropped noticeably. Judging purely from these results, the optimization had some effect. Keep this in mind when tuning production projects at work.

4. Summary

This article first looked at the ratio of Eden, S0 and S1 in the young generation. We all "know" it is 8:1:1, but that is not what you get by default: the actual ratio depends on the garbage collector, the adaptive sizing parameter UseAdaptiveSizePolicy, and SurvivorRatio. It then walked through a troubleshooting approach for high CPU usage and a case on how the number of G1 concurrent GC threads affects performance. I hope it helps!

You are welcome to follow my WeChat official account (MarkZoe) so we can learn from and communicate with each other.