
Preface

Anyone who has studied operating systems knows that the CPU is time-shared: except under non-preemptive scheduling, a thread never gets to use the CPU exclusively. An operating system runs many threads, and the CPU decides how long each one runs by handing it a time slice, a very short interval of time. Because the CPU runs very fast (its clock frequency is very high), most tasks finish well within a single time slice unless they are CPU-intensive.

The steps for troubleshooting a Java process at 100% CPU are generally the same, although some commands vary with the scenario.

  1. First, find the process (PID) that consumes the most CPU
  2. Then, within that process, find the thread (TID) with the highest CPU consumption
  3. Finally, map that TID to the corresponding Java thread and fix the problem

Under normal circumstances we deploy services on Linux servers, so the troubleshooting process on Linux is the focus. However, considering that most programmers develop on Windows, and that a small number of services are deployed on Windows servers, the troubleshooting process is explained for both Windows and Linux.

Preparing the Sample

Create a Spring Boot project and write a simple interface that creates Person objects in an infinite loop; it will eventually exhaust the heap and throw java.lang.OutOfMemoryError: Java heap space.

Start the project and call the interface: http://localhost:8080/person/test?justDo=true

package com.nobody;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.util.ArrayList;
import java.util.List;

/**
 * @Description
 * @Author Mr.nobody
 * @Date 2021/3/19
 * @Version 1.0.0
 */
@RestController
@RequestMapping("person")
public class PersonController {

    private List<Person> persons = new ArrayList<>();

    @GetMapping("test")
    public Boolean test(@RequestParam boolean justDo) {

        int i = 1;
        if (justDo) {
            while (true) {
                persons.add(new Person("Zhang", i++));
                System.out.println(persons.size());
            }
        }
        return justDo;
    }
}
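
The Person class itself is not shown in the original snippet. A minimal version that matches the constructor used above might look like this (the field names are assumptions):

package com.nobody;

// Not part of the original post; a minimal Person used only to fill the heap.
public class Person {

    private String name;
    private int id;

    public Person(String name, int id) {
        this.name = name;
        this.id = id;
    }
}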

Checking the Windows Environment

First, find the process (PID) that consumes the most CPU. There are two ways to do this. One is to use the Windows Task Manager and look at the CPU and PID columns: the larger the CPU value, the higher the CPU consumption.

The other way is to use Microsoft's Process Explorer tool, which shows not only the state of each process but also the CPU usage of each thread, whereas Task Manager only shows CPU usage per process. Just download and unzip it; it can be used directly.

Download: docs.microsoft.com/zh-cn/sysin…

After starting the tool, find the Java process with the highest CPU usage; in this case its process ID (PID) is 16356.

Then right-click that row and select Properties, which opens the following window:

Switch to the Threads tab. There you can see the threads in this process; the TIDs of the threads with the highest CPU usage are 18240, 6480, 11260, 13888, 16312, and so on.

Open a CMD window and run jstack 16356 > d:/16356.stack to export the stack information of the process to the file 16356.stack on drive D. The location of the exported file is arbitrary; the file name is usually the process ID with a .stack suffix, but any suffix works.

Above we found that the TIDs of the threads with the highest CPU usage are 18240, 6480, 11260, 13888, 16312. These are decimal values, but the thread IDs in the exported stack information are hexadecimal, so they must be converted first: 0x4740, 0x1950, 0x3640, 0x3fb8.
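
The conversion can be done with any calculator, with printf in a shell, or with a couple of lines of Java. The following throwaway sketch (class name is my own) simply prints the hex form of the TIDs analysed below:

public class TidToHex {
    public static void main(String[] args) {
        // Thread IDs reported by Process Explorer (decimal).
        int[] tids = {18240, 6480, 13888, 16312};
        for (int tid : tids) {
            // jstack prints thread IDs as hexadecimal "nid" values.
            System.out.println(tid + " -> 0x" + Integer.toHexString(tid));
        }
    }
}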

Then open the stack file 16356.stack and search for these values, starting with 0x4740. The matching stack trace pointed to line 30 of our own code; on inspection, Person objects were being created endlessly in an infinite loop, so the cause was found. In addition, searching the stack file for the other three thread IDs, 0x1950, 0x3640, and 0x3fb8, shows that they belong to GC threads, which proves the GC has been busy: memory is running low and needs to be reclaimed, but little of it can actually be freed, so GC keeps running and drives CPU usage very high.

At this point the cause is clear: an infinite loop keeps creating Person instances that are never released. Not only does the worker thread keep consuming CPU, but the GC threads are also kept busy trying to reclaim memory that cannot be freed, eventually resulting in java.lang.OutOfMemoryError.

Checking the Linux Environment

First, use the top command to find the process with the highest CPU usage.

Then use ps -ef | grep java or jps to confirm that the process with high CPU usage is a Java process.

Use top -H -p pid to show all threads of that process; three threads (with IDs 29871, 29872, 29873) have high CPU usage. The -H option displays threads rather than whole processes, which is the default.

Run jstack pid > pid.tdump to export the thread stacks of the process to a file, then view it with cat. The .tdump extension is only a convention; any suffix works.

jstack 29869 > 29869.tdump

cat 29869.tdump
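
If you can add code to the application itself (for example behind an admin endpoint), the standard ThreadMXBean API can report per-thread CPU time from inside the JVM, which gives a picture similar to top -H plus jstack. A minimal sketch (class name and output format are my own); it only sees threads of the JVM it runs in:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BusyThreads {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        for (long id : threads.getAllThreadIds()) {
            ThreadInfo info = threads.getThreadInfo(id);
            // getThreadCpuTime returns nanoseconds, or -1 if CPU time
            // measurement is unsupported or disabled on this JVM.
            long cpuNanos = threads.getThreadCpuTime(id);
            if (info != null && cpuNanos > 0) {
                System.out.printf("%-40s %-15s cpu=%d ms%n",
                        info.getThreadName(), info.getThreadState(), cpuNanos / 1_000_000);
            }
        }
    }
}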

Convert the three thread IDs found above from decimal to hexadecimal: 29871 -> 0x74AF, 29872 -> 0x74B0, 29873 -> 0x74B1.

Searching the dump for these values shows that two of the three threads are GC threads and one is a worker thread. GC threads being this busy indicates that memory is running low and the JVM keeps trying to reclaim it.

Use jstat -gcutil pid to view the heap status of the process. It shows that the Eden space and the old generation are heavily used relative to their current capacity and that GC is frequent. (A programmatic way to read similar GC statistics is sketched after the column descriptions below.)

  • S0: percentage of survivor space 0 (young generation) used, relative to its current capacity
  • S1: percentage of survivor space 1 (young generation) used, relative to its current capacity
  • E: percentage of the Eden space (young generation) used, relative to its current capacity
  • O: percentage of the old generation used, relative to its current capacity
  • M: percentage of the metaspace used
  • CCS: percentage of the compressed class space used
  • YGC: number of young-generation GC events from application startup to sampling time
  • YGCT: total young-generation GC time from application startup to sampling time (s)
  • FGC: number of old-generation (full) GC events from application startup to sampling time
  • FGCT: total old-generation (full) GC time from application startup to sampling time (s)
  • GCT: total time spent in GC from application startup to sampling time (s)
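
If the application is still reachable, the GC counts and times and the overall heap occupancy can also be read from inside the JVM through the standard management API. A minimal sketch, purely for illustration (class name is my own):

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

public class GcStats {
    public static void main(String[] args) {
        // GC event counts and cumulative times, per collector.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: count=%d, time=%d ms%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
        // Overall heap occupancy.
        MemoryUsage heap = ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
        System.out.printf("heap used=%d MB, max=%d MB%n",
                heap.getUsed() / (1024 * 1024), heap.getMax() / (1024 * 1024));
    }
}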

Use jmap -dump:live,format=b,file=pid.hprof pid to export a heap dump containing only live objects. The file is binary, so the suffix can be anything, but .hprof is the convention.
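
If jmap is not at hand, or you want the application to dump its own heap when it detects trouble, a HotSpot JVM exposes the same dump through the HotSpotDiagnosticMXBean. A minimal sketch (the output file name is my own, and the target file must not already exist):

import com.sun.management.HotSpotDiagnosticMXBean;

import java.lang.management.ManagementFactory;

public class HeapDumper {
    public static void main(String[] args) throws Exception {
        // Works on HotSpot JVMs; similar in spirit to jmap -dump:live,...
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        // Second argument true => dump only live (reachable) objects.
        bean.dumpHeap("app-heap.hprof", true);
    }
}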

Finally, use the tool JAVA_HOME/bin/jvisualvm.exe to analyze the snapshot.

Load the snapshot (File -> Load, file type Heap), open the class list, sort by size, and find the class that occupies the most memory: the Person class.

At this point the cause of the problem is clear: in an infinite loop, Person instances are constantly created and never reclaimed. Not only does the worker thread keep consuming CPU, but the GC threads are also busy collecting memory that cannot be freed, until the heap runs out and java.lang.OutOfMemoryError is thrown.

Finally, a brief summary: the troubleshooting process above does not necessarily apply to every scenario, but the general idea is much the same. Use whichever commands and tools suit your own analysis; the Java bin directory ships many JVM performance and monitoring tools, such as jps, jstack, jmap, jhat, jstat, and hprof.


I am Chen Pi, an ITer working in the Internet industry. Search for "Chen Pi's JavaLib" on WeChat to read the latest articles as soon as they are published!

This is the end of sharing ~~

If you find this article helpful, please like, favorite, follow, and comment!