There are many possible causes of 100% CPU usage, and an infinite loop in a program is one of them. However, not everyone gets the chance to fall into this pit at work. I'm one of those who hasn't, so life feels incomplete.
So I made a very important decision: write an infinite loop into a program and see what happens.
Not in a production environment, of course. I set up a lab environment for the experiment (the environment isn't only for this loop). Here is how it is structured:
As usual, it is built with Vagrant + VirtualBox + Ansible automation.
We'll write a simple Spring MVC application and put an infinite loop in one of its endpoints:
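A minimal sketch of such an endpoint, assuming a plain @RestController; only the /web/loop path appears above, while the package, class, and method names are my own placeholders:

```java
package com.example.loopdemo;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
@RequestMapping("/web")
public class LoopController {

    // Deliberate bug: this handler never returns, so the request thread
    // spins at 100% on one core for as long as the request lives.
    @GetMapping("/loop")
    public String loop() {
        long i = 0;
        while (true) {
            i++; // busy-wait: burns CPU but allocates nothing on the heap
        }
    }
}
```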
Here's how I went about tracking down the loop myself.
First, use top to check which process is causing the problem
I make one request:
http://192.168.88.10:9898/web/loop
Then I open a new window and make the same request again.
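Concretely, the two requests and the check look something like this (the URL is the test VM from above; using curl as the client is my own choice):

```bash
# Hit the looping endpoint twice, in the background, since it never responds
curl http://192.168.88.10:9898/web/loop &
curl http://192.168.88.10:9898/web/loop &

# Watch which process climbs to the top of the %CPU column
top
```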
What puzzles me here is that CPU usage does not reach 200%; it stays between 120% and 130%. P.S. There must be some gap in my knowledge, otherwise I wouldn't be asking this question.
Second, heap space
There is no JVM heap space problem involved here, and running jstat -gcutil 32593 1s confirms it: nothing looks abnormal. 32593 is the PID of the Java process; 1s means the process is sampled once per second.
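For reference, the sampling command is simply the one above; the columns to watch are the occupancy and GC counters:

```bash
# Sample GC utilization of the Java process (PID 32593) once per second.
# E/O/M are Eden/Old gen/Metaspace occupancy; YGC/FGC are GC counts.
# A busy loop that allocates nothing leaves all of them essentially flat.
jstat -gcutil 32593 1s
```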
Third, the stack
The heap is fine, so the next step is to see which thread has the highest CPU usage.
To list the threads of a Java process: top -H -p < Java process PID >
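With the PID from the earlier top output, that is:

```bash
# Thread mode: each row is one thread of PID 32593, ranked by CPU usage.
# Note the capital -H; lowercase -h only prints top's usage/help text.
top -H -p 32593
```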
Next, dump the JVM thread stacks with jstack -l < Java process PID > >> stack.log. From the per-thread top output, I pick the busiest thread: 3596.
To find that thread in the stack log, note that jstack reports thread IDs (the nid field) in hexadecimal, while top shows them in decimal, so you have to convert by hand. 3596 in decimal is 0xe0c in hexadecimal.
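Putting the last two steps together, the command sequence looks roughly like this (the PIDs are from this particular run):

```bash
# Dump all thread stacks of the Java process (PID 32593) into a log file
jstack -l 32593 >> stack.log

# Convert the decimal thread ID reported by top (3596) into hex
printf '0x%x\n' 3596   # prints 0xe0c

# Find that thread in the dump; its top stack frame points at the loop
grep -A 10 'nid=0xe0c' stack.log
```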
Fourth, summary
This walkthrough already shows the basic process for handling a 100% CPU case. I hope it helps you!