1. Preface

This article introduces GCeasy, a visual analysis tool for GC logs. With a GC log visualization tool we can easily see the JVM memory usage of each generation, the number of garbage collections, the reasons they were triggered, the time they took, the overall throughput, and so on, all of which are very useful when tuning the JVM.

2. GCeasy Introduction

GCeasy (gceasy.io) is an online GC log analyzer that can be used for memory leak detection, GC pause cause analysis, JVM configuration optimization, and more. It is free to use (some services are paid).

After opening the official website, upload the log file and click Analysis:

Wait a moment, and you’ll see a graph of the GC log analysis.

3. Analysis

Before looking at the charts, here are the startup parameters of my blog application:

java -Xms312m -Xmx312m -XX:MetaspaceSize=768m -XX:MaxMetaspaceSize=768m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/data/blog_server/target/gc.log -jar /data/blog_server/target/blog.jar &
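
A side note: -XX:+PrintGCDetails, -XX:+PrintGCTimeStamps and -Xloggc are the JDK 8 style of GC logging flags. If you run on JDK 9 or later, they were replaced by unified logging, so the equivalent command would look roughly like this (a sketch, not the command I actually use):

java -Xms312m -Xmx312m -XX:MetaspaceSize=768m -XX:MaxMetaspaceSize=768m -Xlog:gc*:file=/data/blog_server/target/gc.log:time -jar /data/blog_server/target/blog.jar &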

Now let's go through some of the analysis charts provided by the site, together with these parameters.

3.1 JVM Memory Size

The table on the left shows the memory allocation for the young generation, the old generation, the metaspace, and the total. The Allocated column is the memory allocated to each area, and Peak is the maximum amount actually used in that area.

Let’s compare the application startup parameters:

  • Young generation: the heap is set to 312 MB at startup. Since no explicit young-generation ratio is given, HotSpot defaults to roughly 1/3 of the heap (NewRatio=2), i.e. 312/3 ≈ 104 MB; the table shows 93 MB (a quick way to verify these defaults is shown right after this list).
  • Old generation: by default the old generation gets the remaining 2/3 of the heap, about 208 MB, which matches the 208 MB in the table.
  • Metaspace: from JDK 1.8 on, class metadata lives in the metaspace; JDK 1.7 and earlier only had the permanent generation. The value shown here is 768 MB, which is indeed the value set in the startup command.
  • Total: the sum of the memory allocated to the young generation, the old generation, and the metaspace.
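
If you want to check what sizes the JVM actually chose, one quick way (a sketch assuming a JDK 8 HotSpot JVM; -XX:+PrintFlagsFinal is a standard HotSpot flag) is to print the final flag values at startup:

java -Xms312m -Xmx312m -XX:+PrintFlagsFinal -version | grep -E "NewRatio|NewSize|OldSize"

NewRatio=2 is the default that gives the young generation roughly 1/3 of the heap, while NewSize/MaxNewSize and OldSize show the concrete values chosen for this 312 MB heap.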

A question


Seeing this, I was actually quite confused: my Alibaba Cloud ECS server only has 1 GB of memory, and this application alone has about 1 GB allocated in total. I have also deployed a MySQL instance, an Nginx instance, and a few other applications on the same machine, so shouldn't the server have run out of memory?

In fact, everything on the server works fine in the current configuration, so where is the misunderstanding?

The key to the problem is the metaspace introduced in JDK 1.8.

We know that in JDK 1.7, class definitions and related information were stored in the permanent generation, and the young generation, old generation, and permanent generation all belonged to the heap memory, which is logically memory managed inside the virtual machine.

Starting with JDK 1.8, the permanent generation was removed and replaced by the metaspace, which stores class definition information. Crucially, the metaspace no longer lives in the JVM heap; it is allocated directly from native operating-system memory, so its maximum size is bounded by the memory of the operating system.

-XX:MetaspaceSize=768m -XX:MaxMetaspaceSize=768m

-XX:MetaspaceSize=768m does not mean that 768 MB of memory is handed to the metaspace as soon as the application starts. This value is the high-water mark at which a Full GC is triggered: metaspace usage actually grows from zero as classes are loaded after startup, and once it reaches the 768 MB threshold a Full GC runs and the metaspace is garbage collected. If the metaspace still exceeds the MaxMetaspaceSize threshold after collection, an OOM exception is raised.

Therefore, the current usage of the application's metaspace is far less than 768 MB. I will write a separate article on how to inspect the metaspace usage of a running application.
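
As a quick preview (a sketch assuming a JDK 8 HotSpot JVM with the JDK's jstat tool available), you can already get a rough number with:

jstat -gc <pid>

In the output, the MC column is the current metaspace capacity and MU is the metaspace actually in use, both in KB; for this application MU is far below 768 MB.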

Note:

In addition, you may wonder why the Peak value for the metaspace is empty. My guess is that GCeasy leaves it empty because no Full GC appears in the current application's GC log.

3.2 Key Performance Indicators

Key performance indicators: throughput, plus the average GC pause time, the maximum pause time, and the distribution of pauses across time ranges. Throughput is the proportion of total run time spent doing application work rather than GC; for example, if the JVM has been running for 100 seconds and has spent 1 second in GC pauses, throughput is 99%.

As we know, a JVM GC pauses the application; this is the so-called Stop The World, or STW.

You can see that a total of 26 GCs occurred, 25 of them pausing for less than 100 milliseconds and one pausing for 160 milliseconds.

Later, if you want to tune the application, you can analyze how to reduce both the number of GCs and their pause times.

3.3 Interactive Graphs

Time charts:

This figure shows how heap memory usage evolves over time. If memory usage surges at some moment and triggers an OOM, this chart makes it fairly easy to spot.
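
Relatedly, if you are worried about an OOM, it can also help to have the JVM write a heap dump when one occurs, so the surge can be examined afterwards. The standard HotSpot flags for this look like the following (a sketch; these flags are not part of my current startup command, and the dump path is just an example):

java -Xms312m -Xmx312m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/blog_server/target/heapdump.hprof -jar /data/blog_server/target/blog.jar &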

3.4 GC Statistics

Pie chart of statistics:

As you can see at a glance, not a single Full GC has been triggered (before the optimization described in the previous article, there were Full GCs).

3.5 GC Causes

GC triggers:

As you can see, the Allocation Failure entries are all GCs triggered by insufficient memory in the young generation, so one possible optimization is to increase the young generation size to reduce the number of GCs (see the sketch below).
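
For example, the young generation can be enlarged explicitly with -Xmn while keeping the same total heap. The command below is just a sketch: -Xmn is a standard HotSpot flag, but the 156m value is an illustrative choice I have not benchmarked:

java -Xms312m -Xmx312m -Xmn156m -XX:MetaspaceSize=768m -XX:MaxMetaspaceSize=768m -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/data/blog_server/target/gc.log -jar /data/blog_server/target/blog.jar &

This would give the young generation about half of the heap instead of the default third; whether it actually reduces the GC count without putting pressure on the old generation is something to verify by re-running the analysis on a new GC log.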

3.6 Others

The site has some other features that I won't show here; if you're interested, go take a look yourself.

4. Summary

As you can see, this is a great visualization site, much more intuitive than reading raw GC log files. Having gone through the report, I suddenly realized that there is still a lot of room for tuning the JVM parameters of my blog application.