I haven't written an original article in a long time; this is my first of 2021, and the start of getting back on the writing track. Today I want to share a book I recently finished, "In-depth Understanding of the Java Virtual Machine, 3rd Edition", which took me three days in total. I suspect many people have never finished it — at more than 700 pages, it is not easy to stick with — so here are some notes on how I got through it and what I focused on.

To be honest, I had already learned much of this book's content before picking it up, so this pass was mostly a review of familiar topics — it went quickly and with a clear purpose. Before this, I had already done my own study of the JVM and summarized it in a mind map. My personal notes were all aimed at tuning, so my goal for this read-through was simple: to deepen my understanding of JVM tuning.

That mind map took a long time to build up — probably more than a year. I still remember the first time I opened "In-depth Understanding of the Java Virtual Machine": everything was a fog, and I didn't finish it.

Later, because of work, I picked up the JVM in bits and pieces and kept recording what I learned. That became my final mind map, which I feel is fairly complete as far as tuning is concerned.

Now for the read-through itself. First, look at the book's table of contents: five parts in total. Part 1 can be skipped outright — isn't that a relief, one big chunk gone right away. Part 5 can mostly go too: it is essentially concurrent programming, and its content largely overlaps with my earlier original posts, so I only spent about an hour on it.

In fact you can skip Part 5 entirely; the same material is covered in "Java Concurrency in Practice", which is the next book I plan to go through. So that's another sizable chunk out of the way.

As for Part 4, read it selectively. I personally focused on the just-in-time (JIT) compiler, because it matters and does come up in some interviews. The other chapters I only skimmed for the general idea: they aren't what I need right now, because even if you read them you won't remember them, you won't use them in practice at this stage, and interview questions about them are close to zero.

Then there is Part 3, which is also selective reading. Take Chapter 6, for example: it walks through the structure of a class file straight from the binary. Are you really going to memorize all of that, every last bytecode instruction? Of course not — and don't let it wreck your reading mood and confidence.

For instance, the table of contents of Chapter 6 in Part 3 takes you through the class file format position by position: what does each field mean? On the other hand, the class loading subsystem in Part 3 is very important — the class loading process, the classic three-tier class loaders, custom class loaders, and the parent delegation model are exactly what we need to focus on. When I interviewed at ByteDance a while back, I was asked about the class loading process and parent delegation, so this is definitely required reading. But if your goal is the same as mine — tuning — then the real priority is Part 2.

Next up is Part 2, JVM memory management: the memory model, the runtime data areas, and the monitoring tools used to observe JVM memory. The memory model has been written up by plenty of technical bloggers already, so for most readers this part should feel like a review:

Chapter 2 focuses on the Java runtime data areas, Java objects, and OOM exceptions:

  • Program counter
  • Java virtual machine stack
  • Native method stack
  • Java heap
  • Method area
  • Runtime constant pool
  • Direct memory

Which of these areas are thread-private, which are shared across threads, which can throw an OOM, and what causes it?

Direct memory is not part of the virtual machine's runtime data areas; it is native memory. OOM problems here are tied to how much native memory is available, and you can cap its size with the -XX:MaxDirectMemorySize parameter.

A well-known Java framework that leans on this kind of memory is Netty: under the hood it uses NIO, the channel- and buffer-based I/O model, and it manipulates direct memory through DirectByteBuffer objects.
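As a quick illustration (a minimal sketch of my own, not from the book): the hypothetical class below keeps allocating direct buffers until the -XX:MaxDirectMemorySize cap is hit, which ends in an OutOfMemoryError for direct memory rather than for the Java heap.

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

/**
 * VM Args: -Xmx64m -XX:MaxDirectMemorySize=16m
 * Illustrative only: keeps allocating direct (off-heap) buffers
 * until the direct-memory limit is exceeded.
 */
public class DirectMemoryOOM {
    public static void main(String[] args) {
        List<ByteBuffer> buffers = new ArrayList<>();
        while (true) {
            // Each allocation takes 1 MB of native memory, outside the Java heap
            buffers.add(ByteBuffer.allocateDirect(1024 * 1024));
        }
    }
}
```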

Next come three topics: object creation, object memory layout, and object access. How does Java conjure an object out of a single new keyword? The creation part mainly involves concepts such as pointer collision (bump-the-pointer allocation) and the Thread Local Allocation Buffer (TLAB).

How does an object get its memory? How is contention avoided when multiple threads allocate at the same time? Where is the memory allocated (heap or stack)? This section basically revolves around those questions, as the sketch below illustrates.
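If you want to feel the effect of the TLAB for yourself, here is a rough, hypothetical experiment of my own (not from the book): several threads allocate a pile of small objects; run it once with the HotSpot default (TLAB on) and once with -XX:-UseTLAB, and compare the times.

```java
/**
 * Run once as-is (TLAB enabled by default on HotSpot),
 * then again with: -XX:-UseTLAB
 * to see the cost of contended allocation without thread-local buffers.
 */
public class TlabDemo {
    static final int THREADS = 8;
    static final int ALLOCS_PER_THREAD = 10_000_000;
    static volatile Object sink; // keeps the JIT from optimizing the allocations away

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread[] workers = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < ALLOCS_PER_THREAD; j++) {
                    // Small, short-lived objects: exactly what TLABs are designed for
                    sink = new byte[16];
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }
        System.out.println("Allocation took " + (System.currentTimeMillis() - start) + " ms");
    }
}
```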

Once the object has been created and its memory allocated, how is it laid out in memory? You may have seen an article along the lines of "Interviewer at XX company, round XX: do you know how big a Java object actually is?"

From that you can learn how an object is divided into parts (object header, instance data, and alignment padding), what lives in the Mark Word, and so on — see the sketch below.
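If you want to see the layout for yourself, one option (my suggestion, not something the book uses) is the OpenJDK JOL library; assuming jol-core is on the classpath, a couple of lines will print the header, instance fields, and padding of any object.

```java
import org.openjdk.jol.info.ClassLayout;

/**
 * Requires the OpenJDK JOL dependency, e.g. org.openjdk.jol:jol-core.
 * Prints the object header (Mark Word + class pointer), instance data and padding.
 */
public class ObjectLayoutDemo {
    static class Point {
        int x;
        int y;
        boolean visible;
    }

    public static void main(String[] args) {
        Point p = new Point();
        // toPrintable() shows the offsets, sizes and header bytes of this instance
        System.out.println(ClassLayout.parseInstance(p).toPrintable());
    }
}
```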

Finally there is object access: handles versus direct pointers. What is the difference between the two, and what are the pros and cons of each? Understand these questions and you have mastered more than 90% of this chapter.

The content above is mostly theory, and theory always serves practice; the OOM exceptions are the practical side. Of the runtime data areas introduced earlier, the program counter is the only one that never throws an OOM — every other area can.

The Java heap is the most important part of the JVM to tune, and the hardest; basically all of the most complex tuning parameters lean toward the Java heap. For OOM in the runtime data areas other than the heap, the most effective remedy is usually just to increase the corresponding memory appropriately.

Tuning the Java heap is not as simple as making it bigger. Sometimes a bigger heap backfires: yes, a larger heap lets more young-generation objects be allocated before a collection, but a single GC then has more work to do, the collection takes longer, and the pauses the system experiences grow longer.

For applications with heavy user interaction and tight response-time requirements, that is unacceptable. Next thing you know the boss is at your desk: "Kid, wasn't it you who tuned the JVM last time? Now it freezes even longer and users complain every day. Fix it tonight, or don't bother coming in tomorrow."

So the Java heap is both the key and the difficulty of tuning. Start by getting familiar with the OOM exceptions of each area and what the printed stack trace generally looks like — for example, a heap OOM looks like this:

```
java.lang.OutOfMemoryError: Java heap space
Dumping heap to java_pid3404.hprof ...
Heap dump file created [22045981 bytes in 0.663 secs]
```

In the stack trace you can see the name "Java heap space". So what scenarios cause each area to overflow? For example: recursion that is too deep, methods that are too long, large objects in the program, memory leaks, badly configured thread pools, overuse of dynamic proxies, too many constants, too much class metadata, and so on.
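The heap case is easy to reproduce. A minimal sketch along the lines of the book's demo (the class name and sizes here are mine): pin the heap small, keep references alive so nothing can be collected, and ask the JVM to dump the heap when it gives up.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * VM Args: -Xms20m -Xmx20m -XX:+HeapDumpOnOutOfMemoryError
 * Keeps strong references to every allocation so the GC can free nothing,
 * which ends in "java.lang.OutOfMemoryError: Java heap space" plus a .hprof dump.
 */
public class HeapOOM {
    public static void main(String[] args) {
        List<byte[]> hoard = new ArrayList<>();
        while (true) {
            // 1 MB at a time; the list keeps every chunk reachable
            hoard.add(new byte[1024 * 1024]);
        }
    }
}
```

The resulting .hprof file can then be opened in a tool such as VisualVM or Eclipse MAT to tell a genuine leak from a heap that is simply too small.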

In fact, before any tuning there is something else worth saying: maybe 70% or 80% of problems can be solved by optimizing the code. If a method is too long, break it up; if there is a large object, reconsider how and when it is created.

Sometimes, though, problems sneak past you — large objects in particular (or simply too many objects at the same time). You may be very careful during development and keep every object small, yet the moment production traffic arrives an OOM shows up, which usually just means the traffic exercised some function you overlooked.

A typical example: an admin/PC side almost always has an export feature — export the current page and export everything. With small data volumes there is no problem, but with a large data set plus real traffic, a huge number of Excel objects get created at the same moment, and an OOM slips out before you know it.

Another example: a consumer-facing (C-end) home page is usually cached to improve user experience, so when the number of cached entries grows too large, the objects pulled out of the cache at the same time can also become too many or too big.

For this part, my earlier mind map also has a very detailed summary — what follows is only a piece of it, and it is well worth thinking through. The next chapter is about GC, and the area being tuned is again the Java heap: how to decide whether an object is alive (reference counting, reachability analysis), the three basic garbage collection algorithms (mark-sweep, copying, mark-compact), the generational model (young generation, old generation), and the classic garbage collectors, how they are paired, and the scenarios they suit. Understand these and you are unbeatable on this topic. Of these, the classic collectors are probably the least familiar; the rest — the collection algorithms, the generational model, deciding whether an object is alive — have been written up by plenty of technical bloggers, and I also have an earlier original post on GC algorithms you can refer to.

Now let's focus on the garbage collectors. As the JDK keeps evolving, as servers have gone from single-core to multi-core, and as memory has grown, the collectors have become smarter and smarter, and each one has its own applicable scenarios.

The common pairings of these collectors are shown in the figure below. The pairings that stuck with me most are:

  • Serial and Serial Old
  • Parallel Scavenge and Parallel Old (PS + PO)
  • ParNew and CMS
  • G1

Each garbage collector has its own usage scenarios, advantages and disadvantages, so read this section with the following questions in mind (a small verification sketch follows the list):

  • Which collectors work on the young generation, and which on the old generation?
  • What are the advantages and disadvantages of each garbage collector?
  • What are the usage scenarios for each garbage collector?
  • How does each garbage collector work?
  • Which JVM parameters are associated with each garbage collector?
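One quick way to connect the flags to the collectors (my own suggestion, not the book's) is to ask the JVM which collectors it is actually running. Assuming a HotSpot JDK, the snippet below prints the active collector names; run it with -XX:+UseSerialGC, -XX:+UseParallelGC, -XX:+UseConcMarkSweepGC (deprecated in JDK 9, removed in JDK 14) or -XX:+UseG1GC and compare the output.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

/**
 * Prints the garbage collectors the current JVM is using.
 * Try it with different -XX:+Use...GC flags to see the classic pairings.
 */
public class ShowCollectors {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}
```

With the Parallel collectors, for example, you should see one bean for the young generation and one for the old generation listed separately.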

To anchor the basics: the Serial collector of the single-core era has one major problem — the stop-the-world (STW) pauses it causes — plus the efficiency limit of single-threaded collection; yet for a single-core server it was undoubtedly the best choice.

Then came the multi-core era with Parallel Scavenge and Parallel Old (PS + PO): collection went from a single thread to multiple threads, which is certainly more time-efficient than before, and the book also gives advice on how many GC threads to configure on a multi-core machine.

Then there is the classic CMS. CMS has four phases: initial mark, concurrent mark, remark, and concurrent sweep. How does each phase work, which phases run concurrently with user threads, how does CMS minimize STW pauses, what range of heap sizes is CMS generally suited to, and so on.

Get clear answers to those questions — they are worth studying carefully. The last one is G1; the classic collectors before it all follow the generational model.

The generational model has its own problems, though. The classic generational collectors work against the whole heap (or a whole generation), and as server hardware improves, memory keeps growing — heaps of several gigabytes or over ten gigabytes are now routine. That makes a single collection take longer and longer, and the pauses grow with it, because STW cannot be eliminated at present, only reduced as far as possible.

G1 weakens the generational model and divides the whole heap into Regions. A Region is not permanently tied to one generation (young or old). During collection, G1 records the collection time of each Region, so it can estimate the cost of collecting any given set of Regions; that lets G1 do its garbage collection within a pause time the user finds acceptable (STW, typically 100-300 ms) instead of collecting the entire heap.
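To play with this (a sketch under my own assumed flags and numbers, not the book's example): enable G1, set a pause-time goal, and watch the GC log show collections of a subset of Regions instead of the whole heap.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * VM Args (JDK 8): -Xmx2g -XX:+UseG1GC -XX:MaxGCPauseMillis=100 -XX:+PrintGCDetails
 * On JDK 9+ use -Xlog:gc* instead of -XX:+PrintGCDetails.
 * Produces a mix of short-lived and longer-lived objects so the GC log
 * shows G1 collecting only part of the heap per cycle within the pause goal.
 */
public class G1PauseDemo {
    static volatile Object sink; // keeps the JIT from optimizing allocations away

    public static void main(String[] args) {
        List<byte[]> retained = new ArrayList<>();
        for (int i = 0; i < 2_000_000; i++) {
            byte[] garbage = new byte[64 * 1024];   // dies almost immediately
            if (i % 100 == 0) {
                retained.add(new byte[256 * 1024]); // survives for a while
            }
            if (retained.size() > 1024) {
                retained.subList(0, 512).clear();   // let older data die too
            }
            sink = garbage;
        }
        System.out.println("done, retained " + retained.size() + " chunks");
    }
}
```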

Those are G1's advantages — it solves real problems of the traditional collectors — but G1 still has issues of its own, such as floating garbage, which it has not eliminated, and the possibility of the system appearing to hang in extreme cases.

These are some of the reasons G1 did not become the default garbage collector sooner (it is the default only from JDK 9 onward). All of these issues are covered in this part of the book and are worth careful study.

There are also two low-latency collectors (Shenandoah and ZGC) that we don't really need at this stage; just note the scenarios they are meant for and the Java heap sizes they target, and you can otherwise skip them:

Of course, if you are interested, read them carefully too — they were written by top engineers, after all, and whatever you absorb is pure gain.

Having familiarised myself with the garbage collection principles above, here are some of the most common JVM tuning parameters from my own mind map. They are all in the book as well, just a bit scattered; I have basically pulled them together here.
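As a hedged sample of that summary (the grouping and comments are my own shorthand, not a quote from the book or the map), the parameters I reach for most often look like this:

```
-Xms / -Xmx                                # initial / maximum heap size
-Xmn                                       # young generation size
-Xss                                       # per-thread stack size
-XX:SurvivorRatio                          # Eden : Survivor ratio in the young generation
-XX:MetaspaceSize / -XX:MaxMetaspaceSize   # metaspace sizing (JDK 8+)
-XX:MaxDirectMemorySize                    # cap on direct (off-heap) memory
-XX:+HeapDumpOnOutOfMemoryError            # dump the heap automatically on OOM
-XX:+PrintGCDetails / -Xlog:gc*            # GC logging on JDK 8 / JDK 9+
-XX:MaxGCPauseMillis                       # pause-time goal for G1
```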

The next part is hands-on and covers several scenarios and principles: how object memory is allocated, large objects, long-surviving objects, object age, and the space allocation guarantee. This part deserves careful chewing — it is the essence.

As for tools, I use VisualVM, which ships with the JDK; it can be wired into IDEA, and plug-ins such as the GC plug-in can be installed. Very convenient, but it is usually used locally or in a test environment. For online work I recommend Alibaba's Arthas — a Baidu search turns up plenty of tutorials.

There is also a more primitive way to troubleshoot: the JDK command-line tools run from the Linux shell, such as jps, jstat, jinfo, jmap and jstack, all explained in detail in the book:

When it comes to these lower-level commands, there are two main exam points, which are also the two practical scenarios: analyzing a CPU spike and analyzing an OOM. The third, analyzing general memory shortage, is comparatively easy.
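As a rough cheat sheet (my own habit, not a prescription from the book; the PID 12345, thread ID 12367 and file names are placeholders), the CPU-spike and OOM analyses usually go something like this:

```
# --- CPU spike ---
top                                          # find the Java process with high CPU, e.g. PID 12345
top -Hp 12345                                # find the hottest thread inside that process, e.g. 12367
printf '%x\n' 12367                          # convert the thread ID to hex, e.g. 304f
jstack 12345 | grep -A 20 '0x304f'           # locate that thread's stack in the thread dump

# --- OOM / memory ---
jstat -gcutil 12345 1000                     # watch GC frequency and space usage once per second
jmap -dump:format=b,file=heap.hprof 12345    # take a heap dump for VisualVM / MAT
jinfo -flags 12345                           # check which JVM flags the process is running with
```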

If you can use those commands and analysis tools to work through the problems above on your own, then you are unbeatable — a tuning expert — and the boss will have to pay up to keep you.

You can also take a look at my own mind map for this part; it covers pretty much the same ground, though it is not complete, because some of it has not been sorted out yet:

Chapter 5, I think, can be skipped. The author's experience is probably different from ours, and I have rarely run into those exact hands-on scenarios, so I skipped it outright.

If you have thoroughly digested everything above, what is left is basically pure theory and should not be hard for you:

In Part 3, go straight to the virtual machine's class loading mechanism: the class loading process, the classic three-tier class loaders, custom class loaders, parent delegation, and so on — that is the point of this section. You can follow the code to implement your own class loader, load your own classes, or break the parent delegation model, along the lines of the sketch below.
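A minimal custom class loader looks roughly like this (a sketch with made-up paths and class names, not the book's exact example). Note that overriding findClass keeps parent delegation intact; actually breaking delegation would mean overriding loadClass itself.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

/**
 * Loads classes from a directory of .class files.
 * Overriding findClass() preserves the parent delegation model:
 * the parent loaders get the first chance, and we only load what they can't.
 */
public class DiskClassLoader extends ClassLoader {
    private final Path classDir;

    public DiskClassLoader(Path classDir) {
        this.classDir = classDir;
    }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        Path classFile = classDir.resolve(name.replace('.', '/') + ".class");
        try {
            byte[] bytes = Files.readAllBytes(classFile);
            // defineClass turns the raw bytes into a Class object owned by this loader
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }

    public static void main(String[] args) throws Exception {
        DiskClassLoader loader = new DiskClassLoader(Paths.get("/tmp/classes"));
        Class<?> clazz = loader.loadClass("com.example.Hello"); // hypothetical class
        System.out.println(clazz + " loaded by " + clazz.getClassLoader());
    }
}
```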

OK, at this point I have basically absorbed what I personally needed — I read the book purely with tuning in mind. To be clear, that is not to say the parts I skipped are poorly written; not at all. It is just that when you learn with a purpose, not everything in a very detailed book is what you need, at least not at this stage. I simply wanted to help you pull out the parts we need and share my own way of using the book.

Limited by space, I will keep sharing my further study and understanding of the JVM. If you need the e-book of "In-depth Understanding of the Java Virtual Machine, 3rd Edition", you can add my WeChat: Abc730500468 — I have marked the key parts of the book, and we can swap notes on how to read it and what techniques work. All right, I'm Lidu, and I'll see you in the next issue.