This is the 24th day of my participation in the August More Text Challenge.

🔉 Introduction

There is too much content for one post, so I have split it into parts; the main reason is that I simply can't grind it all out in one sitting, and I won't pretend otherwise. Enough small talk, let's get into the topic 😛.

🌲 Garbage First collector

❤️ Design details

– Cross-generation references

With the Java heap split into many independent Regions, how do cross-generation references work? (PS: portal 🚪 [cross-generation references].) If you glanced through that portal, you already know the answer: remembered sets, which let the collector avoid scanning the whole heap. The actual design, however, is much more complicated than it sounds, because in G1 every Region maintains its own remembered set recording the pointers from other Regions into itself. You can think of this data structure as a hash table whose Key is the starting address of the referencing Region and whose Value is a collection of card-table index numbers (PS: portal 🚪 [card table]). With this design a Region knows not only whom it points to but also who points to it, which is more complex to implement. G1 also divides the heap into far more Regions than earlier collectors did, so its remembered sets consume more memory: as a rule of thumb, G1 needs an extra 10% to 20% of the Java heap just for this bookkeeping.
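To make the idea concrete, here is a toy Java sketch of such a "points-into-me" remembered set. The class name, the card size, and the use of plain `long` values as addresses are all illustrative assumptions, not HotSpot internals:

```java
import java.util.*;

// Toy sketch of a G1-style remembered set: each Region keeps a table of
// "who points into me". Key = start address of the referencing Region,
// Value = indices of the cards in that Region that may contain pointers
// into this Region. Names and granularity are illustrative only.
public class RememberedSetSketch {
    static final int CARD_SIZE = 512; // bytes covered by one card (a typical HotSpot value)

    // this Region's remembered set: referencing-Region start -> card indices
    private final Map<Long, Set<Integer>> rset = new HashMap<>();

    // Record that a field at fromAddr (inside the Region starting at
    // fromRegionStart) now references an object in this Region.
    public void recordReference(long fromRegionStart, long fromAddr) {
        int cardIndex = (int) ((fromAddr - fromRegionStart) / CARD_SIZE);
        rset.computeIfAbsent(fromRegionStart, k -> new HashSet<>()).add(cardIndex);
    }

    // During a collection, only these cards need scanning for incoming
    // pointers -- no full-heap scan is required.
    public Set<Integer> cardsFrom(long fromRegionStart) {
        return rset.getOrDefault(fromRegionStart, Collections.emptySet());
    }

    public static void main(String[] args) {
        RememberedSetSketch rs = new RememberedSetSketch();
        rs.recordReference(0x100000L, 0x100000L + 100);  // falls in card 0
        rs.recordReference(0x100000L, 0x100000L + 1000); // falls in card 1
        System.out.println(rs.cardsFrom(0x100000L));     // the two dirty card indices
    }
}
```

The extra memory cost mentioned above comes exactly from every Region carrying one of these tables.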

– Concurrent marking

As mentioned earlier, when user threads change references during marking, the collector must make sure its view of the original object graph is not corrupted, or objects will be marked incorrectly (PS: portal 🚪). CMS solves this with incremental update, while G1 uses the snapshot-at-the-beginning (SATB) algorithm. In addition, the program keeps running and allocating memory for new objects, so garbage collection inevitably affects user threads: if collection is not fast enough to keep up with allocation, G1 is forced to freeze the user threads, resulting in a Full GC and a long pause.
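The SATB idea can be sketched with a hypothetical pre-write barrier: before a reference field is overwritten while marking is active, the *old* value is logged, so the marker can still finish tracing the object graph as it looked when marking began. All names here are made up for illustration; in HotSpot the barrier is emitted by the JIT compiler, not written as plain Java:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Minimal sketch of snapshot-at-the-beginning (SATB): log the old referent
// before a reference write, so nothing reachable in the starting snapshot
// is lost to the concurrent marker. Illustrative names, not HotSpot code.
public class SatbSketch {
    static class Node { Node next; }

    static boolean markingActive = true;                     // true during concurrent marking
    static final Deque<Node> satbQueue = new ArrayDeque<>(); // per-thread buffers in real G1

    // Pre-write barrier: remember the value about to be overwritten.
    static void writeNext(Node owner, Node newValue) {
        if (markingActive && owner.next != null) {
            satbQueue.push(owner.next); // old referent stays visible to the marker
        }
        owner.next = newValue;
    }

    public static void main(String[] args) {
        Node a = new Node(), b = new Node();
        a.next = b;
        writeNext(a, null);                   // b is unlinked, but its old reference is logged
        System.out.println(satbQueue.size()); // prints 1
    }
}
```

These logged records are exactly what the final-marking pause later drains.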

– Pause prediction

So how do you build a reliable pause-prediction model? The user's -XX:MaxGCPauseMillis setting is only an expected value; how does G1 actually honor it? Here we have to mention the decaying average. Unlike an ordinary average, which weighs the entire history equally, a decaying average emphasizes recent data, so it reacts to new samples much faster. During operation, G1 records the measurable cost of each Region (for example, how long it takes to collect, or how many dirty cards its remembered set holds) and analyzes those statistics to obtain averages, standard deviations, confidence levels, and so on. The newer the statistics, the more they say about a Region's current collection value, and based on this G1 predicts which Regions will yield the most reclaimed memory within the expected pause time.
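A decaying average is easy to sketch. The alpha weight and the sample values below are illustrative, not G1's actual internal constants:

```java
// Sketch of a decaying (exponentially weighted) average like the one G1
// uses for pause prediction: each new sample is blended into the running
// estimate with weight alpha, so recent samples count more than old ones.
public class DecayingAverage {
    private final double alpha; // weight given to each new sample (illustrative)
    private double avg;
    private boolean hasSample = false;

    DecayingAverage(double alpha) { this.alpha = alpha; }

    void addSample(double value) {
        if (!hasSample) { avg = value; hasSample = true; }
        else            { avg = alpha * value + (1 - alpha) * avg; }
    }

    double get() { return avg; }

    public static void main(String[] args) {
        DecayingAverage regionCopyTimeMs = new DecayingAverage(0.3);
        // region copy-time samples in ms; the recent spike pulls the estimate up
        for (double ms : new double[]{10, 10, 10, 40}) regionCopyTimeMs.addSample(ms);
        System.out.printf("predicted cost: %.1f ms%n", regionCopyTimeMs.get()); // 19.0 ms
    }
}
```

Note how the estimate lands at 19.0 ms while a plain mean of the same samples would give 17.5 ms: the latest spike weighs more, which is exactly what you want when predicting the cost of the *next* pause.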

❤️ Execution process

G1 runs in four steps:

  • Initial Marking
  • Concurrent Marking
  • Final Marking
  • Live Data Counting and Evacuation

Initial marking

Initial marking marks the objects directly reachable from GC Roots and then updates the value of the TAMS pointer (PS: G1 keeps a TAMS, Top at Mark Start, pointer for each Region). This phase is very short and requires pausing user threads, but it piggybacks on a Minor GC pause, and it sets the stage for the next phase: while user threads run concurrently, new objects can be allocated correctly in the available Regions.
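The role of TAMS can be sketched as follows, with plain `long` values standing in for heap addresses (purely illustrative; real Regions use actual allocation pointers):

```java
// Sketch of the TAMS (Top at Mark Start) idea: when a marking cycle begins,
// each Region records its current allocation top. Anything allocated above
// that address during concurrent marking is treated as implicitly live, so
// newly created objects never have to be traced mid-cycle.
public class TamsSketch {
    long top = 0;  // current allocation pointer in this Region
    long tams = 0; // top-at-mark-start, fixed during the initial-mark pause

    long allocate(long size) { long addr = top; top += size; return addr; }

    void beginMarking() { tams = top; } // done in the (brief) initial-mark pause

    // During marking: addresses at or above TAMS are implicitly live.
    boolean implicitlyLive(long addr) { return addr >= tams; }

    public static void main(String[] args) {
        TamsSketch region = new TamsSketch();
        long before = region.allocate(64);   // allocated before marking started
        region.beginMarking();
        long during = region.allocate(64);   // allocated while marking runs
        System.out.println(region.implicitlyLive(before)); // false -> must be traced
        System.out.println(region.implicitlyLive(during)); // true  -> assumed alive
    }
}
```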

Concurrent marking

Starting from GC Roots, a reachability analysis of the objects in the heap is performed to find the garbage. This stage is time-consuming but can run concurrently with user threads. After the object-graph scan completes, the reference changes recorded by SATB during this period must be reprocessed.

Final marking

A short pause of user threads to process the small number of SATB records left over after the concurrent phase ends.

Live data counting and evacuation

G1 selects a set of Regions to collect, copies the surviving objects from the old Regions into new Regions, and then reclaims the old Regions entirely. Because this involves moving objects, user threads must be paused, and the work is carried out by multiple collector threads in parallel.
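A toy sketch of the copy-and-clear idea, with strings standing in for objects (real evacuation also updates every incoming pointer via the remembered sets, which is omitted here):

```java
import java.util.*;

// Toy sketch of evacuation: live objects from a chosen Region are copied
// into a fresh Region, and the old Region is then reclaimed wholesale.
public class EvacuationSketch {
    public static void main(String[] args) {
        List<String> oldRegion = new ArrayList<>(List.of("liveA", "deadB", "liveC"));
        Set<String> live = Set.of("liveA", "liveC"); // result of the marking phases

        List<String> newRegion = new ArrayList<>();
        for (String obj : oldRegion) {
            if (live.contains(obj)) newRegion.add(obj); // copy only survivors
        }
        oldRegion.clear(); // the whole old Region becomes free space at once

        System.out.println(newRegion); // the survivors, compacted together
    }
}
```

Copying the survivors out and freeing the Region as a unit is what lets G1 reclaim space without fragmenting it.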

🌲 Summary

As you can see, all of G1's phases except concurrent marking involve pausing user threads, which means G1 is not purely chasing low latency; its goal is high throughput with controllable latency.

Without a doubt, letting the user specify an expected pause time is a powerful feature of the G1 collector (PS: my brain stopped turning while writing this 😴). But the expectation cannot be set arbitrarily low, because copying objects still requires freezing the threads. The default is 200 ms; if you set it too short, each collection can only cover a small part of the heap, the collector may fail to keep up with the allocator, and eventually a Full GC follows, which severely hurts performance. A value of one to two hundred milliseconds is generally reasonable.

The reason G1 is a milestone is that, starting with G1, advanced garbage collectors shifted their design goal from cleaning the whole heap in one go to keeping up with the memory allocation rate: as long as collection can keep pace with allocation, everything works perfectly 😪.

📝 digression

The second source of religion is social emotion. Mothers and fathers, as well as the leaders of the larger human community, all die and make mistakes. People’s desire for guidance, love and help led to the formation of a social or moral concept of God. The providence of God protects, decides, rewards and punishes. God loves and nurtures tribal life or human life, or even life itself, according to the different levels of human beings. He was the comforter of men in their misfortunes and unfulfilled hopes, and the protector of the souls of the dead. This is the social or moral concept of God.

Oh oh oh 😮! So that’s how God came to us.