The outline

First, garbage collection algorithms

A garbage collector is a concrete implementation of one or more garbage collection algorithms, and those algorithms are built on generational collection theory. The heap is generally divided into a young generation and an old generation; the point of the split is to separate objects with different life cycles. Objects that live and die quickly stay in the young generation, while long-lived objects are stored in the old generation, so each area can be collected with the strategy that suits it and the overall process becomes more efficient. For the young generation, where most objects die young, the copy algorithm collects very efficiently; for long-lived objects the copy algorithm wastes too much memory, so the old generation chooses between the mark-sweep algorithm and the mark-compact algorithm. For its typical workload, the copy algorithm can be more than ten times faster than the other two.
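
For reference, the generation sizes themselves are controlled by ordinary JVM flags; the values below simply mirror the heap sizes used in the CMS example later in this article and are illustrative, not recommendations.

-Xms4096M -Xmx4096M     (total heap size)
-Xmn1536M               (young generation size)
-XX:SurvivorRatio=8     (Eden : single Survivor space ratio; 8 is the default)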

1. Mark-copy algorithm

The mark-copy algorithm divides the memory region into two halves: a free half and a used half. During GC, the live objects are copied from the used half into the free half, the rest of the used half is cleared in one go, and the two halves swap roles for the next cycle. Collection is very efficient, but memory utilization is low because only half of the region is usable at any time.
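
A minimal sketch of the idea, not HotSpot code, using a toy object model in which liveness is assumed to have been decided by a prior tracing step; names like Obj and copy_collect are illustrative only.

// Toy semispace copier: evacuate live objects from from-space to to-space,
// then discard the whole from-space at once.
#include <cstddef>
#include <vector>

struct Obj {
    bool live = false;       // assume a prior marking/tracing step set this
    Obj* forwarded = nullptr; // where the object was copied to, if anywhere
};

std::vector<Obj*> copy_collect(std::vector<Obj*>& from_space) {
    std::vector<Obj*> to_space;
    for (Obj* obj : from_space) {
        if (obj->live) {
            obj->forwarded = new Obj(*obj);    // evacuate the survivor
            to_space.push_back(obj->forwarded);
        }
    }
    // References into from-space would now be fixed up via the forwarding
    // pointers; after that the whole from-space is reclaimed in one step.
    for (Obj* obj : from_space) delete obj;
    from_space.clear();
    return to_space;                           // to-space is the new used half
}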

2. Mark-sweep algorithm

The mark-sweep algorithm marks the surviving objects and then directly frees everything that is unmarked, in place. It is simple and direct and can use all of the memory, but it leaves a lot of memory fragmentation.
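
A minimal sketch, not HotSpot code, with the same kind of toy object model; the mark phase traces from a given root and the sweep phase frees whatever stayed unmarked.

#include <vector>

struct Obj {
    bool marked = false;
    std::vector<Obj*> refs;   // outgoing references
};

void mark(Obj* obj) {                     // mark phase: depth-first trace
    if (obj == nullptr || obj->marked) return;
    obj->marked = true;
    for (Obj* ref : obj->refs) mark(ref); // call mark() once per GC root
}

void sweep(std::vector<Obj*>& heap) {     // sweep phase: free unmarked objects
    std::vector<Obj*> survivors;
    for (Obj* obj : heap) {
        if (obj->marked) {
            obj->marked = false;          // reset for the next cycle
            survivors.push_back(obj);     // survivor keeps its old address,
        } else {                          // which is what causes fragmentation
            delete obj;                   // unreachable: reclaim in place
        }
    }
    heap.swap(survivors);
}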

3. Mark-compact algorithm

The mark-compact algorithm is similar to mark-sweep, but instead of freeing garbage in place it slides the surviving objects toward one end of the region and then clears everything beyond the boundary of the last survivor, which keeps the remaining free memory contiguous.
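
A minimal sketch of the sliding compaction, not HotSpot code, on a toy heap of fixed-size slots; a real collector would also fix up references to the moved objects, which is omitted here.

#include <cstddef>
#include <vector>

struct Slot {
    bool live = false;   // assume marking already ran
    char data[64];       // fixed-size payload to keep the sketch simple
};

// Slides live slots toward index 0 and returns the index of the first free
// slot; everything from that index onward is one contiguous free region.
std::size_t compact(std::vector<Slot>& heap) {
    std::size_t next_free = 0;
    for (std::size_t i = 0; i < heap.size(); ++i) {
        if (heap[i].live) {
            if (i != next_free) {
                heap[next_free] = heap[i];   // slide the survivor forward
            }
            ++next_free;
        }
    }
    return next_free;
}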

Second, garbage collectors

1. Serial garbage collector

The Serial garbage collector is arguably the oldest garbage collector. As the name implies, it collects garbage serially: for the whole duration of a collection it stops the world (STW) and suspends every user thread. Its young-generation collection uses the mark-copy algorithm, while its old-generation counterpart, Serial Old, uses the mark-compact algorithm. Because the entire GC runs on a single thread, the STW pause can be long, which hurts the user experience; on the other hand, with no thread coordination overhead the collection itself is simple and efficient, so its single-thread collection efficiency is high. Serial Old is the old-generation version of the Serial collector; in JDK 1.5 and earlier it was mainly paired with the Parallel young-generation collector, and it also serves as the fallback for the CMS garbage collector. Enable parameters:

-XX:+UseSerialGC (enables Serial for the young generation together with Serial Old for the old generation)

2. Parallel collector

The Parallel collector is the multithreaded version of the Serial collector: it does the same work with multiple GC threads. The default number of threads is the number of CPU cores; it can be changed with -XX:ParallelGCThreads, although changing it is generally not recommended. Parallel also provides an old-generation collector, Parallel Old, to collect garbage in the old generation. As with Serial, the young generation uses the mark-copy algorithm and the old generation uses the mark-compact algorithm.

-XX:+UseParallelGC (young generation), -XX:+UseParallelOldGC (old generation)

Parallel focuses on CPU throughput, while CMS focuses on the user experience, that is, on keeping the maximum STW pause time short. The default garbage collector combination in JDK 1.8 is Parallel Scavenge for the young generation plus Parallel Old for the old generation.
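
As an illustration only, a launch command that enables the Parallel collectors explicitly might look like the line below; app.jar and the thread count of 4 are placeholders, not recommendations.

java -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:ParallelGCThreads=4 -jar app.jar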

3. ParNew

ParNew is not fundamentally different from Parallel. It is mainly a young-generation garbage collector provided to pair with CMS, which only collects the old generation, so ParNew exists only in a young-generation version and collects it the same way Parallel does. Currently only Serial and ParNew can work together with CMS for young-generation collection.
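
For reference, on JDK 8 enabling CMS selects ParNew for the young generation by default; it can also be requested explicitly with the flag below (the flag was deprecated in JDK 9 and removed in later releases).

-XX:+UseParNewGC -XX:+UseConcMarkSweepGC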

4. CMS

The CMS garbage collector is an old-generation garbage collector whose primary goal is to improve the user experience by shortening STW time as much as possible. Both G1 and ZGC are further optimizations built on the same idea and process. Its garbage collection is divided into the following phases:

  1. Initial mark. This phase is STW. It only marks the objects directly referenced by GC Roots and does not trace the rest of the reference graph, so it is very fast. It is single-threaded in JDK 1.7 and earlier and parallel from JDK 1.8 onward, which can be controlled with a parameter.
  2. Concurrent mark. This phase runs together with the user threads without STW and traces all references. Because the user threads keep running, the reference state of already-marked objects can change, which can cause live objects to be missed or garbage to be retained; CMS handles this with tri-color marking.
  3. Remark. This STW phase fixes up the changes produced during concurrent marking. There are two correction schemes built on tri-color marking: incremental update and the original snapshot (SATB). CMS uses incremental update while G1 uses the original snapshot; the pros and cons are explained in the tri-color section below. This phase runs in parallel.
  4. Concurrent sweep. The sweep is performed together with the user threads without STW. Objects newly allocated during this phase are marked black and are not reclaimed.
  5. Concurrent reset. The GC marking data is reset.

The advantages of CMS are low pauses and concurrent collection. The disadvantages are also obvious:

  • It is sensitive to CPU resources and competes with the application for CPU
  • It cannot handle floating garbage; garbage produced during the concurrent mark and concurrent sweep phases can only be left for the next GC
  • Because it uses the mark-sweep algorithm, it produces a lot of memory fragmentation
  • The collection process is not deterministic: if a large number of objects are created during the concurrent mark and concurrent sweep phases, the old generation can fill up before the collection finishes and a concurrent mode failure occurs, at which point CMS falls back to a stop-the-world collection with Serial Old

CMS Common Parameters

-XX:+UseConcMarkSweepGC: enable CMS
-XX:ConcGCThreads: number of concurrent GC threads
-XX:+UseCMSCompactAtFullCollection: compact (defragment) the old generation after a FullGC
-XX:CMSFullGCsBeforeCompaction: how many FullGCs to wait between compactions; the default is 0, meaning compaction happens after every FullGC
-XX:CMSInitiatingOccupancyFraction: old-generation occupancy percentage that triggers a CMS collection (the default is 92)
-XX:+UseCMSInitiatingOccupancyOnly: always use the configured threshold (the -XX:CMSInitiatingOccupancyFraction value); if this is not set, the JVM only uses the configured value for the first collection and adjusts it automatically afterwards
-XX:+CMSScavengeBeforeRemark: run a minor GC before the CMS remark phase to reduce references from the old generation into the young generation and so lower the cost of the remark phase, which usually accounts for around 80% of the total CMS GC time
-XX:+CMSParallelInitialMarkEnabled: perform the initial mark (STW) in parallel
-XX:+CMSParallelRemarkEnabled: perform the remark (STW) in parallel

An example of CMS parameter settings:

-Xmx4096M -Xms4096M -Xmn1536M 
-XX:MaxMetaspaceSize=512M -XX:MetaspaceSize=512M 
-XX:+UseConcMarkSweepGC 
-XX:+UseCMSInitiatingOccupancyOnly 
-XX:CMSInitiatingOccupancyFraction=70 
-XX:+ExplicitGCInvokesConcurrentAndUnloadsClasses 
-XX:+CMSClassUnloadingEnabled 
-XX:+ParallelRefProcEnabled 
-XX:+CMSScavengeBeforeRemark 
-XX:ErrorFile=/home/admin/logs/xelephant/hs_err_pid%p.log 
-Xloggc:/home/admin/logs/xelephant/gc.log 
-XX:HeapDumpPath=/home/admin/logs/xelephant 
-XX:+PrintGCDetails 
-XX:+PrintGCDateStamps 
-XX:+HeapDumpOnOutOfMemoryError

5. Tri-color marking algorithm

During marking, the garbage collector divides objects into three colors:

  • Black: the object has been scanned, and every object it directly references has also been scanned
  • Gray: the object has been scanned, but its direct references have not all been scanned yet
  • White: the object has not been scanned; once marking finishes, every object still marked white is reclaimed
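
The marking itself is a worklist traversal over these three colors. Below is a minimal sketch, not HotSpot code, using a toy object model; the single-threaded loop corresponds to what the concurrent mark phase does, minus the concurrency and the barriers discussed next.

#include <deque>
#include <vector>

enum class Color { WHITE, GRAY, BLACK };

struct Obj {
    Color color = Color::WHITE;
    std::vector<Obj*> refs;   // outgoing references
};

void tricolor_mark(const std::vector<Obj*>& gc_roots) {
    std::deque<Obj*> worklist;                      // holds the gray objects
    for (Obj* root : gc_roots) {                    // initial mark: gray the
        if (root && root->color == Color::WHITE) {  // direct GC Roots references
            root->color = Color::GRAY;
            worklist.push_back(root);
        }
    }
    while (!worklist.empty()) {                     // concurrent-mark analogue
        Obj* obj = worklist.front();
        worklist.pop_front();
        for (Obj* ref : obj->refs) {                // gray each white reference
            if (ref && ref->color == Color::WHITE) {
                ref->color = Color::GRAY;
                worklist.push_back(ref);
            }
        }
        obj->color = Color::BLACK;                  // fully scanned
    }
    // Any object still WHITE here is unreachable and can be reclaimed.
}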

Taking CMS's collection as an example, the tri-color marking process runs as initial mark -> concurrent mark -> remark.

Initial mark. The initial mark only scans the objects directly referenced by GC Roots and turns them gray, leaving the deeper scan to the concurrent mark phase, so this step is extremely fast.

Concurrent mark. Concurrent marking is much more complicated, because objects can be over-marked or missed. Suppose that at some moment during concurrent marking, object B has been scanned and is gray, object C has been scanned, has no further references and is black, and object D has not been scanned yet. If the user thread now sets B's reference to D to null and adds a reference from object A, which is already black, to D, then D is still white, and because A is black it will not be rescanned during remark. D has been missed: it stays white and is swept, so a live object is wrongly collected. Conversely, if B has already been marked black and its reference to C is then set to null, B and C are both black and will not be rescanned, so the now-garbage object C is not collected in this cycle and becomes floating garbage; this is the over-marking case.

For over-marking and missed marking, garbage collectors offer two solutions built on tri-color marking together with read and write barriers on objects: incremental update and the original snapshot (SATB). CMS uses incremental update, while G1 uses the original snapshot.

Incremental update: during concurrent marking, whenever a black object's member variable is updated to reference a new object, the new reference is recorded, and that object is scanned again during remark. You can also think of it as turning an object gray again when a new value is written into it.

Original snapshot: as the name suggests, it preserves the original picture. When a gray object's member-variable reference is updated, the old reference is recorded, and during remark all of these old target objects are marked black and not reclaimed. This can produce floating garbage, but it avoids rescanning from scratch; incremental update and the original snapshot each have their own trade-offs.

Read and write barriers. A read operation reads a member variable, such as b = a.b; a write operation changes an object's member-variable reference, such as a.b = null. The read and write barriers here simply mean inserting some code logic before and after those read and write operations to hook in extra behavior. The plain assignment in C++-style pseudocode looks like this:

/**
 * @param field     the object's member-variable slot being written, e.g. a.b.d
 * @param new_value the new value being stored, e.g. null
 */
void oop_field_store(oop* field, oop new_value) { 
    *field = new_value; // Assign
} 

At this point we can insert a pre-write barrier and a post-write barrier around the assignment, similar to the AOP concept:

void oop_field_store(oop* field, oop new_value) {  
    pre_write_barrier(field);          // Write barrier - pre-write operation
    *field = new_value; 
    post_write_barrier(field, new_value);  // Write barrier - post-write operation
}

Both incremental update and the original snapshot (SATB) can be implemented on top of the pre-write and post-write barriers.

The pre-write barrier implements the original snapshot (SATB): it records the old reference before an object's member-variable reference changes.

void pre_write_barrier(oop* field) {
    oop old_value = *field;    // Get the old value
    remark_set.add(old_value); // Record the original reference object
}

The post-write barrier implements incremental update: it records the new reference after an object's member-variable reference changes.

void post_write_barrier(oop* field, oop new_value) {  
    remark_set.add(new_value);  // Record the newly referenced object
}

The HotSpot source where CMS implements incremental update:


template <class T> inline void oop_store(volatile T* p, oop v) {
  update_barrier_set_pre((T*)p, v);   // cast away volatile
  // Used by release_obj_field_put, so use release_store_ptr.
  oopDesc::release_encode_store_heap_oop(p, v);
  // When using CMS we must mark the card corresponding to p as dirty
  // with release sematics to prevent that CMS sees the dirty card but
  // not the new value v at p due to reordering of the two
  // stores. Note that CMS has a concurrent precleaning phase, where
  // it reads the card table while the Java threads are running.
  update_barrier_set((void*)p, v, true /* release */);    // cast away type
}

The HotSpot source where G1 implements SATB:

void G1SATBCardTableModRefBS::enqueue(oop pre_val) {
  // Nulls should have been already filtered.
  assert(pre_val->is_oop(true), "Error");

  if (!JavaThread::satb_mark_queue_set().is_active()) return;
  Thread* thr = Thread::current();
  if (thr->is_Java_thread()) {
    JavaThread* jt = (JavaThread*)thr;
    jt->satb_mark_queue().enqueue(pre_val);
  } else {
    MutexLockerEx x(Shared_SATB_Q_lock, Mutex::_no_safepoint_check_flag);
    JavaThread::satb_mark_queue_set().shared_satb_queue()->enqueue(pre_val);
  }
}

Remembered sets and card tables

During young-generation marking, cross-generation references can appear: a young-generation object may be referenced by an old-generation object, which would force the GC Roots reachability analysis to traverse and scan the old generation as well, and that is very inefficient. The young generation therefore introduces the concept of a remembered set: a small memory area that records all objects involved in cross-generation references. HotSpot implements the remembered set with a card table; the relationship between the remembered set and the card table is like that between Map and HashMap. Concretely, the old-generation memory is divided into card pages of 512 bytes each by default, and a small memory area holds the card table, a byte array CARD_TABLE[]. If a card page in the old generation contains an object with a cross-generation reference, this is recorded in the card table (the card page's memory address and its state), and during GC Roots scanning the objects in those card pages are added to the scan.
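
Below is a minimal sketch, not HotSpot code, of the card-table bookkeeping described above: a toy byte array with one entry per 512-byte card page, dirtied from a write barrier when a reference store happens at some heap address. The CardTable name and the clean/dirty byte values are illustrative assumptions for the sketch.

#include <cstddef>
#include <cstdint>
#include <vector>

const std::size_t kCardSize  = 512;   // 512-byte card pages, as described above
const std::uint8_t kCleanCard = 0xFF;
const std::uint8_t kDirtyCard = 0x00;

struct CardTable {
    std::uintptr_t heap_base;
    std::vector<std::uint8_t> cards;  // one byte per 512-byte card page

    CardTable(std::uintptr_t base, std::size_t heap_bytes)
        : heap_base(base), cards(heap_bytes / kCardSize, kCleanCard) {}

    // Called from the write barrier after a reference store at address addr:
    // the card page covering addr is marked dirty so the next young GC only
    // has to scan dirty cards instead of the whole old generation.
    void mark_dirty(std::uintptr_t addr) {
        std::size_t index = (addr - heap_base) / kCardSize;
        cards[index] = kDirtyCard;
    }

    bool is_dirty(std::size_t index) const {
        return cards[index] == kDirtyCard;
    }
};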