Runtime data area: the heap


As the saying goes: the stack manages execution, the heap manages storage. The heap is the main working memory area of a Java program and a key part of the runtime data area. Almost all Java object instances are stored in the Java heap. Heap space is shared by all threads and is the piece of memory most closely associated with a Java application.

The JVM obtains the relevant class information from the method area, allocates memory in the heap to instantiate the object, and a variable in the stack holds a reference to the object's address in the heap.
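This division of labor can be sketched in a few lines of Java (Person here is a hypothetical class used only for illustration):

```java
public class HeapReferenceDemo {
    static class Person {            // class metadata lives in the method area
        String name;
        Person(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        // 'p' is a local variable in main's stack frame; it holds only a reference.
        // The Person instance itself is allocated in the heap.
        Person p = new Person("alice");
        System.out.println(p.name);  // the reference is followed into the heap
    }
}
```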

Runtime data area diagram

Summary of heap

  • Each JVM instance has exactly one heap. The heap is the core area of Java memory management and the key area on which Garbage Collection (GC) operates.
  • The Java heap is created when the JVM starts, and its size is determined at that point. It is the largest chunk of memory managed by the JVM.
  • The size of the heap memory is adjustable.
  • The Java Virtual Machine Specification states that the heap may occupy physically discontiguous memory, but it should be treated as logically contiguous.
  • All threads share the Java heap, within which thread-private buffers (Thread-Local Allocation Buffers, TLAB) can be allocated.
  • The Java Virtual Machine Specification describes the Java heap as: all object instances and arrays should be allocated on the heap at run time. ("The heap is the run-time data area from which memory for all class instances and arrays is allocated.")
  • Almost all object instances are allocated memory here. (With escape analysis, scalar replacement can decompose what would be heap objects into scalars on the stack.)
  • Arrays and objects themselves are never stored on the stack; a stack frame holds only a reference to the location of the object or array in the heap.
  • Objects in the heap are not removed immediately after a method ends, but only during garbage collection.

Heap structure

  • The heap
    • Young generation (1/3 of the heap)
      • Eden (8/10 of the young generation)
        • TLAB (about 1/100 of Eden)
        • Expansion area
      • Survivor areas (2/10 of the young generation)
        • From area (1/10)
        • To area (1/10)
        • Expansion area
    • Old generation (2/3 of the heap)

Changes from JDK 7 to JDK 8

  1. New changes in JDK 8
  • Metaspace replaces the permanent generation (PermGen); in JDK 8 the PermGen-related parameters -XX:PermSize and -XX:MaxPermSize are no longer valid.

  • Metaspace uses native (system) memory, whereas the permanent generation used memory inside the VM heap.

  2. Why the permanent generation was removed
  • Official explanation: removing the permanent generation was part of the effort to merge the HotSpot JVM with the JRockit VM, since JRockit has no permanent generation and does not need one configured.
  • PermGen is hard to tune. The metadata of classes in PermGen may be collected during each Full GC, but the results are unsatisfactory (low collection efficiency).
  • It is also hard to decide how much space PermGen should be given, because the appropriate PermSize depends on many factors, such as the total number of classes the JVM loads, the size of the constant pool, the size of methods, and so on.
  • Moreover, permanent generation memory was often insufficient, and memory leaks occurred there.

  3. Additional notes:
  • Implementations of the method area (a concept in the virtual machine specification) include JDK 7's permanent generation and JDK 8's Metaspace.

  • Direct Memory is not part of the runtime data area of the virtual machine, nor is it defined in the Java Virtual Machine Specification. It is allocated directly from the operating system, so it is not limited by the Java heap size, but it is limited by total native memory and processor address space; it can therefore also raise OutOfMemoryError exceptions. NIO is a channel- and buffer-based approach to I/O that allocates memory directly from the operating system, i.e. outside the heap. In some scenarios this improves performance by avoiding copying data back and forth between the Java heap and the native heap.

4. The size of Metaspace is limited only by native memory. It can be controlled with the following parameters:

  • -XX:MetaspaceSize: the initial size of Metaspace. When this threshold is reached, garbage collection is triggered to unload classes, and the GC adjusts the threshold: if a large amount of space is freed, the value is lowered appropriately; if little space is freed, it is raised, without exceeding MaxMetaspaceSize.

  • -XX:MaxMetaspaceSize: the maximum Metaspace size; unlimited by default.

  • -XX:MinMetaspaceFreeRatio: the minimum percentage of Metaspace capacity that should remain free after a GC; used to reduce garbage collections caused by allocation.

  • -XX:MaxMetaspaceFreeRatio: the maximum percentage of Metaspace capacity that should remain free after a GC; used to reduce garbage collections caused by freeing space.

The difference between memory leak and memory overflow

  • Memory leak: after a program finishes using some memory, an object becomes useless but is not collected by the GC and keeps occupying memory; accumulated leaks eventually lead to a memory overflow.

  • Memory overflow: an error raised when a running program cannot obtain enough memory. Memory overflows usually occur in the old generation, when there is still no room to hold new Java objects even after garbage collection. They are often caused by memory leaks that make heap usage grow until it overflows.
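The difference can be sketched in Java (LeakSketch and its cache are illustrative names, not from any real library): the static list below keeps references alive forever, which is a leak; grow it far enough and it ends in an overflow.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakSketch {
    // Objects added here stay reachable forever -> a classic memory leak
    static final List<byte[]> CACHE = new ArrayList<>();

    static void handleRequest() {
        byte[] buffer = new byte[1024];  // useful only during the "request"
        CACHE.add(buffer);               // bug: never removed, so it leaks
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) {
            handleRequest();
        }
        // Each call leaked ~1 KB; with enough iterations and a small enough
        // heap (e.g. -Xmx10m) this ends in java.lang.OutOfMemoryError
        System.out.println("leaked buffers: " + CACHE.size());
    }
}
```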

Heap space size setting

  • -Xms sets the initial heap size and is equivalent to -XX:InitialHeapSize

  • -Xmx sets the maximum heap size and is equivalent to -XX:MaxHeapSize

  • An OutOfMemoryError is thrown whenever heap usage exceeds the maximum specified by -Xmx.

  • -Xms and -Xmx are usually set to the same value, so that the JVM does not have to recompute and resize the heap after each garbage collection, which improves performance.

By default:

Initial memory size: 1/64 of the physical computer’s memory size

Maximum memory size: 1/4 of the physical computer’s memory size

Heap size exploration

/**
 * 1. Heap size parameters:
 *    -Xms sets the initial heap size (young generation + old generation);
 *      -X is a JVM runtime parameter, ms = memory start
 *    -Xmx sets the maximum heap size (young generation + old generation)
 * 2. Default heap sizes:
 *    initial: physical memory / 64; maximum: physical memory / 4
 * 3. Manual setting: -Xms600m -Xmx600m
 *    In development it is recommended to set the initial and maximum heap to the same value.
 * 4. Viewing the effective values:
 *    Method 1: jps, then jstat -gc <pid>
 *    Method 2: -XX:+PrintGCDetails
 */
public class HeapSpaceInitial {
    public static void main(String[] args) {
        // Total heap memory currently available to the JVM
        long initialMemory = Runtime.getRuntime().totalMemory() / 1024 / 1024;
        // Maximum heap memory the JVM will attempt to use
        long maxMemory = Runtime.getRuntime().maxMemory() / 1024 / 1024;

        System.out.println("-Xms : " + initialMemory + "M");
        System.out.println("-Xmx : " + maxMemory + "M");

        // System.out.println("Estimated physical memory: " + initialMemory * 64.0 / 1024 + "G");
        // System.out.println("Estimated physical memory: " + maxMemory * 4.0 / 1024 + "G");

        try {
            Thread.sleep(1000000); // keep the process alive for inspection
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

Started with the parameters -Xms600m -Xmx600m

-Xms : 575M
-Xmx : 575M

Initial memory size:

Eden (153600K) + one Survivor area (25600K) + old generation OC (409600K) = 588800K / 1024 = 575M; totalMemory() counts only one of the two Survivor areas.
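The 575M figure can be checked arithmetically; totalMemory() counts Eden, one Survivor area, and the old generation. (The Eden capacity of 153600K is inferred from the sum; the Survivor and old-generation figures are the ones shown above.)

```java
public class TotalMemoryCheck {
    public static void main(String[] args) {
        long edenK = 153600;       // Eden capacity in KB (inferred)
        long oneSurvivorK = 25600; // only one Survivor area is usable at a time
        long oldGenK = 409600;     // old generation capacity in KB
        long totalM = (edenK + oneSurvivorK + oldGenK) / 1024;
        System.out.println(totalM + "M"); // 575M
    }
}
```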

Heap OOM

import java.util.ArrayList;
import java.util.List;

public class HeapOomMock {
    public static void main(String[] args) {
        // Run with a small heap, e.g. -Xms10m -Xmx10m, to trigger the OOM quickly
        List<byte[]> list = new ArrayList<byte[]>();
        int i = 0;
        boolean flag = true;
        while (flag) {
            try {
                i++;
                list.add(new byte[1024 * 1024]); // 1 MB per allocation, kept reachable
            } catch (Throwable e) {
                e.printStackTrace();
                flag = false;
                System.out.println("count=" + i);
            }
        }
    }
}


java.lang.OutOfMemoryError: Java heap space

Memory overflow test method for each area

  • Java heap: endlessly create new objects in a loop, saving the references in a List so that they are not garbage collected. Memory leaks can also occur in this area, so distinguish the two when something goes wrong.
  • Method area: generate a large number of dynamic classes, or endlessly loop over String's intern() method to produce distinct String instances and save them in a List so that they are not garbage collected. The latter tests the constant pool; the former tests the non-constant-pool part of the method area.
  • Virtual machine stack and native method stack (recursively call a simple method): with a fixed stack size, a thread requesting a stack depth greater than the maximum the VM allows throws a StackOverflowError; with dynamic stack expansion, an OutOfMemoryError is thrown if the VM cannot obtain enough memory while extending the stack.

Young generation and old generation

Java objects stored in the JVM can be divided into two categories:

  • Transient objects with short life cycles, which are created and die quickly (young generation)
  • Objects with very long life cycles, which in extreme cases can match the life cycle of the JVM itself (old generation)

Configure the ratio of young generation to old generation

  • The default is -XX:NewRatio=2, meaning the young generation takes 1 part and the old generation 2 parts, so the young generation occupies 1/3 of the whole heap.
  • If changed to -XX:NewRatio=4, the young generation takes 1 part and the old generation 4 parts, so the young generation occupies 1/5 of the whole heap.

Tools for viewing the region ratios

  1. jps: display process IDs

  2. jinfo -flag NewRatio <pid>: view the specified flag

  3. jstat -gc <pid>: view the proportions of each region

  4. Use the VisualVM tool

Startup parameters: -Xms600m -Xmx600m -XX:+UseAdaptiveSizePolicy

Young generation 1 : old generation 2

Startup parameters: -Xms600m -Xmx600m -XX:-UseAdaptiveSizePolicy

Young generation 1 : old generation 2

Startup parameters: -Xms600m -Xmx600m -XX:-UseAdaptiveSizePolicy

Eden 6 : S0 1 : S1 1

Startup parameters: -Xms600m -Xmx600m -XX:SurvivorRatio=8 -XX:-UseAdaptiveSizePolicy (explicitly sets the Eden-to-Survivor ratio in the young generation to 8, which is also the documented default, and turns off the adaptive memory allocation policy)

Eden 8 : S0 1 : S1 1

Object allocation process

Allocating memory for new objects is a precise and complicated task. The JVM's designers must consider not only how and where to allocate memory, but also, because the memory allocation algorithm is closely tied to the memory reclamation algorithm, whether the memory space will be fragmented after a GC reclaims memory.

  1. Objects created with new are allocated in Eden first. Eden has a size limit.
  2. When Eden fills up and the program needs to create more objects, the JVM's garbage collector performs a Minor GC on Eden, destroying the objects in Eden that are no longer referenced by other objects, and then places the new object in Eden.
  3. The surviving objects are then moved from Eden to the Survivor To area.
  4. If garbage collection is triggered again, the objects that survived last time (now in the From area) are moved to the To area if they survive this collection as well.
  5. When an object's age exceeds the threshold (default 15), it is promoted to the old generation.
    • This can be configured with -XX:MaxTenuringThreshold=N.
  6. When old-generation memory is insufficient, another GC (Major GC) is triggered to clean up old-generation memory.
  7. If the old generation still cannot hold the object after the Major GC, an OOM (OutOfMemoryError) is raised.
  • Conclusions
  1. About Survivor areas S0 and S1: after each copy there is a swap; whichever area is empty becomes the To area.

  2. About garbage collection: it happens frequently in the young generation, rarely in the old generation, and almost never in the permanent generation/Metaspace.

  3. If an object is born in Eden, survives its first Minor GC, and can be accommodated by a Survivor area, it is moved into Survivor space with its age set to 1. Its age increases by 1 for each Minor GC it survives in Survivor, and once the age reaches a threshold (default 15, though this varies by JVM and GC), it is promoted to the old generation.

  4. The age threshold for promotion to the old generation can be set with the -XX:MaxTenuringThreshold option.

Object allocation principles:

  • Objects are allocated in Eden first
  • Large objects go directly into the old generation (try to avoid creating too many large objects)
  • Long-lived objects are promoted to the old generation
  • Dynamic object age determination: if the total size of all objects of the same age in a Survivor area exceeds half of the Survivor space, objects of that age or older go directly into the old generation without waiting for the age required by MaxTenuringThreshold.
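The principles above can be condensed into an illustrative decision function. This is a simplified mental model, not actual JVM code; the thresholds are parameters here, not real defaults.

```java
public class AllocationModel {
    /**
     * Simplified model of where a new object lands.
     * sizeKb: object size; edenFreeKb: free space in Eden;
     * largeObjectKb: threshold above which objects skip the young generation.
     */
    static String placeNewObject(long sizeKb, long edenFreeKb, long largeObjectKb) {
        if (sizeKb >= largeObjectKb) {
            return "old generation";      // large objects go straight to old gen
        }
        if (sizeKb <= edenFreeKb) {
            return "eden";                // normal case: allocate in Eden
        }
        return "eden after Minor GC";     // Eden full -> Minor GC first
    }

    /** Promotion: by age threshold, or by the dynamic same-age rule. */
    static boolean promote(int age, int maxTenuringThreshold,
                           long sameAgeBytes, long survivorBytes) {
        return age >= maxTenuringThreshold || sameAgeBytes > survivorBytes / 2;
    }

    public static void main(String[] args) {
        System.out.println(placeNewObject(20 * 1024, 100 * 1024, 10 * 1024)); // old generation
        System.out.println(promote(15, 15, 0, 1024));                         // true
    }
}
```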

Testing: large objects go directly into the old generation

Launch parameters

-Xms60m -Xmx60m -XX:NewRatio=2 -XX:SurvivorRatio=8 -XX:+PrintGCDetails

/**
 * Test: -Xms60m -Xmx60m -XX:NewRatio=2 -XX:SurvivorRatio=8 -XX:+PrintGCDetails
 */
public class YoungOldAreaTest {
    public static void main(String[] args) {
        byte[] buffer = new byte[1024 * 1024 * 20]; // 20 MB
    }
}

The line "ParOldGen total 40960K, used 20480K" matches the new byte[1024 * 1024 * 20] (20 MB) allocated by the program.

Heap
 PSYoungGen      total 18432K, used 1976K [0x00000000fec00000, 0x0000000100000000, 0x0000000100000000)
  eden space 16384K, 12% used [0x00000000fec00000,0x00000000fedee1e8,0x00000000ffc00000)
  from space 2048K, 0% used [0x00000000ffe00000,0x00000000ffe00000,0x0000000100000000)
  to   space 2048K, 0% used [0x00000000ffc00000,0x00000000ffc00000,0x00000000ffe00000)
 ParOldGen       total 40960K, used 20480K [0x00000000fc400000, 0x00000000fec00000, 0x00000000fec00000)
  object space 40960K, 50% used [0x00000000fc400000,0x00000000fd800010,0x00000000fec00000)
 Metaspace       used 3203K, capacity 4496K, committed 4864K, reserved 1056768K
  class space    used 348K, capacity 388K, committed 512K, reserved 1048576K

Process finished with exit code 0

Space allocation guarantee

Parameter (boolean): -XX:HandlePromotionFailure

Before a Minor GC occurs, the virtual machine checks whether the maximum contiguous available space in the old generation is greater than the total space of all objects in the young generation.

  • If it is, the Minor GC is safe.
  • If not, the virtual machine checks the -XX:HandlePromotionFailure setting to see whether guarantee failure is allowed.
    • If HandlePromotionFailure=true, it further checks whether the maximum contiguous available space in the old generation is greater than the average size of objects promoted to the old generation in past collections.
      • If it is greater, a Minor GC is attempted, though this Minor GC is still risky.
      • If it is less than or equal, a Full GC is performed instead.
    • If HandlePromotionFailure=false, a Full GC is performed instead.

After JDK 6 Update 24, HandlePromotionFailure no longer affects the VM's space-allocation guarantee policy. Looking at the OpenJDK source, HandlePromotionFailure is still defined, but it is no longer used in the code.

After JDK 6 Update 24, the rule became: if the contiguous space in the old generation is larger than the total size of the young generation, or larger than the average size of previous promotions, a Minor GC is performed; otherwise a Full GC is performed. The HandlePromotionFailure parameter has no effect.
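The post-JDK6u24 rule can be written as a one-line predicate. This is an illustrative model of the rule described above, not HotSpot source code.

```java
public class PromotionGuarantee {
    /**
     * Returns true if a Minor GC may proceed under the JDK 6u24+ rule:
     * the old generation's contiguous free space must exceed either the
     * total size of the young generation or the average promoted size.
     */
    static boolean minorGcAllowed(long oldContiguousFree,
                                  long youngGenTotal,
                                  long avgPromotedSize) {
        return oldContiguousFree > youngGenTotal
            || oldContiguousFree > avgPromotedSize;
    }

    public static void main(String[] args) {
        // Old gen free 300m, young 200m -> Minor GC allowed
        System.out.println(minorGcAllowed(300, 200, 50));  // true
        // Old gen free 40m, young 200m, avg promotion 50m -> Full GC instead
        System.out.println(minorGcAllowed(40, 200, 50));   // false
    }
}
```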

Common tuning tools

  • JConsole
  • VisualVM
  • JProfiler
  • JDK command-line tools
  • Eclipse Memory Analyzer Tool (MAT)
  • Java Flight Recorder
  • GCViewer
  • GCeasy

Differences between Minor GC, Major GC, and Full GC

When the JVM performs GC, it does not always collect all three memory areas (young generation, old generation, method area) together; most of the time, collection refers to the young generation.

For the implementation of HotSpotVM, the GC in it is divided into two types according to the collection area:

  • Partial GC: a garbage collection that does not collect the entire Java heap. It is divided into:

    • Minor GC / Young GC: collects only the young generation (Eden, S0, S1).

    • Major GC / Old GC: collects only the old generation.

      • Currently, only the CMS GC collects the old generation separately.
      • Note that "Major GC" is often used interchangeably with "Full GC"; be careful to distinguish old-generation collection from full-heap collection.
    • Mixed GC: collects the entire young generation plus parts of the old generation.

      • Currently, only the G1 GC behaves this way.
  • Full GC (whole-heap collection): a garbage collection that collects the entire Java heap and the method area.

Note: Java's stop-the-world (STW) mechanism suspends all other threads of the Java application (except the garbage collection threads) while the garbage collection algorithm runs.

Minor GC

Triggering mechanism of Minor GC:

  • A Minor GC is triggered when Eden in the young generation fills up; a full Survivor area does not trigger a GC. (Each Minor GC cleans the memory of the whole young generation.)
  • Because most Java objects die young, Minor GC is very frequent and generally fast.
  • Minor GC causes STW: other user threads are suspended and resume only after garbage collection finishes.

Major GC

Old-generation GC (Major GC) trigger mechanism:

  • Refers to GC that occurs in the old generation.
  • A Major GC is usually accompanied by at least one Minor GC (but not always: the Parallel Scavenge collector's collection policy allows performing a Major GC directly). That is, when the old generation runs out of space, a Minor GC is attempted first; if space is still insufficient afterwards, a Major GC is triggered.
  • A Major GC is typically more than 10 times slower than a Minor GC, and its STW pause is longer. If memory is still insufficient after a Major GC, an OOM is reported.

Note: the distinction between "Major GC" and "Full GC" is not strict, and a Full GC is often loosely called a Major GC.

Full GC

Full GC is a global GC covering the entire young generation, old generation, and Metaspace (which replaces PermGen in Java 8+).

Full GC trigger mechanism:

  • Calling System.gc(): a Full GC is suggested but not guaranteed
  • Insufficient space in the old generation
  • Insufficient space in the method area
  • The average size of objects promoted to the old generation by past Minor GCs is greater than the old generation's available memory
  • When copying an object from Eden or survivor space0 (From space) to survivor space1 (To space), if the object is larger than the available memory of the To space, it is moved to the old generation, and the old generation's available memory is smaller than the object
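The first trigger can be demonstrated directly. Note that System.gc() merely suggests a Full GC; the JVM is free to ignore it.

```java
public class SystemGcDemo {
    public static void main(String[] args) {
        byte[] garbage = new byte[10 * 1024 * 1024]; // 10 MB that will become unreachable
        garbage = null;                              // drop the only reference
        System.gc();  // *suggests* a Full GC; equivalent to Runtime.getRuntime().gc()
        System.out.println("free after gc hint: "
                + Runtime.getRuntime().freeMemory() / 1024 / 1024 + "M");
    }
}
```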

Note: Full GC should be avoided during development and tuning, so that user-thread pause times stay short.

Why is the heap generational

  • Research shows that different objects have different life cycles: 70–99% of objects are temporary. Separating them by age makes management easier.

  • Generations help improve garbage collection efficiency. Many objects are ephemeral; if newly created objects are grouped in one place, a GC of the area storing ephemeral objects frees a large amount of space.


What is a TLAB

  • TLAB (Thread-Local Allocation Buffer) is a thread-private allocation buffer.

  • Viewed from the memory model rather than from garbage collection: the Eden area is further subdivided, and the JVM allocates each thread a private buffer region contained within the Eden space.

  • When multiple threads allocate memory at the same time, TLAB avoids a series of thread-safety problems and improves memory allocation throughput, so this allocation scheme is called the fast allocation strategy.

  • All JVMs derived from OpenJDK provide a TLAB design.

Why is TLAB needed

  • The heap is a thread-shared area; any thread can access the shared data in it
  • Because object instances are created very frequently in the JVM, carving out memory from the heap is not thread-safe in a concurrent environment
  • Preventing multiple threads from operating on the same address requires mechanisms such as locking, which slows allocation

TLAB structure

TLAB instructions

  • Although not every object instance can be allocated in a TLAB, the JVM does use the TLAB as the first choice for memory allocation.
  • Developers can enable or disable TLAB space with the option -XX:+/-UseTLAB.
  • By default, TLAB space is very small, only about 1% of the whole Eden space; the option -XX:TLABWasteTargetPercent sets the percentage of Eden space a TLAB occupies.
  • If an object fails to allocate in TLAB space, the JVM falls back to allocating directly in Eden, using a locking mechanism to guarantee the atomicity of the operation.
  • TLAB allocation does not affect later object movement and reclamation: although an object may initially be allocated via a TLAB and live in Eden, it will still be garbage collected or moved to Survivor space, the old generation, and so on.
  • To check whether TLAB is enabled, run: jinfo -flag UseTLAB <pid>; a "+" means enabled, a "-" means disabled.
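Besides jinfo, the UseTLAB flag can be read from inside a running program via the HotSpot diagnostic MXBean. This is a HotSpot-specific API, so the sketch below assumes a HotSpot-based JVM.

```java
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class TlabFlagCheck {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        // Same information as `jinfo -flag UseTLAB <pid>`
        String value = bean.getVMOption("UseTLAB").getValue();
        System.out.println("UseTLAB = " + value); // "true" on HotSpot by default
    }
}
```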

Escape analysis

Is the heap the only option for allocating object storage?

In "Understanding the Java Virtual Machine" there is a remark about Java heap memory: as JIT compilation develops and escape analysis techniques mature, stack allocation and scalar replacement optimizations will cause subtle changes, and it will gradually become less "absolute" that all objects are allocated on the heap.

In the Java Virtual Machine, it is common knowledge that objects are allocated memory in the Java heap. However, there is a special case: if escape analysis (Escape Analysis) finds that an object does not escape its method, it may be optimized into a stack allocation. This removes the need to allocate memory for it on the heap and to garbage collect it. This is also the most common off-heap storage technique.

In addition, TaobaoVM, deeply customized on the basis of OpenJDK, pioneered GCIH (GC Invisible Heap) technology to implement off-heap storage: long-lived Java objects are moved out of the heap into the GCIH, where the GC cannot manage them, thereby reducing GC frequency and improving collection efficiency.

Escape analysis

Escape analysis determines whether an object that would be allocated on the heap can instead be allocated on the stack. It is a cross-function, global data-flow analysis algorithm that can effectively reduce synchronization overhead and heap allocation pressure in Java programs.

  • Using escape analysis, the HotSpot compiler can analyze the scope of uses of a new object's reference and decide whether to allocate the object on the heap.
  • The basic behavior of escape analysis is analyzing the dynamic scope of an object:
    • When an object is defined in a method and used only within that method, no escape has occurred.
    • When an object is defined in a method and referenced by an external method, an escape has occurred; for example, when it is passed elsewhere as a call argument.

Escape analysis scenario

/**
 * Escape analysis.
 * To quickly determine whether an escape occurs, check whether the new
 * object's instance could be referenced outside the method.
 */
public class EscapeAnalysis {

    public EscapeAnalysis obj;

    // The returned object escapes the method
    public EscapeAnalysis getInstance() {
        return obj == null ? new EscapeAnalysis() : obj;
    }

    // Assigning to a member variable: the new object escapes
    public void setObj() {
        this.obj = new EscapeAnalysis();
    }
    // Consider: what if obj were declared static? Escape still occurs:
    // static only changes initialization; it is a class variable visible outside.

    // The object's scope stays inside the method: no escape
    public void useEscapeAnalysis() {
        EscapeAnalysis e = new EscapeAnalysis();
    }

    // The object was created inside getInstance() and escapes via the return value
    public void useEscapeAnalysis1() {
        EscapeAnalysis e = getInstance();
        // getInstance().xxx() would also escape
    }
}

To quickly determine whether an escape has occurred, check whether the new object's instance could be referenced outside the method, i.e.:

  • Member variable assignment
  • Method return value
  • Instance reference passing

Parameter Settings:

Escape analysis has been enabled by default in HotSpot since JDK 6u23. With earlier versions, developers can use:

  • -XX:+DoEscapeAnalysis: explicitly enables escape analysis
  • -XX:+PrintEscapeAnalysis: displays the results of escape analysis.

Escape analysis – Allocation on the stack

Based on the results of escape analysis at compile time, the JIT compiler may optimize an object that does not escape its method into a stack allocation. After allocation, execution continues within the call stack; when the thread ends, the stack space is reclaimed, and the local variables' objects are reclaimed with it. No garbage collection is needed for them.

  • Example analysis

/**
 * Stack allocation test
 * -Xmx1G -Xms1G -XX:-DoEscapeAnalysis -XX:+PrintGCDetails
 */
public class StackAllocation {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < 10000000; i++) {
            alloc();
        }
        long end = System.currentTimeMillis();
        System.out.println("Time spent: " + (end - start) + " ms");
        // Pause so the number of objects in the heap can be inspected
        try {
            Thread.sleep(1000000);
        } catch (InterruptedException e1) {
            e1.printStackTrace();
        }
    }

    private static void alloc() {
        User user = new User(); // does not escape alloc()
    }

    static class User {
    }
}
  1. Run result with escape analysis enabled: -Xmx1G -Xms1G -XX:+DoEscapeAnalysis -XX:+PrintGCDetails

    Time spent: 3 ms

2. Run result with escape analysis disabled: -Xmx1G -Xms1G -XX:-DoEscapeAnalysis -XX:+PrintGCDetails

Time spent: 58 ms

Bottom line: with escape analysis enabled the run takes less time, and there are also significantly fewer objects in the heap.

Escape analysis – Synchronous Elision (lock elimination)

If an object is found to be accessible only from one thread, then operations on that object can be performed without regard to synchronization.

When dynamically compiling a synchronized block, the JIT compiler can use escape analysis to determine whether the lock object used by the block is accessible by only one thread and is never published to other threads. If so, the JIT compiler desynchronizes that part of the code when compiling the synchronized block, which can greatly improve concurrency and performance. This desynchronization process is called synchronization elision, or lock elimination.

  • Example analysis

public class SynchronizedTest {
    public void f() {
        Object hollis = new Object();
        synchronized (hollis) {
            System.out.println(hollis);
        }
    }
}

Note: the compiled bytecode still contains the locking on the object, but the JIT compiler optimizes it away at run time, making the code equivalent to:

public class SynchronizedTest {
    public void f() {
        Object hollis = new Object();
        System.out.println(hollis);
    }
}

Escape analysis – scalar substitution

A scalar is a piece of data that cannot be broken down into smaller pieces; primitive data types in Java are scalars. By contrast, data that can be further decomposed is called an aggregate; objects in Java are aggregates, because they can be decomposed into other aggregates and scalars.

In the JIT phase, if escape analysis shows that an object cannot be accessed from outside, JIT optimization may break the object into its individual member variables and substitute those. This is scalar replacement.

  • -XX:+EliminateAllocations: enables scalar replacement (on by default), allowing an object to be allocated piecewise on the stack.

  • Example analysis

/**
 * Scalar replacement test
 * -Xmx100m -Xms100m -XX:+DoEscapeAnalysis -XX:+PrintGC -XX:-EliminateAllocations
 */
public class ScalarReplace {
    public static class User {
        public int id;
        public String name;
    }

    public static void alloc() {
        User u = new User(); // this object does not escape alloc()
        u.id = 5;
        u.name = "";
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < 10000000; i++) {
            alloc();
        }
        long end = System.currentTimeMillis();
        System.out.println("Time spent: " + (end - start) + " ms");
    }
}
  1. Result with scalar replacement enabled:

-Xmx100m -Xms100m -XX:+DoEscapeAnalysis -XX:+PrintGC -XX:+EliminateAllocations

Time spent: 3 ms
  2. Result with scalar replacement disabled:

-Xmx100m -Xms100m -XX:+DoEscapeAnalysis -XX:+PrintGC -XX:-EliminateAllocations

[GC (Allocation Failure) 25600K->960K(98304K), 0.0006191 secs]
[GC (Allocation Failure) 26560K->824K(98304K), 0.0004392 secs]
[GC (Allocation Failure) 26424K->824K(98304K), 0.0003864 secs]
[GC (Allocation Failure) 26424K->760K(98304K), 0.0004038 secs]
[GC (Allocation Failure) 26360K->824K(98304K), 0.0003964 secs]
[GC (Allocation Failure) 26424K->792K(101376K), 0.0004266 secs]
[GC (Allocation Failure) 32536K->716K(100864K), 0.0005152 secs]
[GC (Allocation Failure) 32460K->716K(100864K), 0.0002451 secs]
Time spent: 36 ms

With scalar replacement enabled, the same program takes less time, and no GC occurs.

Check whether server mode is enabled, because escape analysis can be enabled only in server mode. My JDK runs in server mode by default; otherwise, add the -server parameter when running.

Bottom line: scalar replacement greatly reduces heap memory usage, because once objects need not be created, heap memory need not be allocated for them. Scalar replacement provides a good foundation for stack allocation.

Code optimization Conclusion

In development, use local variables wherever possible instead of defining variables outside the method.

Escape analysis is not mature

The paper on escape analysis was published in 1999, but it was not implemented until JDK 1.6, and the technique is still not fully mature. The fundamental reason is that there is no guarantee that the gains from escape analysis will outweigh its cost, even though escape analysis enables scalar replacement, stack allocation, and lock elimination.

Escape analysis itself also requires a series of complex analyses and is a relatively time-consuming process. An extreme example: after performing escape analysis, it turns out that no object is non-escaping; in that case the entire escape analysis was wasted effort.

Although the technique is not fully mature, it is an important means of just-in-time compiler optimization. Note that there is a common belief that, through escape analysis, the JVM allocates non-escaping objects on the stack. This is theoretically possible, but it depends on choices made by the JVM designers; as far as I know, the Oracle HotSpot JVM does not do this, so it remains true that all object instances are created on the heap.

Most books are based on JDK versions before 7, and the JDK has since changed a great deal. The interned string cache and static variables were once allocated in the permanent generation, which has been replaced by the Metaspace; but the interned string cache and static variables were not moved into the Metaspace, they are allocated directly on the heap. This again fits the earlier conclusion: object instances are allocated on the heap. Objects are not actually allocated on the stack; the effect is achieved through scalar replacement.

Summary of common heap parameters

  • JDK 8 official documentation: …
  • -XX:+PrintFlagsInitial: view the default values of all parameters
  • -XX:+PrintFlagsFinal: view the final values of all parameters (they may have been modified and differ from the defaults)
  • -Xms: initial heap space size (default: 1/64 of physical memory)
  • -Xmx: maximum heap space size (default: 1/4 of physical memory)
  • -XX:NewRatio: the ratio of the young generation to the old generation in the heap (default: 2, i.e. young : old = 1 : 2)
  • -XX:SurvivorRatio: the ratio of the Eden area to a Survivor area in the young generation (default: 8)
  • -Xmn: the size of the young generation (usually left unset in favor of the ratio parameters)
  • -XX:-UseAdaptiveSizePolicy: disables the adaptive memory allocation policy
  • -XX:UseTLAB: whether TLAB space is enabled (on by default)
  • -XX:TLABWasteTargetPercent: sets the percentage of Eden space a TLAB occupies
  • -XX:HandlePromotionFailure: whether space-allocation guarantee failure is allowed (no effect after JDK 6 Update 24)
  • -XX:+DoEscapeAnalysis: explicitly enables escape analysis (on by default since JDK 6u23)
  • -XX:+PrintEscapeAnalysis: displays the results of escape analysis
  • -XX:+EliminateAllocations: enables scalar replacement (on by default)
  • -XX:+PrintGC: prints GC logs
  • -XX:+PrintGCDetails: prints detailed GC logs
  • -server: starts server mode (escape analysis can be enabled only in server mode)
  • -XX:MaxTenuringThreshold=N: the age threshold (inclusive) for promotion to the old generation

Addendum: direct memory

Direct memory is memory space outside the Java heap, requested directly from the operating system. Access to direct memory is often faster than access to the Java heap.

Therefore, for performance reasons, direct memory may be considered in high read/write situations.

Since direct memory lies outside the Java heap, its size is not directly limited by the maximum heap size set with -Xmx; but system memory is finite, and the sum of the Java heap and direct memory is still bounded by the maximum memory the operating system can provide.

Java’s NIO libraries allow Java programs to use direct memory.
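A minimal sketch of using direct memory through NIO; the maximum amount of direct memory can be capped with -XX:MaxDirectMemorySize.

```java
import java.nio.ByteBuffer;

public class DirectMemoryDemo {
    public static void main(String[] args) {
        // Allocated from native memory, outside the Java heap
        ByteBuffer direct = ByteBuffer.allocateDirect(1024 * 1024); // 1 MB
        // Allocated inside the Java heap, for comparison
        ByteBuffer heap = ByteBuffer.allocate(1024 * 1024);

        System.out.println(direct.isDirect()); // true
        System.out.println(heap.isDirect());   // false

        direct.putInt(42);                     // write into native memory
        direct.flip();                         // switch from writing to reading
        System.out.println(direct.getInt());   // 42
    }
}
```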