This is my fourth article in the Getting Started series.

preface

"In Java, objects are allocated on the heap." We hear this statement all the time, but is it always true? I believe you will have the answer after reading this article.

Allocation policy

First, let's look at the process of allocating a Java object:

  1. Try stack allocation: if it succeeds, the object is allocated on the stack; if it fails, go to step 2.
  2. Try TLAB allocation: if it succeeds, the object is allocated in the thread's TLAB; if it fails, go to step 3.
  3. Check whether the object should go straight to the old generation: if yes, allocate it in the old generation; if no, allocate it in Eden.

As this flow shows, the JVM first tries to allocate an object on the stack or in a TLAB; only when those attempts fail does it fall back to the later steps, so when the conditions are met an object is not allocated directly on the shared heap. Next, let's get familiar with what stack allocation and TLAB allocation actually are.
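
To make the order of decisions concrete, here is a conceptual sketch in plain Java. The class, method names, and size thresholds are invented for illustration only; this is not real HotSpot code, which performs these steps in native code.

final class AllocationFlow {

    // Illustrative decision order only.
    Object allocate(int sizeInBytes, boolean escapes) {
        if (!escapes) {
            return allocateOnStack(sizeInBytes);   // 1. non-escaping objects may live on the stack
        }
        if (fitsInTlab(sizeInBytes)) {
            return allocateInTlab(sizeInBytes);    // 2. fast path: the thread-private TLAB
        }
        if (goesToOldGen(sizeInBytes)) {
            return allocateInOldGen(sizeInBytes);  // 3. e.g. very large objects
        }
        return allocateInEden(sizeInBytes);        // 4. shared Eden, needs synchronization
    }

    // Stand-in stubs so the sketch compiles; they do not model real JVM behaviour.
    private Object allocateOnStack(int size)  { return new byte[size]; }
    private boolean fitsInTlab(int size)      { return size <= 1024; }
    private Object allocateInTlab(int size)   { return new byte[size]; }
    private boolean goesToOldGen(int size)    { return size >= (1 << 20); }
    private Object allocateInOldGen(int size) { return new byte[size]; }
    private Object allocateInEden(int size)   { return new byte[size]; }
}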

Stack allocation

Stack allocation is an optimization technique provided by the Java virtual machine. The basic idea is that a thread-private object (that is, an object that can never be accessed by other threads) can be broken up and allocated on the stack rather than on the heap. The advantage of allocating on the stack is that the object is destroyed automatically when the stack frame is popped at the end of the method call, without any intervention from the garbage collector, which improves system performance. Let's look at a piece of code:

private static void test() {
   User user = new User();
   user.name = "xtianyaa";
   user.website = "https://juejin.cn/user/2084329778071479";
}

You can see that the user instance is only ever used inside the test method: it is not accessed by any other thread, nor by any other method. Its scope is confined to this function, so escape analysis concludes that it does not escape.

What is escape analysis? If we returned the user object from the method, later code would still need that instance, so it could no longer be confined to this stack frame. More generally, any object that can be reached by other methods or shared between threads is an escaping object, and in that case the virtual machine cannot allocate it on the stack; it has to live on the heap.

So how do we enable stack allocation? We can run the JVM with the following options: -server -Xms16m -Xmx16m -XX:+PrintGC -XX:+DoEscapeAnalysis -XX:+UseTLAB -XX:+EliminateAllocations. Here -server runs the JVM in server mode, because escape analysis can only be enabled in server mode; -XX:+DoEscapeAnalysis turns escape analysis on; and -XX:+EliminateAllocations enables scalar replacement.

So what does allocation on the stack buy us? The object no longer needs to be reclaimed by the GC; popping the stack frame releases its resources, which improves performance. Every GC cycle triggers a Stop The World pause, during which all application threads are halted while garbage is collected. If objects are created frequently on the heap, we end up pausing all threads frequently, which badly hurts the user experience. The purpose of allocating on the stack is to reduce the number of garbage collections.
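
To see the effect in practice, here is a minimal sketch. The class name StackAllocTest and the loop count are my own choices, not from the original example; only the User fields mirror the code above. Run it once with -XX:+DoEscapeAnalysis and once with -XX:-DoEscapeAnalysis, together with a small heap such as -Xms16m -Xmx16m -XX:+PrintGC, and compare the elapsed time and the number of GC lines printed; with escape analysis on there should be far fewer collections.

public class StackAllocTest {

    static class User {
        String name;
        String website;
    }

    // The User created here never leaves the method, so with escape analysis
    // enabled the JIT may scalar-replace it instead of allocating it on the heap.
    private static void alloc() {
        User user = new User();
        user.name = "xtianyaa";
        user.website = "https://juejin.cn/user/2084329778071479";
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < 100_000_000; i++) {
            alloc();
        }
        System.out.println("elapsed: " + (System.currentTimeMillis() - start) + " ms");
    }
}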

TLAB allocation

What is a TLAB? A TLAB (Thread-Local Allocation Buffer) is a small area of memory that the JVM carves out of Eden space in the young generation and that is private to a single thread.

  • From the perspective of the memory model rather than garbage collection, the Eden region is subdivided further: the JVM gives each thread a private allocation buffer that lives inside Eden.
  • When multiple threads allocate memory at the same time, TLABs avoid a whole class of thread-safety problems and improve allocation throughput, so this way of allocating memory is often called the fast allocation path.

Ordinarily, memory for new objects is requested from the young generation. When multiple threads request space at the same time, each allocation has to be synchronized, and under heavy contention allocation efficiency drops further. A TLAB is a region of Eden owned exclusively by one thread; its main purpose is to reduce contention during young-generation allocation and to make object allocation more efficient. The feature is enabled by default and is controlled by the -XX:+UseTLAB parameter.
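
As a rough way to feel the difference, here is a minimal sketch; the class name TlabTest, the thread count, and the loop count are my own choices, not from the article. Several threads allocate small objects concurrently, and writing each allocation to a volatile field keeps the JIT from optimizing the allocations away. Run it with the default -XX:+UseTLAB and again with -XX:-UseTLAB; the second run will typically take noticeably longer because every allocation then contends for shared Eden.

public class TlabTest {

    static volatile Object sink;   // keeps the allocations from being optimized away

    private static void allocateLoop() {
        for (int i = 0; i < 10_000_000; i++) {
            sink = new byte[32];   // each iteration allocates a small object in Eden
        }
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        Thread[] threads = new Thread[8];   // 8 allocating threads, an arbitrary choice
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(TlabTest::allocateLoop);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("elapsed: " + (System.currentTimeMillis() - start) + " ms");
    }
}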

conclusion

So why not just allocate everything on the heap? The answer goes back to the JVM memory structure itself. By definition, the heap is shared between threads, so every object allocated on the heap competes for the same resource, and an allocation from the shared heap must be synchronized (locked). A lock always carries some overhead, and because the whole heap is shared, the granularity of that contention is relatively coarse, which hurts efficiency. Both the TLAB and the stack are thread-private, which avoids the competition. Therefore, in suitable cases, avoiding allocation on the shared heap improves the efficiency of creating and destroying objects.