The JVM memory structure is a very important piece of knowledge. I believe every programmer who has seriously prepared for interviews can clearly explain the heap, the stack, the method area, and so on.

The figure above shows the JVM runtime data areas as described in the Java Virtual Machine Specification (Java SE 8). Note that this is a logical partitioning defined by the JVM specification; different virtual machine vendors, or different versions of the same virtual machine, may implement it differently.

Most people know that Java objects are allocated in heap memory (leaving aside JIT optimizations) and that this memory allocation is thread-safe. But how does the virtual machine make it thread-safe? This article gives a brief introduction.

Memory allocation for Java objects

As we know, Java is an object-oriented language, and every object we use in Java needs to be created. There are many ways to create an object in Java, such as using the new keyword, using reflection, or calling the clone method. But whichever way is used, the object creation process needs to allocate memory.
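As a quick illustration, here is a minimal sketch of the three creation paths just mentioned (the Point class is made up for this example):

```java
// Three common ways to create an object; whichever one is used,
// memory still has to be allocated for the new instance.
public class CreationDemo {

    static class Point implements Cloneable {
        int x;
        int y;

        @Override
        public Point clone() throws CloneNotSupportedException {
            return (Point) super.clone();
        }
    }

    public static void main(String[] args) throws Exception {
        // 1. the new keyword
        Point a = new Point();

        // 2. reflection
        Point b = Point.class.getDeclaredConstructor().newInstance();

        // 3. cloning an existing instance
        Point c = a.clone();

        System.out.println(a + " / " + b + " / " + c);
    }
}
```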

Take the most common case, the new keyword, as an example. When code that creates an object with new runs, the virtual machine first checks whether the class of the object being created has already been loaded. If not, the class is loaded first.
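A small made-up example of this check: a class is initialized only when it is first actively used, for instance on the first new, at which point its static initializer runs:

```java
public class LoadOnFirstNew {

    static class Lazy {
        // Runs when the class is initialized, i.e. on its first active use.
        static {
            System.out.println("Lazy initialized");
        }
    }

    public static void main(String[] args) {
        System.out.println("before the first new");
        new Lazy();   // triggers initialization of Lazy; the static block prints
        new Lazy();   // already initialized, nothing extra happens
        System.out.println("after both news");
    }
}
```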

After the class-loading check passes, memory must be allocated for the object; this memory is mainly used to store the object's instance variables.

When allocating, the virtual machine determines how much space is needed based on information such as the object's instance variables, and then carves out an area of that size from the Java heap (assuming no JIT optimization).
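If you want to see how instance fields translate into an object's actual heap footprint, one option is the OpenJDK JOL tool; a hedged sketch, assuming the external jol-core dependency (org.openjdk.jol) is on the classpath:

```java
import org.openjdk.jol.info.ClassLayout;

public class ObjectSizeDemo {

    static class Example {
        int a;        // 4 bytes
        long b;       // 8 bytes
        Object ref;   // 4 or 8 bytes depending on compressed oops
    }

    public static void main(String[] args) {
        // Prints the object header, field offsets, padding and total
        // instance size as laid out by the current HotSpot JVM.
        System.out.println(ClassLayout.parseClass(Example.class).toPrintable());
    }
}
```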

Depending on which garbage collector the JVM uses, the way memory is allocated in the heap varies with the collection algorithm. For example, with a mark-sweep collector the heap may contain a large number of discontinuous memory fragments, so a "free list" has to be consulted to find a suitable free area for a new object; with a compacting collector the free space is contiguous and allocation is just a matter of moving a pointer forward. (This is not the focus of this article, so you can study it on your own.)
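To make the contrast concrete, here is a toy model, in plain Java and unrelated to the real HotSpot implementation, of the two lookup strategies:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model only: "addresses" and sizes are plain longs, not real memory.
public class AllocationStrategies {

    // Bump-the-pointer: used and free space are contiguous (e.g. after a
    // compacting collection), so allocating just moves a cursor forward.
    static class BumpThePointer {
        private long top;
        private final long limit;

        BumpThePointer(long limit) { this.limit = limit; }

        long allocate(long size) {
            if (top + size > limit) return -1;   // region exhausted
            long address = top;
            top += size;
            return address;
        }
    }

    // Free list: after a non-compacting (mark-sweep) collection the free
    // space is fragmented, so a block that is large enough must be found.
    static class FreeList {
        record Block(long address, long size) {}

        private final List<Block> free = new ArrayList<>(
                List.of(new Block(0, 32), new Block(100, 64)));

        long allocate(long size) {
            for (int i = 0; i < free.size(); i++) {
                Block b = free.get(i);
                if (b.size() >= size) {
                    free.remove(i);
                    if (b.size() > size) {       // keep the unused remainder
                        free.add(new Block(b.address() + size, b.size() - size));
                    }
                    return b.address();
                }
            }
            return -1;                           // no block is large enough
        }
    }

    public static void main(String[] args) {
        BumpThePointer compacted = new BumpThePointer(1024);
        System.out.println("bump: " + compacted.allocate(48) + ", " + compacted.allocate(48));

        FreeList fragmented = new FreeList();
        System.out.println("free list: " + fragmented.allocate(48));
    }
}
```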

Either way, the virtual machine eventually has to settle on a region of memory to hand to the newly created object. As we know, allocating memory for an object mainly consists of claiming such a region, pointing the object's reference at it, and then performing the initialization.

So here’s the question:

How is this memory allocation process kept thread-safe in concurrent scenarios? What happens if two threads try to claim the same memory region for their objects one after the other?
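As a toy demonstration of what can go wrong, the following sketch (made-up names, and "addresses" are just long values) bumps an unsynchronized cursor from two threads and usually hands out the same address more than once:

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Toy demonstration of the race: the cursor is bumped without any
// synchronization, so two threads can be handed the same "address".
public class UnsafeBump {

    static long top = 0;

    static long allocate(long size) {
        long address = top;        // read the cursor...
        top = address + size;      // ...and write it back; not atomic as a pair
        return address;
    }

    public static void main(String[] args) throws InterruptedException {
        Set<Long> handedOut = ConcurrentHashMap.newKeySet();
        AtomicLong collisions = new AtomicLong();

        Runnable worker = () -> {
            for (int i = 0; i < 100_000; i++) {
                if (!handedOut.add(allocate(8))) {
                    collisions.incrementAndGet();   // same address given out twice
                }
            }
        };

        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // Usually prints a non-zero number: distinct objects shared an address.
        System.out.println("duplicate addresses handed out: " + collisions.get());
    }
}
```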

TLAB

There are generally two solutions:

  • 1. Synchronize the action of carving memory out of the heap: use a CAS mechanism and retry on failure so that the update of the allocation pointer is atomic.
  • 2. Have each thread pre-allocate a small chunk of the Java heap, allocate objects directly inside its own “private” chunk, and grab a new “private” chunk only when the current one runs out.

Scheme 1 requires synchronous control at each allocation, which is inefficient.
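Scheme 1 can be sketched roughly as follows: the shared "top" cursor is an AtomicLong, and every allocation retries its CAS until it wins (a simplified model, not HotSpot source):

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of scheme 1: every allocation does a CAS on the shared "top"
// cursor and retries on failure, so no two threads get the same region.
public class CasAllocator {

    private final AtomicLong top = new AtomicLong(0);
    private final long limit;

    CasAllocator(long limit) {
        this.limit = limit;
    }

    /** Returns the start "address" of the claimed region, or -1 if full. */
    long allocate(long size) {
        while (true) {
            long current = top.get();
            long next = current + size;
            if (next > limit) {
                return -1;                        // heap region exhausted
            }
            if (top.compareAndSet(current, next)) {
                return current;                   // CAS won: the region is ours
            }
            // CAS lost: another thread moved the cursor first, so retry.
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CasAllocator heap = new CasAllocator(1_000_000);
        Runnable worker = () -> {
            for (int i = 0; i < 1_000; i++) {
                heap.allocate(16);
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // 2 threads * 1,000 allocations * 16 "bytes" each = 32,000.
        System.out.println("bytes handed out: " + heap.top.get());
    }
}
```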

Scheme 2 is what the HotSpot VM uses; the per-thread chunk is called a Thread Local Allocation Buffer (TLAB). The buffer is carved out of the heap, but is exclusive to its local thread.

It is worth noting that although we say the TLAB is thread-exclusive, it is exclusive only for the "allocate" action; reading the objects in it, garbage collecting them, and so on are still shared across threads. From the point of view of using the objects, there is no difference.

In addition, TLABs only cover the Eden space of the young generation. Newly created objects are placed there first, but large objects for which the young generation cannot allocate memory go straight into the old generation. So when writing Java programs, allocating several small objects is often more efficient than allocating one large object.

Therefore, although an object may initially have its memory allocated through a TLAB and live in the Eden area, it will still be garbage collected, or moved to the Survivor space, the old generation, and so on, just like any other object.

With TLABs, each thread allocates objects inside its own buffer, so there is no conflict. However, the step of carving the TLAB itself out of the heap can still run into the same memory-safety issue.

**Therefore, synchronization control is still needed when allocating the TLAB itself.** But this overhead is much lower than synchronizing every single object allocation.
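Putting the two ideas together, here is a hedged sketch of how a TLAB-style scheme might look: each thread bump-allocates inside its own buffer with no synchronization at all, and only the refill step, carving a new buffer out of the shared heap, goes through the CAS path shown earlier. The class and field names are invented for illustration; the real HotSpot implementation lives in C++ and is considerably more involved.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy TLAB model: the shared heap cursor is touched only when a thread
// needs a fresh buffer; ordinary allocations are an unsynchronized bump.
public class TlabAllocator {

    private static final long TLAB_SIZE = 1024;

    private final AtomicLong sharedTop = new AtomicLong(0);
    private final long heapLimit;

    // Each thread's private buffer: the range [cursor, end) is still free.
    private static final class Tlab {
        long cursor;
        long end;
    }

    private final ThreadLocal<Tlab> tlab = ThreadLocal.withInitial(Tlab::new);

    TlabAllocator(long heapLimit) {
        this.heapLimit = heapLimit;
    }

    long allocate(long size) {
        if (size > TLAB_SIZE) {
            // Too large for a TLAB: go straight to the shared heap,
            // roughly what happens to large objects on a real JVM.
            return allocateShared(size);
        }
        Tlab t = tlab.get();
        if (t.cursor + size > t.end) {
            // Buffer exhausted: refill from the shared heap. Only this
            // step needs synchronization (the CAS retry loop below).
            long start = allocateShared(TLAB_SIZE);
            if (start < 0) return -1;            // heap exhausted
            t.cursor = start;
            t.end = start + TLAB_SIZE;
        }
        long address = t.cursor;                 // private bump: no lock, no CAS
        t.cursor += size;
        return address;
    }

    private long allocateShared(long size) {
        while (true) {
            long current = sharedTop.get();
            long next = current + size;
            if (next > heapLimit) return -1;
            if (sharedTop.compareAndSet(current, next)) return current;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        TlabAllocator heap = new TlabAllocator(1 << 20);
        Runnable worker = () -> {
            for (int i = 0; i < 10_000; i++) {
                heap.allocate(16);
            }
        };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();

        System.out.println("heap bytes reserved for TLABs: " + heap.sharedTop.get());
    }
}
```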

Whether the virtual machine uses TLABs is optional and can be controlled with the -XX:+UseTLAB / -XX:-UseTLAB parameter.
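On a HotSpot-based JDK you can also check at runtime whether the flag is on, for example via the JDK-specific HotSpotDiagnosticMXBean; a small sketch:

```java
import com.sun.management.HotSpotDiagnosticMXBean;

import java.lang.management.ManagementFactory;

public class CheckUseTlab {

    public static void main(String[] args) {
        // HotSpot-specific diagnostic bean; not available on every JVM.
        HotSpotDiagnosticMXBean bean =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);
        System.out.println("UseTLAB = " + bean.getVMOption("UseTLAB").getValue());
    }
}
```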

Summary

To keep memory allocation for Java objects thread-safe while staying efficient, each thread is given a small portion of the Java heap in advance. This portion of memory is called a TLAB (Thread Local Allocation Buffer). Only the owning thread allocates from it; reading, using, and collecting the objects in it remain shared across threads.

You can enable or disable TLAB allocation with the -XX:+UseTLAB / -XX:-UseTLAB parameter.
