I. Why do we talk about this?

Having summarized AQS earlier, I will now give this area a passing review. This article starts from the following frequently asked interview questions:

What does the memory layout of an object look like?
Describe the underlying implementation of synchronized and ReentrantLock, and the underlying principle of reentrancy.
Speaking of AQS, why is the bottom layer of AQS CAS + volatile?
Describe the four states of a lock and the lock upgrade process.
How many bytes does Object o = new Object() occupy in memory?
Is a spin lock more efficient than a heavyweight lock?
Does turning on biased locking necessarily improve efficiency?
What makes a heavyweight lock "heavy"?
When is a heavyweight lock more efficient than a lightweight lock, and vice versa?

II. What exactly happens with a lock?

A lock we use without even realizing it (PrintStream.println):

public void println(String x) {
    synchronized (this) {
        print(x);
        newLine();
    }
}

What actually happens when a simple lock is taken?

To figure out what is going on with a lock, you need to look at the object's layout in memory after it has been created.

A new object is divided into four main parts in memory:

Mark Word: this part is the core of locking. It also contains lifecycle information about the object, such as its GC state and how many Young GCs it has survived (its generational age).

Klass Pointer: a pointer to the object's class metadata.

Instance Data: the object's field data.

Padding: on the 64-bit server VM, an object's size must be divisible by 8 bytes. If it is not, alignment padding makes up the difference. For example, if an object needs only 18 bytes, 6 bytes of padding bring it to 24, which is divisible by 8.
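The 8-byte alignment rule above can be sketched as a one-line rounding calculation. This is illustrative arithmetic only, not JVM code; the class and method names are made up for the example.

```java
// Sketch of 8-byte alignment padding (illustrative, not actual JVM code)
public class AlignDemo {
    static int alignTo8(int rawSize) {
        // Round up to the next multiple of 8, as the JVM does for object sizes
        return (rawSize + 7) & ~7;
    }

    public static void main(String[] args) {
        System.out.println(alignTo8(18)); // 18 bytes -> padded to 24
        System.out.println(alignTo8(16)); // already aligned -> stays 16
    }
}
```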

With these four parts in mind, let's verify them at the bottom layer with the help of the third-party package JOL (Java Object Layout). A few simple lines of code show what the memory layout looks like:

import org.openjdk.jol.info.ClassLayout;

public class JOLDemo {
    private static Object o;

    public static void main(String[] args) {
        o = new Object();
        synchronized (o) {
            System.out.println(ClassLayout.parseInstance(o).toPrintable());
        }
    }
}

Print out the results:

From the output results:

1) The object header occupies 12 bytes, printed as 3 lines of 4 bytes: the first 2 lines (8 bytes) are the Mark Word, and the third line is the Klass pointer. Note that the low lock bits change from 001 to 00 before and after locking: the Mark Word uses its 8 bytes (64 bits) to record lock information, and taking the lock modifies its content, moving from the 001 lock-free state to the 00 lightweight-locked state.

2) new Object() occupies 16 bytes: the header takes 12 bytes, instance data takes 0 bytes (Object has no fields), and 4 bytes of alignment padding round the total up to a multiple of 8.

Expansion: what kind of object enters the old generation? There are many scenarios; an object that is too big, for example, can enter directly. But here we want to discuss why an object can survive at most 15 Young GCs before being promoted to the old generation (the threshold is adjustable; the default is 15). As the chart above shows, the HotSpot Mark Word uses 4 bits for the generational age, so the largest representable value is 15 (range 0-15). That is why the tenuring age cannot exceed 15. It can be adjusted with -XX:MaxTenuringThreshold, but we generally leave it alone.
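The 4-bit limit on generational age is simple bit arithmetic; a tiny sketch (class name is made up for illustration) shows why the maximum is 15:

```java
// Why HotSpot's generational age caps at 15: it is stored in 4 bits of the Mark Word
public class AgeBitsDemo {
    public static void main(String[] args) {
        int ageBits = 4;                 // bits the Mark Word reserves for the age
        int maxAge = (1 << ageBits) - 1; // largest unsigned value 4 bits can hold
        System.out.println(maxAge);      // 15 -> the ceiling for MaxTenuringThreshold
    }
}
```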

III. The lock upgrade process

1. Verifying the lock upgrade

Before exploring lock upgrades, let's run an experiment: two pieces of code whose only difference is that one sleeps for 5 seconds first and the other does not. Let's see whether there is a difference.

import org.openjdk.jol.info.ClassLayout;

public class JOLDemo {
    private static Object o;

    public static void main(String[] args) {
        o = new Object();
        synchronized (o) {
            System.out.println(ClassLayout.parseInstance(o).toPrintable());
        }
    }
}

import org.openjdk.jol.info.ClassLayout;

public class JOLDemo {
    private static Object o;

    public static void main(String[] args) {
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        o = new Object();
        synchronized (o) {
            System.out.println(ClassLayout.parseInstance(o).toPrintable());
        }
    }
}

Will the two produce different output? See the results after running them:

Interestingly, after the main thread sleeps for 5 seconds, the object's memory layout differs from the no-sleep case.

The reason is a JDK 1.8 default: biased locking is only enabled after a startup delay of about 4 seconds. Within those first 4 seconds there is no biased locking, so synchronized goes straight to a lightweight lock; after the delay, a newly created object can be biased.

This raises a few questions:

Why upgrade locks at all? Wasn't synchronized simply a heavyweight lock before: either no lock or a heavyweight one?

Since locks taken within the first 4 seconds go straight to lightweight, could we do without biased locking altogether? Why does biased locking exist?

Why is biased locking only enabled after 4 seconds?

Question 1: Why upgrade locks at all?

First off, it is clear that locking in early JDK versions (around 1.2) was very inefficient. At that time synchronized was a heavyweight lock: to acquire it, the thread had to make a system call into the operating system kernel, join a queue to wait its turn, and only then return to user mode.

Because directly accessing hardware is dangerous (formatting a disk, touching the network card, or writing arbitrary memory can bring the machine down), the operating system divides execution into two levels for safety: user mode and kernel mode. When applying for lock resources, user mode must ask the kernel. In JDK 1.2, every lock acquisition went through the kernel and back, which was very time-consuming and made early synchronized particularly slow. But why hand work to the operating system that the JVM can do itself? Pulling the lock handling that the JVM can manage out of the kernel to improve efficiency is exactly what lock optimization is about.

Question 2: Why do we have biased locking?

In fact, this essentially comes down to probability. Statistically, when we use a synchronized lock in daily code, there is usually only one thread taking it; System.out.println and StringBuffer, for example, are synchronized underneath, yet there is almost no multi-thread contention on them. In that case there is no need to go up to the lightweight-lock level. "Biased" means the first thread to take the lock marks its thread information in the lock, so the next time it comes in there is no need to acquire the lock again. If another thread contends for the lock, the bias is revoked and the lock is upgraded to a lightweight lock. Strictly speaking, I think a biased lock is not a real lock at all, since it only applies while a single thread is accessing the shared resource.

Locks we use without realizing it:

// StringBuffer
public synchronized int length() {
    return count;
}

// PrintStream
public void println(String x) {
    synchronized (this) {
        print(x);
        newLine();
    }
}

Question 3: Why does JDK8 enable biased locking after 4s?

In fact, this is a compromise. At the very start of a program, many threads typically contend for locks; if biased locking were enabled then, efficiency would drop, which is why the program above only shows a biased lock after the 5-second sleep. Biased locking reduces efficiency under contention because it adds extra steps: once multiple threads scramble for the shared resource, the bias must be revoked and the lock upgraded to a lightweight lock, and that revocation-plus-upgrade costs time. Why 4 seconds? It is an empirically chosen, statistics-based value.

Of course, we can disable biased locking with the JVM flag -XX:-UseBiasedLocking. Biased locking has been disabled by default (and deprecated) since JDK 15. This article does its lock-upgrade verification in a JDK 8 environment.
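For experimenting with the behavior above on JDK 8, the relevant HotSpot flags can be passed on the command line (the class name JOLDemo here is just the demo class from earlier):

```shell
# Disable biased locking entirely
java -XX:-UseBiasedLocking JOLDemo

# Remove the ~4-second startup delay so biasing applies immediately
java -XX:BiasedLockingStartupDelay=0 JOLDemo

# Inspect the current defaults
java -XX:+PrintFlagsFinal -version | grep BiasedLocking
```

With BiasedLockingStartupDelay=0, the no-sleep version of the demo should also show a biasable Mark Word.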

2. The upgrade process of locks

We have already demonstrated an object's journey from the moment it is created: lock-free state -> biased lock (if enabled) -> lightweight lock. As contention continues, the lightweight lock is upgraded to a heavyweight lock. A lightweight lock is essentially a CAS (Compare And Swap) spin lock. One of the simplest examples in concurrent programming is the atomic class AtomicInteger: operations like ++ sit on top of a CAS loop underneath.

// AtomicInteger
public final int getAndIncrement() {
    return unsafe.getAndAddInt(this, valueOffset, 1);
}

// Unsafe
public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));
    return var5;
}
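To see that CAS retry loop deliver correct results without any monitor lock, here is a small runnable sketch using AtomicInteger from the JDK (the class and method names of the demo itself are made up):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasCounterDemo {
    // Two threads increment a shared AtomicInteger; CAS retries resolve conflicts
    static int countWithTwoThreads(int perThread) {
        AtomicInteger counter = new AtomicInteger(0);
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                counter.getAndIncrement(); // CAS loop underneath, no synchronized needed
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter.get();
    }

    public static void main(String[] args) {
        System.out.println(countWithTwoThreads(10_000)); // 20000: no lost updates
    }
}
```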

Question 4: Under what circumstances should a lightweight lock be upgraded to a heavyweight lock?

The first thing to consider is that a lightweight lock serves multiple threads by spinning, and it is upgraded to a heavyweight lock when the spinning can no longer carry the load. When is that? 1. When there are too many threads: with, say, 10,000 threads, how long would it take for a CAS exchange to succeed? Meanwhile the CPU burns enormous resources switching among 10,000 live spinning threads, so it is better to put the waiters to sleep in a queue and wake them later. 2. When CAS spins 10 times without acquiring the lock, the lock is also upgraded to heavyweight.

In general, both cases trigger the upgrade from lightweight to heavyweight: 10 failed spins, or more threads waiting for CPU scheduling than half the number of CPU cores, automatically upgrades the lock. (These fixed thresholds describe older HotSpot versions; later JVMs use adaptive spinning.) To see the number of CPU cores on a server, run the top command and press 1.
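The spinning that a lightweight lock performs can be sketched with a hand-rolled CAS spin lock. This is only an illustration of the idea, assuming a made-up class name; it is not the JVM's actual implementation:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Minimal CAS spin lock: a sketch of the idea behind lightweight locking
public class SpinLockDemo {
    private final AtomicBoolean locked = new AtomicBoolean(false);
    private int shared = 0;

    void lock() {
        // Busy-wait until CAS flips false -> true. Cheap when the lock is held briefly,
        // but with many threads the spinning itself burns CPU -- which is why the JVM
        // eventually inflates to a heavyweight (blocking) lock.
        while (!locked.compareAndSet(false, true)) {
            // spin
        }
    }

    void unlock() {
        locked.set(false);
    }

    static int runTwoThreads(int perThread) {
        SpinLockDemo lock = new SpinLockDemo();
        Runnable task = () -> {
            for (int i = 0; i < perThread; i++) {
                lock.lock();
                try {
                    lock.shared++; // protected by the spin lock
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start();
        t2.start();
        try {
            t1.join();
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return lock.shared;
    }

    public static void main(String[] args) {
        System.out.println(runTwoThreads(10_000)); // 20000: no lost updates
    }
}
```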

Question 5: synchronized is said to be a heavyweight lock, so what exactly is heavy about it?

Lock scheduling and synchronization are handed directly to the operating system, where threads must queue before executing; in addition, the operating system consumes significant resources starting and switching threads. All of this makes the lock relatively heavy.

The whole lock upgrade process is shown in the figure:

IV. The low-level implementation of synchronized

After looking at the object memory layout above, we know that the state of the lock is mainly stored in Markword. Here we look at the underlying implementation.

public class RnEnterLockDemo {
    public void method() {
        synchronized (this) {
            System.out.println("start");
        }
    }
}

Disassemble this simple class and see what happens: javap -c RnEnterLockDemo.class

The first thing we can confirm is that synchronized does indeed lock: the bytecode contains both monitorenter and monitorexit, which we can reasonably assume correspond to locking and unlocking. What is interesting is that there is 1 monitorenter but 2 monitorexit. Why? Normally one lock should pair with one release. This is also a difference between synchronized and Lock: synchronized is a JVM-level lock, and if an exception occurs before the lock is released, the JVM releases it automatically via that extra monitorexit on the exception path. With Lock, you must catch the exception and release the lock manually.
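The Lock side of that contrast is the familiar try/finally idiom: the finally block plays the role of the compiler-inserted second monitorexit. A small runnable sketch (the demo class and method names are made up):

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockReleaseDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    static int safeDivide(int a, int b) {
        lock.lock();
        try {
            return a / b;  // may throw ArithmeticException
        } finally {
            lock.unlock(); // always runs, like the exception-path monitorexit
        }
    }

    public static void main(String[] args) {
        System.out.println(safeDivide(10, 2)); // 5
        try {
            safeDivide(1, 0);
        } catch (ArithmeticException e) {
            // finally released the lock even though the division threw
            System.out.println(lock.isLocked()); // false
        }
    }
}
```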

For the purpose of these two directives, we refer directly to the JVM specification:

monitorenter:

Each object is associated with a monitor. A monitor is locked if and only if it has an owner. The thread that executes monitorenter attempts to gain ownership of the monitor associated with objectref, as follows:
• If the entry count of the monitor associated with objectref is zero, the thread enters the monitor and sets its entry count to one. The thread is then the owner of the monitor.
• If the thread already owns the monitor associated with objectref, it reenters the monitor, incrementing its entry count.
• If another thread already owns the monitor associated with objectref, the thread blocks until the monitor's entry count is zero, then tries again to gain ownership.

Translation:

Each object is associated with a monitor lock. The monitor is locked when it is occupied, and a thread executing monitorenter attempts to gain ownership of the monitor as follows:

If the entry number of Monitor is 0, the thread enters Monitor, and then sets the entry number to 1, making the thread the owner of Monitor.

If a thread already possesses the Monitor and is simply re-entering, the number of entries into the Monitor is increased by 1.

If another thread already owns the Monitor, the thread blocks until the entry number of Monitor reaches zero, and then tries again to take ownership of the Monitor.

monitorexit:

The thread that executes monitorexit must be the owner of the monitor associated with the instance referenced by objectref. The thread decrements the entry count of the monitor associated with objectref. If as a result the value of the entry count is zero, the thread exits the monitor and is no longer its owner. Other threads that are blocking to enter the monitor are allowed to attempt to do so.

Translation:

The thread that executes monitorexit must be the owner of the monitor associated with objectref. It decrements the monitor's entry count by 1; if the count reaches 0, the thread exits the monitor and is no longer its owner. Other threads blocked on the monitor may then attempt to take ownership.

Synchronized is implemented with the Monitor object, and the wait/notify methods also depend on it. This is why wait/notify can only be called inside a synchronized block or method; otherwise java.lang.IllegalMonitorStateException is thrown.
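That rule is easy to demonstrate: calling notify without owning the monitor throws, while the same call inside a synchronized block succeeds (the demo class name is made up):

```java
public class MonitorStateDemo {
    // Returns true if notify() without monitor ownership throws, as the spec requires
    static boolean throwsWithoutMonitor() {
        Object o = new Object();
        try {
            o.notify(); // we do not own o's monitor here
            return false;
        } catch (IllegalMonitorStateException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(throwsWithoutMonitor()); // true
        Object o = new Object();
        synchronized (o) {
            o.notify(); // legal: this thread owns o's monitor
            System.out.println("ok");
        }
    }
}
```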

Each lock object has a lock counter and a pointer to the thread holding the lock.

When monitorenter executes, if the target object's counter is 0, the lock is not held by any thread: the JVM sets the lock object's holder to the current thread and increments the counter to 1. If the counter is not 0 and the holding thread is the current thread, the JVM simply increments the counter again; otherwise the current thread waits until the holder releases the lock. When monitorexit executes, the JVM decrements the lock object's counter by 1; a counter of 0 means the lock has been released.
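The "increment the counter again" branch is exactly what makes synchronized reentrant: the same thread can enter two synchronized methods on the same monitor without deadlocking. A minimal sketch (class and method names are made up):

```java
public class ReentrantMonitorDemo {
    static synchronized int outer() {
        // This thread already owns the class monitor; calling inner()
        // re-enters it (entry count 1 -> 2) instead of deadlocking.
        return 1 + inner();
    }

    static synchronized int inner() {
        return 1;
    }

    public static void main(String[] args) {
        System.out.println(outer()); // 2: both methods ran under the same monitor
    }
}
```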

Conclusion

In the past, synchronized was regarded as a heavyweight lock. That was true in early JDK versions, but it was found to be too heavy and to consume too many operating-system resources, so synchronized was optimized. Today we can simply use it: the JVM already chooses the appropriate lock weight underneath.

Finally, look back at the questions at the beginning: are they all clear now? Researching with a question in mind often makes things clearer. I hope this helps.