Before JDK 1.6, synchronized performed poorly, and at that time we often chose Lock instead of synchronized. This changed with JDK 1.6, which brought a series of optimizations to synchronized; that is one of the main reasons synchronized is still so widely used today. Besides performance, synchronized is also very convenient to use, which is another important part of its popularity.

Lock inflation is one of the most effective optimizations behind synchronized's improved performance (other optimizations will be covered later). But what exactly is lock inflation, and how does it work? Let's look at the details.


In JDK 1.5 and earlier, synchronized was implemented through a Monitor lock, which in turn relied on the underlying operating system's Mutex Lock. Acquiring and releasing a Mutex Lock requires switching from user mode to kernel mode, which is costly and slow; locks built on the operating system's Mutex Lock are therefore called "heavyweight locks."

What is user mode and kernel mode?

User mode: when a process is executing the user's own code, it is said to be in user mode. Kernel mode: when a process executes a system call and runs kernel code, it is said to be in kernel mode, and the processor executes at its most privileged level.

Why are there kernel and user modes?

If there were no kernel mode and user mode, programs could read and write hardware resources at will, for example allocating memory and writing to it anywhere. If a programmer accidentally wrote to the wrong place, the whole system would likely crash.

With the distinction between user mode and kernel mode, an operation a program requests must pass a series of checks before it is allowed to touch protected resources, so a stray write can no longer bring down the whole system. The separation of kernel mode and user mode thus lets programs run more safely, but switching between the two modes incurs some performance overhead.

Lock inflation

In JDK 1.6, the "biased lock" and "lightweight lock" states were introduced to reduce the performance cost of acquiring and releasing locks, and a synchronized lock can be in one of the following four states:

  1. Unlocked
  2. Biased lock
  3. Lightweight lock
  4. Heavyweight lock

The lock levels are upgraded in the order described above, a process known as lock inflation.

PS: To date, lock upgrading is one-way: a lock can only go from a lower state to a higher one (unlocked -> biased lock -> lightweight lock -> heavyweight lock); it is never downgraded.

Why does lock inflation improve synchronized's performance? The answer will come naturally once we understand each of these lock states, so let's take a look at them.

1. Biased locking

Biased locking was introduced to make it cheaper for a thread to acquire a lock: the HotSpot authors found through research and practice that, in most cases, a lock is not contended by multiple threads and is always acquired by the same thread repeatedly.

A biased lock "biases" toward the first thread that acquires it. If only that one thread ever enters the lock and there is no contention from other threads, no real synchronization work is needed; the lock simply remains biased toward that thread.

Biased lock execution process

When a thread first acquires the lock on entering a synchronized block, the thread ID of the biased lock is stored in the Mark Word of the object header. From then on, when the thread enters and exits the synchronized block, no CAS operation is needed to lock or unlock; instead, the JVM simply checks whether the Mark Word holds a bias toward the current thread:

  1. If the thread ID stored in the Mark Word matches the current thread's ID, the thread enters the synchronized block directly.
  2. If the thread IDs differ, a CAS is attempted to acquire the lock; if the CAS succeeds, the thread enters the synchronized block.
  3. If the CAS fails, the lock is upgraded to a lightweight lock.
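Nothing special is needed in Java code to benefit from biasing; the pattern it optimizes is simply one thread re-entering the same lock with no contention. A minimal sketch (the class and method names here are illustrative, not JVM API):

```java
public class BiasedLockPattern {
    private static final Object lock = new Object();

    // One thread acquires the same lock repeatedly with no contention --
    // after the first acquisition, a biased lock reduces each re-entry to
    // a Mark Word thread-ID check instead of an atomic operation.
    static int run() {
        int count = 0;
        for (int i = 0; i < 1_000; i++) {
            synchronized (lock) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 1000
    }
}
```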

The advantages of biased locking

Biased locking minimizes unnecessary lock operations when there is no multithreaded contention: ordinary lock acquisition and release depend on repeated CAS atomic instructions, whereas a biased lock needs only a single CAS, executed once when the thread ID is first installed.
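For reference, the kind of atomic compare-and-set that biased locking performs once (to install the owning thread ID) is the same primitive Java exposes through `java.util.concurrent.atomic`:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasExample {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(0);
        // compareAndSet(expected, new) succeeds atomically only if the
        // current value equals `expected` -- a single CAS instruction.
        boolean first = value.compareAndSet(0, 42);  // succeeds
        boolean second = value.compareAndSet(0, 99); // fails: value is 42, not 0
        System.out.println(first + " " + second + " " + value.get()); // true false 42
    }
}
```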

Extension: the Mark Word and object memory layout

In the HotSpot virtual machine, the layout of objects stored in memory can be divided into the following three areas:

  1. Object Header
  2. Instance Data
  3. Alignment Padding

The object header itself contains:

  1. Mark Word: This is where our biased locking information is stored.
  2. Klass Pointer (Pointer to a Class)

In JDK 1.6, biased locking is enabled by default. You can disable it with the JVM option `-XX:-UseBiasedLocking`.
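For example, to launch an application with biased locking explicitly disabled (the class name `MyApp` is a placeholder; note that biased locking was later deprecated in JDK 15):

```shell
# run with biased locking turned off
java -XX:-UseBiasedLocking MyApp
```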

2. Lightweight locks

Lightweight locking was introduced to reduce the performance cost of traditional heavyweight locks, which rely on the operating system's Mutex Lock, in cases where there is no real multithreaded contention. If every lock acquisition and release went through a Mutex Lock, each would switch between user mode and kernel mode, at a high cost to the system.

When biased locking is disabled, or when another thread contends for a biased lock, the lock is upgraded to a lightweight lock. A lightweight lock is acquired and released with CAS operations, and a contending thread may spin a bounded number of times trying to acquire it.
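The JVM performs this CAS on the object's Mark Word, which is not visible from Java code, but the spin-and-CAS shape can be sketched with a toy spin lock (illustrative only, not the JVM's actual implementation):

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy spin lock: acquire by CAS, and spin (retry) while the CAS fails.
// A real lightweight lock CASes a pointer to a stack-based lock record
// into the object's Mark Word instead.
public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait(); // busy-wait hint; a real JVM bounds the spinning
        }
    }

    public void unlock() {
        // only the owning thread can release the lock
        owner.compareAndSet(Thread.currentThread(), null);
    }
}
```

Unlike this toy, the JVM does not spin forever: after a bounded number of failed spins, the lightweight lock inflates into a heavyweight lock.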

Points to note

It is important to note that lightweight locking is not intended to replace heavyweight locking; it is intended to avoid the cost of a heavyweight lock when there is no real multithreaded contention. Lightweight locks suit the scenario where threads execute synchronized blocks alternately; if multiple threads access the same lock at the same time, the lightweight lock inflates into a heavyweight lock.

3. Heavyweight locks

synchronized relies on the Monitor to implement both method synchronization and code-block synchronization. Code-block synchronization uses the monitorenter and monitorexit instructions: after compilation, monitorenter is inserted at the start of the synchronized block, while monitorexit is inserted at the normal end of the block and at the exception exit. Every object has a Monitor associated with it, and when that Monitor is held, the object is locked.

Take the following code as an example:

```java
public class SynchronizedToMonitorExample {
    public static void main(String[] args) {
        int count = 0;
        synchronized (SynchronizedToMonitorExample.class) {
            for (int i = 0; i < 10; i++) {
                count++;
            }
        }
        System.out.println(count);
    }
}
```

Compiling the code above and examining the bytecode of the main method, we find a monitorenter instruction paired with monitorexit instructions: one monitorexit for the normal exit path and another for the exception path. From this we can see that synchronized depends on the Monitor lock. The Monitor lock, in turn, relies on the operating system's Mutex Lock, which switches between user mode and kernel mode each time the lock is acquired and released, adding to the system's performance overhead.
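For reference, the relevant portion of the `javap -c SynchronizedToMonitorExample` output looks roughly like this (exact offsets vary by compiler version):

```
public static void main(java.lang.String[]);
  Code:
     ...
     2: ldc           #2    // class SynchronizedToMonitorExample
     4: dup
     5: astore_2
     6: monitorenter        // acquire the Monitor on entering the block
     ...
       monitorexit         // release on the normal exit path
     ...
       monitorexit         // release again on the exception path
     ...
```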


synchronized's performance was optimized in JDK 1.6, and lock inflation is one of the key mechanisms behind that improvement. Lock inflation refers to the process in which a lock is upgraded from the unlocked state to a biased lock, then to a lightweight lock, and finally to a heavyweight lock. In most cases, the states before the heavyweight lock significantly improve synchronized's performance.

Recommended articles in this series

  1. Concurrency Lesson 1: Thread in Detail
  2. What’s the difference between user threads and daemon threads in Java?
  3. Get a deeper understanding of ThreadPool
  4. 7 ways to create a thread pool, highly recommended…
  5. How far has pooling technology reached? I was surprised to see the comparison between threads and thread pools!
  6. Thread synchronization and locking in concurrency
  7. Synchronized = “this” and “class”
  8. The difference between volatile and synchronized
  9. Are lightweight locks faster than weight locks?
  10. How can terminating a thread like this lead to service downtime?
  11. 5 Solutions to SimpleDateFormat Thread Insecurity!
  12. ThreadLocal doesn’t work? Then you are useless!
  13. ThreadLocal memory overflow code demonstration and cause analysis!
  14. Semaphore confessions: I was right to use the current limiter!
  15. CountDownLatch: Don’t wave. Wait till you’re all together!
  16. CyclicBarrier: When all the drivers are ready, they can start.

Follow the WeChat public account "Java Chinese community" for more interesting and informative Java concurrency articles.