1. Synchronized is a keyword in Java that can be applied to code blocks and to methods (both instance methods and static methods). It serves three purposes: (1) it guarantees mutually exclusive access by threads to the synchronized code; (2) it guarantees that modifications to shared variables become visible to other threads in time; (3) it effectively solves the reordering problem.

The basic use of Synchronized is not described in this article.

Principle of Synchronized:

Here is the code:

public class SynchronizedDemo {
    public void method() {
        synchronized (this) {
            System.out.println("Method 1 start");
        }
    }
}

Decompile this code (for example with javap -c):
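
An abridged, approximate javap -c listing for the method above looks roughly like this (bytecode offsets and constant-pool indices depend on the compiler, so treat it as a sketch rather than exact output):

public void method();
  Code:
     0: aload_0
     1: dup
     2: astore_1
     3: monitorenter
     4: getstatic     #2      // Field java/lang/System.out:Ljava/io/PrintStream;
     7: ldc           #3      // String Method 1 start
     9: invokevirtual #4      // Method java/io/PrintStream.println:(Ljava/lang/String;)V
    12: aload_1
    13: monitorexit
    14: goto          22
    17: astore_2
    18: aload_1
    19: monitorexit
    20: aload_2
    21: athrow
    22: return

Note the second monitorexit on the exception-handler path: the compiler emits it so the monitor is released even if the block throws.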

Two instructions appear in the listing: monitorenter and monitorexit.

Refer to the JVM specification description

Monitorenter:

Each object has a monitor associated with it. The monitor is locked while it is occupied. A thread that executes the monitorenter instruction attempts to acquire ownership of the monitor as follows:

1. If the monitor's entry count is 0, the thread enters the monitor and sets the entry count to 1; the thread is then the owner of the monitor.

2. If the thread already owns the monitor and is simply re-entering it, the entry count is incremented by 1.

3. If another thread already occupies the monitor, the thread blocks until the entry count drops to 0, then tries again to acquire ownership of the monitor.

Monitorexit:

The thread executing monitorexit must be the owner of the monitor associated with objectref. When the instruction executes, the entry count is decremented by 1. If the count reaches 0, the thread exits the monitor and is no longer its owner; other threads blocked on the monitor may then try to acquire ownership.
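
The JVM implements monitors natively, but to make the entry-count rules above concrete, here is a toy sketch in plain Java (the class ToyMonitor and its enter/exit methods are invented for illustration and are not the JVM's implementation):

// A toy model of the owner + entry-count rules described above.
public final class ToyMonitor {
    private Thread owner;      // null means the entry count is 0
    private int entryCount;

    public synchronized void enter() throws InterruptedException {
        Thread current = Thread.currentThread();
        if (owner == current) {            // rule 2: re-entry by the owner
            entryCount++;
            return;
        }
        while (owner != null) {            // rule 3: another thread owns it, block
            wait();
        }
        owner = current;                   // rule 1: take ownership
        entryCount = 1;
    }

    public synchronized void exit() {
        if (owner != Thread.currentThread()) {
            throw new IllegalMonitorStateException();
        }
        if (--entryCount == 0) {           // monitorexit: count reaches 0, release
            owner = null;
            notifyAll();                   // let blocked threads retry
        }
    }
}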

Synchronized semantics are implemented through a monitor object. In fact, wait/notify and related methods also depend on the monitor object, which is why they can only be called inside a synchronized block or method; otherwise a java.lang.IllegalMonitorStateException is thrown.
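
A minimal sketch of this rule (class and field names are made up for illustration): calling wait on an object is only legal while the calling thread holds that object's monitor.

public class WaitNotifyDemo {
    private final Object lock = new Object();

    public void waitCorrectly() throws InterruptedException {
        synchronized (lock) {
            lock.wait();   // legal: the current thread owns lock's monitor
        }
    }

    public void waitIncorrectly() throws InterruptedException {
        lock.wait();       // throws java.lang.IllegalMonitorStateException
    }
}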

Synchronized method decompilation:

public class SynchronizedMethod {
    public synchronized void method() {
        System.out.println("Hello World!");
    }
}

Decompilation result:
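
An abridged, approximate javap -v listing for the method above looks roughly like this (the exact flag value and constant-pool indices depend on the compiler, so treat it as a sketch):

public synchronized void method();
    descriptor: ()V
    flags: (0x0021) ACC_PUBLIC, ACC_SYNCHRONIZED
    Code:
      stack=2, locals=1, args_size=1
         0: getstatic     #2    // Field java/lang/System.out:Ljava/io/PrintStream;
         3: ldc           #3    // String Hello World!
         5: invokevirtual #4    // Method java/io/PrintStream.println:(Ljava/lang/String;)V
         8: return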

As the decompiled output shows, the method is not synchronized through monitorenter and monitorexit (although, in theory, it could be). Instead, unlike a regular method, it carries the ACC_SYNCHRONIZED flag in its method access flags. The JVM synchronizes the method based on this flag: when the method is invoked, the invocation instruction checks whether the method's ACC_SYNCHRONIZED access flag is set. If it is, the executing thread first acquires the monitor, executes the method body once the monitor is successfully acquired, and releases the monitor when the method completes. While the method executes, no other thread can obtain the same monitor object. In essence there is no difference from block synchronization, except that method synchronization is done implicitly, without explicit monitorenter/monitorexit bytecode.

Lock escalation:

  1. Heavyweight lock: Synchronized is implemented through a lock inside the object called a monitor lock, and the monitor lock in turn relies on the operating system's underlying Mutex Lock. Switching between threads, however, requires the operating system to transition from user mode to kernel mode; this cost is very high and the transition takes a relatively long time, which is why Synchronized was historically inefficient. A lock that relies on the operating system's Mutex Lock in this way is therefore called a "heavyweight lock." At the heart of all the JDK's optimizations of Synchronized are efforts to reduce the use of this heavyweight lock. Since JDK 1.6, "lightweight locks" and "biased locks" have been introduced to reduce the performance cost of acquiring and releasing locks and improve performance.

  2. Lightweight lock: there are four lock states: no lock, biased lock, lightweight lock, and heavyweight lock. As contention for the lock increases, the lock can be upgraded from a biased lock to a lightweight lock and then to a heavyweight lock (the upgrade is one-way: a lock can only go from low to high and is never downgraded). Biased locking and lightweight locking are enabled by default in JDK 1.6; biased locking can be disabled with -XX:-UseBiasedLocking.

  3. Biased lock: a biased lock is introduced so that, when there is no multithreaded contention, the lightweight lock's execution path is shortened as much as possible. Acquiring and releasing a lightweight lock requires multiple CAS atomic instructions, whereas a biased lock needs only a single CAS atomic instruction when the thread ID is recorded (once multithreaded contention does appear, the biased lock must be revoked, so the design assumes that the performance loss of the revoke operation is smaller than the cost of the CAS instructions saved). As mentioned above, lightweight locking is intended to improve performance when threads execute synchronized blocks alternately, while biased locking further improves performance when only one thread ever executes the synchronized block.

A Java object, when stored in memory, consists of three parts: the object header, instance data, and alignment padding.

The object header is divided into three parts: ① mark word ② pointer to class ③ array length (present only for array objects)

mark word

  1. It records information about the object and its lock. When the object is used as a synchronized lock, the various operations around the lock all involve the mark word.
  2. The length of the mark word matches the word size of the JVM: 32 bits on a 32-bit JVM and 64 bits on a 64-bit JVM.
  3. The mark word stores different contents in different lock states. On a 32-bit JVM it stores the following:

The lock flag bits for both the no-lock and biased-lock states are 01; the 1-bit field in front of them distinguishes whether the state is no-lock or biased.
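
One way to see these mark word contents at runtime is the OpenJDK JOL tool. Below is a minimal sketch, assuming the external dependency org.openjdk.jol:jol-core is on the classpath (an assumption; it is not part of the article's code):

import org.openjdk.jol.info.ClassLayout;

public class MarkWordDemo {
    public static void main(String[] args) {
        Object lock = new Object();
        // Freshly allocated object: no-lock state (or biasable, depending on
        // JVM version and the biased-locking startup delay).
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());

        synchronized (lock) {
            // Inside the block the mark word shows a biased or lightweight lock,
            // again depending on JVM version and biased-locking settings.
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}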

JDK 1.6 and later versions introduce the concept of lock escalation when handling synchronized locks: the JVM starts with a biased lock and, as contention intensifies, moves from the biased lock to a lightweight lock and eventually to a heavyweight lock.

JVM mark Word and lock upgrade process

  1. When the object is not being used as a lock, it is an ordinary object: the mark word records the object's hashCode, the lock flag bits are 01, and the biased-lock bit is 0.

  2. When the object is used as a synchronized lock and a thread A grabs the lock, the lock flag bits remain 01, but the biased-lock bit changes to 1, and the first 23 bits record the ID of the thread that grabbed the lock, indicating that the object has entered the biased-lock state.

  3. When thread A tries to acquire the lock again, the JVM sees that the lock object's flag bits are 01 and the biased-lock bit is 1, i.e. the biased state, and that the thread ID recorded in the mark word is thread A's own ID. Thread A has therefore already obtained the biased lock and can execute the synchronized code directly.

  4. When thread B tries to acquire the lock, the JVM sees that the lock is in the biased state but the thread ID in the mark word is not B's, so thread B first attempts to grab the lock with a CAS operation. This is quite likely to succeed, because thread A does not automatically release the biased lock when it finishes. If the CAS succeeds, the thread ID in the mark word is changed to thread B's ID, meaning thread B has obtained the biased lock and can execute the synchronized code. If the CAS fails, go to step 5.

  5. If the CAS fails, it means the lock is being contended, and it is upgraded to a lightweight lock. The JVM creates a separate space (a lock record) in the current thread's stack frame that stores a copy of the lock object's mark word, and then stores a pointer to that space in the object's mark word. Both of these writes are CAS operations. If they succeed, the thread has grabbed the synchronized lock, the lock flag bits in the mark word are changed to 00, and the synchronized code can be executed. If they fail, the lock grab has failed; go to step 6.

  6. If the lightweight lock grab fails, the JVM uses spinning, which is not a lock state but simply means repeatedly retrying the lock grab. Starting with JDK 1.7, spinning is enabled by default and the number of spins is decided by the JVM. If a retry succeeds, the synchronized code is executed; if it still fails, go to step 7 (the lock is upgraded to a heavyweight lock).

  7. If the retries during spinning also fail, the synchronized lock is upgraded to a heavyweight lock and the lock flag bits change to 10. In this state, any thread that fails to grab the lock is blocked (a short sketch tying these steps together follows below).
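
Putting steps 1–7 together, here is a minimal sketch (again assuming the external JOL dependency used earlier, which is not part of the article's code) that first lets a single thread lock the object and then introduces a contending thread. The printed headers show the lock bits changing along the lines described above; the exact states depend on the JVM version and whether biased locking is enabled.

import org.openjdk.jol.info.ClassLayout;

public class LockUpgradeDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();

        // Steps 2-3: a single thread acquires the lock; with biased locking
        // enabled the mark word can record this thread's ID.
        synchronized (lock) {
            System.out.println("single thread:\n"
                    + ClassLayout.parseInstance(lock).toPrintable());
        }

        // Steps 4-7: a second thread contends for the same lock while the
        // main thread still holds it, so the lock is revoked/upgraded.
        Thread contender = new Thread(() -> {
            synchronized (lock) {
                System.out.println("contending thread:\n"
                        + ClassLayout.parseInstance(lock).toPrintable());
            }
        });

        synchronized (lock) {
            contender.start();
            Thread.sleep(100); // keep holding the lock so the contender must spin/block
        }
        contender.join();
    }
}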

A flowchart of the synchronized lock upgrade process appears elsewhere and is not reproduced here.