This article documents the common usage scenarios of synchronized and its underlying implementation. Although we often use the synchronized keyword in multithreaded code, we rarely pay much attention to how this familiar keyword is implemented. As developers, since we use it, we may as well uncover it step by step.

Why Synchronized?

Let’s look at this code first


public class Demo {
    private static int count = 0;

    public /*synchronized*/ static void inc() {
        try {
            Thread.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        count++;
    }

    public static void main(String[] args) throws InterruptedException {
        for (int i = 0; i < 1000; i++) {
            new Thread(() -> Demo.inc()).start();
        }
        Thread.sleep(3000);
        System.out.println("Operating Results" + count);
    }
}

Running this code prints a result such as 970. The synchronized keyword is commented out, and 1000 threads concurrently increment the shared count variable. The result tells us the shared state is not thread safe (we expect 1000 increments to yield 1000). To solve this problem, the synchronized keyword does the trick.
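To see the fix in action, here is a minimal sketch of the same demo with the synchronized keyword enabled; as an adjustment for determinism, it joins every thread instead of sleeping for a fixed 3 seconds before reading the count (the class and method names here are illustrative, not from the original).

```java
// Same idea as the Demo above, but with synchronized enabled and join()
// used so the final count is read only after every thread has finished.
public class SyncDemo {
    private static int count = 0;

    // synchronized on a static method locks SyncDemo.class
    public static synchronized void inc() {
        count++;
    }

    public static int run() throws InterruptedException {
        Thread[] threads = new Thread[1000];
        for (int i = 0; i < 1000; i++) {
            threads[i] = new Thread(SyncDemo::inc);
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join(); // wait for every thread before reading count
        }
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Operating Results " + run()); // always 1000
    }
}
```

With the lock in place, every increment is mutually exclusive and the result is always 1000.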

Introduction of synchronized

Synchronized has long been an elder statesman of multithreaded concurrent programming, and many call it a heavyweight lock. However, Java SE 1.6 made various optimizations to synchronized, introducing biased locks and lightweight locks to reduce the performance cost of acquiring and releasing locks, so in some cases it is not so heavy anymore. We will come back to this a little later.

The basic syntax of synchronized

Synchronized has three ways of locking

  1. Modifying an instance method locks the current instance; the thread acquires the lock on that instance before entering the synchronized code.
  2. Modifying a static method locks the current class object (the Class instance); the thread acquires that lock before entering the synchronized code.
  3. Modifying a code block locks the specified object; the thread acquires the lock on that object before entering the synchronized block.

I found a picture online that roughly corresponds to the three forms above.
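The three forms can also be sketched in one class — a minimal illustration with hypothetical names, where the comments note which object each form locks:

```java
// Sketch of the three synchronized forms and the object each one locks.
public class LockForms {
    private int value = 0;
    private static int staticValue = 0;
    private final Object mutex = new Object();

    // 1. Instance method: locks `this`
    public synchronized void incInstance() {
        value++;
    }

    // 2. Static method: locks LockForms.class
    public static synchronized void incStatic() {
        staticValue++;
    }

    // 3. Synchronized block: locks the given object (here, mutex)
    public void incBlock() {
        synchronized (mutex) {
            value++;
        }
    }

    public int getValue() { return value; }
    public static int getStaticValue() { return staticValue; }
}
```

Note that forms 1 and 3 here lock per-instance objects, while form 2 locks the single Class object shared by all instances — so an instance method and a static method of the same class do not block each other.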

Analysis of synchronized principle

Java object headers and Monitor are the basis for synchronized! The following two concepts will be introduced in detail.

Let’s write a little demo for the monitor:

package com.thread;

public class Demo1 {

    private static int count = 0;

    public static void main(String[] args) {
        synchronized (Demo1.class) {
            inc();
        }
    }

    private static void inc() {
        count++;
    }
}

The code above uses the synchronized keyword and locks the class object. After compiling, switch to the directory containing Demo1.class and run javap -v Demo1.class to view the bytecode:

public static void main(java.lang.String[]);
  descriptor: ([Ljava/lang/String;)V
  flags: ACC_PUBLIC, ACC_STATIC
  Code:
    stack=2, locals=3, args_size=1
       0: ldc           #2    // class com/thread/Demo1
       2: dup
       3: astore_1
       4: monitorenter        // note this
       5: invokestatic  #3    // Method inc:()V
       8: aload_1
       9: monitorexit         // note this
      10: goto          18
      13: astore_2
      14: aload_1
      15: monitorexit         // note this
      16: aload_2
      17: athrow
      18: return

When a thread acquires a lock, it actually acquires a monitor, which can be thought of as a synchronization object. Every Java object naturally carries a monitor, and the monitor comes into play once the synchronized keyword is applied. Synchronized blocks use the monitorenter and monitorexit instructions for synchronization; both essentially operate on an object's monitor exclusively, which means only one thread at a time can hold the monitor of a synchronized object. When a thread executes monitorenter, it attempts to acquire ownership of the object's monitor (i.e., the lock on the object); monitorexit releases that ownership.
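Notice that the bytecode contains two monitorexit instructions: one on the normal path and one reached via athrow, so the monitor is released even if the guarded code throws. As an analogy only (the JVM emits these instructions directly; it does not use java.util.concurrent here), the control flow has the same shape as a ReentrantLock with a finally block:

```java
import java.util.concurrent.locks.ReentrantLock;

// Analogy for the bytecode shape above: lock()/unlock() in a finally block
// mirrors monitorenter plus the two monitorexit paths.
public class MonitorAnalogy {
    private static int count = 0;
    private static final ReentrantLock monitor = new ReentrantLock();

    private static void inc() {
        count++;
    }

    public static int guardedInc() {
        monitor.lock();          // monitorenter (offset 4)
        try {
            inc();               // invokestatic inc:()V (offset 5)
        } finally {
            monitor.unlock();    // monitorexit — runs on the normal path (offset 9)
        }                        // and on the exception path (offset 15)
        return count;
    }
}
```

The finally block guarantees the release on both paths, just as javac emits the second monitorexit inside an exception handler.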

Layout of objects in memory

In the HotSpot virtual machine, an object is laid out in memory in three parts: the object Header, Instance Data, and Padding. The lock object used by synchronized stores its lock state in the Java object header, which is the key to lightweight and biased locking.

The Java object header

The object header contains two parts of data: a Mark Word and a Klass Pointer. Klass Pointer: a pointer to the object's class metadata, which the virtual machine uses to determine which class the object is an instance of. Mark Word: stores the object's own runtime data, such as its HashCode, GC generational age, lock status flags, locks held by threads, biased thread ID, and biased timestamp. It is the key to implementing lightweight and biased locks.

Upgrades to synchronized locks

In analyzing the Mark Word, biased locks, lightweight locks, and heavyweight locks were mentioned. Before analyzing the differences between these lock types, consider this trade-off: using locks guarantees data safety but degrades performance, while not using locks gives the performance of full thread parallelism but cannot guarantee thread safety. Neither extreme seems to satisfy both performance and safety at once.

The authors of the HotSpot virtual machine found that in most cases, locked code is not only uncontended by multiple threads, but is always acquired repeatedly by the same thread. Based on this observation, synchronized was optimized in JDK 1.6 with the concepts of biased locking and lightweight locking, in order to reduce the performance overhead of acquiring and releasing locks. Thus synchronized has four lock states: no lock, biased lock, lightweight lock, and heavyweight lock; the lock state escalates from low to high as contention increases.

The basic principle of biased locking

As mentioned above, in most cases a lock is not only uncontended but is repeatedly acquired by the same thread, so biased locking was introduced to make acquisition cheaper in that case. How do we understand a biased lock? When a thread accesses a block guarded by a synchronized lock, the ID of the current thread is stored in the object header; on subsequent entries and exits, the thread does not re-acquire and release the lock, but simply checks whether the object header still records a bias toward the current thread. If it matches, the lock is biased toward the current thread and nothing more is needed. Revoking the bias uses a mechanism that waits for contention: the thread holding the biased lock releases it only when another thread attempts to compete for it, at which point the biased lock is upgraded to a lightweight lock. When the bias is revoked, there are two situations for the thread that originally held it:

  1. If the thread that acquired the biased lock has already exited the critical section, i.e., the synchronized block has finished executing, the object header is reset to the lock-free state, and the competing thread can re-bias the lock to itself via CAS.
  2. If the original biased thread is still inside the critical section (the synchronized block has not finished), the original thread wins: its lock is upgraded to a lightweight lock and it continues executing the synchronized block. In real application development, most of the time there will be two or more threads competing; in that case, enabling biased locking actually increases the cost of acquiring locks. The JVM flag -XX:+UseBiasedLocking / -XX:-UseBiasedLocking can be used to enable or disable biased locking.

Here is a classic biased-lock flow chart.

The fundamentals of lightweight locks

lock

After the lock is upgraded to a lightweight lock, the object's Mark Word changes accordingly. The process of upgrading to a lightweight lock is:

  1. The thread creates a lock record (LockRecord) in its own stack frame.
  2. The thread copies the Mark Word from the lock object's header into the lock record it just created.
  3. The thread points the Owner pointer in the lock record at the lock object.
  4. The thread uses an atomic CAS to replace the Mark Word in the lock object's header with a pointer to the lock record.
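The steps above can be sketched at the user level. This is an analogy only — the real JVM does this on the object header itself, not with java.util.concurrent — with an AtomicReference standing in for the Mark Word and all names hypothetical:

```java
import java.util.concurrent.atomic.AtomicReference;

// Analogy: markWord stands in for the object header; a thread "locks" by
// CAS-ing in a pointer to its own lock record, exactly the shape of steps 1-4.
public class LightweightLockSketch {
    static class LockRecord {
        final Object displacedMarkWord; // step 2: copy of the original header word
        final Thread owner;             // step 3: which thread owns this record
        LockRecord(Object displaced, Thread owner) {
            this.displacedMarkWord = displaced;
            this.owner = owner;
        }
    }

    private final Object unlockedMark = new Object(); // stand-in for an unlocked Mark Word
    private final AtomicReference<Object> markWord = new AtomicReference<>(unlockedMark);

    public boolean tryLock() {
        Object current = markWord.get();
        if (current instanceof LockRecord) {
            return false; // header already points at someone's lock record
        }
        // steps 1-3: build the record in our "frame"
        LockRecord record = new LockRecord(current, Thread.currentThread());
        // step 4: CAS the header to point at the record
        return markWord.compareAndSet(current, record);
    }

    public boolean unlock() {
        Object current = markWord.get();
        if (current instanceof LockRecord) {
            LockRecord r = (LockRecord) current;
            if (r.owner == Thread.currentThread()) {
                // restore the displaced Mark Word back into the header
                return markWord.compareAndSet(current, r.displacedMarkWord);
            }
        }
        return false;
    }
}
```

If the CAS in tryLock fails, another thread won the race — which is exactly the contention case where the real JVM falls back to spinning and, eventually, lock inflation.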

spin locks

During locking, a lightweight lock uses a spin lock. Spinning means that when another thread competes for the lock, the competing thread waits in place in a loop rather than blocking, so that the moment the lock is released it can acquire it immediately. Note that spinning in place consumes CPU; it is equivalent to executing an empty for loop. Lightweight locks are therefore suited to scenarios where synchronized blocks execute quickly, so that threads wait in place only briefly before acquiring the lock.

Spin locks are a probabilistic bet: most synchronized blocks execute for a very short time, so a seemingly wasteful loop actually improves lock performance. But the spin must be bounded; otherwise, if one thread holds the lock for a long time, the spinning threads will burn CPU for nothing. The default spin count is 10 and can be changed with the -XX:PreBlockSpin parameter.

JDK 1.6 also introduced adaptive spinning. Adaptive means the spin count is not fixed but is determined by the duration of the previous spin on the same lock and the state of the lock's owner. If a spin wait recently succeeded on the same lock object and the owning thread is running, the virtual machine assumes spinning is likely to succeed again and allows the spin to last relatively long. If spinning rarely succeeds on a given lock, future attempts may skip the spin and block the thread directly, avoiding wasted processor resources.
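A bounded spin can be sketched with an AtomicBoolean — again an analogy at the user level, not the JVM's internal implementation:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Sketch of a bounded spin lock: try the CAS a fixed number of times,
// then give up (where a real JVM would inflate the lock and block the thread).
public class SpinLockSketch {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public boolean tryLockWithSpin(int maxSpins) {
        for (int i = 0; i < maxSpins; i++) {
            if (locked.compareAndSet(false, true)) {
                return true; // acquired the lock
            }
            Thread.onSpinWait(); // hint to the CPU that we are busy-waiting (JDK 9+)
        }
        return false; // spun out without acquiring the lock
    }

    public void unlock() {
        locked.set(false);
    }
}
```

The bound on the loop plays the role of the spin count described above: short enough not to waste CPU when the holder is slow, long enough to win when the critical section is brief.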

unlock

When a lightweight lock unlocks, an atomic CAS operation is used to replace the displaced Mark Word back into the object header. If the CAS succeeds, no contention occurred. If it fails, the lock is contended, and it inflates into a heavyweight lock.

Heavyweight lock

When a lightweight lock inflates into a heavyweight lock, threads that fail to acquire it are suspended and blocked, waiting to be woken up.

Comparison of various locks

conclusion

The JVM optimizes synchronized automatically at runtime by upgrading locks. That is the implementation principle of synchronized, the optimizations made after Java 1.6, and the lock-upgrade behavior you may encounter in practice. Although the keyword itself is easy to use, I find it fun to dig into its principles step by step.