synchronized is one of the most commonly used keywords in Java concurrency: it uses locks to ensure mutual exclusion and synchronization between threads.

Three uses of synchronized

synchronized has three main uses: it can modify instance methods, static methods, and code blocks. The following code demonstrates each of the three.

Modifying instance methods

When synchronized modifies an instance method, the lock applies to the current object instance: a thread must acquire that instance's lock before it can enter the synchronized code.

import java.util.concurrent.TimeUnit;

public class SynchronizedDemo {

    public static void main(String[] args) {
        MyRunnable r = new MyRunnable();
        Thread t1 = new Thread(r);
        Thread t2 = new Thread(r);
        t1.start();
        t2.start();
    }
}

class MyRunnable implements Runnable {

    // synchronized on an instance method: the lock is this instance
    public synchronized void fun() {
        try {
            System.out.println("fun()");
            TimeUnit.SECONDS.sleep(2);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void run() {
        fun();
    }
}

Modifying static methods

When synchronized modifies a static method, the lock applies across all object instances of the class, because a static method belongs to the class rather than to any object. The following code shows two threads using different instances of the same class and still synchronizing on the same lock.

import java.util.concurrent.TimeUnit;

public class SynchronizedDemo {

    public static void main(String[] args) {
        Thread t1 = new Thread(new MyRunnable());
        Thread t2 = new Thread(new MyRunnable());
        t1.start();
        t2.start();
    }
}

class MyRunnable implements Runnable {

    // synchronized on a static method: the lock is the MyRunnable class
    public static synchronized void fun() {
        try {
            System.out.println("fun()");
            TimeUnit.SECONDS.sleep(2);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void run() {
        fun();
    }
}

Modifying code blocks

Using synchronized on a code block is more flexible and can be further subdivided: synchronized (obj) and synchronized (this) acquire the lock of the given object, while synchronized (ClassName.class) acquires the class lock of the named class. The following code demonstrates acquiring the class lock.

import java.util.concurrent.TimeUnit;

public class SynchronizedDemo {

    public static void main(String[] args) {
        Thread t1 = new Thread(new MyRunnable());
        Thread t2 = new Thread(new MyRunnable());
        t1.start();
        t2.start();
    }
}

class MyRunnable implements Runnable {

    public void fun() {
        // Acquire the class lock, shared by all instances of MyRunnable
        synchronized (MyRunnable.class) {
            try {
                System.out.println("fun()");
                TimeUnit.SECONDS.sleep(2);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    @Override
    public void run() {
        fun();
    }
}

Underlying principle of synchronized

Decompile the bytecode with javap -c SynchronizedDemo.class to see how synchronized implements locking.

First, decompiling the synchronized method shows that it carries the ACC_SYNCHRONIZED access flag, which tells the JVM to treat it as a synchronized method and acquire the monitor before executing it.


Next, decompile the synchronized code block. Instead of the ACC_SYNCHRONIZED flag, we find the monitorenter and monitorexit instructions, which mark where the synchronized block begins and ends. Careful readers will notice that the decompiled output actually contains two monitorexit instructions: if the block exited abnormally without releasing the lock, other threads could block forever, so the second monitorexit sits on the exception-handler path and guarantees the lock is released even when an exception is thrown.


In summary, synchronized methods use the ACC_SYNCHRONIZED flag, while synchronized code blocks use the monitorenter and monitorexit instructions; both essentially acquire the object's monitor lock.
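Both forms can be observed from plain Java using Thread.holdsLock, which reports whether the calling thread currently owns a given object's monitor. A minimal sketch (the class MonitorDemo is invented here for illustration):

```java
public class MonitorDemo {

    // ACC_SYNCHRONIZED: the JVM acquires this object's monitor on entry
    synchronized boolean insideMethod() {
        return Thread.holdsLock(this);
    }

    // monitorenter/monitorexit around the block acquire the same monitor
    boolean insideBlock() {
        synchronized (this) {
            return Thread.holdsLock(this);
        }
    }

    public static void main(String[] args) {
        MonitorDemo d = new MonitorDemo();
        System.out.println(d.insideMethod());    // true: monitor held inside
        System.out.println(d.insideBlock());     // true: same monitor
        System.out.println(Thread.holdsLock(d)); // false: released on exit
    }
}
```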

The optimization of synchronized

Before JDK 6, synchronized used only a heavyweight lock to ensure synchronization between threads, and the overhead of acquiring and releasing the lock was very high. Starting with JDK 6, a series of optimizations were introduced, such as adaptive spinning, lock elimination, lock coarsening, biased locks, and lightweight locks.

First of all, understand that after these optimizations synchronized is no longer a single heavyweight lock; it has four states: no lock, biased lock, lightweight lock, and heavyweight lock, which are upgraded in that order as contention increases.

Adaptive spin lock

To understand spin locks, imagine eating at a canteen. At some windows you take a number ticket and wait a long time for your order, such as the malatang window; at others you only queue briefly before being served, such as the fast-food window. Waiting directly in front of the fast-food window instead of taking a ticket and walking away is very much like a spin lock. A spin lock applies when the lock is expected to be released sooner than the cost of a thread switch: instead of giving up its CPU time, the waiting thread busy-loops, so that as soon as the other thread finishes it can take the lock directly, avoiding the overhead of suspending and resuming a thread.

Spin locks were added in JDK 1.4 but were off by default; since JDK 6 they have been on by default, and adaptive spin locks were introduced. A traditional spin lock spins a fixed number of times and, if it still has not acquired the lock, suspends the thread to avoid wasting CPU. An adaptive spin lock is more flexible: if a thread recently acquired the same lock successfully by spinning, the VM judges that spinning is likely to succeed again and allows a relatively long spin; if spinning on a lock rarely succeeds, the VM may skip the spinning phase altogether.
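As a rough sketch of the idea (not the JVM's actual implementation, which lives inside HotSpot's monitor code), a user-level spin lock can be built on a single atomic flag. The SpinLock class below is invented for this illustration and assumes JDK 9+ for Thread.onSpinWait:

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {

    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-wait until the CAS from false to true succeeds instead of
        // suspending the thread -- this busy-wait is the "spin".
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU hint that we are spinning (JDK 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }

    // Two threads increment a shared counter under the spin lock.
    static int demo() throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] count = {0};
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();
                try {
                    count[0]++;
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return count[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // 200000: no increments are lost
    }
}
```

Note that a spin lock like this only pays off when critical sections are short; a long hold time wastes the spinning thread's entire time slice.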

Lock elimination

The canteen example also explains lock elimination: if you are the only person at the malatang window, do you still need a number ticket? Obviously not. This is lock elimination, a JIT optimization that uses escape analysis to determine whether a lock object can only ever be accessed by a single thread; if so, the locking is removed entirely.
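The textbook example of a lock-elimination candidate is a StringBuffer that never escapes a method: every append call is synchronized, but escape analysis can prove the buffer is visible to only one thread, so the JIT may elide those locks entirely. The class and method names below are invented for illustration:

```java
public class LockElisionDemo {

    // sb is a local object that never escapes this method, so the JIT
    // can prove StringBuffer's internal locks are uncontended and may
    // eliminate them (the result is identical either way).
    public static String concat(String a, String b) {
        StringBuffer sb = new StringBuffer(); // every append() is synchronized
        sb.append(a);
        sb.append(b);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat("foo", "bar")); // foobar
    }
}
```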

Lock coarsening

In most cases we want to keep the scope of a synchronized region as small as possible, because the larger the critical section, the longer other threads have to wait. But there are exceptions; consider the following case.

public class LockDemo {

    public void fun() {
        for (int i = 0; i < 10000; ++i) {
            synchronized (LockDemo.class) {
                System.out.println("test...");
            }
        }
    }
}

The for loop above acquires and releases the lock on every iteration, which is very expensive, so the JIT hoists the lock outside the loop, as shown in the code below. This is lock coarsening.

public class LockDemo {

    public void fun() {
        synchronized (LockDemo.class) {
            for (int i = 0; i < 10000; ++i) {
                System.out.println("test...");
            }
        }
    }
}

Biased locking

Biased locking targets the case where the same thread acquires the same lock many times with no contention; once multiple threads actually compete for the lock, biased locking no longer helps.

When a thread acquires the lock for the first time, the lock flag bits in the Mark Word of the Java object header are set to 01 and the thread's ID is recorded, putting the object in the biased-lock state. As long as no other thread competes for the lock, the same thread can re-enter without any further synchronization operations.

Lightweight lock

When a second thread attempts to acquire a biased lock, it is upgraded to a lightweight lock.

Lightweight locks suit scenarios where threads acquire the lock alternately, or where a short spin is enough to obtain it. When thread 2 tries to acquire the lock while thread 1 holds the bias, the biased lock is revoked: the Mark Word no longer records a thread ID but instead points to a lock record in the owning thread's stack frame, and the object is now in the lightweight-lock state.

Heavyweight lock

If thread 1 and thread 2 acquire the lock alternately, a lightweight lock is still sufficient. But if thread 1 is holding the lock while thread 2 is requesting it, and threads 3 and 4 then join the competition, the lock must be inflated to a heavyweight lock, and the waiting threads are blocked instead of spinning.

Three properties of concurrency

Atomicity

Atomicity means that one or more operations execute as an uninterruptible whole: they either all succeed or none of them take effect. Code guarded by synchronized is atomic in this sense.

As mentioned earlier, whether it modifies a code block or a method, synchronized essentially acquires the monitor lock. The thread holding the lock enters the critical section, and no other thread can enter it until the lock is released. Time-slice switching can still happen, but because no other thread can enter the critical section partway through, the guarded operations behave atomically.
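The classic demonstration is a shared counter: count++ is a read-modify-write sequence, and without synchronized two threads can interleave and lose updates. A small sketch (class and method names invented here):

```java
public class AtomicityDemo {

    private int count = 0;

    // The whole read-add-write sequence runs under the monitor,
    // so no other thread can interleave with it.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    static int run() throws InterruptedException {
        AtomicityDemo demo = new AtomicityDemo();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                demo.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return demo.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // always 200000 with synchronized
    }
}
```

Removing synchronized from increment() would let the threads interleave and typically print a number below 200000.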

Visibility

Visibility means that when one thread modifies a shared variable, other threads can immediately see the change. In the Java memory model, each thread has its own working memory, which holds copies of variables from main memory; if a thread modifies its working copy without writing it back to main memory, visibility is not guaranteed.

With synchronized, before the lock is released any variables modified in working memory are flushed back to main memory, and when the lock is acquired the working copies are refreshed from main memory; this ensures that threads always see the latest value of the shared variable, guaranteeing visibility.
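A common way to see this is a stop flag read in a loop: guarding both the write and the read with the same lock ensures the writer's update reaches main memory and the reader re-reads it rather than looping on a stale working copy. A sketch with invented names (for a lone flag, volatile would be the lighter-weight alternative):

```java
public class VisibilityDemo {

    private boolean stopped = false;

    // The unlock at the end of stop() flushes the write to main memory.
    public synchronized void stop() {
        stopped = true;
    }

    // Acquiring the lock forces a re-read from main memory, so the
    // reader cannot keep using a stale working-memory copy.
    public synchronized boolean isStopped() {
        return stopped;
    }

    public static void main(String[] args) throws InterruptedException {
        VisibilityDemo d = new VisibilityDemo();
        // Worker loops until the flag change becomes visible to it.
        Thread worker = new Thread(() -> {
            while (!d.isStopped()) {
                // keep working
            }
        });
        worker.start();
        Thread.sleep(100);
        d.stop();      // without synchronized (or volatile) on the flag,
        worker.join(); // the worker might never observe stopped == true
        System.out.println("worker finished");
    }
}
```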

Ordering

Ordering means that the program executes in the order the code was written.

synchronized also guarantees ordering. Under as-if-serial semantics, no matter how the compiler and processor reorder instructions, the result observed within a single thread must not change. Because synchronized allows only one thread at a time into the critical section, the as-if-serial guarantee applies inside it, and ordering between threads follows.
