0. Foreword

The root causes of thread-safety problems are the existence of shared data (also known as critical resources) and the existence of multiple threads operating on that shared data. To solve the problem, we need a scheme that guarantees that when multiple threads operate on shared data, only one thread does so at any given moment, while the other threads wait until that thread has finished. This approach has a noble name, mutual exclusion, and it achieves exactly the goal of mutually exclusive access. In other words, while shared data is held under mutual exclusion by the thread currently accessing it, other threads can only wait until the current thread finishes processing the data and releases the lock. In Java, the keyword synchronized ensures that only one thread can execute a method or block of code at any one time. synchronized also has a second important function that is easy to overlook: it ensures that changes made by one thread (chiefly to shared data) are seen by other threads. This visibility guarantee means that, for data accessed only inside synchronized regions, it can fully substitute for volatile.
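To make mutual exclusion concrete before diving into the details, here is a minimal sketch (the class and member names are illustrative, not from the original text): two threads increment a shared counter through a synchronized method, so every update survives.

```java
public class MutexDemo {
    private int count = 0;

    // Only one thread at a time may execute this method on a given instance
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        MutexDemo demo = new MutexDemo();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                demo.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join(); // wait for both threads before reading the result
        t2.join();
        System.out.println(demo.get()); // always 20000 with synchronization
    }
}
```

Without the synchronized modifier on increment, the printed total would usually fall short of 20000 because concurrent increments would overwrite each other.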

1. Three ways to use synchronized

The three ways of applying synchronized are as follows:

  • On an instance method: the lock is the current instance; a thread must acquire the instance's lock before entering the synchronized code
  • On a static method: the lock is the Class object of the current class; a thread must acquire that lock before entering the synchronized code
  • On a code block: the lock is the object specified in parentheses; a thread must acquire that object's lock before entering the synchronized block
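The three forms above can be sketched side by side; this is a compact illustration (class and field names are mine, not from the original), each method taking a different lock while touching the same counter.

```java
public class SyncForms {
    private static int counter = 0;
    private final Object lock = new Object();

    // 1. Instance method: the lock is the current instance (this)
    public synchronized void instanceMethod() {
        counter++;
    }

    // 2. Static method: the lock is the Class object (SyncForms.class)
    public static synchronized void staticMethod() {
        counter++;
    }

    // 3. Code block: the lock is the object named in parentheses
    public void blockMethod() {
        synchronized (lock) {
            counter++;
        }
    }

    static int get() {
        return counter;
    }
}
```

Note that the three methods use three different locks, so they do not exclude one another; which form to choose depends on which data the lock must guard.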

1.1 Synchronized acts on instance methods

Here synchronized modifies an instance method of an instance object. Note that instance methods do not include static methods.

public class AccountingSync implements Runnable{
    // Shared resource (critical resource)
    static int i = 0;

    /** synchronized modifies an instance method */
    public synchronized void increase() {
        i++;
    }

    @Override
    public void run() {
        for (int j = 0; j < 100; j++) {
            increase();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        AccountingSync instance = new AccountingSync();
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
        t1.join(); // wait for both threads to finish before reading i
        t2.join();
        System.out.println(i); // 200
    }
}

In the code above, we start two threads that operate on the same shared resource, the variable i. Since i++ is not atomic (the operation first reads the value, then writes back a value one greater), a second thread may read i between the first thread's read and its write-back. Both threads then see the same old value and increment it, so one update is lost and thread safety fails. Therefore the increase method must be modified with synchronized to remain thread-safe. Note that synchronized here modifies an instance method, so the lock held by the current thread is the instance object; in Java, the lock used for thread synchronization can be any object. The code's behavior bears this out: had we not used the synchronized keyword, the final output would probably have been less than 200, and that is exactly where synchronized comes in. We should also realize that while one thread is executing an object's synchronized instance method, other threads cannot access any of the object's other synchronized instance methods. After all, an object has only one lock, and while one thread holds it no other thread can acquire it, so the object's other synchronized instance methods are inaccessible; other threads can, however, still call the instance's non-synchronized methods. Of course, if thread A accesses the synchronized method f1 of obj1 (the lock is obj1) while thread B accesses the synchronized method f2 of obj2 (the lock is obj2), this is allowed, because the two instance locks are not the same. If the two threads are operating on data that is not shared, thread safety is preserved; unfortunately, if they are operating on shared data, it may not be.
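The read-modify-write nature of i++ can be made explicit. In the sketch below (the temp variable and method name are mine, purely for illustration), the three steps that the single expression hides are written out; a thread can be preempted between any two of them, which is exactly how updates get lost.

```java
public class NonAtomic {
    static int i = 0;

    // i++ is equivalent to the three separate steps below; a thread can be
    // interrupted between any of them, so a concurrent update can be lost
    static void increaseUnsafe() {
        int temp = i;    // 1. read the current value
        temp = temp + 1; // 2. compute the new value
        i = temp;        // 3. write it back
    }
}
```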

public class AccountingSyncBad implements Runnable{
    static int i = 0;

    public synchronized void increase() {
        i++;
    }

    @Override
    public void run() {
        for (int j = 0; j < 1000000; j++) {
            increase();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // a new instance for each thread
        Thread t1 = new Thread(new AccountingSyncBad());
        // a new instance for each thread
        Thread t2 = new Thread(new AccountingSyncBad());
        t1.start();
        t2.start();
        System.out.println(i); // e.g. 146: the threads may still be running
    }
}

The difference is that here we create two new AccountingSyncBad instances and start two different threads operating on the shared variable i. Unfortunately, the result is something like 146 instead of the expected 2,000,000. Part of the gap comes from printing i without joining the two threads first, but the code also makes a more serious mistake: although we modify the increase method with synchronized, we new two different instance objects, which means two different instance locks. t1 and t2 each run under their own object's lock, so the two threads use different locks and thread safety cannot be guaranteed even if we wait for them to finish. The way out of this dilemma is to apply synchronized to a static increase method: the lock is then the Class object of the class, and since there is only one Class object no matter how many instances are created, the lock is unique in that case. Let's look at synchronized applied to a static increase method.

1.2 Synchronized applies to static methods

When synchronized is applied to a static method, the lock is the Class object of the current class. Because static members do not belong to any instance but are class members, class-object locking can control concurrent access to them. Note that if thread A calls a non-static synchronized method of an instance while thread B calls a static synchronized method of the class that the instance belongs to, no mutual exclusion occurs: the static synchronized method locks the Class object, while the non-static one locks the instance object, so the two use different locks. Look at the code below.

public class AccountingSyncClass implements Runnable{
    static int i = 0;

    /**
     * On a static method the lock is the Class object of the current class,
     * i.e. AccountingSyncClass.class
     */
    public static synchronized void increase() {
        i++;
    }

    /**
     * Non-static: uses a different lock, so it is not mutually
     * exclusive with the static synchronized method
     */
    public synchronized void increase4Obj() {
        i++;
    }

    @Override
    public void run() {
        for (int j = 0; j < 1000000; j++) {
            increase();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // a new instance for each thread
        Thread t1 = new Thread(new AccountingSyncClass());
        // a new instance for each thread
        Thread t2 = new Thread(new AccountingSyncClass());
        // start the threads
        t1.start();
        t2.start();
        t1.join(); // wait for both threads before reading i
        t2.join();
        System.out.println(i); // 2000000
    }
}

Because synchronized modifies the static increase method, its lock object is the Class object of the current class, unlike the instance-method case. Note that increase4Obj in this code is an instance method whose lock is the current instance object. If another thread calls it at the same time, there is no mutual exclusion with increase (the lock objects are different, after all), but we should be aware that this would then create a thread-safety problem, since both methods operate on the shared static variable i.

1.3 synchronized Code blocks

In addition to using keywords to modify instance method and static method, you can also modify synchronized code block, in some cases, we can write the method body is bigger, at the same time there are some more time-consuming operation, and need to be synchronized code and only a small part, if directly with the method of synchronous operation, may do more harm than good, At this point, we can wrap the code to be synchronized in the way of synchronous code block, so that there is no need to synchronize the whole method. The example of synchronous code block is as follows:

public class AccountingSync implements Runnable{
    static AccountingSync instance = new AccountingSync();
    static int i = 0;

    @Override
    public void run() {
        // other time-consuming operations omitted...
        // synchronize access to i, with instance as the lock object
        synchronized (instance) {
            for (int j = 0; j < 1000000; j++) {
                i++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t2.start();
        t1.join(); // wait for both threads before reading i
        t2.join();
        System.out.println(i); // 2000000
    }
}

As the code shows, synchronized is applied to a given instance object: the instance is the lock object, and every thread entering the code wrapped by synchronized must hold that instance's lock. If another thread currently holds it, the new thread must wait, which guarantees that only one thread executes the i++ operation at a time. Besides instance, we can also use this (representing the current instance) or the current Class object as the lock, as follows:

// this: the current instance object as the lock
synchronized (this) {
    for (int j = 0; j < 100; j++) {
        i++;
    }
}

// the Class object as the lock
synchronized (AccountingSync.class) {
    for (int j = 0; j < 100; j++) {
        i++;
    }
}
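Besides this and the Class object, a common idiom (not shown in the original text, sketched here with illustrative names) is a dedicated, private lock object. Unlike this, which any caller holding a reference to the object could also synchronize on, a private lock cannot be acquired by outside code, so the locking protocol stays fully encapsulated.

```java
public class Counter {
    // A dedicated, private lock: outside code cannot synchronize on it,
    // so no foreign code can interfere with this class's locking
    private final Object lock = new Object();
    private int count = 0;

    public void increment() {
        synchronized (lock) {
            count++;
        }
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }
}
```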

3. The basic semantics of synchronized

Synchronization in the Java virtual machine is implemented by entering and exiting monitor objects, whether explicitly (with explicit monitorenter and monitorexit instructions, i.e. synchronized code blocks) or implicitly (synchronized methods, probably the most common form of synchronization in the Java language). Synchronized methods are not implemented with monitorenter and monitorexit instructions; instead, the method-invocation instructions read the method's ACC_SYNCHRONIZED flag from the runtime constant pool. More on this later. Let's start with a concept called the Java object header, which is key to understanding how synchronized works.

3.1 Understanding the Java Object Header and Monitor

In the JVM, an object is laid out in memory in three areas: the object header, instance data, and alignment padding. As follows:

  • Instance data: stores the object's field data, including fields inherited from its parent class.
  • Alignment padding: the virtual machine requires the object's starting address to be a multiple of 8 bytes. Padding data does not have to exist; it serves only for byte alignment.
  • Object header: Java object headers generally occupy two machine words (on a 32-bit VM one machine word is 4 bytes, i.e. 32 bits; on a 64-bit VM, 8 bytes, i.e. 64 bits), but if the object is an array, three machine words are needed: the JVM can determine the size of an ordinary Java object from its metadata, but cannot determine the size of an array from the array's metadata, so an extra word records the array length.

In order to represent an object's fields, methods, and so on, there must be a structural description. HotSpot VM uses a pointer in the object header's class field to find the class description of the object, along with its method and field entries. As shown below:

  • Mark Word (32/64 bit): stores the object's runtime data, such as its hash code, GC generational age, lock state flags, the lock held by a thread, biased thread ID, biased timestamp, and so on. Mark Word is designed as a flexible data structure that stores as much information as possible in a very small space, reusing its bits according to the object's state.
  • Class Metadata Address (32/64 bit): a type pointer to the object's class metadata, which the JVM uses to determine which class the object is an instance of.
  • Array Length (32/64 bit): if the object is a Java array, the object header must also contain a field recording the array's length, because the VM can determine an ordinary object's size from its metadata but cannot determine an array's size from the array metadata. This field is 32 bits on a 32-bit VM and 64 bits on a 64-bit VM (with compressed pointers disabled).

For example, on a 32-bit HotSpot virtual machine, if the object is not locked, the 32 bits of Mark Word contain 25 bits for the object's hash code, 4 bits for generational age, 2 bits for the lock flag, and 1 bit fixed at 0, as shown in the following table:

On a 64-bit VM, Mark Word is 64-bit and its storage structure is as follows:

The last two bits of the object header store the lock flag; 01 is the initial, unlocked state, in which the object header stores the object's own hash code. Depending on the lock level, different content is stored in the header: a biased lock stores the ID of the thread currently owning the object, while a lightweight lock stores a pointer to a lock record in the thread's stack. From this we can see that the "lock" may be a lock record plus an object-header pointer (to test whether a thread holds the lock, compare the thread's lock-record address with the pointer in the object header), or it may be a thread ID (to test whether a thread holds the lock, compare the thread's ID with the thread ID stored in the object header).

Lightweight and biased locks were added in Java 6's optimization of synchronized; we will briefly examine them later. Here we focus on the heavyweight lock, often called the synchronized object lock, whose lock flag is 10 and whose pointer points to the start address of a monitor object (also known as a monitor lock). Every object has a monitor associated with it, and the relationship between an object and its monitor can be implemented in various ways; for example, the monitor may be created and destroyed together with the object, or generated automatically when a thread tries to acquire the object's lock. When a monitor is held by a thread, the object is locked. In the Java virtual machine (HotSpot), the monitor is implemented by ObjectMonitor, whose main data structure is as follows (located in objectMonitor.hpp in the HotSpot source, implemented in C++):

ObjectMonitor() {
    _header       = NULL;
    _count        = 0;     // counter
    _waiters      = 0,
    _recursions   = 0;     // reentry count
    _object       = NULL;
    _owner        = NULL;  // the thread currently holding this monitor
    _WaitSet      = NULL;  // threads in the wait state are added to _WaitSet
    _WaitSetLock  = 0 ;
    _Responsible  = NULL ;
    _succ         = NULL ;
    _cxq          = NULL ;
    FreeNext      = NULL ;
    _EntryList    = NULL ; // threads blocked waiting for the lock are added to _EntryList
    _SpinFreq     = 0 ;
    _SpinClock    = 0 ;
    OwnerIsThread = 0 ;
}

ObjectMonitor has two queues, _WaitSet and _EntryList, which hold ObjectWaiter objects (each thread waiting for the lock is wrapped in an ObjectWaiter), while _owner points to the thread holding the ObjectMonitor. When multiple threads access a piece of synchronized code at the same time, they first enter the _EntryList. When a thread acquires the object's monitor, it enters the owner area: the monitor's owner variable is set to the current thread and the monitor's count is incremented by 1. If the thread calls wait(), it releases the monitor it holds: owner is reset to null, count is decremented by 1, and the thread enters the _WaitSet to wait to be woken up. When the current thread finishes executing, it likewise releases the monitor (lock) and resets the variables so that another thread can enter and acquire it. As shown in the figure below.

From this perspective, a monitor exists for every Java object (reached through a pointer stored in the object header), and synchronized locks are acquired through it. This is why any object in Java can be used as a lock, and also why the notify/notifyAll/wait methods exist on the top-level Object class (more on this later). With this knowledge in hand, we can further analyze how synchronized is implemented at the bytecode level.
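Since every object owns a monitor, any object can coordinate threads through wait/notify. Here is a minimal sketch of that protocol (class and member names are illustrative): a waiter thread blocks on an object's monitor until another thread signals it.

```java
public class WaitNotifyDemo {
    private final Object monitor = new Object();
    private boolean ready = false;

    public void waitUntilReady() throws InterruptedException {
        synchronized (monitor) {     // wait() requires holding the monitor
            while (!ready) {         // loop guards against spurious wakeups
                monitor.wait();      // releases the monitor while waiting
            }
        }
    }

    public void signalReady() {
        synchronized (monitor) {     // notifyAll() also requires the monitor
            ready = true;
            monitor.notifyAll();     // moves waiters back to contend for the lock
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitNotifyDemo demo = new WaitNotifyDemo();
        Thread waiter = new Thread(() -> {
            try {
                demo.waitUntilReady();
                System.out.println("released");
            } catch (InterruptedException ignored) { }
        });
        waiter.start();
        Thread.sleep(100); // give the waiter time to block (demo only)
        demo.signalReady();
        waiter.join();
    }
}
```

Calling wait() or notify() without holding the monitor throws IllegalMonitorStateException, which matches the picture above: both operations manipulate the monitor's _WaitSet and therefore require ownership of it.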

3.2 Underlying principle of synchronized code block

Now let’s redefine a synchronized modified block of code to operate on the shared variable I, as follows

public class SyncCodeBlock {

    public int i;

    public void syncTask() {
        // synchronized code block
        synchronized (this) {
            i++;
        }
    }
}

Compiling the above code and decompiling it with javap yields the following bytecode (unnecessary information is omitted here):

Classfile /***/src/main/java/com/zejian/concurrencys/SyncCodeBlock.class
  Last modified 2018-07-25; size 426 bytes
  MD5 checksum c80bc322c87b312de760942820b4fed5
  Compiled from "SyncCodeBlock.java"
public class com.hc.concurrencys.SyncCodeBlock
  minor version: 0
  major version: 52
  flags: ACC_PUBLIC, ACC_SUPER
Constant pool:
  // ...constant pool data omitted

  // constructor
  public com.hc.concurrencys.SyncCodeBlock();
    descriptor: ()V
    flags: ACC_PUBLIC
    Code:
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #1   // Method java/lang/Object."<init>":()V
         4: return
      LineNumberTable:
        line 7: 0

  // =========== the syncTask method ===========
  public void syncTask();
    descriptor: ()V
    flags: ACC_PUBLIC
    Code:
      stack=3, locals=3, args_size=1
         0: aload_0
         1: dup
         2: astore_1
         3: monitorenter       // enter the synchronized block
         4: aload_0
         5: dup
         6: getfield #2        // Field i:I
         9: iconst_1
        10: iadd
        11: putfield #2        // Field i:I
        14: aload_1
        15: monitorexit        // exit the synchronized block
        16: goto 24
        19: astore_2
        20: aload_1
        21: monitorexit        // exit on the exception path
        22: aload_2
        23: athrow
        24: return
      Exception table:
      // ...other bytecode omitted
}
SourceFile: "SyncCodeBlock.java"

We will focus on the following instructions in the bytecode:

3: monitorenter   // enter the synchronized block
// ... omitted
15: monitorexit   // exit the synchronized block
16: goto          24
// ... omitted
21: monitorexit   // exit on the exception path

The bytecode shows that the synchronized block is implemented with the monitorenter and monitorexit instructions: monitorenter marks the start of the synchronized block and monitorexit marks its end. When monitorenter executes, the current thread tries to acquire the monitor of objectref (the lock object). If the monitor's entry counter is 0, the thread acquires it successfully and sets the counter to 1; the lock is obtained. If the current thread already owns objectref's monitor, it may re-enter it (more on reentrancy later) and the counter is incremented by 1. If another thread already owns the monitor, the current thread blocks until the owning thread finishes, i.e. executes monitorexit, which releases the monitor and resets the counter to 0, giving other threads the chance to acquire it. Note that the compiler guarantees that however a method completes, normally or abnormally, every monitorenter executed in it is matched by a monitorexit. To pair monitorenter and monitorexit correctly when the method completes abnormally, the compiler automatically generates an exception handler that claims to handle all exceptions, whose sole purpose is to execute monitorexit. That is also why the bytecode contains an extra monitorexit instruction: it is the one executed to release the monitor when an exception ends the block.
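That compiler-generated exception path can be observed directly. In the sketch below (names are illustrative), a synchronized block throws; because the generated handler runs monitorexit before rethrowing, another thread can still acquire the lock afterwards.

```java
public class ExceptionReleasesLock {
    static final Object lock = new Object();

    // The compiler-generated handler executes monitorexit before the
    // exception propagates out, so the lock is released despite the throw
    static void failInside() {
        synchronized (lock) {
            throw new IllegalStateException("boom");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        try {
            failInside();
        } catch (IllegalStateException expected) {
            // the exception escaped the synchronized block
        }

        // If the lock had not been released, this thread would block forever
        Thread t = new Thread(() -> {
            synchronized (lock) {
                System.out.println("lock acquired after exception");
            }
        });
        t.start();
        t.join();
    }
}
```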

3.3 Basic principle of synchronized method

Method-level synchronization is implicit: it is controlled not by bytecode instructions but by the method invocation and return machinery. The JVM distinguishes a synchronized method by the ACC_SYNCHRONIZED access flag in the method_info structure of the class file. When a method is invoked, the invocation instruction checks whether the method's ACC_SYNCHRONIZED flag is set; if so, the executing thread first acquires the monitor (the term used in the JVM specification) and releases it when the method completes, whether normally or abnormally. While the thread holds the monitor during method execution, no other thread can obtain the same monitor. If an exception is thrown during a synchronized method and is not handled inside the method, the monitor held by the method is automatically released as the exception propagates out of the method. Let's look at the bytecode level:

public class SyncMethod {

    public int i;

    public synchronized void syncTask() {
        i++;
    }
}

The bytecode decompiled with javap is as follows:

Classfile /***/src/main/java/com/zejian/concurrencys/SyncMethod.class
  Last modified 2017-6-2; size 308 bytes
  MD5 checksum f34075a8c059ea65e4cc2fa610e0cd94
  Compiled from "SyncMethod.java"
public class com.hc.concurrencys.SyncMethod
  minor version: 0
  major version: 52
  flags: ACC_PUBLIC, ACC_SUPER
Constant pool:
  // ...unnecessary bytecode omitted

  // ================== the syncTask method ==================
  public synchronized void syncTask();
    descriptor: ()V
    // method flags: ACC_PUBLIC is the public modifier,
    // ACC_SYNCHRONIZED marks the method as synchronized
    flags: ACC_PUBLIC, ACC_SYNCHRONIZED
    Code:
      stack=3, locals=1, args_size=1
         0: aload_0
         1: dup
         2: getfield #2  // Field i:I
         5: iconst_1
         6: iadd
         7: putfield #2  // Field i:I
        10: return
      LineNumberTable:
        line 12: 0
        line 13: 10
}
SourceFile: "SyncMethod.java"

As the bytecode shows, a synchronized method has no monitorenter and monitorexit instructions; instead it carries the ACC_SYNCHRONIZED flag, which marks it as a synchronized method. The JVM uses this flag to tell whether a method is declared synchronized and to perform the corresponding synchronized invocation. That is the basic principle behind synchronized on code blocks and on methods. It is also worth noting that in early versions of Java, synchronized was a heavyweight lock and inefficient: the monitor relies on the underlying operating system's mutex lock, and switching between threads requires the OS to move from user mode to kernel mode, a transition that takes a relatively long time and carries a relatively high cost. This is why early synchronized was slow. Java 6 introduced lightweight locks and biased locks to reduce the performance cost of acquiring and releasing locks. Let's take a quick look at Java's JVM-level optimizations of synchronized.
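The ACC_SYNCHRONIZED flag can also be observed from plain Java via reflection: java.lang.reflect.Modifier exposes it through Modifier.isSynchronized. A small sketch (class and method names are mine):

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class CheckSync {
    public synchronized void syncTask() { }
    public void plainTask() { }

    public static void main(String[] args) throws NoSuchMethodException {
        Method sync = CheckSync.class.getMethod("syncTask");
        Method plain = CheckSync.class.getMethod("plainTask");
        // Modifier.isSynchronized reads the same ACC_SYNCHRONIZED flag
        // that the JVM checks on method invocation
        System.out.println(Modifier.isSynchronized(sync.getModifiers()));  // true
        System.out.println(Modifier.isSynchronized(plain.getModifiers())); // false
    }
}
```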

4. The Java virtual machine's optimizations of synchronized

There are four lock states: unlocked, biased, lightweight, and heavyweight. Under lock contention, a lock can upgrade from a biased lock to a lightweight lock and then to a heavyweight lock. Lock upgrading is one-way: a lock can only be upgraded from lower to higher, never downgraded. We analyzed the heavyweight lock in detail above; now we introduce biased locks, lightweight locks, and the JVM's other optimization techniques. This article does not go deep into each lock's implementation and transition process; it focuses instead on the core optimization idea behind each lock the Java virtual machine provides. The details are, after all, quite tedious.

4.1 Biased locks

The biased lock was added after Java 6. It optimizes the locking operation based on the observation that in most cases a lock is not only uncontended but is always acquired many times by the same thread; biased locking was introduced to reduce the cost of that thread reacquiring the lock (which involves some time-consuming CAS operations). Its core idea: once a thread acquires the lock, the lock enters biased mode and the Mark Word changes to the biased-lock layout; when the same thread requests the lock again, no synchronization operation is needed at all to acquire it, saving a large number of lock-acquisition operations and thereby improving performance. So in the absence of lock contention, biased locking works well; after all, it is very likely that the same thread will request the same lock many times in a row. Under heavy contention, however, the bias fails, because each acquisition is likely to come from a different thread, and in such scenarios biased locking would do more harm than good. Note that when the bias fails, the lock is not immediately inflated to a heavyweight lock but is first upgraded to a lightweight lock. Let's move on to lightweight locks.

4.2 Lightweight locks

When biased locking fails, the virtual machine does not immediately upgrade to a heavyweight lock but first tries an optimization called the lightweight lock (also added in Java 6), at which point the Mark Word changes to the lightweight-lock layout. The premise under which lightweight locks improve performance is that "for the vast majority of locks, there is no contention during the entire synchronization cycle"; note that this is empirical data. It is important to understand that lightweight locks suit scenarios where threads execute synchronized blocks alternately; if the same lock is contended at the same moment, the lightweight lock inflates into a heavyweight lock.

4.3 Spin locks

When lightweight locking fails, the virtual machine applies another optimization, spinning, to prevent the thread from actually being suspended at the operating-system level. This is based on the observation that in most cases a lock is not held for long; suspending the thread directly could do more harm than good, since to switch threads the operating system must move from user mode to kernel mode, a transition that takes a relatively long time and carries a relatively high cost. A spin lock therefore assumes that the current thread will be able to acquire the lock in the near future, so the virtual machine has the thread execute a few empty loops while trying to take the lock (this is the spin); generally not many, perhaps 50 or 100 iterations. If the lock is acquired during the spin, the thread successfully enters the critical section; if not, the thread is suspended at the operating-system level. This is how spinning optimizes locking, and it can genuinely improve efficiency. If all of this fails, the lock is finally upgraded to a heavyweight lock.

4.4 Adaptive spin locks

JDK 1.6 introduced a smarter spin lock: the adaptive spin lock. Adaptive means that the number of spins is no longer fixed but is determined by the time of the previous spin on the same lock and the state of the lock's owner.

If a thread spins successfully, it will spin more next time: the virtual machine reasons that since the spin succeeded last time, it is likely to succeed again, and it allows the spin wait to last more iterations. Conversely, if spinning rarely succeeds for a particular lock, future attempts to acquire it will spin less or skip spinning entirely, to avoid wasting processor resources.

4.5 Lock elimination

Lock elimination is another, more thorough, lock optimization in the virtual machine. During JIT compilation (which, roughly, compiles a piece of code the first time it is about to be executed, hence "just-in-time" compilation), the Java virtual machine scans the running context and removes locks for which there can be no shared-resource contention, eliminating unnecessary locking. In the example below, StringBuffer's append is a synchronized method, but in the add method the StringBuffer is a local variable that cannot be used by other threads, so there can be no contention over it and the JVM automatically removes its lock.

public class StringBufferRemoveSync {

    public void add(String str1, String str2) {
        // StringBuffer is thread-safe, but since sb is used only inside this
        // method it can never be referenced by another thread; sb is therefore
        // not a shared resource, and the JVM automatically eliminates its lock
        StringBuffer sb = new StringBuffer();
        sb.append(str1).append(str2);
    }

    public static void main(String[] args) {
        StringBufferRemoveSync rmsync = new StringBufferRemoveSync();
        for (int i = 0; i < 100; i++) {
            rmsync.add("abc", "123");
        }
    }
}

4.6 State transition between biased locks, lightweight locks, and heavyweight locks

5. Key points you may need to know about synchronized

5.1 Reentrancy of synchronized

In mutual-exclusion terms, when a thread attempts to operate on a critical resource whose object lock is held by another thread, it blocks; but when a thread requests an object lock it already holds, the request succeeds. This is a reentrant lock. In Java, synchronized is a built-in locking mechanism and is reentrant: when a thread executing one synchronized method of an object calls another synchronized method of the same object from within its body, that is, when a thread that has obtained an object lock requests that same lock again, the call is allowed. That is the reentrancy of synchronized. For example:

public class AccountingSync implements Runnable{
    static AccountingSync instance = new AccountingSync();
    static int i = 0;
    static int j = 0;

    @Override
    public void run() {
        for (int k = 0; k < 100; k++) {
            // this: the current instance object as the lock
            synchronized (this) {
                i++;
                increase(); // the reentrancy of synchronized
            }
        }
    }

    public synchronized void increase() {
        j++;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(instance);
        Thread t2 = new Thread(instance);
        t1.start();
        t1.join();
        t2.start();
        t2.join();
        System.out.println(i); // 200
    }
}

As the code demonstrates, after obtaining the lock on the current instance, the thread enters the synchronized block and executes the synchronized code; inside the block it calls another synchronized method of the same instance, requesting the current instance's lock again, and is allowed to execute the method body. That is the most direct expression of lock reentrancy. Pay special attention to the fact that a subclass can likewise call a synchronized method of its parent class through the reentrant lock. Also note that since synchronized is implemented on top of the monitor, the monitor's counter is still incremented by 1 on each reentry.

Reentrancy further enhances the encapsulation of locking behavior, thus simplifying the development of concurrent object-oriented code. Consider the following program:


public class Father {
    public synchronized void doSomething() {
        // do something...
    }
}

public class Child extends Father {
    public synchronized void doSomething() {
        // do something...
        super.doSomething();
    }
}

The subclass overrides the synchronized method of its parent class and then calls the parent's version inside it. If the lock were not reentrant, this code would deadlock.

Because the doSomething methods in Father and Child are both synchronized, each acquires the lock on the Child instance before executing. If the built-in lock were not reentrant, the thread calling super.doSomething() could not acquire the Child's mutex because the lock is already held, and the thread would block forever, waiting for a lock it can never obtain. Reentrancy avoids this kind of deadlock.

Because the mutex is reentrant, a thread that holds it is not blocked when it calls other synchronized methods or blocks of the same object, including synchronized methods or blocks inherited from the parent class.

5.2 Thread Interruption and synchronized

Interrupting a thread means, as the word suggests, breaking into it in the middle of its run. Java provides the following three methods for thread interruption:

// Interrupt the thread (instance method)
public void Thread.interrupt();

// Determine whether the thread is interrupted (instance method)
public boolean Thread.isInterrupted();

// Determine whether it is interrupted and clear the current interrupted state (static method)
public static boolean Thread.interrupted();

When a thread is blocked or attempting a blocking operation, it can be interrupted with Thread.interrupt(). Note that an InterruptedException is thrown in the blocked thread and its interrupted status is reset (cleared). The following code demonstrates this process:

import java.util.concurrent.TimeUnit;

public class InterruputSleepThread {
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread() {
            @Override
            public void run() {
                // The while loop is inside the try, so the exception breaks out of the loop
                try {
                    while (true) {
                        // The current thread blocks; InterruptedException must be caught and handled
                        TimeUnit.SECONDS.sleep(2);
                    }
                } catch (InterruptedException e) {
                    System.out.println("Interruted When Sleep");
                    boolean interrupt = this.isInterrupted();
                    // The interrupted status has been reset
                    System.out.println("interrupt:" + interrupt);
                }
            }
        };
        t1.start();
        TimeUnit.SECONDS.sleep(2);
        // Interrupt the blocked thread
        t1.interrupt();

        /**
         * Output:
         * Interruted When Sleep
         * interrupt:false
         */
    }
}

As shown in the code above, we create a thread and put it into a blocked state with the sleep method. After it starts, we call the interrupt method on the thread instance, which interrupts the blocking, throws an InterruptedException, and resets the interrupted status. Some readers may wonder why TimeUnit.SECONDS.sleep(2) is used rather than Thread.sleep(2000). The reason is simple: the former states its unit explicitly (seconds), while the latter carries no explicit unit, and internally TimeUnit.SECONDS.sleep(2) eventually calls Thread.sleep(2000) anyway. TimeUnit.SECONDS.sleep(2) is recommended for its clearer semantics; note that TimeUnit is an enum type. Besides interrupting blocked threads, we may also need to interrupt threads that are running and not blocked. In that case, calling Thread.interrupt() alone gets no response. The following code fails to interrupt the non-blocked thread:

import java.util.concurrent.TimeUnit;

public class InterruputThread {
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread() {
            @Override
            public void run() {
                while (true) {
                    System.out.println("Not interrupted");
                }
            }
        };
        t1.start();
        TimeUnit.SECONDS.sleep(2);
        t1.interrupt();

        /**
         * Output (runs forever):
         * Not interrupted
         * Not interrupted
         * Not interrupted
         * ......
         */
    }
}

Although the interrupt method is called, thread t1 is not interrupted: for a thread in a non-blocked state, we must detect the interrupt ourselves and end the run method manually. The improved code looks like this:

import java.util.concurrent.TimeUnit;

public class InterruputThread {
    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread() {
            @Override
            public void run() {
                while (true) {
                    // Check whether the current thread has been interrupted
                    if (this.isInterrupted()) {
                        System.out.println("Thread interrupt");
                        break;
                    }
                }
                System.out.println("Loop out, thread interrupted!");
            }
        };
        t1.start();
        TimeUnit.SECONDS.sleep(2);
        t1.interrupt();

        /**
         * Output:
         * Thread interrupt
         * Loop out, thread interrupted!
         */
    }
}

Here we use the instance method isInterrupted to determine whether the thread has been interrupted; if so, it breaks out of the loop and the thread ends. Note that calling interrupt() on a thread in a non-blocked state does not reset the interrupted status. To summarize the two cases: when a thread is blocked or attempting a blocking operation, we can interrupt it with the instance method interrupt(); an InterruptedException is thrown (it must be caught inside run, which cannot declare it) and the interrupted status is reset. When a thread is running and not blocked, we also call the instance method interrupt(), but we must check the interrupted status manually and write the code that ends the thread (essentially returning from the run method). Sometimes we need to handle both cases at once, which can be written as:

public void run() {
    try {
        // Check whether the current thread has been interrupted.
        // Note that interrupted() is a static method that also clears the interrupted status.
        while (!Thread.interrupted()) {
            TimeUnit.SECONDS.sleep(2);
        }
    } catch (InterruptedException e) {
        // Interrupted while blocked in sleep
    }
}
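The difference between the two status checks matters in practice. The sketch below (a hypothetical InterruptStatusDemo class, not from the examples above) shows that the instance method isInterrupted() only reads the flag, while the static Thread.interrupted() reads it and clears it, so a second call returns false:

```java
public class InterruptStatusDemo {
    public static void main(String[] args) {
        // Set the interrupt flag on the current thread
        Thread.currentThread().interrupt();

        // isInterrupted() only reads the flag; it does not clear it
        System.out.println(Thread.currentThread().isInterrupted()); // true

        // interrupted() reads AND clears the flag
        System.out.println(Thread.interrupted()); // true

        // The second call sees the cleared flag
        System.out.println(Thread.interrupted()); // false
    }
}
```

This is why the loop condition `!Thread.interrupted()` above both detects the interrupt and leaves the status cleared for any caller further up the stack.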

In fact, thread interruption does not work on a thread that is blocked waiting to acquire a synchronized lock. That is, for a thread waiting on a synchronized lock there are only two outcomes: either it acquires the lock and continues to execute, or it keeps waiting. Calling the interrupt method on it has no effect on the wait. The demo code is as follows:

import java.util.concurrent.TimeUnit;

public class SynchronizedBlocked implements Runnable {

    public synchronized void f() {
        System.out.println("Trying to call f()");
        while (true) { // Never releases the lock
            Thread.yield();
        }
    }

    /**
     * The constructor creates and starts a new thread that acquires the object lock
     */
    public SynchronizedBlocked() {
        // This thread will hold the current instance lock forever
        new Thread() {
            public void run() {
                f(); // Lock acquired by this thread
            }
        }.start();
    }

    @Override
    public void run() {
        // Interrupt check
        while (true) {
            if (Thread.interrupted()) {
                System.out.println("Interrupt thread!!");
                break;
            } else {
                f();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedBlocked sync = new SynchronizedBlocked();
        Thread t = new Thread(sync);
        // After starting, t calls f() but cannot get the instance lock and waits
        t.start();
        TimeUnit.SECONDS.sleep(1);
        // Try to interrupt thread t
        t.interrupt();
    }
}

The SynchronizedBlocked constructor creates and starts a new thread that calls f() to acquire the current instance lock; f() loops forever, so the lock is never released. Since SynchronizedBlocked is itself a Runnable, thread t also calls f() in its run method after starting, but the object lock is already occupied by the thread created in the constructor, so t blocks waiting for the lock, and calling t.interrupt() cannot interrupt it.

5.3 Wake-on-wait and synchronized

This mainly refers to the wait and notify/notifyAll methods. To use these three methods, the caller must be inside a synchronized block or synchronized method; otherwise an IllegalMonitorStateException is thrown. This is because before calling any of them the thread must hold the monitor of the current object: the wait and notify/notifyAll methods depend on the monitor object. From the earlier analysis, we know that the monitor is reachable through the Mark Word in the object header (which stores a pointer to the monitor), and that the synchronized keyword is what acquires the monitor. That is why notify/notifyAll and wait must be called inside a synchronized block or synchronized method.

When wait is called, the thread releases the monitor lock and does not proceed until notify/notifyAll is called; by contrast, sleep only puts the thread to sleep and does not release the lock. In addition, the notify/notifyAll methods do not release the monitor lock immediately: the lock is released automatically only when the enclosing synchronized block or synchronized method finishes executing.
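The rules above can be sketched in a minimal example (the WaitNotifyDemo class name and the ready flag are illustrative, not from the text): both wait and notify are called while holding the monitor of lock, wait releases that monitor while the thread waits, and the notifying thread's monitor is released only when its synchronized block exits.

```java
import java.util.concurrent.TimeUnit;

public class WaitNotifyDemo {
    private static final Object lock = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {        // must hold the monitor before calling wait()
                while (!ready) {         // loop guards against spurious wakeups
                    try {
                        lock.wait();     // releases the monitor while waiting
                    } catch (InterruptedException e) {
                        return;
                    }
                }
                System.out.println("Condition met, waiter resumes");
            }
        });
        waiter.start();
        TimeUnit.MILLISECONDS.sleep(200);

        synchronized (lock) {            // must hold the monitor before calling notify()
            ready = true;
            lock.notify();               // monitor is released only when this block exits
        }
        waiter.join();
    }
}
```

Calling lock.wait() or lock.notify() outside the synchronized (lock) blocks would throw IllegalMonitorStateException, which is exactly the monitor requirement described above.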