Introduction

The principle behind the synchronized keyword (a mutex) is a perennial interview question at major companies, and an essential part of understanding Java concurrent programming. It covers a lot of ground, with many points worth understanding. This article draws on reference books and my own understanding to work from basic usage down to an in-depth analysis of the underlying implementation. If you spot an error or have a doubt, please leave a comment, and thank you!

1. Synchronized usage and lock types

As everyone knows, the point of using multiple threads in project development can be summed up in one word: speed. Multithreaded programming can bring substantial performance benefits to our programs and squeeze the most out of the hardware, which long ago moved past the single-core limitation; a purely single-threaded application therefore often wastes much of the machine's computing power. Multithreading has become more and more important in day-to-day development, and a threshold topic that first-tier companies almost always ask about in interviews. When we study concurrent programming in Java, thread safety is a central concern, and its root causes come down to three elements: multiple threads, shared resources (critical resources), and non-atomic operations. A brief summary: multiple threads performing non-atomic operations on a shared resource at the same time can cause thread-safety problems. (If you have any doubts about these three concepts, read my previous article on the JMM and volatile.) Given this problem, how do we solve it? By removing any one of the three elements. In practice this means turning the parallel execution of multiple threads into single-threaded serial execution at the point where the problem occurs: other threads must wait until the current thread finishes. This scheme goes by the elegant name of a mutex (exclusive lock).
That is, when multiple threads simultaneously execute code protected by a mutex (a critical section), each must first acquire the lock, but only one thread will succeed; the others fall into a waiting state until the current thread finishes executing and releases the lock, at which point another thread can proceed. In Java concurrent programming, the synchronized keyword provides exactly this mutual-exclusion mechanism. It is important to note that synchronized does one more thing: it ensures that changes made by one thread to a critical resource (a shared resource) are visible to all other threads, covering the visibility that volatile guaranteed in the previous article. (synchronized is not a complete replacement for volatile, however: it guarantees visibility, atomicity, and a form of ordering, but it does not prohibit instruction reordering inside the critical section, which we will examine later.)

1.1 The three lock types of the synchronized keyword (all are ultimately locks on an object)

  • this lock: the lock of the current instance;
  • Class lock: the lock of the class's Class object;
  • Object lock: the lock of a given object instance.

1.2 The three application modes of the synchronized keyword

  • Modifying an instance member method: uses the this lock; a thread that wants to execute an instance method modified by synchronized must first acquire the lock of the current instance object;
  • Modifying a static member method: uses the class lock; a thread that wants to execute a static method modified by synchronized must first acquire the lock of the current Class object;
  • Modifying a code block: uses a given object as the lock; a thread that wants to execute a code block guarded by synchronized must first acquire the lock of that given object.
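As a sketch, the three forms can sit side by side in one class (the class and method names here are illustrative, not from the original article):

```java
public class LockForms {
    private static int counter = 0;
    private final Object lock = new Object();

    // 1) Instance method: the lock is the current instance (this)
    public synchronized void instanceMethod() {
        counter++;
    }

    // 2) Static method: the lock is the Class object (LockForms.class)
    public static synchronized void staticMethod() {
        counter++;
    }

    // 3) Code block: the lock is whatever object is given
    public void blockMethod() {
        synchronized (lock) {
            counter++;
        }
    }

    public static int getCounter() {
        return counter;
    }
}
```

Note that the three forms use three different lock objects, so they do not mutually exclude one another; which form to choose depends on which lock you actually need.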

1.2.1 synchronized modifies instance member methods

public class SyncIncrDemo implements Runnable{
    // Shared resource (critical resource)
    static int i = 0;

    // The synchronized keyword modifies an instance member method
    public synchronized void incr(){
        i++;
    }
    @Override
    public void run() {
        for(int j=0; j<1000; j++){
            incr();
        }
    }
    public static void main(String[] args) throws InterruptedException {
        SyncIncrDemo syncIncrDemo = new SyncIncrDemo();
        Thread t1=new Thread(syncIncrDemo);
        Thread t2=new Thread(syncIncrDemo);
        t1.start();
        t2.start();
        /**
         * join: the calling thread yields and waits for the target thread.
         * Here the main thread calls join on t1 and t2, so it gives up the CPU
         * until t1 and t2 have both finished, then resumes. This effectively
         * synchronizes the main thread with t1 and t2.
         */
        t1.join();
        t2.join();
        System.out.println(i);
    }
    /** Output: 2000 */
}

In the code above, we start two threads, t1 and t2, that operate on the same shared resource, the int variable i. As analyzed in the previous chapter, the increment i++ is not atomic; it takes three steps: 1) read the value from main memory; 2) perform the +1 in the thread's own working memory; 3) flush the result back to main memory. If t2 reads the shared value of i in the window between t1 reading the old value and writing back the new one (that is, while t1 is doing the +1 in its own working memory), then t1 and t2 see the same value, both add 1 to it, and one update is lost, which is a thread-safety failure. The incr method must therefore be modified with synchronized so the threads mutually exclude one another. Note that since synchronized modifies the instance method incr(), the lock acquired is the this lock of the current instance object syncIncrDemo, and note that any Java object can serve as a synchronization lock (this depends on the Monitor in the object header, examined later). Without synchronized on incr(), the final output may be less than 2000; that difference is exactly what the keyword provides (the original article's memory diagram, omitted here, illustrates this). We should also realize that while one thread is executing a synchronized instance method of an object, other threads cannot enter any other synchronized instance method of that same object; after all, an object has only one lock.

When one thread holds an object's lock, other threads cannot acquire it and therefore cannot enter any synchronized instance method of that object, but they can still call its non-synchronized instance methods, as well as its synchronized static methods (which use a different lock). Of course, if thread A calls a synchronized instance method f1 of instance obj1 (the lock is obj1) and thread B calls a synchronized instance method f2 of another instance obj2 (the lock is obj2), the two may run at the same time, because the two instance locks are different. If the data the two threads operate on is not shared, thread safety still holds; unfortunately, if they operate on shared data, thread safety can no longer be guaranteed, as the following code demonstrates:

public class SyncIncrDemo implements Runnable{
    // Shared resource (critical resource)
    static int i = 0;

    // The synchronized keyword modifies an instance member method
    public synchronized void incr(){
        i++;
    }
    @Override
    public void run() {
        for(int j=0; j<1000; j++){
            incr();
        }
    }
    public static void main(String[] args) throws InterruptedException {
        SyncIncrDemo syncIncrDemo1 = new SyncIncrDemo();
        SyncIncrDemo syncIncrDemo2 = new SyncIncrDemo();
        Thread t1=new Thread(syncIncrDemo1);
        Thread t2=new Thread(syncIncrDemo2);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(i);
    }
    /** Possible output: 1991 */
}

What makes this code different is that we create two instances, syncIncrDemo1 and syncIncrDemo2, and start two different threads on them to operate on the shared variable i. Unfortunately the result is something like 1991 rather than the expected 2000, because the code makes a serious mistake: although incr is modified with synchronized, the two threads run on two different instance objects, which means two different instance locks. t1 and t2 each acquire their own lock, so they are not using the same lock, and thread safety cannot be guaranteed (again, the original memory diagram is omitted here). The way out of this dilemma is to make incr static: no matter how many instances we create, the virtual machine generates only one Class object after loading the bytecode, so in that case the lock object is unique. Let us now see how to use synchronized on a static incr method.

1.2.2 synchronized modifies static member methods

When synchronized is used to modify a static method, the lock is the class-object lock of the current class. Since a Java program has only one Class object per class, only one such lock exists no matter how many instances are created, so threads do not end up acquiring different lock resources. Because static members do not belong to any instance but to the class, concurrent operations on static members can be controlled through the class-object lock. Note that if thread A calls a non-static synchronized method on an instance and thread B calls a static synchronized method of the class, simultaneous execution is allowed, with no mutual exclusion: the thread entering the static synchronized method acquires the lock of the current Class object, while the thread entering the non-static synchronized method acquires the lock of the current instance object.

public class SyncIncrDemo implements Runnable{
    // Shared resource (critical resource)
    static int i = 0;

    // synchronized on an instance member method; the lock is the current instance (this)
    public synchronized void reduce(){
        i--;
    }

    // synchronized on a static member method; the lock is the Class object SyncIncrDemo.class
    public static synchronized void incr(){
        i++;
    }
    @Override
    public void run() {
        for(int j=0; j<1000; j++){
            incr();
        }
    }
    public static void main(String[] args) throws InterruptedException {
        SyncIncrDemo syncIncrDemo1 = new SyncIncrDemo();
        SyncIncrDemo syncIncrDemo2 = new SyncIncrDemo();
        Thread t1=new Thread(syncIncrDemo1);
        Thread t2=new Thread(syncIncrDemo2);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(i);
    }
    /** Output: 2000 */
}

Because synchronized here modifies the static incr method, the lock object is not the current instance (this) as with instance methods but the Class object of the current class, so even with multiple instances there is no thread-safety problem when multiple threads execute incr at the same time (memory diagram omitted). Notice that reduce is an instance method whose lock is the current instance object; if another thread calls it, there is no mutual exclusion with incr, since the lock objects differ. We should be aware, though, that this situation can itself cause a thread-safety problem, because both methods operate on the shared static variable i.

PS: whether synchronized modifies an instance method or a static method, the lock is always some object. For an instance method, the lock object is the newly created instance itself, because instance methods belong to that object; for a static method, the lock object is the Class object, because static members do not belong to any instance but are class members. (This may sound a bit abstract; just remember that synchronized always uses an object as the lock.)
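The independence of the class lock and the instance lock can be made concrete with a small sketch of my own (class and method names are illustrative): a thread holding the class lock does not block another thread from acquiring an instance lock, because they are two different lock objects.

```java
import java.util.concurrent.CountDownLatch;

public class TwoLocksDemo {
    // Holds the class lock (TwoLocksDemo.class) until told to release it
    static synchronized void holdClassLock(CountDownLatch entered, CountDownLatch release) {
        entered.countDown();           // signal: the class lock is now held
        try { release.await(); } catch (InterruptedException ignored) {}
    }

    // Needs only the instance lock (this), a different lock object
    synchronized String useInstanceLock() {
        return "instance lock acquired while class lock was held";
    }

    static String demo() throws InterruptedException {
        CountDownLatch entered = new CountDownLatch(1);
        CountDownLatch release = new CountDownLatch(1);
        Thread a = new Thread(() -> holdClassLock(entered, release));
        a.start();
        entered.await();                               // wait until thread a owns the class lock
        String msg = new TwoLocksDemo().useInstanceLock(); // not blocked: different lock
        release.countDown();
        a.join();
        return msg;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());
    }
}
```

If useInstanceLock were also static synchronized, the call would block until the class lock was released; as written, the two proceed concurrently.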

1.2.3 Synchronized modifies code blocks

In addition to modifying instance and static methods, the synchronized keyword can also guard a block of code. In some cases the method body is large (say, a 2000-line method), and synchronizing the entire method would make its execution needlessly expensive; not all of those 2000 lines carry any thread-safety risk, and if the method also contains time-consuming operations (such as I/O), synchronizing the whole thing could leave a large number of threads blocked. In such cases we can wrap only the code that needs synchronization in a synchronized block instead of modifying the whole method with the synchronized keyword, as the following example shows:

public class SyncIncrDemo implements Runnable{
    // Shared resource (critical resource)
    static int i = 0;

    // The synchronized keyword modifies a code block
    public void methodA(){
        // ... a thousand lines of code omitted ...

        /**
         * Assuming this is the only operation on a shared resource here,
         * synchronizing the whole method would be inappropriate;
         * a synchronized block suffices.
         */
        synchronized(SyncIncrDemo.class){
            i++;
        }

        // ... 800 lines of code omitted ...
    }
    @Override
    public void run() {
        methodA();
    }

    public static void main(String[] args) throws InterruptedException {
        SyncIncrDemo syncIncrDemo = new SyncIncrDemo();
        for(int j=0; j<1000; j++){
            new Thread(syncIncrDemo).start();
        }
        Thread.sleep(10000);
        System.out.println(i);
    }
    /** Output: 1000 */
}

As the code above shows, a synchronized block is given a lock object, here the Class object. Every time a thread enters the block it must hold the SyncIncrDemo.class object lock; if another thread currently holds that lock, the newly arriving thread must wait, which guarantees that only one thread executes the block at a time. Besides the Class object, we can also use this (the current instance) or any given object as the lock, as follows:

// The current instance
synchronized(this){
    i++;
}
// A given object
Object obj = new Object();
synchronized(obj){
    i++;
}
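A common refinement of the "given object" form (my own sketch, not from the original) is a dedicated private final lock object: unlike synchronized(this), external code cannot accidentally synchronize on the same lock as your internals.

```java
public class Counter {
    private final Object lock = new Object(); // private lock: callers cannot interfere
    private int count = 0;

    public void incr() {
        synchronized (lock) { // same mutual exclusion as synchronized(this), but encapsulated
            count++;
        }
    }

    public int get() {
        synchronized (lock) { // read under the same lock for visibility
            return count;
        }
    }
}
```

With four threads each calling incr() 1000 times, get() reliably returns 4000, because every access to count goes through the same lock object.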

That covers the basic description and use of synchronized. Next we study the underlying implementation of the keyword, to deepen our understanding of it further.

2. Analysis of the underlying principle of synchronized

Synchronized implements locking by relying on the Monitor referenced from an object's header. The official JVM specification shows that synchronization in Java is based on monitor objects: acquiring the lock means entering the monitor (explicitly, via the monitorenter instruction), and releasing the lock means exiting it (explicitly, via the monitorexit instruction). It is important to note, however, that when synchronized modifies a method we do not see monitorenter/monitorexit in the javap output; instead, the method-invocation instruction reads the method's ACC_SYNCHRONIZED flag from the runtime constant pool, so method-level synchronization is implicit. Both the explicit and the implicit forms are ultimately realized by entering and exiting a monitor object (we discuss explicit versus implicit later). It is also worth noting that synchronized is only one implementation of synchronization and does not represent the whole Java synchronization mechanism, although most synchronization in Java code is expressed through synchronized methods or blocks. The virtual machine specification describes it as follows: synchronization in the Java virtual machine is implemented by entering and exiting monitor objects.

2.1 Understanding the memory layout of Java objects: object header, MarkWord, and Monitor objects

The memory layout of a Java object in the JVM is divided into three areas: the object header, instance data, and alignment padding, as follows:

  • Object header: stores the MarkWord and the type pointer (ClassMetadataAddress/KlassWord); if the object is an array, the array length is stored as well.
  • Instance data: the object's fields; for example, two ints and one long take 4 + 4 + 8 = 16 bytes.
  • Alignment padding: the virtual machine requires an object's starting address to be a multiple of 8 bytes, so it pads each object up to a multiple of 8. If the object header plus the instance data is already a multiple of 8, no padding is added, so it is worth noting that padding does not exist for every object. It serves byte alignment only, reducing heap fragmentation and making reads easier for the OS.
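The 8-byte alignment rule above is simple arithmetic, sketched below under the assumption of a 64-bit JVM with compressed oops, i.e. a 12-byte header (actual sizes vary with the JVM and its flags, and this sketch ignores field reordering):

```java
public class ObjectSizeSketch {
    static final int HEADER = 12; // assumed: 8-byte MarkWord + 4-byte compressed class pointer
    static final int ALIGN = 8;   // object sizes are rounded to 8-byte multiples

    // Round header + instance data up to the next multiple of 8
    static int paddedSize(int instanceDataBytes) {
        int raw = HEADER + instanceDataBytes;
        return ((raw + ALIGN - 1) / ALIGN) * ALIGN;
    }

    public static void main(String[] args) {
        // two ints + one long = 16 bytes of instance data: 12 + 16 = 28, padded to 32
        System.out.println(paddedSize(16));
        // one int = 4 bytes: 12 + 4 = 16, already a multiple of 8, no padding
        System.out.println(paddedSize(4));
    }
}
```

For real measurements, tools such as JOL (Java Object Layout) inspect the actual layout chosen by the JVM.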

The Java object header is the key to the low-level implementation of synchronized, so we focus on its structure. The JVM uses 2 machine words to store the object header (3 words if the object is an array, because the array length must also be stored). A word is 4 bytes on a 32-bit VM and 8 bytes on a 64-bit VM. When pointer compression is enabled on a 64-bit VM (-XX:+UseCompressedOops), the MarkWord is 8 bytes and the KlassWord is 4 bytes. A lot of the material on this topic is unclear, so I summarize it here (if you have questions, please leave a comment). The final object-header structure is as follows:

32-bit VM object header:
  • MarkWord: HashCode, generational age, biased-lock flag, and lock flag bits (4 bytes / 32 bits)
  • ClassMetadataAddress/KlassWord: type pointer to the object's class metadata, which the JVM uses to determine which class the object is an instance of (4 bytes / 32 bits)
  • ArrayLength: the array length if the object is an array; absent for non-array objects (4 bytes / 32 bits)

64-bit VM object header:
  • MarkWord: unused bits, HashCode, generational age, biased-lock flag, and lock flag bits (8 bytes / 64 bits)
  • ClassMetadataAddress/KlassWord: type pointer to the object's class metadata, which the JVM uses to determine which class the object is an instance of (8 bytes / 64 bits)
  • ArrayLength: the array length if the object is an array; absent for non-array objects (4 bytes / 32 bits)

By default (the lock-free state), the MarkWord in a 32-bit JVM's object header stores the object's HashCode, generational age, biased-lock flag, lock flag bits, and so on. In a 64-bit JVM it stores the HashCode, generational age, biased-lock flag, lock flag bits, and a run of unused bits:

Lock-free (default) MarkWord layout:
  • 32-bit VM: HashCode 25 bits | generational age 4 bits | biased-lock flag 1 bit | lock flag 2 bits
  • 64-bit VM: HashCode 31 bits | generational age 4 bits | biased-lock flag 1 bit | lock flag 2 bits | unused 26 bits

Because the information in the object header is extra storage cost unrelated to the object's own field data, the JVM, for the sake of space efficiency, designs the MarkWord as a non-fixed data structure: it reuses its own bits to store more useful data according to the object's current state. Besides the default layout listed above, the MarkWord can take the following forms (32-bit first, then 64-bit):

  • MarkWord: see above.
  • LockRecord: as mentioned earlier, when the object is in the biased-lock state, the MarkWord stores the ID of the biased thread; in the lightweight-lock state it stores a pointer to a LockRecord in the thread's stack; in the heavyweight-lock state it stores a pointer to a Monitor object in the heap. The LockRecord itself lives in the thread's stack: when a thread acquires a lightweight lock, it copies the MarkWord from the object header into its LockRecord. This copy is called the Displaced Mark Word, and the LockRecord also holds a pointer back to the object.

Markword information:

  • Unused: unused.
  • Identity_hashcode: the object's identity hash code, which stays the same even if hashCode() is overridden.
  • Age: indicates the age of the object.
  • Biased_lock: indicates whether bias lock is used.
  • Lock: lock flag bit.
  • ThreadID: ID of the thread that holds the lock resource.
  • Epoch: Bias lock timestamp.
  • Ptr_to_lock_record: pointer to lock_record in the thread’s local stack.
  • Ptr_to_heavyweight_monitor: pointer to monitor in the heap.

Among these states, lightweight and biased locks were added in Java SE 1.6 as optimizations of synchronized; they are briefly analyzed later. Here we focus on the heavyweight lock, also known as the synchronized object lock, whose lock flag bits are 10; in that state the MarkWord pointer points to the start address of a monitor object (also called a monitor lock). Every object has a Monitor associated with it, and the relationship between an object and its monitor can be implemented in various ways: the monitor may be created and destroyed together with the object, or generated automatically when a thread tries to acquire the object's lock. While a monitor is held by a thread, the object is in the locked state. In the HotSpot JVM, the Monitor is implemented by ObjectMonitor, whose main data structure is as follows (from the objectMonitor.hpp file in the HotSpot source):

// openjdk/hotspot/src/share/vm/runtime/objectMonitor.hpp
ObjectMonitor() {
    _header       = NULL;  // markOop object header
    _count        = 0;
    _waiters      = 0,
    _recursions   = 0;     // recursion (reentry) count
    _object       = NULL;  // the object this monitor is parasitic on; the lock
                           // does not appear in plain sight but lives with the object
    _owner        = NULL;  // the thread that owns this monitor
    _WaitSet      = NULL;  // threads in the wait state are added to _WaitSet
    _WaitSetLock  = 0;
    _Responsible  = NULL;
    _succ         = NULL;
    _cxq          = NULL;
    FreeNext      = NULL;
    _EntryList    = NULL;  // threads blocked waiting for the lock are added to this list
    _SpinClock    = 0;
    OwnerIsThread = 0;     // _owner is (Thread *) vs SP/BasicLock
    _previous_owner_tid = 0; // thread id of the previous owner of this monitor
}
  • Monitor: the monitor lives in the heap. What is a monitor? It can be understood as a synchronization tool, or described as a synchronization mechanism, and is usually described as an object in its own right.

Every Java object is born a potential Monitor: by design, each Java object comes into the world carrying an invisible lock, called the intrinsic lock or monitor lock.

A Monitor Record is a thread-private data structure: each thread has a list of available monitor records, and there is also a global list of free records. Each locked object is associated with one monitor (the LockWord in the object header's MarkWord points to the monitor's start address), and the monitor's Owner field stores the unique identifier of the thread holding the lock, indicating that the lock is occupied by that thread. The monitor has the following structure (optional reading; these are personal notes):

  • ContentionList: the contention queue; all threads requesting the lock are placed here first.
  • EntryList: threads from the ContentionList that qualify as candidates are moved here.
  • WaitSet: threads blocked by calling wait() are placed here.
  • OnDeck: at most one thread at a time is actively competing for the lock; that thread is called OnDeck.
  • Owner: initially NULL, meaning no thread owns the monitor record; set to the thread's unique identifier when a thread successfully acquires the lock, and reset to NULL when the lock is released.
  • !Owner: the thread that most recently released the lock.
  • RcThis: the number of threads blocked on or waiting for this monitor record.
  • Nest: the count used for reentrant locking.
  • Candidate: used to avoid unnecessarily blocking or waking threads. Since only one thread can hold the lock at a time, waking every blocked or waiting thread on each release would cause needless context switches (blocked, then ready, then blocked again after losing the race), seriously degrading performance. Candidate takes only two values: 0 means no thread needs waking; 1 means a successor thread should be woken to compete for the lock.
  • HashCode: stores the hash code copied from the object header (possibly including the GC age).
  • EntryQ: associates a system semaphore to block any thread that fails to lock the monitor record.

ObjectMonitor has two queues, _WaitSet and _EntryList, holding lists of ObjectWaiter objects (every thread waiting for the lock is wrapped in an ObjectWaiter), while _owner points to the thread that holds the ObjectMonitor. When multiple threads access a piece of synchronized code at the same time, they first enter the _EntryList. When a thread obtains the object's monitor, it enters the owner area, sets the monitor's owner variable to itself, and increments the monitor's count by 1. If the thread then calls wait(), it releases the monitor it holds: owner reverts to null, count decreases by 1, and the thread moves into the _WaitSet to await wakeup. When the current thread finishes executing, it likewise releases the monitor (the lock) and resets the variables so another thread can enter and acquire it. (The original article's diagram illustrates this flow.)

From this point of view, the Monitor is reachable from the object header of every Java object, via the pointer stored in the MarkWord. This is how the synchronized keyword acquires its lock, and why any object in Java can be used as a lock; it is also why the notify/notifyAll/wait methods live on the top-level Object class (more on this later). With this knowledge in hand, we can further analyze the concrete semantics of synchronized at the bytecode level.
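The fact that wait/notify operate on the object's monitor is directly observable: calling notify() on an object whose monitor the current thread does not hold throws IllegalMonitorStateException, while the same call inside a synchronized block on that object is legal. A small sketch of mine:

```java
public class MonitorOwnership {
    // Returns true if notify() without holding the monitor fails as expected
    static boolean notifyWithoutLockFails(Object lock) {
        try {
            lock.notify();  // we do NOT hold lock's monitor here
            return false;
        } catch (IllegalMonitorStateException expected) {
            return true;
        }
    }

    // Inside synchronized(lock) the current thread owns the monitor, so notify() is legal
    static boolean notifyWithLockSucceeds(Object lock) {
        synchronized (lock) {
            lock.notify();  // legal; with no waiting thread this is a no-op
            return true;
        }
    }

    public static void main(String[] args) {
        Object lock = new Object();
        System.out.println(notifyWithoutLockFails(lock));
        System.out.println(notifyWithLockSucceeds(lock));
    }
}
```

The same ownership rule applies to wait() and notifyAll(); this is exactly the "must hold the monitor" requirement encoded in the ObjectMonitor structure.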

2.2 Understanding the underlying implementation of a synchronized code block by decompiling the bytecode

The Java source file before compilation:

public class SyncDemo{
    int i;
    public void incr(){
        synchronized(this){
            i++;
        }
    }
}

Compile the above code with javac and disassemble it with javap -p -v -c to get the following bytecode:

Classfile /C:/Users/XYSM/Desktop/com/SyncDemo.class
  Last modified 2020-6-17; size 454 bytes
  MD5 checksum 457e08e7b9caa345db5c5cca53d8d612
  Compiled from "SyncDemo.java"
public class com.SyncDemo
  minor version: 0
  major version: 52
  flags: ACC_PUBLIC, ACC_SUPER
Constant pool:
  ......  // constant pool omitted
{
  int i;
    descriptor: I

  // constructor
  public com.SyncDemo();
    descriptor: ()V
    flags: ACC_PUBLIC
    Code:
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #1    // Method java/lang/Object."<init>":()V
         4: return
      LineNumberTable:
        line 3: 0

  /* ------- synchronized code block ------- */
  public void incr();
    descriptor: ()V
    flags: ACC_PUBLIC
    Code:
      stack=3, locals=3, args_size=1
         0: aload_0
         1: dup
         2: astore_1
         3: monitorenter        // enter synchronization
         4: aload_0
         5: dup
         6: getfield      #2    // Field i:I
         9: iconst_1
        10: iadd
        11: putfield      #2    // Field i:I
        14: aload_1
        15: monitorexit         // exit synchronization
        16: goto          24
        19: astore_2
        20: aload_1
        21: monitorexit         // second monitorexit, on the exception path
        22: aload_2
        23: athrow
        24: return
      Exception table:
        // other bytecode omitted ........
}
SourceFile: "SyncDemo.java"

The bytecode associated with synchronized is the following:

 3: monitorenter    // enter synchronization
15: monitorexit     // exit synchronization
21: monitorexit     // second monitorexit, on the exception path

The bytecode shows that a synchronized block is implemented with the monitorenter instruction, which marks the start of the block, and the monitorexit instruction, which marks its end. When monitorenter executes, the current thread attempts to acquire the monitor of objectref (the lock object). If the monitor's entry counter is 0, the thread acquires the monitor, sets the counter to 1, and successfully owns the lock. In pseudocode:

monitorenter:
    if (count == 0) {
        // lock acquired successfully
        count = count + 1;
    } else {
        // the lock is already held by another thread
    }

monitorexit:
    count = count - 1;

It is worth noting that if the current thread already owns objectref's monitor, it may re-enter it (more on reentrancy later), and the counter is incremented on each re-entry. If another thread already owns the monitor, the current thread blocks until the owning thread finishes, i.e. executes monitorexit, which releases the monitor and brings the counter back to 0, giving other threads a chance to own it. Note also that the compiler guarantees that, however the method terminates, normally or abnormally, every monitorenter executed in the method is matched by a monitorexit. This is why two monitorexit instructions appear in the bytecode above: to ensure monitorenter and monitorexit are correctly paired on abnormal completion, the compiler automatically generates an exception handler that can handle all exceptions, whose sole purpose is to execute monitorexit. As the bytecode shows, when an exception ends the block, the generated monitorexit releases the monitor, ensuring that a method that terminates through an exception does not leave the lock held and cause a deadlock.
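This guarantee, that monitorexit runs even on the exception path, is observable from Java code: if a thread throws inside a synchronized block, another thread can still acquire the lock afterwards. A small sketch (names are mine):

```java
public class ExceptionReleasesLock {
    static final Object LOCK = new Object();

    static void throwInsideLock() {
        synchronized (LOCK) {
            // the compiler-generated exception handler still runs monitorexit
            throw new RuntimeException("boom");
        }
    }

    static String demo() throws InterruptedException {
        Thread t = new Thread(() -> {
            try { throwInsideLock(); } catch (RuntimeException ignored) {}
        });
        t.start();
        t.join(); // the thread died with an exception inside synchronized

        synchronized (LOCK) { // if the lock had leaked, we would hang here forever
            return "lock acquired after exception";
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo());
    }
}
```

The second synchronized block acquires LOCK immediately, confirming the exception path released the monitor.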

2.3 Understanding the underlying implementation of a synchronized method by decompiling the bytecode

Method-level synchronization is implicit: it is controlled not by bytecode instructions but by the method invocation and return machinery. The JVM distinguishes a synchronized method by the ACC_SYNCHRONIZED access flag in the method_info structure of the constant pool. When a method is invoked, the invocation instruction checks whether the method's ACC_SYNCHRONIZED flag is set; if so, the executing thread acquires the monitor before running the method body and releases it when the method completes, whether normally or abnormally. While the thread holds the monitor, no other thread can obtain the same monitor. If an exception is thrown during a synchronized method and cannot be handled within the method, the monitor is released automatically as the exception propagates out of the method. Let us see how this looks at the bytecode level; the Java source file before compilation:

public class SyncDemo {
    int i;
    public synchronized void reduce() {
        i++;
    }
}

Compile the above code with javac and disassemble it with javap -p -v -c to get the following bytecode:

Classfile /C:/Users/XYSM/Desktop/com/SyncDemo.class
  Last modified 2020-6-17; size 454 bytes
  MD5 checksum 457e08e7b9caa345db5c5cca53d8d612
  Compiled from "SyncDemo.java"
public class com.SyncDemo
  minor version: 0
  major version: 52
  flags: ACC_PUBLIC, ACC_SUPER
Constant pool:
  ......  // constant pool information omitted
{
  int i;
    descriptor: I
    flags:

  // constructor
  public com.SyncDemo();
    descriptor: ()V
    flags: ACC_PUBLIC
    Code:
      stack=1, locals=1, args_size=1
         0: aload_0
         1: invokespecial #1   // Method java/lang/Object."<init>":()V
         4: return
      LineNumberTable:
        line 3: 0

  public synchronized void reduce();
    descriptor: ()V
    flags: ACC_PUBLIC, ACC_SYNCHRONIZED
    Code:
      stack=3, locals=1, args_size=1
         0: aload_0
         1: dup
         2: getfield      #2   // Field i:I
         5: iconst_1
         6: iadd
         7: putfield      #2   // Field i:I
        10: return
      LineNumberTable:
        line 11: 0
        line 12: 10
  // other bytecode information omitted ......
}
SourceFile: "SyncDemo.java"

Unlike a synchronized block, the bytecode of a synchronized method contains no monitorenter or monitorexit at all. Instead, the ACC_SYNCHRONIZED flag appears after ACC_PUBLIC in the method's flags, marking it as a synchronized method. The JVM uses this access flag to determine whether a method is declared synchronized and therefore whether the invocation requires the corresponding synchronization. That is the basic principle behind synchronized for both synchronized blocks and synchronized methods. It is also important to note that in early versions of Java, synchronized was a heavyweight lock and therefore inefficient: the monitor depends on the underlying operating system's mutex lock, and blocking or waking a thread requires the OS to switch between user mode and kernel mode. These transitions take a relatively long time and carry a relatively high cost, which is why early synchronized performed poorly. To reduce the cost of acquiring and releasing locks, Java 6 introduced lightweight locks and biased locks. Let's take a quick look at the JVM-level optimizations Java has made to synchronized.

3. Lock inflation optimizations for synchronized since Java 6

There are four lock states: unlocked, biased, lightweight, and heavyweight. As thread contention increases, a lock can be upgraded from biased to lightweight and then to heavyweight. Lock upgrading is generally one-way, from lower to higher, with no downgrading as far as user threads are concerned. One detail worth noting: heavyweight lock degradation does exist, but it happens only during the stop-the-world (STW) phase and only for monitor objects reachable exclusively by the VMThread and by no other JavaThread (see material on heavyweight lock degradation). We analyzed the heavyweight lock in detail earlier; now let's introduce biased locks, lightweight locks, and the JVM's other optimization techniques. The exact processes are tedious, so rather than walking through every step of lock escalation here (for the details, refer to Understanding the Java Virtual Machine in Depth), this section summarizes each stage in turn.

3.1. No lock state

When a new object is created in a Java program, anonymous biasing is enabled for it by default — but only after a delay, 4 seconds by default (objects created within the first 4 seconds after JVM startup do not get anonymous bias). The reason is that the JVM starts several threads of its own by default, and their synchronized code is known to be contended from the start; if biased locking applied to those locks, they would be constantly revoked and upgraded, which is inefficient. Another noteworthy point: even when anonymous biasing is enabled for an object, its header contains no thread ID, because it is a freshly created object and no thread has yet acquired the lock. So a newly allocated object, with or without anonymous bias, is conceptually in the unlocked state: the threadID in its Mark Word stays empty until the lock actually becomes biased toward some thread. (The lock flag bits in the Mark Word, however, are already set to 101 — the biased-lock flag — once the object is anonymously biased.)

3.2. Biased lock

Biased locking was added in Java 6 as an optimization of the locking operation. Research showed that in most cases a lock is not only free of multi-threaded contention but is also acquired repeatedly by the same thread, so biased locking was introduced to cut the cost of the same thread re-locking (which otherwise involves CAS operations and is time-consuming). The core idea of biased locking: once a thread acquires the lock, the lock enters biased mode and the Mark Word takes the biased-lock layout; when the same thread requests the lock again, no synchronization work — no lock-acquisition process — is needed at all, which saves a large amount of lock-related work and improves performance. In other words, the "biased" in biased lock means partial to one side: the lock is biased in favor of the first thread to acquire it, and if the lock is not acquired by another thread during subsequent execution, and no other thread competes for it, the thread holding the biased lock never needs to synchronize again. When that thread enters or exits the same synchronized block again, no locking or unlocking is required; instead, the following steps run:

  • Load-and-test: simply check whether the current thread ID matches the thread ID in the Mark Word
  • If it matches, the thread already holds the lock and proceeds with the following code
  • If it does not match, check whether the object is still biased, i.e. inspect the biased-lock flag bits
  • If the object is not yet biased, compete for the lock with a CAS operation — this is what happens when the lock is acquired for the first time

But when a second thread tries to acquire the lock and finds the object already biased — toward someone else — there is a race. Depending on how contended the lock is, bias revocation and rebiasing may occur, but in most cases the lock inflates into a lightweight lock. So with no lock contention, biased locking is an excellent optimization; after all, it is very likely that the same thread requests the same lock many times in a row. Under fierce contention, however, biased locking breaks down, because the thread requesting the lock is rarely the same each time; a biased lock should not be used in that case, or it will cost more than it saves.
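The bias bits themselves cannot be seen from pure Java without a tool like JOL, but the access pattern biased locking targets is easy to sketch: one thread re-acquiring the same uncontended lock many times. The class below is illustrative only (names are mine, not the JVM's):

```java
public class BiasedPattern {
    private int count = 0;

    // Each call acquires the monitor on 'this'. With biased locking, only the
    // first acquisition by a given thread pays for a CAS — each re-acquisition
    // just compares the thread ID stored in the Mark Word (load-and-test).
    public synchronized void incr() {
        count++;
    }

    public int get() {
        return count;
    }

    public static void main(String[] args) {
        BiasedPattern p = new BiasedPattern();
        for (int i = 0; i < 1_000_000; i++) {
            p.incr(); // same thread, same lock, zero contention
        }
        System.out.println(p.get()); // 1000000
    }
}
```

This single-threaded, single-lock loop is exactly the workload where biased locking makes each synchronized entry nearly free.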

Bias lock revocation process

  • Stop the thread that owns the lock at a safepoint.
  • Walk the thread stack; if lock records exist, fix the lock records and the Mark Word to make the object lock-free.
  • Wake the thread and upgrade the lock to a lightweight lock.

Therefore, if most of the synchronized blocks in a program are usually contended by two or more threads, biased locking becomes a liability. In that case we can disable biased locking from the start with the JVM flag -XX:-UseBiasedLocking to improve performance.

Biased locking expansion process

When the first thread enters and finds the object in the anonymous biased state, it uses a CAS instruction to replace the threadID in the Mark Word with its own thread ID. If the CAS succeeds, the lock is acquired; if it fails, the lock inflates. When the same thread enters the synchronized block a second time and finds that the thread ID in the object header matches its own, it simply pushes a lock record with an empty Displaced Mark Word onto its own stack after the comparison. No CAS is needed, because the operation touches only the thread's private stack, so the overhead of synchronized is essentially negligible. When a different thread enters the synchronized block and finds that the bias is not toward itself, the bias-revocation logic runs: once a global safepoint is reached, the lock begins inflating to a lightweight lock. The original holder, if still alive, keeps the lock; if the holding thread is found to have exited, the bias is revoked and the Mark Word in the object header is reset to the unlocked state before being relocked. Note, though, that after biasing fails the lock does not inflate straight to a heavyweight lock — it is first upgraded to a lightweight lock. Let's move on to lightweight locks.

3.3. Lightweight locks

If biased locking fails, synchronized does not immediately upgrade to a heavyweight lock; it first attempts an optimization called the lightweight lock, and the Mark Word layout changes to the lightweight-lock structure accordingly. Lightweight locks improve application performance on the premise that "for the vast majority of locks, there is no contention during the entire synchronization cycle" — note that this is empirical data.

Lightweight lock expansion process

On entry, the JVM creates a lock record in the current thread's stack, copies the object's Mark Word into it (the Displaced Mark Word), and then tries to CAS a pointer to that lock record into the object header. (A detail: a thread that previously held the lock biased will first try this CAS to swap the lock-information pointer into the Mark Word.) If the CAS succeeds, the thread has acquired the lightweight lock; if the lock turns out to be held by the thread itself, this is a reentry, recorded by pushing another lock record whose Displaced Mark Word is set to null. If the CAS fails because another thread holds the lock, the thread spins (adaptive spin), so that a mildly contended lock does not inflate to a heavyweight lock, reducing overhead. If the spinning CAS still fails, real contention exists and the lock must inflate to a heavyweight lock. The code is as follows:

void ObjectSynchronizer::slow_enter(Handle obj, BasicLock* lock, TRAPS) {
  markOop mark = obj->mark();
  assert(!mark->has_bias_pattern(), "should not see bias pattern here");

  if (mark->is_neutral()) {
    // The object is unlocked: save the current Mark Word into the lock record
    // (the Displaced Mark Word), then CAS a pointer to the lock record into
    // the object header. Success means the lightweight lock is acquired.
    lock->set_displaced_header(mark);
    if (mark == (markOop) Atomic::cmpxchg_ptr(lock, obj()->mark_addr(), mark)) {
      TEVENT(slow_enter: release stacklock);
      return;
    }
    // CAS failed: fall through to inflate
  } else if (mark->has_locker() &&
             THREAD->is_lock_owned((address) mark->locker())) {
    // Reentry: the current thread already owns the lightweight lock, so just
    // record the reentry with a lock record whose Displaced Mark Word is null.
    assert(lock != mark->locker(), "must not re-lock the same lock");
    assert(lock != (BasicLock*) obj->mark(), "don't relock with same BasicLock");
    lock->set_displaced_header(NULL);
    return;
  }
  ......
  // Contention: mark this lock record as unused and inflate to a heavyweight
  // lock (ObjectMonitor), then enter it.
  lock->set_displaced_header(markOopDesc::unused_mark());
  ObjectSynchronizer::inflate(THREAD, obj())->enter(THREAD);
}

However, it is important to understand that lightweight locks suit scenarios where threads execute synchronized blocks alternately. If multiple threads access the same lock at the same time, the lightweight lock inflates into a heavyweight lock.
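The "alternating" scenario can be sketched as follows — two threads use the same lock but never contend at the same instant. As a simplified model (names are illustrative), the second thread here starts only after the first finishes, so the lock passes between threads without simultaneous contention:

```java
public class AlternatingLockDemo {
    private static final Object LOCK = new Object();

    // Two threads use the same lock strictly one after another — the case a
    // lightweight lock handles without ever inflating to a heavyweight lock.
    static int runAlternating() {
        final int[] total = {0};
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (LOCK) {
                    total[0]++;
                }
            }
        };
        try {
            Thread t1 = new Thread(task);
            t1.start();
            t1.join();            // t1 fully finishes before t2 starts
            Thread t2 = new Thread(task);
            t2.start();
            t2.join();
        } catch (InterruptedException e) {
            return -1;
        }
        return total[0];
    }

    public static void main(String[] args) {
        System.out.println(runAlternating()); // 200000
    }
}
```

If the two threads instead ran concurrently against the same lock, the JVM would inflate it to a heavyweight lock.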

Lightweight lock details

There are two main types of lightweight locks:

  • Spin lock: spinning means that when another thread holds the lock, the competing thread waits in place in a busy loop rather than blocking, so that the moment the holder releases the lock, the spinning thread can acquire it immediately.

Note that looping in place consumes CPU — it is equivalent to executing a for loop that does nothing at all. Lightweight locks therefore suit scenarios where synchronized blocks execute quickly, so that threads wait in place only for a very, very short time before acquiring the lock. Experience shows that most synchronized blocks do execute in a very short time, which is exactly why lightweight locks exist. Some problems with spin locks:
  • If the synchronized block executes slowly and takes a long time, the other threads spinning in place burn CPU for the whole wait, which is painful.
  • Even if the current thread could eventually acquire the lock, with several threads competing it may spin for a long time, or never acquire the lock at all, wasting CPU in the empty loop.
  • Because of these problems, a spin limit must be set, configurable with -XX:PreBlockSpin. When a thread exceeds this count, continuing to spin is judged no longer appropriate and the lock inflates again, upgrading to a heavyweight lock. By default, inflation happens after 10 spins or when the number of spinning threads exceeds half the CPU cores (spin locks were introduced in JDK 1.4.2).

  • Adaptive spin lock: with adaptive spinning, the number of empty-loop iterations a thread spins is not fixed, but changes dynamically according to the actual situation.

Here's how it works: suppose thread 1 has just successfully acquired the lock, released it, and thread 2 now holds it. While thread 2 is running, thread 1 wants the lock again; it is not yet released, so thread 1 spin-waits. Because thread 1 acquired this lock successfully just a moment ago, the virtual machine judges that it has a good chance of acquiring it again, and therefore lets thread 1 spin longer. Conversely, if a thread rarely wins a particular lock by spinning, the virtual machine may skip the spin phase for it entirely in the future and upgrade straight to a heavyweight lock, to avoid wasting resources waiting in an empty loop.
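The spinning idea from the first bullet can be sketched at the Java level with an AtomicBoolean. This is a toy user-space spin lock illustrating the mechanism, not how the JVM implements lightweight locks internally:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Toy spin lock: lock() busy-waits (burning CPU, like an empty for loop)
// instead of parking the thread. Only sensible for very short critical sections.
public class ToySpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            // spin in place; a real implementation would bound the spin count
            // and fall back to blocking (inflate to a heavyweight lock)
        }
    }

    public void unlock() {
        locked.set(false); // volatile write: makes the critical section visible
    }

    public static void main(String[] args) throws InterruptedException {
        ToySpinLock lock = new ToySpinLock();
        int[] count = {0};
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();
                try {
                    count[0]++;   // the short critical section
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(count[0]); // 200000
    }
}
```

The CAS in lock() and the volatile write in unlock() form a happens-before chain, so the unsynchronized counter is still updated safely.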

3.4. Heavyweight locks

Heavyweight locks were analyzed in detail in the previous section; a heavyweight lock is a lock in the traditional sense. When threads contend, the lock inflates to a heavyweight lock and the object's Mark Word points to a monitor on the heap. A contending thread is then wrapped in an ObjectWaiter object and inserted into the monitor's cxq (ContentionList), and the thread is suspended. When the thread holding the lock releases it, the thread objects on the cxq are moved into the EntryList and one of them is designated the presumed heir (OnDeck). The presumed heir still has to compete for the lock with any newly arriving threads rather than receiving it directly — which is exactly why it is called the "presumed" heir.

If a thread calls Object#wait after acquiring the lock, it is added to the WaitSet; when awakened by Object#notify, it is moved from the WaitSet to the cxq or EntryList. Note that calling wait or notify on a lock object that is currently in another state, such as a biased or lightweight lock, first inflates it to a heavyweight lock.
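A minimal wait/notify handshake makes the WaitSet mechanics concrete (names here are illustrative). Both sides hold the monitor when they call wait/notify, and wait() releases the monitor while the thread is parked — that is what lets the signaling side enter the synchronized block at all:

```java
public class WaitNotifyDemo {
    private final Object lock = new Object();
    private boolean ready = false;

    // Waits (releasing the monitor) until signal() flips the flag.
    String await() {
        synchronized (lock) {
            while (!ready) {              // loop guards against spurious wakeups
                try {
                    lock.wait();          // enters the WaitSet, releases the monitor
                } catch (InterruptedException e) {
                    return "interrupted";
                }
            }
            return "signaled";
        }
    }

    void signal() {
        synchronized (lock) {             // possible only because wait() released it
            ready = true;
            lock.notify();                // monitor is released when this block exits
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WaitNotifyDemo demo = new WaitNotifyDemo();
        Thread waiter = new Thread(() -> System.out.println(demo.await()));
        waiter.start();
        Thread.sleep(100);                // give the waiter time to park
        demo.signal();
        waiter.join();                    // prints: signaled
    }
}
```

Note the `while (!ready)` guard: the awakened thread re-checks the condition after reacquiring the monitor, the standard defense against spurious wakeups.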

3.5. Summary of lock states

  • Objects created more than 4 s after JVM startup default to the anonymous biased state; ordinary objects created within the first 4 s default to the lock-free state
  • Only one thread ever enters the critical section ——- biased lock
  • Multiple threads enter the critical section alternately ——- lightweight lock
  • Multiple threads enter the critical section simultaneously ——- heavyweight lock

3.6. Analysis of the four lock states of Object

// ClassLayout comes from the OpenJDK JOL tool (org.openjdk.jol.info.ClassLayout)
public class ObjectHead {
    public static void main(String[] args) throws InterruptedException {
        /* Lock-free state: object created within 4 s of JVM startup */
        Object obj = new Object();
        // View the object header information
        System.out.println(ClassLayout.parseInstance(obj).toPrintable());

        /* Anonymous biased lock: object created after sleeping 4 s.
           When a thread executing code or methods modified by the synchronized
           keyword sees the lock object in the anonymous biased state (the flag
           bits say biased lock, but the threadID in the Mark Word of the object
           header is empty), it uses CAS to set its own thread ID in the Mark
           Word. If no other thread competes for the lock, the thread then runs
           the guarded code without any further acquire/release process. */
        Thread.sleep(4000);
        Object obj1 = new Object();
        System.out.println(ClassLayout.parseInstance(obj1).toPrintable());

        /* Lightweight lock */
        synchronized (obj) {
            System.out.println(ClassLayout.parseInstance(obj).toPrintable());
        }

        /* Heavyweight lock: calling wait/notify inflates the lock */
        new Thread(() -> {
            try {
                obj.wait();   // called without owning the monitor — throws, see below
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }).start();
        Thread.sleep(1);
        synchronized (obj) {
            // View the object header information
            System.out.println(ClassLayout.parseInstance(obj).toPrintable());
        }
    }
}

java.lang.Object object internals (lock flag bits 001: lock-free):
 OFFSET  SIZE  TYPE DESCRIPTION      VALUE
      0     4       (object header)  01 00 00 00 (00000001 00000000 00000000 00000000) (1)
      4     4       (object header)  00 00 00 00 (00000000 00000000 00000000 00000000) (0)
      8     4       (object header)  e5 01 00 20 (11100101 00000001 00000000 00100000) (536871397)
     12     4       (loss due to the next object alignment)
Instance size: 16 bytes
Space losses: 0 bytes internal + 4 bytes external = 4 bytes total

java.lang.Object object internals (lock flag bits 101: anonymous biased lock):
 OFFSET  SIZE  TYPE DESCRIPTION      VALUE
      0     4       (object header)  05 00 00 00 (00000101 00000000 00000000 00000000) (5)
      4     4       (object header)  00 00 00 00 (00000000 00000000 00000000 00000000) (0)
      8     4       (object header)  e5 01 00 20 (11100101 00000001 00000000 00100000) (536871397)
     12     4       (loss due to the next object alignment)
Instance size: 16 bytes
Space losses: 0 bytes internal + 4 bytes external = 4 bytes total

java.lang.Object object internals (lock flag bits 000: lightweight lock):
 OFFSET  SIZE  TYPE DESCRIPTION      VALUE
      0     4       (object header)  18 f5 41 01 (00011000 11110101 01000001 00000001) (21099800)
      4     4       (object header)  00 00 00 00 (00000000 00000000 00000000 00000000) (0)
      8     4       (object header)  e5 01 00 20 (11100101 00000001 00000000 00100000) (536871397)
     12     4       (loss due to the next object alignment)
Instance size: 16 bytes
Space losses: 0 bytes internal + 4 bytes external = 4 bytes total

java.lang.Object object internals (lock flag bits 010: heavyweight lock):
 OFFSET  SIZE  TYPE DESCRIPTION      VALUE
      0     4       (object header)  5a de db 17 (01011010 11011110 11011011 00010111) (400285274)
      4     4       (object header)  00 00 00 00 (00000000 00000000 00000000 00000000) (0)
      8     4       (object header)  e5 01 00 20 (11100101 00000001 00000000 00100000) (536871397)
     12     4       (loss due to the next object alignment)
Instance size: 16 bytes
Space losses: 0 bytes internal + 4 bytes external = 4 bytes total

/* Cause of the thrown exception: IllegalMonitorStateException. It is thrown
   when a thread tries to wait on the monitor of an object it does not own,
   or to notify threads waiting on that object's monitor. */
Exception in thread "Thread-0": java.lang.IllegalMonitorStateException
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:502)
    at com.sixstar.springbootvolatilesynchronized.Synchronized.ObjectHead.lambda$main$0(ObjectHead.java:27)
    at java.lang.Thread.run(Thread.java:748)

4. Details and other characteristics of the synchronized keyword

4.1. Lock elimination

Lock elimination (lock elision) is another lock optimization in the virtual machine, and a more thorough one. When the JIT compiler compiles a piece of code (simply put, a hot section compiled at run time — just-in-time compilation), it scans the runtime context and removes locks that cannot possibly guard contended shared resources, thereby eliminating unnecessary locking. For example, StringBuffer's append is a synchronized method; but when the StringBuffer in appendString below is a local variable that no other thread can use, there can be no competition for it as a shared resource, and the JVM automatically elides its locks.

// Case 1: sb is a local variable. StringBuffer is thread-safe (append is a
// synchronized method), but no other thread can ever see sb, so there can be
// no competition for it and the JVM elides the locks inside append().
public void appendString(String s1, String s2) {
    StringBuffer sb = new StringBuffer();
    sb.append(s1).append(s2);
}

// Case 2: sb is a shared field, but appendString itself is synchronized.
// Since sb is only used inside this already-synchronized method, there is no
// need to acquire two locks, so the JVM removes the inner lock.
StringBuffer sb = new StringBuffer();
public synchronized void appendString(String s1, String s2) {
    sb.append(s1).append(s2);
}

4.2. Reentrancy of synchronized

In mutex design, a thread blocks when it tries to operate on a critical resource whose object lock is held by another thread. But when a thread requests again a critical resource guarded by an object lock it already holds, the request succeeds — this is a reentrant lock. In Java, synchronized is a reentrant built-in lock based on atomic monitor operations: when a thread calls a synchronized method and, inside its body, calls another synchronized method of the same object — that is, a thread that has obtained an object lock requests that object lock again — the request is granted. That is the reentrancy of synchronized. As follows:

public class SyncIncrDemo implements Runnable {
    // Shared resource (critical resource)
    static int i = 0;

    // The synchronized keyword modifies an instance member method
    public synchronized void incr() {
        i++;
    }

    @Override
    public void run() {
        synchronized (this) {
            for (int j = 0; j < 1000; j++) {
                incr();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SyncIncrDemo syncIncrDemo = new SyncIncrDemo();
        Thread t1 = new Thread(syncIncrDemo);
        Thread t2 = new Thread(syncIncrDemo);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(i);
    }
}

In the code above, we create a single SyncIncrDemo instance and start two threads; each thread executes the run method, which uses a synchronized block with this — the current instance — as the lock. A thread must acquire the lock on the syncIncrDemo instance before it can execute the for loop. Once a thread holds the lock, the loop calls incr(), a synchronized instance method of the same class. So must the thread acquire the instance's lock resource a second time? No — because synchronized is reentrant, the request simply succeeds, and this is the most direct manifestation of a reentrant lock. It is worth noting that since synchronized is implemented on top of monitors, the monitor's counter is still incremented by 1 on each reentry. One more detail deserves attention: when a subclass inherits from its parent, the subclass can likewise invoke the parent's synchronized methods through the reentrant lock.
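The parent/child case mentioned at the end can be sketched as follows (class names are illustrative). The subclass override and the super call both lock this — the same object — so the second acquisition just bumps the monitor counter instead of deadlocking:

```java
class ReentrantParent {
    protected final StringBuilder trace = new StringBuilder();

    public synchronized void doWork() {
        trace.append("parent;");
    }
}

public class ReentrantChild extends ReentrantParent {
    @Override
    public synchronized void doWork() {
        trace.append("child;");
        super.doWork();   // re-acquires the monitor on 'this': counter 1 -> 2 -> 1 -> 0
    }

    public static void main(String[] args) {
        ReentrantChild c = new ReentrantChild();
        c.doWork();       // no self-deadlock, thanks to reentrancy
        System.out.println(c.trace); // child;parent;
    }
}
```

If synchronized were not reentrant, the super.doWork() call would block forever waiting for a lock the thread itself holds.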

4.3. The wait/notify mechanism of threads and the synchronized keyword

This section concerns the notify/notifyAll and wait methods. To use any of these three methods, the caller must be inside a synchronized block or synchronized method; otherwise an IllegalMonitorStateException is thrown. That is because the caller must hold the current object's monitor before invoking them — in other words, wait and notify/notifyAll depend on the monitor object. From the earlier analysis, we know the monitor is reachable through the object header's Mark Word (which stores the monitor reference pointer), and that the synchronized keyword is what acquires the monitor. This is why notify/notifyAll and wait must be called within a synchronized block or synchronized method.

Object obj = new Object();
synchronized (obj) {
   obj.wait();
   obj.notify();
   obj.notifyAll();         
 }

The wait method releases the monitor lock, which is not reacquired until notify/notifyAll is called. The sleep method, by contrast, only puts the thread to sleep and does not release the lock. Also note that notify/notifyAll does not release the monitor immediately — the lock is released only when the synchronized block or synchronized method that called it exits.
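The difference is easy to observe directly: while one thread is parked in wait(timeout), another thread can take the same monitor immediately; if the first thread were sleeping inside the synchronized block instead, the monitor would stay held. An illustrative sketch (names are mine):

```java
public class WaitReleasesLock {
    private final Object lock = new Object();

    // True if the main thread can take the monitor while another thread waits on it.
    boolean lockAvailableDuringWait() {
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                try {
                    lock.wait(2000);   // parks AND releases the monitor
                } catch (InterruptedException ignored) {
                }
            }
        });
        waiter.start();
        try {
            Thread.sleep(100);         // give the waiter time to enter wait()
            synchronized (lock) {      // succeeds at once, because wait() released it
                lock.notify();         // wake the waiter so it can finish
            }
            waiter.join();
        } catch (InterruptedException e) {
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(new WaitReleasesLock().lockAvailableDuringWait()); // true
    }
}
```

Replace lock.wait(2000) with Thread.sleep(2000) inside the waiter's synchronized block and the main thread would instead block on `synchronized (lock)` for the full two seconds.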

4.4. Thread interruption and the synchronized keyword

4.4.1. Thread Interruption

If you want to stop a thread after start() has been called, the Thread class does have a stop() method that forces the thread to terminate. Unfortunately, stop() is forcible and causes serious problems, and it has been deprecated since JDK 1.2. So instead of a way to forcibly stop a running thread (one executing its run method), current Java releases provide the following three APIs for thread interruption:

// Interrupts this thread (instance method)
public void interrupt();
// Tests whether this thread has been interrupted (instance method)
public boolean isInterrupted();
// Tests whether the current thread has been interrupted and clears the
// interrupt status (static method)
public static boolean interrupted();

When a thread is blocked, or about to perform a blocking operation, and we interrupt it with Thread.interrupt(), note that an InterruptedException is thrown and the interrupt status is reset. The following code demonstrates this:

public static void main(String[] args) throws InterruptedException {
    Thread t1 = new Thread() {
        @Override
        public void run() {
            // Sleep for 2 seconds — a blocking operation
            try {
                TimeUnit.SECONDS.sleep(2);
            } catch (InterruptedException e) {
                System.out.println("Interrupted When Sleep");
                boolean interrupt = this.isInterrupted();
                // The interrupt status was reset when the exception was thrown
                System.out.println("interrupt:" + interrupt);
            }
        }
    };
    t1.start();
    TimeUnit.SECONDS.sleep(2);
    // Interrupt the blocked thread
    t1.interrupt();
    /**
     * Output:
     * Interrupted When Sleep
     * interrupt:false
     */
}

As shown above, we create a thread and put it into the blocked state with the sleep method. After starting it, we call interrupt on the thread instance to interrupt the blocking; an InterruptedException is thrown and the interrupt status is reset. Some readers may wonder why we use TimeUnit.SECONDS.sleep(2) rather than Thread.sleep(2000). The reason is simple: the latter gives no explicit unit, while the former is explicit about its unit (seconds) — and internally the former ultimately calls Thread.sleep(2000) anyway. TimeUnit.SECONDS.sleep(2) is recommended for its clearer semantics; note that TimeUnit is an enum type. Besides interrupting a blocked thread, we may also need to interrupt a running, non-blocked thread. In that case, calling thread.interrupt() alone gets no response — the following code fails to interrupt the non-blocking thread:

public static void main(String[] args) throws InterruptedException {
    Thread t1 = new Thread() {
        @Override
        public void run() {
            while (true) {
                System.out.println("Not interrupted");
            }
        }
    };
    t1.start();
    TimeUnit.SECONDS.sleep(2);
    t1.interrupt();
    /**
     * Output (runs forever):
     * Not interrupted
     * Not interrupted
     * Not interrupted
     * ......
     */
}

Although we call interrupt, thread t1 is never interrupted, because thread interruption in Java is cooperative: the main thread merely sends an interrupt signal to t1, and t1, still busy executing, does not stop. For a thread in the non-blocked state, we must therefore detect the interruption manually and end the work ourselves. The improved code is as follows:

public static void main(String[] args) throws InterruptedException {
    Thread t1 = new Thread() {
        @Override
        public void run() {
            while (true) {
                if (this.isInterrupted()) {
                    System.out.println("Thread interrupted");
                    break;
                }
            }
            System.out.println("Out of the loop, thread interrupted!");
        }
    };
    t1.start();
    TimeUnit.SECONDS.sleep(2);
    t1.interrupt();
    /**
     * Output:
     * Thread interrupted
     * Out of the loop, thread interrupted!
     */
}

Here we use the instance method isInterrupted to check whether the thread has been interrupted; if so, it breaks out of the loop and the thread ends. Note that calling interrupt() on a non-blocked thread does not reset the interrupt status. To summarize, there are two situations. One: the thread is blocking or about to block; we interrupt it with the instance method interrupt(), an InterruptedException is thrown (it must be caught inside run and cannot be rethrown), and the interrupt status is reset. Two: the thread is running; we still call the instance method interrupt(), but we must check the interrupt status manually and write the code that ends the thread ourselves (essentially letting the run method return). Sometimes we need to handle both cases in the same code, which can be written like this:

public void run() {
    try {
        // Check whether the current thread has been interrupted. Note that
        // interrupted() is a static method that also clears the interrupt status.
        while (!Thread.interrupted()) {
            TimeUnit.SECONDS.sleep(2);
        }
    } catch (InterruptedException e) {
        // Interrupted while sleeping — fall through and let run() end
    }
}

4.4.2. Thread interruption and synchronized

In fact, thread interruption does not work on a thread that is waiting to acquire the lock of a synchronized method or block. That is, for a thread waiting on a synchronized lock there are only two outcomes: it acquires the lock and continues executing, or it keeps waiting. Calling the interrupt methods on it has no effect. The demo code is as follows:

public class SyncBlock implements Runnable {
    public synchronized void occupyLock() {
        System.out.println("Trying to call occupyLock()");
        while (true)          // Never release the lock
            Thread.yield();
    }

    public SyncBlock() {
        // Acquire the lock from a separate thread so it is held forever
        new Thread() {
            @Override
            public void run() {
                occupyLock();   // this thread acquires the current instance lock
            }
        }.start();
    }

    @Override
    public void run() {
        // Interruption check
        while (true) {
            if (Thread.interrupted()) {
                System.out.println("Interrupted thread!!");
                break;
            } else {
                occupyLock();
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SyncBlock sync = new SyncBlock();
        Thread t = new Thread(sync);
        t.start();              // t blocks trying to enter occupyLock()
        TimeUnit.SECONDS.sleep(1);
        // Interrupt thread t — it has no effect
        t.interrupt();
    }
}

In the SyncBlock constructor, a new thread calls occupyLock() and acquires the current instance lock without ever releasing it. When thread t later runs and calls occupyLock() in its run method, the lock is already occupied by that other thread, so t can only wait — and calling t.interrupt() afterwards cannot interrupt it.
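By contrast, java.util.concurrent's ReentrantLock offers lockInterruptibly(), whose wait does respond to interruption. The sketch below (names are illustrative) shows a thread blocked on a lock acquisition being interrupted successfully — something synchronized cannot do:

```java
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleWaitDemo {
    private static final ReentrantLock LOCK = new ReentrantLock();

    // True if a thread blocked in lockInterruptibly() can be interrupted.
    static boolean interruptWhileWaiting() {
        LOCK.lock();                           // this thread holds the lock
        final boolean[] interrupted = {false};
        Thread t = new Thread(() -> {
            try {
                LOCK.lockInterruptibly();      // blocks, but honors interrupt
                LOCK.unlock();
            } catch (InterruptedException e) {
                interrupted[0] = true;         // unlike synchronized, we get here
            }
        });
        t.start();
        try {
            Thread.sleep(100);                 // let t block on the lock
            t.interrupt();                     // takes effect immediately
            t.join();
        } catch (InterruptedException e) {
            return false;
        } finally {
            LOCK.unlock();
        }
        return interrupted[0];
    }

    public static void main(String[] args) {
        System.out.println(interruptWhileWaiting()); // true
    }
}
```

This interruptible waiting is one of the practical advantages ReentrantLock has over the built-in lock.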

4.5. Why can synchronized guarantee ordering without prohibiting instruction reordering?

Before answering, if you are not familiar with the concepts of instruction reordering and ordering, please see my article on the underlying principles of the Java Memory Model (JMM) and the volatile keyword. The atomicity, visibility, and ordering that synchronized guarantees all rest on one idea: turning parallel execution by multiple threads into serial execution by a single thread. In a Java program, within a single thread all operations appear ordered, whereas observed from one thread, all operations in another thread appear unordered. The first half of that sentence means a single thread's execution preserves serial semantics; the second half refers to the instruction-reordering phenomenon and the delay in synchronizing working memory with main memory. So for a single thread, all operations are effectively ordered. Since synchronized turns multi-threaded parallel execution into single-threaded serial execution, it necessarily guarantees "ordering" as observed through the lock. Meanwhile, instruction reordering is beneficial to single-threaded execution, so there is no need to prohibit it inside the critical section — prohibiting it would hurt single-threaded performance. Hence the answer to the question: synchronized guarantees ordering precisely because it does not need to prohibit instruction reordering.

4.6. Why synchronized performs worse than ReentrantLock

Synchronized is implemented through monitor entry and exit, and the monitor ultimately depends on the underlying operating system's mutex lock: acquiring and releasing it require system calls, which involve switching between user mode and kernel mode (via the 0x80 interrupt on x86 Linux). Going through the kernel and back to user mode is expensive, hence inefficient. ReentrantLock's underlying implementation, by contrast, relies on CPU atomic instructions (such as the lock-prefixed cmpxchg used by CAS) through AbstractQueuedSynchronizer, so on the uncontended fast path it never switches between user and kernel mode, which makes it efficient (similar in spirit to volatile).
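For basic mutual exclusion the two constructs are interchangeable; the sketch below shows the standard ReentrantLock pattern (the class is illustrative). Its uncontended fast path is a single user-space CAS, with the kernel involved only when a thread must actually park:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void incr() {
        lock.lock();          // uncontended path: one CAS, no system call
        try {
            count++;
        } finally {
            lock.unlock();    // always release in finally
        }
    }

    public int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLockCounter c = new ReentrantLockCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) c.incr();
        };
        Thread a = new Thread(task), b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(c.get()); // 200000
    }
}
```

Note the try/finally idiom: unlike synchronized, ReentrantLock does not release itself on abrupt completion, so forgetting the finally block is the classic way to leak the lock.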

5. In-depth interpretation of the synchronized keyword principle from the HotSpot source code

Since this article is already quite long, the interpretation of the HotSpot-level source code will be covered in a separate chapter. If you are interested in the source-level implementation, see: an in-depth HotSpot source analysis of how the synchronized keyword is implemented.

6. Reference materials

  • Understanding the Java Virtual Machine in Depth
  • The Beauty of Java Concurrent Programming
  • Java High-Concurrency Programming
  • Core Technology of Website Architecture with 100 Million Traffic
  • Java Concurrent Programming