The basics of multithreaded keywords
Synchronized
Use the demo code:
package main;

public class SynchronizedExample {
    private Object lockObj = new Object();

    public void testLockObj() {
        synchronized (lockObj) {
        }
    }

    public void testLockClass() {
        synchronized (SynchronizedExample.class) {
        }
    }

    public void testLockThis() {
        synchronized (this) {
        }
    }

    public static synchronized void synStaticTest() {
    }

    public synchronized void synTest() {
    }
}
Then compile it with javac; the decompiled class file looks like this:
//
// Source code recreated from a .class file by IntelliJ IDEA
// (powered by Fernflower decompiler)
//
package main;

public class SynchronizedExample {
    private Object lockObj = new Object();

    public SynchronizedExample() {
    }

    public void testLockObj() {
        synchronized (this.lockObj) {
            ;
        }
    }

    public void testLockClass() {
        Class var1 = SynchronizedExample.class;
        synchronized (SynchronizedExample.class) {
            ;
        }
    }

    public void testLockThis() {
        synchronized (this) {
            ;
        }
    }

    public static synchronized void synStaticTest() {
    }

    public synchronized void synTest() {
    }
}
Finally, view the bytecode generated with javap -v:
Some of the bytecodes generated by these methods are captured below:
Classfile /Users/lu/IdeaProjects/untitled/src/main/SynchronizedExample.class
Last modified Apr 6, 2021; size 787 bytes
MD5 checksum 0fbebceb0b7cbfde87441f65c0b1f505
Compiled from "SynchronizedExample.java"
public class main.SynchronizedExample
minor version: 0
major version: 56
flags: (0x0021) ACC_PUBLIC, ACC_SUPER
this_class: #4 // main/SynchronizedExample
super_class: #2 // java/lang/Object
interfaces: 0, fields: 1, methods: 6, attributes: 1
Constant pool:
#1 = Methodref #2.#20 // java/lang/Object."<init>":()V
#2 = Class #21 // java/lang/Object
#3 = Fieldref #4.#22 // main/SynchronizedExample.lockObj:Ljava/lang/Object;
#4 = Class #23 // main/SynchronizedExample
#5 = Utf8 lockObj
#6 = Utf8 Ljava/lang/Object;
#7 = Utf8 <init>
#8 = Utf8 ()V
#9 = Utf8 Code
#10 = Utf8 LineNumberTable
#11 = Utf8 testLockObj
#12 = Utf8 StackMapTable
#13 = Class #24 // java/lang/Throwable
#14 = Utf8 testLockClass
#15 = Utf8 testLockThis
#16 = Utf8 synStaticTest
#17 = Utf8 synTest
#18 = Utf8 SourceFile
#19 = Utf8 SynchronizedExample.java
#20 = NameAndType #7: #8 // "<init>":()V
#21 = Utf8 java/lang/Object
#22 = NameAndType #5: #6 // lockObj:Ljava/lang/Object;
#23 = Utf8 main/SynchronizedExample
#24 = Utf8 java/lang/Throwable
{
public main.SynchronizedExample();
descriptor: ()V
flags: (0x0001) ACC_PUBLIC
Code:
stack=3, locals=1, args_size=1
0: aload_0
1: invokespecial #1 // Method java/lang/Object."<init>":()V
4: aload_0
5: new #2 // class java/lang/Object
8: dup
9: invokespecial #1 // Method java/lang/Object."<init>":()V
12: putfield #3 // Field lockObj:Ljava/lang/Object;
15: return
LineNumberTable:
line 3: 0
line 5: 4
}
SourceFile: "SynchronizedExample.java"
Locking on the class reference
public void testLockClass();
descriptor: ()V
flags: (0x0001) ACC_PUBLIC
Code:
stack=2, locals=3, args_size=1
// The constant value in the constant pool is pushed
0: ldc #4 // class main/SynchronizedExample
2: dup // Copy the data one word long at the top of the stack and push the copied data onto the stack
3: astore_1 // Save the top of the stack reference type value to local variable 1
4: monitorenter // Here the monitor enters to acquire the lock
5: aload_1 // Load the reference type value from local variable 1; the locked object is the SynchronizedExample class
6: monitorexit // Here the monitor exits to release the lock
7: goto 15
10: astore_2
11: aload_1
12: monitorexit
13: aload_2
14: athrow
15: return
The ldc instruction pushes a constant from the constant pool onto the stack (an int, a float, a String reference, or a class reference). Here the constant is the Class object of SynchronizedExample, so the lock is the reference to the whole class.
Locking on an object instance
public void testLockObj();
descriptor: ()V
flags: (0x0001) ACC_PUBLIC
Code:
stack=2, locals=3, args_size=1
0: aload_0 // Load the reference type value from local variable 0 onto the stack.
// Get the value of the object field.
1: getfield #3 // Field lockObj:Ljava/lang/Object;
4: dup // Copy the lockObj object and push it onto the stack
5: astore_1 // The lockObj object is saved to the local variable table
6: monitorenter // Here the monitor enters to acquire the lock
7: aload_1 // Load the lockObj object value in local variable table 1
8: monitorexit // Here the monitor exits to release the lock
9: goto 17
12: astore_2
13: aload_1
14: monitorexit
15: aload_2
16: athrow
17: return
As you can see, the lock object is lockObj: every thread entering this block on the same instance must first acquire the lock on that field's object.
Locking on this
public void testLockThis();
descriptor: ()V
flags: (0x0001) ACC_PUBLIC
Code:
stack=2, locals=3, args_size=1
0: aload_0 // Load the reference type value from local variable 0 onto the stack, where the reference type is the instance of the class object
1: dup // Copy class objects and push them
2: astore_1 // The top reference type value is saved to local variable 1
3: monitorenter // Here the monitor enters to acquire the lock
4: aload_1 // Load the reference type value from local variable 1 onto the stack.
5: monitorexit // Here the monitor exits to release the lock
6: goto 14
9: astore_2
10: aload_1
11: monitorexit
12: aload_2
13: athrow
14: return
Here this is the lock. Threads calling this block on the same instance contend for the lock; calls on different instances use different locks, so there is no contention.
Declaring a lock on an instance method
The bytecode generated when an instance method is declared synchronized:
public synchronized void synTest();
descriptor: ()V
flags: (0x0021) ACC_PUBLIC, ACC_SYNCHRONIZED // with ACC_SYNCHRONIZED set, monitor enter/exit happen implicitly
Code:
stack=0, locals=1, args_size=1
0: return
LineNumberTable:
line 31: 0
This method is locked: calling it on the same instance requires contending for the lock, while calls on different instances do not contend. The lock object is the current instance, the same monitor used by synchronized (this), so a synchronized instance method and a synchronized (this) block exclude each other.
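To see that an instance method's implicit lock really is this, here is a small sketch (the class and helper names are my own invention, not from the original article): while one thread holds the monitor via synchronized on the instance, another thread calling the synchronized method parks in the BLOCKED state.

```java
// SameLockDemo: shows that a synchronized instance method and a
// synchronized (instance) block compete for the same monitor.
public class SameLockDemo {
    public synchronized void syncMethod() {
        // body is irrelevant; entering requires the monitor of `this`
    }

    public static Thread.State stateWhileLocked(SameLockDemo demo) throws InterruptedException {
        Thread t;
        synchronized (demo) { // hold the same monitor the method needs
            t = new Thread(demo::syncMethod);
            t.start();
            // wait until the thread has parked on the monitor (give up after ~2s)
            for (int i = 0; i < 200 && t.getState() != Thread.State.BLOCKED; i++) {
                Thread.sleep(10);
            }
            return t.getState();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(stateWhileLocked(new SameLockDemo())); // BLOCKED
    }
}
```

After the synchronized block exits, the spawned thread acquires the monitor, runs the empty method, and terminates normally.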
Declaring a lock on a static method
The bytecode generated when a static method is declared synchronized:
public static synchronized void synStaticTest();
descriptor: ()V
flags: (0x0029) ACC_PUBLIC, ACC_STATIC, ACC_SYNCHRONIZED // with ACC_SYNCHRONIZED set, monitor enter/exit happen implicitly
Code:
stack=0, locals=0, args_size=0
0: return
LineNumberTable:
line 27: 0
When a static method is synchronized, the lock object is the Class object (ClassName.class, the class reference), because static methods belong to the class rather than to any instance.
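A quick sketch of this equivalence (class and method names invented for illustration): holding synchronized (TheClass.class) blocks a thread that calls the static synchronized method, showing that both use the one Class-object monitor.

```java
// StaticLockDemo: a static synchronized method and a
// synchronized (StaticLockDemo.class) block use the same monitor.
public class StaticLockDemo {
    public static synchronized void staticSync() {
        // entering requires the monitor of StaticLockDemo.class
    }

    public static boolean methodBlocksWhileClassLocked() throws InterruptedException {
        Thread t;
        synchronized (StaticLockDemo.class) { // hold the class monitor
            t = new Thread(StaticLockDemo::staticSync);
            t.start();
            // wait until the thread parks on the monitor (give up after ~2s)
            for (int i = 0; i < 200 && t.getState() != Thread.State.BLOCKED; i++) {
                Thread.sleep(10);
            }
            return t.getState() == Thread.State.BLOCKED;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(methodBlocksWhileClassLocked()); // true: same lock object
    }
}
```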
conclusion
The same object
Same object:
* lock class and this -- enter at the same time, exit at the same time
* lock object and this -- enter at the same time; object exits first, then this
* lock method and this -- enter at the same time; syncMethod exits first, then this
* lock static and this -- enter at the same time, exit at the same time
* lock method and object -- method enters and exits first, then object enters and exits
* lock method and class -- enter at the same time, exit at the same time
* lock method and static -- enter at the same time, exit at the same time
* lock object and class -- enter at the same time, exit at the same time
* lock object and static -- enter at the same time, exit at the same time
* lock object and method -- method enters and exits first, then object enters and exits
* lock static and class -- enter at the same time; static exits first, then class
Different objects
Different objects:
* this and this -- enter at the same time, exit at the same time
* lock static and class -- enter at the same time; static exits first, then class
* lock object and static -- enter at the same time, exit at the same time
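Pairings like these can be reproduced with a small timing harness: run two critical sections on two threads and compare the elapsed time against the section length. Sections on different monitors overlap (total ≈ one section), sections on the same monitor serialize (total ≈ two sections). A minimal sketch for one pairing (class name, sleep lengths, and thresholds are my own choices):

```java
// PairingHarness: times two critical sections run on two threads.
public class PairingHarness {
    private final Object lockObj = new Object();

    public void lockObject() {
        synchronized (lockObj) { sleep(300); }
    }

    public void lockThis() {
        synchronized (this) { sleep(300); }
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }

    /** Runs both tasks concurrently and returns the elapsed wall time in ms. */
    public static long timePair(Runnable a, Runnable b) throws InterruptedException {
        long start = System.nanoTime();
        Thread t1 = new Thread(a), t2 = new Thread(b);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) throws InterruptedException {
        PairingHarness h = new PairingHarness();
        // Different monitors (lockObj vs this): the sections overlap, ~300ms total
        long overlap = timePair(h::lockObject, h::lockThis);
        // Same monitor (this vs this): the sections serialize, ~600ms total
        long serial = timePair(h::lockThis, h::lockThis);
        System.out.println(overlap + " " + serial);
    }
}
```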
Lock contention acquisition
Monitorenter and monitorexit
The monitor acquire/release logic lives in the HotSpot VM source. The declarations can be found in interpreterRuntime.hpp:
// Synchronization
static void monitorenter(JavaThread* thread, BasicObjectLock* elem);
static void monitorexit (JavaThread* thread, BasicObjectLock* elem);
Monitorenter source code:
UseBiasedLocking is a flag, set at JVM startup, that indicates whether biased locking is enabled.
IRT_ENTRY_NO_ASYNC(void, InterpreterRuntime::monitorenter(JavaThread* thread, BasicObjectLock* elem))
//BasicObjectLock (monitor)
//JavaThread is the thread that acquires the lock
#ifdef ASSERT
thread->last_frame().interpreter_frame_verify_monitor(elem);
#endif
if (PrintBiasedLockingStatistics) { // Prints bias lock information
Atomic::inc(BiasedLocking::slow_path_entry_count_addr());
}
Handle h_obj(thread, elem->obj());
assert(Universe::heap()->is_in_reserved_or_null(h_obj()),
"must be NULL or an object");
if (UseBiasedLocking) { // Whether to use bias locking
// Retry fast entry if bias is revoked to avoid unnecessary inflation
ObjectSynchronizer::fast_enter(h_obj, elem->lock(), true, CHECK);
} else { // Use lightweight locks instead of bias locks
ObjectSynchronizer::slow_enter(h_obj, elem->lock(), CHECK);
}
assert(Universe::heap()->is_in_reserved_or_null(elem->obj()),
"must be NULL or an object");
#ifdef ASSERT
thread->last_frame().interpreter_frame_verify_monitor(elem);
#endif
IRT_END
Fast_enter and slow_enter
// Biased-lock acquisition function
void ObjectSynchronizer::fast_enter(Handle obj, BasicLock* lock, bool attempt_rebias, TRAPS) {
  // obj holds the lock object; lock is the current BasicLock
  if (UseBiasedLocking) { // Is biased locking enabled?
    if (!SafepointSynchronize::is_at_safepoint()) { // Not at a safepoint
      // revoke_and_rebias attempts to acquire the biased lock
      BiasedLocking::Condition cond = BiasedLocking::revoke_and_rebias(obj, attempt_rebias, THREAD);
      if (cond == BiasedLocking::BIAS_REVOKED_AND_REBIASED) { // The bias was revoked and rebiased
        return;
      }
    } else { // At a safepoint
      assert(!attempt_rebias, "can not rebias toward VM thread");
      BiasedLocking::revoke_at_safepoint(obj); // Revoke the biased lock
    }
    assert(!obj->mark()->has_bias_pattern(), "biases should be revoked by now");
  }

  slow_enter(obj, lock, THREAD);
}
// Summary:
// First check again whether biased locking is enabled
// If enabled and not at a safepoint, revoke_and_rebias tries to acquire (or re-acquire) the biased lock
// If that fails, fall through to the lightweight-lock path
// The revoke_and_rebias logic lives in biasedLocking.cpp
// If biased locking is disabled, go straight to slow_enter, the lightweight-lock path

// Lightweight-lock acquisition function
void ObjectSynchronizer::slow_enter(Handle obj, BasicLock* lock, TRAPS) {
  markOop mark = obj->mark();
  assert(!mark->has_bias_pattern(), "should not see bias pattern here");

  if (mark->is_neutral()) { // Currently unlocked
    // Save the mark word directly into the BasicLock's _displaced_header field
    lock->set_displaced_header(mark);
    // Use CAS to replace the mark word with a pointer to the BasicLock
    if (mark == (markOop) Atomic::cmpxchg_ptr(lock, obj()->mark_addr(), mark)) {
      TEVENT(slow_enter: release stacklock);
      return;
    }
    // Fall through to inflate() ...
  }
  // Already locked and owned by the current thread (a reentrant entry): no contention needed
  else if (mark->has_locker() && THREAD->is_lock_owned((address)mark->locker())) {
    assert(lock != mark->locker(), "must not re-lock the same lock");
    assert(lock != (BasicLock*)obj->mark(), "don't relock with same BasicLock");
    lock->set_displaced_header(NULL);
    return;
  }

#if 0
  // The following optimization isn't particularly useful.
  if (mark->has_monitor() && mark->monitor()->is_entered(THREAD)) {
    lock->set_displaced_header(NULL);
    return;
  }
#endif

  // Mark the BasicLock's _displaced_header as "a heavyweight monitor is in use"
  lock->set_displaced_header(markOopDesc::unused_mark());
  // Inflate to a heavyweight monitor and enter it
  ObjectSynchronizer::inflate(THREAD, obj())->enter(THREAD);
}
// Summary of the lightweight-lock path:
// If the object is unlocked, stash its mark word in the BasicLock and CAS the object header to point at the BasicLock
// If the object is locked and the owner is the current thread, this is a reentrant entry; _displaced_header is set to NULL
// Otherwise the lock is contended: the lightweight lock inflates to a heavyweight lock and the thread enters it
Four lock states appear in the process of contending for a lock
In the process of acquiring locks, there are four situations: no lock, biased lock, lightweight lock and heavyweight lock. These four states are gradually upgraded and irreversible.
Biased locking
Biased locking: once a thread has acquired a biased lock and no other thread contends for it, that same thread can re-enter the synchronized block later without acquiring and releasing the lock again.
The bias is installed with a CAS on the object header. As long as no other thread competes for the lock, re-entry by the owning thread requires no synchronization work from the VM at all. If a different thread does compete, its CAS fails, the lock cannot be biased toward it, and revocation begins.
Biased locking can be switched off with -XX:-UseBiasedLocking; it is enabled by default since JDK 1.6.
Biased lock acquisition
When a thread accesses a block to acquire a lock, the thread ID of the biased lock is stored in the lock record in the object header (Mark Word) and the stack frame, indicating which thread acquired the biased lock.
Acquisition process:
1) First judge whether the lock is in the state of biased lock according to the sign of the lock
2) If it is in biased lock state, it will write its thread ID into MarkWord through CAS operation. If CAS operation succeeds, it means that the current thread has obtained biased lock, and then continue to execute synchronized code block. If the CAS fails, it means that the lock failed to be acquired.
3) If it is not a biased lock, it will check whether the thread ID stored in MarkWord is equal to the thread ID of the currently accessed thread. If it is equal, it means that the current thread has acquired the biased lock, and then directly execute the synchronization code; If not, the biased lock is acquired by another thread, and the biased lock needs to be revoked.
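The CAS-install step in (2) can be mimicked in plain Java. This is only an analogy of the mark-word logic, not the JVM's real implementation: an AtomicLong stands in for the mark word, 0 means unbiased, and a thread installs the bias by CAS-ing in its thread ID.

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of biased-lock acquisition: NOT the JVM's real mark word,
// just the same CAS-the-thread-ID idea.
public class BiasedLockSketch {
    private final AtomicLong biasedOwner = new AtomicLong(0); // 0 = unbiased

    /** Returns true if the current thread may enter without further work. */
    public boolean enter() {
        long me = Thread.currentThread().getId();
        long owner = biasedOwner.get();
        if (owner == me) {
            return true;  // already biased toward us: re-entry costs nothing
        }
        if (owner == 0 && biasedOwner.compareAndSet(0, me)) {
            return true;  // CAS installed our ID: we now own the bias
        }
        return false;     // biased toward another thread: revocation/upgrade needed
    }

    public static void main(String[] args) {
        BiasedLockSketch lock = new BiasedLockSketch();
        System.out.println(lock.enter()); // true: first entry installs the bias
        System.out.println(lock.enter()); // true: re-entry by the same thread is free
    }
}
```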
Revoking a biased lock
A biased lock is never released by its owner on its own; only contention from another thread triggers revocation, and the revocation must wait for a global safepoint (a point at which no thread is executing bytecode).
Procedure for undoing bias locks:
1) First, determine whether the thread that obtains the bias lock is alive
2) If the thread is dead, set Mark Word to lock free
3) If the thread is still alive, when the global safe point is reached, the thread that acquired the bias lock will be suspended, and then the bias lock will be upgraded to the lightweight lock, and finally wake up the thread blocked at the global safe point to continue to execute the synchronization code
Lightweight lock
When multiple threads compete for a biased lock, the bias is revoked. After revocation the object ends up in one of two states:
1) the unbiased, unlocked state
2) the unbiased, locked state
Lightweight locking locking process
1) if the object is lock-free, the JVM creates a LockRecord on the current thread’s stack frame to copy the Mark Word from the object’s header into the LockRecord
2) The JVM then uses CAS to replace the Mark Word in the object header with a pointer to the lock record
3) If the replacement succeeds, the current thread has obtained the lightweight lock; If the replacement fails, other threads are competing for the lock. The current thread tries to use CAS to acquire the lock, and when it fails to acquire the lock after a specified number of spins (which can be customized), the lock expands to a heavyweight lock
Spinning avoids suspending the thread: if the lock frees up quickly, the spinning thread grabs it immediately. If the spin threshold is exceeded without acquiring the lock, the lock is upgraded to a heavyweight lock. (The default spin count is 10 and can be changed with -XX:PreBlockSpin.)
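The spin-then-escalate idea can be sketched at user level with an AtomicBoolean. This is an analogy only: the JVM spins on the object header, not on a Java field, and the spin count below is arbitrary.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// BoundedSpinLock: spin a fixed number of times with CAS before reporting
// failure, mirroring "spin N times, then escalate to a heavier mechanism".
public class BoundedSpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);

    /** Try to take the lock, spinning at most maxSpins times. */
    public boolean tryLockSpinning(int maxSpins) {
        for (int i = 0; i < maxSpins; i++) {
            if (held.compareAndSet(false, true)) {
                return true;         // got the lock
            }
            Thread.onSpinWait();     // hint to the CPU that we're busy-waiting (JDK 9+)
        }
        return false;                // spun out: a real JVM would inflate to a heavyweight lock here
    }

    public void unlock() {
        held.set(false);
    }

    public static void main(String[] args) {
        BoundedSpinLock lock = new BoundedSpinLock();
        System.out.println(lock.tryLockSpinning(10)); // true: uncontended, first CAS wins
        System.out.println(lock.tryLockSpinning(10)); // false: already held, spins out
        lock.unlock();
    }
}
```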
Heavyweight lock
Once a lock is upgraded to a heavyweight lock, it does not revert to a lightweight lock. While a lock is in the heavyweight state, other threads attempting to acquire it enter the BLOCKED state. They are woken when the owning thread releases the lock, and the awakened threads start a new round of contention.
Monitorexit source code:
//%note monitor_1
IRT_ENTRY_NO_ASYNC(void, InterpreterRuntime::monitorexit(JavaThread* thread, BasicObjectLock* elem))
#ifdef ASSERT
thread->last_frame().interpreter_frame_verify_monitor(elem);
#endif
Handle h_obj(thread, elem->obj());
assert(Universe::heap()->is_in_reserved_or_null(h_obj()),
"must be NULL or an object");
if (elem == NULL || h_obj()->is_unlocked()) {
THROW(vmSymbols::java_lang_IllegalMonitorStateException());
}
ObjectSynchronizer::slow_exit(h_obj(), elem->lock(), thread);
// Free entry. This must be done here, since a pending exception might be installed on
// exit. If it is not cleared, the exception handling code will try to unlock the monitor again.
elem->set_obj(NULL);
#ifdef ASSERT
thread->last_frame().interpreter_frame_verify_monitor(elem);
#endif
IRT_END
conclusion
-
Synchronized can modify code blocks and methods. The lock object can be an object instance, the current instance this, or a class reference, and each choice gives a different scope of mutual exclusion.
-
Every object in Java implicitly carries a monitor. With a synchronized modifier in place, threads executing the guarded code compete for that monitor: the lock is acquired when monitorenter executes and released when monitorexit executes.
-
When monitorenter executes, the VM first decides whether biased locking is in use (enabled by default since JDK 1.6). With biased locking on, it checks whether the VM is at a safepoint: at a safepoint the bias is revoked; otherwise the thread attempts to acquire the bias. If the bias is acquired, the synchronized block runs directly; if acquisition fails or the bias was revoked, execution falls through to lightweight-lock acquisition.
-
During lightweight-lock acquisition, the VM checks whether the lock's owner is already the current thread. If the lock cannot be obtained, it is upgraded to a heavyweight lock and the contending threads enter the BLOCKED state.
Volatile
Code test:
package main.volatiledemo;

public class TestVolatile {
    public static void main(String[] args) {
        VolatileThread thread = new VolatileThread();
        thread.start();
        while (true) {
            // without volatile (or a lock), this read may never see the update
            if (thread.isFlag()) {
                System.out.println("Main Thread visited flag ===> " + thread.isFlag());
            }
        }
    }
}

class VolatileThread extends Thread {
    // private volatile boolean flag = false; // the volatile keyword would guarantee visibility
    private boolean flag = false;

    @Override
    public void run() {
        try {
            Thread.sleep(1000);
        } catch (Exception e) {
            e.printStackTrace();
        }
        // Modify the variable value
        flag = true;
        System.out.println("flag = " + flag);
    }

    public boolean isFlag() {
        return flag;
    }

    public void setFlag(boolean value) {
        flag = value;
    }
}
Running the code above without volatile, the main thread keeps reading false even after the child thread has set flag to true, so it never prints anything.
The reason: the flag value lives in main memory. The child thread copies flag into its own working memory, works on that copy, and eventually flushes it back to update main memory. The main thread, meanwhile, keeps using its own stale copy, which is still false, so it produces no output even though the child thread has already updated flag.
Solutions:
- Notifies the main thread that the replica value in thread memory is not up to date and tells it to read the value in main memory
- Wrap reads and writes of the variable in synchronized; acquiring and releasing the lock forces the thread's copy to be refreshed from, and flushed to, main memory
- Flag is volatile to ensure visibility
What synchronized does: when a thread enters a synchronized block, it acquires the lock, clears its local working memory, copies the latest value of the shared variable from main memory into its local copy, executes the code, flushes the modified copy back to main memory, and releases the lock.
What volatile does: it guarantees the visibility of shared variables across threads. When one thread writes a volatile variable back to main memory, the other threads' cached copies are invalidated and they immediately see the latest value. The mechanism is detailed in the next section.
The effect of volatile
visibility
When volatile is used to modify a shared variable, each thread will copy the variable from main memory to local memory as a copy. When a thread manipulates a variable copy and writes it back to main memory, the CPU bus sniffing mechanism informs other threads that the variable copy is invalid and needs to be read from main memory again. Volatile ensures the visibility of shared variables by different threads. That is, when a volatile variable is written back to main memory by one thread, the other threads immediately see its latest value
Implementation rationale – How can visibility be guaranteed
In modern computers the CPU is far faster than main memory. If the CPU had to access memory directly for every operation, it would spend most of its time idle, which is a huge waste. So instead of having the CPU talk to memory directly, registers and multiple levels of cache are inserted between them; these are much faster to access and bridge the gap between CPU processing speed and memory read speed.
Because a cache is added between the CPU and memory, data is copied from the memory to the cache before performing data operations. The CPU directly operates the data in the cache. However, in the case of multiple processors, the cache data may be inconsistent (this is where the visibility problem comes from). To ensure that the cache of each processor is consistent, the cache consistency protocol is implemented, and sniffing is a common mechanism to achieve cache consistency.
Note that cache consistency issues are not caused by multiple processors, but by multiple caches.
How the sniffing mechanism works: each processor monitors the data traffic on the bus to check whether its own cached values have gone stale. If a processor sees that the memory address backing one of its cache lines has been modified elsewhere, it marks that cache line invalid; the next time it operates on that data, it re-reads it from main memory into its cache.
Note: The JVM implements the visibility of volatile based on the CPU cache consistency protocol, but because of the bus sniffing mechanism, it constantly monitors the bus and can cause bus storms if volatile is used in large amounts. Therefore, the use of volatile needs to be contextually appropriate.
The entire CPU memory model can be seen in detail in this article describing the CPU cache – the MESI protocol
Disallow instruction reordering
What is instruction reorder?
To improve performance, compilers and processors often reorder instructions. Reordering must obey as-if-serial semantics: no matter how instructions are reordered, the result of a single-threaded program must not change. The compiler, the runtime, and the processor all comply with this.
General reordering can be divided into the following three types:
- The compiler optimizes reordering. The compiler can rearrange the execution order of statements without changing the semantics of a single-threaded program.
- Instruction level parallel reordering. Modern processors use instruction-level parallelism to superimpose multiple instructions. If there is no data dependency, the processor can change the execution order of the machine instructions corresponding to the statement.
- Memory system reordering. Because the processor uses caching and read/write buffers, this makes the load and store operations appear to be out of order.
Data dependencies: If two operations access the same variable and one of them is a write operation, there is a data dependency between the two operations. Data dependencies are only for sequences of instructions executed in a single processor and operations performed in a single thread. Data dependencies between different processors and between different threads are not considered by compilers and processors.
The sequence of instructions from the Java source code to the final execution goes through one of the following three reorders:
Reordering order
To better understand reordering, take a look at some of the sample code below:
int a = 0;
boolean flag = false;

// Thread A
a = 1;          // 1
flag = true;    // 2

// Thread B
if (flag) {     // 3
    int i = a;  // 4
}
Looking only at the program order, there seems to be no problem: at the end, the value of i is 1. But to improve performance, compilers and processors often reorder instructions that have no data dependency. Suppose thread A is reordered so that code 2 executes before code 1. Thread B then reads flag after thread A has executed code 2; the condition is true, so thread B reads variable a. At that point thread A has not yet written a, so i ends up 0 and the result is wrong. How do we make the program execute correctly? Again, with the volatile keyword.
In this example, the use of volatile not only ensures the memory visibility of the variables, but also forbids instruction reordering, ensuring that volatile variables are compiled in the same order as the program is executed. Using volatile to modify flag ensures that code 1 is executed before code 2 in thread A.
Implementation principle – how to ensure that instruction reordering can be prohibited
To implement volatile memory semantics (that is, memory visibility), the JMM restricts certain types of compiler and processor reordering. To do this, the JMM establishes a volatile reordering table for the compiler, as follows:
Volatile reordering
When volatile is used to modify variables, according to the volatile reordering table, the Java compiler inserts a memory barrier instruction into the instruction sequence to prohibit a particular type of handler from reordering when the bytecode is generated.
A memory barrier is a set of processor instructions that prohibit instruction reordering and resolve memory visibility problems.
The JMM classifies memory barrier instructions into the following four categories:
The memory barrier
The StoreLoad barrier is an all-purpose barrier that simultaneously has the effect of the other three barriers. So performing this barrier is expensive because it forces the processor to flush all the data in the cache into memory.
Let’s look at how volatile read/write inserts memory barriers, as shown below:
From the figure above, we know the memory barrier rules for volatile read/write inserts:
- Insert LoadLoad and LoadStore barriers after each volatile read.
- A StoreStore barrier is inserted before each volatile write, and a StoreLoad barrier after it.
That is, the compiler does not reorder volatile reads or any memory operations following volatile reads; The compiler does not reorder volatile writes or any memory operations that precede volatile writes.
Happens-before overview
To speed up execution, the JVM optimizes code at compile time, including instruction reordering. In concurrent programs this carries safety risks: reordering can make one thread's operations invisible to another. Yet asking programmers to learn the full set of reordering rules and their implementations, just to understand the memory visibility guarantees the JMM provides, would be an unreasonable burden and would seriously hurt the productivity of concurrent programming.
So starting with JDK5, the concept of happens-before was introduced to illustrate memory visibility between operations. If the results of one operation need to be visible to another, there must be a happens-before relationship between the two operations. The two operations mentioned here can be within a thread or between different threads.
Happens-before rules are as follows:
- Program order rule: every action in a thread happens-before any subsequent action in that thread.
- Monitor lock rule: The unlocking of a monitor lock, happens-before the subsequent locking of the monitor lock.
- Volatile variable rule: Writes to a volatile field, happens-before any subsequent reads to that volatile field.
- Transitivity: If A happens-before B, and B happens-before C, then A happens-before C.
- The start() rule: a call to Thread.start() happens-before every action in the started thread.
- The join() rule: all actions in a thread happen-before another thread successfully returns from Thread.join() on it.
As a special note, the happens-before rule does not describe the order of the actual actions. It is a rule that describes visibility.
From the happens-before rule for volatile variables, we know that if thread A writes volatile variable V, and thread B reads V, then thread A writes V and the previous writes are visible to thread B.
The use of volatile
Synchronized ensures atomicity of operations and visibility of shared variables. However, volatile guarantees visibility, not atomicity, so use volatile correctly to ensure the safety and reliability of concurrent operations.
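A quick demonstration of "visibility but not atomicity" (thread and iteration counts are arbitrary): volatileCount++ is a read-modify-write, so concurrent increments can be lost, while AtomicInteger's CAS loop never loses one.

```java
import java.util.concurrent.atomic.AtomicInteger;

// VolatileNotAtomic: volatile makes writes visible, but ++ is not atomic.
public class VolatileNotAtomic {
    static volatile int volatileCount = 0;
    static final AtomicInteger atomicCount = new AtomicInteger();

    public static void run(int threads, int perThread) throws InterruptedException {
        volatileCount = 0;
        atomicCount.set(0);
        Thread[] ts = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    volatileCount++;               // visible, but NOT atomic: updates can be lost
                    atomicCount.incrementAndGet(); // atomic CAS loop: never loses updates
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
    }

    public static void main(String[] args) throws InterruptedException {
        run(4, 100_000);
        // atomicCount is exactly 400000; volatileCount is usually less
        System.out.println(volatileCount + " " + atomicCount.get());
    }
}
```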
Application of singleton pattern:
Among the common singleton implementations, the lazy double-checked-locking variant relies on the volatile keyword.
The code is as follows:
public class Singleton {
    // volatile ensures visibility and forbids instruction reordering
    private static volatile Singleton singleton;

    public static Singleton getInstance() {
        // First check
        if (singleton == null) {
            // Synchronized block (getInstance is static, so lock on the Class object)
            synchronized (Singleton.class) {
                // Second check
                if (singleton == null) {
                    // Object instantiation is not an atomic operation
                    singleton = new Singleton();
                }
            }
        }
        return singleton;
    }
}
In the above code, new Singleton() is not atomic: object creation breaks into three steps, (1) allocate memory, (2) initialize the instance, (3) assign the memory address to the reference. When creating the object, the compiler may reorder (2) and (3). Suppose thread A creates the object with (2) and (3) reordered: if thread B reads the reference after thread A has executed (3), the first check finds singleton != null, but thread B now holds a reference to a not-yet-initialized object, and using it causes problems.
Therefore, the use of volatile to modify the Singleton variable is intended to prohibit instruction reordering when the object is instantiated.
The description of volatile is quoted here
A deadlock
How to appear
A deadlock occurs when two or more threads block forever, each waiting for a lock held by another thread in the same deadlocked group. It usually arises when multiple threads request the same set of locks at the same time but in different orders.
Example: thread 1 acquires lock A and tries to acquire lock B, while thread 2 acquires lock B and tries to acquire lock A. Thread 1 blocks waiting for B, thread 2 blocks waiting for A; neither can ever proceed, and a deadlock results.
Thread 1 locks A, waits for B
Thread 2 locks B, waits for A
Code:
package main.syncdemo;

/** Deadlock demo */
public class TestDeadLock {
    public Object objectA = new Object();
    public Object objectB = new Object();

    public Thread thread1 = new Thread(() -> {
        synchronized (objectA) {
            System.out.println("Thread1 --objectA");
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            synchronized (objectB) {
                System.out.println("Thread1 --objectB");
            }
        }
    });

    public Thread thread2 = new Thread(() -> {
        synchronized (objectB) {
            System.out.println("Thread2 --objectB");
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            synchronized (objectA) {
                System.out.println("Thread2 --objectA");
            }
        }
    });

    public static void main(String[] args) {
        TestDeadLock lock = new TestDeadLock();
        lock.thread1.start();
        lock.thread2.start();
    }
}
How to avoid
-
Lock ordering
Make every thread acquire the locks in one agreed order: a thread may request a later lock in the sequence only after it has acquired the earlier ones. With a single global order, no cycle of waits can form.
For example, change thread2 in the demo above as follows, so both threads take the locks in the same order; the deadlock disappears.
public Thread thread2 = new Thread(() -> {
    synchronized (objectA) { // To avoid deadlock, acquire objectA first, like thread1
        System.out.println("Thread2 --objectA");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        synchronized (objectB) { // Then acquire objectB
            System.out.println("Thread2 --objectB");
        }
    }
});
-
Lock timeout
Give each lock acquisition a time limit: a thread that has waited longer than the limit abandons the request, releases any locks it already holds, and retries later.
-
Lock bookkeeping
Whenever a thread acquires a lock, record the lock object and the owning thread in a map; a thread about to wait can first consult the map to see whether, and by whom, the lock is already held.
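That bookkeeping idea can be sketched with explicit locks (class and method names are invented for illustration): record the owner of each lock in a shared map, and use a timed tryLock so a thread gives up instead of blocking forever.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

// TrackedLock: wraps a ReentrantLock and records the current owner in a shared
// map, so a thread can see who holds a lock, and a timed tryLock keeps a
// would-be deadlock from waiting forever.
public class TrackedLock {
    private static final Map<TrackedLock, Thread> owners = new ConcurrentHashMap<>();
    private final ReentrantLock lock = new ReentrantLock();

    public boolean acquire(long timeoutMs) throws InterruptedException {
        if (lock.tryLock(timeoutMs, TimeUnit.MILLISECONDS)) {
            owners.put(this, Thread.currentThread());
            return true;
        }
        return false; // gave up instead of deadlocking; caller can release its locks and retry
    }

    public void release() {
        owners.remove(this);
        lock.unlock();
    }

    public static Thread ownerOf(TrackedLock l) {
        return owners.get(l);
    }

    public static void main(String[] args) throws InterruptedException {
        TrackedLock a = new TrackedLock();
        System.out.println(a.acquire(100));                       // true: uncontended
        System.out.println(ownerOf(a) == Thread.currentThread()); // true: map records the owner
        a.release();
        System.out.println(ownerOf(a));                           // null after release
    }
}
```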
Related code demo test
Code address — Github