preface

In Java programming there is a problem that every programmer would love to avoid because it is such a headache: thread safety under multithreading. It is one of the trickier areas of Java knowledge. Today we will talk about how thread-safety problems are solved in Java and the differences between the various locks.

The main causes of thread-safety problems

1. There is shared data (critical resources)

2. Multiple threads operate the data

The fundamental solution is to ensure that only one thread operates on the shared data at a time; all other threads must wait until that thread has finished before they can work on the shared data.
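
As a quick illustration of the problem (a minimal sketch with hypothetical class and field names), two threads incrementing an unsynchronized counter can lose updates, because counter++ is a read-modify-write compound operation:

    public class RaceConditionDemo {
        private static int counter = 0; // shared data (critical resource)

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                for (int i = 0; i < 100_000; i++) {
                    counter++; // not atomic: read, add, write back
                }
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            // Usually prints less than 200000 because increments from the two threads overwrite each other
            System.out.println(counter);
        }
    }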

Synchronized

  • Mutex lock properties

    Mutual exclusion: only one thread may hold a given object lock at a time, so only one thread at a time can execute the code block (compound operation) being synchronized. Mutual exclusion is also called the atomicity of the operation.

    Visibility: changes made to a shared variable before the lock is released must be visible to the next thread that acquires the lock (that is, a thread acquiring the lock must see the latest value of the shared variable); otherwise the other thread may keep working on a locally cached copy, causing inconsistencies.

    For Java, the Synchronized keyword satisfies both of these features. Note: Synchronized locks are not code, but objects.

  • Classification of the locks Synchronized acquires

    • Object lock

      There are two ways to obtain an object lock: 1. synchronized(this) or synchronized(someInstance) — the lock is the object inside the parentheses. 2. A synchronized instance method — the lock is the current instance (this).

    • Class lock

      There are two ways to obtain a class lock: 1. synchronized(obj.getClass()) or synchronized(ClassName.class) — the lock is the Class object of the class in parentheses. 2. A synchronized static method — the lock is the Class object of the current class.

  • Object locking and class locking tests

    • code

      ThreadDemo.java:

      public class ThreadDemo implements Runnable{
          public void asynMethod() {
              System.out.println(Thread.currentThread().getName()+"Start running (asynchronous method)");
              try {
                  Thread.sleep(1000);
              } catch (InterruptedException e) {
                  // TODO Auto-generated catch block
                  e.printStackTrace();
              }
              System.out.println(Thread.currentThread().getName()+"End running (asynchronous method)");
          }
          public synchronized void syncMethod() {
              System.out.println(Thread.currentThread().getName()+"Start running (synchronous method)");
              try {
                  Thread.sleep(1000);
              } catch (InterruptedException e) {
                  // TODO Auto-generated catch block
                  e.printStackTrace();
              }
              System.out.println(Thread.currentThread().getName()+"End run (synchronous method)");
          }
          public void syncBlock() {
              synchronized(this) {
                  System.out.println(Thread.currentThread().getName()+"Start running (sync code block)");
                  try {
                      Thread.sleep(1000);
                  } catch (InterruptedException e) {
                      // TODO Auto-generated catch block
                      e.printStackTrace();
                  }
                  System.out.println(Thread.currentThread().getName()+"End of run (sync code block)"); }}public void syncClass(a) {
              synchronized (this.getClass()) {
                  System.out.println(Thread.currentThread().getName()+"Start running (synchronized code block with class file as lock object)");
                  try {
                      Thread.sleep(1000);
                  } catch (InterruptedException e) {
                      // TODO Auto-generated catch block
                      e.printStackTrace();
                  }
                  System.out.println(Thread.currentThread().getName()+"End run (synchronized code block with class file as lock object)"); }}public static synchronized void syncStaticMethod(a) {
              System.out.println(Thread.currentThread().getName()+"Start running (statically synchronized method (class file locked))");
              try {
                  Thread.sleep(1000);
              } catch (InterruptedException e) {
                  // TODO Auto-generated catch block
                  e.printStackTrace();
              }
              System.out.println(Thread.currentThread().getName()+"End run (statically synchronized method (class file as lock))");
          }
          @Override
          public void run() {
          	// TODO Auto-generated method stub
              if(Thread.currentThread().getName().startsWith("ASYN")) {
                  asynMethod();
              }else if(Thread.currentThread().getName().startsWith("SYNC_METHOD")) {
                  syncMethod();
              }else if(Thread.currentThread().getName().startsWith("SYNC_BLOCK")) {
                  syncBlock();
              }else if(Thread.currentThread().getName().startsWith("SYNC_CLASS")) {
                  syncClass();
              }else if(Thread.currentThread().getName().startsWith("SYNC_STATIC")) {
                  syncStaticMethod();
              }
          }
      }

      ThreadTest.java:

      public class ThreadTest {
          public static void main(String[] args) {
              ThreadDemo threadDemo = new ThreadDemo();
              Thread thread1 = new Thread(threadDemo,"ASYN_Thread1");
              Thread thread2 = new Thread(threadDemo,"ASYN_Thread2");
              Thread thread3 = new Thread(threadDemo,"SYNC_METHOD_Thread1");
              Thread thread4 = new Thread(threadDemo,"SYNC_METHOD_Thread2");
              Thread thread5 = new Thread(threadDemo,"SYNC_BLOCK_Thread1");
              Thread thread6 = new Thread(threadDemo,"SYNC_BLOCK_Thread2");
              Thread thread7 = new Thread(threadDemo,"SYNC_STATIC_Thread1"); 
              Thread thread8 = new Thread(threadDemo,"SYNC_STATIC_Thread2");
              thread1.start();
              thread2.start();
              thread3.start();
              thread4.start();
              thread5.start();
              thread6.start();
              thread7.start();
              thread8.start();
          }
      }
    • The test results

      From the above tests we can also see that, for the same class, the class lock (on the Class object) and the instance object locks are independent of each other.

  • Summary of object locking and class locking

    1. When a thread accesses a synchronized code block of an object, another thread can access an unsynchronized code block of the object.

    2. If the same object is locked, when one thread accesses the synchronized code block of the object, another thread accessing the synchronized code block of the object will be blocked.

    3. If the same object is locked, one thread accessing the synchronization method of the object will block another thread accessing the synchronization method of the object.

    4. If the same object is locked, one thread accessing the synchronized code block of the object will block another thread accessing the synchronized method of the object, and vice versa.

    5. Object locks of different objects of the same class do not interfere with each other.

    6. A class lock is a special type of object lock, so it behaves like points 1, 2, 3, and 4 above. Since a class has only one Class object, different instances of the same class share one class lock and are synchronized with each other when the class lock is used.

    7. Class locks and object locks do not interfere with each other.

The underlying implementation principles of Synchronized

After a brief introduction to Synchronized, we will further understand the underlying principles of Synchronized.

  • The basis for implementing Synchronized

    • The Java object header

      In the HotSpot virtual machine, an object's layout in memory is divided into three parts: the object header, instance data, and alignment padding. Since instance data and padding are not closely related to this topic, I won't cover them for now. The object header is divided into two parts: 1. Mark Word: by default it stores the object's hashCode, generational GC age, lock type, and lock flag bits. 2. Class Metadata Address: a type pointer that points to the object's class metadata; the JVM uses this pointer to determine which class the object belongs to. Because the object-header information is additional data unrelated to the object's own fields, the Mark Word is designed as a non-fixed data structure so that it can store more useful data, and it reuses its storage space according to the object's current state.

    • Monitor (Monitor lock)

      Every Java object has, from the moment it is created, an invisible internal lock called the Monitor (monitor lock), which we can use as a synchronization tool. The relationship between a Monitor and its object can take several forms: it may be created together with the object, or generated automatically when a thread tries to acquire the object's lock. The constructor of the Monitor (in C++, ObjectMonitor) looks like this:

      ObjectMonitor() {
          _header       = NULL;
          _count        = 0;
          _waiters      = 0,
          _recursions   = 0;
          _object       = NULL;
          _owner        = NULL;// Lock holder
          _WaitSet      = NULL;
          _WaitSetLock  = 0 ;
          _Responsible  = NULL ;
          _succ         = NULL ;
          _cxq          = NULL ;
          FreeNext      = NULL ;
          _EntryList    = NULL ;
          _SpinFreq     = 0 ;
          _SpinClock    = 0 ;
          OwnerIsThread = 0 ;
          _previous_owner_tid = 0;
       }

      Start by remembering these fields (from top to bottom): _count (the counter), _object, _owner (the thread currently holding the lock), _WaitSet (the wait pool), and _EntryList (the lock pool).

      When multiple threads access a synchronized code block at the same time, they first enter the _EntryList (the lock pool) and compete for the lock. The thread that successfully acquires the lock is associated with _object, the _owner field of the Monitor is set to that thread, and the Monitor's _count counter is incremented by one; the remaining threads keep waiting in the lock pool. When a thread calls wait(), _owner is reset to null, _count is decremented by one, and the current thread enters _WaitSet (the wait pool) to wait to be woken up.

      From this point of view, the Monitor lock exists in the object header of every Java object, which is how the Synchronized keyword acquires the lock, which is why any object in Java can be used as a lock.
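
      The following minimal sketch (hypothetical class and field names) shows the monitor behaviour described above: the consumer thread calls wait() inside a synchronized block, releasing the monitor and entering the wait set, and is moved back toward the entry list when the producer calls notify():

      public class MonitorDemo {
          private final Object lock = new Object();
          private boolean ready = false;

          public void consumer() throws InterruptedException {
              synchronized (lock) {              // acquire the monitor: _owner = current thread
                  while (!ready) {
                      lock.wait();               // release the monitor and enter _WaitSet
                  }
                  System.out.println("consumed");
              }
          }

          public void producer() {
              synchronized (lock) {
                  ready = true;
                  lock.notify();                 // move one waiting thread back to the entry list
              }                                  // monitor released here; the woken thread can compete for it
          }
      }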

  • Understand Synchronized at the bytecode level

    • Pre-preparation (code)

      public class ThreadClassDemo {
          public void syncPrint() {
              // Synchronized code block
              synchronized(this) {
                  System.out.println("hello——sync block");
              }
          }
          public synchronized void syncMethodPrint() {
              System.out.println("hello——sync method");
          }
      }
    • Use the javap command to view the bytecode file

      Use the command javap -verbose ThreadClassDemo.class to view the bytecode file.

      Bytecode of the method that uses a synchronized code block:

      Bytecode of the synchronized method:

      We can see that in the bytecode of the method that uses a synchronized block, monitorenter and monitorexit are used explicitly to acquire and release the lock, while the bytecode of the synchronized method contains no monitorenter or monitorexit instructions at all. So how does it synchronize? Looking at the method's flags attribute, we find it contains ACC_SYNCHRONIZED, which marks the method as synchronized. When the method is invoked, the JVM checks whether ACC_SYNCHRONIZED is set; if it is, the executing thread first acquires the Monitor, runs the method body, and releases the Monitor when the method completes.
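
      For reference, an abbreviated, illustrative javap excerpt might look roughly like this (this is my own sketch, not output captured from this class; exact offsets, constant-pool indices, and formatting vary by JDK version) — note monitorenter/monitorexit in the block version and the ACC_SYNCHRONIZED flag on the method version:

      public void syncPrint();
          flags: ACC_PUBLIC
          Code:
             0: aload_0
             1: dup
             2: astore_1
             3: monitorenter          // acquire the monitor of the object on the stack (this)
             ...
            13: monitorexit           // release the monitor on the normal path
             ...
            19: monitorexit           // release the monitor again on the exception path
             ...

      public synchronized void syncMethodPrint();
          flags: ACC_PUBLIC, ACC_SYNCHRONIZED   // synchronization handled implicitly by the JVM
          Code:
             0: getstatic     ...
             3: ldc           ...
             5: invokevirtual ...
             8: return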

  • Gossip about Synchronized

    Back in the day I learned that Synchronized was a heavyweight lock. Indeed, in earlier versions (before Java 1.6) Synchronized was heavyweight because it relied on the operating system's mutex, and every lock acquisition required a switch from user mode to kernel mode, which is an expensive operation for the CPU. In high-concurrency scenarios, performing a user-mode/kernel-mode switch on every lock acquisition is unrealistic, which is why I will later use ReentrantLock as a contrast to Synchronized. But Synchronized has improved a great deal since JDK 1.6 and is no longer the heavyweight lock it was once sneered at for being.

  • Some optimizations Hotspot has made to locks since JDK1.6

    • Spin locks

      In many cases the locked state of shared data only lasts a very short time, and it is not worth switching or suspending threads for that time. In today's multiprocessor environments it is entirely feasible not to block a thread that has failed to acquire a lock, but to let it wait a little while without giving up its CPU time slice. This behaviour is called spinning: the thread waits in a loop without yielding the CPU. This strategy can be efficient when locks are held only very briefly, because it avoids frequent context switches. However, if a lock is held by another thread for a long time, this busy waiting pointlessly occupies the CPU and causes a lot of unnecessary performance overhead. In that case the traditional approach of directly suspending the thread that failed to acquire the lock should be used instead.

    • Adaptive spin lock

      Adaptive spin locks, as the name suggests, no longer spin a fixed number of times. Instead, the number of spins is decided by how spinning on the same lock fared previously and by the state of the lock's owner. If spinning recently succeeded in acquiring this lock, the JVM assumes the current spin is also likely to succeed and allows it to spin longer, avoiding the overhead of a thread switch. If spinning rarely succeeds in acquiring this lock, the JVM concludes that the current thread is unlikely to get it either and ends the spin early (or skips it altogether) to avoid wasting CPU resources.

    • Lock elimination

      Lock elimination is another optimization strategy of the virtual machine. During JIT compilation, locks that the JVM determines cannot be contended — because the locked object cannot escape the current thread and therefore cannot be shared — are removed automatically.
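
      A typical example (a sketch; the method name is mine): the StringBuffer below is a purely local object that never escapes the method, so the JIT can prove its internal lock is uncontended and eliminate the locking inside append() entirely:

      public class LockElisionDemo {
          // sb never escapes this method, so the synchronized blocks inside
          // StringBuffer.append() can be removed by the JIT after escape analysis.
          public static String concat(String a, String b) {
              StringBuffer sb = new StringBuffer();
              sb.append(a);
              sb.append(b);
              return sb.toString();
          }
      }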

    • Lock coarsening

      public class Test{
          public static void main(String[] args){
              StringBuffer sb = new StringBuffer();
              int i = 0;
              while(i < 100){
                  sb.append("test");
                  i++;
              }
          }
      }

      Take the code above as an example: we know StringBuffer is thread-safe, so every call to append() acquires the lock. In this example, however, a single object repeatedly locks and unlocks, locks and unlocks… which causes frequent lock operations. In this situation the JVM automatically coarsens the lock's scope: the lock is expanded to outside the loop, so it is acquired once when the loop is entered and the append() calls inside no longer lock again.
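
      Conceptually (this is only an illustration of what the JIT does internally, not code you would write yourself), the coarsened form behaves as if the whole loop were wrapped in one synchronized block on sb:

      // Illustrative only: roughly what the JIT-coarsened version behaves like.
      StringBuffer sb = new StringBuffer();
      synchronized (sb) {          // one lock acquisition for the whole loop
          int i = 0;
          while (i < 100) {
              sb.append("test");   // no per-call locking needed any more
              i++;
          }
      }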

  • The four states of Synchronized

    Synchronized has four states: no lock, biased lock, lightweight lock, and heavyweight lock. Depending on the scenario, the lock is automatically upgraded or downgraded, and the upgrade direction is: no lock → biased lock → lightweight lock → heavyweight lock.

    • No lock

      This one is easy to understand: there is no lock at all, i.e. completely unsynchronized access.

    • Biased locking

      The main purpose of biased locking is to reduce the cost of lock acquisition when the same thread acquires the lock repeatedly. In most cases a lock is not contended by multiple threads and is always acquired by the same thread, yet that thread would otherwise have to go through the full, cumbersome acquisition procedure every time. To solve this, HotSpot introduced biased locking as an optimization. The core idea is: once a thread has acquired the lock, the lock enters biased mode and the Mark Word takes on the biased-lock layout (bias bit 1 and lock flag bits 01 — see the object-header structure diagram for details). When that thread requests the lock again, no synchronization operation is needed at all: it only has to check that the Mark Word's flag still indicates a biased lock and that the thread ID stored in the Mark Word equals its own, which removes a large number of lock-acquisition operations.

      Note: Biased locking is not suitable for multi-threaded situations where lock competition is fierce

      I also read a passage explaining biased locks that is fairly easy to understand:

      When an object has just been instantiated and no thread has accessed it yet, it is biasable: it assumes that only one thread will ever access it. When the first thread does access it, the object becomes biased toward that thread, and in that state the object holds a biased lock in favour of the first thread. That thread uses CAS once when it changes the object header to the biased-lock layout and writes its own thread ID into the object header; on later accesses it only needs to compare IDs and no longer needs any CAS operation.

      Once a second thread accesses the object, things change: because a biased lock is never released proactively, the second thread sees that the object is still in the biased state and concludes that there is now contention for it. It then checks whether the thread that originally owned the lock is still alive. If that thread has exited, the object can be reset to the lock-free state and then re-biased toward the new thread. If the original thread is still alive, its stack is examined to see how the object is currently being used: if the bias is still needed, the biased lock is upgraded to a lightweight lock (this is exactly the moment when a biased lock is upgraded to a lightweight lock); if the object is no longer in use, it can be reverted to the lock-free state and then re-biased.

      Reference: Java biased locking, lightweight locking, and heavyweight locking — the principles of synchronized.

    • Lightweight lock

      A lightweight lock is upgraded from a biased lock. The biased lock works while only one thread enters the synchronized block; when a second thread joins the contention, the biased lock is upgraded to a lightweight lock (and if the thread that held the biased lock is no longer alive, the lock can instead be re-biased toward the thread now competing for it). Lightweight locks suit the case where threads execute the synchronized block alternately; if multiple threads compete for the same lock at the same moment, the lightweight lock inflates into a heavyweight lock.

      Lightweight lock locking process:

      1. When the code enters the synchronized block and the synchronization object is in the lock-free state, the virtual machine first creates a space called a Lock Record in the current thread's stack frame, used to store a copy of the lock object's current Mark Word, officially called the Displaced Mark Word.

      2. Copy the Mark Word in the object header to the Lock Record.

      3. After the copy succeeds, the VM uses a CAS operation to try to change the object's Mark Word into a pointer to the Lock Record, and sets the owner pointer in the Lock Record to the object's Mark Word. If this update succeeds, go to step 4; otherwise go to step 5.

      4. If the update succeeds, the thread owns the lock on this object and the lock flag bits in the object's Mark Word are set to 00, indicating that the object is in the lightweight-lock state.

      5. If the update fails, the virtual machine first checks whether the object's Mark Word already points to the current thread's stack frame. If it does, the current thread already holds the lock on this object and can simply enter the synchronized block and continue. Otherwise multiple threads are competing for the lock, and the lightweight lock inflates into a heavyweight lock: the lock flag bits change to 10, the Mark Word stores a pointer to the heavyweight lock (mutex), subsequent threads waiting for the lock block, and the current thread tries to acquire the lock by spinning.

    • Heavyweight lock

      A lightweight lock assumes that contention exists but is very mild: typically two threads operate on the same lock in a staggered fashion, or one waits a little (spins) and the other releases the lock soon after. But when the spinning exceeds a certain number of attempts, or when one thread holds the lock, another is spinning, and a third thread arrives, the lightweight lock inflates into a heavyweight lock, which blocks every thread except the lock owner and keeps the CPU from spinning idly.

  • Memory semantics of locks

    When a thread releases a lock, the Java memory model flushes the shared variables in that thread's local memory to main memory. When a thread acquires a lock, the Java memory model invalidates that thread's local memory, so the critical-section code protected by the monitor must read the shared variables from main memory again.

Synchronized and ReentrantLock

This part involves some ReentrantLock source-code analysis, which requires a little prior knowledge:

AQS-related prerequisites: CAS, spin locks, park, and unpark

Spinning was mentioned above and CAS is covered below; feel free to skip ahead and read that part first, then come back. For park and unpark you only need to know what they do.

  • ReentrantLock overview

    ReentrantLock, semantically equivalent to Synchronized, is an implementation built on AQS, written by Doug Lea and introduced in Java 1.5. The pain point it originally addressed was that Synchronized had to perform a cumbersome user-mode/kernel-mode switch every time a lock was taken (before Java 1.6), wasting resources. In the scenario where threads execute alternately, ReentrantLock can do all of its work at the JVM layer, without calling native methods and then making system calls the way Synchronized did. This avoids the frequent user/kernel mode switches and improves speed.
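
    Before diving into the source, here is the standard usage pattern (a minimal sketch; the counter field is hypothetical): always release the lock in a finally block so an exception cannot leave the lock held:

      import java.util.concurrent.locks.ReentrantLock;

      public class ReentrantLockDemo {
          private final ReentrantLock lock = new ReentrantLock(); // false (default) = unfair, true = fair
          private int counter = 0;

          public void increment() {
              lock.lock();            // blocks until the lock is acquired
              try {
                  counter++;          // critical section
              } finally {
                  lock.unlock();      // always release in finally
              }
          }
      }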

  • ReentrantLock Lock process

    Understanding this process will take a while and is a little difficult. Some people have told me that someone who has only just started learning shouldn't be impatient and jump straight into writing about topics like concurrency, but I still want to produce some source-level analysis of my own, so I will present this section in my own way, starting from the most objective and practical angle. I hope you read it in the same spirit of learning from each other, and if there are mistakes, as always, please point them out in the comments or contact me directly.

    I'll walk through the ReentrantLock locking process using a hypothetical scenario in which threads A, B, and C acquire the lock in turn (the whole process is described in the code comments, with some jumping back and forth in between).

    • First, let’s take a look at the structure of AQS

      /*AQS are composed of nodes, each Node holds the precursor Node, successor Node, synchronization state, wait state of the current thread. Of course, it also contains a thread entity */
      
      // Synchronize the head node of the queue
      private transient volatile Node head;
      // Synchronize the last node of the queue
      private transient volatile Node tail;
      // Whether the lock is occupied. 0 indicates free and 1 indicates occupied
      private volatile int state;
      

      Since ReentrantLock is implemented based on AQS (ReentrantLock internally uses Sync, which is a subclass of AQS), it is important to understand the structure of AQS first.

    • ReentrantLock fair and unfair

      public ReentrantLock(boolean fair) {
          // constructor
          // If true is passed, a fair lock is created, if false, an unfair lock is created
          // Default is unfair lock
          // Both unfair and fair locks inherit from ReentrantLock's static inner class Sync, which inherits from AQS
      	sync = fair ? new FairSync() : new NonfairSync();
      }
    • Thread A locks (the fair-lock locking process first)

      1. First, thread A attempts to acquire the lock (calls reentrantLock.lock())

      // Thread A acquires the lock
      public void lock() {
          sync.lock();
      }

      2. Enter the fair lock lock() method

      final void lock() {
          /* The fair lock does not try to grab the lock with CAS first; it calls acquire directly. */
          acquire(1);
          /* acquire returns only after the lock has been obtained. */
      }

      3. Enter the acquire method

      public final void acquire(int arg) {
          /* First try to acquire the lock by calling tryAcquire. For convenience, the tryAcquire method is pasted directly below. */
      /* ================================ divider ================================ */
          if (!tryAcquire(arg) &&
              acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
              selfInterrupt();
          /* tryAcquire returns true, the if condition negates it to false, so the body is not executed and acquire returns. */
      }

      // Enter the tryAcquire method from the acquire method
      protected final boolean tryAcquire(int acquires) {
          // Get the current thread: thread A
          final Thread current = Thread.currentThread();
          // Get the current state. state is the flag for whether the lock is occupied: 0 means free, 1 means occupied
          // No thread holds the lock yet, so state == 0, i.e. c == 0
          int c = getState();
          if (c == 0) {
              // c is definitely 0, so execution gets in here
              // hasQueuedPredecessors determines whether another thread is queued ahead of us in the wait queue. Its source is also posted below. (If you are following along, jump to point 4 to see the hasQueuedPredecessors method.)
      /* ================================ divider ================================ */
              // From the result in point 4 below, hasQueuedPredecessors returns false; the if condition negates it to true.
              // Continue with the next condition: compareAndSetState(0, acquires), a CAS operation.
              // The CAS operation changes state from 0 to 1, indicating the lock is now occupied.
              // Then setExclusiveOwnerThread(current) records the current thread as the lock holder.
              // The whole method returns true.
              if (!hasQueuedPredecessors() &&
                  compareAndSetState(0, acquires)) {
                  setExclusiveOwnerThread(current);
                  return true;
              }
          }
          else if (current == getExclusiveOwnerThread()) {
              int nextc = c + acquires;
              if (nextc < 0)
                  throw new Error("Maximum lock count exceeded");
              setState(nextc);
              return true;
          }
          return false;
      }

      4. Enter the hasQueuedPredecessors method

      public final boolean hasQueuedPredecessors() {
          // Assign the tail of the AQS queue to t
          Node t = tail;
          // Assign the head of the AQS queue to h
          Node h = head;
          Node s;
          /* Thread A comes in here and checks whether h equals t.
           * Thread A is the first thread to arrive and the queue has not been initialized, so h == null and t == null,
           * which means h != t is false and the method simply returns false. */
          return h != t &&
              ((s = h.next) == null || s.thread != Thread.currentThread());
      }

      Through the four steps above we have walked through thread A's locking process, which is also what locking looks like when the AQS queue contains no elements at all. How do we know the lock was acquired successfully? The lock method only returns normally once the lock has been acquired; if locking fails, it does not return.

    • Thread B locks

      As before, thread B enters the lock method to try to acquire the lock.

      1. Thread B enters the lock method

      final void lock() {
          /* The fair lock calls acquire directly to try to obtain the lock. */
          acquire(1);
      }

      2. Thread B enters the acquire method

      public final void acquire(int arg) {
          /* Thread B first calls the tryAcquire method to try to acquire the lock.
             (If you are following along, jump to point 3, the tryAcquire method.) */
      /* ================================ divider ================================ */
          if (!tryAcquire(arg) &&
              acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
              selfInterrupt();
          /* tryAcquire returns false; the if condition negates it to true, so evaluation continues and acquireQueued is called.
             Before acquireQueued runs, its argument addWaiter is evaluated first. Enter the addWaiter method (point 4). */
      /* ================================ divider ================================ */
          /* From the result of point 4, addWaiter returns a Node holding the current thread, thread B. */
      /* ================================ divider ================================ */
          /* Then continue with the acquireQueued method; jump to point 6. */
      }

      3. Thread B enters the tryAcquire method

      // Thread B enters the tryAcquire method from the acquire method
      protected final boolean tryAcquire(int acquires) {
          // Get the current thread: thread B
          final Thread current = Thread.currentThread();
          // Get the current state. state is the flag for whether the lock is occupied: 0 means free, 1 means occupied
          // Thread A is holding the lock, so state == 1, i.e. c == 1
          int c = getState();
          if (c == 0) {
              // c == 1, so execution cannot get in here
              if (!hasQueuedPredecessors() &&
                  compareAndSetState(0, acquires)) {
                  setExclusiveOwnerThread(current);
                  return true;
              }
          }
          else if (current == getExclusiveOwnerThread()) {
              // Determine whether the current thread is the thread that holds the lock
              // Thread B is obviously not the thread currently holding the lock, so it cannot get in here either.
              int nextc = c + acquires;
              if (nextc < 0)
                  throw new Error("Maximum lock count exceeded");
              setState(nextc);
              return true;
          }
          // Thread B's entire tryAcquire method returns false
          return false;
      }

      4. Thread B enters the addWaiter method

      private Node addWaiter(Node mode) {
          // Create a Node and set its thread property to the current thread (thread B)
          // A Node stores the current thread, its predecessor node, successor node, synchronization state, and the current thread's wait status. AQS is made up of Nodes.
          Node node = new Node(Thread.currentThread(), mode);
          // Assign tail to pred, but tail is currently null
          Node pred = tail;
          if (pred != null) {
              // pred is null because the wait queue has not been initialized, so this branch is skipped and enq is executed instead
              node.prev = pred;
              if (compareAndSetTail(pred, node)) {
                  pred.next = node;
                  return node;
              }
          }
          // Execute the enq method; jump to point 5, the enq method
          enq(node);
      /* ================================ divider ================================ */
          // The enq method returns
          // The whole addWaiter method then returns the node that holds the current thread
          return node;
      }

      5. Thread B enters the enq method

      private Node enq(final Node node) {
          // The enq method is passed the node that holds thread B
          for (;;) { // infinite loop
              // First iteration: create a Node for the currently running thread, with its thread property left null
              // Second iteration: point the prev pointer of the Node holding thread B at the Node created in the first iteration
              // The result is a queue: the first Node stands for the running thread (thread A), followed by the Node holding thread B
              Node t = tail;
              if (t == null) {
                  /* This is a CAS operation that sets the head of the AQS queue to a new Node;
                   * only if it succeeds is head also assigned to tail.
                   * This is just initialization, so after it the queue looks like this:
                   * there is only one element, and head and tail point to the same Node. */
                  // The enq method then enters the second iteration
                  if (compareAndSetHead(new Node()))
                      tail = head;
              } else {
                  /* The second iteration goes through this branch.
                   * t is assigned to node.prev, and here t == head, so head becomes the predecessor of node.
                   * What is node? It is the node holding thread B, so thread B's prev pointer now points at the newly created head node.
                   * CAS then updates the queue tail to point at the current thread's node, and the method returns t. */
                  node.prev = t;
                  if (compareAndSetTail(t, node)) {
                      t.next = node;
                      return t;
                  }
              }
          }
      }

      6. Thread B enters the acquireQueued method

      final boolean acquireQueued(final Node node, int arg) {
          // This method is passed the node that holds the current thread, thread B
          // arg == 1
          boolean failed = true;
          try {
              boolean interrupted = false;
              // infinite loop
              for (;;) {
                  // First assign the current node's predecessor to p
                  // p is the Node before thread B's node, i.e. the Node standing for thread A
                  final Node p = node.predecessor();
                  // If p == head, try to acquire the lock again with tryAcquire
                  /* There are two cases:
                   * 1. Thread A has finished and released the lock, so tryAcquire succeeds: head is set to thread B's node,
                   *    p.next is set to null so the old head has no references left and can be collected by the GC,
                   *    and the whole method returns false (why "interrupted" matters is explained later).
                   * 2. Thread A has not finished, so the attempt to take the lock fails, the if is not entered, and the loop continues.
                   * The essence of this is: right after deciding it has to queue, the thread does not park immediately.
                   * It first spins and tries to take the lock; if that succeeds it takes the lock directly instead of paying for a context switch,
                   * and if it fails it moves on to the other operations. This attempt is repeated twice,
                   * i.e. the thread spins twice before it parks! */
                  if (p == head && tryAcquire(arg)) {
                      setHead(node);
                      p.next = null; // help GC
                      failed = false;
                      return interrupted;
                  }
                  // If the attempt to take the lock fails, the thread enters this check and executes shouldParkAfterFailedAcquire
                  // shouldParkAfterFailedAcquire takes two parameters: the Node holding thread A and the Node holding thread B
                  // If you are following along, jump to point 7, the shouldParkAfterFailedAcquire method.
      /* ================================ divider ================================ */
                  // From point 7 we know shouldParkAfterFailedAcquire returns false, so the loop starts its next iteration.
                  /* In the second iteration the thread tries to acquire the lock once more; if that fails,
                   * it enters shouldParkAfterFailedAcquire again. Because the first call already changed waitStatus to Node.SIGNAL
                   * (meaning this thread is ready to be blocked and wait to be woken up),
                   * shouldParkAfterFailedAcquire now returns true,
                   * and parkAndCheckInterrupt is called. That method is only two lines of code:
                   *     LockSupport.park(this);          // put the thread into the blocked state
                   *     return Thread.interrupted();     // returns true if the thread was interrupted; there is a pitfall here, discussed later
                   * So thread B blocks after two failed spin attempts to take the lock.
                   * The method does not return normally because the thread is now blocked,
                   * which means the original call to lock() has not returned either: the whole thread is stuck on the lock() line
                   * and cannot touch the shared data.
                   * This is what I mentioned earlier: if lock() returns normally, locking succeeded; if it does not return, the thread is blocked. */
                  if (shouldParkAfterFailedAcquire(p, node) &&
                      parkAndCheckInterrupt())
                      interrupted = true;
              }
          } finally {
              if (failed)
                  cancelAcquire(node);
          }
      }

      7. Thread B enters the shouldParkAfterFailedAcquire (should the thread park after failing to acquire the lock) method

      private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
          /* The shouldParkAfterFailedAcquire (should the thread park after failing to acquire the lock) method does three things:
           * 1. If pred.waitStatus is greater than 0, that node has cancelled its lock attempt, and the do-while loop walks backwards removing cancelled nodes.
           * 2. If the status is not Node.SIGNAL and the node has not been cancelled, try to change the status to Node.SIGNAL.
           * 3. If the status is Node.SIGNAL, the thread is ready to block and wait to be woken up. */

          // This method takes two arguments: the predecessor pred and the current thread's node
          // First assign pred's waitStatus to ws; at the moment it is 0
          int ws = pred.waitStatus;
          // Node.SIGNAL means this node's successor is ready to block and wait to be woken up. If it is not set, the thread will not block.
          // The waitStatus of the current node's pred has not been set to this state yet.
          if (ws == Node.SIGNAL)
              return true;
          // ws is currently 0, not greater than 0
          if (ws > 0) {
              do {
                  node.prev = pred = pred.prev;
              } while (pred.waitStatus > 0);

              pred.next = node;
          } else {
              // Enter this branch
              // Use CAS to try to change pred's (note: pred's, not node's!) waitStatus from 0 to Node.SIGNAL
              // In other words, a node's waitStatus is only ever changed by the node behind it: the waitStatus of the previous node signals what its successor needs.
              compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
          }
          // The whole method returns false
          return false;
      }
    • Thread C locks

      1. The lock method

      // Thread C acquires the lock
      public void lock() {
          sync.lock();
      }

      2. The fair lock's lock() method

      final void lock() {
          /* The fair lock calls acquire directly to try to obtain the lock. */
          acquire(1);
      }

      3. The acquire method

      public final void acquire(int arg) {
          /* Thread C first calls the tryAcquire method to try to acquire the lock (point 4). */
      /* ================================ divider ================================ */
          /* tryAcquire returns false, so evaluation continues with the addWaiter method. If you are following along, jump to point 5. */
      /* ================================ divider ================================ */
          /* addWaiter returns the Node that holds thread C, and then acquireQueued is executed. If you are following along, enter the acquireQueued method (point 6). */
          if (!tryAcquire(arg) &&
              acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
              selfInterrupt();
      }

      4. The tryAcquire method

      // Thread C enters the tryAcquire method from acquire
      protected final boolean tryAcquire(int acquires) {
          // Get the current thread: thread C
          final Thread current = Thread.currentThread();
          // Get the current state. state is the flag for whether the lock is occupied: 0 means free, 1 means occupied
          // Thread A is holding the lock, so state == 1, i.e. c == 1
          int c = getState();
          if (c == 0) {
              // c == 1, so execution cannot get in here
              if (!hasQueuedPredecessors() &&
                  compareAndSetState(0, acquires)) {
                  setExclusiveOwnerThread(current);
                  return true;
              }
          }
          else if (current == getExclusiveOwnerThread()) {
              /* This branch supports reentrancy: if the thread holding the lock is the current thread, no CAS is needed to acquire it again */
              // Determine whether the current thread is the thread that holds the lock
              // Thread C is not the thread currently holding the lock, so it cannot get in here.
              int nextc = c + acquires;
              if (nextc < 0)
                  throw new Error("Maximum lock count exceeded");
              setState(nextc);
              return true;
          }
          // Thread C's entire tryAcquire method returns false
          return false;
      }

      5. The addWaiter method

      private Node addWaiter(Node mode) {
          // A Node stores the current thread, its predecessor node, successor node, synchronization state, and the current thread's wait status. AQS is made up of Nodes.
          // Create a Node and set its thread property to the current thread (thread C)
          Node node = new Node(Thread.currentThread(), mode);
          // Assign tail to pred; tail is currently the Node that holds thread B
          Node pred = tail;
          if (pred != null) {
              // pred is not null: the queue has been initialized and already contains two nodes
              // Assign pred to the predecessor pointer of thread C's Node.
              // pred is the Node that holds thread B
              node.prev = pred;
              // Perform a CAS operation to point tail at the Node holding thread C
              // The whole AQS queue now looks like this:
              // (head: Node with a null thread, standing for thread A) <--> Node for thread B <--> Node for thread C (tail)
              if (compareAndSetTail(pred, node)) {
                  pred.next = node;
                  return node;
                  // The method returns; the return value is the Node that holds thread C
              }
          }
          enq(node);
          return node;
      }

      6. The acquireQueued method

      final boolean acquireQueued(final Node node, int arg) {
          // This method is passed the node that holds the current thread, thread C
          // arg == 1
          boolean failed = true;
          try {
              boolean interrupted = false;
              // infinite loop
              for (;;) {
                  // Assign the current node's predecessor to p, i.e. p == the node that holds thread B
                  final Node p = node.predecessor();
                  // p is currently in the second position of the queue, so it is not the head and the if is not entered.
                  /* Why check p == head? When a thread whose predecessor is the head of the queue reaches this point,
                   * it is quite possible that the head has already finished executing and released the lock, so it is worth trying tryAcquire.
                   * But if there is still a queue ahead of you, there is no point trying to grab the lock, because it is not your turn yet.
                   * So the thread simply skips this branch and keeps queueing. */
                  if (p == head && tryAcquire(arg)) {
                      setHead(node);
                      p.next = null; // help GC
                      failed = false;
                      return interrupted;
                  }
                  // Go straight to the check that decides whether the thread needs to park
                  // The shouldParkAfterFailedAcquire flow is exactly the same as it was for thread B,
                  // so I will not paste that code again.
                  // Thread C eventually ends up parked here.
                  // The method finishes but does not return normally: thread C is blocked, waiting for the threads ahead of it to finish and wake it up.
                  if (shouldParkAfterFailedAcquire(p, node) &&
                      parkAndCheckInterrupt())
                      interrupted = true;
              }
          } finally {
              if (failed)
                  cancelAcquire(node);
          }
      }
  • Locking process summary

    • The first thread acquires the lock

      When the lock is idle, the first thread attempts to acquire it — this is the case of thread A above. The queue has not been initialized at that point, and in fact it is still not initialized after thread A successfully acquires the lock. As you can see, thread A's whole acquisition goes very smoothly: there is no system call and no user-mode/kernel-mode switch, and every operation is completed entirely in Java. So when threads execute alternately, using ReentrantLock as the synchronization mechanism is efficient, because no thread blocks and acquiring the lock requires no system call.

    • The second thread acquires the lock

      When the lock is occupied, a second thread attempts to acquire it — this corresponds to thread B above. The queue has not been initialized yet, so thread B initializes it. The queue then contains not only thread B: there is a Node in front of thread B's node whose thread attribute is null, but which we can treat as representing thread A, the current lock holder. Thread A never actually queued, which is why that first Node's thread attribute is null. You can also picture it like a queue at a counter: the first person is not really waiting in line, they are already being served; only the second person onwards is queueing. The point is that the queue now contains two Nodes. After creating its Node, thread B, being the second thread in the queue, will get its turn as soon as thread A finishes, so it checks whether thread A has already finished and released the lock by trying to acquire it — that is the first spin. It then sets the waitStatus of the previous Node to Node.SIGNAL to show that it is about to block, but before parking it makes one more hopeful attempt — what if the lock is free right now?! If that attempt fails too, it blocks itself with park.

    • The third thread acquires the lock

      When the lock is occupied and the queue is already initialized, there are threads queueing ahead — this is what thread C runs into. Thread C first tries to acquire the lock, fails, and since there is already a queue in front of it, it goes straight into the queue. While queueing it looks at its position and stops scrambling for the lock: with others waiting ahead of it, it has no chance of getting it yet. But being third in line does not mean giving up hope! It still spins twice to see whether it has moved up to second place; if it has, it tries for the lock again.

    • Final queue diagram

    • The nature of the Lock

      It is actually very simple: a thread that successfully acquires the lock returns normally from lock(), and after returning it can continue executing the code that follows the lock() call. A thread that fails to acquire the lock is blocked inside lock() and cannot proceed.

  • A pitfall left over from the locking process

    • Why does the parkAndCheckInterrupt() method return Thread.interrupted() instead of true, false, or nothing at all?

      parkAndCheckInterrupt

      private final boolean parkAndCheckInterrupt() {
          LockSupport.park(this); // call park() to put the thread into a waiting state
          return Thread.interrupted(); // once woken up, check whether the thread was interrupted
      }

      When the park method executes, the thread is already blocked; when the thread currently holding the lock releases it, it wakes the blocked thread so it can continue running. Of course it is not quite that simple: we first need to determine whether this thread has been interrupted by the user, because some developers add logic along the lines of "if I have waited x seconds without getting the lock, stop trying and interrupt", so the thread being woken up may in fact be in the interrupted state.

      This raises a question: if a thread calls lock() while the lock is occupied and therefore blocks, does calling interrupt() on that blocked thread cancel its attempt to lock? The answer is no. LockSupport.park responds to the interrupt (it returns) but does not throw InterruptedException. (If you want interruptible locking, call lockInterruptibly() instead; it throws InterruptedException when the thread is interrupted.) In other words, if we lock with lock() and then interrupt the thread with interrupt(), the lock attempt does not respond. So why does parkAndCheckInterrupt return the interrupt status at all? Wouldn't returning void be fine? Let's look at the logic of the lockInterruptibly() method:

      public void lockInterruptibly() throws InterruptedException {
          sync.acquireInterruptibly(1);
      }
      public final void acquireInterruptibly(int arg) throws InterruptedException {
          if (Thread.interrupted())
              // If the thread has already been interrupted, throw the exception straight away
              throw new InterruptedException();

          if (!tryAcquire(arg))
              doAcquireInterruptibly(arg);
      }
      private void doAcquireInterruptibly(int arg) throws InterruptedException {
          final Node node = addWaiter(Node.EXCLUSIVE);
          boolean failed = true;
          try {
              for (;;) {
                  final Node p = node.predecessor();
                  if (p == head && tryAcquire(arg)) {
                      setHead(node);
                      p.next = null; // help GC
                      failed = false;
                      return;
                  }
                  if (shouldParkAfterFailedAcquire(p, node) &&
                      parkAndCheckInterrupt())
                      // Here lies the difference from a normal lock():
                      // lock() only records the interrupted state (interrupted = true),
                      // whereas here the exception is thrown directly
                      throw new InterruptedException();
              }
          } finally {
              if (failed)
                  cancelAcquire(node);
          }
      }

      So why does parkAndCheckInterrupt() return Thread.interrupted() rather than void? Because parkAndCheckInterrupt is also called from doAcquireInterruptibly; returning the interrupt status lets the same code be reused by both lock() and lockInterruptibly().
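
      To make the difference concrete, here is a small sketch of my own (class and variable names are mine, not from the JDK): a thread blocked in lockInterruptibly() can be cancelled with interrupt(), whereas a thread blocked in lock() cannot:

      import java.util.concurrent.locks.ReentrantLock;

      public class InterruptibleLockDemo {
          public static void main(String[] args) throws InterruptedException {
              ReentrantLock lock = new ReentrantLock();
              lock.lock(); // main thread holds the lock, so the worker must block

              Thread worker = new Thread(() -> {
                  try {
                      lock.lockInterruptibly(); // responds to interrupt by throwing InterruptedException
                      try {
                          System.out.println("worker got the lock");
                      } finally {
                          lock.unlock();
                      }
                  } catch (InterruptedException e) {
                      System.out.println("worker was interrupted while waiting for the lock");
                  }
              });
              worker.start();

              Thread.sleep(1000);
              worker.interrupt(); // cancels the blocked lock attempt; with lock() this would only set the interrupt flag
              worker.join();
              lock.unlock();
          }
      }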

      Question: What does selfInterrupt() do, why do I need to call this method again, and what happens if I don’t? Readers are welcome to comment in the discussion section.

  • Gossip about Synchronized and ReentrantLock

    Before Java 1.6, Synchronized was a genuinely heavyweight lock: every acquisition required a user-mode/kernel-mode switch, and threads that failed to get the lock were blocked. ReentrantLock was introduced precisely to solve this pain point, and that is how it earned its place. Synchronized, on the other hand, has been greatly improved since Java 1.6: it gained spinning and adaptive spinning, and it was split into four states — no lock, biased, lightweight, and heavyweight — that are upgraded and downgraded automatically depending on the scenario. So Synchronized can now shed its heavyweight-lock reputation, and in some cases it can even be more effective than ReentrantLock. Does ReentrantLock still have a reason to exist? Obviously yes: ReentrantLock offers a rich API and much more flexibility, and it provides both fair and unfair lock implementations, which Synchronized cannot do.

JMM memory visibility

  • The Java Memory Model, JMM (not the JVM's runtime memory layout)

    The Java Memory Model is itself an abstract concept that does not physically exist. It describes a set of rules or a specification that defines how the variables in a program (including instance fields, static fields, and the elements that make up array objects) may be accessed. It is not the same thing as the JVM's runtime memory layout, even though both are often abbreviated as the "Java memory model": they sit at different conceptual levels (you can compare this with my earlier blog post on the JVM). The JMM is a set of rules built around atomicity, visibility, and ordering. The only similarity between the JMM and the JVM memory layout is that both have a shared region and private regions: the JMM's private region is the working memory, while the JVM's private regions are the program counter, the Java virtual machine stack, and the native method stack.

    The JVM creates a working memory (stack space) for each thread to store thread-private data, while the JMM stipulates that all variables live in main memory, a shared area accessible to every thread. A thread's operations on a variable, however, must be performed in its working memory: each thread first copies the variable from main memory into its working memory, operates on it there, and writes the variable back to main memory when the operation is complete.

  • The JMM main memory

    Main memory mainly stores Java instance objects, including member variables, class information, constants, static variables, and so on. Because it is a shared area, it causes thread safety problems when multiple threads operate concurrently.

  • JMM working memory

    Working memory mainly stores all local variable information, bytecode line number indicator and Native method information of the current method. Local variables are not visible to other threads. Working memory stores copies of variables in main memory, and each thread can only access its own local memory. Since working memory is private space, there are no thread-safety issues.

  • Summary of what is stored in main memory and working memory and how they interact

    1. Local variables of primitive types are stored directly in the stack-frame structure in working memory.

    2. Local variables of reference type. Reference objects are stored in stack frames and instance objects are stored in main memory.

    3. Member variables, static variables, and class information of the instance object are stored in the main memory.

    4. Main memory is shared in the following way: each thread copies a variable into its working memory and flushes it back to main memory after operating on it.

  • How does the JMM solve the memory visibility problem

    • Conditions for instruction reordering

      1. Reordering must not change the result of the program in a single-threaded environment.

      2. Reordering is not allowed when data dependencies exist. In other words, instructions can only be reordered when their ordering cannot be deduced from the happens-before principle.

    • Happens-before principle

      If the result of operation A needs to be visible to operation B, then there is a happens-before relationship between A and B. For example:

      i=1;// Thread A executes
      j=i;// Thread B executes
      /* thread A happens-before B*/

      Happens-before eight rules:

      1. Program order rule: within a single thread, according to program code order, operations written earlier happen-before operations written later.

      2. Monitor lock rule: an unlock operation on a lock happens-before a subsequent lock operation on the same lock.

      3. Volatile variable rule: a write to a volatile variable happens-before subsequent reads of that variable.

      4. Transfer rule: If operation A precedes operation B and operation B precedes operation C, it can be concluded that operation A precedes operation C.

      5. Thread start rule: The start() method of the Thread object precedes every action of the Thread.

      6. Thread interrupt rule: a call to Thread.interrupt() happens-before the interrupted thread's code detects the interrupt.

      7. Thread termination rule: All operations in a Thread occur before the Thread terminates. We can terminate the Thread by using thread.join () and detect that the Thread has terminated by using the return value of thread.isalive ().

      8. Object finalization rule: The finalization of an object happens first at the start of its Finalize () method.

      If two operations cannot be related by any of the happens-before rules, there is no guarantee of their ordering and the JVM may reorder them. If operation A happens-before operation B, then everything operation A did to memory is visible to operation B.
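
      As a small illustration (my own sketch) of the program order, thread start, and thread termination rules working together, no volatile or lock is needed below, yet the result is guaranteed:

      public class StartJoinDemo {
          static int data = 0; // deliberately neither volatile nor locked

          public static void main(String[] args) throws InterruptedException {
              data = 1;                            // visible inside t via the start() rule
              Thread t = new Thread(() -> data++); // t reliably reads 1 and writes 2
              t.start();
              t.join();                            // via the termination rule, the write in t
              System.out.println(data);            //   is visible here: always prints 2
          }
      }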

    • Visibility of Volatile

      Volatile: a lightweight synchronization mechanism provided by the JVM. It ensures that the marked shared variable is always visible to all threads and forbids instruction reordering around it, but volatile alone is not enough to make compound operations safe in a multithreaded environment.

      public class VolatileTest {
          public static volatile int value = 0;

          public static void main(String[] args) {
              inc();
          }

          public static void inc() {
              value++; // a compound read-modify-write: not atomic even though value is volatile
          }
      }

      The volatile-modified value is flushed to main memory on every write, but in a multithreaded environment this is still problematic because value++ is a compound read-modify-write operation and is not atomic; it can be made safe with synchronized (or an atomic class), as the sketch below shows.
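
      A minimal sketch (my own example, not from the original code) contrasting the volatile counter with an AtomicInteger from java.util.concurrent.atomic:

      import java.util.concurrent.atomic.AtomicInteger;

      public class VolatileCounterDemo {
          static volatile int unsafeCount = 0;                        // visible but not atomic
          static final AtomicInteger safeCount = new AtomicInteger(); // CAS-based atomic counter

          public static void main(String[] args) throws InterruptedException {
              Runnable task = () -> {
                  for (int i = 0; i < 10_000; i++) {
                      unsafeCount++;               // lost updates are possible here
                      safeCount.incrementAndGet(); // always ends up at 20000
                  }
              };
              Thread t1 = new Thread(task);
              Thread t2 = new Thread(task);
              t1.start();
              t2.start();
              t1.join();
              t2.join();
              System.out.println("volatile only: " + unsafeCount);     // often less than 20000
              System.out.println("atomic:        " + safeCount.get()); // always 20000
          }
      }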

    • Why do volatile variables guarantee visibility?

      When a volatile variable is written, the JMM flushes the value of the shared variable from the thread’s working memory to main memory.

      When a volatile variable is read, the JMM invalidates the copy in the thread’s working memory, forcing the thread to re-read the value from main memory.

    • How does Volatile prohibit reordering

      Memory barriers guarantee the order in which certain operations are performed and force certain variables to be made visible by writing them to memory. By inserting memory barrier instructions, volatile prevents the instructions before and after the barrier from being reordered, and it forces stale CPU cache data to be flushed or invalidated, so that a thread on any CPU reads the latest value of the variable.

    • The difference between volatile and synchronized

      1. Volatile essentially tells the JVM that the copy of the variable in working memory (registers/CPU cache) may be stale and must be re-read from main memory; synchronized locks the current monitor, so the protected variable can only be accessed by the thread holding the lock, and other threads block until that thread has finished manipulating the variable.

      2. Volatile can only be applied to variables, while synchronized can be applied to code blocks and methods (instance methods lock the object, static methods lock the class).

      3. Volatile only guarantees the visibility of variable changes and does not guarantee atomicity; synchronized guarantees both visibility and atomicity.

      4. Volatile does not block threads, whereas synchronized does.

      5. Variables marked volatile are not subject to compiler reordering optimizations, whereas code working on ordinary variables may be optimized by the compiler (a typical correct use of volatile is sketched below).
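
      A typical correct use of volatile (my own sketch): a stop flag that one thread writes and another thread only reads. No compound operation is involved, so visibility alone is enough and no lock is needed.

      public class StopFlagDemo {
          private static volatile boolean running = true;

          public static void main(String[] args) throws InterruptedException {
              Thread worker = new Thread(() -> {
                  while (running) {
                      // do work; without volatile this loop might never see the update
                  }
                  System.out.println("worker sees running == false, exiting");
              });
              worker.start();
              Thread.sleep(100);
              running = false; // published to the worker thread by the volatile write
              worker.join();
          }
      }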

CAS

  • CAS is an efficient way to implement thread safety

    1. It supports atomic update operations and suits counters, sequence generators, and similar scenarios.

    2. Optimistic locking mechanism.

    3. If the CAS operation fails, it is up to the developer to decide whether to keep retrying or to do something else.

  • CAS thought

    CAS involves three operands: V (the memory location), E (the expected old value), and N (the new value). V is updated to N if and only if V == E; otherwise nothing is written, and it is up to the developer to retry or do something else. The Atomic classes in JUC all use CAS to guarantee atomic operations on variables, so in many cases a developer does not need to use CAS directly to solve thread-safety problems: using the JUC packages is enough to stay thread-safe.
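
    A minimal sketch (my own example) of the same idea using AtomicInteger.compareAndSet, essentially the hand-written version of what incrementAndGet() does internally:

    import java.util.concurrent.atomic.AtomicInteger;

    public class CasLoopDemo {
        private static final AtomicInteger counter = new AtomicInteger(0);

        static int increment() {
            int expected;
            int next;
            do {
                expected = counter.get(); // E: the value we believe is current
                next = expected + 1;      // N: the value we want to install
            } while (!counter.compareAndSet(expected, next)); // only succeeds if V == E
            return next;
        }

        public static void main(String[] args) {
            System.out.println(increment()); // 1
            System.out.println(increment()); // 2
        }
    }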

  • CAS shortcomings

    1. If you choose to retry after a failure, a long spin loop wastes CPU and can hurt performance significantly.

    2. CAS can only guarantee atomic operations on a single shared variable.

    3. The ABA problem: if a thread reads a variable whose value is A and later performs a CAS expecting A, can it be sure the variable was never modified in the meantime? Another thread may have changed the value from A to B and then back to A, and the CAS would still succeed. Solution: AtomicStampedReference, which attaches a version number (stamp) to the reference, as sketched below.
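
    A small sketch (my own example) of how AtomicStampedReference detects the A -> B -> A change through its stamp:

    import java.util.concurrent.atomic.AtomicStampedReference;

    public class AbaDemo {
        public static void main(String[] args) {
            // value "A" with initial stamp (version) 0
            AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);

            int stamp = ref.getStamp(); // remember version 0

            // simulate another thread changing A -> B -> A, bumping the stamp each time
            ref.compareAndSet("A", "B", ref.getStamp(), ref.getStamp() + 1);
            ref.compareAndSet("B", "A", ref.getStamp(), ref.getStamp() + 1);

            // a plain value comparison would still match "A", but the stamped CAS fails
            // because the version has moved from 0 to 2
            boolean swapped = ref.compareAndSet("A", "C", stamp, stamp + 1);
            System.out.println(swapped); // false: the ABA change was detected
        }
    }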

Optimistic locks and pessimistic locks

Finally, let’s finish with some relatively light content: optimistic locks and pessimistic locks, introduced at the level of concepts.

  • Optimistic locking

    Always assume the best case: every time you read the data, assume that no one else will modify it, so do not lock; only when updating do you check whether anyone else has updated the data in the meantime. This can be implemented with a version number mechanism or with the CAS algorithm. Optimistic locking suits read-heavy applications and improves throughput. Databases provide optimistic-locking facilities similar to write_condition, and in Java the atomic variable classes in the java.util.concurrent.atomic package are implemented with CAS, which is one way of implementing optimistic locking (a version-number sketch follows).
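
    A minimal sketch of the version-number mechanism (the class and its fields are my own illustration): a writer re-checks the version it originally read and only writes if nothing changed in the meantime; on conflict it fails immediately instead of blocking, and the caller decides whether to retry.

    class VersionedValue {
        private int value;
        private int version;

        public synchronized int[] read() {           // returns {value, version}
            return new int[] { value, version };
        }

        public synchronized boolean tryUpdate(int expectedVersion, int newValue) {
            if (version != expectedVersion) {
                return false;                        // someone else updated first
            }
            value = newValue;
            version++;
            return true;
        }
    }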

  • Pessimistic locking

    Always assume the worst case: every time you read the data, assume that someone else will modify it, so lock it on every access, and anyone else who wants the data blocks until the lock is released (the shared resource is used by only one thread at a time; other threads block and only get the resource after the holder is done). Traditional relational databases rely heavily on this kind of locking, such as row locks, table locks, read locks, and write locks, all acquired before the operation. Exclusive locks such as synchronized and ReentrantLock in Java are implementations of the pessimistic locking idea (a sketch follows).
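
    A minimal pessimistic-locking sketch (my own example) with ReentrantLock: every access acquires the lock first, so only one thread at a time can touch the shared balance and all others block until it is released.

    import java.util.concurrent.locks.ReentrantLock;

    class PessimisticAccount {
        private final ReentrantLock lock = new ReentrantLock();
        private int balance;

        public void deposit(int amount) {
            lock.lock();       // other threads block here until unlock()
            try {
                balance += amount;
            } finally {
                lock.unlock(); // always release, even if an exception is thrown
            }
        }

        public int getBalance() {
            lock.lock();
            try {
                return balance;
            } finally {
                lock.unlock();
            }
        }
    }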

  • Two types of lock usage scenarios

    Optimistic locking suits scenarios with few writes (read-heavy scenarios) where conflicts are genuinely rare; it saves the overhead of locking and raises the overall throughput of the system. In write-heavy scenarios, however, conflicts arise often, the upper-layer application keeps retrying, and performance drops, so pessimistic locking is the better fit for write-heavy scenarios.

conclusion

I had wanted to write this article for a long time. Locks are a complicated topic, and I always feel that however much I write is not enough, so I have kept learning and adding content as I go; I have also been told that I am fickle and should not start writing as soon as I have learned something. Once again, blogging is really about building my own knowledge, not about trying to teach you anything; after all, I am still a student. So I hope you read this blog in a spirit of mutual discussion and mutual learning, and if there is any mistake, please point it out directly so we can discuss and learn together.

The title of this article is “Lock”, as a summary of my time learning about concurrent programming.

Finally, I wish you all a happy National Day holiday, have a good time.

The picture of this article is from the Internet.

Welcome to visit my personal Blog: Object’s Blog