Preface

Java provides a rich variety of locks, each of which can be very efficient in the right scenario because of its unique characteristics. This article introduces mainstream locks and the scenarios each suits, using lock-related source code (all source in this article comes from JDK 8) and usage examples.

In Java, locks are often defined according to whether they contain a certain feature. We categorize locks by features and introduce them by comparison to help you understand relevant knowledge more quickly. The following is an overview of the contents of this article:



1. Optimistic locks vs. pessimistic locks

Optimistic locking and pessimistic locking are broad concepts that reflect different perspectives on thread synchronization, and both have practical applications in Java and in databases.

First, the concepts. For concurrent operations on the same data, a pessimistic lock assumes that other threads will modify the data whenever it uses the data, so it locks the data first upon acquiring it, ensuring the data cannot be modified by other threads in the meantime. In Java, the synchronized keyword and the implementation classes of the Lock interface are pessimistic locks.

An optimistic lock, on the other hand, assumes that no other thread will modify the data while it uses it, so it does not lock the data; it merely checks, when writing, whether the data has been updated by another thread in the meantime. If not, the current thread writes its modified data successfully. If the data has already been updated, different actions are taken (such as reporting an error or retrying automatically) depending on the implementation.

Optimistic locking is implemented in Java with lock-free programming, most commonly the CAS algorithm; for example, the increment operations of Java's atomic classes are implemented by CAS spinning.



According to the above concept description, we can find:

  • Pessimistic locking suits scenarios with many write operations; holding the lock during writes guarantees data correctness.
  • Optimistic locking suits scenarios with many read operations; being lock-free greatly improves read performance.

The concept is somewhat abstract, so let’s look at an example of how optimistic and pessimistic locks can be called:

// ------------------------- Pessimistic lock invocation -------------------------
// synchronized
public synchronized void testMethod() {
    // operate on the synchronized resource
}

// ReentrantLock
private ReentrantLock lock = new ReentrantLock(); // make sure multiple threads use the same lock

public void modifyPublicResources() {
    lock.lock();
    // operate on the synchronized resource
    lock.unlock();
}

// ------------------------- Optimistic lock invocation -------------------------
private AtomicInteger atomicInteger = new AtomicInteger(); // make sure multiple threads use the same AtomicInteger
atomicInteger.incrementAndGet(); // increment by 1

From these invocation examples we can see that a pessimistic lock almost always locks explicitly before operating on the synchronized resource, while an optimistic lock operates on the resource directly. So how does optimistic locking achieve thread synchronization without locking the resource? Let's introduce the principle of CAS, the main implementation technique of optimistic locking, to clear up the confusion.

CAS, short for Compare And Swap, is a lock-free algorithm: it synchronizes a variable between multiple threads without using locks (i.e., without blocking any thread). The atomic classes in the java.util.concurrent package use CAS to implement optimistic locking.

The CAS algorithm involves three operands:

  • The memory value V to read or write.
  • The expected value A to compare against.
  • The new value B to write.

CAS atomically updates V to the new value B if and only if the current value of V equals A ("compare + update" together form a single atomic operation); otherwise it does nothing. In general, a failed CAS is followed by a retry (spin).

The atomic classes in the java.util.concurrent package implement optimistic locking via CAS. Let's take AtomicInteger as an example.
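For reference, these are the relevant fields in the JDK 8 AtomicInteger source (abridged):

public class AtomicInteger extends Number implements java.io.Serializable {
    // unsafe: used to get and operate on memory data directly
    private static final Unsafe unsafe = Unsafe.getUnsafe();
    // valueOffset: the memory offset of the value field within an AtomicInteger
    private static final long valueOffset;

    static {
        try {
            valueOffset = unsafe.objectFieldOffset
                (AtomicInteger.class.getDeclaredField("value"));
        } catch (Exception ex) { throw new Error(ex); }
    }

    // value: the current int value; volatile makes it visible across threads
    private volatile int value;
    ...
}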



From this source we can see the function of each field:

  • unsafe: gets and operates on memory data.
  • valueOffset: stores the memory offset of the value field within an AtomicInteger.
  • value: stores the int value of the AtomicInteger; it needs the volatile keyword to guarantee visibility across threads.

Unsafe.getandaddint () is the underlying call to AtomicInteger’s incrementAndGet() function. Unsafe.class is the only method in the JDK itself, so the unsafe. class file doesn’t have the required parameters.

// ------------------------- JDK 8 -------------------------
// AtomicInteger.java
public final int incrementAndGet() {
    return unsafe.getAndAddInt(this, valueOffset, 1) + 1;
}

// Unsafe.class
public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        var5 = this.getIntVolatile(var1, var2);
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));
    return var5;
}

// ------------------------- OpenJDK 8 -------------------------
// Unsafe.java
public final int getAndAddInt(Object o, long offset, int delta) {
    int v;
    do {
        v = getIntVolatile(o, offset);
    } while (!compareAndSwapInt(o, offset, v, v + delta));
    return v;
}

getAndAddInt() loops: it reads the value v at the given offset in object o, then checks whether the memory value still equals v. If so, it sets the memory value to v + delta; otherwise compareAndSwapInt() returns false and the loop retries until the update succeeds, at which point the old value is returned. The entire "compare + update" operation is encapsulated in compareAndSwapInt(), which is implemented via JNI using a single CPU instruction; it is an atomic operation that guarantees all threads see the latest value of the variable.

Under the hood, the JDK uses the CPU's CMPXCHG instruction to compare the value A in the register with the value V in memory. If they are equal, the new value B is written to memory; if not, the memory value V is loaded into the register as the new A. The while loop in the Java code then issues CMPXCHG again to retry until the update succeeds.

Although CAS is very efficient, it also suffers from three major problems, which are briefly described here:

1. The ABA problem. When CAS operates on a value, it checks whether the memory value has changed and updates it only if it has not. But if the value was A, then became B, then became A again, CAS sees an unchanged value even though the data actually changed. The solution to the ABA problem is to attach a version number to the variable and increment it on every update, so the sequence "A-B-A" becomes "1A-2B-3A."

Starting with JDK 1.5, the AtomicStampedReference class is provided to solve the ABA problem; the logic is encapsulated in compareAndSet(). compareAndSet() first checks whether the current reference and current stamp equal the expected reference and expected stamp, and if so, atomically sets the reference and stamp to the given new values.
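Here is a minimal usage sketch (the class and variable names are illustrative, not from the original article):

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // Initial value "A" with stamp (version number) 0
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);

        int stamp = ref.getStamp();        // current version
        String value = ref.getReference(); // current value

        // Succeeds only if the value is still "A" AND the stamp is unchanged,
        // so an intervening A -> B -> A sequence (which bumps the stamp) is caught
        boolean updated = ref.compareAndSet(value, "B", stamp, stamp + 1);
        System.out.println("updated: " + updated); // true on the first attempt
    }
}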

2. Long spin times and high cost. If a CAS operation keeps failing for a long time, it spins continuously, which imposes a heavy CPU overhead.

3. Atomicity is guaranteed for only one shared variable. CAS guarantees atomic operations on a single shared variable, but cannot guarantee atomicity across multiple shared variables.

The JDK provides the AtomicReference class to guarantee atomicity when swapping reference objects: multiple variables can be placed in one object, and the CAS operation is then performed on that single reference.
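A short sketch of that idea, using a hypothetical immutable Pair class to bundle two variables into one object:

import java.util.concurrent.atomic.AtomicReference;

public class PairDemo {
    // Hypothetical immutable holder bundling two variables into one object
    static final class Pair {
        final int x;
        final int y;
        Pair(int x, int y) { this.x = x; this.y = y; }
    }

    public static void main(String[] args) {
        AtomicReference<Pair> ref = new AtomicReference<>(new Pair(1, 2));

        Pair current = ref.get();
        Pair next = new Pair(current.x + 1, current.y + 1);
        // Both fields are swapped atomically, because CAS targets the single reference
        boolean updated = ref.compareAndSet(current, next);
        System.out.println("updated: " + updated);
    }
}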

2. Spinlocks vs. adaptive spinlocks

Before introducing spin locks, we need to introduce some prerequisites to help you understand the concept of spin locks.

Blocking or waking a Java thread requires the operating system to switch CPU state, which costs processor time. If the synchronized code block is very simple, the state transition may well take longer than executing the user code itself.

In many scenarios, synchronized resources are locked only for a short time; suspending a thread and restoring it later may not be worth the cost of the thread switch for such a brief wait. If the physical machine has multiple processors and two or more threads can execute in parallel, we can let the later thread that requests the lock hold on without giving up its CPU time, to see whether the thread holding the lock releases it soon.

To let the current thread "hold on," we make it spin. If the thread that held the synchronized resource has released the lock by the time the spin finishes, the current thread acquires the resource directly without blocking, avoiding the overhead of a thread switch. This is the spin lock.
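To make the idea concrete, here is a minimal CAS-based spin lock sketch (SimpleSpinLock is a made-up illustration, not a JDK class; real implementations add queueing, timeouts, and spin limits):

import java.util.concurrent.atomic.AtomicReference;

public class SimpleSpinLock {
    // Holds the owning thread, or null when the lock is free
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Spin until the CAS from null to current succeeds, i.e. nobody holds the lock.
        while (!owner.compareAndSet(null, current)) {
            // busy-wait: no blocking, no context switch
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        // Only the owner may release; CAS back to null.
        owner.compareAndSet(current, null);
    }
}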



Spin locks have their own drawbacks; they are no substitute for blocking. Spin waiting avoids the overhead of thread switching, but it occupies processor time. If the lock is held for a short time, spin waiting works very well; conversely, if the lock is held for a long time, spinning threads simply waste processor resources. Therefore, the spin wait time must be bounded: if the spin exceeds the limit (the default is 10 spins, configurable via -XX:PreBlockSpin) without successfully acquiring the lock, the thread should be suspended.

Spin locking is also implemented with CAS. In the AtomicInteger source shown earlier, which calls Unsafe for the increment operation, the do-while loop is the spin: if updating the value fails, the loop retries until the update succeeds.



Spin locking was introduced in JDK 1.4.2, enabled via -XX:+UseSpinning. In JDK 6 it became the default, and adaptive spin locking was introduced.

Adaptive means the spin time (number of spins) is no longer fixed, but is determined by the previous spins on the same lock and the state of the lock's owner. If a spin wait on a given lock object recently succeeded, and the thread holding the lock is running, the virtual machine assumes the spin is likely to succeed again and allows it to last relatively longer. If spinning rarely succeeds for some lock, the virtual machine may skip the spin entirely in future acquisitions and block the thread directly, avoiding wasted processor resources.

There are three other common types of spin locks: TicketLock, CLHLock, and MCSLock.

3. Unlocked vs. biased vs. lightweight vs. heavyweight locks

These four kinds of locks refer to the states of one lock, specifically synchronized. There is a bit of background to cover before we introduce the four lock states.

First of all, why is synchronized able to achieve thread synchronization at all?

Before we can answer this question, we need to understand two important concepts: “Java object headers” and “Monitor.”

Java object header

Synchronized is a pessimistic lock that requires a lock on a synchronized resource before operating on it. This lock is stored in the Java object header.

Let's take the Hotspot virtual machine as an example. Hotspot's object header contains two parts of data: the Mark Word and the Klass Pointer (type pointer).

Mark Word: by default stores the object's HashCode, generational age, and lock flag bits. This information is unrelated to the object's own definition, so the Mark Word is designed as a non-fixed data structure that stores as much data as possible in a very small space. It reuses its storage space according to the object's state, which means the data stored in the Mark Word changes at runtime as the lock flag bits change.

Klass Pointer: a pointer to the object's class metadata, which the virtual machine uses to determine which class the object is an instance of.

Monitor

Monitor can be understood as a synchronization tool or a synchronization mechanism and is often described as an object. Each Java object has an invisible lock, called an internal lock or Monitor lock.

Monitor is a thread-private data structure, and each thread has a list of available Monitor Records, as well as a global list of available records. Each locked object is associated with a Monitor, and an Owner field in the monitor stores the unique identity of the thread that owns the lock, indicating that the lock is occupied by the thread.

Back to synchronized, synchronized implements thread synchronization through Monitor, which relies on the underlying operating system’s Mutex Lock for thread synchronization.

As we mentioned for spin locks, "blocking or waking a Java thread requires the operating system to switch CPU state, and that state transition takes processor time; if the synchronized code block is very simple, the state transition can take longer than executing the user code." This is how synchronized originally implemented synchronization, and why synchronized was inefficient prior to JDK 6. This type of lock, which relies on the operating system's Mutex Lock, is called a "heavyweight lock." In JDK 6, "biased locks" and "lightweight locks" were introduced to reduce the performance cost of acquiring and releasing locks.

There are four lock states, from lowest to highest: no lock, biased lock, lightweight lock, and heavyweight lock. The lock status can only be upgraded, not degraded.

Through the above introduction, we have an understanding of the locking mechanism and related knowledge of synchronized. Then we give the Mark Word content corresponding to the four lock states below, and then explain the ideas and characteristics of the four lock states respectively:

Lock state       | Stored content                                                                      | Lock flag
Unlocked         | object hashCode, object generational age, biased-lock bit (0)                       | 01
Biased lock      | biased thread ID, biased timestamp (epoch), object generational age, biased-lock bit (1) | 01
Lightweight lock | pointer to the lock record in the thread's stack                                    | 00
Heavyweight lock | pointer to the mutex (heavyweight lock)                                             | 10

Unlocked

Unlocked means no resource is locked. All threads can access and modify the same resource, but only one thread can modify it successfully at a time.

The characteristic of lock-free operation is that the modification happens inside a loop: the thread keeps trying to modify the shared resource. If there is no conflict, the modification succeeds and the loop exits; otherwise the loop continues. If multiple threads modify the same value, one succeeds and the others retry until they succeed. The CAS principle and applications described above are an implementation of lock-free operation. Lock-free cannot fully replace locking, but its performance is very high in some scenarios.

Biased locking

Biased locking means that if a piece of synchronized code is always accessed by the same thread, that thread automatically acquires the lock, reducing the cost of acquisition.

In most cases, a lock is always acquired repeatedly by the same thread with no multithreaded contention, which is why biased locking exists. Its goal is to improve performance when only one thread executes the synchronized block.

When a thread accesses a synchronized block and acquires the lock, the biased thread ID is stored in the Mark Word. Thereafter, when the thread enters and exits the synchronized block, it no longer locks and unlocks with CAS, but simply checks whether the Mark Word stores a bias pointing to the current thread. Biased locks are introduced to minimize the unnecessary lightweight-lock execution path in the absence of multithreaded contention, because acquiring and releasing a lightweight lock requires multiple CAS atomic instructions, while a biased lock needs only one CAS instruction, when swapping in the thread ID.

A thread holding a biased lock releases it only when another thread attempts to compete for it; the thread never releases the biased lock proactively. Revoking a biased lock requires waiting for a global safepoint (a point at which no bytecode is executing): the thread holding the biased lock is first suspended to check whether the lock object is still locked. After revocation, the lock reverts to the unlocked (flag bit "01") or lightweight-lock (flag bit "00") state.

Biased locking is enabled by default in JDK 6 and later JVMs. It can be turned off with the JVM argument -XX:-UseBiasedLocking, after which the program enters the lightweight-lock state by default.

Lightweight lock

A lightweight lock means that when a biased lock is accessed by another thread, the biased lock is upgraded to a lightweight lock; the other threads then try to acquire the lock by spinning rather than blocking, which improves performance.

When the code enters the synchronized block and the lock state of the synchronization object is unlocked (lock flag "01", biased-lock bit "0"), the virtual machine first creates a space called the Lock Record in the current thread's stack frame, used to hold a copy of the lock object's current Mark Word, and then copies the Mark Word from the object header into the lock record.

After the copy is successful, the VM attempts to update the Mark Word of the object to a pointer to the Lock Record using the CAS operation, and sets the owner pointer in the Lock Record to the Mark Word of the object.

If the update succeeds, the thread owns the lock on the object, and the object’s Mark Word lock bit is set to 00, indicating that the object is in a lightweight locked state.

If the lightweight lock fails to update, the virtual machine first checks whether the object’s Mark Word refers to the current thread’s stack frame. If it does, the current thread already owns the lock and can proceed directly to the synchronization block. Otherwise, multiple threads compete for the lock.

If there is currently only one waiting thread, it waits by spinning. But when the spin exceeds a certain count, or one thread holds the lock, another is spinning, and a third arrives to compete, the lightweight lock is upgraded to a heavyweight lock.

Heavyweight lock

When the lock is upgraded to a heavyweight lock, the lock flag changes to "10", the Mark Word stores a pointer to the heavyweight lock (mutex), and all threads waiting for the lock enter the blocked state.

The upgrade process of the lock status is as follows:



To sum up: a biased lock handles locking by comparing the Mark Word, avoiding even the CAS operation; a lightweight lock uses CAS and spinning, avoiding the performance hit of blocking and waking threads; a heavyweight lock blocks all threads except the one holding the lock.

4. Fair locks vs. unfair locks

A fair lock means multiple threads acquire the lock in the order they requested it: threads join a queue, and the first thread in the queue obtains the lock. The advantage of a fair lock is that waiting threads do not starve. The disadvantage is lower overall throughput than an unfair lock: every thread except the first in the queue blocks, and waking a blocked thread costs the CPU more than with an unfair lock.

With an unfair lock, threads first try to grab the lock directly and only join the tail of the queue if that fails. If the lock happens to be free at that moment, the thread acquires it without blocking at all, so an unfair lock can let a thread that requested the lock later obtain it first. The advantage of an unfair lock is that it reduces the overhead of waking threads and yields higher overall throughput, because a thread has a chance to get the lock without blocking and the CPU need not wake every waiting thread. The downside is that threads in the queue may starve, or wait a long time before acquiring the lock.

It may be a bit abstract to describe it directly in words, but here the author uses an example seen elsewhere to talk about fair and unfair locks.



As shown in the picture above, suppose there is a well guarded by an administrator who holds a lock. Only the person given the lock may draw water, and the lock is returned to the administrator afterwards. Everyone who comes to fetch water needs the administrator's permission and the lock before drawing. If someone is already fetching water, newcomers must wait in line. The administrator checks whether the next person to fetch water is first in line; if so, they get the lock and fetch water; if not, they must go to the end of the line. This is a fair lock.

With an unfair lock, however, the administrator imposes no such requirement on who draws water. People still generally wait in line, but if someone has just returned the lock to the administrator and, before the administrator calls the next person in line, a newcomer happens to arrive, that newcomer can take the lock from the administrator directly and fetch water without queuing, while those in line simply keep waiting. As shown below:



Let’s take a look at the source code of ReentrantLock to explain fair and unfair locks.



As the source shows, ReentrantLock has an inner class Sync, which extends AQS (AbstractQueuedSynchronizer); most of the locking and unlocking is actually implemented in Sync. Sync has two subclasses, FairSync and NonfairSync. ReentrantLock uses the unfair lock by default; a fair lock can be specified through the constructor.
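For reference, the JDK 8 constructors where this choice is made:

// ReentrantLock.java (JDK 8)
public ReentrantLock() {
    sync = new NonfairSync();      // unfair by default
}

public ReentrantLock(boolean fair) {
    sync = fair ? new FairSync() : new NonfairSync();
}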

Here, let's look at the locking method source of the fair lock and the unfair lock:
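For reference, the two acquisition methods from the JDK 8 source, abridged; the only difference is the hasQueuedPredecessors() check:

// FairSync.tryAcquire (JDK 8, abridged)
protected final boolean tryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // Fair: first ask whether any thread is queued ahead of us
        if (!hasQueuedPredecessors() &&
            compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}

// Sync.nonfairTryAcquire (JDK 8, abridged)
final boolean nonfairTryAcquire(int acquires) {
    final Thread current = Thread.currentThread();
    int c = getState();
    if (c == 0) {
        // Unfair: try to grab the lock immediately, ignoring the queue
        if (compareAndSetState(0, acquires)) {
            setExclusiveOwnerThread(current);
            return true;
        }
    }
    else if (current == getExclusiveOwnerThread()) {
        int nextc = c + acquires;
        if (nextc < 0)
            throw new Error("Maximum lock count exceeded");
        setState(nextc);
        return true;
    }
    return false;
}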



Comparing the source above, it is clear that the only difference between the fair lock's and the unfair lock's acquisition is that the fair lock imposes an extra constraint when acquiring the synchronization state: hasQueuedPredecessors().



Looking into hasQueuedPredecessors(), we can see that the method does one main thing: it determines whether the current thread is first in the synchronization queue, returning true if it is and false otherwise.
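For reference, its JDK 8 implementation in AbstractQueuedSynchronizer:

public final boolean hasQueuedPredecessors() {
    Node t = tail; // Read fields in reverse initialization order
    Node h = head;
    Node s;
    // True if the queue is non-empty and its first waiter is not the current thread
    return h != t &&
        ((s = h.next) == null || s.thread != Thread.currentThread());
}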

To sum up, a fair lock achieves fairness by using the synchronization queue to hand out the lock in request order. An unfair lock simply tries to grab the lock without considering the queue, so a later requester may acquire the lock first.

5. Reentrant locks vs. non-reentrant locks

A reentrant lock, also known as a recursive lock, means that if a thread acquires the lock in an outer method, an inner method of the same thread automatically acquires the lock too (provided the lock object is the same object or class); the thread is not blocked just because it already acquired the lock and has not released it. Java's ReentrantLock and synchronized are both reentrant locks. One advantage of reentrant locks is that they avoid deadlock to some extent. Here is some sample code for analysis:

public class Widget {
    public synchronized void doSomething() {
        System.out.println("Method 1 executes...");
        doOthers();
    }

    public synchronized void doOthers() {
        System.out.println("Method 2 executes..."); }}Copy the code

In the code above, both methods of the class are modified by the built-in synchronized lock, and doSomething() calls doOthers(). Because the built-in lock is reentrant, the same thread can directly acquire the current object's lock again when calling doOthers() and enter doOthers() to operate.

If the lock were non-reentrant, the current thread would need to release the lock on the current object, acquired when executing doSomething(), before calling doOthers(); but the lock is held by the current thread itself and cannot be released, so a deadlock occurs.

Why should a reentrant lock be automatically acquired when a nested call is made? Let’s parse the diagram and the source code separately.

Back to the water-fetching example: when multiple people wait in line, the administrator allows the lock to be bound to multiple buckets belonging to the same person. When that person fills the first bucket, the lock stays with them, and the second bucket can be filled directly under the same lock; only after all their buckets are filled is the lock returned to the administrator. The whole water-fetching process completes successfully, and the people waiting behind can then fetch water. This is the reentrant lock.



But if the lock is non-reentrant, the administrator allows the lock to be bound to only one bucket per person. After the first bucket is filled, the lock is not released, so the second bucket cannot be bound to the lock and cannot be filled. The current thread is deadlocked, and no thread in the entire waiting queue can be woken.



As mentioned, ReentrantLock and synchronized are both reentrant locks. Below we compare the source of the reentrant ReentrantLock with that of the non-reentrant NonReentrantLock to analyze why a non-reentrant lock deadlocks when the synchronized resource is acquired repeatedly.

First, both ReentrantLock and NonReentrantLock extend the parent class AQS, which maintains a synchronization status field to count reentries; the initial value of status is 0.

When a thread tries to acquire the lock, a reentrant lock first tries to get and update the status value. If status == 0, no other thread is executing the synchronized code, so status is set to 1 and the current thread proceeds. If status != 0, it checks whether the current thread is the one holding the lock; if so, status + 1 is performed and the current thread can acquire the lock again. A non-reentrant lock simply gets and tries to update the current status value; if status != 0, the acquisition fails and the current thread blocks.

When releasing the lock, a reentrant lock likewise first gets the current status value, provided the current thread is the one holding the lock. If status - 1 == 0, the current thread has completed all of its repeated acquisitions, and only then does it really release the lock. A non-reentrant lock releases simply by setting status to 0 after confirming the current thread holds the lock.
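A minimal sketch of the two strategies just described (ReentrantishLock is a made-up name; real AQS-based locks also handle queuing and blocking, which this sketch glosses over):

import java.util.concurrent.atomic.AtomicInteger;

public class ReentrantishLock {
    private final AtomicInteger status = new AtomicInteger(0); // reentry count
    private volatile Thread owner;                             // thread holding the lock

    public boolean tryLock() {
        Thread current = Thread.currentThread();
        if (status.compareAndSet(0, 1)) { // status == 0: the lock is free
            owner = current;
            return true;
        }
        if (owner == current) {           // reentrant case: same thread acquires again
            status.incrementAndGet();     // status + 1
            return true;
        }
        return false;                     // a non-reentrant lock would fail here
                                          // even for the owning thread
    }

    public void unlock() {
        if (Thread.currentThread() != owner)
            throw new IllegalMonitorStateException();
        int c = status.get() - 1;         // only the owner modifies status here
        if (c == 0)
            owner = null;                 // really released only when the count hits 0
        status.set(c);
    }
}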

6. Exclusive locks vs. shared locks

Exclusive locks and shared locks are likewise broad concepts. We first introduce the concepts, then illustrate them through the source code of ReentrantLock and ReentrantReadWriteLock.

An exclusive lock, also called an X lock, means the lock can be held by only one thread at a time. If thread T holds an exclusive lock on data A, no other thread may hold any kind of lock on A. A thread that acquires an exclusive lock can both read and modify the data. synchronized in the JDK and the implementation classes of Lock in JUC are exclusive locks.

A shared lock means that the lock can be held by multiple threads. If thread T adds A shared lock to data A, other threads can only add A shared lock to data A, not an exclusive lock. The thread that acquires the shared lock can only read the data, not modify it.

Exclusive and shared locks are also implemented through AQS; by implementing different methods, either exclusive or shared behavior is obtained.

ReentrantReadWriteLock



ReentrantReadWriteLock has two locks: ReadLock and WriteLock, i.e., a read lock and a write lock. A closer look reveals that ReadLock and WriteLock are both locks implemented via the internal class Sync. Sync is a subclass of AQS, a pattern also found in CountDownLatch, ReentrantLock, and Semaphore.

In ReentrantReadWriteLock, the lock bodies of the read lock and the write lock are the same Sync, but they lock in different ways: the read lock is a shared lock, while the write lock is an exclusive lock. The read lock's shared nature enables efficient concurrent reads, while read-write, write-read, and write-write are mutually exclusive because the read lock and write lock are separated. So ReentrantReadWriteLock greatly improves concurrency compared with an ordinary mutex.

So where exactly do the read lock and the write lock differ? Before reading the source we need to review something. When we first mentioned AQS, we mentioned its state field (int, 32 bits), which describes how many threads hold the lock.

In an exclusive lock this value is usually 0 or 1 (or the reentry count for a reentrant lock); in a shared lock, state is the number of holds. But ReentrantReadWriteLock has both a read lock and a write lock, so it must record both counts (states) on the single int variable state. It therefore splits state "bitwise" into two parts: the high 16 bits represent the read-lock state (read-lock count) and the low 16 bits the write-lock state (write-lock count). As shown below:
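For reference, this split is visible in the JDK 8 ReentrantReadWriteLock.Sync source:

// ReentrantReadWriteLock.Sync (JDK 8, abridged)
static final int SHARED_SHIFT   = 16;
static final int SHARED_UNIT    = (1 << SHARED_SHIFT);
static final int MAX_COUNT      = (1 << SHARED_SHIFT) - 1;
static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;

/** Returns the number of shared holds (read locks): the high 16 bits */
static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }
/** Returns the number of exclusive holds (write locks): the low 16 bits */
static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }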



With this concept in hand, let's look at the code, starting with the write lock's acquisition source:

protected final boolean tryAcquire(int acquires) {
    Thread current = Thread.currentThread();
    int c = getState();                // total lock count
    int w = exclusiveCount(c);         // number of write locks
    if (c != 0) {
        // (Note: if c != 0 and w == 0 then shared count != 0)
        if (w == 0 || current != getExclusiveOwnerThread())
            // Fail if the write count is 0 (in other words, a read lock exists)
            // or the thread holding the lock is not the current thread
            return false;
        if (w + exclusiveCount(acquires) > MAX_COUNT)
            // Throw an Error if the write count exceeds the maximum (65535, 2^16 - 1)
            throw new Error("Maximum lock count exceeded");
        // Reentrant acquire
        setState(c + acquires);
        return true;
    }
    if (writerShouldBlock() ||
        !compareAndSetState(c, c + acquires))
        // Fail if the current thread should block, or if the CAS to
        // increase the write count fails
        return false;
    // c = 0, w = 0: record the current thread as the owner of the lock
    setExclusiveOwnerThread(current);
    return true;
}
  • The code first obtains the current total lock count c, then derives the write-lock count w via exclusiveCount(c), i.e., c & EXCLUSIVE_MASK: ANDing the high 16 bits with 0 leaves only the low 16 bits, the number of threads holding the write lock.
  • With w in hand, it checks whether any thread already holds the lock. If one does (c != 0), failure is returned if the write count is 0 (in other words, a read lock exists) or the thread holding the lock is not the current thread.
  • If the write count would exceed the maximum (65535, i.e., 2^16 - 1), an Error is thrown.
  • If the write count is 0 (then the read count must also be 0, since the c != 0 case was handled above) and the current thread should block (this is where the fair and unfair implementations come in), or the CAS to increase the write count fails, failure is returned.
  • If c = 0, w = 0 (first acquisition) or c > 0, w > 0 (reentrant acquisition), the current thread is set as the owner of the lock and success is returned!

TryAcquire () adds a read lock determination in addition to the reentrant condition (the current thread is the thread that acquired the write lock). If a read lock exists, the write lock cannot be acquired. The reason is that the write lock must be visible to the read lock.

Therefore, the write lock can be acquired by the current thread only after all other reader threads have released their read locks; and once the write lock is acquired, all subsequent access by other reader and writer threads is blocked. Releasing the write lock works like ReentrantLock's release: each release decrements the write state, and only when the write state reaches 0 is the lock released, after which waiting reader and writer threads can proceed.
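For reference, the JDK 8 write-lock release in ReentrantReadWriteLock.Sync:

protected final boolean tryRelease(int releases) {
    if (!isHeldExclusively())
        throw new IllegalMonitorStateException();
    int nextc = getState() - releases;
    // The write lock is fully released only when the write count drops to 0
    boolean free = exclusiveCount(nextc) == 0;
    if (free)
        setExclusiveOwnerThread(null);
    setState(nextc);
    return free;
}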

Here is the read lock's acquisition code:

protected final int tryAcquireShared(int unused) {
    Thread current = Thread.currentThread();
    int c = getState();
    if (exclusiveCount(c) != 0 &&
        getExclusiveOwnerThread() != current)
        // Fail if another thread holds the write lock
        return -1;
    int r = sharedCount(c); // number of read locks
    if (!readerShouldBlock() &&
        r < MAX_COUNT &&
        compareAndSetState(c, c + SHARED_UNIT)) {
        if (r == 0) {
            firstReader = current;
            firstReaderHoldCount = 1;
        } else if (firstReader == current) {
            firstReaderHoldCount++;
        } else {
            HoldCounter rh = cachedHoldCounter;
            if (rh == null || rh.tid != getThreadId(current))
                cachedHoldCounter = rh = readHolds.get();
            else if (rh.count == 0)
                readHolds.set(rh);
            rh.count++;
        }
        return 1;
    }
    return fullTryAcquireShared(current);
}

In tryAcquireShared(int unused), if another thread has acquired the write lock, the current thread fails to get the read lock and enters a waiting state. If the current thread itself holds the write lock, or no write lock is held, the current thread increments the read state (thread-safely, guaranteed by CAS) and successfully acquires the read lock. Each release of the read lock (also thread-safe; multiple reader threads may release simultaneously) decrements the read state by "1 << 16". In this way, read-read access is shared, while read-write, write-read, and write-write remain mutually exclusive.

Fair and unfair locks in ReentrantLock



We can see that ReentrantLock comes in fair and unfair variants, but both add an exclusive lock. According to the source, when a thread calls lock() to acquire the lock, if the synchronized resource is not locked by another thread, the current thread updates state with CAS and successfully claims the resource. If the resource is occupied, and not by the current thread, acquisition fails. So we can be certain that the lock added by ReentrantLock is exclusive for both read and write operations.
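For reference, the JDK 8 NonfairSync.lock() entry point shows exactly this CAS on state:

// ReentrantLock.NonfairSync (JDK 8)
final void lock() {
    // Try to grab the lock in one CAS: state 0 -> 1
    if (compareAndSetState(0, 1))
        setExclusiveOwnerThread(Thread.currentThread());
    else
        acquire(1); // fall back to the AQS acquisition path
}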


Reposted from: https://segmentfault.com/a/1190000017037430