Original: https://mp.weixin.qq.com/s/h3VIUyH9L0v14MrQJiiDbw  Author: Xia Jie (alias Chu Zhao), currently on the BUC & ACL & SSO team of Alibaba's Enterprise Intelligence Division, which provides account permission control and application data security access governance for the Alibaba economy. Based on existing technology and domain models, the team is committed to building MOZI, an infrastructure SaaS product for application information architecture in the To B and To G fields.


Introduction

There is a law in the computer industry called "Moore's Law". Under it, computer performance has advanced by leaps and bounds while prices keep falling: CPUs went from single-core to multi-core, and cache performance has also improved greatly. With the arrival of multi-core CPUs in particular, a computer can handle multiple tasks at the same time. On top of the efficiency gains brought by hardware, multi-threaded programming at the software level became an inevitable trend, but multithreading introduces data-safety problems. Where there is a spear, there must be a shield, so the "lock" was invented to solve thread-safety problems. In this article we summarize several classic JVM-level locks in Java.



synchronized

The synchronized keyword is a classic lock, and the one we use most often. Before JDK 1.6, synchronized was a heavyweight lock, but it has been continually optimized in later JDK versions; it is now much less heavy, and in some scenarios it even outperforms a lightweight lock. Inside a synchronized method or block, only one thread at a time is allowed to enter the protected code, which prevents multiple threads from modifying the same data simultaneously.
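
For illustration, here is a minimal sketch of guarding a shared counter with synchronized (the class and field names are made up for this example):

public class SynchronizedCounter {
    private int count = 0;

    // Only one thread at a time may execute this method on a given
    // instance, so the read-modify-write of count stays consistent.
    public synchronized void increment() {
        count++;
    }

    // Equivalent form using a synchronized block on "this"
    public int get() {
        synchronized (this) {
            return count;
        }
    }
}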

Synchronized locks have the following characteristics:




There is a lock upgrade process

Before JDK 1.6, synchronized's underlying implementation was heavyweight, which is why it was commonly called a "heavyweight lock". Since JDK 1.6, various optimizations have made synchronized much less heavy, based on the lock upgrade process. Let's look at how synchronized works since 1.6. To talk about how synchronized locking works, we first have to look at the memory layout of a Java object, which is as follows:






As shown in the figure above, after an object is created, its storage layout in memory in the HotSpot JVM can be divided into three parts:


The object header area stores two parts of the information:



1. Runtime data of the object itself (MarkWord)
Stores the hashCode, GC generational age, lock type flag, biased-lock thread ID, CAS pointer to a thread's LockRecord, and so on. The synchronized lock mechanism is closely tied to this part (the MarkWord). The lowest three bits of the MarkWord represent the lock state: one bit is the biased-lock flag and the other two are the lock flag bits.
2. Class pointer (Klass pointer)
The pointer from the object to its class metadata, which the JVM uses to determine which class the object is an instance of.


Instance data area



This is where the object's actual information is stored, such as the contents of all of its fields.


Alignment padding area



The HotSpot JVM requires that an object's starting address be a multiple of 8 bytes. In other words, a 64-bit OS reads data in 8-byte units, so this lets HotSpot read objects efficiently. If the actual size of an object is not an integer multiple of 8 bytes, it is "padded" up to one, which is why the size of the alignment padding area is not fixed.
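
To see this layout (MarkWord, class pointer, padding) for yourself, one option is the OpenJDK JOL tool. A minimal sketch, assuming the org.openjdk.jol:jol-core dependency is on the classpath:

import org.openjdk.jol.info.ClassLayout;

public class LayoutDemo {
    public static void main(String[] args) {
        Object o = new Object();
        // Prints the object header (mark word + class pointer),
        // instance fields, and any alignment padding.
        System.out.println(ClassLayout.parseInstance(o).toPrintable());

        synchronized (o) {
            // While locked, the mark word changes to reflect the lock state.
            System.out.println(ClassLayout.parseInstance(o).toPrintable());
        }
    }
}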
When a thread reaches a synchronized block and attempts to acquire the lock, the synchronized lock upgrades as follows:


As shown in the figure above, the synchronized lock upgrades in the order biased lock -> lightweight lock -> heavyweight lock, and each upgrade is triggered as follows:




Biased locking

In JDK 1.8 the default is actually a lightweight lock, but if -XX:BiasedLockingStartupDelay=0 is set, then as soon as synchronized is applied to an object it is biased-locked immediately. In the biased-lock state, the MarkWord records the ID of the current (biased) thread.


Upgrade to lightweight locks

When a second thread competes for the biased lock, it first checks whether the thread ID saved in the MarkWord equals its own. If not, the biased lock is immediately revoked and upgraded to a lightweight lock. Each thread generates a LockRecord (LR) on its own thread stack and then tries, via a CAS (spin) operation, to set the MarkWord in the lock object's header to a pointer to its own LR; whichever thread succeeds acquires the lock. The CAS operations performed for synchronized are implemented by native C++ code in HotSpot's bytecodeInterpreter.cpp.


Upgrade to heavyweight locks

If lock contention intensifies further (for example, the number of spins or the number of spinning threads exceeds a threshold; after JDK 1.6 the JVM controls this itself), the lock is upgraded to a heavyweight lock. At this point the JVM requests resources from the operating system, the thread is suspended and placed into the kernel-mode waiting queue, waits to be scheduled by the OS, and is then mapped back to user mode. With heavyweight locks, these switches between user mode and kernel mode are time-consuming, which is one reason the lock is called "heavy".

Reentrancy

Synchronized has a built-in locking mechanism that is reentrant. Therefore, while a thread holds an object's lock, it can acquire that lock again, for example by calling another synchronized method on the same object from inside a synchronized method. In Java, a thread acquires an object lock on a per-thread basis, not per invocation. The lock holder and a counter are recorded in the MarkWord of the lock object's header. When a thread acquires the lock, the JVM records the holding thread and sets the counter to 1; other threads that request the lock must wait. If the holding thread requests the lock again, it succeeds and the counter is incremented. When a thread exits a synchronized method/block, the counter is decremented, and when it reaches 0 the lock is released.
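
A minimal sketch of this reentrancy (the method names are illustrative): the thread that already holds the object's monitor in outer() can enter inner() on the same object without blocking itself.

public class ReentrantDemo {
    public synchronized void outer() {
        System.out.println("in outer, hold count = 1");
        inner(); // same thread re-acquires the same monitor, count = 2
    }

    public synchronized void inner() {
        System.out.println("in inner, hold count = 2");
    } // exiting inner: count back to 1; exiting outer: lock released
}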


Pessimistic locks (mutex, exclusive locks)

Synchronized is a pessimistic lock (exclusive lock). Once a thread acquires the lock, all other threads that need it must wait until the holding thread releases the lock before they can compete for it again.




ReentrantLock

ReentrantLock is, literally, a re-entrant lock. It is similar to synchronized but implemented very differently: it is built on the classic AQS (AbstractQueuedSynchronizer), which in turn is based on volatile and CAS. AQS maintains a volatile int variable named state that records the reentrant hold count of the lock; locking and unlocking both revolve around this variable. ReentrantLock also offers some features that synchronized does not, which makes it more flexible than synchronized.



AQS model is shown as follows:



ReentrantLock has the following features:



1. Reentrant
ReentrantLock and synchronized are both reentrant locks, but their implementation principles differ slightly. ReentrantLock uses the state field of AQS to determine whether the resource is locked. Each time the same thread reenters the lock, state is incremented by 1; each unlock decrements it by 1 (the unlocking thread must be the current exclusive owner, otherwise an exception is thrown). When state reaches 0, the lock is fully released.


2. Manual locking and unlocking
synchronized locks and unlocks automatically, whereas ReentrantLock requires calling the lock() and unlock() methods, typically combined with a try/finally block, to lock and unlock manually.


3. Support for lock timeouts
The synchronized keyword cannot set a timeout on lock acquisition: if the thread holding the lock deadlocks, other threads block forever. ReentrantLock provides the tryLock method, which lets a thread wait for the lock only up to a specified timeout; if the lock is not acquired in time, the thread can simply give up, which helps avoid deadlock.


4. Support for fair and unfair locks
The synchronized keyword is an unfair lock: whichever thread grabs the lock first runs first. ReentrantLock's constructor accepts true/false to choose a fair or an unfair lock. If true, threads acquire the lock on a first-come-first-served basis: each waiting thread is wrapped in a Node and queued at the tail of the doubly linked list, waiting for the Nodes ahead of it to release the lock.


5. Interruptible lock acquisition
The lockInterruptibly() method in ReentrantLock allows a thread that is blocked waiting for the lock to respond to interruption. For example, if thread t1 acquires a ReentrantLock via lockInterruptibly() and runs a long task, another thread waiting for the same lock can be interrupted with interrupt() and immediately abandon the wait. By contrast, a thread blocked on an ordinary ReentrantLock.lock() or on synchronized does not respond to interrupt() until it acquires the lock. The features above are pulled together in the sketch below.
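
The following sketch illustrates these features (the "work" being protected is hypothetical): manual lock/unlock with try/finally, a tryLock timeout, a fair lock, and lockInterruptibly.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    // true = fair lock: waiting threads acquire in FIFO order
    private final ReentrantLock lock = new ReentrantLock(true);

    public void doWork() {
        lock.lock();             // manual locking
        try {
            // ... critical section ...
        } finally {
            lock.unlock();       // always unlock in finally
        }
    }

    public boolean tryWork() throws InterruptedException {
        // Give up after 2 seconds instead of blocking forever
        if (lock.tryLock(2, TimeUnit.SECONDS)) {
            try {
                // ... critical section ...
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;            // could not get the lock in time
    }

    public void interruptibleWork() throws InterruptedException {
        lock.lockInterruptibly(); // responds to interrupt() while waiting
        try {
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }
}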


ReentrantReadWriteLock

ReentrantReadWriteLock consists of two locks: a WriteLock and a ReadLock. The rules of a read/write lock are: reads are not mutually exclusive with reads, reads are mutually exclusive with writes, and writes are mutually exclusive with writes. In many real scenarios, reads are far more frequent than writes. If an ordinary exclusive lock is used for concurrency control, reads are also mutually exclusive with each other, which is inefficient; read/write locks exist to optimize such scenarios. In general, the inefficiency of an exclusive lock comes from thread context switching caused by intense competition for the critical section under high concurrency. When concurrency is not very high, a read/write lock may actually be slower than an exclusive lock because it must additionally maintain the read-lock state, so choose based on the actual situation.
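
As a sketch of the read-not-exclusive / write-exclusive rules above, a simple cache might look like this (the cache itself is hypothetical):

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class SimpleCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();

    public V get(K key) {
        rwLock.readLock().lock();   // many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void put(K key, V value) {
        rwLock.writeLock().lock();  // excludes all readers and writers
        try {
            map.put(key, value);
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}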

ReentrantReadWriteLock is also implemented on top of AQS. The difference from ReentrantLock is that its lock has both shared and exclusive properties. Locking and unlocking in the read/write lock are likewise based on Sync (which extends AQS) and are implemented with the state variable in AQS and the waitStatus variable in Node. The main difference between a read/write lock and an ordinary mutex is that the read-lock state and the write-lock state must be recorded separately, and the two kinds of acquisition must be handled separately in the wait queue. ReentrantReadWriteLock splits AQS's int state into the high 16 bits and the low 16 bits, recording the read-lock and write-lock states respectively, as shown in the following figure:


WriteLock is a pessimistic lock (exclusive lock, mutex)

Computing state & ((1 << 16) - 1) clears the high 16 bits of state, so the low 16 bits of state record the reentrant count of the write lock.
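
For reference, the JDK's Sync class implements this split with constants and helpers along these lines:

static final int SHARED_SHIFT   = 16;
static final int SHARED_UNIT    = (1 << SHARED_SHIFT);     // +1 read hold
static final int MAX_COUNT      = (1 << SHARED_SHIFT) - 1; // 65535
static final int EXCLUSIVE_MASK = (1 << SHARED_SHIFT) - 1;

// High 16 bits of state: number of read-lock holds
static int sharedCount(int c)    { return c >>> SHARED_SHIFT; }
// Low 16 bits of state: write-lock reentrant count
static int exclusiveCount(int c) { return c & EXCLUSIVE_MASK; }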


Get write lock source code:

/**
 * Acquires the write lock.
 * If neither the read lock nor the write lock is held by another thread,
 * the current thread performs a CAS to update the state and returns
 * immediately, setting the write lock hold count to one.
 * If the current thread already holds the write lock, the hold count is
 * incremented by one and the method returns immediately.
 * If the lock is held by another thread, the current thread is disabled
 * for thread scheduling purposes and lies dormant until the write lock
 * has been acquired, at which time the write lock hold count is set to one.
 */
public void lock() {
    sync.acquire(1);
}

/**
 * Acquires in exclusive mode, ignoring interrupts.
 * If tryAcquire updates the state successfully, the lock is acquired;
 * otherwise the thread enters the sync queue, possibly repeatedly blocking
 * and unblocking, and keeps retrying tryAcquire (a CAS on state) until it
 * succeeds. tryAcquire has its own implementations in NonfairSync and FairSync.
 */
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}

protected final boolean tryAcquire(int acquires) {
    /*
     * Walkthrough:
     * 1. If the read count is nonzero, or the write count is nonzero and
     *    the owner is a different thread, fail.
     * 2. If the count would saturate, fail.
     * 3. Otherwise, this thread is eligible for the lock if it is a
     *    reentrant acquire or the queue policy allows it. If so,
     *    update state and set the owner.
     */
    Thread current = Thread.currentThread();
    int c = getState();
    int w = exclusiveCount(c);
    // c != 0 means another thread has acquired the read lock or the write lock
    if (c != 0) {
        // If w == 0, some thread holds the read lock: fail.
        // If w != 0 but the owner is not the current thread, fail,
        // because the write lock is exclusive.
        // (Note: if c != 0 and w == 0 then shared count != 0)
        if (w == 0 || current != getExclusiveOwnerThread())
            return false;
        // If the write-lock reentrant count would exceed the maximum
        // (65535), throw an error.
        if (w + exclusiveCount(acquires) > MAX_COUNT)
            throw new Error("Maximum lock count exceeded");
        // Reentrant acquire: bump the write-lock hold count.
        setState(c + acquires);
        return true;
    }
    // state == 0: neither the read lock nor the write lock is held.
    // If no blocking is required and the CAS succeeds, the current
    // thread acquires the lock and becomes the exclusive owner.
    if (writerShouldBlock() ||
        !compareAndSetState(c, c + acquires))
        return false;
    setExclusiveOwnerThread(current);
    return true;
}

Release write lock source code:

/*
 * Note that tryRelease and tryAcquire can be called by Conditions.
 * So it is possible that their arguments contain both read and write
 * holds that are all released during a condition wait and
 * re-established in tryAcquire.
 */
protected final boolean tryRelease(int releases) {
    // If the lock holder is not the current thread, throw an exception.
    if (!isHeldExclusively())
        throw new IllegalMonitorStateException();
    // Decrement the write-lock reentrant count by releases.
    int nextc = getState() - releases;
    boolean free = exclusiveCount(nextc) == 0;
    if (free)
        // The write lock is fully released: clear the owner (helps GC).
        setExclusiveOwnerThread(null);
    // Update the write-lock reentrant count.
    setState(nextc);
    return free;
}



ReadLock is a shared lock (optimistic lock)

Computing state >>> 16 (an unsigned right shift by 16 bits, padding with zeros) discards the low 16 bits, so the high 16 bits of state record the reentrant count of the read lock.




Acquiring the read lock is slightly more complicated than acquiring the write lock. First, if the write lock is held and the holder is not the current thread, the acquisition fails immediately. Otherwise, if the reader thread does not need to block, the read count is below the maximum, and the CAS on state succeeds, the lock is acquired: if there were no read locks yet, firstReader and firstReaderHoldCount are set for the first reader thread; if the current thread is the first reader thread, firstReaderHoldCount is incremented; otherwise the HoldCounter corresponding to the current thread is updated, recording this thread's reentrant count in readHolds, a ThreadLocal copy kept per thread. This exists to implement the getReadHoldCount() method added in JDK 1.6, which returns how many times the current thread has reentered the shared lock (state only records the total across all threads). It makes the code more complex, but the principle is simple: if there is only one reader thread, there is no need for a ThreadLocal and the reentrant count is stored directly in the firstReaderHoldCount field; once a second reader thread arrives, the ThreadLocal readHolds is used, and each thread stores its own reentrant count in its own copy.
Get read lock source code:

/**
 * Acquires the read lock.
 * If the write lock is not held by another thread, perform a CAS to
 * update the state and return immediately.
 * If the write lock is held by another thread, the current thread is
 * disabled for thread scheduling purposes and lies dormant until the
 * read lock has been acquired.
 */
public void lock() {
    sync.acquireShared(1);
}

/**
 * Acquires in shared mode, ignoring interrupts.
 * If tryAcquireShared updates the state successfully, the lock is
 * acquired; otherwise the thread is queued, possibly repeatedly blocking
 * and unblocking, and keeps retrying tryAcquireShared until it succeeds.
 * tryAcquireShared has its own implementations in NonfairSync and FairSync.
 */
public final void acquireShared(int arg) {
    if (tryAcquireShared(arg) < 0)
        doAcquireShared(arg);
}

protected final int tryAcquireShared(int unused) {
    /*
     * Walkthrough:
     * 1. If the write lock is held by another thread, fail.
     * 2. Otherwise, check readerShouldBlock; if blocking is not needed,
     *    CAS the state and update the per-thread hold count. Note that
     *    this step does not check for reentrant acquires (the read lock
     *    is shared and inherently reentrant); that is postponed to the
     *    full version to avoid having to check hold counts in the more
     *    typical non-reentrant case.
     * 3. If step 2 fails, either because the thread is apparently not
     *    eligible, or the CAS fails, or the count saturated, fall back
     *    to the version with the full retry loop.
     */
    Thread current = Thread.currentThread();
    int c = getState();
    // If another thread has acquired the write lock, return failure.
    if (exclusiveCount(c) != 0 &&
        getExclusiveOwnerThread() != current)
        return -1;
    int r = sharedCount(c);
    // If the reader should not block, the reentrant count is below the
    // maximum, and the CAS incrementing the read count succeeds,
    // update this thread's hold count and return success.
    if (!readerShouldBlock() &&
        r < MAX_COUNT &&
        compareAndSetState(c, c + SHARED_UNIT)) {
        if (r == 0) {
            // First reader: record it and set firstReaderHoldCount to 1.
            firstReader = current;
            firstReaderHoldCount = 1;
        } else if (firstReader == current) {
            // The first reader thread is re-entering the lock.
            firstReaderHoldCount++;
        } else {
            // At least two reader threads share the lock: track this
            // thread's reentrant count in its HoldCounter.
            HoldCounter rh = cachedHoldCounter;
            if (rh == null || rh.tid != getThreadId(current))
                cachedHoldCounter = rh = readHolds.get();
            else if (rh.count == 0)
                readHolds.set(rh);
            rh.count++;
        }
        return 1;
    }
    // None of the fast-path conditions held: retry in a full loop.
    return fullTryAcquireShared(current);
}

/**
 * Full version of acquire for reads, that handles CAS misses and
 * reentrant reads not dealt with in tryAcquireShared.
 */
final int fullTryAcquireShared(Thread current) {
    /*
     * This code is in part redundant with that in tryAcquireShared but
     * is simpler overall by not complicating tryAcquireShared with
     * interactions between retries and lazily reading hold counts.
     */
    HoldCounter rh = null;
    for (;;) {
        int c = getState();
        // If a thread has acquired the write lock...
        if (exclusiveCount(c) != 0) {
            // ...and it is not the current thread, return failure.
            if (getExclusiveOwnerThread() != current)
                return -1;
            // else we hold the exclusive lock; blocking here
            // would cause deadlock.
        } else if (readerShouldBlock()) {
            // No thread holds the write lock, but the reader should block.
            // Make sure we're not acquiring the read lock reentrantly
            // before giving up.
            if (firstReader == current) {
                // assert firstReaderHoldCount > 0;
            } else {
                if (rh == null) {
                    rh = cachedHoldCounter;
                    if (rh == null || rh.tid != getThreadId(current)) {
                        rh = readHolds.get();
                        if (rh.count == 0)
                            readHolds.remove();
                    }
                }
                if (rh.count == 0)
                    return -1;
            }
        }
        // If the read count has reached the maximum, throw an error.
        if (sharedCount(c) == MAX_COUNT)
            throw new Error("Maximum lock count exceeded");
        // If the CAS incrementing the read count succeeds, update the
        // current thread's hold count and return success.
        if (compareAndSetState(c, c + SHARED_UNIT)) {
            if (sharedCount(c) == 0) {
                firstReader = current;
                firstReaderHoldCount = 1;
            } else if (firstReader == current) {
                firstReaderHoldCount++;
            } else {
                if (rh == null)
                    rh = cachedHoldCounter;
                if (rh == null || rh.tid != getThreadId(current))
                    rh = readHolds.get();
                else if (rh.count == 0)
                    readHolds.set(rh);
                rh.count++;
                cachedHoldCounter = rh; // cache for release
            }
            return 1;
        }
    }
}

Release read lock source code:

/**
  * Releases in shared mode.  Implemented by unblocking one or more
  * threads if {@link #tryReleaseShared} returns true.
  *
  * @param arg the release argument.  This value is conveyed to
  *        {@link #tryReleaseShared} but is otherwise uninterpreted
  *        and can represent anything you like.
  * @return the value returned from {@link #tryReleaseShared}
  */
public final boolean releaseShared(int arg) {
    if (tryReleaseShared(arg)) { // try to decrement the shared hold count
        doReleaseShared();       // actually release: wake up successors
        return true;
    }
    return false;
}

/**
 * Called when a reader thread releases the read lock.
 * If the current thread is the first reader: when its hold count is 1,
 * clear firstReader, otherwise decrement firstReaderHoldCount.
 * If the current thread is not the first reader: fetch the cached
 * HoldCounter; if it is null or belongs to a different thread, fetch the
 * current thread's own counter from readHolds; when the count is 1 or
 * less, remove the counter, and throw if it would drop below 0; then
 * decrement the count.
 * In either case, finish with an endless loop that CASes state to
 * decrement the shared count.
 */
protected final boolean tryReleaseShared(int unused) {
    Thread current = Thread.currentThread();
    if (firstReader == current) {
        // assert firstReaderHoldCount > 0;
        if (firstReaderHoldCount == 1)
            firstReader = null;
        else
            firstReaderHoldCount--;
    } else {
        // The current thread is not the first reader: use its counter.
        HoldCounter rh = cachedHoldCounter;
        if (rh == null || rh.tid != getThreadId(current))
            rh = readHolds.get();
        int count = rh.count;
        if (count <= 1) {
            readHolds.remove();
            if (count <= 0)
                throw unmatchedUnlockException();
        }
        --rh.count;
    }
    for (;;) {
        int c = getState();
        int nextc = c - SHARED_UNIT;
        if (compareAndSetState(c, nextc))
            // Releasing the read lock has no effect on readers,
            // but it may allow waiting writers to proceed if
            // both read and write locks are now free.
            return nextc == 0;
    }
}

/**
 * Release action for shared mode -- signals successor and ensures
 * propagation. (Note: For exclusive mode, release just amounts
 * to calling unparkSuccessor of head if it needs signal.)
 */
private void doReleaseShared() {
    /*
     * Ensure that a release propagates, even if there are other
     * in-progress acquires/releases. This proceeds in the usual
     * way of trying to unparkSuccessor of head if it needs
     * signal. But if it does not, status is set to PROPAGATE to
     * ensure that upon release, propagation continues.
     * Additionally, we must loop in case a new node is added
     * while we are doing this. Also, unlike other uses of
     * unparkSuccessor, we need to know if CAS to reset status
     * fails, if so rechecking.
     */
    for (;;) {
        Node h = head;
        if (h != null && h != tail) {
            int ws = h.waitStatus;
            if (ws == Node.SIGNAL) {
                if (!compareAndSetWaitStatus(h, Node.SIGNAL, 0))
                    continue;            // loop to recheck cases
                unparkSuccessor(h);
            } else if (ws == 0 &&
                       !compareAndSetWaitStatus(h, 0, Node.PROPAGATE))
                continue;                // loop on failed CAS
        }
        if (h == head)                   // loop if head changed
            break;
    }
}

It can be seen from the analysis that:

If a thread holds a read lock, it cannot then acquire the write lock (when acquiring the write lock, if any read lock is found to be held, the acquisition fails immediately, regardless of whether the read lock is held by the current thread).


If a thread holds the write lock, it can still acquire the read lock (acquisition of the read lock fails only when the write lock is held by some other thread).
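
The second rule is what makes lock downgrading possible: hold the write lock, acquire the read lock, then release the write lock. A minimal sketch (the data field and method are hypothetical):

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class DowngradeDemo {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private Object data;

    public Object writeThenRead(Object newValue) {
        rwLock.writeLock().lock();
        try {
            data = newValue;             // write while holding the write lock
            rwLock.readLock().lock();    // allowed: the write-lock holder may take the read lock
        } finally {
            rwLock.writeLock().unlock(); // downgrade: keep only the read lock
        }
        try {
            return data;                 // read under the read lock; other readers may join
        } finally {
            rwLock.readLock().unlock();
        }
    }
}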


LongAdder

Under high concurrency, directly doing i++ on an Integer variable gives no atomicity guarantee, so there are thread-safety issues. For this we use AtomicInteger from JUC, an Integer class that provides atomic operations and is thread safe internally via CAS. However, when a large number of threads access it simultaneously, many of their CAS operations fail and spin uselessly, wasting CPU and lowering efficiency. Doug Lea was not happy with this either, so in JDK 1.8 he optimized the CAS approach and provided LongAdder, which is based on the idea of segmented (striped) locking applied to CAS.



When a thread reads or writes a variable of type LongAdder, the process is as follows:




LongAdder is also implemented on top of the CAS operations and volatile support provided by Unsafe. Striped64, the parent class of LongAdder, maintains a base variable and a cell array. When multiple threads update the value, a CAS is first attempted on the base variable; when contention is detected (that is, casBase fails to update base), the cell array is used instead: each thread maps to a cell and performs its CAS on that cell. In this way, the update pressure on a single value is spread across multiple values, reducing the "heat" of any single value and the amount of futile spinning by large numbers of threads, which improves concurrency and disperses contention. This segmented locking requires maintaining the extra memory of the cells array, but in high-concurrency scenarios that cost is almost negligible. Segmented locking is an excellent optimization idea; ConcurrentHashMap in JUC also relies on it to keep reads and writes thread safe.
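
A minimal sketch comparing the two counters under concurrent increments (thread and iteration counts are arbitrary; AtomicLong is used here as the long-valued counterpart of AtomicInteger):

import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.atomic.LongAdder;
import java.util.stream.IntStream;

public class CounterDemo {
    public static void main(String[] args) {
        AtomicLong atomic = new AtomicLong(); // single value, all threads CAS the same slot
        LongAdder adder = new LongAdder();    // base + cells, contention is spread out

        IntStream.range(0, 8).parallel().forEach(i -> {
            for (int j = 0; j < 1_000_000; j++) {
                atomic.incrementAndGet();
                adder.increment();
            }
        });

        // sum() folds base and all cells together; it is not an atomic snapshot
        System.out.println(atomic.get() + " " + adder.sum());
    }
}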

Finally

I have compiled a nearly 500-page PDF of common 2019 Java interview questions. Follow my official account, Programmer Chasing Wind, to receive these materials!


If you like this article, remember to follow me. Thank you for your support!