This is my 11th day of the Genwen Challenge

The lock

1. CAS spin

1. Introduction:

CAS stands for Compare-And-Swap.

CAS is a lock-free operation that implements optimistic locking. In Java there are optimistic and pessimistic locks. Pessimistic locking means a resource is locked, and the next thread can access it only after the previous thread releases the lock. Optimistic locking takes a more relaxed approach and processes the resource without locking it at all, for example by attaching a version number to each record; under low contention it performs significantly better than pessimistic locking.

2. Definition:

The CAS operation takes three operands: the memory location (V), the expected old value (A), and the new value (B). If the value at the memory location equals A, the value in memory is updated to B; otherwise nothing is written. CAS retries in a loop: if thread A reads the value but thread B changes it before A's CAS executes, A's CAS fails and A must spin and try again on the next iteration.

3. Principle:

In JDK 1.8, the Unsafe class spins like this inside `getAndAddInt` (the original obfuscated variable names are kept):

```java
// inside sun.misc.Unsafe#getAndAddInt (JDK 1.8)
int var5;
do {
    var5 = this.getIntVolatile(var1, var2);                       // read the current value
} while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4)); // spin until the CAS succeeds
return var5;
```

Compare-and-swap in pseudocode:

```
cas(V, Expected, NewValue):
    if V == Expected:
        V = NewValue   // the expected value matches, so write the new value
    else:
        retry or fail  // the expected value differs: spin again or report failure
```

CAS is supported directly by a CPU primitive (for example the `cmpxchg` instruction on x86), so the comparison and the write happen atomically.
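At the Java level, the same semantics are exposed by `AtomicInteger.compareAndSet`, which delegates down to this primitive. A minimal sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(100);

        // Expected value matches (100), so the swap succeeds.
        boolean first = value.compareAndSet(100, 50);

        // Expected value no longer matches (value is now 50), so this fails.
        boolean second = value.compareAndSet(100, 10);

        System.out.println(first + " " + second + " " + value.get()); // true false 50
    }
}
```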

4. Problems existing in CAS:

1. The ABA problem:

This is the classic question raised in most introductions to CAS. A good explanation I saw online goes like this. Thread 1 (an ATM) reads the current balance of 100 and expects to update it to 50 (a withdrawal of 50). Thread 2 (another ATM) also reads 100 and expects to update it to 50. Thread 1 succeeds, while thread 2 blocks for some reason. Then someone sends Xiaoming 50: thread 3 (a deposit) updates the balance from 50 back to 100. Thread 2 now returns from its block, sees the balance is 100 (exactly its expected value), and its CAS succeeds, setting the balance to 50. The correct balance should be 100 (100 - 50 + 50), but it ends up as 50 (100 - 50 + 50 - 50): the A→B→A change let a stale CAS commit successfully. That is the ABA problem.

1. The solution:

Version control + timestamp: attach a version number or timestamp to the value, so an A→B→A change is detected as a version change. In Java this is what `AtomicStampedReference` provides.
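A minimal sketch of the versioned fix using `AtomicStampedReference`, replaying the ATM scenario above (the balances and stamp values are illustrative):

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // Balance 100 with version stamp 0.
        AtomicStampedReference<Integer> balance =
                new AtomicStampedReference<>(100, 0);

        int stamp = balance.getStamp(); // thread 2 remembers stamp 0

        // Thread 1 withdraws 50, thread 3 deposits 50: the value is 100
        // again, but the stamp has advanced to 2.
        balance.compareAndSet(100, 50, 0, 1);
        balance.compareAndSet(50, 100, 1, 2);

        // Thread 2's stale CAS now fails even though the value looks unchanged.
        boolean success = balance.compareAndSet(100, 50, stamp, stamp + 1);
        System.out.println(success + " " + balance.getReference()); // false 100
    }
}
```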

2. Performance issues:

If a CAS fails, the loop retries. Under high concurrency, if many threads frequently modify the same value, each thread may have to execute compareAndSet() many times before succeeding. That means a large number of threads spinning in an empty loop, which causes significant CPU overhead.

1. The solution:

Java 8 introduced a new class, LongAdder, which tries to dramatically improve the performance of concurrent CAS updates using segmented CAS and automatic segment migration:

  1. In the underlying implementation of LongAdder there is first a single base value; at the beginning, all threads accumulate into base, for example base = 5.
  2. If concurrent updates become too frequent, the segmented-CAS mechanism kicks in: an internal array of Cell objects, each holding one segment of the count.
  3. Large numbers of threads then perform their CAS accumulation on different Cells, so the CAS contention is spread across the Cell segments.

This greatly reduces the endless-spin problem caused by many threads concurrently updating the same value, and greatly improves the performance and efficiency of concurrent updates.

In addition, LongAdder implements an automatic migration mechanism: if a CAS on one Cell fails, the CAS is automatically retried on another Cell segment. This further reduces spinning and lets each thread complete its CAS as soon as possible. Finally, when you ask LongAdder for the current total, it adds up the base value and all the Cell segment values and returns the sum.
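The behavior described above can be sketched with a small counter; the thread and iteration counts are arbitrary:

```java
import java.util.concurrent.atomic.LongAdder;

public class LongAdderDemo {
    public static void main(String[] args) throws InterruptedException {
        LongAdder counter = new LongAdder();
        int threads = 8, perThread = 10_000;

        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    counter.increment(); // CAS on base or on one Cell segment
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) {
            t.join();
        }

        // sum() adds base plus every Cell segment value.
        System.out.println(counter.sum()); // 80000
    }
}
```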

3. Multi-variable atomicity problem:

CAS guarantees atomicity for only one shared variable, and an ordinary Atomic class likewise protects a single variable. What about several variables?

1. The solution

You can use AtomicReference, which wraps a custom object. Put the multiple variables inside one object; AtomicReference then compares object references, so when multiple threads try to swap in a new object at the same time, the CAS on the reference resolves the concurrency conflict.
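A minimal sketch, using a hypothetical immutable `Account` holder so that two fields are swapped atomically behind one reference:

```java
import java.util.concurrent.atomic.AtomicReference;

public class AccountDemo {
    // Immutable holder: both variables travel together behind one reference.
    static final class Account {
        final int balance;
        final int points;
        Account(int balance, int points) {
            this.balance = balance;
            this.points = points;
        }
    }

    public static void main(String[] args) {
        AtomicReference<Account> ref =
                new AtomicReference<>(new Account(100, 0));

        Account old = ref.get();
        // One CAS on the object reference updates both fields atomically.
        boolean ok = ref.compareAndSet(
                old, new Account(old.balance - 50, old.points + 5));
        System.out.println(ok + " " + ref.get().balance); // true 50
    }
}
```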

2. AQS (AbstractQueuedSynchronizer):

1. Definition:

AQS is a framework for building locks and synchronizers. With it you can easily and efficiently build a wide range of synchronizers: ReentrantLock, Semaphore, ReentrantReadWriteLock, SynchronousQueue, FutureTask, and so on are all implemented on top of AQS.

2. Core ideas of AQS:

The core idea of AQS is this: if the requested shared resource is idle, the thread currently requesting it is set as the valid worker thread and the shared resource is marked locked. If the requested shared resource is occupied, a mechanism is needed for blocking waiting threads and handing the lock over when one is woken up. This mechanism is implemented with a CLH-style queue lock: threads that cannot acquire the lock immediately are put into a queue to wait.

3. General underlying implementation:

  1. Underlying data structure: a doubly linked list. Each Node contains prev and next pointers, as well as a data field, which in this case is a Thread object.
  2. A Node's wait status has four main values: CANCELLED, SIGNAL (waiting to be signalled), CONDITION (waiting on a condition), and PROPAGATE (shared mode). There are two common acquisition policies: fair and unfair. Fairness refers to whether a thread arriving right now must compete for the resource on equal terms with the threads already waiting at the head of the queue.
  3. State changes: the state field is changed with CAS, ultimately by calling sun.misc.Unsafe.compareAndSwapInt.

1. How ReentrantLock uses AQS:

state starts at 0 (unlocked). When a thread acquires the lock, state becomes 1, and each reentrant acquisition adds 1. Only when state drops back to 0 does another thread get a chance to acquire the lock.

2. How CountDownLatch uses AQS:

The task is divided into N sub-threads and state is initialized to N. The N sub-threads run in parallel; each one decrements state by 1 with a CAS when it finishes its task. Once all sub-threads are done, state = 0, the main thread is unpark()ed, and it returns from await() to continue.
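The counting described above can be seen in a small sketch (the sub-task bodies are placeholders):

```java
import java.util.concurrent.CountDownLatch;

public class LatchDemo {
    public static void main(String[] args) throws InterruptedException {
        int n = 3;
        CountDownLatch latch = new CountDownLatch(n); // AQS state = N

        for (int i = 0; i < n; i++) {
            new Thread(() -> {
                // ... run the sub-task here ...
                latch.countDown(); // CAS: state = state - 1
            }).start();
        }

        latch.await(); // main thread parks until state reaches 0
        System.out.println("all sub-tasks finished");
    }
}
```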

3. Acquiring the lock:

A thread first tries to acquire the lock before being enqueued; if the lock is unavailable, the current thread is enqueued and blocked via LockSupport.park().

4. Releasing the lock:

Releasing the lock wakes up the node right behind the head of the queue; the head node of the synchronization queue represents the thread that currently holds the lock.

3. Various locks and concurrent tools based on AQS

1. ReentrantLock

  1. ReentrantLock is a reentrant lock, meaning the same thread can lock the shared resource repeatedly: the thread that already holds the lock can acquire it again without blocking.

2. How reentrancy works: it must address two issues.

When a thread tries to acquire the lock, if the holder is already the current thread, the acquisition succeeds directly and a counter is incremented. And because the lock was acquired n times, it is not fully released until it has been released the same n times.

3. ReentrantLock supports two types of locks:

Fair lock: fairness here is about the order of lock acquisition. With a fair lock, acquisitions follow the absolute time order of the requests, i.e. FIFO. Unfair lock: threads simply compete for the resource, and whichever wins takes the lock and starts executing; a newly arriving thread may jump ahead of queued threads.
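Both points can be sketched together: the constructor flag chooses fair vs. unfair, and the hold count shows reentrancy:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    public static void main(String[] args) {
        // true = fair lock (FIFO acquisition); the default constructor is unfair.
        ReentrantLock lock = new ReentrantLock(true);

        lock.lock();      // state 0 -> 1
        lock.lock();      // same thread re-enters without blocking: state 1 -> 2
        try {
            System.out.println(lock.getHoldCount()); // 2
        } finally {
            lock.unlock(); // state 2 -> 1
            lock.unlock(); // state 1 -> 0, lock fully released
        }
    }
}
```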

4. ThreadLocal:

Why can ThreadLocal cause memory leaks (and eventually OOM)? In ThreadLocalMap, the key is a weak reference to the ThreadLocal while the value is a strong reference. So if the ThreadLocal has no strong reference from outside, the key is cleaned up during garbage collection but the value is not, leaving an Entry with a null key in the ThreadLocalMap. If we do nothing, that value will never be collected by the GC, which can cause a memory leak. The ThreadLocalMap implementation already anticipates this: the set(), get(), and remove() methods clean up entries whose key is null. Even so, it is best to call remove() manually after you are done with a ThreadLocal.
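The recommended usage pattern can be sketched as follows (the stored string is illustrative); putting `remove()` in a `finally` block matters most with pooled, long-lived threads:

```java
public class ThreadLocalDemo {
    private static final ThreadLocal<String> CONTEXT = new ThreadLocal<>();

    public static void main(String[] args) {
        CONTEXT.set("request-42");
        try {
            System.out.println(CONTEXT.get()); // request-42
        } finally {
            // Remove the entry so neither key nor value can leak
            // when the thread lives on (e.g. in a thread pool).
            CONTEXT.remove();
        }
        System.out.println(CONTEXT.get()); // null after remove()
    }
}
```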