The core of concurrent programming in Java is the java.util.concurrent (J.U.C) package. Most synchronizer implementations in J.U.C are built around a set of common basic behaviors, such as wait queues, condition queues, exclusive acquisition, and shared acquisition. These behaviors are abstracted into AbstractQueuedSynchronizer, referred to as "AQS". AQS defines a framework for building synchronizers that control multithreaded access to a shared resource, and it is a synchronizer that relies on an internal state (state).

1. AQS definition

Provides a framework for implementing blocking locks and associated synchronizers (semaphores, events, etc.) that rely on first-in, first-out (FIFO) wait queues.

2. Requirements

2.1 Features

Synchronizers generally contain two kinds of methods: acquire and release. An acquire operation blocks the calling thread unless or until the synchronization state allows it to proceed. A release operation changes the synchronization state in some way so that one or more threads blocked in acquire can continue to execute.

There is no uniform definition of the synchronizer API in the J.U.C package. Some classes define generic interfaces (such as Lock), while others define their own proprietary versions. As a result, acquire and release operations have different names and forms in different classes. For example, Lock.lock, Semaphore.acquire, CountDownLatch.await, and FutureTask.get are all acquire operations in this framework. However, J.U.C maintains a consistent convention across classes to support a common set of usage options. Where it makes sense, each synchronizer supports the following operations:

  • Blocking and non-blocking (such as tryLock) synchronization
  • Optional timeouts, allowing the caller to give up waiting
  • Cancellation via interrupt; acquire is usually provided in two versions, one that can be cancelled and one that cannot (these variants are shown in the sketch after this list)
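
To make these variants concrete, here is a brief illustrative sketch (the class name AcquireVariants is invented for this example) showing the blocking, non-blocking, timed, and interruptible forms of acquire on a ReentrantLock:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class AcquireVariants {
    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();

        // Blocking acquire: waits until the lock becomes available.
        lock.lock();
        try { /* critical section */ } finally { lock.unlock(); }

        // Non-blocking acquire: returns immediately with a boolean result.
        if (lock.tryLock()) {
            try { /* critical section */ } finally { lock.unlock(); }
        }

        // Timed acquire: gives up if the lock is not obtained within the timeout.
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try { /* critical section */ } finally { lock.unlock(); }
        }

        // Interruptible (cancellable) acquire: throws InterruptedException
        // if the waiting thread is interrupted.
        lock.lockInterruptibly();
        try { /* critical section */ } finally { lock.unlock(); }
    }
}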

Synchronizer implementations vary depending on whether their state is exclusive or shared. A synchronizer with exclusive state allows only one thread at a time to pass the synchronization point, whereas a synchronizer with shared state can allow multiple threads to proceed at the same time. Generic lock implementation classes tend to maintain only exclusive state, but counting semaphores, for example, allow multiple threads to execute simultaneously as long as permits remain. For the framework to be widely useful, both modes need to be supported.
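
As a rough side-by-side illustration of the two modes (the class and method names below are invented for this example), a ReentrantLock maintains exclusive state while a counting Semaphore lets several threads through at once:

import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

public class ExclusiveVsShared {
    // Exclusive mode: at most one thread holds the lock at a time.
    static final ReentrantLock lock = new ReentrantLock();

    // Shared mode: up to 3 threads may hold a permit simultaneously.
    static final Semaphore permits = new Semaphore(3);

    static void exclusiveSection() {
        lock.lock();
        try {
            // only one thread executes here at a time
        } finally {
            lock.unlock();
        }
    }

    static void sharedSection() throws InterruptedException {
        permits.acquire();
        try {
            // up to three threads may execute here concurrently
        } finally {
            permits.release();
        }
    }
}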

The Condition interface is also defined in the J.U.C package to support monitor-style await/signal operations. These operations are associated with Lock classes used in exclusive mode, and the implementation of a Condition is inherently tied to the Lock it is associated with.
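
A minimal usage sketch of this await/signal pattern on a Condition obtained from a Lock (the FlagGate class is invented for illustration):

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class FlagGate {
    private final Lock lock = new ReentrantLock();
    private final Condition ready = lock.newCondition();
    private boolean flag = false;

    // Wait until another thread sets the flag.
    public void awaitFlag() throws InterruptedException {
        lock.lock();
        try {
            while (!flag) {      // always re-check the predicate in a loop
                ready.await();   // releases the lock while waiting
            }
        } finally {
            lock.unlock();
        }
    }

    // Set the flag and wake up any waiting threads.
    public void setFlag() {
        lock.lock();
        try {
            flag = true;
            ready.signalAll();
        } finally {
            lock.unlock();
        }
    }
}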

2.2 Performance Objectives

The main performance goal here is scalability, that is, keeping the efficiency of synchronizers predictable even when, or especially when, they are contended. Ideally, the cost of passing a synchronization point should be constant no matter how many threads are trying to pass it. One of the main goals is to minimize the total time during which a thread is permitted to pass a synchronization point but has not yet done so. However, this must be balanced against other resources, including total CPU time, memory traffic, and thread scheduling overhead. For example, acquiring a spin lock usually takes less time than acquiring a blocking lock, but spinning wastes CPU cycles and causes memory contention, so spin locks are not used very often.

These goals must be met for two different types of use. Most applications want to maximize overall throughput and fault tolerance, and preferably also minimize starvation. However, for programs that control resource allocation, it is more important to maintain fairness of access across threads, even at the cost of lower overall throughput. No framework can decide on behalf of the user which of these should be preferred, so different fairness policies should be provided.
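
ReentrantLock, for instance, lets the caller choose the fairness policy at construction time; the snippet below (class name invented for illustration) contrasts the two choices:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessChoice {
    // Non-fair (default): maximizes throughput by allowing barging,
    // so a newly arriving thread may acquire ahead of queued ones.
    static final ReentrantLock throughputLock = new ReentrantLock();

    // Fair: queued threads acquire in FIFO order, which reduces
    // starvation at the cost of lower overall throughput.
    static final ReentrantLock fairLock = new ReentrantLock(true);
}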

3. Design and implementation

The basic idea behind the synchronizer is very simple. Acquire operations are as follows:

while (synchronization state does not allow acquire) {
    enqueue current thread if not already queued;
    possibly block current thread;
}
dequeue current thread if it was queued;

The release operation is as follows:

update synchronization state;
if (state may permit a blocked thread to acquire)
    unblock one or more queued threads;

In order to do this, the following three basic components need to work together:

  • Atomic management of synchronization state
  • Thread blocking and unblocking
  • Queue management

3.1 Synchronization State

The AQS class uses a single int (32-bit) to hold the synchronization state and exposes getState, setState, and compareAndSetState operations to read and update that state. These methods rely on support from the java.util.concurrent.atomic package, which provides volatile read/write semantics compatible with JSR-133. compareAndSetState is implemented with native compare-and-swap or load-linked/store-conditional instructions, so the synchronization state is atomically set to a new value only if it currently holds the expected value.
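
As a minimal sketch of how these state methods are used, an exclusive-mode synchronizer in the spirit of the Mutex example from the AQS documentation might look like the following (class names are illustrative, and ownership checks are omitted for brevity):

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

// A minimal exclusive lock: state 0 means unlocked, 1 means locked.
public class SimpleMutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int ignore) {
            // Atomically flip state from 0 to 1; succeed only if we won the race.
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int ignore) {
            // Called by the owner while state is 1; a volatile write suffices here.
            setState(0);
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }   // AQS handles queuing and blocking
    public void unlock() { sync.release(1); }   // AQS wakes a queued thread if needed
}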

3.2 Blocking

Prior to JSR-166, there was no Java API for blocking and unblocking threads other than the built-in monitors, so there was no way to create synchronizers that were not based on built-in monitors. The only candidates were Thread.suspend and Thread.resume, but they have an unsolvable race problem and are therefore unusable: if an unblocking thread calls resume before the target thread has called suspend, the resume operation has no effect.

The J.U.C package has a LockSupport class that addresses this problem. The LockSupport.park method blocks the current thread unless or until a LockSupport.unpark is issued for it (unpark may also be called ahead of time). Calls to unpark are not counted, so multiple unpark calls before a park undo only a single park. In addition, park and unpark operate on threads, not on synchronizers. A thread calling park on a new synchronizer may return immediately because a "leftover" unpark from earlier activity may still be pending; in the absence of an unpark, however, the next park call blocks. While it would be possible to explicitly clear this state, it is not worth doing so: it is more efficient to simply call park again when necessary.
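
A small illustrative program (names invented for this example) showing that unpark can be issued ahead of time and that multiple unpark calls are not counted:

import java.util.concurrent.locks.LockSupport;

public class ParkDemo {
    public static void main(String[] args) {
        Thread current = Thread.currentThread();

        // Grant the permit ahead of time.
        LockSupport.unpark(current);
        LockSupport.unpark(current); // not counted: still just one permit

        // Consumes the single pending permit and returns immediately.
        LockSupport.park();
        System.out.println("first park returned immediately");

        // No permit is pending now; arrange for another thread to unpark us,
        // otherwise this park would stay blocked (barring a spurious wakeup).
        new Thread(() -> {
            try { Thread.sleep(500); } catch (InterruptedException ignored) { }
            LockSupport.unpark(current);
        }).start();

        LockSupport.park();
        System.out.println("second park returned after unpark");
    }
}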

3.3 The Queue

The key to the whole framework is how to manage queues of blocked threads, which are strictly FIFO queues, so the framework does not support priority-based synchronization.

The CLH queue was chosen here. A CLH queue is not actually much like a queue, because its enqueue and dequeue operations are closely tied to its purpose (namely, use as a lock). It is a linked-list queue accessed through two atomically updatable fields, head and tail, both of which initially point to a dummy node.

The advantage of CLH locks is that enqueuing and dequeuing are fast, lock-free, and obstruction-free (even under contention, one thread will always win the race to insert and can continue to execute). Detecting whether any threads are waiting is also quick (just test whether head equals tail). In addition, the "release" status is decentralized: almost every node holds this status, with each node holding the status its successor watches, so the status is spread out rather than clustered, which avoids some unnecessary memory contention.
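
For background, a textbook CLH spin lock can be sketched roughly as follows; this is an illustrative simplification of the structure AQS adapts (AQS parks threads instead of spinning), not the AQS implementation itself:

import java.util.concurrent.atomic.AtomicReference;

// A simplified classic CLH spin lock, for illustration only.
public class ClhSpinLock {
    static final class Node {
        volatile boolean locked; // true while the owner holds or wants the lock
    }

    private final AtomicReference<Node> tail;
    private final ThreadLocal<Node> myNode = ThreadLocal.withInitial(Node::new);
    private final ThreadLocal<Node> myPred = new ThreadLocal<>();

    public ClhSpinLock() {
        tail = new AtomicReference<>(new Node()); // dummy node, locked == false
    }

    public void lock() {
        Node node = myNode.get();
        node.locked = true;               // announce intent to hold the lock
        Node pred = tail.getAndSet(node); // atomically enqueue at the tail
        myPred.set(pred);
        while (pred.locked) {             // spin on the predecessor's status
            Thread.onSpinWait();
        }
    }

    public void unlock() {
        Node node = myNode.get();
        node.locked = false;              // let the successor proceed
        myNode.set(myPred.get());         // reuse the predecessor's node next time
    }
}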

3.4 Condition Queues

The AQS framework provides a ConditionObject class for synchronizers that maintain exclusive synchronization and implement the Lock interface. A lock object can be associated with any number of condition objects and can provide the typical monitor-style await, signal, and signalAll operations, including versions with timeouts, as well as some inspection and monitoring methods.

ConditionObject uses the same internal queue nodes as the synchronizer, but maintains them on a separate condition queue. The signal operation is implemented by transferring a node from the condition queue to the lock queue, so there is no need to wake the signalled thread before it has actually reacquired the lock.

The basic await operation is as follows:

create and add new node to condition queue;
release lock;
block until node is on lock queue;
re-acquire lock;

Signal operations are as follows:

transfer the first node from condition queue to lock queue;

Because these operations are performed only while the lock is held, they can use plain sequential linked-list operations (via a nextWaiter field in each node) to maintain the condition queue. The transfer operation simply unlinks the first node from the condition queue and inserts it into the lock queue using the CLH insertion.

References

  1. The java.util.concurrent Synchronizer Framework (translation): ifeve.com/aqs-2/