A summary of concurrent containers provided by the JDK

Most of the concurrent containers provided by the JDK live in the java.util.concurrent package.

  • ConcurrentHashMap: Thread-safe HashMap
  • CopyOnWriteArrayList: a thread-safe List that performs much better than Vector when reads far outnumber writes.
  • ConcurrentLinkedQueue: an efficient concurrent queue implemented with a linked list. Think of it as a thread-safe LinkedList; it is a non-blocking queue.
  • BlockingQueue: an interface whose JDK implementations are backed internally by linked lists, arrays, and so on. It represents a blocking queue and is ideal as a channel for sharing data between threads.
  • ConcurrentSkipListMap: an implementation based on a skip list. It is a Map that uses the skip-list data structure for fast lookup.

2. ConcurrentHashMap

We know that HashMap is not thread-safe. If you need thread safety in a concurrent scenario, one feasible approach is to wrap the HashMap with the Collections.synchronizedMap() method. However, that wrapper synchronizes all access between threads through a single global lock, so it has significant performance problems under contention.

Hence the birth of a thread-safe version of HashMap: ConcurrentHashMap. In ConcurrentHashMap, high performance is guaranteed for both reads and writes: read operations (almost) never need a lock, while write operations lock only the segment being written, so access to the other segments is unaffected.
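As a quick illustration of what this buys you in practice (a minimal sketch; the word-counting scenario and the class name ConcurrentHashMapDemo are our own, not from the original), merge() performs the read-modify-write for each key atomically, so many threads can update counts without any external lock:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ConcurrentHashMapDemo {
    // Count word occurrences. merge() is atomic per key, so this method
    // would stay correct even if called from many threads at once.
    static Map<String, Integer> countWords(String[] words) {
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        for (String w : words) {
            counts.merge(w, 1, Integer::sum); // atomic read-modify-write
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> counts = countWords(new String[]{"a", "b", "a"});
        System.out.println(counts.get("a")); // prints 2
    }
}
```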

ConcurrentHashMap is a topic I’ve covered in the Java Collections Framework Common Questions. Here are some important questions about ConcurrentHashMap:

  • ConcurrentHashMap and Hashtable
  • How ConcurrentHashMap achieves thread safety / its underlying implementation

3. CopyOnWriteArrayList

3.1 CopyOnWriteArrayList overview

    public class CopyOnWriteArrayList<E>
        extends Object
        implements List<E>, RandomAccess, Cloneable, Serializable

In many applications, reads may far outnumber writes. Since a read does not modify the data at all, locking every read is a waste of resources. We should allow multiple threads to access the List's internal data at the same time; after all, reading is safe.

This is very similar to the idea behind the ReentrantReadWriteLock read/write lock discussed earlier in the multithreading section: reads share, while write/write, read/write, and write/read are all mutually exclusive. CopyOnWriteArrayList takes this idea a step further. To maximize read performance, its reads take no lock at all, and, even better, writes do not block reads; only writes must wait for one another. As a result, read performance improves dramatically. So how does it do that?

3.2 How does CopyOnWriteArrayList work?

All mutating operations of CopyOnWriteArrayList (add, set, and so on) are implemented by creating a new copy of the underlying array. When the List needs to be modified, it does not modify the original array; instead, it copies the original data, applies the change to the copy, and, once the write is complete, swaps the copy in to replace the original array. The write therefore never disturbs concurrent reads.

From its name we can see that CopyOnWriteArrayList is an ArrayList that follows the copy-on-write principle. In computing, copy-on-write means that to modify a block of memory we do not write to the old block; we make a copy, write into the new block, and when the write is done, repoint references to the new block so that the old one can be reclaimed.
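The snapshot behavior this produces can be seen directly (a small sketch; the class name CowDemo and the scenario are ours): an iterator keeps reading the array that existed when the iterator was created, so a concurrent write neither blocks it nor throws ConcurrentModificationException.

```java
import java.util.Iterator;
import java.util.concurrent.CopyOnWriteArrayList;

public class CowDemo {
    // Demonstrates snapshot semantics: the iterator holds a reference to the
    // array as it was at creation time, even after a later write replaces it.
    static int snapshotSize() {
        CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
        list.add("a");
        list.add("b");
        Iterator<String> it = list.iterator(); // snapshot of the current array
        list.add("c"); // the write swaps in a new array; no ConcurrentModificationException
        int seen = 0;
        while (it.hasNext()) { it.next(); seen++; }
        return seen; // the iterator saw only the 2 original elements
    }

    public static void main(String[] args) {
        System.out.println(snapshotSize()); // prints 2
    }
}
```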

3.3 CopyOnWriteArrayList read and write source code simple analysis

3.3.1 Implementation of the CopyOnWriteArrayList read operation

The read operation performs no synchronization control or locking at all. This is safe because the internal array is never modified in place; it is only ever replaced by a new array.

    /** The array, accessed only via getArray/setArray. */
    private transient volatile Object[] array;
    public E get(int index) {
        return get(getArray(), index);
    }
    @SuppressWarnings("unchecked")
    private E get(Object[] a, int index) {
        return (E) a[index];
    }
    final Object[] getArray() {
        return array;
    }

3.3.2 Implementation of CopyOnWriteArrayList write operation

The CopyOnWriteArrayList write operation add() takes a lock on the collection to ensure synchronization and to prevent multiple threads from making multiple copies at the same time.

    /**
     * Appends the specified element to the end of this list.
     *
     * @param e element to be appended to this list
     * @return {@code true} (as specified by {@link Collection#add})
     */
    public boolean add(E e) {
        final ReentrantLock lock = this.lock;
        lock.lock(); // Acquire the lock
        try {
            Object[] elements = getArray();
            int len = elements.length;
            Object[] newElements = Arrays.copyOf(elements, len + 1);
            newElements[len] = e;
            setArray(newElements);
            return true;
        } finally {
            lock.unlock(); // Release the lock
        }
    }

4. ConcurrentLinkedQueue

Java's thread-safe queues fall into blocking queues and non-blocking queues. The typical blocking queue is BlockingQueue (an interface), and the typical non-blocking queue is ConcurrentLinkedQueue. In practice, choose a blocking or non-blocking queue according to the actual need. Blocking queues are typically implemented with locks, while non-blocking queues are implemented with CAS operations.

As the name suggests, ConcurrentLinkedQueue is a queue that uses a linked list as its underlying data structure. It is probably the best-performing queue in highly concurrent environments, and it owes that performance to its sophisticated internal implementation.

We won't analyze the internal code of ConcurrentLinkedQueue here, but it should be known that ConcurrentLinkedQueue relies mainly on CAS-based non-blocking algorithms for thread safety.

ConcurrentLinkedQueue is suitable for scenarios that have high requirements on performance and where multiple threads are accessing the queue at the same time. That is, if the cost of locking the queue is high, it is suitable to use the unlocked ConcurrentLinkedQueue instead.
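A minimal sketch of that scenario (the class name ClqDemo and the thread counts are our own choices): several threads enqueue concurrently with no lock anywhere in user code, because the queue's offer/poll are implemented internally with CAS.

```java
import java.util.concurrent.ConcurrentLinkedQueue;

public class ClqDemo {
    // Several threads offer elements concurrently; no explicit locks are needed.
    static int concurrentOffer(int threads, int perThread) {
        ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
        Thread[] workers = new Thread[threads];
        for (int t = 0; t < threads; t++) {
            workers[t] = new Thread(() -> {
                for (int i = 0; i < perThread; i++) queue.offer(i);
            });
            workers[t].start();
        }
        for (Thread w : workers) {
            try { w.join(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        return queue.size(); // note: size() is O(n) for this queue
    }

    public static void main(String[] args) {
        System.out.println(concurrentOffer(4, 1000)); // prints 4000
    }
}
```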

5. BlockingQueue

5.1 BlockingQueue Overview

ConcurrentLinkedQueue, covered above, is a high-performance non-blocking queue. Its blocking counterpart is BlockingQueue. Blocking queues are widely used in producer-consumer problems because they provide blocking methods for insertion and removal: when the queue is full, the producer thread blocks until the queue is no longer full; when the queue is empty, the consumer thread blocks until the queue is no longer empty.
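The producer-consumer pattern above can be sketched as follows (a hedged example; the class name ProducerConsumerDemo, the poison-pill sentinel, and the use of ArrayBlockingQueue, introduced below, are our own choices). The small capacity forces the producer to block on put() until the consumer drains space:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    static final int POISON = -1; // sentinel value telling the consumer to stop

    static int run() {
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(2); // tiny capacity forces blocking
        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) channel.put(i); // put() blocks while the queue is full
                channel.put(POISON);
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });
        producer.start();
        int sum = 0;
        try {
            while (true) {
                int v = channel.take(); // take() blocks while the queue is empty
                if (v == POISON) break;
                sum += v;
            }
            producer.join();
        } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return sum; // 1+2+3+4+5 = 15
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 15
    }
}
```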

BlockingQueue is an interface that extends Queue, so its implementation classes can also be used as Queue implementations, and Queue in turn extends the Collection interface. Here are the main implementation classes of BlockingQueue:

ArrayBlockingQueue, LinkedBlockingQueue, PriorityBlockingQueue

5.2 ArrayBlockingQueue

ArrayBlockingQueue is a bounded-queue implementation class of the BlockingQueue interface. Once created, an ArrayBlockingQueue cannot change its capacity. Concurrency is controlled with a reentrant lock, which both insert and read operations must acquire: trying to enqueue an element into a full queue blocks, as does trying to take an element from an empty queue.

By default, ArrayBlockingQueue does not guarantee fairness of thread access to the queue. Fairness means threads are served in the absolute order in which they started waiting, i.e. the thread that has waited longest accesses the ArrayBlockingQueue first. Non-fairness means the access order is not strictly chronological: when the ArrayBlockingQueue becomes available, a thread that has been blocked for a long time may still be unable to access it. Guaranteeing fairness usually reduces throughput. To get a fair ArrayBlockingQueue, use the following code:

    private static ArrayBlockingQueue<Integer> blockingQueue = new ArrayBlockingQueue<Integer>(10, true);

5.3 LinkedBlockingQueue

LinkedBlockingQueue is a blocking queue backed by a singly linked list. It can be used as either an unbounded or a bounded queue, follows FIFO ordering like ArrayBlockingQueue, and generally offers higher throughput. To prevent a LinkedBlockingQueue from growing rapidly and consuming large amounts of memory, its capacity is usually specified when it is created; if it is not specified, the capacity defaults to Integer.MAX_VALUE.

Related constructors:

    /**
     * Creates a {@code LinkedBlockingQueue} with a capacity of
     * {@code Integer#MAX_VALUE}.
     */
    public LinkedBlockingQueue() {
        this(Integer.MAX_VALUE);
    }

    /**
     * Creates a {@code LinkedBlockingQueue} with the given (fixed) capacity.
     *
     * @param capacity the capacity of this queue
     * @throws IllegalArgumentException if {@code capacity} is not greater
     *         than zero
     */
    public LinkedBlockingQueue(int capacity) {
        if (capacity <= 0) throw new IllegalArgumentException();
        this.capacity = capacity;
        last = head = new Node<E>(null);
    }

5.4 PriorityBlockingQueue

PriorityBlockingQueue is an unbounded blocking queue that supports priorities. Elements are sorted in their natural order by default; alternatively, the ordering can be customized by implementing compareTo() in the element class, or by passing a Comparator to the constructor.

PriorityBlockingQueue uses ReentrantLock for concurrency control, and the queue is unbounded. (ArrayBlockingQueue is a bounded queue, and LinkedBlockingQueue can also be given a maximum size by passing a capacity to its constructor, but PriorityBlockingQueue only accepts an initial capacity and expands automatically when there is not enough space to insert an element.)

Simply put, it is a thread-safe version of PriorityQueue. It does not accept null values, and objects inserted into the queue must be comparable, otherwise a ClassCastException is thrown. Its put() insertion method never blocks, because the queue is unbounded (the take() method does block when the queue is empty).
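A minimal sketch of the Comparator-based ordering (the class name PbqDemo and the values are our own): elements come out in priority order, not insertion order, and put() returns immediately since the queue grows as needed.

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class PbqDemo {
    // Elements are drained in priority order, not insertion order.
    static String drain() {
        PriorityBlockingQueue<Integer> queue =
            new PriorityBlockingQueue<>(11, Comparator.reverseOrder()); // largest first
        queue.put(3); // put() never blocks: the queue is unbounded
        queue.put(1);
        queue.put(2);
        StringBuilder out = new StringBuilder();
        while (!queue.isEmpty()) out.append(queue.poll());
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(drain()); // prints 321
    }
}
```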

Recommended articles:

Unpack Java Concurrent Queue BlockingQueue

Javadoop.com/post/java-c…

6. ConcurrentSkipListMap

The following sections refer to the Geek Time column “The Beauty of Data Structures and Algorithms” and “Java High-concurrency Programming in Action.”

To introduce ConcurrentSkipListMap, let's first take a quick look at the skip list.

For a singly linked list, even if the list is sorted, finding an element in it requires traversing the list from the beginning, which is inefficient. A skip list is a data structure that supports fast lookup, somewhat like a balanced tree: both allow quick element lookups. One important difference, however, is that inserting into or deleting from a balanced tree may trigger a global rebalancing of the tree, whereas insertion and deletion in a skip list only touch a local part of the structure. The advantage is that under high concurrency a balanced tree needs a global lock to stay thread-safe, while a skip list needs only local locking, so it can perform better in highly concurrent environments. As for query performance, a skip list's lookup time complexity is O(log n), which is why the JDK uses a skip list to implement a concurrent Map.

The essence of a skip list is that it maintains several linked lists at once, arranged in layers: the lowest list contains all the elements in the skip list, and each higher list is a subset of the one below it.

All lists in a skip list are sorted. A search starts at the top-level list; as soon as the next value in the current list would exceed the element being sought, the search drops down to the list below and continues there. In other words, the search jumps between layers. Take searching for element 18 as an example (the original article illustrated this with a figure): a plain linked-list traversal would take 18 comparisons, while the skip list needs only 7. When the list is long, the index layers improve lookup efficiency significantly.

It is easy to see from the above that a skip list trades space for time.

Another difference between a Map implemented with a skip list and one implemented with hashing is that hashing does not preserve element order, whereas all elements in a skip list are kept sorted. Iterating over a skip list therefore yields an ordered result. So if your application needs ordering, a skip list is your best choice. The class in the JDK that implements this data structure is ConcurrentSkipListMap.
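That ordering guarantee is easy to see in code (a small sketch; the class name SkipListDemo and the key values are ours): no matter what order keys are inserted in, iteration visits them in sorted key order.

```java
import java.util.concurrent.ConcurrentSkipListMap;

public class SkipListDemo {
    // Keys are kept sorted by the skip-list structure, so iteration order
    // is the key order, regardless of insertion order.
    static String sortedKeys() {
        ConcurrentSkipListMap<Integer, String> map = new ConcurrentSkipListMap<>();
        map.put(18, "r");
        map.put(3, "p");
        map.put(7, "q");
        StringBuilder keys = new StringBuilder();
        for (Integer k : map.keySet()) keys.append(k).append(' ');
        return keys.toString().trim();
    }

    public static void main(String[] args) {
        System.out.println(sortedKeys()); // prints "3 7 18"
    }
}
```

The sorted structure also provides navigation methods such as firstKey() and ceilingKey(), which a hash-based Map cannot offer.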