Locks in Java

There are three main synchronization mechanisms in Java: synchronized, Lock, and volatile.

  • synchronized is applied to a code block or method so that the code inside it is executed by only one thread at a time.
  • Lock is similar to synchronized, but you must acquire and release the lock yourself. If you forget to release it, other threads will block forever (effectively a deadlock).
    • Lock has advantages over synchronized because it offers more features, such as "sniffing" for the lock, notification on multiple conditions, and querying the lock state.
    • Sniffing the lock: with tryLock() a thread can attempt to acquire the lock and, if it fails, carry on without blocking. synchronized can only enter and block the thread.
    • Multiple condition notification: a Lock can create several Condition objects and associate threads with them, so that when you want to wake threads up you can signal just the matching condition (see the sketch after this list).
  • volatile has a narrower scope: it applies to a single variable. It has the following three characteristics:
    • Visibility: every write to a volatile variable is flushed from the thread's working memory back to main memory, and every read goes to main memory, so different threads always see a consistent value.
    • Atomicity: only single reads and writes of the variable are atomic. Compound operations such as v++ (read-modify-write) are not atomic.
    • No instruction reordering: code before a write to a volatile variable cannot be moved after it, and code after a read cannot be moved before it.
    • volatile is not actually a lock. It does not lock or block; it only guarantees that a thread sees the latest value of the variable and can decide what to do based on that.
    • volatile has its drawbacks: under concurrent compound updates a write can be lost and the variable can appear to fall back to an earlier value, although such cases are relatively rare.
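As a rough illustration of the Lock bullets above, here is a minimal sketch using ReentrantLock; the class and member names (LockDemo, notEmpty, count) are made up for the example:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final Lock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();  // one of possibly many conditions
    private int count = 0;

    public boolean tryAdd() throws InterruptedException {
        // "Sniffing": try to get the lock for 100 ms instead of blocking forever
        if (!lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            return false;                 // lock not obtained, thread is not blocked
        }
        try {
            count++;
            notEmpty.signal();            // wake only threads waiting on this condition
            return true;
        } finally {
            lock.unlock();                // the lock must be released manually
        }
    }

    public void take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) {
                notEmpty.await();         // wait on a specific condition
            }
            count--;
        } finally {
            lock.unlock();
        }
    }
}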

Java locking mechanism

1. Fair lock/unfair lock

A fair lock is acquired on a first-come, first-served basis. When a thread fails to get the lock, it enters the blocking queue and waits; the lock is then handed out in queue order.

With an unfair lock, a newly arriving thread may compete for the lock directly with the thread at the head of the blocking queue, and only joins the end of the queue if it loses. So a thread that arrives last may acquire the lock without ever entering the wait queue.

synchronized and Lock are unfair by default. A ReentrantLock can be made fair through its constructor.
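For reference, ReentrantLock exposes this choice through its constructor; a minimal sketch:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    private final ReentrantLock unfairLock = new ReentrantLock();     // default: unfair, higher throughput
    private final ReentrantLock fairLock = new ReentrantLock(true);   // true: fair, first-come-first-served
}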

Unfair locks perform better than fair locks. When a thread releases the lock, waking a blocked thread takes a relatively long time; if an actively running thread asks for the lock during that window, handing it the lock directly avoids the wake-up latency.

2. Optimistic/pessimistic lock

Optimistic and pessimistic locking are design concepts rather than concrete lock implementations.

Optimistic lock: assumes that the data will usually not be modified by others, so it does not lock. Whether the data has changed is detected by checking its version number.

Pessimistic lock: assumes that someone else will modify the data every time it is read, so the data is locked to prevent concurrent modification.

Optimistic locking is generally implemented with CAS (compare-and-swap).

  • A variable decorated with the volatile keyword is used as the version number, because volatile guarantees visibility.

Implementation idea (following the AtomicInteger implementation):

  • The current value of the variable is read from memory with get(); this value acts as the version number (the expected value).
  • An Unsafe method is then called to perform the update.
  • This method takes the object's base address, the field offset, the expected old value (the version number), and the new value.
  • If the expected old value matches the value currently in memory, the swap is performed. If it differs, the value has been changed by someone else, so the swap is abandoned and false is returned.
  • If the update should still be made after a failed swap, the new value must be re-read with get() and the CAS retried (see the sketch after this list).
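A minimal sketch of that retry loop, written against the public AtomicInteger API (compareAndSet) instead of calling Unsafe directly:

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    private final AtomicInteger value = new AtomicInteger(0);

    public int incrementWithCas() {
        while (true) {
            int expected = value.get();                    // read the current value ("version")
            int updated = expected + 1;
            if (value.compareAndSet(expected, updated)) {  // swap only if memory still holds the expected value
                return updated;
            }
            // CAS failed: another thread changed the value first; re-read and retry
        }
    }
}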

There is a flaw in this approach: the value itself is used as the version number, and the value can move back and forth, whereas a proper version number only ever increases. The resulting problem (the ABA problem):

  • Two threads read the value from memory at the same time; both read A.
  • The first thread pauses, and the second thread changes A to B.
  • Then another thread (or the second one again) changes B back to A.
  • When the first thread resumes and performs its CAS, it believes the variable was never changed at all.

Solution: introduce a separate field as the version number instead of using the value itself; the version can only increase and never go back. However, if only the version number goes through CAS while the variable that actually needs synchronizing is written as an ordinary assignment, concurrency problems can still occur, so that write has to be protected with a lock, roughly like this pseudocode:

if (UNSAFE.compareAndSwapInt(obj, versionOffset, version, version + 1)) {  // the version number was bumped successfully
    synchronized (obj) {
        // update the shared variable here, after the version number has been changed
    }
}
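For reference, the JDK also provides AtomicStampedReference, which pairs the value with a separate, forward-only stamp and therefore detects the ABA case; a minimal sketch:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    private final AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

    public boolean update(int newValue) {
        int[] stampHolder = new int[1];
        Integer expected = ref.get(stampHolder);      // read the value and its stamp together
        int stamp = stampHolder[0];
        // Succeeds only if both the value and the stamp are unchanged,
        // so an A -> B -> A sequence is caught because the stamp has moved on.
        return ref.compareAndSet(expected, newValue, stamp, stamp + 1);
    }
}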

Pessimistic locking is more heavy-handed: it simply locks the data directly.

3. Spin lock

Most of the time a lock is held only for a very short period, and the same is true of the shared variable. So there is no need to suspend a thread just because it has to wait for a lock, because switching between user mode and kernel mode is expensive.

Spinning uses a CAS operation to check whether the expected value (version) still matches. If it matches, the lock is acquired; if not, the thread keeps looping (spinning), staying active instead of being suspended.
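A minimal hand-rolled spin lock in that spirit, built on an AtomicReference CAS (illustration only, not production code):

import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-wait: keep retrying the CAS instead of suspending the thread
        while (!owner.compareAndSet(null, current)) {
            // spin
        }
    }

    public void unlock() {
        owner.compareAndSet(Thread.currentThread(), null);  // only the owning thread can release
    }
}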

4. Adaptive spin lock

Spinning itself also consumes a lot of CPU. With adaptive spinning, the spin time is not fixed: once a threshold is exceeded, the thread stops looping and is suspended instead.

This technique is used when inserting into ConcurrentHashMap in JDK 1.7 (scanAndLockForPut):

private HashEntry<K,V> scanAndLockForPut(K key, int hash, V value) {
    HashEntry<K,V> first = entryForHash(this, hash);   // Get the first node of the list
    HashEntry<K,V> e = first;
    HashEntry<K,V> node = null;
    int retries = -1;      // Negative while still locating the node
    while (!tryLock()) {   // Spin: the lock was not obtained
        HashEntry<K,V> f; 
        if (retries < 0) {
            if (e == null) {         // The first node is null
                if (node == null) 
                    node = new HashEntry<K,V>(hash, key, value, null);   // Create an object for Node
                retries = 0;     // Try again
            }
            else if (key.equals(e.key))   // The added node already exists
                retries = 0;
            else
                e = e.next;
        }
        else if (++retries > MAX_SCAN_RETRIES) {
            // If the number of attempts is greater than the default maximum number of attempts, use lock blocking. Reduced resource consumption, adaptive spin
            lock();
            break;
        }
        else if ((retries & 1) == 0 && (f = entryForHash(this, hash)) != first) {  // Check whether the first node has changed
            e = first = f;   // Replace the first node
            retries = -1;    // Retry to see whether the node added by the current thread is still new
        }
    }
    return node;    // Exit the loop when the lock is acquired and return this node
}

5. Reentrant lock

A reentrant lock allows a thread that has already acquired the lock to acquire it again during execution. The thread is not blocked by the lock it already holds; because it owns the lock, it simply continues executing.

synchronized and ReentrantLock are both reentrant, but synchronized acquires and releases the lock automatically. ReentrantLock must be acquired and released manually, and the lock must be released exactly as many times as it was acquired, otherwise other threads will be blocked forever (effectively a deadlock).
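A minimal sketch of reentrancy with ReentrantLock; note that unlock() is called once for every lock():

import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public void outer() {
        lock.lock();               // first acquisition, hold count = 1
        try {
            inner();               // re-acquiring the same lock does not block
        } finally {
            lock.unlock();         // hold count back to 0, lock actually released
        }
    }

    private void inner() {
        lock.lock();               // hold count = 2
        try {
            // critical section
        } finally {
            lock.unlock();         // hold count back to 1
        }
    }
}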

6. Read/write lock (shared/exclusive)

A ReentrantLock is completely exclusive: only one thread can hold it at a time, which is inefficient when most operations are reads.

The JDK therefore provides a read/write lock, the ReentrantReadWriteLock class, which lets read operations run concurrently and thus improves efficiency.

Read locks are not mutually exclusive with each other; read locks are mutually exclusive with write locks; and write locks are mutually exclusive with write locks (whenever a write lock is held, it excludes everything else).
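A minimal sketch of a cache guarded by ReentrantReadWriteLock: reads run in parallel, writes are exclusive.

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class RwCache {
    private final Map<String, Object> map = new HashMap<>();
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();

    public Object get(String key) {
        rw.readLock().lock();          // shared: many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    public void put(String key, Object value) {
        rw.writeLock().lock();         // exclusive: blocks all readers and writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}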

Synchronized locking mechanism

synchronized has been improving its locking mechanism since JDK 1.6, and in many cases it is no longer a heavyweight lock. The optimizations include biased locking, lightweight locking, spin locks and adaptive spinning, with the heavyweight lock only as the final stage.

Java object header

A Java object is divided into three parts: the object header, instance data, and alignment padding.

  • Object header:
    • Mark Word
      • The object's HashCode
      • Generational (GC) age
      • GC mark
      • Lock flag bits
    • Class pointer (a pointer to the object's class metadata)
    • Array length (only for array objects)
  • Instance data
  • Alignment padding

In the unlocked state, the Mark Word records: the object's HashCode, generational age, the biased-lock flag, and the lock flag bits.

Biased locking

Biased lock design concept:

  • Acquiring and releasing the lock every time a synchronized block is entered and exited wastes resources.
  • In practice it turns out that, in many cases, the lock is repeatedly acquired by the same thread.
  • So ideally the lock should simply stay "biased" toward that thread.

To tie the lock to a thread, the thread's ID must be recorded in the lock object. So in the biased-lock state the Mark Word records: the ID of the biased thread, generational age, the biased-lock flag, and the lock flag bits.

Execution process:

  • When the lock is acquired by a thread for the first time, the thread's ID is recorded in the lock object's header.
  • The lock is not released (the bias is kept) after the thread finishes executing.
  • On a later acquisition, the thread is checked against the thread ID recorded in the object header. If they match, the synchronized code runs directly.
  • If they do not match, the biased lock is revoked and upgraded (inflated) to a lightweight lock.

Advantage: when there is no contention, or only one thread ever uses the lock, biased locking saves the cost of repeatedly acquiring and releasing it.

Lightweight lock

In the lightweight-lock state, the Mark Word records: a pointer to the Lock Record in the thread's stack, and the lock flag bits.

  • The virtual machine creates a space called a Lock Record in the thread's stack and copies the lock object's Mark Word into it.
  • When a thread wants to acquire the lock, it performs a CAS operation to update the lock object's Mark Word to a pointer to the Lock Record in its stack.
  • If the CAS succeeds, the thread has obtained the lock.
  • If the CAS fails, the lock is already held by another thread, and the current thread waits for it by spinning (spin lock).

Advantage: it avoids blocking. When a thread cannot get the lock it spins instead of blocking, so no switch between user mode and kernel mode is triggered.

Disadvantage: if contention is heavy, the CAS and spin operations used by lightweight locks consume a lot of resources, and program performance can actually degrade.

Application scenario: when there is little or no multi-thread contention, lightweight locks reduce the switching between user mode and kernel mode and improve system performance.

Heavyweight lock

A heavyweight lock is implemented with the monitor inside the object, which in turn relies on the operating system's mutex (Mutex Lock).

A heavyweight lock blocks and wakes threads through the operating system, so acquiring, releasing, and waking up all consume a lot of resources.