Optimistic and pessimistic locks

Optimistic locking and pessimistic locking are both used to solve data contention in concurrent scenarios, but they embody two completely different ideas. Both are widely used and are not tied to any particular programming language or database.

The concept of optimistic locking

Optimistic locking takes an optimistic view of data operations: it assumes that nobody else will modify the data at the same time, so it does not lock anything. Only at update time does it check whether someone else has modified the data in the meantime; if so, the operation is abandoned, otherwise the update is performed.

Pessimistic lock concept

Pessimistic locking takes a pessimistic view: it assumes that others will modify the data at the same time, so it locks the data before operating on it and does not release the lock until the operation completes. While the lock is held, nobody else can operate on the data.

The implementation of optimistic locking

There are two main ways to implement optimistic locking: the CAS (Compare-And-Swap) mechanism and the version number mechanism.

The CAS mechanism

A CAS operation involves three operands: the memory location to be read and written (V), the expected value to compare against (A), and the new value to be written (B). The logic is: if the value at memory location V equals the expected value A, update the location to the new value B; otherwise do nothing. In addition, many CAS-based updates spin, meaning that if the operation fails it is retried until it succeeds.
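
As a concrete illustration, here is a minimal sketch (not from the original article) of a spin-style CAS update built on Java's AtomicInteger, whose compareAndSet method exposes exactly this compare-then-write step; the class and method names are made up for the example:

import java.util.concurrent.atomic.AtomicInteger;

public class CasIncrementSketch {

    private final AtomicInteger counter = new AtomicInteger(0);

    /** Adds delta using CAS: read the current value (A), compute the new value (B),
     *  and write B only if the memory location (V) still holds A; otherwise retry. */
    public int addAndGet(int delta) {
        while (true) {
            int expected = counter.get();      // A: the value we expect to find
            int update = expected + delta;     // B: the value we want to write
            if (counter.compareAndSet(expected, update)) {  // succeeds only if V == A
                return update;
            }
            // another thread changed the value in between: spin and retry
        }
    }

    public static void main(String[] args) {
        CasIncrementSketch sketch = new CasIncrementSketch();
        System.out.println(sketch.addAndGet(5));   // prints 5
    }
}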

Version number mechanism

The basic idea of the version number mechanism is to add a version field to the data. Each time the data is modified, the version number increases by 1. When a thread queries the data, it also reads its version number; when the thread later needs to update the data, it compares the version number it read earlier with the current version number. If they match, the update is performed; otherwise it is abandoned.
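
For illustration only, here is a hedged sketch of such a versioned update done through JDBC; the table and column names (account, id, balance, version) and the helper method are invented for this example:

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class VersionedUpdateSketch {

    /**
     * Attempts an optimistic update: the row is written only if its version still
     * matches the version that was read together with the data. Returns true on success.
     */
    public boolean updateBalance(Connection conn, long accountId,
                                 BigDecimal newBalance, long expectedVersion) throws SQLException {
        // Hypothetical table/columns, invented for this sketch
        String sql = "UPDATE account SET balance = ?, version = version + 1 "
                   + "WHERE id = ? AND version = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setBigDecimal(1, newBalance);
            ps.setLong(2, accountId);
            ps.setLong(3, expectedVersion);   // the version read earlier with the data
            return ps.executeUpdate() == 1;   // 0 rows means someone else updated first
        }
    }
}

A failed update (0 rows affected) tells the caller that another transaction committed first, so it can re-read the data and retry, or give up.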

Pessimistic lock implementation

Pessimistic locking is implemented by actually taking a lock, either at the code level (for example, the synchronized keyword in Java) or at the database level (for example, exclusive locks in MySQL).
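
As a minimal code-level sketch (the shared counter here is made up, not from the original article), pessimistic locking with synchronized looks like this; the database-level equivalent would typically be something like SELECT ... FOR UPDATE inside a transaction:

public class PessimisticCounterSketch {

    private int count = 0;

    // The lock is taken before touching the data and released only when the method
    // returns, so no other thread can operate on count in between.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }
}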

Advantages, disadvantages and usage scenarios of optimistic and pessimistic locking

Optimistic locks and pessimistic locks are not inherently better or worse than each other; each has its own suitable scenarios.

Functional limitations

Whether it uses the CAS mechanism or the version number mechanism, optimistic locking places more restrictions on the scenarios in which it can be used than pessimistic locking does.

For example, CAS can only guarantee the atomicity of an operation on a single variable; when multiple variables are involved, CAS is powerless, whereas synchronized can lock an entire code block. Likewise, if the version number mechanism has to query data from table 1 but update data in table 2, it is difficult to implement optimistic locking with a simple version number.

Degree of contention

Optimistic locking has the advantage in low-contention scenarios (where the probability of concurrent conflicts is low). Because pessimistic locks lock code blocks or data, other threads cannot access them at the same time and must wait for the previous thread to release the lock before proceeding, which hurts concurrent response time. In addition, acquiring and releasing locks consumes extra system resources and slows down concurrent processing.

Pessimistic locking is more advantageous in high-contention scenarios (where the probability of concurrent conflicts is high), because under optimistic locking updates keep failing while the data is repeatedly modified by others, which wastes CPU on retries.

Does optimistic locking involve any locking?

Optimistic locking itself takes no lock; it only checks at update time whether the data has been modified by another thread, as AtomicInteger does. However, optimistic locking sometimes works together with actual locking: for example, MySQL still takes an exclusive lock while executing the update statement itself. In that case a lock is taken only at the moment the data is actually written.

Disadvantages of CAS

The disadvantages of CAS mainly include the ABA problem, the cost under high contention, and its functional limitations.

ABA problem

The so-called ABA problem: while one thread is operating on some data, other threads perform a series of operations on it, but by the time the first thread reads the data again its value happens to equal the value it read at the beginning. The thread therefore cannot tell that the data has been modified, and the CAS operation is judged a success.

ABA problems may be harmless in some scenarios but dangerous in others. For example, suppose CAS operates on the element at the top of a stack. Even though the top element returns to its original value after two (or more) changes, the rest of the stack may have changed, and that change can cause problems.

A more effective solution to the ABA problem is to introduce a version number: whenever the value in memory changes, the version number increases by 1, and the CAS operation compares both the value in memory and the version number, succeeding only when both are unchanged. In Java, the AtomicStampedReference class solves the ABA problem with exactly this kind of version number (called a stamp).
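
A minimal sketch (not from the original article) of how AtomicStampedReference can be used, with the stamp playing the role of the version number:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaSketch {

    public static void main(String[] args) {
        // Initial value 100 with stamp (version) 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);

        int[] stampHolder = new int[1];
        Integer current = ref.get(stampHolder);   // read value and version together
        int stamp = stampHolder[0];

        // Succeeds only if BOTH the value and the stamp are unchanged; an A -> B -> A
        // change elsewhere would have bumped the stamp and made this fail.
        boolean updated = ref.compareAndSet(current, current + 1, stamp, stamp + 1);
        System.out.println("updated = " + updated + ", value = " + ref.getReference());
    }
}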

The cost problem under high contention

In a high-contention scenario with a high probability of concurrent conflicts, a CAS operation that keeps failing will keep retrying, resulting in high CPU overhead. A simple mitigation is to introduce an exit mechanism that forces a failure once the number of retries exceeds a certain threshold, as sketched below. Of course, it is best to avoid optimistic locking in high-contention scenarios in the first place.
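
A hedged sketch of such an exit mechanism (the class name and the retry limit of 10 are chosen arbitrarily for illustration):

import java.util.concurrent.atomic.AtomicInteger;

public class BoundedCasSketch {

    private static final int MAX_RETRIES = 10;   // arbitrary threshold for illustration
    private final AtomicInteger value = new AtomicInteger(0);

    /** Tries to add delta with CAS, but gives up after MAX_RETRIES failed attempts. */
    public boolean tryAdd(int delta) {
        for (int i = 0; i < MAX_RETRIES; i++) {
            int expected = value.get();
            if (value.compareAndSet(expected, expected + delta)) {
                return true;                      // update succeeded
            }
        }
        return false;                             // too much contention: the caller decides what to do
    }
}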

Its own functional limitations

CAS has limited capabilities: it can only guarantee atomicity for an operation on a single variable (or memory value). Atomicity of one such operation does not by itself make a larger compound operation thread-safe, and CAS is powerless when multiple variables (or memory values) are involved. In addition, CAS relies on hardware-level processor support and is not directly exposed to ordinary users in Java; only the atomic classes under the java.util.concurrent.atomic package can be used, which limits flexibility.

Fair and unfair locks

The concept of fair locking

A fair lock means that multiple threads acquire the lock in the order in which they requested it, similar to queuing for dinner.

The concept of unfair locking

An unfair lock means that threads do not necessarily acquire the lock in the order in which they requested it. Under high concurrency, this may lead to priority inversion or starvation.

The difference between the two

Fair lock/unfair lock

When creating a ReentrantLock (in the java.util.concurrent.locks package), you can pass a boolean to the constructor to get either a fair or an unfair lock. The default is an unfair lock.

About the difference between the two

Threads acquire a fair lock in the order in which they requested it.

A fair lock is, indeed, fair: in a concurrent environment, each thread acquiring the lock first checks the wait queue maintained by the lock. If the queue is empty, or the current thread is at the head of the queue, it takes the lock; otherwise it joins the wait queue and later obtains the lock from the queue in FIFO order.

A nonfair lock permits barging: threads requesting a lock can jump ahead of the queue of waiting threads if the lock happens to be available when it is requested.

Unfair locks are, after all, rude: a thread first tries to grab the lock directly, and only if that fails does it fall back to behavior similar to a fair lock, joining the wait queue.

Digression

Regarding ReentrantLock in Java

The constructor specifies whether the lock is fair; the default is unfair. The advantage of an unfair lock is that its throughput is greater than that of a fair lock.

synchronized is also an unfair lock.
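
A minimal sketch (not from the original article) showing how the two variants are created:

import java.util.concurrent.locks.ReentrantLock;

public class FairnessSketch {

    public static void main(String[] args) {
        ReentrantLock unfair = new ReentrantLock();     // default: unfair, higher throughput
        ReentrantLock fair = new ReentrantLock(true);   // fair: threads acquire in FIFO order

        System.out.println(unfair.isFair());   // false
        System.out.println(fair.isFair());     // true
    }
}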

Spinlocks and non-spinlocks

The concept of spin locks

A spin lock (spinlock) means that a thread trying to acquire the lock does not block immediately but keeps trying to acquire it in a loop. The advantage is that it avoids the cost of thread context switches; the disadvantage is that the looping consumes CPU.

Advantages of spin locks

A spin lock does not cause the thread's state to switch: the thread always stays in user mode, that is, it remains active. Because the thread never enters the blocked state, unnecessary context switches are avoided and execution is faster.

When a non-spin lock cannot be acquired, the thread blocks and the system enters kernel mode; when the lock later becomes available, the thread must be brought back from that state, which requires a thread context switch. (After a thread blocks, it enters the kernel (Linux) scheduler, causing the system to switch back and forth between user mode and kernel mode, which seriously hurts lock performance.)

// unsafe.getAndAddInt
public final int getAndAddInt(Object var1, long var2, int var4) {
        int var5;
        do {
            var5 = this.getIntVolatile(var1, var2);
        } while(!this.compareAndSwapInt(var1, var2, var5, var5 + var4));

        return var5;
    }

Demo1# implements a spin lock

package com.nuih.lock;

import sun.misc.Unsafe;

import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

/**
 * Title: implement a spin lock.
 *
 * Spin lock behavior in this demo: thread A calls myLock and holds the lock for 5 seconds.
 * B then comes in, finds that another thread holds the lock (the reference is not null),
 * and spins until A releases the lock, after which B grabs it.
 *
 * @author hejianhui
 * @create 2020-06-21
 * @see SpinLockDemo
 * @since JDK1.8
 */
public class SpinLockDemo {

    // Atomic reference
    AtomicReference<Thread> atomicReference = new AtomicReference<>();

    public void myLock() {
        Thread thread = Thread.currentThread();
        System.out.println(thread.getName() + "\t come in 😊");
        // Spin until the reference is successfully CAS-ed from null to the current thread
        while (!atomicReference.compareAndSet(null, thread)) {

        }
    }

    public void myUnLock() {
        Thread thread = Thread.currentThread();
        atomicReference.compareAndSet(thread,null);
        System.out.println(thread.getName() + "\t invoked myUnLock");
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLockDemo spinLockDemo = new SpinLockDemo();

        new Thread(() -> {
            spinLockDemo.myLock();

            try {
                TimeUnit.SECONDS.sleep(5);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            spinLockDemo.myUnLock();

        },"A").start();

        TimeUnit.SECONDS.sleep(1);

        new Thread(() -> {
            spinLockDemo.myLock();

            try {
                TimeUnit.SECONDS.sleep(1);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            spinLockDemo.myUnLock();
        },"B").start(); }}Copy the code

Reentrant lock (aka recursive lock)

The concept of reentrant locks

Reentrant lock (also called recursive lock)

It means that after an outer method of a thread has acquired a lock, inner methods called by the same thread can still acquire that lock: when the outer method of a thread holds the lock, its inner methods acquire it automatically. In other words, a thread can enter any block of synchronized code guarded by a lock it already owns.

ReentrantLock and synchronized are both typical reentrant locks.

The greatest benefit of reentrant locks is that they help avoid deadlocks: a thread that re-enters a lock it already holds does not block on itself.

Demo2# Reentrant lock

package com.nuih.lock;

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

class MyPhone implements Runnable {

    public synchronized void sendMsg() {
        System.out.println(Thread.currentThread().getName() + "\t invoked sendMsg");
        sendEmail();
    }

    public synchronized void sendEmail() {
        System.out.println(Thread.currentThread().getName() + "\t##### invoked sendEmail");
    }

    Lock lock = new ReentrantLock();

    @Override
    public void run() {
        get();
    }

    public void get() {
        lock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + "\t invoked get");
            set();
        } finally {
            lock.unlock();
        }
    }

    public void set() {
        lock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + "\t###### invoked set");
        } finally {
            lock.unlock();
        }
    }
}

/**
 * Reentrant lock (also known as recursive lock).
 * After the same thread acquires the lock in an outer method, an inner method it calls can
 * still acquire the same lock; the inner method acquires it automatically.
 * In other words, a thread can enter any block of synchronized code guarded by a lock it
 * already owns.
 *
 * Expected output:
 * t1 invoked sendMsg         (t1 acquires the lock in the outer method)
 * t1 ##### invoked sendEmail (t1 automatically acquires the lock in the inner method)
 * t2 invoked sendMsg
 * t2 ##### invoked sendEmail
 *
 * @author hejianhui
 * @create 2020-06-21
 * @see ReentrantLockDemo
 * @since JDK1.8
 */
public class ReentrantLockDemo {

    public static void main(String[] args) throws InterruptedException {

        MyPhone myPhone = new MyPhone();

        new Thread(() -> {
            myPhone.sendMsg();
        },"t1").start();

        new Thread(() -> {
            myPhone.sendMsg();
        },"t2").start();


        TimeUnit.SECONDS.sleep(1);
        System.out.println("\n\n\n\n");

        Thread t3 = new Thread(myPhone,"t3");
        t3.start();

        Thread t4 = new Thread(myPhone, "t4");
        t4.start();
    }
}

Exclusive locks (write locks) / shared locks (read locks) / mutex locks

What are they?

Exclusive lock: a lock that can be held by only one thread at a time. Both ReentrantLock and synchronized are exclusive locks.

Shared lock: The lock can be held by multiple threads.

For ReentrantReadWriteLock, the read lock is shared and the write lock is exclusive.

The shared read lock makes concurrent reads very efficient, while read/write, write/read, and write/write access are mutually exclusive.

Demo3# Exclusive/shared lock

package com.nuih.lock;

import java.util.HashMap;
import java.util.Map;

import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class MyCache{

    private volatile Map<String,Object> map = new HashMap<>();
    final ReadWriteLock readWriteLock = new ReentrantReadWriteLock();

    public void put(String key,Object value){
        readWriteLock.writeLock().lock();
        try{
            System.out.println(Thread.currentThread().getName() + "\t starts writing :" + key);
            map.put(key,value);
            System.out.println(Thread.currentThread().getName() + "\t write complete");
        } finally {
            readWriteLock.writeLock().unlock();
        }
    }

    public void get(String key) {
        readWriteLock.readLock().lock();
        try {
            System.out.println(Thread.currentThread().getName() + "\t start reading");
            Object result = map.get(key);
            System.out.println(Thread.currentThread().getName() + "\t Reading finished :" + result);
        } finally {
            readWriteLock.readLock().unlock();
        }
    }
}

/**
 * Multiple threads can read the same shared resource at the same time without any problem,
 * so to allow concurrency, reads of a shared resource should be able to happen simultaneously.
 * But if one thread wants to write to the shared resource, no other thread should be able to
 * read from or write to that resource at the same time.
 *
 * Summary:
 * - read/read can coexist
 * - read/write cannot coexist
 * - write/write cannot coexist
 *
 * A write operation must be atomic and exclusive: the whole process must run as one complete
 * unit and must not be interleaved.
 *
 * @author hejianhui
 * @create 2020-06-21
 * @see ReadWriteLockDemo
 * @since JDK1.8
 */
public class ReadWriteLockDemo {

    public static void main(String[] args) {

        MyCache myCache = new MyCache();

        for (int i = 0; i < 5; i++) {
            int finalI = i;
            new Thread(() -> {
                myCache.put(finalI+"",finalI);
            },"A"+String.valueOf(i)).start();
        }

        for (int i = 0; i < 5; i++) {
            int finalI = i;
            new Thread(() -> {
                myCache.get(finalI+"");
            },"B"+String.valueOf(i)).start(); }}}Copy the code