Optimistic locks, pessimistic locks, fair locks, spin locks, biased locks, lightweight locks, heavyweight locks, lock inflation… sound confusing? Don't worry — this article walks through them one by one.

The last article introduced the use of thread pools. It turns out that beyond the performance benefits of thread pools, there is another issue to deal with: thread safety.

So what is thread safety?

1. How thread-safety problems arise

Thread-safety problem: a problem in multithreaded programming where concurrent operations on the same shared resource cause the actual result to differ from the expected result.

For example, suppose A and B each transfer 100,000 yuan to C at the same time, and the transfer operation is not atomic. A's transfer reads C's balance of 200,000 and computes 200,000 + 100,000 = 300,000, but before it writes 300,000 back, B's transfer arrives, also reads C's balance as 200,000, adds 100,000, and writes 300,000 back. Then A's transfer continues and writes its own 300,000 back to C's account. The final balance of C is 300,000 instead of the expected 400,000 — one of the two transfers is lost.

If that doesn’t make sense to you, let’s look at a simulation of thread-unsafe code:

public class ThreadSafeSample {
    public int number;

    public void add() {
        for (int i = 0; i < 100000; i++) {
            int former = number++;
            int latter = number;
            if (former != latter - 1) {
                System.out.println("former=" + former + " latter=" + latter);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadSafeSample threadSafeSample = new ThreadSafeSample();
        Thread threadA = new Thread(() -> threadSafeSample.add());
        Thread threadB = new Thread(() -> threadSafeSample.add());
        threadA.start();
        threadB.start();
        threadA.join();
        threadB.join();
    }
}

=> former=5555 latter=6061

As you can see, even with only two threads and modest contention, it is easy to observe former and latter being unequal. This is because, between the two reads, another thread may have modified number.

2. Thread-safety solutions

Thread-safety solutions fall into the following categories (see Code Efficiently: A Java Development Manual):

  • Make the data visible to only a single thread (a thread operating on its own data has no thread-safety problem; ThreadLocal is the tool for this);
  • Make the data read-only;
  • Use a thread-safe class (for example, StringBuffer, which is implemented internally with synchronized);
  • Use synchronization and locking mechanisms.
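As an illustration of the first point, here is a minimal ThreadLocal sketch (a hypothetical example, not from the original article): each thread increments its own private copy of a counter, so no locking is needed at all.

```java
public class ThreadLocalExample {
    // Each thread sees its own copy of the counter, so there is no shared
    // mutable state and therefore no thread-safety problem.
    private static final ThreadLocal<Integer> counter = ThreadLocal.withInitial(() -> 0);

    public static int incrementAndGet() {
        int next = counter.get() + 1;
        counter.set(next);
        return next;
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                incrementAndGet();
            }
            // Each thread prints 1000: the other thread's increments are invisible to it.
            System.out.println(Thread.currentThread().getName() + " sees " + counter.get());
        };
        Thread a = new Thread(task, "thread-A");
        Thread b = new Thread(task, "thread-B");
        a.start(); b.start();
        a.join(); b.join();
    }
}
```

Note that this only works when per-thread counts are actually what you want; ThreadLocal sidesteps sharing rather than synchronizing it.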

The core idea behind solving thread safety is "read-only or lock". In practice, the key is to make good use of the thread-safe utilities in java.util.concurrent (JUC for short).
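For instance, the counter from the earlier example can be made thread-safe without an explicit lock by using AtomicInteger from JUC. This is a minimal sketch of one option, not the only way:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounter {
    private final AtomicInteger number = new AtomicInteger(0);

    // incrementAndGet() is a single atomic CAS-based operation,
    // so concurrent callers cannot lose updates.
    public void add(int times) {
        for (int i = 0; i < times; i++) {
            number.incrementAndGet();
        }
    }

    public int get() {
        return number.get();
    }

    public static void main(String[] args) throws InterruptedException {
        AtomicCounter counter = new AtomicCounter();
        Thread a = new Thread(() -> counter.add(100000));
        Thread b = new Thread(() -> counter.add(100000));
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(counter.get()); // prints 200000
    }
}
```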

3. Thread synchronization and locking

Before Java 5, synchronized was the only means of synchronization. Java 5 added ReentrantLock, which has the same semantics as synchronized but is more flexible and allows finer-grained control, such as choosing between fair and unfair locking.

3.1 synchronized

synchronized is a built-in Java synchronization mechanism that provides both mutual exclusion and visibility: once one thread has acquired the lock, other threads attempting to acquire it must wait or block.

3.1.1 Using synchronized

synchronized can modify either a method or a block of code.

3.1.1.1 On a code block

synchronized (this) {
    int former = number++;
    int latter = number;
    // ...
}

3.1.1.2 On a method

public synchronized void add() {
    // ...
}
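Putting this together, a hypothetical corrected version of the earlier ThreadSafeSample might guard the counter with synchronized. With the lock in place, the lost-update anomaly can no longer occur:

```java
public class SynchronizedSample {
    private int number;

    // The whole method is one critical section guarded by the monitor of `this`,
    // so number++ can no longer interleave between threads.
    public synchronized void add(int times) {
        for (int i = 0; i < times; i++) {
            number++;
        }
    }

    public synchronized int getNumber() {
        return number;
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedSample sample = new SynchronizedSample();
        Thread a = new Thread(() -> sample.add(100000));
        Thread b = new Thread(() -> sample.add(100000));
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(sample.getNumber()); // prints 200000 every run
    }
}
```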

3.1.2 Implementation principles of synchronized

synchronized is implemented with a pair of monitorenter/monitorexit bytecode instructions, and the Monitor object is the basic unit of synchronization. Before Java 6, the Monitor implementation relied entirely on the operating system's mutexes; because every synchronization required a switch from user mode to kernel mode, synchronized was an undifferentiated heavyweight operation with poor performance. In Java 6, however, the JVM improved this significantly by providing three different Monitor implementations — biased locking, lightweight locking, and heavyweight locking — which greatly improved performance.

3.1.2.1 Biased locking/lightweight locking/heavyweight locking

Biased locking is designed to minimize the cost of locking when there is no multithreaded contention.

A lightweight lock arises when a biased lock is accessed by a second thread: the biased lock is upgraded to a lightweight lock, and the other threads spin trying to acquire it rather than blocking, which improves performance.

While the lock is lightweight, other threads spin, but not forever: once a thread has spun a certain number of times without acquiring the lock, it blocks, and the lock inflates to a heavyweight lock. A heavyweight lock blocks all other requesting threads, which degrades performance.

3.1.2.2 Lock inflation (escalation) mechanism

Java 6 optimized the synchronized implementation with a progression from biased lock to lightweight lock to heavyweight lock, reducing the cost of locking; this progression is commonly called lock inflation or lock escalation. How does it work?

Lock inflation (escalation), roughly: the header of the lock object contains a ThreadId field. On first access the field is empty, so the JVM biases the lock toward the accessing thread and records its ThreadId; on later entries, if the recorded ThreadId matches the current thread, the biased lock is used directly. If a different thread contends, the biased lock is upgraded to a lightweight lock, and threads spin a bounded number of times trying to acquire it without blocking. If the lock still cannot be acquired after that many spins, it is upgraded to a heavyweight lock and contending threads block. This whole progression is the lock inflation (escalation) process.

3.1.2.3 Spin locks

A spin lock means the thread trying to acquire the lock does not block immediately but retries in a loop. The advantage is avoiding the cost of a thread context switch; the disadvantage is that the loop burns CPU.
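To make the idea concrete, here is a toy spin lock built on AtomicBoolean and CAS. The real spin locks inside the JVM live in the runtime, so this is only an illustrative sketch, and note it is not reentrant:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A toy spin lock: acquire by CAS-ing a flag from false to true,
// looping (spinning) until it succeeds instead of parking the thread.
public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // CPU hint while busy-waiting (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] count = {0};
        Runnable task = () -> {
            for (int i = 0; i < 100000; i++) {
                lock.lock();
                try {
                    count[0]++; // protected by the spin lock
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println(count[0]); // prints 200000 - no lost updates
    }
}
```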

3.1.2.4 Optimistic/Pessimistic lock

Pessimistic and optimistic locks are not specific “locks” but are basic concepts of concurrent programming.

Pessimistic locking assumes that any concurrent access to the same data will involve modification, even when it does not, so it always locks before operating on shared data. The pessimistic view is that unlocked concurrent operations are bound to cause problems.

Optimistic locking assumes conflicts are the exception. Like the atomic field updaters in java.util.concurrent.atomic, it typically relies on the CAS mechanism: instead of locking the data, it implements the version check that optimistic locking requires by comparing a timestamp or version number at write time.
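A small sketch of the optimistic pattern using AtomicInteger.compareAndSet (the balance values are arbitrary illustration): the update succeeds only if the value is still the one that was read; a writer holding a stale expectation fails and must re-read and retry.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger balance = new AtomicInteger(100);

        // Optimistic update: write 150 only if the balance is still the 100 we read.
        boolean first = balance.compareAndSet(100, 150);

        // A second writer using the stale expected value 100 fails and must re-read.
        boolean second = balance.compareAndSet(100, 200);

        System.out.println(first + " " + second + " " + balance.get()); // true false 150
    }
}
```

Where the ABA problem matters, JUC also offers AtomicStampedReference, which pairs the value with a version stamp.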

3.1.2.5 Fair/Non-Fair Lock

A fair lock is one in which multiple threads acquire a lock in the order in which they apply for it.

An unfair lock means that multiple threads acquire a lock in an order other than that in which they apply for the lock. It is possible that the thread that applies for the lock later takes precedence over the thread that applies for the lock first.

synchronized uses unfair locking and cannot be configured otherwise, which matches the thread-scheduling choice of mainstream operating systems. In typical scenarios fairness is less important than it sounds: Java's default scheduling policy rarely causes starvation, and unfair locks have higher throughput than fair locks.

Why unfair locks have higher throughput than fair locks:

Suppose A holds the lock and B requests it, finds it occupied, and blocks waiting to be woken. Meanwhile C also requests the lock while A still holds it. With a fair lock, C must queue behind B and can only run after B has been woken. With an unfair lock, C may grab the lock as soon as A releases it; by the time B is woken, C has already finished. This skips the wait-and-wakeup cost for C, which is why unfair locks deliver higher throughput.

3.2 ReentrantLock

ReentrantLock can only be applied to code blocks. When using ReentrantLock you must unlock it manually; otherwise the lock is never released.

3.2.1 Using ReentrantLock

ReentrantLock reentrantLock = new ReentrantLock(true); // true creates a fair lock; the default is unfair
reentrantLock.lock();
try {
    // ...
} finally {
    reentrantLock.unlock();
}

3.2.2 Advantages of ReentrantLock

  • Non-blocking lock acquisition: the current thread attempts to acquire the lock and succeeds immediately if no other thread holds it at that moment;

  • Interruptible lock acquisition: unlike with synchronized, a thread waiting to acquire the lock can respond to interruption — when interrupted, it throws InterruptedException and abandons the wait;

  • Timed lock acquisition: the lock is acquired within a specified time range, and the attempt returns if the lock cannot be acquired by the deadline.
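These three capabilities correspond to ReentrantLock's tryLock(), lockInterruptibly(), and tryLock(timeout, unit). Here is a small sketch of the timed variant (the 100 ms timeout and 500 ms hold are arbitrary values for illustration):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    // Try to take the lock for at most 100 ms; report instead of blocking forever.
    public static String attempt(ReentrantLock lock) throws InterruptedException {
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                return "acquired";
            } finally {
                lock.unlock();
            }
        }
        return "gave up";
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        System.out.println(attempt(lock)); // prints "acquired" - the lock is free

        CountDownLatch held = new CountDownLatch(1);
        Thread holder = new Thread(() -> {
            lock.lock();
            try {
                held.countDown();  // signal that the lock is now held
                Thread.sleep(500); // keep holding it past the 100 ms timeout
            } catch (InterruptedException ignored) {
            } finally {
                lock.unlock();
            }
        });
        holder.start();
        held.await(); // wait until the holder really owns the lock
        System.out.println(attempt(lock)); // prints "gave up" - timed out
        holder.join();
    }
}
```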

3.2.3 Precautions for ReentrantLock

  • Release the lock in a finally block, so that once acquired it is always released;
  • Do not put the lock acquisition itself inside the try block: if acquiring the lock throws an exception, the finally block would try to release a lock that was never acquired;
  • ReentrantLock provides the newCondition method, so threads can selectively wait or wake each other under the same lock.
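To illustrate newCondition, here is a minimal one-slot buffer (a hypothetical example, not from the original article): take() waits on the condition until put() signals that a value is available.

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// A minimal one-slot blocking buffer built on ReentrantLock.newCondition().
public class OneSlotBuffer {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private Integer slot; // null means empty

    public void put(int value) {
        lock.lock();
        try {
            slot = value;
            notEmpty.signal(); // wake a thread waiting in take()
        } finally {
            lock.unlock();
        }
    }

    public int take() throws InterruptedException {
        lock.lock();
        try {
            while (slot == null) {
                notEmpty.await(); // releases the lock while waiting
            }
            int value = slot;
            slot = null;
            return value;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        OneSlotBuffer buf = new OneSlotBuffer();
        Thread consumer = new Thread(() -> {
            try {
                System.out.println("took " + buf.take());
            } catch (InterruptedException ignored) {
            }
        });
        consumer.start();
        buf.put(42);     // wakes the consumer
        consumer.join(); // prints "took 42"
    }
}
```

The while loop around await() guards against spurious wakeups, which is the standard idiom for condition waits.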

3.3 Differences between Synchronized and ReentrantLock

From a performance point of view, the early synchronized implementation was inefficient and in most scenarios performed noticeably worse than ReentrantLock. After the many improvements in Java 6, however, the two are close, with ReentrantLock retaining some advantage under heavy contention. In most cases, the deciding factors should not be micro-performance but the clarity and maintainability of the code.

The main differences are as follows:

  1. ReentrantLock is more flexible, but every lock acquisition must be paired with a release;
  2. ReentrantLock requires manually acquiring and releasing the lock, while synchronized acquires and releases it automatically;
  3. ReentrantLock applies only to code blocks, while synchronized can modify methods as well as code blocks.

References

Code Efficiently: A Java Development Manual

Java core technology 36 talk: t.cn/EwUJvWA

Lock classification in Java: www.cnblogs.com/qifengshi/p…
