🙌 Hello, long time no see! This post is packed with practical material — essential for interviews. If you find it helpful, please give it a thumbs-up to encourage future updates 😂.

Classification of lock

Depending on the classification criterion, locks can be divided into the following 7 categories:

  • Biased/lightweight/heavyweight locks;
  • Reentrant/non-reentrant locks;
  • Shared/exclusive locks;
  • Fair/unfair locks;
  • Pessimistic/optimistic locks;
  • Spin/non-spin locks;
  • Interruptible/non-interruptible locks.

For locks in Java, a single lock may satisfy several criteria at once and fall under multiple classifications — for example, ReentrantLock is both interruptible and reentrant.

Biased lock/lightweight lock/heavyweight lock

The first category is the biased lock/lightweight lock/heavyweight lock. These three refer specifically to the states of the synchronized lock, which are recorded in the mark word of the object header.

Biased locking

If a lock is never contended, there is no need to actually lock it — just mark it. That is the idea behind biased locking. After an object is initialized and before any thread has acquired its lock, it is biasable. When the first thread accesses it and tries to acquire the lock, that thread is recorded. If later acquisition attempts come from the same thread — the owner of the biased lock — the lock can be acquired directly with very little overhead, giving the best performance.

Lightweight lock

JVM developers found that in many cases synchronized code is executed by multiple threads alternately rather than simultaneously, meaning there is no actual contention, or only brief contention that CAS can resolve; in such cases a fully mutually exclusive heavyweight lock is unnecessary. A lightweight lock comes into play when a biased lock is accessed by a second thread, which indicates contention: the biased lock is upgraded to a lightweight lock, and threads try to acquire it by spinning rather than blocking.

Heavyweight lock

Heavyweight locks are mutex locks, which are implemented using the synchronization mechanism of the operating system, so the overhead is relatively high. When multiple threads compete with each other for a long time, the lightweight lock cannot meet the demand, and the lock will expand to the heavyweight lock. Heavyweight locks block other threads that apply but can’t get the lock.

Reentrant lock/non-reentrant lock

The second category is reentrant and non-reentrant locks. A reentrant lock means that a thread already holding the lock can acquire it again without first releasing it. Conversely, with a non-reentrant lock, a thread must release the lock before it can acquire it again — even though it is the current holder.
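As a minimal sketch (the class name ReentrantDemo is made up for illustration), the same thread can take a ReentrantLock a second time without deadlocking, as long as it unlocks the same number of times:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    static int outer() {
        lock.lock();                  // hold count: 1
        try {
            return inner() + 1;       // re-enters the lock we already hold
        } finally {
            lock.unlock();
        }
    }

    static int inner() {
        lock.lock();                  // succeeds: hold count goes from 1 to 2
        try {
            return 1;
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        System.out.println(outer()); // prints 2
    }
}
```

With a non-reentrant lock, the call to inner() would block forever waiting for a lock its own thread already holds.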

Shared/exclusive lock

The third classification criterion is shared versus exclusive locks. A shared lock can be held by multiple threads at the same time, whereas an exclusive lock can be held by only one thread at a time. Read-write locks illustrate this best: the read lock is a shared lock and the write lock is an exclusive lock. The read lock can be held by many reading threads simultaneously, while the write lock can be held by at most one thread at a time.
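A short sketch with ReentrantReadWriteLock (ReadWriteDemo is a hypothetical name) shows the two modes side by side:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    private int value;

    public int read() {
        rwLock.readLock().lock();   // shared: many readers may hold this at once
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void write(int v) {
        rwLock.writeLock().lock();  // exclusive: one writer, and no readers
        try {
            value = v;
        } finally {
            rwLock.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        ReadWriteDemo d = new ReadWriteDemo();
        d.write(42);
        System.out.println(d.read()); // prints 42
    }
}
```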

Fair lock/unfair lock

The fourth category is fair and unfair locks. The fairness of a fair lock means that if a thread cannot acquire the lock right now, it joins a wait queue, and the thread that has waited longest gets the lock first — first come, first served. An unfair lock is not so "well-behaved": under certain circumstances it ignores the threads already waiting in the queue, allowing a newcomer to jump the queue.
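In Java, ReentrantLock lets you choose the policy in its constructor; the no-arg constructor defaults to unfair. A minimal sketch (FairnessDemo is a made-up name):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    public static void main(String[] args) {
        ReentrantLock fairLock   = new ReentrantLock(true);  // FIFO queue: first come, first served
        ReentrantLock unfairLock = new ReentrantLock(false); // a newcomer may "jump the queue"

        System.out.println(fairLock.isFair());   // prints true
        System.out.println(unfairLock.isFair()); // prints false
    }
}
```

The unfair default usually gives better throughput, because a thread that happens to arrive just as the lock is freed can take it without the cost of waking a queued thread.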

Pessimistic lock/optimistic lock

The fifth category is the pessimistic lock and its counterpart, the optimistic lock. The idea of a pessimistic lock is that before touching a resource, the lock must be acquired first, achieving an "exclusive" state: while the current thread is manipulating the resource, no other thread can obtain the lock, so no other thread can interfere. Optimistic locking, on the other hand, does not lock the resource before accessing it; instead, it relies on the CAS (compare-and-swap) idea to modify the resource without monopolizing it. For example:

  • Pessimistic locks: the synchronized keyword and the Lock interface.
  • Optimistic locks: the atomic classes (e.g. AtomicInteger).
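A sketch of the optimistic idea with AtomicInteger (CasDemo is a hypothetical name): read the current value without holding any lock, then try to swap it in with compareAndSet, retrying if another thread got there first:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {

    // Increment via an explicit CAS retry loop, the core of optimistic locking.
    static int incrementOnce(AtomicInteger counter) {
        int current;
        do {
            current = counter.get();  // optimistic read, no lock held
        } while (!counter.compareAndSet(current, current + 1)); // retry if another thread won
        return counter.get();
    }

    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger(0);
        System.out.println(incrementOnce(counter)); // prints 1
    }
}
```

In practice you would just call counter.incrementAndGet(), which performs the same CAS loop internally.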

Spinlocks/non-spinlocks

The sixth classification is spin locks and non-spin locks. The idea behind a spin lock is that if a thread cannot get the lock right now, instead of blocking or giving up the CPU, it keeps trying in a loop — metaphorically described as "spinning", as if the thread is spinning in place. In contrast, a non-spin lock does not spin: if it cannot get the lock, it simply gives up or does some other processing, such as queuing or blocking.
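A textbook spin-lock sketch built on CAS (not production code — real spinning should back off or fall back to blocking): a thread that loses the CAS race keeps looping instead of parking.

```java
import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        while (!owner.compareAndSet(null, current)) {
            // busy-wait: burn CPU cycles instead of blocking the thread
        }
    }

    public void unlock() {
        owner.compareAndSet(Thread.currentThread(), null);
    }

    public boolean isLocked() {
        return owner.get() != null;
    }

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        int[] count = {0};
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                lock.lock();
                try { count[0]++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(count[0]); // prints 2000
    }
}
```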

(Figure: comparison of the lock-acquisition process for spin vs. non-spin locks)

Interruptible locks/non-interruptible locks

The seventh category is interruptible and non-interruptible locks. In Java, synchronized is a non-interruptible lock: once a thread has requested the lock, there is no turning back — it cannot do anything else until the lock is acquired. ReentrantLock is a typical interruptible lock: if you acquire it with the lockInterruptibly method and then decide you no longer want it, you can respond to an interrupt and go do something else instead of waiting until the lock is acquired.
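A sketch of that behavior (InterruptibleDemo and waitForLock are made-up names): while one thread holds the lock, a waiter blocked in lockInterruptibly can be interrupted and abandon the attempt, which a thread blocked on synchronized could not do.

```java
import java.util.concurrent.locks.ReentrantLock;

public class InterruptibleDemo {

    // Returns "acquired" or "interrupted" depending on what happens to the caller.
    static String waitForLock(ReentrantLock lock) {
        try {
            lock.lockInterruptibly(); // wait for the lock, but stay responsive to interrupts
            try {
                return "acquired";
            } finally {
                lock.unlock();
            }
        } catch (InterruptedException e) {
            return "interrupted";     // gave up waiting; free to do other work
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        lock.lock(); // main thread holds the lock, so the waiter must queue

        String[] result = new String[1];
        Thread waiter = new Thread(() -> result[0] = waitForLock(lock));
        waiter.start();
        Thread.sleep(100);   // let the waiter block inside lockInterruptibly()
        waiter.interrupt();  // cancel the acquisition attempt
        waiter.join();
        lock.unlock();

        System.out.println(result[0]); // prints "interrupted"
    }
}
```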

What optimizations does the JVM make for locks?

Compared with JDK 1.5, the HotSpot virtual machine in JDK 1.6 made many optimizations to the performance of the built-in synchronized lock, including adaptive spinning, lock elimination, lock coarsening, biased locks, lightweight locks, and so on. With these optimizations, the performance of synchronized is greatly improved.

Adaptive spin lock

Adaptive spin locks were introduced in JDK 1.6 to address the problem of overly long spins. Adaptive means the spin duration is no longer fixed; it is determined by factors such as the success and failure rates of recent spins on the same lock and the state of the current lock owner. The spin duration varies accordingly — the spin lock becomes "smart".

Lock elimination

Lock elimination means that when the just-in-time (JIT) compiler runs, it removes locks from code that is nominally synchronized but where it can prove no shared-data contention is possible. The primary basis for lock elimination is escape analysis: if the JIT determines that none of the heap data touched by a piece of code can escape and be accessed by other threads, that data can be treated as stack data — thread-private — and no synchronization is necessary. Whether a variable escapes must be determined by the virtual machine through data-flow analysis, but the programmer should already know it: why would synchronization be needed when there is clearly no data contention? In fact, many synchronization measures are not added by programmers themselves, and synchronized code is more common in Java programs than most readers might think.

public class MySynchronizedTest07 {

    public void method() {
        Object object = new Object();
        synchronized (object) {
            System.out.println("hello world");
        }
    }
}

As the code above shows, object is a local variable of the method. A method's local variables are thread-private — each thread executing the method has its own object instance — so locking on it is meaningless, and the JIT can eliminate the lock.

Lock coarsening

public void lockCoarsening() {
    synchronized (this) {
        // do something
    }
    synchronized (this) {
        // do something
    }
    synchronized (this) {
        // do something
    }
}

Repeatedly releasing and re-acquiring the lock like this is completely unnecessary. If we widen the synchronization region — acquire the lock once at the very beginning and release it once at the very end — we eliminate the meaningless unlock-and-lock cycles in the middle; in other words, several synchronized blocks are merged into one larger synchronized block. The benefit is that the thread no longer repeatedly applies for and releases the lock while executing the code, which reduces the performance overhead.
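As a sketch of the effect (CoarseningDemo is a hypothetical name — the JIT performs this transformation itself; you don't write it by hand), the three adjacent blocks collapse into one wider block that acquires and releases the monitor once instead of three times:

```java
public class CoarseningDemo {
    private int counter;

    // Before coarsening: three back-to-back synchronized blocks on `this`.
    public void beforeCoarsening() {
        synchronized (this) { counter++; }
        synchronized (this) { counter++; }
        synchronized (this) { counter++; }
    }

    // The shape the JIT effectively produces: one merged block,
    // one monitor acquire and one release for all three operations.
    public void afterCoarsening() {
        synchronized (this) {
            counter++;
            counter++;
            counter++;
        }
    }

    public int count() {
        return counter;
    }

    public static void main(String[] args) {
        CoarseningDemo d = new CoarseningDemo();
        d.beforeCoarsening();
        d.afterCoarsening();
        System.out.println(d.count()); // prints 6
    }
}
```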

Lock the upgrade path

From the above we can see the lock upgrade path: no lock → biased lock → lightweight lock → heavyweight lock. To sum up: a biased lock performs best, since it avoids CAS operations; a lightweight lock uses spinning and CAS to avoid blocking and waking threads, with moderate performance; a heavyweight lock blocks the threads that fail to acquire the lock, and performs worst.

Best practices for using locks

  1. Only lock when updating an object's member variables
  2. Only lock when accessing mutable member variables
  3. Never call a method of another object while holding a lock
  4. Minimize lock holding time and lock granularity
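A sketch of practices 1, 2 and 4 (the Account class and its fields are made up for illustration): guard the mutable state with a private lock object, and keep side work such as logging outside the critical section to shorten the lock holding time.

```java
public class Account {
    private final Object balanceLock = new Object(); // private lock, not `this`
    private int balance;

    public void deposit(int amount) {
        int newBalance;
        synchronized (balanceLock) { // hold the lock only for the state update
            balance += amount;
            newBalance = balance;
        }
        // Logging, notifications, etc. happen outside the lock.
        System.out.println("balance is now " + newBalance);
    }

    public int getBalance() {
        synchronized (balanceLock) { // reads of mutable state are locked too
            return balance;
        }
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(100);
        System.out.println(a.getBalance()); // prints 100
    }
}
```

Using a private lock object instead of `this` also prevents outside code from accidentally (or maliciously) synchronizing on the same monitor.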

Finally, if this post helped you, a like and a bookmark would be much appreciated 😄 ~