Abstract

How the synchronized lock works comes up frequently in interviews. This article walks through the following questions to help you understand the principle behind synchronized:

1. What is a synchronized lock? What object does it lock on?

2. What is the execution process of biased lock, lightweight lock and heavyweight lock?

3. Why are lightweight and heavyweight locks unfair?

4. Why do heavyweight locks need spin operation?

5. When will lock upgrade or lock degradation occur?

6. What are the advantages and disadvantages of biased locks, lightweight locks, and heavyweight locks?

1. What is a synchronized lock? What object does it lock on?

A method or code block marked with synchronized can be executed by only one thread at a time, which keeps shared data safe.

In general, synchronized can modify a code block, an instance method, or a static method. The corresponding lock objects are, respectively, the object named in the parentheses of the synchronized block, the instance object, and the Class object.

In terms of implementation principles,

  • Synchronized code block: during compilation, javac emits monitorenter and monitorexit instructions at the entry and exit of the synchronized block, representing the attempt to acquire the lock and its release. (To ensure that the lock is released even when an exception is thrown, javac wraps the synchronized block in an implicit try-finally and calls monitorexit in the finally to release the lock.)
  • Synchronized method: javac adds the ACC_SYNCHRONIZED flag to the method's flags attribute. When the JVM invokes a method and finds it marked ACC_SYNCHRONIZED, it tries to acquire the lock first.
public class SyncTest {
    private Object lockObject = new Object();

    // Synchronized block: locks the object in parentheses (lockObject here)
    public void syncBlock() {
        synchronized (lockObject) {
            System.out.println("hello block");
        }
    }

    // Synchronized instance method: locks the current instance object
    public synchronized void syncMethod() {
        System.out.println("hello method");
    }

    // Synchronized static method: locks the current class (SyncTest.class)
    public static synchronized void staticSyncMethod() {
        System.out.println("hello static method");
    }
}
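As a quick check of the claim that the instance lock and the class lock are different, here is a small demo (the class name LockObjectDemo and its latch-based choreography are my own illustration, not from the original article): a thread holding the instance lock does not block another thread entering a static synchronized method of the same class.

```java
import java.util.concurrent.CountDownLatch;

public class LockObjectDemo {
    private final CountDownLatch instanceLockHeld = new CountDownLatch(1);
    private final CountDownLatch release = new CountDownLatch(1);

    // Locks the current instance (this)
    public synchronized void holdInstanceLock() throws InterruptedException {
        instanceLockHeld.countDown(); // signal: instance lock is now held
        release.await();              // keep holding it until told to stop
    }

    // Locks LockObjectDemo.class, NOT the instance
    public static synchronized String staticMethod() {
        return "entered static method";
    }

    public static void main(String[] args) throws InterruptedException {
        LockObjectDemo demo = new LockObjectDemo();
        Thread holder = new Thread(() -> {
            try { demo.holdInstanceLock(); } catch (InterruptedException ignored) {}
        });
        holder.start();
        demo.instanceLockHeld.await();      // wait until the instance lock is held
        System.out.println(staticMethod()); // returns immediately: different lock
        demo.release.countDown();
        holder.join();
    }
}
```

If both methods locked the same object, the call to staticMethod() would deadlock here; because they lock different objects, the program terminates normally.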

2. What is the execution process of biased lock, lightweight lock and heavyweight lock?

In the JVM, a Java object consists of an object header, instance data, and alignment padding. The object header consists of a Mark Word plus a pointer to the class the object belongs to (and, for array objects, the array length).

Mark Word: stores the object's runtime data, such as its hashCode, GC generation age, lock state flags, and information about the thread holding the lock. It occupies 4 bytes on a 32-bit JVM and 8 bytes on a 64-bit JVM, so the amount of data it can store is limited. The biased-lock flag and the lock-state bits therefore determine how the remaining bits are interpreted. The layout is shown in the figure below:

Lock information is stored in the lock object's Mark Word. When the object is in the biased-lock state, the Mark Word stores the ID of the biased thread. In the lightweight-lock state, it stores a pointer to the Lock Record in the owning thread's stack. In the heavyweight-lock state, it stores a pointer to a Monitor object on the heap.

The flow chart below (found online) shows the overall process; you may want to look at it first and refer back to it while reading the explanation.

Biased locking

The HotSpot authors found through research that in most cases a lock is not contended by multiple threads and is always acquired repeatedly by the same thread, so they introduced biased locking.

Simply put, when the lock object is in the biased state, its Mark Word stores the ID of the thread the lock is biased toward. If the thread acquiring the lock has the same ID, it is the same thread and can proceed directly, without the CAS operations that lightweight locking needs for locking and unlocking.

Biased lock acquisition process:

Scenario 1: When the lock object is first acquired by the thread

If the thread finds the lock in the anonymously biased state (that is, the Mark Word of the lock object stores no thread ID), it uses a CAS instruction to change the thread ID in the Mark Word from 0 to the current thread's ID. If this succeeds, the thread has obtained the biased lock and continues executing the synchronized block. Otherwise, the biased lock is revoked and upgraded to a lightweight lock.
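The "CAS the thread ID from 0 to ours" transition above can be modeled in a few lines. This is a sketch of the idea only, not real JVM code: the thread-ID field of the Mark Word is modeled as an AtomicLong, where 0 means anonymously biased (the class and method names are made up for illustration).

```java
import java.util.concurrent.atomic.AtomicLong;

public class BiasedLockSketch {
    // Models the thread-ID bits of the Mark Word; 0 = anonymous bias
    private final AtomicLong biasedThreadId = new AtomicLong(0);

    /**
     * Try to bias the lock toward the given thread. Returns true on first
     * acquisition (CAS from 0) or on re-entry by the already-biased thread.
     */
    public boolean tryBias(long threadId) {
        long owner = biasedThreadId.get();
        if (owner == threadId) {
            return true; // re-entry by the biased thread: no CAS needed
        }
        // first acquisition: CAS the stored thread ID from 0 to ours;
        // fails if another thread already biased the lock
        return owner == 0 && biasedThreadId.compareAndSet(0, threadId);
    }
}
```

Note how re-entry by the biased thread skips the CAS entirely, which is exactly why repeated acquisition by the same thread is so cheap.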

Scenario 2: When the thread that acquired the bias lock enters the synchronization block again

Finding that the thread ID stored in the lock object is the current thread's ID, it pushes a Lock Record with an empty Displaced Mark Word onto the current thread's stack and then continues executing the synchronized block. Since only the thread's private stack is touched, no CAS instruction is needed. So in biased-lock mode, when the biased thread re-acquires the lock, only a few simple operations are performed, and the performance overhead of the synchronized keyword is essentially negligible.

Scenario 3: When a thread that has not acquired the lock enters the synchronization block

When a thread that has not acquired the lock enters the synchronization block and finds that the lock is in the biased state storing another thread's ID (that is, another thread holds the biased lock), it enters the logic to revoke the biased lock. In general, the JVM checks at a safepoint whether the biased thread is still alive:

  • If the thread is alive and still executing inside the synchronized block, the lock is upgraded to a lightweight lock. The originally biased thread continues to own the lock, but now holds it as a lightweight lock, continues executing the block, and later unlocks in the lightweight-lock manner, while other threads spin trying to acquire the lightweight lock.
  • If the biased thread is no longer alive or is not in the synchronized block, the object header's Mark Word is changed to the unlocked state.

Thus the timing of a biased-lock upgrade is: while one thread holds the biased lock and is still executing in the synchronized block, another thread attempts to acquire the lock; the biased lock is then upgraded to a lightweight lock.

Biased lock release process

Releasing a biased lock is simple: the obj field of the most recent Lock Record in the thread's stack is set to null. Note that this step does not change the thread ID in the lock object's Mark Word. In other words, when the object is in the biased-lock state, the thread ID in the Mark Word may belong to the thread currently executing the synchronized block, or to a thread that already finished and released the biased lock. The point is that the next time that thread enters the synchronized block, it can see that the thread ID in the Mark Word matches its own and proceed directly, instead of using a CAS operation to write its thread ID into the Mark Word again. That is the general flow of biased locking:

Lightweight lock

A heavyweight lock relies on the operating system's mutex (Mutex Lock). Because a mutex suspends the current thread and requires a switch from user mode to kernel mode, the switching cost is very high. Moreover, much of the time there is no real multi-thread contention: for a while thread A executes the synchronized block, then thread B does, so the threads alternate rather than compete simultaneously, and using a heavyweight lock in such cases is inefficient. With a heavyweight lock, threads that fail to acquire the lock block and must later be woken up, and both blocking and waking are expensive. If the synchronized block executes quickly, a waiting thread can instead spin (keep the CPU and execute some empty instructions or a short loop) while waiting for the lock, which is more efficient. The lightweight lock therefore targets the case of no lock contention; if there is contention but it is mild, spinning can still help, and when spinning fails the lock is upgraded to a heavyweight lock.

The process of adding lightweight locks

The JVM creates space for storing lock records, called a Lock Record, in the current thread's stack frame. When a thread is about to acquire a lightweight lock, it first copies the lock object's Mark Word into its Lock Record; this copy is called the Displaced Mark Word.

The thread then uses a CAS operation to try to replace the lock object's Mark Word with a pointer to the Lock Record it just copied into its own stack. If the CAS succeeds, the current thread has acquired the lock; if it fails, the Mark Word has already been replaced with another thread's lock record, meaning the lock is contended, and the current thread tries to acquire it by spinning.
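The CAS-then-spin acquire can be sketched with an AtomicReference standing in for the Mark Word. This is a toy model of the idea, not how HotSpot actually stores lock records (the class name and the use of the Thread itself as the "lock record" are simplifications):

```java
import java.util.concurrent.atomic.AtomicReference;

public class LightweightLockSketch {
    // Models the lock object's Mark Word: null = unlocked,
    // otherwise a reference to the owning thread (standing in for its Lock Record)
    private final AtomicReference<Thread> markWord = new AtomicReference<>(null);

    public void lock() {
        Thread self = Thread.currentThread();
        // spin: repeatedly try the CAS until the Mark Word points to us
        while (!markWord.compareAndSet(null, self)) {
            Thread.onSpinWait(); // hint to the CPU that we are busy-waiting (JDK 9+)
        }
    }

    public void unlock() {
        // CAS the Mark Word back to the unlocked state (mirrors the release
        // step: copying the Displaced Mark Word back via CAS)
        markWord.compareAndSet(Thread.currentThread(), null);
    }
}
```

In the real JVM a spin that goes on too long triggers inflation to a heavyweight lock instead of spinning forever; that fallback is omitted here.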

Spin: Repeated attempts to acquire the lock, usually in a loop.

Spinning consumes CPU: if the lock is never acquired, the thread keeps spinning and wastes CPU resources.

The JDK uses adaptive spinning: simply put, if a thread's spin succeeded last time, it is allowed to spin longer next time; if it failed, it spins less.

Spinning does not continue indefinitely. If the lock still has not been acquired after a certain number of spins (a threshold determined by the JVM), the thread blocks and the lock is upgraded to a heavyweight lock.

Release process for lightweight locks

When releasing the lock, the current thread uses a CAS operation to copy the Displaced Mark Word back into the lock object's Mark Word. If no contention occurred, the copy succeeds. If another thread has upgraded the lightweight lock to a heavyweight lock after spinning too many times, the CAS fails; the lock is then released in heavyweight fashion and the blocked threads are woken up. Lightweight lock/unlock flow chart:

Heavyweight lock

When multiple threads compete for a heavyweight lock at the same time, the heavyweight lock uses several states and queues to organize the requesting threads:

Contention List: all threads requesting the lock are placed here first. Although often called a queue, it is actually last-in-first-out, more like a stack. When the Entry List is empty, the Owner thread takes a thread directly from the tail of the Contention List and makes it the OnDeck thread to compete for the lock. The OnDeck thread still has to spin to acquire the lock, so it competes with threads that are spinning and have not yet entered the Contention List.

Entry List: threads in the Contention List that qualify as candidates are moved to the Entry List, mainly to reduce concurrent access to the Contention List, since new threads are constantly being pushed onto it while candidates are taken from its tail.

Wait Set: threads that block by calling the wait method are placed in the Wait Set.

OnDeck: At most one thread is competing for a lock at any time, called OnDeck.

Owner: the thread that has acquired the lock is called the Owner. !Owner: the thread that has released the lock.

Heavyweight lock execution process:

The flow chart is as follows:

In step 1, before entering the Contention List, the thread first attempts to acquire the lock by spinning with a CAS operation. If that fails, it enters the Contention List.

In step 2, when the Owner thread unlocks, if the Entry List is empty, it first moves the thread at the tail of the Contention List into the Entry List.

In step 3, when the Owner thread unlocks, if the Entry List is not empty, it takes a thread from the Entry List and makes it the OnDeck thread. The Owner does not hand the lock directly to the OnDeck thread; it only grants it the right to compete, and OnDeck must re-compete for the lock. In the JVM this selection behavior is called "competitive switching". (The competition is mainly with threads that have not yet entered the Contention List and are still spinning to acquire the heavyweight lock.)

In step 4, when the OnDeck thread acquires the lock, it becomes the Owner thread.

In step 5, if the Owner thread calls the lock object's wait() method, it moves into the Wait Set, releasing both the CPU and the lock.

In step 6, when another thread calls notify() on the lock object, a thread in the Wait Set that called wait() moves to the Entry List and waits for the lock again.
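The queue discipline described in steps 1-4 can be modeled with two deques. This is a toy model of the bookkeeping only (the class name is made up, and HotSpot's real ObjectMonitor is considerably more complex): new waiters are pushed onto the head of the Contention List (making its insertion stack-like), and when the Entry List is empty the releasing Owner moves the waiter at the tail of the Contention List into the Entry List, from which the next OnDeck candidate is chosen.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class MonitorQueueSketch {
    private final Deque<String> contentionList = new ArrayDeque<>();
    private final Deque<String> entryList = new ArrayDeque<>();

    /** A thread that failed its initial spin joins the Contention List. */
    public void arrive(String thread) {
        contentionList.addFirst(thread); // LIFO push: newest waiter at the head
    }

    /** Called by the Owner on unlock: pick the next OnDeck candidate. */
    public String selectOnDeck() {
        if (entryList.isEmpty() && !contentionList.isEmpty()) {
            // move the oldest waiter (tail of the Contention List) over
            entryList.addLast(contentionList.pollLast());
        }
        return entryList.pollFirst(); // null if nobody is waiting
    }
}
```

Even in this simplified model, the selected OnDeck thread would still have to win a CAS race against spinning newcomers before actually becoming the Owner, which is the source of the unfairness discussed next.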

3. Why are lightweight and heavyweight locks unfair?

Biased locking involves no multi-thread competition, so fairness does not apply to it. With lightweight locks, multiple threads spin and use CAS to replace the lock's Mark Word with a pointer to the lock record copied into their own stack, so which thread gets the lock depends on luck and timing, not arrival order. The main reason heavyweight locks are unfair is that a newly arriving thread does not enter the Contention List directly but first spins to acquire the lock, so a late-arriving thread may get the lock before threads that have been waiting.

4. Why do heavyweight locks need spin operation?

Threads in the ContentionList, EntryList, and WaitSet are all blocked, and blocking is handled by the operating system (on Linux, via the pthread_mutex_lock function). A blocked thread enters the kernel's scheduling state, causing the system to switch back and forth between user mode and kernel mode, which severely hurts lock performance. If the synchronized block is short and executes quickly, a newly arriving thread can spin to acquire the lock and run first instead of blocking, reducing this extra overhead and improving system throughput.

5. When will lock upgrade or lock degradation occur?

Biased lock upgrades to lightweight lock: when a different thread competes for the lock. Specifically, when a thread finds that the lock is in the biased state, that the thread ID stored in the lock object belongs to another thread, and that the obj field of the Lock Record found in that thread's stack is not null (indicating the thread holding the biased lock is still executing the synchronized block), the biased lock is upgraded to a lightweight lock.

Lightweight lock upgrades to heavyweight lock: with a lightweight lock, a thread that fails to acquire the lock spins, and after a certain number of spins the lock is upgraded, because spinning consumes CPU and long spins hurt performance. Lock degradation exists because a heavyweight lock incurs extra overhead when there is no multi-thread contention: when the JVM enters a safepoint, it checks for idle Monitors and tries to downgrade them.

6. What are the advantages and disadvantages of biased locks, lightweight locks, and heavyweight locks?

Finally, here are the pros and cons of the various locks, from The Art of Concurrent Programming:

Biased lock
  • Advantages: locking and unlocking require no extra cost; compared with executing a non-synchronized method there is only a nanosecond-level difference.
  • Disadvantages: if threads contend for the lock, there is the extra cost of lock revocation.
  • Applicable scenario: only one thread ever accesses the synchronized block.

Lightweight lock
  • Advantages: competing threads do not block, improving the program's response time.
  • Disadvantages: a thread that keeps failing to acquire the lock consumes CPU by spinning.
  • Applicable scenario: response time matters and the synchronized block executes very quickly.

Heavyweight lock
  • Advantages: contending threads do not spin and so do not consume CPU.
  • Disadvantages: threads block, and response time is slow.
  • Applicable scenario: throughput matters and the synchronized block takes a long time to execute.

