Synchronized is one of the most frequently asked keywords in interviews.

Recommended reading:

  • MySQL database

  • Computer network high frequency interview questions latest edition

  • Java set high frequency interview questions latest edition

  • Concurrent programming high frequency interview questions

  • Interviewer: implement alternate printing with multiple threads in five different ways

Article Contents:

What is the synchronized keyword?

In a multithreaded environment, multiple threads accessing a shared resource at the same time can cause problems, and the synchronized keyword is used to ensure thread synchronization.

Visibility problems with Java memory

Before you understand the underlying principles of the synchronized keyword, take a quick look at Java’s memory model to see how the synchronized keyword works.

Local memory here is not a real piece of hardware but an abstraction of the Java memory model covering caches, write buffers, registers, and other hardware and compiler optimizations. The model stipulates that a thread must operate on shared variables in its own local memory and cannot operate directly on shared variables in main memory. What can go wrong with this memory model?

  1. Thread A reads the shared variable X. Local memory A has no copy of X, so A loads the value from main memory and caches it in local memory A. Thread A then changes X to 1 and flushes the new value to main memory. Now X is 1 in both main memory and local memory A.
  2. Thread B reads the shared variable X. Local memory B has no copy, so B loads X (now 1) from main memory and caches it in local memory B. Thread B changes X to 2 and flushes it to main memory. Now X is 2 in main memory and local memory B, but still 1 in local memory A.
  3. Thread A reads X again. Since local memory A already has a copy, A reads X as 1 directly from its local memory, even though X in main memory is already 2. This is the so-called memory invisibility problem.

The Java memory model solves this problem with the synchronized and volatile keywords. When a thread enters a synchronized block, the variables used in the block are cleared from that thread's local memory, so the next read inside the block must fetch them from main memory; when the thread exits the block, its writes are flushed back to main memory. This resolves the memory invisibility problem.
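A minimal sketch of the visibility problem and its fix (the class and field names are my own illustration): without volatile or synchronized, a reader thread may keep using its cached copy of a flag and spin forever; declaring the flag volatile forces every read to go to main memory, so the writer's update becomes visible.

```java
public class VisibilityDemo {
    // Without volatile, the reader thread could cache 'running' locally
    // and never observe the writer's update.
    private static volatile boolean running = true;

    // Returns true if the reader thread observed the write and exited.
    public static boolean demo() throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) { /* spin until the flag is cleared */ }
        });
        reader.start();

        Thread.sleep(100);   // let the reader start spinning
        running = false;     // volatile write: visible to the reader
        reader.join(1000);   // without volatile this join could time out
        return !reader.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("reader terminated: " + demo());
    }
}
```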

What are the three characteristics of synchronized?

Interview comparisons are often made between the synchronized keyword and the volatile keyword. The synchronized keyword guarantees three characteristics of concurrent programming: Atomicity, visibility, and orderliness; volatile guarantees visibility, not atomicity, and is also known as lightweight synchronized.

  • Atomicity: one or more operations either all succeed or all fail. The synchronized keyword ensures that only the single thread holding the lock can access the shared resource.
  • Visibility: when one thread modifies a shared variable, other threads can see the change immediately. synchronized guarantees this because the JVM performs the atomic lock operation before the synchronized block and unlock after it, forcing reads from and writes back to main memory.
  • Orderliness: the program executes in the order the code is written.

What types of locks can be implemented with the synchronized keyword?

  • Pessimistic lock: synchronized implements a pessimistic lock, which locks every time the shared resource is accessed.
  • Unfair lock: synchronized implements an unfair lock; threads do not acquire the lock in the order in which they blocked.
  • Reentrant lock: synchronized implements a reentrant lock; a thread that already holds the lock can acquire it again.
  • Exclusive lock: synchronized implements an exclusive (mutual-exclusion) lock; the lock can be held by only one thread at a time, and all other threads block.
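The reentrancy point is easy to verify directly. A small sketch (class and method names are my own): a synchronized method calls another synchronized method on the same object, and because synchronized is reentrant the second acquisition succeeds instead of deadlocking.

```java
public class ReentrantDemo {
    private int depth = 0;

    // outer() already holds the object lock when it calls inner();
    // because synchronized is reentrant, inner() acquires the same
    // lock again instead of deadlocking.
    public synchronized int outer() {
        depth++;
        return inner();
    }

    public synchronized int inner() {
        depth++;
        return depth;
    }

    public static void main(String[] args) {
        System.out.println(new ReentrantDemo().outer()); // prints 2
    }
}
```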

Usage of the synchronized keyword

Synchronized is used in three main ways: modifying instance methods, modifying static methods, and modifying synchronized code blocks.

Modifying instance methods

class syncTest implements Runnable {

    private static int i = 0;   // shared resource

    private synchronized void add() {
        i++;
    }

    @Override
    public void run() {
        for (int j = 0; j < 10000; j++) {
            add();
        }
    }

    public static void main(String[] args) throws Exception {

        syncTest syncTest = new syncTest();

        Thread t1 = new Thread(syncTest);
        Thread t2 = new Thread(syncTest);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(i);
    }
}

This is the classic thread-safety problem of multiple threads executing i++. Because both threads call the synchronized add() on the same instance, the result is easy to predict:

20000

Now look at this slightly changed code and guess what it will print:

class syncTest implements Runnable {

    private static int i = 0;   // shared resource

    private synchronized void add() {
        i++;
    }

    @Override
    public void run() {
        for (int j = 0; j < 10000; j++) {
            add();
        }
    }

    public static void main(String[] args) throws Exception {

        // syncTest syncTest = new syncTest();

        Thread t1 = new Thread(new syncTest());
        Thread t2 = new Thread(new syncTest());
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(i);
    }
}

One possible result:

18634

The add() method in the second example is still synchronized, but the two new syncTest() calls create two different objects and therefore two different object locks. Threads t1 and t2 lock different objects, so the code is no longer thread-safe. How can this be fixed? Every run may create different instance objects, but there is only ever one class object, so if the synchronized lock is the class object the problem goes away: modify a static method with synchronized.
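Besides a static synchronized method, another common fix (a sketch under the same setup; the class name is my own) is to synchronize on an object shared by both threads, such as the class object itself:

```java
public class SyncOnClass implements Runnable {

    private static int i = 0;   // shared resource

    private void add() {
        // SyncOnClass.class is a single shared object, so both threads
        // use the same lock even though they run in different instances.
        synchronized (SyncOnClass.class) {
            i++;
        }
    }

    @Override
    public void run() {
        for (int j = 0; j < 10000; j++) {
            add();
        }
    }

    // Runs the two-thread race and returns the final counter value.
    public static int race() throws Exception {
        i = 0;
        Thread t1 = new Thread(new SyncOnClass());
        Thread t2 = new Thread(new SyncOnClass());
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return i;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(race()); // prints 20000
    }
}
```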

Modifying static methods

Simply add the static modifier to the add() method; when synchronized modifies a static method, the lock is the current class object.

class syncTest implements Runnable {

    private static int i = 0;   // shared resource

    private static synchronized void add() {
        i++;
    }

    @Override
    public void run() {
        for (int j = 0; j < 10000; j++) {
            add();
        }
    }

    public static void main(String[] args) throws Exception {

        // syncTest syncTest = new syncTest();

        Thread t1 = new Thread(new syncTest());
        Thread t2 = new Thread(new syncTest());
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(i);
    }
}

The result is:

20000

Modifying synchronized code blocks

In some cases the method body is large but only a small part of the code needs synchronization. Synchronizing the entire method would hurt performance, so only the critical section is synchronized, as follows:

class syncTest implements Runnable {

    static int i = 0;   // shared resource

    @Override
    public void run() {
        // other operations ...
        synchronized (this) {   // this is the current instance; syncTest.class could be used instead for the class object lock
            for (int j = 0; j < 10000; j++) {
                i++;
            }
        }
    }

    public static void main(String[] args) throws Exception {

        syncTest syncTest = new syncTest();

        Thread t1 = new Thread(syncTest);
        Thread t2 = new Thread(syncTest);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(i);
    }
}

Output result:

20000

Basic principles of synchronized keyword

This question is also frequently asked in interviews and is harder to grasp; understanding synchronized requires some knowledge of the Java virtual machine.

Before JDK 1.6, synchronized was a heavyweight lock; JDK 1.6 introduced biased locking and lightweight locking to reduce the performance overhead of acquiring and releasing locks. The following first introduces how synchronized worked before JDK 1.6.

Object header

In the HotSpot virtual machine, the in-memory layout of a Java object is roughly divided into three parts: the object header, instance data, and alignment padding. Because synchronized locks are stored in the object header, that is what we focus on. For an array object, the header consists of the Mark Word, the Class Metadata Address, and the Array Length; for a non-array object it consists of only the Mark Word and the Class Metadata Address. In a 32-bit virtual machine, the object header of an array-type object is composed as follows:

| Content | Description | Length |
| --- | --- | --- |
| Mark Word | Stores the object's hashCode, generational age, and lock flag bits | 32 bits |
| Class Metadata Address | Pointer to the object's type metadata | 32 bits |
| Array Length | The length of the array | 32 bits |

Here we need to focus on the Mark Word.

Mark Word

At runtime, the data stored in the Mark Word changes as the lock flag bits change. In a 32-bit VM, the data in the different states is as follows:

Here the thread ID is the ID of the thread holding the biased lock, and the epoch is a timestamp for the biased lock. Biased locking and lightweight locking were introduced in JDK 1.6.

Heavyweight locks are implemented with Monitor

Before JDK 1.6, synchronized could only be a heavyweight lock. The Java virtual machine implements heavyweight locking with a monitor object, so let's first look at Monitor. In the HotSpot virtual machine, the monitor is implemented by ObjectMonitor, whose source is written in C++. You can download the HotSpot source here: hg.openjdk.java.net/jdk8/jdk8/h…

ObjectMonitor() {
    _header       = NULL;
    _count        = 0;     // lock counter: incremented when the lock is acquired, decremented when it is released
    _waiters      = 0;     // number of waiting threads
    _recursions   = 0;     // lock reentry count
    _object       = NULL;
    _owner        = NULL;  // points to the thread holding this ObjectMonitor
    _WaitSet      = NULL;  // threads in the waiting state are added to _WaitSet
    _WaitSetLock  = 0;
    _Responsible  = NULL;
    _succ         = NULL;
    _cxq          = NULL;  // singly linked list of threads blocked trying to enter
    FreeNext      = NULL;
    _EntryList    = NULL;  // threads blocked waiting for the lock are added to this list
    _SpinFreq     = 0;
    _SpinClock    = 0;
    OwnerIsThread = 0;
}

Among these, the _owner, _WaitSet, and _EntryList fields are the most important. The transitions between them are shown in the figure below.

The process for obtaining and releasing Monitor is summarized as follows:

  1. When multiple threads access the synchronized block at the same time, they first enter the _EntryList and use CAS to try to set the _owner field of the monitor to the current thread; on success, _count is incremented by 1. If a thread finds that _owner already points to itself, this is a reentrant acquisition and _recursions is incremented by 1. A thread whose CAS attempt fails blocks in the _EntryList.
  2. When the thread holding the lock calls wait(), _owner is reset to null, _count is decremented by 1, and the current thread joins the _WaitSet to wait to be awakened.
  3. When the current thread finishes executing the synchronized block, it releases the lock: _count is decremented by 1 and _recursions is decremented by 1. When _recursions reaches 0, the thread has fully released the lock.
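The bookkeeping above can be modelled in plain Java. This is a simplified sketch, not HotSpot's actual C++ implementation: an AtomicReference plays the role of _owner, an int plays _recursions, and a spin loop stands in for the real _EntryList.

```java
import java.util.concurrent.atomic.AtomicReference;

// Toy model of ObjectMonitor's owner/recursions bookkeeping.
public class ToyMonitor {
    private final AtomicReference<Thread> owner = new AtomicReference<>();
    private int recursions = 0;

    public void enter() {
        Thread current = Thread.currentThread();
        if (owner.get() == current) {   // reentrant acquisition
            recursions++;
            return;
        }
        // CAS on the owner field; spin instead of a real EntryList.
        while (!owner.compareAndSet(null, current)) {
            Thread.onSpinWait();
        }
        recursions = 1;
    }

    public void exit() {
        if (owner.get() != Thread.currentThread()) {
            throw new IllegalMonitorStateException();
        }
        if (--recursions == 0) {
            owner.set(null);            // lock fully released
        }
    }

    public int recursions() {
        return recursions;
    }
}
```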

The monitor (ObjectMonitor) is also why wait() and notify() can only be called inside synchronized methods or synchronized code blocks: those methods operate on the monitor, which the calling thread must own.
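This requirement is easy to observe (a small sketch; the class and method names are my own): calling wait() without holding the object's monitor throws IllegalMonitorStateException, while the same call inside synchronized succeeds.

```java
public class WaitDemo {
    // Returns true if calling wait() without the monitor throws.
    public static boolean waitWithoutLockThrows(Object lock) {
        try {
            lock.wait(1);  // not inside synchronized(lock): no monitor held
            return false;
        } catch (IllegalMonitorStateException e) {
            return true;   // the JVM rejects the call
        } catch (InterruptedException e) {
            return false;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        System.out.println(waitWithoutLockThrows(lock)); // true

        synchronized (lock) {
            lock.wait(1);  // legal: the current thread owns the monitor
        }
    }
}
```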

How synchronized works on synchronized code blocks

You've now seen how Monitor works; the Java virtual machine implements method and code-block synchronization by entering and exiting monitor objects. Installing the jclasslib Bytecode Viewer plugin in IDEA makes it easy to inspect a program's bytecode instructions. Let's start with synchronized applied to a code block.

public void run() {
    // other operations ...
    synchronized (this) {   // this is the current instance; syncTest.class could be used instead for the class object lock
        for (int j = 0; j < 10000; j++) {
            i++;
        }
    }
}

View the code bytecode instructions as follows:

 1 dup
 2 astore_1
 3 monitorenter     // enter the synchronized block
 4 iconst_0
 5 istore_2
 6 iload_2
 7 sipush 10000
10 if_icmpge 27 (+17)
13 getstatic #2 <com/company/syncTest.i>
16 iconst_1
17 iadd
18 putstatic #2 <com/company/syncTest.i>
21 iinc 2 by 1
24 goto 6 (-18)
27 aload_1
28 monitorexit     // exit the synchronized block normally
29 goto 37 (+8)
32 astore_3
33 aload_1
34 monitorexit     // executed if an exception occurs
35 aload_3
36 athrow
37 return

As the bytecode shows, a synchronized code block is implemented with the monitorenter and monitorexit instructions. monitorenter marks the start of the synchronized block; the first monitorexit releases the monitor when the block ends normally, and the second monitorexit releases it when the block ends with an exception.

How synchronized works on synchronized methods

private synchronized void add() {
    i++;
}

View the bytecode as follows:

0 getstatic #2 <com/company/syncTest.i>
3 iconst_1
4 iadd
5 putstatic #2 <com/company/syncTest.i>
8 return

This bytecode contains no monitorenter or monitorexit instructions. Instead, inspecting the method's class-file structure shows a synchronized flag (ACC_SYNCHRONIZED) among its access flags; the Java virtual machine uses this flag to tell whether a method is synchronized. When a flagged method is invoked, the executing thread first acquires the monitor, runs the method, and then releases the monitor.
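You can confirm the flag at runtime via reflection (a sketch; the class and method names are my own): Modifier.isSynchronized reports whether a method's access flags include ACC_SYNCHRONIZED.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

public class FlagDemo {
    synchronized void sync() { }
    void plain() { }

    // Checks the ACC_SYNCHRONIZED access flag of the named method.
    public static boolean isSync(String name) throws NoSuchMethodException {
        Method m = FlagDemo.class.getDeclaredMethod(name);
        return Modifier.isSynchronized(m.getModifiers());
    }

    public static void main(String[] args) throws NoSuchMethodException {
        System.out.println(isSync("sync"));   // true
        System.out.println(isSync("plain"));  // false
    }
}
```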

That's the underlying mechanism. In an interview, you should summarize the basic principle of synchronized succinctly.

A: The Java virtual machine implements code-block synchronization and method synchronization by entering and exiting monitor objects. Code-block synchronization uses the monitorenter and monitorexit instructions, while method synchronization relies on the ACC_SYNCHRONIZED access flag to mark a method as synchronized.

Why was synchronized optimized in JDK 1.6?

The Java virtual machine implements synchronization by entering and exiting monitor objects, and the monitor relies on the underlying operating system's mutex lock. Blocking and waking threads with a mutex requires switching from user mode to kernel mode, which is expensive and has a large impact on performance.

What improvements did JDK 1.6 make to synchronized?

Lock upgrading

JDK 1.6 introduced biased and lightweight locks to reduce the cost of acquiring and releasing locks, giving four lock states in total, as shown in the figure below. The lock state upgrades as contention intensifies, but in general a lock can only be upgraded, not downgraded. This upgrade-only strategy improves the efficiency of acquiring and releasing locks.

Biased locking

Common interview questions: what is the principle of biased locking (how is a biased lock acquired), and what is its advantage (why acquire a biased lock at all)?

Biased locking was introduced to reduce the cost of synchronized blocks that are only ever executed by one thread, i.e. one thread repeatedly acquires the lock with no competition from other threads.

Biased lock acquisition process:

  1. Check whether the Mark Word in the object header is in the biased state. If not, upgrade to a lightweight lock.
  2. If it is, check whether the thread ID in the Mark Word points to the current thread; if so, execute the synchronized block.
  3. If not, perform a CAS operation to compete for the lock. If the CAS succeeds, set the thread ID in the Mark Word to the current thread's ID and execute the synchronized block.
  4. If the CAS fails, upgrade to a lightweight lock.

The acquisition process of bias lock is shown as follows:

Biased lock revocation:

A thread holding a biased lock revokes it only when another thread contends for the lock. When a biased lock is revoked, it reverts to the lock-free state or upgrades to a lightweight lock.

  1. Revoking a biased lock requires reaching a global safe point, a state in which all threads are suspended.
  2. Check whether the lock object is lock-free: if the thread that acquired the biased lock has already exited the critical section, the synchronized code has finished, and the thread re-competing for the lock performs a CAS operation to replace the original thread's ID.
  3. If the thread that acquired the biased lock is still inside the critical section, the synchronized code has not finished, and the lock is upgraded to a lightweight lock.

A one-sentence summary of biased locking: use a CAS operation to record the current thread's ID in the object's Mark Word.

Lightweight lock

Lightweight locking exists to avoid the performance cost of the mutex (heavyweight lock) when multiple threads execute synchronized blocks alternately without actual contention. However, if multiple threads enter the critical section at the same time, the lightweight lock inflates into a heavyweight lock.

Lightweight lock acquisition process:

  1. First, check whether the object is in the lock-free state. If so, the Java virtual machine creates a Lock Record in the current thread's stack frame to store a copy of the object's Mark Word, as shown in the figure.

  2. Copy the object's Mark Word into the Lock Record in the stack frame, point the owner field of the Lock Record at the object, and use a CAS operation to update the object's Mark Word to a pointer to the Lock Record, as shown in the figure.

  3. If step 2 succeeds, the thread has acquired the object's lock; the lock flag bits in the object's Mark Word are set to "00", and the synchronized block executes.

  4. If step 2 fails, check whether the object's Mark Word already points to the current thread's stack frame. If it does, the current thread already holds the lock and this is a reentrant acquisition, so the synchronized block executes directly. If it does not, other threads are competing: the thread tries to acquire the lock by spinning, i.e. repeating step 2; once the spin count exceeds a threshold, the lightweight lock is upgraded to a heavyweight lock.

Lightweight lock release:

Lightweight lock release is also done with a CAS operation: the thread uses CAS to replace the object's Mark Word with the Displaced Mark Word copy stored in its Lock Record. If the CAS succeeds, the lock is released and the object returns to the lock-free state. If it fails, another thread is competing for the lock, and the lock is upgraded to a heavyweight lock.

A one-sentence summary of lightweight locking: copy the object's Mark Word into the current thread's Lock Record, and update the object's Mark Word to a pointer to the Lock Record.

Spin locks

Spinning is not one of the four lock states; rather, lightweight lock competition uses a spin mechanism.

What is a spin lock: when thread A holds the lock and thread B competes for it, thread B does not block immediately but waits in a loop, so that the moment thread A releases the lock, thread B can acquire it.

Why spinning was introduced: blocking and waking threads forces the operating system to switch between user mode and kernel mode, which has a large performance impact, whereas spin-waiting avoids the overhead of a thread switch.

Disadvantages of spinning: although spin-waiting avoids thread-switching overhead, it consumes processor time. If the lock holder releases the lock quickly, spinning works well; if the lock holder keeps the lock for a long time, the spinning thread wastes resources. Spinning is therefore bounded: the spin count is adjusted with the parameter -XX:PreBlockSpin and defaults to 10.

Adaptive spinning: JDK 1.6 introduced adaptive spinning, in which the spin count is not fixed but determined by the previous spin time on the same lock and the state of the lock's owner. If a thread has recently acquired a given lock successfully by spinning, the virtual machine assumes spinning is likely to succeed again and allows a longer spin; if spinning rarely succeeds for a given lock, the virtual machine shortens or skips the spin.

Comparison of biased, lightweight, and heavyweight locks

| Lock | Advantages | Disadvantages | Applicable scenario |
| --- | --- | --- | --- |
| Biased lock | Locking and unlocking need no extra cost; only nanoseconds slower than a non-synchronized method | If threads contend, lock revocation adds extra cost | Only one thread ever accesses the synchronized block |
| Lightweight lock | Competing threads do not block, improving response time | A thread that never wins the lock wastes CPU spinning | Response time matters and the synchronized block executes quickly |
| Heavyweight lock | Contending threads do not spin and consume no CPU | Threads block, so response time is slow | Throughput matters and the synchronized block executes slowly |

This table is from The Art of Concurrent Programming in Java

Do you know anything about lock elimination?

Lock elimination means that during just-in-time compilation the Java virtual machine, using escape analysis, removes locks on objects that cannot possibly be contended by other threads. Lock elimination saves the time of meaningless lock requests.
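The textbook example is a method-local StringBuffer: its append() calls are synchronized, but since the buffer never escapes the method, the JIT can prove no other thread can ever lock it and elides the locks. A sketch (the elimination itself happens inside the JIT and is invisible to the code):

```java
public class LockElisionDemo {
    // sb is a local variable that never escapes concat(), so escape
    // analysis lets the JIT remove the locks taken inside append().
    public static String concat(String a, String b, String c) {
        StringBuffer sb = new StringBuffer();  // append() is synchronized
        sb.append(a);
        sb.append(b);
        sb.append(c);
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concat("a", "b", "c")); // prints abc
    }
}
```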

Do you know anything about lock coarsening?

In general, to improve performance, you always limit the scope of a synchronized block to a minimum so that as few operations need to be synchronized as possible. However, if a series of continuous operations repeatedly lock and unlock an object, frequent mutex synchronization can also cause unnecessary performance costs.

If the VM detects a series of operations that repeatedly lock and unlock the same object, it coarsens the lock, extending the synchronization scope to cover the entire operation sequence. Consider this classic case:

for (int i = 0; i < n; i++) {
    synchronized (lock) {
        // ...
    }
}

This code locks and unlocks on every iteration; after lock coarsening it becomes:

synchronized (lock) {
    for (int i = 0; i < n; i++) {
        // ...
    }
}

When thread 1 enters an object’s synchronized method A, can thread 2 enter the object’s synchronized method B?

No. Thread 2 can only access the object's non-synchronized methods. Executing a synchronized method requires the object's lock, and thread 1 already acquired it when it entered synchronized method A, so thread 2 cannot enter synchronized method B and must wait; it can, however, enter methods not modified by synchronized.
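A sketch that makes this observable (all names are my own): while the main thread holds the object's lock, a second thread calling the non-synchronized method c() proceeds immediately, but stays blocked at synchronized method b() until the lock is released.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class TwoMethodsDemo {
    synchronized void b() { }   // needs the object lock
    void c() { }                // no lock needed

    // Returns true if b() was blocked while the lock was held and
    // succeeded only after the lock was released.
    public static boolean bBlockedWhileLockHeld() throws InterruptedException {
        TwoMethodsDemo obj = new TwoMethodsDemo();
        AtomicBoolean entered = new AtomicBoolean(false);
        Thread t;
        synchronized (obj) {                 // main thread holds obj's lock
            t = new Thread(() -> {
                obj.c();                     // succeeds immediately: no lock
                obj.b();                     // blocks until main releases obj
                entered.set(true);
            });
            t.start();
            Thread.sleep(200);               // give t time to reach b()
            if (entered.get()) {
                return false;                // impossible while lock is held
            }
        }                                    // lock released here
        t.join(1000);
        return entered.get();                // t entered b() after release
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(bBlockedWhileLockHeld()); // true
    }
}
```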

The difference between synchronized and volatile?

  • volatile mainly guarantees memory visibility: a thread's cached copy of the variable is treated as unreliable, so every read goes to main memory. synchronized mainly solves synchronization of multiple threads accessing a shared resource.
  • volatile applies to variables; synchronized applies to code blocks or methods.
  • volatile guarantees only visibility, not atomicity; synchronized guarantees both visibility and atomicity.
  • volatile does not block threads; synchronized does.

The difference between synchronized and Lock?

  • Lock is an explicit lock that must be acquired and released manually; synchronized is an implicit lock that releases automatically.

  • Lock is a JDK interface implemented in Java code; synchronized is a keyword implemented by the JVM.

  • Lock is interruptible: a waiting thread can give up waiting. synchronized is non-interruptible: a thread waiting for the lock can proceed only after the holding thread releases it.

  • When an exception occurs, Lock does not release the lock automatically; it must be released manually with unlock() (usually in a finally block), otherwise a deadlock may occur. synchronized releases the lock automatically when an exception is thrown, so no deadlock arises from that.

  • Lock can test the lock state (for example with tryLock()); synchronized cannot.

  • Lock can implement both reentrant and fair locks; synchronized is a reentrant lock but not a fair one.

  • Lock is suitable for synchronizing large amounts of code; synchronized suits small synchronized code blocks.
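The explicit-lock side of the comparison looks like this (a sketch using java.util.concurrent.locks.ReentrantLock; the class name is my own): lock() and unlock() are manual, unlock() belongs in a finally block so exceptions cannot leak the lock, and tryLock() lets a thread test the lock instead of blocking.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockDemo {
    private final ReentrantLock lock = new ReentrantLock();
    private int i = 0;

    public void add() {
        lock.lock();          // explicit acquire (synchronized does this implicitly)
        try {
            i++;
        } finally {
            lock.unlock();    // must release manually, even on exceptions
        }
    }

    public int value() {
        return i;
    }

    public static void main(String[] args) throws InterruptedException {
        LockDemo demo = new LockDemo();
        Runnable task = () -> {
            for (int j = 0; j < 10000; j++) {
                demo.add();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(demo.value());            // 20000
        System.out.println(demo.lock.tryLock());     // true: lock is free now
        demo.lock.unlock();
    }
}
```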

You can search the public account zhang on wechat, reply to the interview manual, and get the PDF version of more frequent interview questions summarized by me.