  • As autumn recruitment approaches and summer internships loom, I wish you a little progress every day! Day 12
  • This article summarizes interview questions related to Java multithreading and will be updated daily


1. Do you know the volatile keyword? Can you explain how it differs from synchronized?

Thread safety:

Thread safety consists of three aspects: visibility, atomicity, and orderliness.

Volatile features

  • See the article: the volatile keyword
  • A case of volatile ensuring thread visibility: using the volatile keyword
  • Source-code analysis reference: Java synchronization series — volatile parsing

In plain English, thread A’s changes to a volatile variable are visible to other threads, meaning that each time a thread reads the value of a volatile variable, it gets the latest value.

Comparing the two

  • volatile is a lightweight synchronized: it ensures that shared variables are visible. If the value of a volatile variable changes, other threads see it immediately, preventing dirty reads.
  • volatile is lightweight and can only modify variables; synchronized is heavyweight and can also modify methods and code blocks.
  • volatile only ensures visibility of data and cannot be used for mutual exclusion, because threads accessing a volatile variable concurrently do not block. synchronized guarantees not only visibility but also atomicity, because only the thread holding the lock can enter the critical section, ensuring that all statements in it execute as a unit. Blocking occurs when multiple threads compete for the same synchronized lock object.
  • volatile: guarantees visibility, but not atomicity
  • synchronized: guarantees visibility and atomicity

Usage scenarios:

volatile suits writes that do not depend on the variable’s current value. An operation like a++ does depend on the current value, so volatile cannot make it atomic.
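To make the suitable scenario concrete, here is a minimal sketch (class and method names are illustrative) of a legitimate volatile use: a boolean status flag whose only write does not depend on its current value.

```java
public class VolatileFlagDemo {
    // The write "running = false" does not depend on the current value,
    // so volatile's visibility guarantee is enough here; no lock is needed.
    private static volatile boolean running = true;

    // Returns true if the worker saw the flag change and exited.
    public static boolean runDemo() {
        Thread worker = new Thread(() -> {
            while (running) {
                // spin until another thread clears the flag
            }
        });
        worker.start();
        try {
            Thread.sleep(50);
            running = false;   // immediately visible to the worker
            worker.join(2000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !worker.isAlive();
    }

    public static void main(String[] args) {
        System.out.println("worker exited: " + runDemo());
    }
}
```

Without volatile on `running`, the worker’s loop might never observe the update and could spin forever.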

Example: given volatile int i = 0; and a large number of threads calling i++, does volatile keep the variable safe?

No guarantees! volatile does not guarantee the atomicity of operations on a variable!

  • The increment operation consists of three steps: read, add one, write. Because the atomicity of these three sub-operations cannot be guaranteed, if n threads each call i++ once, the final value of i is not n but some number less than n!

  • Explanation:

    • Thread A starts an increment: it reads the initial value of i, which is 0, and is then blocked!
    • Thread B now starts executing; it also reads the initial value 0, performs the increment, and writes i as 1.
    • Thread A then resumes: it adds 1 to the 0 it read earlier and performs the write. After it finishes, i is again written as 1!
    • We expect the output 2, but the output is 1 — smaller than expected!
  • Example code:

    public class VolatileTest {
        public volatile int i = 0;
     
        public void increase() {
            i++;
        }
     
        public static void main(String[] args) throws InterruptedException {
            List<Thread> threadList = new ArrayList<>();
            VolatileTest test = new VolatileTest();
            for (int j = 0; j < 10000; j++) {
                Thread thread = new Thread(new Runnable() {
                    @Override
                    public void run() {
                        test.increase();
                    }
                });
                thread.start();
                threadList.add(thread);
            }
            // Wait for all threads to complete
            for (Thread thread : threadList) {
                thread.join();
            }
            System.out.print(test.i); // may print less than 10000, e.g. 9995
        }
    }

    Conclusion:

    volatile does not require locking and therefore does not block threads, making it lighter than synchronized, which can block threads! On the other hand, volatile disallows instruction reordering on the variable, so some JVM optimizations are lost and it can be less efficient!
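The broken counter above can be fixed without explicit locking by using an atomic class. A minimal sketch (class name illustrative), using java.util.concurrent.atomic.AtomicInteger:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterFix {
    private static final AtomicInteger i = new AtomicInteger(0);

    public static int countWith(int threads) {
        i.set(0);
        List<Thread> list = new ArrayList<>();
        for (int j = 0; j < threads; j++) {
            // incrementAndGet is an atomic read-add-write (implemented with CAS)
            Thread t = new Thread(i::incrementAndGet);
            t.start();
            list.add(t);
        }
        try {
            for (Thread t : list) {
                t.join(); // wait for all threads to complete
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return i.get();
    }

    public static void main(String[] args) {
        System.out.println(countWith(10000)); // always 10000, unlike the volatile counter
    }
}
```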

Java Memory Model (JMM): the JMM specifies that all variables reside in main memory and that each thread has its own working memory. For a volatile variable, every write is flushed to main memory immediately and every read fetches the latest value from main memory, so a volatile variable’s latest value is visible at any time: if thread 1 modifies v, thread 2 sees the change immediately.


2. Volatile prevents instruction reordering. Can you explain what instruction reordering is?

  • There are two types of instruction reorder:

    • Compile-time reordering
    • Run-time reordering

When the JVM compiles Java code or the CPU executes JVM bytecode, it reorders existing instructions to optimize performance (without changing program results)

int a = 3;     // step 1
int b = 4;     // step 2
int c = 5;     // step 3
int h = a*b*c; // step 4
// Definition order: 1, 2, 3, 4
// Possible execution orders such as 1, 3, 2, 4 or 2, 1, 3, 4 always give the same result
  • Although instruction reordering can improve execution efficiency, it may affect the result in multithreading. What is the solution?

  • Solution: Memory barriers

    • A memory barrier is a barrier instruction that prevents the CPU from reordering memory operations across it, constraining the order in which the operations before and after the barrier take effect!

Extension: the happens-before principle

The memory visibility of volatile is guaranteed by a happens-before rule: a write to a volatile variable happens-before every subsequent read of that variable!


3. Can you introduce the three elements of concurrent programming?

  • Atomicity
  • Orderliness
  • Visibility

3.1 Atomicity

  • Atomicity means that one or more operations either all succeed or all fail, executing as an indivisible whole without being interrupted by a context switch. Thread switching is what introduces atomicity problems!
int num = 1; // Atomic operation
num++;       // Not atomic: read num from main memory into the thread's working memory, add 1, then write num back to main memory
             // unless you use an atomic class, i.e. the atomic variable classes in java.util.concurrent.atomic

// The solution is to use synchronized or a Lock (such as ReentrantLock) to "turn" this multi-step operation into an atomic one.
// volatile will not work; as described earlier, an operation like a++ depends on the current value, so volatile cannot make it atomic.
public class XdTest {

    // Approach 1: use an atomic class
    // AtomicInteger num = new AtomicInteger(0); // then num.incrementAndGet() is atomic, no lock needed
    private int num = 0;

    // Approach 2: use a Lock; only the thread holding the lock can perform the operation
    Lock lock = new ReentrantLock();
    public void add1() {
        lock.lock();
        try {
            num++;
        } finally {
            lock.unlock();
        }
    }

    // Approach 3: synchronized achieves the same thing; it can lock methods as well as code blocks
    public synchronized void add2() {
        num++;
    }
}

Solve the core idea: treat a method or block of code as a whole, and ensure that it is an indivisible whole!

3.2 Orderliness

  • When the JVM compiles Java code or the CPU executes JVM bytecode, it may reorder the existing instructions. The main purpose is to optimize performance (without changing the program’s result)
int a = 3;     // step 1
int b = 4;     // step 2
int c = 5;     // step 3
int h = a*b*c; // step 4
// Definition order: 1, 2, 3, 4
// Possible execution orders such as 1, 3, 2, 4 or 2, 1, 3, 4 give the same result (in the single-threaded case)
// Instruction reordering can improve execution efficiency, but with multiple threads it can affect the result!

Consider the following scenario:

// thread 1
before();    // initialization that must complete before run() below is safe
flag = true; // mark that the resource is ready; if it is not ready, the program may misbehave
// thread 2
while (flag) {
    run(); // execute core business code
}

// ----------------- after instruction reordering, the changed order breaks the program and is hard to debug -----------------

// thread 1
flag = true; // set before initialization has run; the resource is not ready yet
// thread 2
while (flag) {
    run(); // may execute against uninitialized state
}
// thread 1 (continued)
before();    // initialization now happens too late
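One standard fix for the scenario above is to declare the flag volatile: a volatile write cannot be reordered with the writes before it, and the volatile read establishes happens-before. A minimal sketch (class, field, and method names are illustrative):

```java
public class SafePublication {
    private static int data = 0;
    private static volatile boolean flag = false;

    public static void writer() {
        data = 42;    // the "before()" initialization work
        flag = true;  // volatile write: the write above cannot be reordered after it
    }

    public static int reader() {
        while (!flag) {
            Thread.yield(); // wait until the writer publishes
        }
        return data; // guaranteed to see 42: the volatile read establishes happens-before
    }

    public static void main(String[] args) throws InterruptedException {
        Thread r = new Thread(() -> System.out.println("data = " + reader()));
        r.start();
        writer();
        r.join();
    }
}
```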

3.3 Visibility

  • Changes made by one thread A to a shared variable should be immediately visible to another thread B!
// Thread A executes
int num = 0;
// Thread A executes
num++;
// Thread B executes
System.out.print("Num value:" + num);
// If thread A executes num++ and thread B then prints, there are two possible outputs: 0 and 1.

If i++ runs in thread A and the result is not immediately flushed to main memory, thread B reads the stale value from main memory and prints 0; or thread A may have already flushed the update to main memory, in which case thread B reads 1.

Therefore, thread visibility is required: Synchronized, lock, and volatile all provide thread visibility.

A case of volatile ensuring thread visibility: using the volatile keyword


4. What kinds of locks are there in Java? Can you talk about them?

① Optimistic lock/pessimistic lock

  • Pessimistic locks:

    • When a thread operates on data, it assumes other threads will modify it, so it locks every time it fetches the data, and other threads trying to fetch that data block until the lock is released; synchronized is an example
  • Optimistic locking:

    • Each time it fetches data it assumes other threads will not modify it, so it does not lock; when updating, it checks whether someone else has updated the data in the meantime, typically via a version number, and if the data was modified the update is rejected (and retried). CAS is an example of optimistic locking, although strictly speaking it is not a lock at all: it optimistically assumes no other thread will interfere during the update and guarantees consistency through atomic compare-and-swap operations, much like version-number control in a database
  • Summary: pessimistic locks suit write-heavy scenarios; optimistic locks suit read-heavy scenarios and offer higher throughput than pessimistic locks!
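The optimistic read-check-commit pattern can be sketched as a CAS retry loop. This uses the real AtomicInteger.compareAndSet API; the class and method names are illustrative:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasRetryDemo {
    private static final AtomicInteger balance = new AtomicInteger(100);

    // Optimistic update: read a snapshot, compute, commit only if nothing changed.
    public static int addOptimistically(int delta) {
        while (true) {
            int expect = balance.get();            // like reading a version
            int update = expect + delta;
            if (balance.compareAndSet(expect, update)) {
                return update;                     // committed without ever blocking
            }
            // another thread changed the value first; loop and retry
        }
    }

    public static void main(String[] args) {
        System.out.println(addOptimistically(50));
    }
}
```

Note that a thread that loses the race simply retries; it never blocks, which is why CAS is described as optimistic rather than as a lock.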

② Fair lock/unfair lock

  • Fair lock:

    • Multiple threads acquire the lock in the order in which they requested it (FIFO: First In, First Out), so every waiting thread eventually gets the lock; for example, a ReentrantLock constructed as a fair lock
  • Unfair lock:

    • The order of lock acquisition is not guaranteed, so there is no assurance that every thread eventually gets the lock; in other words, some threads may starve and never acquire it. Examples: synchronized, and ReentrantLock by default
  • Summary: an unfair lock performs better than a fair lock because it makes better use of CPU time. The ReentrantLock constructor lets you specify whether the lock is fair; by default it is unfair! synchronized cannot be made fair and is always an unfair lock.
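Choosing fairness is a one-argument decision in the real ReentrantLock constructor; a minimal sketch (class name illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    // index 0: fairness of a lock built with the fair flag; index 1: the default lock
    public static boolean[] fairness() {
        ReentrantLock fair = new ReentrantLock(true); // fair: FIFO handoff, lower throughput
        ReentrantLock unfair = new ReentrantLock();   // default: unfair, better performance
        return new boolean[] { fair.isFair(), unfair.isFair() };
    }

    public static void main(String[] args) {
        boolean[] f = fairness();
        System.out.println("fair=" + f[0] + ", default=" + f[1]); // fair=true, default=false
    }
}
```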

③ Reentrant lock/non-reentrant lock

  • Reentrant lock:

    • Also called a recursive lock: after the lock is taken in an outer method, inner methods can still take it without deadlock. When a thread that already holds the lock tries to acquire it again, it succeeds automatically. The advantage of reentrancy is that it avoids this kind of deadlock.
  • Non-reentrant lock:

    • If the current thread has already acquired the lock while executing one method, an attempt to acquire the same lock again inside a nested call will block — the thread deadlocks against itself
  • Summary: reentrant locks avoid this kind of deadlock to a certain extent. Both synchronized and ReentrantLock are reentrant locks.
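Reentrancy can be demonstrated with two synchronized methods on the same lock, one calling the other; a minimal sketch (class and method names are illustrative):

```java
public class ReentrantDemo {
    private static int depth = 0;

    // outer() holds the class lock and calls inner(), which needs the same lock.
    // Because synchronized is reentrant, the same thread re-acquires it and proceeds.
    public static synchronized int outer() {
        depth = 1;
        return inner();
    }

    public static synchronized int inner() {
        return depth + 1; // reached only because the lock was re-acquired by the same thread
    }

    public static void main(String[] args) {
        System.out.println(outer()); // prints 2; a non-reentrant lock would deadlock here
    }
}
```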

④ Exclusive or shared lock

  • An exclusive lock means that the lock can only be held by one thread at a time.

    • Also called an X lock / write lock / exclusive lock: the lock can be held by only one thread at a time, and any other thread that tries to take it blocks until the holder releases it. Example: if thread A holds an exclusive lock on data1, no other thread can hold any lock on data1. The thread holding the exclusive lock can both read and modify the data!
  • A shared lock means that the lock can be held by multiple threads at a time.

    • Also called an S lock / read lock: it allows reading the data but not modifying or deleting it. While it is held, other threads can also read and query the data concurrently, but none can modify, insert, or delete it. The lock can be held by multiple threads at once and is used for shared read access to a resource!

ReentrantLock and synchronized are both exclusive locks. ReadWriteLock’s read lock is shared and its write lock is exclusive.

⑤ Mutex/read-write lock

These are similar in concept to exclusive/shared locks and are their concrete implementations.

ReentrantLock and synchronized are mutually exclusive locks, and ReadWriteLock is a read-write lock
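A read-write lock in action, using the real ReentrantReadWriteLock API (class and method names are illustrative): the read lock is shared, the write lock is exclusive.

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class ReadWriteDemo {
    private static final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private static int value = 0;

    public static int read() {
        rw.readLock().lock();   // shared: many readers may hold this at once
        try {
            return value;
        } finally {
            rw.readLock().unlock();
        }
    }

    public static void write(int v) {
        rw.writeLock().lock();  // exclusive: blocks all readers and writers
        try {
            value = v;
        } finally {
            rw.writeLock().unlock();
        }
    }

    public static void main(String[] args) {
        write(7);
        System.out.println(read()); // prints 7
    }
}
```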

⑥ Spin lock

  • Spin lock:

    • When a thread tries to acquire the lock and the lock is already held by another thread, the thread waits in a loop, repeatedly checking whether the lock can be acquired; it exits the loop only once it has the lock. At any moment at most one execution unit holds the lock.
    • There is no thread-state switch: the thread stays in user mode, which avoids the cost of thread context switches; the drawback is that the spinning loop consumes CPU.
  • Common spin locks: TicketLock, CLHLock, MCSLock
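The basic spin-lock idea — not any of the named algorithms above, just a minimal test-and-set sketch on AtomicBoolean (class name illustrative):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy-wait until we flip false -> true: no blocking, no context switch,
        // but the loop burns CPU while it waits.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint that we are in a spin loop (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }

    static int counter = 0;

    // Two threads increment a plain int 10000 times each under the spin lock.
    public static int demo() {
        counter = 0;
        SpinLock lock = new SpinLock();
        Runnable task = () -> {
            for (int i = 0; i < 10000; i++) {
                lock.lock();
                try {
                    counter++;
                } finally {
                    lock.unlock();
                }
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return counter;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 20000
    }
}
```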

⑦ Deadlock

  • Deadlock:

    • Two or more threads block during execution because they are competing for resources or waiting on each other; without external intervention they can never proceed!
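A classic deadlock arises when two threads acquire two locks in opposite orders. One common fix, sketched below with illustrative names, is to impose a single global acquisition order:

```java
public class LockOrderingDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();
    private static int work = 0;

    // Both threads take LOCK_A then LOCK_B. If one thread instead took B then A,
    // each could end up holding one lock while waiting forever for the other.
    private static void safeTask() {
        synchronized (LOCK_A) {
            synchronized (LOCK_B) {
                work++;
            }
        }
    }

    public static int demo() {
        work = 0;
        Thread t1 = new Thread(LockOrderingDemo::safeTask);
        Thread t2 = new Thread(LockOrderingDemo::safeTask);
        t1.start();
        t2.start();
        try {
            t1.join(); // completes because the acquisition order is consistent
            t2.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return work;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // 2
    }
}
```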

The following three lock states are JVM optimizations of synchronized that improve the efficiency of acquiring and releasing locks. The state of a synchronized lock is recorded in fields of the object header (the object monitor’s mark word), and lock upgrading is an irreversible process

  • Biased locking:

    • If a piece of synchronized code is accessed by a thread all the time, the thread automatically acquires the lock, and the cost of acquiring the lock is lower!
  • Lightweight lock:

    • When a biased lock is accessed by another thread, the biased lock will be upgraded to a lightweight lock. Other threads will try to acquire the lock through the form of spin, but will not block, and performance will be higher!
  • Heavyweight locks:

    • While the lock is lightweight, other threads spin, but the spinning cannot go on forever. After a certain number of spins without acquiring the lock, the spinning thread blocks and the lock is upgraded to a heavyweight lock. A heavyweight lock makes other threads block, so performance drops!

5. Talk about your understanding of synchronized?

Source code analysis article reference: Java synchronization series synchronized parsing

  • synchronized solves thread-safety problems and is commonly used to synchronize instance methods, static methods, and code blocks!
  • synchronized is an unfair, reentrant lock!
  • Each object has a lock and a wait queue. The lock can be held by only one thread; other threads that need it must block and wait. When the lock is released, one thread is taken from the queue and woken up. Which thread is woken is not determined, so fairness is not guaranteed

6. What is CAS? And the ABA problem?

We recommend reading this article: Java concurrency cornerstone — CAS principles and the ABA problem, from a simple introduction to CAS through an analysis of its principles!
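As a quick taste of the ABA problem and its standard fix, here is a minimal sketch (class and method names are illustrative) using the real AtomicStampedReference, which pairs the value with a version stamp:

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static boolean demo() {
        // value 100 with stamp (version) 0
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(100, 0);
        int stamp = ref.getStamp();

        // Another thread changes 100 -> 50 -> 100: the value looks unchanged (ABA),
        // but the stamp has advanced from 0 to 2.
        ref.compareAndSet(100, 50, stamp, stamp + 1);
        ref.compareAndSet(50, 100, stamp + 1, stamp + 2);

        // A plain CAS on the value alone would succeed here; the stamped CAS
        // fails because it also checks the version.
        return ref.compareAndSet(100, 200, stamp, stamp + 1);
    }

    public static void main(String[] args) {
        System.out.println("stale CAS succeeded: " + demo()); // false
    }
}
```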


7. The difference between ReentrantLock and synchronized?

  • Both ReentrantLock and synchronized are exclusive, reentrant, pessimistic locks

  • Synchronized:

    • 1. Java built-in keywords
    • 2. You cannot query the lock’s state, and it is only an unfair lock!
    • 3. Locking and unlocking are implicit; the user does nothing manually. The advantage is simplicity, the drawback is inflexibility
    • 4. Sufficient for ordinary concurrency scenarios; it can be placed on recursively executed methods without worrying about whether the thread will release the lock correctly in the end
  • ReentrantLock:

    • 1, is a Lock interface implementation class
    • 2. You can query whether the lock was obtained. The lock can be fair or unfair (the default)
    • 3. Locking and unlocking are manual, and the unlock should go in a finally block to make sure the thread releases the lock correctly
    • 4. Passing true to the constructor creates a fair lock; passing false or no argument creates an unfair lock
    • 5. Under the hood it is built on AQS, using a state field and a FIFO queue to control locking.
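Point 2 above — querying the lock state — can be sketched with the real tryLock API, something synchronized cannot do (class and method names are illustrative):

```java
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    // Unlike synchronized, ReentrantLock lets you test availability without blocking.
    public static boolean tryWork() {
        if (lock.tryLock()) {           // returns immediately: true if acquired
            try {
                return true;            // got the lock, do the work
            } finally {
                lock.unlock();          // always release in finally
            }
        }
        return false;                   // lock busy, do something else instead
    }

    public static void main(String[] args) {
        System.out.println(tryWork()); // true: the lock was free
    }
}
```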

8. What are the three elements of concurrent programming? How do you ensure thread safety in Java programs?

  • Atomicity: one or more operations either all succeed or all fail. (The synchronized keyword can guarantee atomicity)
  • Visibility: changes made by one thread to a shared variable are immediately visible to other threads. (The synchronized and volatile keywords guarantee visibility)
  • Orderliness: the program executes in the order the code is written. (Processor and compiler optimizations may reorder instructions)

Ensuring thread safety in Java:

  • The Atomic* atomic classes, the synchronized keyword, and Lock locking can all solve atomicity problems.
  • The synchronized keyword, the volatile keyword, and Lock locking solve visibility problems.
  • Marking a variable volatile disables instruction reordering around it — code before a write to the volatile variable cannot be reordered after it — which addresses ordering problems.

9. What is thread context switching?

In multithreaded programming, the number of threads is generally greater than the number of CPU cores, and each core can run only one thread at a time. To let all threads make progress, the CPU allocates a time slice to each thread and rotates among them. When a thread’s time slice runs out, it returns to the ready state and yields the CPU to another thread. This process is a context switch.

To summarize, the current task saves its state before switching to another task after executing the CPU time slice, so that it can be reloaded when switching back to the task next time. The process from saving to reloading a task is a context switch.


10. What is the difference between a daemon thread and a user thread?

  • Daemon thread: Runs in the background and serves other foreground threads. Once all user threads have finished running, the daemon thread will finish running with it.
  • User threads: Run in the foreground to perform specific tasks, such as the main thread of a program

You can make a thread a daemon thread with Thread.setDaemon(true).

Note 1: this must be set before Thread.start(); otherwise an IllegalThreadStateException is thrown. A thread that is already running cannot be turned into a daemon thread.

Note 2: when a daemon thread is terminated is outside its control — the JVM kills it as soon as all user threads finish — so do not give it I/O, file, or other important operations. Such operations may be cut off at any moment, and the daemon thread dies with them!
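Both notes can be sketched in a few lines (class and method names are illustrative): setDaemon must precede start(), and calling it on a live thread throws IllegalThreadStateException.

```java
public class DaemonDemo {
    public static boolean demo() {
        Thread daemon = new Thread(() -> {
            while (true) { // background service loop
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    return;
                }
            }
        });
        daemon.setDaemon(true); // must come before start()
        daemon.start();
        // When all user threads finish, the JVM exits and this thread dies with it.
        return daemon.isDaemon();
    }

    // Note 1 in action: setting daemon status after start() throws.
    public static boolean setAfterStartFails() {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException ignored) {
            }
        });
        t.start();
        try {
            t.setDaemon(true); // too late: the thread is already alive
            return false;
        } catch (IllegalThreadStateException e) {
            return true;
        } finally {
            t.interrupt(); // don't keep the JVM alive for the sleeping thread
        }
    }

    public static void main(String[] args) {
        System.out.println(demo() + " " + setAfterStartFails()); // true true
    }
}
```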


Summarizing these interview questions is quite time-consuming. The article will be updated from time to time, sometimes several times a day. If it helps you review and consolidate these topics, please like, comment, and share; there will be plenty more updates to come!


To help more beginners grow from zero into Java engineers, CSDN has officially provided a “Java Engineer Learning Growth Knowledge Map”, sized 870mm x 560mm and foldable to book size. Interested readers can take a look; in any case, this blogger’s articles will always be free ~