This article is a collection of concurrent programming interview questions and will be supplemented over time. Let's first look at the basics that concurrent programming requires.

Concurrency basics

Question: Why is multithreading needed?

A: Multithreading improves performance, mainly by reducing latency and increasing throughput. There are two main directions for improving performance: optimize the algorithm, or squeeze the most out of the hardware. In concurrent programming, improving performance is essentially about improving hardware utilization — specifically, I/O utilization and CPU utilization. Concurrent programming does not address the utilization of a single piece of hardware; raising raw CPU utilization or I/O utilization on its own is the operating system's job. Multithreading addresses the combined utilization of the CPU and I/O devices. An example: suppose a program alternates between CPU computation and I/O, and the two take equal time (a 1:1 ratio, purely for illustration). With a single thread, CPU utilization and I/O device utilization are each only 50%.

Now suppose there are two threads: while thread A performs CPU computation, thread B performs I/O; while thread A performs I/O, thread B performs CPU computation. Then CPU utilization and I/O device utilization both reach 100%.

In the single-core era, multithreading was primarily used to balance CPU and I/O devices.

Question: How many ways are there to create a thread? Which way do you think is better?

A: You can create a thread by inheriting the Thread class, or by implementing the Runnable interface. You can also create threads through a thread pool, a timer, a lambda, an inner class, and so on, but in essence these all come down to the first two. Implementing the Runnable interface is better than inheriting Thread because:

  • From a code architecture perspective, the task (the code in the run method) should be separate from the creation of the thread. In other words, the task should be decoupled from the thread that runs it, and thread creation should not be mixed with task execution.
  • If you inherit Thread, every task requires a new thread, and a new thread is expensive (it must be created, executed, and destroyed). If you use Runnable, you can hand tasks to a thread pool, which greatly reduces thread creation. This again shows the benefit of separating tasks from threads: it saves resources.
  • Java allows only single inheritance, but a class can implement multiple interfaces.
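A minimal sketch of the basic approaches (the class names here are illustrative):

class MyThread extends Thread {
    @Override
    public void run() {
        System.out.println("task running in " + Thread.currentThread().getName());
    }
}

class MyTask implements Runnable {
    @Override
    public void run() {
        System.out.println("task running in " + Thread.currentThread().getName());
    }
}

class CreateThreadDemo {
    public static void main(String[] args) {
        new MyThread().start();                // task and thread are coupled
        new Thread(new MyTask()).start();      // task is decoupled from the thread
        new Thread(() -> System.out.println("lambda task")).start(); // Runnable as a lambda
    }
}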

Question: What happens if a thread calls start() twice? Why is that?

A: An exception is thrown, because start() checks the thread's state at the beginning of the call and reports an error if the state is no longer NEW. start() changes the thread's state from NEW to RUNNABLE.
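A minimal sketch of this behavior (the demo class name is illustrative):

public class StartTwiceDemo {
    public static void main(String[] args) {
        Thread t = new Thread(() -> System.out.println("running"));
        t.start();  // NEW -> RUNNABLE
        t.start();  // the state is no longer NEW: throws java.lang.IllegalThreadStateException
    }
}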

Question: Since the start() method calls the run() method, why do we choose to call start() rather than run() directly?

Answer: Because a call to start() actually starts a new thread and takes it through its lifecycle, whereas calling run() directly just executes an ordinary method on the current thread.

Question: Can a running thread in Java be forcibly killed partway through? How can it be closed gracefully?

A: The answer to the first question is definitely no. Java provides methods such as stop() and destroy(), but they are deprecated and not recommended. The reason is that if you forcibly kill a thread, the resources it is using — file descriptors, network connections, and so on — will not be closed properly. So once a thread is running, the right thing is to let it finish its work, release its resources, and exit on its own. If it runs in a loop, you need a thread communication mechanism to tell it to exit.

Question: How do I stop a thread properly? Is stopping via a volatile flag OK?

A: There is no way to force a thread to stop in Java. The early methods such as stop() and destroy() were marked deprecated and are not recommended. A thread can only be stopped through notification and cooperation. There are two approaches: 1. use interrupt (recommended); 2. use a volatile boolean flag and have the thread exit by checking whether the flag is true/false (limited).

  • Use interrupt to notify

The basic pattern: while (!Thread.currentThread().isInterrupted() && more work to do) { do more work } — that is, first check whether the thread has been interrupted, then check whether there is still work to do.

public class StopThread implements Runnable {

    @Override
    public void run() {

        int count = 0;

        while (!Thread.currentThread().isInterrupted() && count < 1000) {

            System.out.println("count = " + count++);

        }

    }

    public static void main(String[] args) throws InterruptedException {

        Thread thread = new Thread(new StopThread());
        thread.start();
        Thread.sleep(5);
        thread.interrupt();
    }

}

When using this approach, note that blocking methods such as sleep() and wait(), which can block a thread between tasks, put the thread to sleep; if the sleeping thread is interrupted, it senses the interrupt signal and throws an InterruptedException, and at the same time it clears the interrupt signal, resetting the interrupt flag to false. Do not swallow the exception with try {} catch (InterruptedException e) {} (catching it and doing nothing). Instead, either don't catch it and let it propagate, or catch it and interrupt again. The propagate-and-handle case:

/**
 * Description: Best practice: declare InterruptedException in the method signature,
 * so the caller (here, run()) is forced to handle it
 */
public class RightWayStopThreadInProd implements Runnable {

    @Override
    public void run() {
        while (true && !Thread.currentThread().isInterrupted()) {
            System.out.println("go");
            try {
                throwInMethod();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                // Save the log, stop the program
                System.out.println("Saving the log");
                e.printStackTrace();
            }
        }
    }

    private void throwInMethod() throws InterruptedException {
        Thread.sleep(2000);
    }

    public static void main(String[] args) throws InterruptedException {
        Thread thread = new Thread(new RightWayStopThreadInProd());
        thread.start();
        Thread.sleep(1000);
        thread.interrupt();
    }
}

The re-interrupt case:

/**
 * Description: Best practice 2: call Thread.currentThread().interrupt() in the catch clause
 * to restore the interrupt status, so subsequent code can still detect that an interrupt occurred
 */
public class RightWayStopThreadInProd2 implements Runnable {

    @Override
    public void run() {
        while (true) {
            if (Thread.currentThread().isInterrupted()) {
                System.out.println("Interrupted, program ends");
                break;
            }
            reInterrupt();
        }
    }

    private void reInterrupt() {
        try {
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt status
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread thread = new Thread(new RightWayStopThreadInProd2());
        thread.start();
        Thread.sleep(1000);
        thread.interrupt();
    }
}
  • Use a volatile flag
/**
 * Description: Demonstrates the limitations of volatile, part 1:
 * a volatile boolean canceled flag
 */
public class WrongWayVolatile implements Runnable {

    private volatile boolean canceled = false;

    @Override
    public void run() {
        int num = 0;
        try {
            while (num <= 100000 && !canceled) {
                if (num % 100 == 0) {
                    System.out.println(num + " is a multiple of 100.");
                }
                num++;
                Thread.sleep(1);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        WrongWayVolatile r = new WrongWayVolatile();
        Thread thread = new Thread(r);
        thread.start();
        Thread.sleep(5000);
        r.canceled = true;
    }
}

Such an approach is feasible, but it has limitations

    • A volatile boolean cannot handle cases where the thread blocks for a long time. Here is an example using the producer-consumer pattern:
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/**
 * Description: In this example the producer produces quickly and the consumer consumes slowly,
 * so when the blocking queue fills up, the producer blocks, waiting for the consumer to catch up
 */
public class WrongWayVolatileCantStop {

    public static void main(String[] args) throws InterruptedException {
        ArrayBlockingQueue storage = new ArrayBlockingQueue(10);

        Producer producer = new Producer(storage);
        Thread producerThread = new Thread(producer);
        producerThread.start();
        Thread.sleep(1000);

        Consumer consumer = new Consumer(storage);
        while (consumer.needMoreNums()) { // ---- (1)
            System.out.println(consumer.storage.take() + " is being consumed");
            Thread.sleep(100);
        }
        System.out.println("The consumer doesn't need any more data.");

        // Once the consumer needs no more data, the producer should stop too,
        // but in practice it does not
        producer.canceled = true;
        System.out.println(producer.canceled);
    }
}

class Producer implements Runnable {

    public volatile boolean canceled = false;

    BlockingQueue storage;

    public Producer(BlockingQueue storage) {
        this.storage = storage;
    }

    @Override
    public void run() {
        int num = 0;
        try {
            while (!canceled) {
                if (num % 100 == 0) {
                    storage.put(num); // ---- (2)
                    System.out.println(num + " is a multiple of 100, put into the queue.");
                }
                num++;
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println("The producer has terminated"); // ---- (3)
        }
    }
}

class Consumer {

    BlockingQueue storage;

    public Consumer(BlockingQueue storage) {
        this.storage = storage;
    }

    public boolean needMoreNums() {
        if (Math.random() > 0.95) {
            return false;
        }
        return true;
    }
}

At (1), canceled is set to true once the consumer no longer needs data, so the producer should stop and the code at (3) should print "The producer has terminated". In fact, it does not. So volatile cannot stop a thread that is blocked. Why? Because the producer blocks at (2) — storage.put(num) — so it never continues the loop and never re-checks whether canceled is true. The interrupt mechanism was designed with this in mind, which is why interrupt() is the recommended way to interrupt a thread's flow: interrupt can be responded to even while the thread blocks.

Question: main(...) below starts a thread. When main ends, will the thread be forced to exit? Will the process be forced to exit?

public static void main(String[] args) {
    System.out.println("main thread start");
    Thread t1 = new Thread(() -> {
        while (true) {
            System.out.println(Thread.currentThread().getName());
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    t1.start();
    System.out.println("main thread end");
}

A: No. But if you add t1.setDaemon(true) before t1.start(), the thread will exit when main(...) ends. Threads running in the JVM fall into two categories: daemon threads and non-daemon threads. By default, threads are created as non-daemon threads. Java specifies that the JVM process exits when all non-daemon threads have exited; daemon threads do not prevent the JVM process from exiting.
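A sketch of the daemon variant, assuming the same loop as above:

public static void main(String[] args) {
    System.out.println("main thread start");
    Thread t1 = new Thread(() -> {
        while (true) {
            System.out.println(Thread.currentThread().getName());
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    });
    t1.setDaemon(true); // t1 is now a daemon thread; must be set before start()
    t1.start();
    System.out.println("main thread end");
    // main was the last non-daemon thread, so the JVM process exits here
    // and the daemon thread t1 is terminated along with it
}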

Question: Do either of the following situations throw an InterruptedException?

public void run() {
    while (!stopped) {
        int a = 1, b = 2;
        int c = a + b;
        System.out.println("thread is executing");
    }
}

If t.interrupt() is called from the main thread, will this thread throw an exception? And if the code is modified so that the thread blocks on a synchronized keyword, about to take the lock, as shown below:

public void run() {
    while (!stopped) {
        synchronized (this) {
            int a = 1, b = 2;
            int c = a + b;
            System.out.println("thread is executing");
        }
    }
}

If t.interrupt() is called from the main thread, will the thread throw an exception?

A: Neither piece of code throws an exception. Only functions that declare that they throw InterruptedException can; the common ones are:

public static void sleep(long millis) throws InterruptedException{}
public final void wait() throws InterruptedException{}
public final void join() throws InterruptedException{}

Question: What states can a thread be in? How do they change?

A: The thread state transitions are shown in the figure above.

Blocking that can be interrupted is called lightweight blocking; the corresponding thread states are WAITING or TIMED_WAITING. Blocking that cannot be interrupted, like waiting on synchronized, is called heavyweight blocking; the corresponding state is BLOCKED. From the NEW state, calling start() enters the RUNNING or READY state. If it calls no blocking function, the thread only switches between RUNNING and READY under the system's time-slice scheduling. Calling yield() to give up the CPU is the other way to intervene in these two state changes.

Calling a blocking function enters the WAITING or TIMED_WAITING state; the difference is that the former blocks indefinitely, while the latter takes a time parameter and blocks for a bounded time.

Waiting on synchronized enters the BLOCKED state.

The difference between BLOCKED and WAITING: BLOCKED is waiting for another thread to release a monitor lock, while WAITING is waiting for a condition, such as a join() finishing or a notify()/notifyAll() call.
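A small sketch that observes some of these states with Thread.getState() (the class name and timings are illustrative):

public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread sleeper = new Thread(() -> {
            try {
                Thread.sleep(1000);            // TIMED_WAITING while sleeping
            } catch (InterruptedException ignored) {
            }
        });
        Thread blocked = new Thread(() -> {
            synchronized (lock) { }            // BLOCKED while main holds the lock
        });

        System.out.println(sleeper.getState()); // NEW
        sleeper.start();
        Thread.sleep(100);
        System.out.println(sleeper.getState()); // TIMED_WAITING

        synchronized (lock) {
            blocked.start();
            Thread.sleep(100);
            System.out.println(blocked.getState()); // BLOCKED
        }
        sleeper.join();
        System.out.println(sleeper.getState()); // TERMINATED
    }
}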

State descriptions

  • NEW: implementing the Runnable interface or inheriting Thread gives you a thread class; new-ing an instance puts the thread in the initialized (NEW) state.

  • RUNNABLE (ready state)

    1. The ready state only means you are qualified to run; if the scheduler has not selected you, you remain ready.
    2. The thread enters the ready state by calling its start() method.
    3. A thread also enters the ready state when the current thread's sleep() ends, another thread's join() ends, user input completes, or the thread acquires an object lock.
    4. The current thread returns to the ready state when its time slice is used up or when its yield() method is called.
    5. After a thread in the lock pool has acquired the object lock, it enters the ready state.
  • RUNNING: the state of the thread the scheduler has selected from the runnable pool as the current thread. This is also the only way a thread can enter the running state.

  • A BLOCKED state is the state in which a thread is BLOCKED before it enters a method or block of code modified by the synchronized keyword.

  • A thread is in the WAITING state after calling wait() or join() without a timeout, waiting to be awakened.

  • A thread is in the TIMED_WAITING state after calling sleep(millis) or wait(timeout) with a timeout, waiting to be awakened or to wake automatically when the timeout expires.

  • TERMINATED state

    • When the thread's run() method finishes, or the main thread's main() method finishes, we consider it terminated. The Thread object may still be alive, but it is no longer a separately executing thread. Once a thread is terminated, it cannot be resurrected.
    • Calling start() on a terminated thread throws a java.lang.IllegalThreadStateException.

State transitions

  • NEW to RUNNABLE: call the start() method.

  • The state transition between RUNNABLE and BLOCKED: currently, a thread transitions from RUNNABLE to BLOCKED only when it is waiting for a synchronized implicit lock.

  • The state transition between RUNNABLE and WAITING

    There are three scenarios that trigger a thread transition from RUNNABLE to WAITING;

    1. A thread holding a synchronized implicit lock calls the Object.wait() method.
    2. Call the synchronizing Thread.join() method. For example, if threadA.join() is called, the thread executing that statement waits for threadA to finish; while it waits, its state transitions from RUNNABLE to WAITING. When threadA finishes, the waiting thread transitions from WAITING back to RUNNABLE.
    3. Call the LockSupport.park() method. The locks in Java's concurrency package (java.util.concurrent) are implemented on top of LockSupport. After LockSupport.park() is called, the current thread blocks and its state transitions from RUNNABLE to WAITING. Calling LockSupport.unpark(Thread thread) wakes the target thread, and its state transitions from WAITING back to RUNNABLE.
  • The state transition between RUNNABLE and TIMED_WAITING



    There are five scenarios that trigger the transition from RUNNABLE to TIMED_WAITING:

    1. Call Thread.sleep(long millis), which takes a timeout;
    2. A thread holding a synchronized implicit lock calls the Object.wait(long timeout) method, which takes a timeout argument;
    3. Call Thread.join(long millis), which takes a timeout;
    4. Call the LockSupport.parkNanos(Object blocker, long nanos) method, which takes a timeout;
    5. Call the LockSupport.parkUntil(long deadline) method, which takes a timeout argument.
  • RUNNABLE to TERMINATED: the thread's run() method finishes execution.

Question: What is the difference between t.isInterrupted() and Thread.interrupted()?

A: t.interrupt() sends a wake-up signal to the thread: if the thread happens to be in the WAITING or TIMED_WAITING state, it throws an InterruptedException and wakes up; if the thread is not blocked, nothing visible happens beyond the interrupt flag being set. Both t.isInterrupted() and Thread.interrupted() are used by a thread to check whether it has received an interrupt signal; the former is non-static, the latter static. The difference: the former only reads the interrupt status without modifying it, while the latter reads the interrupt status of the current thread and also resets the interrupt flag.
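A minimal sketch of the difference (no blocking is involved here, so only the flag is affected):

public class InterruptFlagDemo {
    public static void main(String[] args) {
        Thread.currentThread().interrupt();     // set the current thread's interrupt flag

        // isInterrupted(): reads the flag without clearing it
        System.out.println(Thread.currentThread().isInterrupted()); // true
        System.out.println(Thread.currentThread().isInterrupted()); // still true

        // Thread.interrupted(): reads the CURRENT thread's flag and clears it
        System.out.println(Thread.interrupted()); // true
        System.out.println(Thread.interrupted()); // false, the flag was reset
    }
}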

Question: What is your understanding of synchronized?

A: 1. synchronized actually puts a lock on an object. For a non-static member method it locks the current instance; for a static member method it locks the Class object of the current class.

2. The essence of a lock: a lock is an object, and that object needs to do the following: 1) it must have an internal state variable recording whether it is occupied by a thread — in the simplest case the state takes the values 0 and 1, where 0 means no thread holds the lock and 1 means some thread does; 2) it needs to record which thread currently occupies it; 3) it needs to maintain a list of thread IDs to track the other threads that are blocked waiting on it. The locked object and the protected resource can be the same object, as in synchronized (this) { ... }, or you can use two objects — lock on obj with synchronized (obj) { ... }.
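A sketch of the lock targets described above (class and method names are illustrative):

public class SyncTargets {
    // Locks the current instance (this)
    public synchronized void instanceMethod() {
        // equivalent to: synchronized (this) { ... }
    }

    // Locks the Class object (SyncTargets.class)
    public static synchronized void staticMethod() {
        // equivalent to: synchronized (SyncTargets.class) { ... }
    }

    private final Object obj = new Object();

    // Locks a separate, private lock object
    public void explicitLock() {
        synchronized (obj) {
            // critical section protected by obj's monitor
        }
    }
}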

Question: How do you design a producer-consumer model

A:

As shown in the figure above: there is a memory queue; multiple producer threads put data into it, and multiple consumer threads take data out of it.

    • The memory queue itself must be locked to be thread safe.
    • Blocking. When the memory queue is full and the producer cannot put more in, it blocks; when the memory queue is empty, the consumer has nothing to do and blocks.
    • Two-way notification. After the consumer blocks, the producer puts in new data and must notify() the consumer; conversely, after the producer blocks, once the consumer consumes data it must notify() the producer.
How to block?
  • The producer and consumer threads call wait() and notify(), respectively.
  • With a blocking queue, the put/take functions themselves block when data cannot be put in or taken out. This is how BlockingQueue is implemented.
How to notify in both directions?
  • The wait()/notify() mechanism
  • The Condition mechanism

Pseudocode (some details are omitted):

public void enqueue() {
    synchronized (queue) {
        while (queue.full()) {
            queue.wait();
        }
        // put the element in, then notify waiting consumers
        queue.notify();
    }
}

public void dequeue() {
    synchronized (queue) {
        while (queue.empty()) {
            queue.wait();
        }
        // take the element out, then notify waiting producers
        queue.notify();
    }
}
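For comparison, a minimal bounded buffer using the Condition mechanism listed above (the class name and capacity are illustrative):

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

class ConditionQueue<T> {
    private final Queue<T> queue = new LinkedList<>();
    private final int capacity = 10;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    public void enqueue(T item) throws InterruptedException {
        lock.lock();
        try {
            while (queue.size() == capacity) {
                notFull.await();          // the producer blocks while the queue is full
            }
            queue.add(item);
            notEmpty.signal();            // wake a waiting consumer
        } finally {
            lock.unlock();
        }
    }

    public T dequeue() throws InterruptedException {
        lock.lock();
        try {
            while (queue.isEmpty()) {
                notEmpty.await();         // the consumer blocks while the queue is empty
            }
            T item = queue.remove();
            notFull.signal();             // wake a waiting producer
            return item;
        } finally {
            lock.unlock();
        }
    }
}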

Question: Why must the wait method be used in synchronized code protected by synchronized?

Answer: This is how wait() is described in Javadoc:

     * As in the one argument version, interrupts and spurious wakeups are
     * possible, and this method should always be used in a loop:
     * <pre>
     *     synchronized (obj) {
     *         while (&lt;condition does not hold&gt;)
     *             obj.wait();
     *         ... // Perform action appropriate to condition
     *     }
     * </pre>
     * This method should only be called by a thread that is the owner
     * of this object's monitor. See the {@code notify} method for a
     * description of the ways in which a thread can become the owner of
     * a monitor.

As the Javadoc says, interrupts and spurious wakeups are possible, so the method should always be used in a loop: the loop re-checks whether the condition holds, so if the thread is woken up spuriously, the while condition is evaluated again and, if it still does not hold, the thread goes back to waiting. So why must it be inside synchronized? Consider this code:

import java.util.LinkedList;
import java.util.Queue;

class BlockingQueue {

    Queue<String> buffer = new LinkedList<String>();

    public void give(String data) {
        buffer.add(data);
        notify(); // since someone may be waiting in take
    }

    public String take() throws InterruptedException {
        while (buffer.isEmpty()) {
            wait();
        }
        return buffer.remove();
    }
}

This code is a typical producer-consumer pattern.

1. First, the consumer thread calls take() and checks whether buffer.isEmpty() returns true. If true, the buffer is empty and the thread intends to wait — but before it can call wait(), the scheduler suspends it, so wait() has not yet executed. 2. Now the producer runs, executes the entire give method, adds data to the buffer, and calls notify() — but the notify has no effect, because the consumer's wait() has not executed yet, so no thread is waiting to be woken up. 3. The consumer thread suspended by the scheduler now resumes, continues into wait(), and begins waiting — having missed the notification forever.

Because the consumer must check and then wait — two operations rather than one atomic operation — it can be interrupted in between, which is not thread-safe. In addition, calling wait() requires releasing the lock, which is only possible after the lock has been acquired, so wait() must be used together with synchronized.

Question: Why is wait/notify/notifyAll defined in class Object and sleep in class Thread?

A: There are two main reasons:

  • Every object in Java has a monitor lock. Since every object can be locked, a location in the object header is needed to store the lock information. This lock is object-level, not thread-level, and wait/notify/notifyAll are also lock-level operations whose locks belong to objects, so defining them in the Object class is most appropriate, because Object is the parent class of all objects.
  • If wait/notify/notifyAll were defined in the Thread class: a thread may hold multiple locks in order to implement logic where locks cooperate with each other. If wait were defined in Thread, how could a thread express waiting on several locks? How would you know which lock the thread is waiting for? Since we are asking the current thread to wait on the lock of a particular object, it is natural to do this by operating on the object rather than on the thread.

Question: What are the differences between wait/notify and sleep methods?

A: Similarities

  • Both of them can block threads
  • Both can respond to interrupts: If an interrupt signal is received while waiting, both can respond and throw InterruptedException

Differences

  • The wait method must be used in synchronized protected code; the sleep method does not
  • The sleep method does not release the monitor lock while it executes in synchronized code, whereas the wait method actively releases the monitor lock
  • The sleep method requires that you define a time when the time expires to actively resume or interrupt the thread to wake up earlier, while the wait method without parameters means that the thread will wait forever until it is interrupted or awakened, and it will not actively resume
  • wait/notify are Object methods, while sleep is a Thread method
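A small sketch of the second difference — sleep() holds the monitor lock while wait() releases it (the class name and timings are illustrative):

public class WaitVsSleepDemo {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread sleeper = new Thread(() -> {
            synchronized (lock) {
                System.out.println("sleeper got the lock, sleeping");
                try {
                    Thread.sleep(1000);   // the lock is NOT released while sleeping
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println("sleeper releasing the lock");
            }
        });
        Thread waiter = new Thread(() -> {
            synchronized (lock) {
                System.out.println("waiter got the lock, waiting");
                try {
                    lock.wait(1000);      // the lock IS released while waiting
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println("waiter woke up");
            }
        });
        sleeper.start();
        Thread.sleep(100);
        waiter.start(); // blocks on the monitor until sleeper's sleep ends
    }
}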

Question: Why must wait() release the lock?

Answer: Suppose thread A enters synchronized(obj1) and thus holds obj1's lock, then calls wait() and enters the blocked state. If it could not exit the synchronized block, thread B could never enter synchronized(obj1) and would never have the chance to call notify() — a deadlock. So inside wait():

wait() {
    // release the lock on obj1
    // block, until another thread calls notify
    // reacquire the lock before returning
}

Question: How do I share data between two threads?

A: You can do this through shared objects, or by using concurrent data structures such as blocking queues. The producer-consumer model, for example, can be implemented with the wait and notify methods.

Question: What is context-switching in multithreading?

A: A context switch is the process of storing and restoring CPU state, which enables thread execution to resume from the point where it was interrupted. Context switching is an essential feature of multitasking operating systems and multithreaded environments.

Question: What is the difference between Callable and Runnable?

A: The differences:

  • Method names: Callable declares call(), while Runnable declares run();
  • Return value: a Callable task returns a value, while a Runnable task does not;
  • Exceptions: call() can throw checked exceptions, while run() cannot;
  • Callable comes with the Future class: through a Future you can learn how the task is executing, cancel it, or get its result. Runnable has none of these capabilities. Callable is more powerful than Runnable.
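A minimal Callable/Future sketch (the class name and return value are illustrative):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableDemo {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // call() returns a value and may throw a checked exception
        Callable<Integer> task = () -> {
            Thread.sleep(500);
            return 42;
        };

        Future<Integer> future = executor.submit(task);
        System.out.println("result = " + future.get()); // blocks until the result is ready
        executor.shutdown();
    }
}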

Question: How do I solve concurrency problems?

A: Concurrency solutions fall into two main types: lock-free and lock-based.

  • Lock-free
    • Local variables: local variables exist only in each thread's working memory and are never shared, so there are no concurrency safety problems

    • Immutable objects: once created, an immutable object never changes, no matter how many threads operate on it, so there is no concurrency problem

    • ThreadLocal

      The essence of ThreadLocal is that each thread has its own copy, and the copies of different threads do not affect each other, so there is no concurrency problem (see the sketch after this list).

    • Compare And Swap: the CAS mechanism uses three basic operands: the memory address V, the expected old value A, and the new value B. The value at memory address V is updated to B only if it equals the expected old value A. In Java this usually refers to the set of classes prefixed with Atomic, all of which adopt the CAS idea. The Atomic classes use the Unsafe class to provide hardware-level atomic operations. Let's look at the Unsafe method:

      public final int getAndAddInt(Object o, long offset, int delta) {
          int v;
          do {
              v = this.getIntVolatile(o, offset);
          } while (!this.weakCompareAndSetInt(o, offset, v, v + delta));
          return v;
      }

      getIntVolatile reads the old value v; then the CAS operation compares and swaps, and on failure the while loop retries until it succeeds.

  • Lock-based
    • Synchronized
    • ReentrantLock

      Both Synchronized and ReentrantLock adopt a pessimistic lock strategy. Their implementations are very similar, except that one is implemented at the language level (Synchronized) and the other programmatically (ReentrantLock).

      Locking principle: (diagram omitted)
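A minimal sketch of the ThreadLocal item referenced above (the class name and counter logic are illustrative):

public class ThreadLocalDemo {
    // Each thread gets its own counter, initialized to 0
    private static final ThreadLocal<Integer> counter = ThreadLocal.withInitial(() -> 0);

    public static void main(String[] args) {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                counter.set(counter.get() + 1); // no race: the copy is per-thread
            }
            System.out.println(Thread.currentThread().getName() + " -> " + counter.get());
            counter.remove(); // avoid leaks when threads are pooled
        };
        new Thread(task).start(); // prints 1000
        new Thread(task).start(); // also prints 1000; the copies are independent
    }
}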

The thread pool

Question: What is a thread pool?

A: To keep the system from constantly creating and destroying threads, we reuse created threads by caching them in a pool. Creating a thread becomes taking an idle thread from the pool, and closing a thread becomes returning the thread to the pool.

Question: Why do we need thread pools?

A: This question can be translated into the advantages of thread pools or the disadvantages of not using them.

Disadvantages of not using a thread pool:

  • Creating threads repeatedly is expensive: each thread takes time to create and destroy, and if the tasks are simple, creating and destroying threads may consume more resources than executing the tasks themselves.
  • Too many threads take up too many resources such as memory, cause too many context switches, and cause system instability.

Thread pool advantages:

  • Reduce resource consumption. Reduce the cost of thread creation and destruction by reusing threads that have been created.
  • Improve response speed. When a task arrives, it can be executed immediately without having to wait for the thread to be created.
  • Thread pools manage resources in a unified way. For example, a pool can manage its task queue and threads together and start or stop tasks uniformly, which is simpler and easier to manage than handling tasks one by one with individual threads; it also makes statistics such as the number of executed tasks easy to collect.

Question: What are the parameters of a thread pool? What is the workflow of the thread pool?

A: The thread pool has the following parameters:

  • corePoolSize (number of core threads): core threads are not destroyed once created
  • maximumPoolSize (maximum number of threads): non-core threads are destroyed once they have been idle for keepAliveTime
  • keepAliveTime + time unit: how long an idle (non-core) thread survives
  • threadFactory (thread factory)
  • workQueue (blocking queue)
  • handler (rejection policy)

The execution process can be explained as follows:

  • If the current number of threads is less than corePoolSize, a new thread is created to execute the task;
  • If the current thread count is greater than or equal to corePoolSize, the task is added to the BlockingQueue;
  • If the queue is full, a new thread is created to handle the task (as long as the count stays below maximumPoolSize);
  • If the number of threads would exceed maximumPoolSize, the task is rejected and RejectedExecutionHandler.rejectedExecution() is called.
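A sketch of constructing a pool manually with these parameters (all values are illustrative):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                     // corePoolSize
                4,                                     // maximumPoolSize
                60L, TimeUnit.SECONDS,                 // keepAliveTime + time unit
                new ArrayBlockingQueue<>(100),         // bounded work queue
                Executors.defaultThreadFactory(),      // thread factory
                new ThreadPoolExecutor.AbortPolicy()); // rejection policy

        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.execute(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}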

Question: What rejection policies do thread pools provide? When does rejection happen?

A: A thread pool rejects tasks in two main situations:

  • After we call shutdown and similar methods to close the thread pool: even if there are still tasks being executed or waiting inside the pool, any newly submitted task is rejected.
  • The thread pool is not capable of processing new tasks, which is when the work is too full.

Rejection policies:

  • AbortPolicy: when rejecting a task, this policy throws a RejectedExecutionException (a RuntimeException), letting you detect that the task was rejected so you can choose, according to business logic, to retry or give up submitting.
  • DiscardPolicy: a newly submitted task is silently discarded without any notification, which may cause data loss.
  • DiscardOldestPolicy: after a new task is submitted, the task that has been waiting in the queue the longest is discarded to make room; this may likewise cause data loss.
  • CallerRunsPolicy: after a new task is submitted, it is handed back to the thread that submitted it — whoever submits the task is responsible for running it. This has two main advantages:
    • The first is that newly submitted tasks are not discarded, so there is no business loss.
    • The second is that, because the submitter runs the task itself, the submitting thread is occupied for the duration of the (possibly time-consuming) task and cannot submit new tasks, which slows the submission rate. This acts as negative feedback: during that time the threads in the pool can work through existing tasks and free up capacity, giving the pool a buffer period.

Question: Why does the Alibaba Java development manual discourage automatically created thread pools — that is, thread pools created with Executors?

A: Because creating a thread pool with Executors is not safe. To answer this, we need to know which thread pools Executors can create.

  • FixedThreadPool: a thread pool with a fixed number of threads, where the core and maximum thread counts are equal. newFixedThreadPool actually calls the ThreadPoolExecutor constructor internally. Its problem is that the work queue is a LinkedBlockingQueue with no practical capacity limit; if threads process tasks slowly while tasks keep arriving, the queue accumulates a huge backlog, which may cause an OutOfMemoryError.
public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
}
  • SingleThreadExecutor: a single-thread pool. It is like newFixedThreadPool except that both the core and maximum thread counts are set to 1; the task queue is still an unbounded LinkedBlockingQueue, so it carries the same memory-overflow risk as newFixedThreadPool.
public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService(
            new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>()));
}
  • CachedThreadPool: a cacheable thread pool whose queue is a SynchronousQueue. A SynchronousQueue does not store tasks itself; it hands them off directly, which is fine. But the second constructor argument is Integer.MAX_VALUE, meaning the maximum number of threads is unlimited: when there are too many tasks, too many threads are created, leading to memory exhaustion.
public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE, 60L, TimeUnit.SECONDS, new SynchronousQueue<Runnable>());
}
  • ScheduledThreadPool: a thread pool that executes tasks on a schedule or periodically. Its DelayedWorkQueue is a delayed, unbounded queue, so it carries the same memory-overflow risk described above.
public ScheduledThreadPoolExecutor(int corePoolSize) {
    super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS, new DelayedWorkQueue());
}
  • SingleThreadScheduledExecutor: carries the same risk as ScheduledThreadPool.

So thread pools created with Executors carry risks. Instead, create the pool manually: you will understand the pool's working rules better, can choose a thread count suited to your system, and can reject new tasks when necessary, avoiding the risk of resource exhaustion.

Question: How do I set the number of threads in a thread pool?

A: To set the number of threads we need to be guided by the following conclusion:

  • The higher the percentage of the average working time of threads, the fewer threads are needed.
  • The higher the percentage of the average wait time of threads, the more threads are needed.
  • Actual pressure testing is required for different procedures to get the right choice.

For CPU-intensive tasks (encryption, decryption, compression, computation, and other tasks that need lots of CPU), set the thread count to roughly 1-2 times the number of CPU cores. Such tasks keep the CPU almost fully busy, so configuring more threads than that only causes unnecessary context switches and hurts performance.

For I/O-intensive tasks (database access, file reads and writes, network communication, etc.), the tasks do not consume much CPU, but the I/O is time-consuming, so there is a lot of waiting and the thread count can be set to many times the number of CPU cores. A common formula: thread count = CPU cores * (1 + thread wait time / thread working time). For example, with 8 cores and wait time equal to working time, that gives 8 * (1 + 1) = 16 threads. Other load on the system must also be considered, and the final number should be determined by actual load testing.

Question: How do I properly close a thread pool? What is the difference between shutdown and shutdownNow?

A: ThreadPoolExecutor provides several methods related to closing a thread pool:

void shutdown();
boolean isShutdown();
boolean isTerminated();
boolean awaitTermination(long timeout, TimeUnit unit) throws InterruptedException;
List<Runnable> shutdownNow();

The shutdown() method closes the thread pool safely. After shutdown is called, the pool does not close immediately: if tasks are still being executed, or are waiting in the queue to be executed, the pool finishes all of them before shutting down. However, no new tasks can be submitted after this call; subsequently submitted tasks are rejected according to the rejection policy.

The isShutdown() method tells you whether the thread pool has begun closing; it cannot tell you whether it is completely closed.

The isTerminated() method tells you whether the thread pool is completely closed. So if tasks are still executing after shutdown, isShutdown() returns true while isTerminated() returns false.

The awaitTermination method also checks whether the thread pool has completely closed, similarly to isTerminated, except that it takes a wait timeout. Calling it can produce the following outcomes:

  • This method returns true if the thread pool has closed during the wait period (including before entering the wait state) and all committed tasks (both executing and waiting in the queue) have been executed, indicating that the thread pool has “terminated.”
  • If the "termination" above has not occurred by the time the timeout expires, the method returns false.
  • If the thread is interrupted while waiting, the method throws InterruptedException.

shutdownNow(): closes the thread pool immediately. It first tries to interrupt the threads in the pool by sending them interrupt signals, then returns the tasks still in the waiting queue to the caller so that some remedial work can be done on them. Choose the method that suits your business needs: normally use shutdown() so submitted tasks can complete, but in an emergency use shutdownNow() to speed up the pool's "termination".
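A common graceful-shutdown pattern combining these methods, along the lines of the JDK documentation (the method name and timeout are illustrative):

import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    static void shutdownGracefully(ExecutorService pool) {
        pool.shutdown();                    // stop accepting new tasks
        try {
            // wait a while for running and queued tasks to finish
            if (!pool.awaitTermination(30, TimeUnit.SECONDS)) {
                List<Runnable> dropped = pool.shutdownNow(); // interrupt the workers
                System.out.println(dropped.size() + " queued tasks were never started");
            }
        } catch (InterruptedException e) {
            pool.shutdownNow();             // re-cancel if we ourselves are interrupted
            Thread.currentThread().interrupt();
        }
    }
}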

Question: What is the two-level scheduling model of the Executor framework?

A: Simply put, Java threads are mapped one-to-one to native operating system threads. When the Java thread starts, a local operating system thread is started, and when the Java thread terminates, the operating system thread is also reclaimed. The operating system schedules all the threads and assigns them to the available CPU.

Thus the application controls the upper-level scheduling through the Executor framework, while the lower-level scheduling is controlled by the operating system kernel; the lower level is not under the application's control. This forms the two-level task scheduling model:

Various locks in Java

Question: What are the types of locks in Java?

A: According to the classification criteria, we divide locks into the following 7 categories.

  • Biased lock/lightweight lock/heavyweight lock;
  • Reentrant locks/non-reentrant locks;
  • Shared/exclusive locks;
  • Fair lock/non-fair lock;
  • Pessimistic lock/optimistic lock;
  • Spinlock/non-spinlock;
  • Interruptible locks/non-interruptible locks.

Question: What is the essence of pessimistic lock and optimistic lock?

A: Pessimistic locking, as the name implies, is pessimistic: it assumes that if it does not lock the resource, other threads will contend for it, so it locks the data every time it reads or modifies it. Typical cases of pessimistic locking: the synchronized keyword and the Lock interface.

Optimistic locking is optimistic: it assumes that no other thread will interfere while it operates on the resource, so it does not lock the object. When updating, it first checks whether another thread has modified the data in the meantime: if not, it applies the modification normally; if the data has been modified, it abandons the modification and reports an error or retries. It is a concurrency strategy based on collision detection, and its implementation does not require suspending threads, so it is non-blocking synchronization. Optimistic locking is generally implemented with the CAS algorithm. Typical cases of optimistic locking: the Atomic classes in Java and version mechanisms in databases.

Comparison: a pessimistic lock blocks threads, and that overhead is fixed; its upfront cost is higher than an optimistic lock's. But although an optimistic lock starts out cheaper, if the lock can never be acquired, or the concurrent contention is fierce, the retries pile up and resource consumption grows and grows, potentially exceeding that of a pessimistic lock.

Usage scenarios: pessimistic locking suits scenarios with many concurrent writes, complex critical sections, and intense contention, where it avoids a lot of useless retrying. Optimistic locking suits read-mostly, rarely-written scenarios, and read-write scenarios with little contention.

Question: What does the CAS algorithm look like?

A: CAS stands for compare and swap. It involves three operands: the memory value, the expected value, and the new value. The memory value is changed to the new value if and only if the memory value equals the expected value. CAS is atomic, and the atomicity is guaranteed by CPU hardware instructions.
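A sketch of CAS in user code via AtomicInteger (values are illustrative):

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);

        // succeeds: the memory value (10) equals the expected value (10)
        System.out.println(value.compareAndSet(10, 11)); // true, value is now 11

        // fails: the memory value (11) no longer equals the expected value (10)
        System.out.println(value.compareAndSet(10, 12)); // false, value stays 11
    }
}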

Question: What is the locking principle of synchronized?

Answer: Omitted here; to be covered in a dedicated article.

Question: Compared with synchronized, how is the implementation of the reentrant lock ReentrantLock different?

A: Locks are basically implemented toward one goal: making a flag visible to all threads. synchronized does this by setting flag bits in the object header and is the JVM's native lock implementation, while ReentrantLock — and all classes implementing the Lock interface — use a volatile-modified int variable, ensuring every thread sees changes to that int and modifies it atomically; this is essentially built on the AQS framework.

Question: Why does Java have ReentrantLock when it already has synchronized? What are their similarities and differences?

A: Actually, ReentrantLock was created not to replace Synchronized, but to supplement it. Similarities:

  • Are used to keep resources thread safe.
  • Both are reentrant locks

Difference:

  • Origin: synchronized is implemented at the JVM level, ReentrantLock at the Java API level.
  • Usage: synchronized does not require explicitly acquiring and releasing the lock; with ReentrantLock you must lock yourself and release the lock in a finally block.
  • Synchronization mechanism: synchronized synchronizes through the lock flag in the Java object header and the monitor object; ReentrantLock synchronizes through CAS, AQS (AbstractQueuedSynchronizer), and LockSupport (for blocking and waking threads).
  • Functionality: ReentrantLock adds advanced features, such as trying to acquire the lock, interruptible waiting, fair locking, and so on.

How to choose?

  • If you can, it is best to use neither ReentrantLock nor synchronized. In many cases you can use a mechanism from the java.util.concurrent package, which handles all the locking and unlocking for you; prefer those utility classes first.
  • If the synchronized keyword is appropriate for your program, use it as much as possible to reduce the amount of code you have to write and the probability of an error. The code can go horribly wrong if you forget to unlock in the finally, and synchronized is much safer.
  • Use ReentrantLock only if you need the special features of ReentrantLock, such as try to acquire a lock, interruptibility, timeout, etc.

Question: What are the common methods for Lock?

A: Overview of methods

public interface Lock {
    // Blocks until the lock is acquired; cannot be interrupted while waiting
    void lock();
    // Acquires the lock unless the current thread is interrupted while waiting
    void lockInterruptibly() throws InterruptedException;
    // Tries to acquire the lock without waiting; returns the result immediately
    boolean tryLock();
    // Tries to acquire the lock, giving up after the given timeout; can be interrupted
    boolean tryLock(long time, TimeUnit unit) throws InterruptedException;
    // Releases the lock
    void unlock();
    Condition newCondition();
}

Question: What is a fair lock? What is an unfair lock? Why do we need unfair locks?

A: A fair lock grants the lock in the order in which threads requested it. A non-fair lock does not strictly follow the request order and allows queue jumping under certain circumstances (note: not completely at random, but at the right opportunity). "The right opportunity" means: if the lock happens to be released at the instant the current thread requests it, the current thread can jump the queue regardless of the waiting threads; but if the lock has not been released when the current thread requests it, the thread still joins the waiting queue. Non-fair locks exist to improve throughput. Consider this situation: thread A holds the lock and thread B requests it; since A holds the lock, B can only block and wait. When A releases the lock, B should be woken up to acquire it — but waking B is expensive. If C jumps the queue and requests the lock at that moment, under the non-fair policy C gets it; C may well acquire the lock, finish its task, and release the lock before B even wakes up, after which B acquires the lock anyway — a win-win. From the code's perspective: a fair lock first checks whether threads are already waiting in the queue and, if so, does not grab the lock; a non-fair lock tries to grab the lock directly regardless, and joins the queue only if the grab fails.

Question: Why do I need read and write locks? What are the rules?

A: Read-write locks are designed to improve performance, because multiple concurrent read operations pose no thread-safety problem; allowing multiple threads to read simultaneously increases efficiency. The design idea: there are two locks, a read lock and a write lock. Holding the read lock, you may only read data, not modify it; holding the write lock, you may both read and modify data. The read lock can be held by multiple threads at once, while the write lock can be held by only one thread.

import java.util.concurrent.locks.ReentrantReadWriteLock;

/**
 * Description: Demonstrates ReentrantReadWriteLock usage
 */
public class ReadWriteLockDemo {

    private static final ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(false);
    private static final ReentrantReadWriteLock.ReadLock readLock = reentrantReadWriteLock.readLock();
    private static final ReentrantReadWriteLock.WriteLock writeLock = reentrantReadWriteLock.writeLock();

    private static void read() {
        readLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the read lock, reading");
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println(Thread.currentThread().getName() + " released the read lock");
            readLock.unlock();
        }
    }

    private static void write() {
        writeLock.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " got the write lock, writing");
            Thread.sleep(500);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            System.out.println(Thread.currentThread().getName() + " released the write lock");
            writeLock.unlock();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> read()).start();
        new Thread(() -> read()).start();
        new Thread(() -> write()).start();
        new Thread(() -> write()).start();
    }
}

The result:

Thread-0 got the read lock, reading
Thread-1 got the read lock, reading
Thread-0 released the read lock
Thread-1 released the read lock
Thread-2 got the write lock, writing
Thread-2 released the write lock
Thread-3 got the write lock, writing
Thread-3 released the write lock

Question: Following on from the question above — if reading data has no thread-safety issues, why add a read lock at all? Couldn't we just leave reads unlocked?

A: Reading itself poses no safety problem. But if a shared variable is read in one place and written in another (one method writes the shared variable while another reads it), then if reads are not locked, the shared variable may be written while it is being read, and the reader may not see the expected value.

Question: Can I jump the queue with a read lock?

A: ReentrantReadWriteLock lets you choose fair mode or non-fair mode.

// Fair lock mode
ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(true);
// Non-fair lock mode (the default)
ReentrantReadWriteLock reentrantReadWriteLock = new ReentrantReadWriteLock(false);
  • A fair lock checks the readerShouldBlock() method before acquiring a read lock and the writerShouldBlock() method before acquiring a write lock, to decide whether to queue up or jump the queue:
    final boolean writerShouldBlock() {
        return hasQueuedPredecessors();
    }
    
    final boolean readerShouldBlock() {
        return hasQueuedPredecessors();
    }

In fair locking mode, threads cannot jump the queue to acquire a read/write lock.

  • In non-fair mode, the writerShouldBlock() and readerShouldBlock() implementations are:
    final boolean writerShouldBlock() {
        return false; // writers can always barge
    }
    final boolean readerShouldBlock() {
        return apparentlyFirstQueuedIsExclusive();
    }

With a non-fair lock, a thread acquiring the write lock can always jump the queue, while read-lock acquisition follows a policy. Suppose threads 1 and 2 hold the read lock, thread 3 wants the write lock and has to wait, and then thread 4 arrives wanting the read lock.

  1. Strategy 1: allow queue jumping

Since threads 1 and 2 hold the read lock, thread 4 could share it and jump the queue to read alongside them. This strategy improves thread 4's efficiency, but it has a big problem: if every thread wanting a read lock can jump the queue, thread 3 — which asked for the write lock first — may wait forever and starve.

  2. Strategy 2: no queue jumping

Under this strategy, thread 4 sees that thread 3 queued ahead of it, so thread 4 must queue too. Thread 3 then acquires the lock in turn, avoiding "starvation".

ReentrantReadWriteLock chooses the second strategy: no queue jumping.

Question: What are read-write lock upgrading and downgrading?

A: In the upgrade/downgrade policy, a write lock can only be downgraded to a read lock; a read lock cannot be upgraded to a write lock.

  • Lock downgrade — a cache-update code example:

    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class CachedData {

        Object data;
        volatile boolean cacheValid;
        final ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

        void processCachedData() {
            rwl.readLock().lock();
            if (!cacheValid) {
                // The cache is invalid. The read lock must be released before acquiring the write lock.
                rwl.readLock().unlock();
                rwl.writeLock().lock();
                try {
                    // Check validity again: between releasing the read lock and acquiring
                    // the write lock, another thread may already have updated the data.
                    if (!cacheValid) {
                        data = new Object();
                        cacheValid = true;
                    }
                    // Acquire the read lock without releasing the write lock first: this is the downgrade.
                    rwl.readLock().lock();
                } finally {
                    // Release the write lock while still holding the read lock
                    rwl.writeLock().unlock();
                }
            }
            try {
                System.out.println(data);
            } finally {
                // Release the read lock
                rwl.readLock().unlock();
            }
        }
    }

    The code above shows read-write lock downgrading. Why downgrade at all? Wouldn't it be simpler to just hold the write lock the whole time and read under it? This is for performance: the write lock is exclusive, so if the subsequent read is time-consuming, all other threads must queue up waiting while the lock holder is, in fact, only reading. Downgrading to a read lock lets other readers proceed concurrently and improves overall performance.

  • Lock upgrading is not supported — an example:

    final static ReentrantReadWriteLock rwl = new ReentrantReadWriteLock();

    public static void main(String[] args) {
        upgrade();
    }

    public static void upgrade() {
        rwl.readLock().lock();
        System.out.println("Read lock acquired");
        rwl.writeLock().lock(); // blocks forever: upgrading is not supported
        System.out.println("Upgrade successful");
    }

    The code above never prints "Upgrade successful". Why is upgrading not supported? A read lock is a shared lock that can be held by multiple threads at once. If one holder were allowed to upgrade to the exclusive write lock while other threads still hold the read lock, what would happen to them? And if several read-lock holders all tried to upgrade, each would wait for the others to release their read locks, and none could proceed. Upgrading would require the read lock and the write lock to be held by different threads at the same time, which cannot work.

Q: What is a spinlock? What are its advantages and disadvantages?

A: A spin lock keeps trying to acquire the lock until it succeeds; the implementation is a loop that retries until the lock is obtained. Comparing the spinning and non-spinning acquisition processes: a spinning thread does not give up its CPU time slice — it keeps attempting to acquire the lock until it succeeds — whereas a non-spinning lock puts the thread to sleep when acquisition fails, releasing its CPU time slice, and tries again once another thread releases the lock. So the main difference: on failure, a non-spinning lock blocks the thread until it is woken up, while a spin lock just keeps trying.

Benefit: blocking and waking a thread are both expensive. If the synchronized code is simple, executing it may cost less than a thread switch, so by just spinning and waiting, you avoid the overhead of context switches and improve efficiency.

Disadvantage: although the cost of context switching is avoided, a new cost appears — continuous spinning occupies the CPU doing useless work. The cost of spinning is low at first, but if the lock still cannot be acquired, the cost grows and grows.

How to choose: spinning suits scenarios where concurrency is not especially high and the synchronized code executes quickly; spinning then avoids thread switches and improves efficiency. If the synchronized code takes a long time, so that threads hold the lock for a long time, spinning only wastes CPU resources.
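A minimal spin lock built on CAS, for illustration only (not production code; it is not reentrant):

import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // spin: keep retrying the CAS until the lock becomes free
        while (!owner.compareAndSet(null, current)) {
            // busy-wait; no blocking, no context switch
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        owner.compareAndSet(current, null); // only the owner can release
    }
}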

Question: What optimizations did the JVM make for locks?

A: In JDK 1.6, the HotSpot virtual machine introduced many performance improvements for the synchronized built-in lock, including adaptive spinning, lock elimination, lock coarsening, biased locking, lightweight locking, and more.

  • Adaptive spinning: the downside of spin locks is that if the lock cannot be obtained for a long time, the thread keeps trying and wastes CPU. Adaptive spinning solves this: the spin duration is not fixed but is determined by factors such as the success and failure rates of recent spins and the state of the current lock owner. Spin locks became smart.

  • Lock elimination: if the compiler determines that certain objects can never be accessed by other threads, locking on them must be unnecessary, and such locks can be safely removed automatically.

  • Lock coarsening might look like this:

    public void lockCoarsening() {
        synchronized (this) {
            //do something
        }
        synchronized (this) {
            //do something
        }
        synchronized (this) {
            //do something
        }
    }

    Releasing and immediately reacquiring the lock here is pointless; the synchronized region can be enlarged:

    public void lockCoarsening() {
        synchronized (this) {
            //do something
            //do something
            //do something
        }
    }

    However, lock coarsening does not suit loop scenarios: coarsening across a loop would make the thread hold the lock for a long time, and other threads could not acquire it.

  • Biased/lightweight/heavyweight locks: these three are states of the synchronized lock, indicated by the mark word in the object header.

    Biased locking: if a lock is never contended, there is no need to really lock it — marking it is enough. After an object is initialized and before any thread acquires its lock, it is biasable. When the first thread accesses it and tries to take the lock, that thread is recorded; afterwards, if the thread attempting to take the lock is the bias owner, it acquires the lock directly.

    A lightweight lock: when a biased lock is accessed by a second thread, it is upgraded to a lightweight lock, and threads spin to acquire it without blocking.

    Heavyweight locks: when multiple threads are in actual contention and the contention lasts long, biased and lightweight locks no longer suffice, and the lock inflates into a heavyweight lock. Threads that fail to acquire it block.

Question: What is a deadlock? What are the necessary conditions for deadlocks? What should I do if I have a deadlock?

A: Definition of deadlock: a “permanent” blocking state in which a group of threads competing for resources wait for each other.

A deadlock requires four necessary conditions:

  • Mutual exclusion: shared resources X and Y can each be occupied by only one thread at a time;
  • Hold and wait: thread T1 has acquired shared resource X and, while waiting for shared resource Y, does not release X;
  • Non-preemption: other threads cannot forcibly take resources already held by thread T1;
  • Circular wait: thread T1 waits for a resource occupied by thread T2, while thread T2 waits for a resource occupied by thread T1.

If a deadlock occurs in production, save the “crime scene” (JVM dumps, logs, and so on), then restart the service immediately to restore availability. To prevent deadlocks, break any one of the necessary conditions above: as long as one of them cannot hold, no deadlock is possible.

  • For the “hold and wait” condition, apply for all resources at once, so there is no waiting. (Introduce a resource-manager role: to obtain resources in one shot, go through the resource manager first.)
  • The “non-preemption” condition is broken by having a thread that holds some resources release them when it fails to acquire the next one. (In Java, try to acquire a Lock and give up if it cannot be obtained; see the tryLock sketch after this list.)
  • The “circular wait” condition can be prevented by requesting resources in a fixed order: give resources a linear ordering and always request the lower-numbered resource before the higher-numbered one, so no cycle can form.
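
As referenced above, here is a minimal sketch of breaking non-preemption with ReentrantLock.tryLock (my own illustration; the Account class and transfer method are hypothetical):

    import java.util.concurrent.locks.ReentrantLock;

    class Account {
        private final ReentrantLock lock = new ReentrantLock();
        private int balance;

        // Retry until both locks are held at the same time. Backing off
        // (releasing this.lock) when the target's lock is unavailable breaks
        // the non-preemption condition, so no deadlock can form.
        void transfer(Account target, int amt) {
            while (true) {
                if (this.lock.tryLock()) {
                    try {
                        if (target.lock.tryLock()) {
                            try {
                                this.balance -= amt;
                                target.balance += amt;
                                return;
                            } finally {
                                target.lock.unlock();
                            }
                        }
                    } finally {
                        this.lock.unlock();
                    }
                }
                // neither lock is held here; loop and retry
            }
        }
    }

In practice a short random sleep before each retry also avoids livelock, where two threads endlessly acquire and back off in lockstep.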

Question: What are the main problems synchronized solves?

A: Synchronized solves two key problems in concurrent programming. One is mutual exclusion, that is, only one thread is allowed to access a shared resource at a time. The other is synchronization, or how threads communicate and collaborate with each other.

Problem: Is synchronized used correctly in the following code?

    class Account {
        // Account balance
        private Integer balance;
        // Account password
        private String password;

        void withdraw(Integer amt) {
            synchronized (balance) {
                if (this.balance > amt) {
                    this.balance -= amt;
                }
            }
        }

        void updatePassword(String pw) {
            synchronized (password) {
                this.password = pw;
            }
        }
    }

Answer: The basic principles for choosing a lock object are: private, immutable, and non-reusable. The code above violates them. Since Integer caches values from -128 to 127 and String maintains a string constant pool, Integer and String objects can be reused across the JVM (so can Boolean objects). What does reuse mean? It means your lock object may also be used by unrelated code; if that code acquires it and never releases it, your program can never get the lock — a hidden risk. (Note also that this.balance -= amt rebinds balance to a different Integer object, so the lock is not even immutable.)

Question: Why does the Java SDK provide Semaphore when Lock is already available?

A: Semaphore translates as “semaphore”. The semaphore model can be summarized as one counter, one wait queue, and three methods: init(), down(), and up(), all atomic (a minimal sketch follows the list):

  • init(): sets the initial value of the counter.
  • down(): decrements the counter by 1; if the counter value is now less than zero, the current thread blocks, otherwise it continues.
  • up(): increments the counter by 1; if the counter value is still less than or equal to zero, one thread in the wait queue is woken and removed from the queue.
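
A minimal sketch of this counter-plus-queue model (my own illustration, not the JDK's Semaphore; it ignores spurious wakeups for brevity):

    // One counter, one wait queue (the object's monitor), atomic down()/up()
    class SimpleSemaphore {
        private int count;

        SimpleSemaphore(int initial) {
            this.count = initial; // init(): set the counter's initial value
        }

        synchronized void down() throws InterruptedException {
            count--;
            if (count < 0) {
                wait(); // counter below zero: block the current thread
            }
        }

        synchronized void up() {
            count++;
            if (count <= 0) {
                notify(); // wake one thread from the wait queue
            }
        }
    }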

Both Lock and Semaphore can serve as a mutex, but a Semaphore with a count greater than one can also allow several threads into a critical section at once. A common requirement is pooled resources (connection pools, object pools, thread pools) where multiple threads may each take one pooled resource at a time; that is not easy to implement with Lock alone.

Java concurrent container

Q: Why is HashMap not thread-safe?

A: There are many places where HashMap is unsafe.

  • The put() method in HashMap contains the line modCount++, a compound read-modify-write which by itself is obviously not thread-safe.

  • Values read during resizing may be inaccurate. When a HashMap expands, it creates an empty array and then moves the old entries into it; a get() performed at that moment may return null even though the key is present. The following code throws: Exception in thread "main" java.lang.RuntimeException: HashMap is not thread safe. at top.xiaoduo.luka.api.activity.controller.HashMapNotSafe.main(HashMapNotSafe.java:26).

    import java.util.HashMap;
    import java.util.Map;
    import java.util.stream.IntStream;

    public class HashMapNotSafe {
        public static void main(String[] args) {
            final Map<Integer, String> map = new HashMap<>();
            final Integer targetKey = 65535;
            final String targetValue = "v";
            map.put(targetKey, targetValue);

            // One thread keeps inserting, forcing the map to resize...
            new Thread(() -> IntStream.range(0, targetKey)
                    .forEach(key -> map.put(key, "someValue"))).start();

            // ...while the main thread reads a key that is definitely present.
            while (true) {
                if (null == map.get(targetKey)) {
                    throw new RuntimeException("HashMap is not thread safe.");
                }
            }
        }
    }
  • Put collisions can also lose data. If two threads simultaneously put entries whose keys hash to the same slot, both may see the slot as empty and write to the same position, so one of the two values is lost.

  • Visibility is not guaranteed. If one thread puts a new value on a key, there is no guarantee that another thread will get the new value.

  • An infinite loop can drive CPU usage to 100%. This happened during resizing (in Java 7), when concurrent rehashing could turn a linked list into a ring; a subsequent get() then loops forever.

Question: Why does ConcurrentHashMap convert a bucket into a red-black tree only when it holds more than 8 nodes?

A: ConcurrentHashMap stores data as an array plus linked lists; when a list grows longer than 8, it is converted into a red-black tree. First, why a red-black tree at all? A red-black tree is a balanced binary search tree, so queries are fast: lookups in a linked list are O(n), while in a red-black tree they are O(log n). Converting long lists into trees therefore improves performance. Then why not use a red-black tree from the start? A single TreeNode occupies about twice the space of an ordinary Node, so nodes are converted only when a bucket holds enough of them; below the threshold, plain lists save space. The Java source explains this:

    Because TreeNodes are about twice the size of regular nodes, use them only
    when bins contain enough nodes to warrant use (see TREEIFY_THRESHOLD).
    And when they become too small (due to removal or resizing) they are
    converted back to plain bins.

The list turns into a red-black tree when it reaches 8 nodes and turns back into a list when it shrinks to 6, reflecting a trade-off between time and space. The threshold of 8 is also explained in the Javadoc:

    In usages with well-distributed user hashCodes, tree bins are rarely used.
    Ideally, under random hashCodes, the frequency of nodes in bins follows a
    Poisson distribution (http://en.wikipedia.org/wiki/Poisson_distribution)
    with a parameter of about 0.5 on average for the default resizing threshold
    of 0.75, although with a large variance because of resizing granularity.
    Ignoring variance, the expected occurrences of list size k are
    (exp(-0.5) * pow(0.5, k) / factorial(k)). The first values are:
    0: 0.60653066
    1: 0.30326533
    2: 0.07581633
    3: 0.01263606
    4: 0.00157952
    5: 0.00015795
    6: 0.00001316
    7: 0.00000094
    8: 0.00000006
    more: less than 1 in ten million

This means that with a well-distributed hashCode, producing discrete and even results, red-black trees are rarely needed. Ideally list lengths follow a Poisson distribution, and the probability of a list reaching length 8 is less than 1 in 10 million. So in general lists never become trees; if your code does trigger the conversion, consider whether your hashCode method is appropriate.

Question: Compare ConcurrentHashMap and Hashtable. What are the differences?

A: The main differences are:

  • Hashtable's data structure is also an array plus linked lists, and it achieves thread safety by marking key methods synchronized. ConcurrentHashMap differs between Java 7 and Java 8: Java 7 uses Segment-based lock striping (Segment extends ReentrantLock) to ensure safety, while Java 8 abandons the Segment design and uses Node + CAS + synchronized to guarantee thread safety.

  • Performance difference

    Hashtable, because it uses synchronized, degrades dramatically as the number of threads grows: only one thread can operate on the object at a time while the others block, plus there is context-switching overhead.

    ConcurrentHashMap in Java 7 has a maximum concurrency equal to the number of segments (16 by default), so it is much more efficient than Hashtable. Java 8 uses the Node + CAS + synchronized approach. Take put() as an example: when the hash computes a slot and the slot is empty, CAS is used to set the value, so the array layer is lock-free; synchronized is used only when the computed slot already holds data (a hash collision) and the value must be mounted onto a linked list or red-black tree. Its concurrency is therefore the number of array slots.

  • Iteration behaves differently. Modifying a Hashtable while iterating throws ConcurrentModificationException. expectedModCount is captured when the iterator is created and never changes, while modifying the data updates modCount, so the two values inevitably diverge:

    public T next() {
        if (modCount != expectedModCount)
            throw new ConcurrentModificationException();
        return nextElement();
    }

    ConcurrentHashMap, by contrast, does not throw an exception when data is modified during iteration (see the demo below).
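
A small demonstration (my own example) that ConcurrentHashMap's iterator is weakly consistent rather than fail-fast:

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ConcurrentIterationDemo {
        public static void main(String[] args) {
            Map<String, Integer> map = new ConcurrentHashMap<>();
            map.put("a", 1);
            map.put("b", 2);
            for (String key : map.keySet()) {
                map.put("c", 3); // safe: no ConcurrentModificationException
            }
            System.out.println(map); // contains a, b and c
        }
    }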

Question: How are the ConcurrentHashMap implementations in 1.7 and 1.8 designed, respectively?

A:

Question: What is CopyOnWriteArrayList?

A:

  • Definition: CopyOnWriteArrayList is a concurrent container in Java's java.util.concurrent package. It is a thread-safe ArrayList that requires no lock for reads; writes are made on a fresh copy of the underlying array, a read/write-separation strategy. We can call it a “copy-on-write” container. Reads are concurrent and lock-free, and, most importantly, writes do not disturb reads: a write copies the original array and operates on the new copy, leaving the original untouched. Only writes are synchronized with each other. I find it similar to a database's multi-version concurrency control.

  • The source of the add() method:

    public boolean add(E e) {
        final ReentrantLock lock = this.lock;
        lock.lock();
        try {
            Object[] elements = getArray();
            int len = elements.length;
            Object[] newElements = Arrays.copyOf(elements, len + 1);
            newElements[len] = e;
            setArray(newElements);
            return true;
        } finally {
            lock.unlock();
        }
    }
    • Lock with ReentrantLock
    • Copy the original array into a new array one element longer and append the new element
    • Point the array reference at the new array
    • Unlock
  • Advantages

    • High read performance
    • No ConcurrentModificationException when data is modified during iteration
  • Disadvantages

    • High memory usage: every write copies a new array, and with large data volumes GC may become frequent
    • When elements are large or complex, copying is expensive
    • Real-time reads are not guaranteed: during a write, readers still see old data; new data is visible only after the write completes (see the snapshot demo below)
  • Applicable scenarios: read-heavy, write-light workloads with low real-time requirements
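
A small demonstration of the real-time weakness mentioned above (my own example): an iterator is a snapshot and never sees writes made after its creation.

    import java.util.Iterator;
    import java.util.concurrent.CopyOnWriteArrayList;

    public class SnapshotDemo {
        public static void main(String[] args) {
            CopyOnWriteArrayList<String> list = new CopyOnWriteArrayList<>();
            list.add("a");
            list.add("b");

            Iterator<String> it = list.iterator(); // snapshot of the array now
            list.add("c");                         // the write goes to a new copy

            while (it.hasNext()) {
                System.out.print(it.next() + " "); // prints "a b", never "c"
            }
        }
    }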

Concurrent tools

Question: How does AQS work?

A: Here I defer to a relatively clear post: “A Line-by-Line Source-Code Analysis of AbstractQueuedSynchronizer”.

Atomic classes

Q: What is atomicity? Why is atomicity a problem? How to solve it?

A: Atomicity guarantees that a group of operations either all succeed or all fail; an atomic operation is indivisible. In essence it is a consistency requirement across multiple resources: the intermediate states of the operation are invisible to the outside.

If a thread could execute one or a group of instructions without interruption, that is, without being switched out mid-operation, atomicity would not be a problem. So the root cause of atomicity problems is thread switching. Thread switching relies on CPU interrupts, so disabling interrupts could solve atomicity, and on early single-core CPUs it did. On multi-core CPUs, however, even disabling interrupts is not enough, because multiple threads can operate on the resource truly simultaneously. The condition that solves atomicity is therefore that only one thread at a time executes the operations, which we call mutual exclusion, and which can be achieved with locks.

Question: What is an atomic class? What is it for?

A: Atomicity guarantees that a group of operations all succeed or all fail. Classes that perform operations such as get-and-set, increment, and decrement atomically are called atomic classes. Atomic classes act like locks in that they ensure thread safety under concurrency. Their advantages over locks are listed below (a usage sketch follows the list):

  • Finer-grained control: an atomic variable confines the scope of contention to the level of a single variable, whereas a lock's granularity is usually coarser than that.
  • Higher efficiency: atomic classes use CAS underneath and do not block threads; except under heavy contention, CAS is more efficient than locking.
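
A minimal sketch of the typical use (my own example): a lock-free, thread-safe counter.

    import java.util.concurrent.atomic.AtomicInteger;

    public class AtomicCounterDemo {
        private static final AtomicInteger count = new AtomicInteger(0);

        public static void main(String[] args) throws InterruptedException {
            Runnable task = () -> {
                for (int i = 0; i < 10_000; i++) {
                    count.incrementAndGet(); // atomic ++, no synchronized needed
                }
            };
            Thread t1 = new Thread(task);
            Thread t2 = new Thread(task);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println(count.get()); // always 20000
        }
    }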

Question: What kinds of atomic classes are there?

A:

  • Basic-type atomic classes: AtomicInteger, AtomicLong, and AtomicBoolean. Common AtomicInteger methods:

    public final int getAndSet(int newValue)  // atomically set a new value, return the old one
    public final int getAndIncrement()        // atomically increment, return the old value
    public final int getAndDecrement()        // atomically decrement, return the old value
    public final int getAndAdd(int delta)     // atomically add delta, return the old value
  • The array type is AtomicIntegerArray, AtomicLongArray, and AtomicReferenceArray

  • Reference-type atomic classes: AtomicReference, AtomicStampedReference, and AtomicMarkableReference. They work like the basic-type atomic classes but guarantee atomicity for object references.

  • Field-updater (“upgrade”) atomic classes: AtomicIntegerFieldUpdater, AtomicLongFieldUpdater, and AtomicReferenceFieldUpdater can upgrade an ordinary volatile field to atomic operation. Take AtomicIntegerFieldUpdater as an example:

    import java.util.concurrent.atomic.AtomicIntegerFieldUpdater;

    public class AtomicIntegerFieldUpdaterDemo implements Runnable {

        static class Score {
            volatile int score;
        }

        static Score computer;
        static Score math;

        private AtomicIntegerFieldUpdater<Score> atomicIntegerFieldUpdater =
                AtomicIntegerFieldUpdater.newUpdater(Score.class, "score");

        @Override
        public void run() {
            for (int i = 0; i < 1000; i++) {
                // plain ++ on an ordinary variable
                computer.score++;
                // ++ through the field-updater atomic class
                atomicIntegerFieldUpdater.incrementAndGet(math);
            }
        }

        public static void main(String[] args) throws InterruptedException {
            computer = new Score();
            math = new Score();
            AtomicIntegerFieldUpdaterDemo r = new AtomicIntegerFieldUpdaterDemo();
            Thread t1 = new Thread(r);
            Thread t2 = new Thread(r);
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.println("Result of the ordinary variable: " + computer.score);
            System.out.println("Result via the field updater: " + math.score);
        }
    }
    // Sample output:
    // Result of the ordinary variable: 1998
    // Result via the field updater: 2000

    If these fields need atomic operations, why not design them as AtomicInteger from the start?

    • Historical reasons: the field was declared as an ordinary variable and is already widely used, making it costly to change.
    • If atomicity is needed in only one or two places while many others do not need it, there is no reason to make the field atomic, since atomic classes consume more resources than ordinary variables.
  • Adders: LongAdder and DoubleAdder

  • Accumulators: LongAccumulator and DoubleAccumulator

Problem: Compare AtomicInteger and synchronized.

A:

  • Similarity: both achieve thread safety.

  • Differences

    • The underlying principles differ. synchronized uses a monitor: acquire the monitor, execute, then release it. AtomicInteger is based on CAS.
    • The scope of use differs. synchronized can guard methods and code blocks, so its synchronization scope is wider; AtomicInteger protects only a single object.
    • Performance differs. This is the pessimistic-versus-optimistic lock trade-off: a pessimistic lock has a roughly fixed cost, while an optimistic lock is cheap at first but, if the CAS keeps failing, continual spinning becomes expensive. So they suit different scenarios: synchronized fits intense contention and complex critical sections, while atomic classes fit light contention. And synchronized, after the JVM's optimizations, also performs well.

Problem: Atomic class underlying useCAS, thenCASWhat is it?

A: CAS (compare-and-swap) is an idea, an algorithm. CAS has three operands: the memory value V, the expected value A, and the new value B. The idea is to update the memory value to B only when the expected value A equals the current memory value V; otherwise give up and try again later. The compare-and-swap is performed by a single CPU instruction, so it is atomic.
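
A minimal sketch (my own example) of the usual CAS retry loop, built on AtomicInteger.compareAndSet:

    import java.util.concurrent.atomic.AtomicInteger;

    public class CasLoopDemo {
        private static final AtomicInteger value = new AtomicInteger(0);

        // Atomically double the value using a CAS retry loop.
        static int doubleValue() {
            while (true) {
                int expected = value.get();   // A: the expected value
                int updated = expected * 2;   // B: the new value
                if (value.compareAndSet(expected, updated)) {
                    return updated;           // V matched A: the swap succeeded
                }
                // another thread changed V in between: loop and retry
            }
        }

        public static void main(String[] args) {
            value.set(3);
            System.out.println(doubleValue()); // 6
        }
    }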

Question: What are the disadvantages of CAS?

A: It has the following disadvantages:

  • The biggest drawback of CAS is the ABA problem: the value goes from A to B and then back to A, so it equals the original value even though it has been modified in between. Since CAS only checks whether the current value matches the expected value, it cannot detect such intermediate changes. The fix is to add a version number that is incremented on every modification, and to compare both the value and the version (as the AtomicStampedReference sketch after this list shows).
  • Spinning can last too long. CAS is usually paired with a loop, even an infinite one; under high concurrency, a thread that keeps failing keeps retrying and burns CPU.
  • The scope cannot be flexibly controlled: a CAS operates on a single variable, a primitive or a reference, but performing one CAS across a group of shared variables is difficult.
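
As mentioned in the ABA item, a version number detects intermediate changes. A minimal sketch (my own example) using AtomicStampedReference, which pairs the reference with a stamp:

    import java.util.concurrent.atomic.AtomicStampedReference;

    public class AbaDemo {
        public static void main(String[] args) {
            AtomicStampedReference<String> ref =
                    new AtomicStampedReference<>("A", 0);

            int stamp = ref.getStamp(); // remember version 0

            // Another thread performs A -> B -> A, bumping the stamp twice.
            ref.compareAndSet("A", "B", ref.getStamp(), ref.getStamp() + 1);
            ref.compareAndSet("B", "A", ref.getStamp(), ref.getStamp() + 1);

            // A plain value comparison would pass, but the stamp gives it away.
            boolean swapped = ref.compareAndSet("A", "C", stamp, stamp + 1);
            System.out.println(swapped); // false: ABA detected
        }
    }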

ThreadLocal

Question: What is ThreadLocal used for?

Answer: Two typical scenarios:

  • ThreadLocal as a per-thread exclusive object: a copy is created for each thread, and each thread can modify only its own copy without affecting the others, turning an otherwise thread-unsafe usage into a thread-safe one.
  • ThreadLocal is used to provide local variables within a thread that work for the lifetime of the thread, reducing the complexity of passing common variables between multiple functions or components within the same thread.

From the thread’s perspective, the target variable is like a Local variable to the thread, which is what Local means in the class name.
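
A minimal sketch of the first scenario (my own example): SimpleDateFormat is not thread-safe, so each thread gets its own copy through ThreadLocal.

    import java.text.SimpleDateFormat;
    import java.util.Date;

    public class DateFormatDemo {
        // one SimpleDateFormat per thread: nothing is shared, hence no races
        private static final ThreadLocal<SimpleDateFormat> FORMATTER =
                ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

        public static String format(Date date) {
            return FORMATTER.get().format(date); // each thread uses its own copy
        }

        public static void main(String[] args) {
            for (int i = 0; i < 3; i++) {
                new Thread(() -> System.out.println(
                        Thread.currentThread().getName() + ": " + format(new Date()))).start();
            }
        }
    }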

Question: Is ThreadLocal meant to solve the problem of multi-threaded access to shared resources?

A: No. Although ThreadLocal solves thread-safety problems in multithreaded situations, the key point is that the resource is not shared but exclusive to each thread: ThreadLocal makes a copy per thread, so there is no contention at all. If a shared object (say, a static one) is merely stored inside a ThreadLocal, ThreadLocal cannot make access to it thread-safe.

Question: What is the relationship between ThreadLocal and synchronized?

A: While they both address thread-safety issues, they are designed differently.

  • ThreadLocal avoids resource contention altogether by giving each thread its own exclusive copy.
  • synchronized achieves thread safety by locking, so that a critical-section resource is accessed by only one thread at a time.

In addition, ThreadLocal has another usage scenario: conveniently retrieving information stored per thread from anywhere in the current thread's task, which avoids threading parameters through every call.

Question: What is the relationship among Thread, ThreadLocal, and ThreadLocalMap?

A: Each Thread holds a ThreadLocalMap, and a ThreadLocalMap can store multiple ThreadLocal keys, each mapped to a value.

(Figure: the relationship between Thread and ThreadLocalMap.)

(Figure: ThreadLocal class relationships.)

Question: How does ThreadLocal maintain a copy of the variable for each thread?

A: Each Thread object holds its own ThreadLocalMap; the map's keys are ThreadLocal instances and the values are that thread's variable copies. ThreadLocal.get() first obtains the current thread, then looks itself up in that thread's map, so every thread reads and writes only its own copy. (A sketch of the lookup follows.)
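
For reference, the lookup in the JDK's ThreadLocal.get() (abridged from the OpenJDK source; comments mine):

    public T get() {
        Thread t = Thread.currentThread();
        ThreadLocalMap map = getMap(t);      // the map lives on the Thread object
        if (map != null) {
            ThreadLocalMap.Entry e = map.getEntry(this); // key: this ThreadLocal
            if (e != null) {
                @SuppressWarnings("unchecked")
                T result = (T) e.value;      // the current thread's own copy
                return result;
            }
        }
        return setInitialValue();            // lazily install initialValue()
    }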

Question: The ThreadLocal OOM problem

Java memory model related

Question: What is the hierarchical CPU caching model?

Question: Bus locking mechanism and MESI cache consistency protocol?

Question: Why do visibility, atomicity, and ordering problems arise in concurrent programming?

A:

  • Visibility problems are caused by caching: there are multiple levels of cache between the CPU and main memory, and caches on different CPUs are not synchronized instantly. The JMM abstracts this as each thread having its own working memory, whose contents are not visible to other threads.
  • Atomicity problems are caused by thread switching.
  • Ordering problems are caused by compiler and processor optimizations (instruction reordering).

Question: Does volatile guarantee visibility, ordering, and atomicity?

A: volatile guarantees visibility and ordering, but not atomicity. volatile does make each single read or write of the variable atomic, but it cannot make a combination of such operations atomic: for example, i++ on a volatile variable — a read, an add, and a write — is not atomic.
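
A minimal sketch (my own example) of what volatile is good for — a visibility flag — as opposed to what it cannot do, an atomic counter:

    public class StopFlagDemo {
        // volatile makes the write in main() visible to the worker thread;
        // without it the worker might never observe stopped == true.
        private static volatile boolean stopped = false;

        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                while (!stopped) {
                    // busy work
                }
                System.out.println("worker observed the flag and exits");
            });
            worker.start();
            Thread.sleep(100);
            stopped = true; // visible to the worker thanks to volatile
            worker.join();
        }
    }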

Question: Compare the JVM memory structure with the Java memory model.

A:

  • The JVM memory structure concerns the runtime data areas of the Java virtual machine. Java runs on the JVM, which divides the memory it manages into different areas; per the Java Virtual Machine Specification these are the heap, the virtual machine stacks, the method area, the native method stacks, the program counter, and the runtime constant pool.
  • The Java memory model concerns Java concurrent programming: it is a set of specifications that addresses the unpredictable effects of multi-level CPU caches, processor optimization, and instruction reordering.

Question: Why do we need the JMM (Java Memory Model)?

A: Originally there was no memory-model concept, so the final result of a program depended on the processor. Each processor's rules differ, so the same code could produce different results on different processors, and different JVM implementations would likewise produce different results. Java therefore needed a standard that developers, compiler writers, and JVM engineers could all agree on. That standard is the JMM.

Question: What is the JMM?

A: The JMM is a set of multithreading-related specifications that JVM implementations must comply with; it involves the processor, caches, concurrency, and the compiler. (Concretely, the Java Memory Model specifies how the JVM provides means to disable caching and compiler optimizations on demand: volatile, synchronized, and final, plus the six happens-before rules.) It solves the unpredictability caused by multi-level CPU caches, processor optimization, instruction reordering, and so on. Communication between Java threads is governed by the JMM, which determines when one thread's write to a shared variable becomes visible to another thread.

Question: What is instruction reordering?

A: The order in which the statements of a Java program execute is not necessarily the order in which we wrote them, because the compiler, the JVM, and the CPU may all change the instruction order for optimization purposes. This is called reordering.

Question: What’s the benefit of reordering, and why?

A: The benefit of reordering is better processor throughput. Here's an example (see the sketch below):

Load reads data from main memory; Store writes data to main memory. Before reordering, the variable had to be loaded and stored twice; after reordering, it needs only one Load and one Store, which speeds up overall execution. Reordering is not arbitrary: it must never change the semantics within a single thread.
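
A hedged illustration (my own example, standing in for the article's original instruction diagram):

    class ReorderExample {
        int a;
        int b;

        void originalOrder() {
            a = 100;     // Store a
            b = 200;     // Store b
            a = a + 10;  // Load a, Store a -> 'a' is stored twice overall
        }

        // A legal reordering with the same single-thread result: the two
        // writes to 'a' become adjacent, so 'a' can stay in a register and
        // needs only one Store in total.
        void reordered() {
            a = 100;
            a = a + 10;
            b = 200;
        }
    }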

Question: Which operations in Java are atomic?

A: The following operations are all atomic:

  • Read/write operations on the basic types other than long and double (int, byte, boolean, short, char, float) are naturally atomic.
  • All read/write operations on references are atomic.
  • Read/write operations on variables declared volatile are atomic (including long and double).
  • Some methods of the classes in the java.util.concurrent.atomic package are atomic, such as AtomicInteger's incrementAndGet.

But beware: atomic + atomic != atomic. For example, reading a volatile int i and assigning to it are each atomic, but read-then-increment-then-assign (i++) is not atomic as a whole.

Question: Why are reads and writes of long and double not atomic?

A: Both long and double occupy 64 bits, and a 64-bit write may be split into two 32-bit operations, one for the low 32 bits and one for the high 32 bits. Under multiple threads, a torn (wrong) value can therefore be observed. Declaring such variables volatile makes their reads and writes atomic. In practice, however, volatile is rarely needed for this, because mainstream virtual machine implementations already treat long and double reads and writes as atomic.

Question: Why do memory visibility issues occur

Answer: The question involves modern CPU architecture. (Figure: modern CPU cache architecture.)

Because of the CPU cache coherence protocol, the caches of multiple CPUs do not remain inconsistent. But the coherence protocol has a performance cost, so CPU designers added optimizations on top of it: for example, a Store Buffer and a Load Buffer sit between the core and L1, and the Store Buffer is not synchronized with L1. Mapped to Java, the Java memory model abstracts away the CPU's cache hierarchy (figure omitted) and makes the following provisions:

  1. All variables are stored in main memory, and each thread has its own working memory. Variables in working memory are copies of the corresponding variables in main memory.

  2. Threads cannot read/write main-memory variables directly; they operate on the copies in their own working memory and then synchronize them back to main memory, so that other threads can see the changes.

  3. Main memory is shared by all threads, but threads do not share each other's working memory; if threads need to communicate, it must happen via main memory as the relay.

Q: What is the happens-before relation?

A: The happens-before relation describes visibility: if the first action happens-before the second, then the result of the first action is guaranteed to be visible to the second.
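
A minimal sketch (my own example, in the spirit of the standard JSR-133 examples) combining the program-order and volatile happens-before rules:

    class HappensBeforeExample {
        int x = 0;
        volatile boolean v = false;

        void writer() {
            x = 42;   // (1) happens-before (2) by program order
            v = true; // (2) a volatile write happens-before any later volatile read
        }

        void reader() {
            if (v) {                   // (3) volatile read that sees (2)
                System.out.println(x); // guaranteed to print 42, by transitivity
            }
        }
    }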

Program design

Question: What's the difference between asynchronous and synchronous? How does Dubbo convert asynchronous to synchronous? Can you design such a program?

A: The difference is whether the caller needs to wait for the result: if it must wait, the call is synchronous; if not, it is asynchronous. Dubbo is a well-known RPC framework. At the TCP level, the thread that sends an RPC request does not wait for the RPC response; yet most RPC calls we write look synchronous, because the framework converts the asynchronous call into a synchronous one, roughly like this:

    private final Lock lock = new ReentrantLock();
    private final Condition done = lock.newCondition();
    private volatile Response response;

    // Block the calling thread until the RPC response arrives (timeout in ms)
    Object get(int timeout) throws InterruptedException, TimeoutException {
        long start = System.nanoTime();
        lock.lock();
        try {
            while (!isDone()) {
                // no result yet: wait until doReceived() signals or we time out
                done.await(timeout, TimeUnit.MILLISECONDS);
                long cur = System.nanoTime();
                if (isDone() || cur - start > TimeUnit.MILLISECONDS.toNanos(timeout)) {
                    break;
                }
            }
        } finally {
            lock.unlock();
        }
        if (!isDone()) {
            throw new TimeoutException();
        }
        return returnFromResponse();
    }

    // Has the RPC result been returned?
    boolean isDone() {
        return response != null;
    }

    // Called by the I/O thread when the response arrives
    private void doReceived(Response res) {
        lock.lock();
        try {
            response = res;
            if (done != null) {
                // wake the calling thread
                done.signal();
            }
        } finally {
            lock.unlock();
        }
    }

Question: Can you hand-write a reentrant spin lock?

    import java.util.concurrent.atomic.AtomicReference;

    public class ReentrantSpinLock {
        private AtomicReference<Thread> owner = new AtomicReference<>();
        // Reentrancy count; only ever touched by the lock holder
        private int count = 0;

        public void lock() {
            Thread current = Thread.currentThread();
            if (owner.get() == current) {
                count++; // reentrant acquisition
                return;
            }
            while (!owner.compareAndSet(null, current)) {
                System.out.println("-- I'm spinning --");
            }
        }

        public void unLock() {
            Thread current = Thread.currentThread();
            if (owner.get() == current) {
                if (count > 0) {
                    count--;
                } else {
                    // no CAS needed here: only the holder can unlock
                    owner.set(null);
                }
            }
        }

        public static void main(String[] args) {
            ReentrantSpinLock spinLock = new ReentrantSpinLock();
            Runnable runnable = () -> {
                System.out.println(Thread.currentThread().getName() + " starts trying to acquire the spin lock");
                spinLock.lock();
                try {
                    System.out.println(Thread.currentThread().getName() + " got the spin lock");
                    Thread.sleep(4000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                } finally {
                    spinLock.unLock();
                    System.out.println(Thread.currentThread().getName() + " released the spin lock");
                }
            };
            Thread thread1 = new Thread(runnable);
            Thread thread2 = new Thread(runnable);
            thread1.start();
            thread2.start();
        }
    }

Question: How would you quickly implement a rate limiter with Semaphore?

A:

    import java.util.List;
    import java.util.Vector;
    import java.util.concurrent.Semaphore;
    import java.util.function.Function;

    public class SemaphoreDemo {

        static class Link {
        }

        static class ObjPool<T, R> {
            final List<T> pool;
            final Semaphore semaphore;

            ObjPool(int size, T t) {
                pool = new Vector<>(size);
                for (int i = 0; i < size; i++) {
                    pool.add(t);
                }
                semaphore = new Semaphore(size); // at most 'size' threads at once
            }

            public R exec(Function<T, R> func) throws Exception {
                T t = null;
                semaphore.acquire();
                try {
                    System.out.println(Thread.currentThread().getName() + " -------- acquired a permit --------");
                    t = pool.remove(0);
                    System.out.println(Thread.currentThread().getName() + " got a pooled object and is executing");
                    return func.apply(t);
                } finally {
                    pool.add(t);
                    semaphore.release();
                }
            }
        }

        public static void main(String[] args) {
            ObjPool<Link, String> objPool = new ObjPool<>(5, new Link());
            for (int i = 0; i < 30; i++) {
                new Thread(() -> {
                    try {
                        objPool.exec(t -> t.toString());
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }).start();
            }
        }
    }

Source analysis