1.1. Multithreading basics

  • What are threads and processes? What are the relationships between threads and processes, their differences, their advantages and disadvantages?
  • What’s the difference between concurrency and parallelism?
  • Why multithreading?
  • What are the possible problems with using multithreading? (Memory leaks, deadlocks, thread insecurity, etc.)
  • What are the ways to create a thread? (a. Extend the Thread class; b. implement the Runnable interface; c. use the Executor framework; d. use FutureTask.)
  • What about the life cycle and state of threads?
  • What is context switching?
  • What is a thread deadlock? How do I avoid deadlocks?
  • What are the differences and similarities between sleep() and wait()?
  • Why do we call the run() method when we call the start() method, and why can’t we call the run() method directly?
  • Why must the condition check around wait() be placed in a while loop rather than an if?
  • Focus on Java thread locks: the underlying implementations of synchronized and ReentrantLock, and the underlying implementation of thread pools with their common parameters.
  • What locks does Java have? (Optimistic and pessimistic locks, synchronized and reentrant locks, etc.)
  • Use of common classes under J.U.C.: an in-depth look at ThreadPoolExecutor; the use of BlockingQueue.
  • What is the use of the Volatile keyword (including underlying principles)
  • Tuning strategies for thread pools
  • Talk about using multithreading and concurrency tools
  • Multithreading in Java, and thread pool growth and rejection policies.

.

What is a thread deadlock? How do I avoid deadlocks?

Thread deadlock describes a situation in which multiple threads are blocked at the same time, each waiting for a resource held by another to be released. Because the threads are blocked indefinitely, the program cannot terminate normally.

For example, suppose thread A holds resource 2 and thread B holds resource 1, and each tries to claim the resource held by the other at the same time; the two threads then wait for each other and enter a deadlock state. Here is an example of a thread deadlock, modeled on code from The Beauty of Concurrent Programming:

public class DeadLockDemo {
    private static Object resource1 = new Object(); // resource 1
    private static Object resource2 = new Object(); // resource 2

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (resource1) {
                System.out.println(Thread.currentThread() + "get resource1");
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(Thread.currentThread() + "waiting get resource2");
                synchronized (resource2) {
                    System.out.println(Thread.currentThread() + "get resource2");
                }
            }
        }, "thread 1").start();

        new Thread(() -> {
            synchronized (resource2) {
                System.out.println(Thread.currentThread() + "get resource2");
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
                System.out.println(Thread.currentThread() + "waiting get resource1");
                synchronized (resource1) {
                    System.out.println(Thread.currentThread() + "get resource1");
                }
            }
        }, "thread 2").start();
    }
}

The output

Thread[thread 1,5,main]get resource1
Thread[thread 2,5,main]get resource2
Thread[thread 1,5,main]waiting get resource2
Thread[thread 2,5,main]waiting get resource1

Thread A acquires the monitor lock of resource1 through synchronized (resource1), and Thread.sleep(1000) puts thread A to sleep for 1 second so that thread B has a chance to execute and acquire the monitor lock of resource2. When thread A and thread B wake up from their sleep, each attempts to request the resource held by the other, so the two threads fall into a state of waiting for each other, resulting in a deadlock. The example above satisfies all four necessary conditions for a deadlock to occur.

The following four conditions must be met to cause a deadlock:

  • Mutually exclusive condition: the resource can be occupied by only one thread at any time.
  • Request-and-hold condition: when a thread is blocked requesting resources, it keeps holding the resources it has already acquired.
  • Non-preemption condition: a resource cannot be forcibly taken away from the thread that holds it; the thread releases the resource only after it has finished using it.
  • Circular wait condition: several threads form a circular chain in which each waits for a resource held by the next.

How do I avoid thread deadlocks?

To avoid deadlocks, we only need to break one of the four conditions that cause them. Let's go through them one by one:

  • Break the mutual exclusion condition: we cannot break this one, because the whole point of using locks is to make access mutually exclusive (critical resources require mutually exclusive access).
  • Break the request-and-hold condition: request all needed resources at once.
  • Break the non-preemption condition: when a thread that already holds some resources fails to acquire the other resources it applies for, it releases the resources it holds.
  • Break the circular wait condition: prevent it by requesting resources in a fixed order and releasing them in the reverse order.

We changed the code for thread 2 to the following so that deadlocks would not occur.

new Thread(() -> {
    synchronized (resource1) {
        System.out.println(Thread.currentThread() + "get resource1");
        try {
            Thread.sleep(1000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(Thread.currentThread() + "waiting get resource2");
        synchronized (resource2) {
            System.out.println(Thread.currentThread() + "get resource2");
        }
    }
}, "thread 2").start();

Output:

Thread[thread 1,5,main]get resource1
Thread[thread 1,5,main]waiting get resource2
Thread[thread 1,5,main]get resource2
Thread[thread 2,5,main]get resource1
Thread[thread 2,5,main]waiting get resource2
Thread[thread 2,5,main]get resource2
Process finished with exit code 0

Why does this code prevent deadlocks?

Thread 1 acquires the monitor lock of resource1 first, so thread 2 cannot acquire it and has to wait. Thread 1 then acquires the monitor lock of resource2, which is still free. After thread 1 releases the monitor locks of resource1 and resource2, thread 2 can acquire them and run. This breaks the circular wait condition and thus avoids the deadlock.

What are the differences and similarities between sleep() and wait()?

  • The main difference is that the sleep method does not release the lock, while the wait method does.
  • Both can suspend the execution of a thread.
  • Wait is usually used for interthread interaction/communication, and sleep is usually used to pause execution.
  • After wait() is called, the thread does not wake up automatically; another thread must call notify() or notifyAll() on the same object (or you can use wait(long timeout), which wakes up automatically after the timeout). After sleep() completes, the thread resumes automatically (see the sketch below).
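To make the lock behavior concrete, here is a minimal sketch (class and field names are made up for illustration): the thread in waitForFlag() releases the monitor inside wait(), which is what lets setFlag() enter the same synchronized block and call notifyAll(); a Thread.sleep() placed inside a synchronized block would instead hold the monitor for the whole duration.

public class WaitVsSleepDemo {
    private final Object lock = new Object();
    private boolean flag = false;

    public void waitForFlag() throws InterruptedException {
        synchronized (lock) {
            // wait() releases the monitor of `lock` and suspends the thread;
            // the while loop guards against spurious wakeups.
            while (!flag) {
                lock.wait();
            }
        }
    }

    public void setFlag() {
        synchronized (lock) {
            flag = true;
            lock.notifyAll();   // wakes up threads waiting on the same object
        }
        // By contrast, Thread.sleep(1000) inside the synchronized block above
        // would pause the thread WITHOUT releasing the monitor.
    }
}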

1.2. Advanced knowledge of multi-threading

Volatile keyword (non-blocking algorithm)

  • Java Memory Model (JMM);
  • Reordering and the happens-before principle;
  • The role of the volatile keyword;
  • Talk about the difference between synchronized and volatile.

.

Synchronized and volatile are equivalent here in the sense that both solve the memory-visibility problem of a shared variable. However, the former is an exclusive lock: only one thread at a time can call get(), and the other calling threads are blocked, which brings the overhead of thread context switching and rescheduling. That is the downside of using locks. The latter is non-blocking and does not incur the overhead of thread context switching.

Using locks can solve shared variable memory visibility problems, but locks are too heavy because of the overhead of thread context switching. To address the memory visibility problem, Java also provides a weak form of synchronization that uses the volatile keyword. This keyword ensures that updates to a variable are immediately visible to other threads. When a variable is declared volatile, the thread does not cache the value in registers or elsewhere when writing to the variable. Instead, the value is flushed back to main memory. When another thread reads the shared variable, it retrieves the latest value from main memory, rather than using the value in the current thread’s working memory.

The memory semantics of volatile are similar to those of synchronized: writing a volatile variable is equivalent to exiting a synchronized block, and reading a volatile variable is equivalent to entering a synchronized block (the local working-memory copy is discarded and the latest value is fetched from main memory).

Volatile, while providing visibility guarantees, does not guarantee atomicity of operations.

So when is the volatile keyword generally used?

  • The write to the variable does not depend on its current value, because depending on the current value would mean a read-compute-write sequence of three operations, which is not atomic, and volatile does not guarantee atomicity (see the sketch after this list).
  • The variable is read and written without locking, because locking itself already guarantees memory visibility, so a variable accessed under a lock does not need to be declared volatile.
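A minimal sketch of the first case above (names are made up): a stop flag whose new value does not depend on its old value. volatile guarantees that the worker thread sees the write immediately, but it would not be enough for something like count++.

public class VolatileFlagDemo {
    // The write `running = false` does not depend on the current value,
    // so volatile visibility is sufficient here; no lock is needed.
    private volatile boolean running = true;

    public void worker() {
        while (running) {
            // do some work
        }
        System.out.println("worker stopped");
    }

    public void stop() {
        running = false;   // immediately visible to the worker thread
    }
}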

Atomic operations in Java

Atomic operations: a series of operations that either all execute or none execute. A counter, for example, reads the current value, adds 1, and then writes the result back. This is a read-modify-write sequence; if its atomicity is not guaranteed, there will be thread-safety issues.
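Where such a read-modify-write sequence is needed without a lock, the classes in java.util.concurrent.atomic implement it with CAS. A minimal sketch (names are made up):

import java.util.concurrent.atomic.AtomicInteger;

public class CounterDemo {
    private final AtomicInteger count = new AtomicInteger(0);

    public void increment() {
        // Atomically performs the read-add-write sequence; no synchronized needed.
        count.incrementAndGet();
    }

    public int get() {
        return count.get();
    }
}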

ThreadLocal

What does it do? How does it work? Do you understand the principle? Do you understand the memory leak problem?
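As a brief reminder of what ThreadLocal does, here is a minimal sketch (names are made up): each thread gets its own copy of the value (a SimpleDateFormat, which is not thread-safe), and remove() should be called when the value is no longer needed, which is the usual answer to the memory-leak question above, especially for pooled threads.

import java.text.SimpleDateFormat;
import java.util.Date;

public class DateFormatter {
    // Each thread gets its own SimpleDateFormat instance (SimpleDateFormat is not thread-safe).
    private static final ThreadLocal<SimpleDateFormat> FORMAT =
            ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"));

    public static String format(Date date) {
        return FORMAT.get().format(date);
    }

    public static void cleanup() {
        // Call this when the current thread no longer needs the value; in thread pools,
        // forgetting remove() is the usual cause of the ThreadLocal memory-leak problem,
        // because pool threads are reused and never die.
        FORMAT.remove();
    }
}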

The thread pool

  • Why thread pools?
  • Do you use thread pools?
  • How to create a thread pool? (The ThreadPoolExecutor constructor is recommended to create a thread pool.)
  • What are the important parameters of the ThreadPoolExecutor class? ThreadPoolExecutor saturation policy?
  • Do you understand thread pools?
  • How many common thread pools are there? Why not use FixedThreadPool?
  • How do I set the size of the thread pool?

.

Dirty read

A common concept. In multithreaded code it is inevitable that several threads concurrently access the instance variables of the same object. If correct synchronization is not done, the result is a "dirty read": the data read has already been changed by another thread.

Example of multithreaded thread safety issues

Take a look at this code:

public class ThreadDomain13
{
    private int num = 0;
    
    public void addNum(String userName)
    {
        try
        {
            if ("a".equals(userName))
            {
                num = 100;
                System.out.println("a set over!");
                Thread.sleep(2000);
            }
            else
            {
                num = 200;
                System.out.println("b set over!");
            }
            System.out.println(userName + " num = " + num);
        }
        catch (InterruptedException e)
        {
            e.printStackTrace();
        }
    }
}

Write two threads that pass in the strings "a" and "b" respectively:

public class MyThread13_0 extends Thread {
    private ThreadDomain13 td;

    public MyThread13_0(ThreadDomain13 td) {
        this.td = td;
    }

    public void run() {
        td.addNum("a");
    }
}

public class MyThread13_1 extends Thread {
    private ThreadDomain13 td;

    public MyThread13_1(ThreadDomain13 td) {
        this.td = td;
    }

    public void run() {
        td.addNum("b");
    }
}

Take a look at the results:

a set over!
b set over!
b num = 200
a num = 200
  • Normally "a num = 100" and "b num = 200" should be printed; instead, "b num = 200" and "a num = 200" are printed. This is a thread-safety problem. Let's think about how it arises:

  • mt0 runs first, assigns 100 to num, prints "a set over!", and then starts to sleep.

  • While mt0 is sleeping, mt1 runs, assigns 200 to num, prints "b set over!", and then prints "b num = 200".

  • mt0 wakes up from its sleep and prints num, but by this time num has already been changed to 200 by mt1, so "a num = 200" is printed.

The fix is to add the synchronized keyword to the addNum(String userName) method:

public class ThreadDomain13
{
    private int num = 0;
    
    public synchronized void addNum(String userName)
    {
        try
        {
            if ("a".equals(userName))
            {
                num = 100;
                System.out.println("a set over!");
                Thread.sleep(2000);
            }
            else
            {
                num = 200;
                System.out.println("b set over!");
            }
            System.out.println(userName + " num = " + num);
        }
        catch (InterruptedException e)
        {
            e.printStackTrace();
        }
    }
}

Take a look at the results:

a set over!
a num = 100
b set over!
b num = 200

Multiple locks for multiple objects

With synchronization still in place, change the code inside main:

public static void main(String[] args)
{
    ThreadDomain13 td0 = new ThreadDomain13();
    ThreadDomain13 td1 = new ThreadDomain13();
    MyThread13_0 mt0 = new MyThread13_0(td0);
    MyThread13_1 mt1 = new MyThread13_1(td1);
    mt0.start();
    mt1.start();
}

Take a look at the results:

a set over!
b set over!
b num = 200
a num = 100

The results are now printed in an interleaved order. Why is that?

There is an important concept here: the lock acquired through the synchronized keyword is an object lock, not a lock on a piece of code or on a method. Whichever thread executes a synchronized method first holds the lock of the object the method belongs to, and other threads can only wait. But there is a premise: since the lock is an object lock, it must be associated with an object, so the multiple threads must be accessing the same object.

If multiple threads access multiple objects, the Java virtual machine creates multiple locks. In the example above, two ThreadDomain13 objects are created, resulting in two locks. Since the two threads hold different locks, they are not bound by the "wait for the lock to be released" behavior, so each can run the code in addNum(String userName) on its own.
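As a side note, if mutual exclusion were wanted even across different ThreadDomain13 instances, one option (a sketch, not part of the original example) is to synchronize on a lock shared by all instances, such as the Class object; the two threads above would then compete for the same lock even though they operate on different objects:

public class ThreadDomain13 {
    private int num = 0;

    public void addNum(String userName) {
        // One lock shared by every instance: the Class object.
        synchronized (ThreadDomain13.class) {
            try {
                if ("a".equals(userName)) {
                    num = 100;
                    System.out.println("a set over!");
                    Thread.sleep(2000);
                } else {
                    num = 200;
                    System.out.println("b set over!");
                }
                System.out.println(userName + " num = " + num);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}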

Synchronized methods and lock objects

An entity class defines a synchronized method and an unsynchronized method:

public class ThreadDomain14_0 {
    public synchronized void methodA() {
        try {
            System.out.println("Begin methodA, threadName = " + Thread.currentThread().getName());
            Thread.sleep(5000);
            System.out.println("End methodA, threadName = " + Thread.currentThread().getName()
                    + ", end Time = " + System.currentTimeMillis());
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public void methodB() {
        try {
            System.out.println("Begin methodB, threadName = " + Thread.currentThread().getName()
                    + ", begin time = " + System.currentTimeMillis());
            Thread.sleep(5000);
            System.out.println("End methodB, threadName = " + Thread.currentThread().getName());
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

One thread calls the synchronized method and another thread calls the non-synchronized method:

public class MyThread14_0 extends Thread {
    private ThreadDomain14_0 td;

    public MyThread14_0(ThreadDomain14_0 td) {
        this.td = td;
    }

    public void run() {
        td.methodA();
    }
}

public class MyThread14_1 extends Thread {
    private ThreadDomain14_0 td;

    public MyThread14_1(ThreadDomain14_0 td) {
        this.td = td;
    }

    public void run() {
        td.methodB();
    }
}

Write a main function to call both threads:

public static void main(String[] args)
{
    ThreadDomain14_0 td = new ThreadDomain14_0();
    MyThread14_0 mt0 = new MyThread14_0(td);
    mt0.setName("A");
    MyThread14_1 mt1 = new MyThread14_1(td);
    mt1.setName("B");
    mt0.start();
    mt1.start();
}

Take a look at the results:

Begin methodA, threadName = A
Begin methodB, threadName = B, begin time = 1443697780869
End methodB, threadName = B
End methodA, threadName = A, end Time = 1443697785871

While the first thread is executing methodA(), the second thread can still call methodB() without waiting. If methodB() is also declared synchronized, the output becomes:

Begin methodA, threadName = A
End methodA, threadName = A, end Time = 1443697913156
Begin methodB, threadName = B, begin time = 1443697913156
End methodB, threadName = B

Two important conclusions can be drawn from this example:

  • While thread A holds the lock of an object, thread B can still asynchronously invoke the non-synchronized methods of that object.
  • While thread A holds the lock of an object, thread B has to wait if it calls a synchronized method of that object.

ReentrantLock

In Java multithreading, the synchronized keyword can be used to achieve mutual exclusion between threads, but JDK 1.5 added the ReentrantLock class, which achieves the same effect and offers more powerful extended functionality, such as the ability to poll for the lock and multi-way condition notification. It is also more flexible to use than synchronized.

First, a few questions: what are the three key properties of concurrent operations? When do we need locks? Why use ReentrantLock when synchronized is already available in Java? And how does AQS work?

Thread safety is characterized by atomicity, visibility, and ordering: operations that satisfy all three are safe. Examples are the classes in the atomic package and volatile, and thread safety can also be reasoned about with the happens-before rules, which mainly exist to constrain JVM instruction reordering. For example, if multiple threads modify shared configuration information, the modification needs a lock; when multiple threads modify a variable, the variable needs either a lock or read/write separation, for which ReentrantReadWriteLock is recommended. In essence, a lock turns a parallel problem into a serial one. So how does it work? Lock state is mostly mutually exclusive, with some exceptions: in ReentrantReadWriteLock, reads are not mutually exclusive with other reads, and a reentrant lock is not mutually exclusive with re-acquisition by the thread that already holds it. ReentrantLock is used because it is suitable for intensive concurrent scenarios and has been optimized for them; of course synchronized has also been optimized since JDK 1.6, being divided into biased locks, lightweight locks, and heavyweight locks.
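Before looking at the AQS internals below, here is a minimal sketch of basic ReentrantLock usage (names are made up): unlock() in a finally block is the essential idiom, and tryLock() shows the lock-polling style that synchronized cannot express.

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class LockCounter {
    private final ReentrantLock lock = new ReentrantLock();
    private int count = 0;

    public void increment() {
        lock.lock();                 // blocks until the lock is acquired
        try {
            count++;
        } finally {
            lock.unlock();           // always release in finally
        }
    }

    public boolean tryIncrement() throws InterruptedException {
        // Poll for the lock instead of blocking indefinitely.
        if (lock.tryLock(100, TimeUnit.MILLISECONDS)) {
            try {
                count++;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false;                // could not get the lock in time
    }
}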

ReentrantLock is built on AQS (AbstractQueuedSynchronizer) and is a reentrant lock that implements the Lock interface. Its strength lies in the refinement of locking: it provides both a fair lock and an unfair lock, i.e. queued acquisition versus preemptive acquisition. When the lock is requested, a successful acquisition returns immediately; otherwise the thread is wrapped as a node and appended to the queue via addWaiter, where it blocks and waits for the preceding thread to finish before acquiring the lock. Condition works with await and signal and likewise keeps waiting threads in a queue from which they are woken. Calling Condition's await() method (or any method beginning with await) puts the current thread into the wait queue and releases the lock, while the thread's state changes to waiting. When await() returns, the current thread has already re-acquired the lock associated with the Condition. Calling Condition's signal() method wakes up the node that has waited the longest in the wait queue (the first node); before waking it, the node is moved to the synchronization queue.

  • 1.AQS data structure and variables
// Node data structure: a FIFO doubly linked list; each node has two pointers,
// one to its predecessor and one to its successor.
// A thread that fails to grab the lock is wrapped as a Node and appended to the AQS
// queue; when the thread holding the lock releases it, a waiting node is woken up.
static final class Node {
    // waitStatus has 5 values: CANCELLED = 1, SIGNAL = -1, CONDITION = -2,
    // PROPAGATE = -3, and 0 (the initial state).
    /** Marker for a node waiting in shared mode */
    static final Node SHARED = new Node();
    /** Marker for a node waiting in exclusive mode */
    static final Node EXCLUSIVE = null;

    /** The thread has cancelled waiting (timed out or was interrupted) */
    static final int CANCELLED =  1;
    /** The successor's thread needs unparking: as soon as this node releases the lock, wake the next node */
    static final int SIGNAL    = -1;
    /** The node is waiting on a Condition (similar to synchronized wait/notify);
        it is woken only when the condition is signalled */
    static final int CONDITION = -2;
    /** waitStatus value indicating that the next acquireShared should propagate unconditionally */
    static final int PROPAGATE = -3;

    volatile int waitStatus;
    volatile Node prev;      // predecessor node
    volatile Node next;      // successor node
    volatile Thread thread;  // the thread wrapped by this node
    Node nextWaiter;         // next node waiting on a Condition, or the SHARED marker

    final boolean isShared() {
        return nextWaiter == SHARED;
    }

    final Node predecessor() throws NullPointerException {
        Node p = prev;
        if (p == null)
            throw new NullPointerException();
        else
            return p;
    }

    Node() {    // Used to establish the initial head or the SHARED marker
    }

    Node(Thread thread, Node mode) {     // Used by addWaiter
        this.nextWaiter = mode;
        this.thread = thread;
    }

    Node(Thread thread, int waitStatus) { // Used by Condition
        this.waitStatus = waitStatus;
        this.thread = thread;
    }
}

private transient volatile Node head;  // head of the queue
private transient volatile Node tail;  // tail of the queue
// The synchronization state, updated with CAS: 0 means unlocked, > 0 means a thread
// holds the lock (and, for a reentrant lock, how many times it has re-entered it).
private volatile int state;
  • 2. Method to obtain the lock
// ReentrantLock has a fair and an unfair implementation; NonfairSync is the unfair one.
static final class NonfairSync extends Sync {
    // The lock is exclusive (like synchronized).
    // First try to CAS state from 0 to 1: if the CAS succeeds, the lock is acquired
    // and the owner thread is recorded; otherwise fall back to acquire(1).
    // CAS goes through unsafe.compareAndSwapInt(this, stateOffset, expect, update).
    // state == 0 means unlocked; state > 0 means a thread holds the lock.
    // ReentrantLock is reentrant, so if the same thread acquires the lock again the state
    // is incremented (e.g. re-entering 5 times makes state 5), and the lock must be
    // released the same number of times before other threads can acquire it.
    final void lock() {
        if (compareAndSetState(0, 1))
            setExclusiveOwnerThread(Thread.currentThread());
        else
            acquire(1);
    }

    // Try to acquire the lock; returns true on success, false on failure.
    protected final boolean tryAcquire(int acquires) {
        return nonfairTryAcquire(acquires);
    }
}

// CAS on the state field; delegates to Unsafe.
protected final boolean compareAndSetState(int expect, int update) {
    return unsafe.compareAndSwapInt(this, stateOffset, expect, update);
}

public final native boolean compareAndSwapInt(Object var1, long var2, int var4, int var5);

// If tryAcquire succeeds, acquire returns immediately.
// Otherwise addWaiter wraps the current thread as a Node and appends it to the tail of
// the AQS queue, and acquireQueued lets the node wait until it can get the lock.
public final void acquire(int arg) {
    if (!tryAcquire(arg) &&
        acquireQueued(addWaiter(Node.EXCLUSIVE), arg))
        selfInterrupt();
}

protected boolean tryAcquire(int arg) {
    throw new UnsupportedOperationException();
}

// The node added by addWaiter competes for the lock in acquireQueued.
final boolean acquireQueued(final Node node, int arg) {
    boolean failed = true;
    try {
        boolean interrupted = false;
        for (;;) {  // spin
            // Get the predecessor of the current node; only the node right behind
            // the head may try to acquire the lock.
            final Node p = node.predecessor();
            if (p == head && tryAcquire(arg)) {
                setHead(node);      // the node becomes the new head
                p.next = null;      // unlink the old head to help GC
                failed = false;
                return interrupted;
            }
            // The previous thread has not released the lock yet, so tryAcquire returned
            // false; decide whether the current thread should park, then park it.
            if (shouldParkAfterFailedAcquire(p, node) &&
                parkAndCheckInterrupt())
                interrupted = true;
        }
    } finally {
        if (failed)
            cancelAcquire(node);
    }
}

// Called after a failed acquisition attempt: look at the predecessor's waitStatus
// (one of the 5 states listed in Node above) to decide whether to park.
private static boolean shouldParkAfterFailedAcquire(Node pred, Node node) {
    int ws = pred.waitStatus;
    if (ws == Node.SIGNAL)
        // The predecessor has already promised to signal this node when it releases
        // the lock, so the current thread can safely park.
        return true;
    if (ws > 0) {
        // The predecessor was CANCELLED (ws == 1): skip over cancelled nodes and retry.
        do {
            node.prev = pred = pred.prev;
        } while (pred.waitStatus > 0);
        pred.next = node;
    } else {
        // ws is 0 or PROPAGATE (initial / runnable state): set it to SIGNAL so the
        // predecessor will signal us, but do not park yet; the caller retries first.
        compareAndSetWaitStatus(pred, ws, Node.SIGNAL);
    }
    return false;
}

private final boolean parkAndCheckInterrupt() {
    // Suspend the current thread (WAITING state); park waits for a permit,
    // and unpark grants the permit.
    LockSupport.park(this);
    // Returns the interrupt flag and clears it; if the thread was interrupted while
    // parked, acquire() later calls selfInterrupt() to restore the flag.
    return Thread.interrupted();
}

public static void park(Object blocker) {
    Thread t = Thread.currentThread();
    setBlocker(t, blocker);
    UNSAFE.park(false, 0L);
    setBlocker(t, null);
}
  • 3. Release the lock
public void unlock() {
    sync.release(1);
}

public final boolean release(int arg) {
    // tryRelease decrements state; it returns true when the lock is fully released.
    if (tryRelease(arg)) {
        // Get the head node of the AQS queue
        Node h = head;
        // If it exists and has a non-zero waitStatus, wake up its successor
        if (h != null && h.waitStatus != 0)
            unparkSuccessor(h);
        return true;
    }
    return false;
}

protected boolean tryRelease(int arg) {
    throw new UnsupportedOperationException();
}

// Wake up the node's successor, if one exists.
private void unparkSuccessor(Node node) {
    // If status is negative (i.e. possibly needing a signal), try to clear it in
    // anticipation of signalling; it is OK if this fails or the status changes.
    int ws = node.waitStatus;
    if (ws < 0)
        compareAndSetWaitStatus(node, ws, 0);

    // The thread to unpark is normally held in the next node; but if that node is
    // cancelled or null, traverse backwards from the tail to find the nearest
    // non-cancelled node (waitStatus <= 0).
    Node s = node.next;
    if (s == null || s.waitStatus > 0) {
        s = null;
        for (Node t = tail; t != null && t != node; t = t.prev)
            if (t.waitStatus <= 0)
                s = t;
    }
    // If a successor was found, wake up its thread.
    if (s != null)
        LockSupport.unpark(s.thread);
}

private static final boolean compareAndSetWaitStatus(Node node, int expect, int update) {
    return unsafe.compareAndSwapInt(node, waitStatusOffset, expect, update);
}

Condition

The synchronized keyword combined with wait() and notify()/notifyAll() implements the wait/notify pattern; the ReentrantLock class can do the same, but with the help of Condition objects. Condition, introduced in JDK 5, provides greater flexibility, such as multi-way notification: multiple Condition instances (object monitors) can be created within one Lock object, and thread objects can register with a chosen Condition, so threads can be notified selectively and scheduling becomes more flexible.

When notify()/notifyAll() is used for thread notification, the JVM chooses the notified thread at random. Using ReentrantLock together with the Condition class, however, enables the "selective notification" described above, which is very important and which Condition provides by default.

Synchronized is equivalent to a Lock with a single Condition object on which all waiting threads are registered. Calling notifyAll() then has to notify all waiting threads with no way to choose among them, which can cause considerable efficiency problems.

Correct use of Condition to implement wait/notify

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class MyService {
    private Lock lock = new ReentrantLock();
    public Condition condition = lock.newCondition();

    public void await() {
        try {
            lock.lock();
            System.out.println("await time: " + System.currentTimeMillis());
            condition.await();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            lock.unlock();
        }
    }

    public void signal() {
        try {
            lock.lock();
            System.out.println("signal time: " + System.currentTimeMillis());
            condition.signal();
        } finally {
            lock.unlock();
        }
    }
}

public class ThreadA extends Thread {
    private MyService service;

    public ThreadA(MyService service) {
        super();
        this.service = service;
    }

    @Override
    public void run() {
        service.await();
    }
}

public class TestRun {
    public static void main(String[] args) throws InterruptedException {
        MyService service = new MyService();
        ThreadA threadA = new ThreadA(service);
        threadA.start();
        Thread.sleep(3000);
        service.signal();
    }
}

The wait/notify mode was successfully implemented

The wait() method of Object is equivalent to the await() method of Condition; the notify() method of Object is equivalent to the signal() method of Condition; the notifyAll() method of Object is equivalent to the signalAll() method of Condition.

Multiple conditions are used to notify partial threads

Implementing producer/consumer: one-to-one alternating printing
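A minimal sketch of the one-to-one case (names are made up): one producer and one consumer alternate strictly, coordinated by a ReentrantLock, a single Condition, and a flag. Two threads calling produce() and consume() in loops will print the two lines alternately.

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class AlternatePrinter {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition condition = lock.newCondition();
    private boolean hasValue = false;   // true: waiting for the consumer; false: waiting for the producer

    public void produce() throws InterruptedException {
        lock.lock();
        try {
            while (hasValue) {          // value not consumed yet, so wait
                condition.await();
            }
            System.out.println("produce");
            hasValue = true;
            condition.signal();         // wake up the consumer
        } finally {
            lock.unlock();
        }
    }

    public void consume() throws InterruptedException {
        lock.lock();
        try {
            while (!hasValue) {         // nothing to consume yet, so wait
                condition.await();
            }
            System.out.println("consume");
            hasValue = false;
            condition.signal();         // wake up the producer
        } finally {
            lock.unlock();
        }
    }
}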

Implementing producer/consumer: many-to-many alternating printing

Fair and unfair locks

Locks are divided into fair locks and unfair locks. With a fair lock, threads acquire the lock in the order in which they requested it, i.e. first come, first served (FIFO). An unfair lock is acquired by preemption, so the order is random: unlike a fair lock, the thread that asked first does not necessarily get the lock first, which means some threads may never get the lock at all. That is what makes it unfair.
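Choosing between the two is just a constructor argument; a minimal sketch:

import java.util.concurrent.locks.ReentrantLock;

public class LockFairnessDemo {
    // Unfair lock (the default): a newly arriving thread may "barge in" and grab the
    // lock ahead of queued threads, which usually gives higher throughput.
    private final ReentrantLock unfairLock = new ReentrantLock();

    // Fair lock: threads acquire the lock in roughly FIFO order, at the cost of
    // extra context switches.
    private final ReentrantLock fairLock = new ReentrantLock(true);
}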

Using the ReentrantReadWriteLock class

The ReentrantLock class is completely mutually exclusive: only one thread at a time executes the task after the ReentrantLock.lock() call. This guarantees thread safety for instance variables but is very inefficient. The JDK therefore provides the read-write lock class ReentrantReadWriteLock to speed things up: in methods that only read instance variables without modifying them, ReentrantReadWriteLock can be used to improve the running speed of the code.

Read/write locks: one lock is for read operations, also called the shared lock; the other is for write operations, also called the exclusive lock. Multiple read locks are not mutually exclusive with each other; a read lock and a write lock are mutually exclusive, and write locks are mutually exclusive with each other. When no thread is writing data, multiple reading threads can all acquire the read lock, while a writing thread can modify data only after acquiring the write lock. That is, multiple threads can read at the same time, but only one thread is allowed to write at a time.

The ReentrantReadWriteLock class: read-read sharing

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class Service {
    private ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public void read() {
        try {
            try {
                lock.readLock().lock();
                System.out.println("got the read lock " + Thread.currentThread().getName()
                        + " " + System.currentTimeMillis());
                Thread.sleep(10000);
            } finally {
                lock.readLock().unlock();
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
}

The ReentrantReadWriteLock class: write-write mutual exclusion
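A sketch of write-write mutual exclusion, under the same assumptions as the read example above (names are made up): because the write lock is exclusive, two threads calling write() run strictly one after the other, each holding the lock for the full 10 seconds.

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class WriteService {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    public void write() {
        try {
            lock.writeLock().lock();   // exclusive: only one writer at a time
            System.out.println("got the write lock " + Thread.currentThread().getName()
                    + " " + System.currentTimeMillis());
            Thread.sleep(10000);
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            lock.writeLock().unlock();
        }
    }
}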

The ReentrantReadWriteLock class: read-write mutual exclusion

The ReentrantReadWriteLock class: write-read mutual exclusion

Blocking: in the await() method, after the thread releases the lock, the current thread stays blocked as long as its node is not yet in the AQS synchronization queue; once the node is in the synchronization queue, it spins and tries to reacquire the lock.

Waking up: after signal(), the node is moved from the condition queue to the AQS synchronization queue and then goes through the normal lock-acquisition process.

await:

public final void await() throws InterruptedException {
    if (Thread.interrupted())
        throw new InterruptedException();
    // Wrap the current thread as a node and add it to the condition queue
    Node node = addConditionWaiter();
    // Fully release the lock held by the current thread and remember the saved state
    int savedState = fullyRelease(node);
    int interruptMode = 0;
    // While the node is not yet on the synchronization queue (i.e. it has not been
    // signalled), keep the current thread blocked.
    while (!isOnSyncQueue(node)) {
        LockSupport.park(this);
        // After waking up, check whether the thread was interrupted while waiting;
        // if not, loop again and re-check isOnSyncQueue.
        if ((interruptMode = checkInterruptWhileWaiting(node)) != 0)
            break;
    }
    // Once woken up, try to reacquire the lock; acquireQueued reports whether the
    // thread was interrupted while reacquiring, in which case mark REINTERRUPT.
    if (acquireQueued(node, savedState) && interruptMode != THROW_IE)
        interruptMode = REINTERRUPT;
    // Clean up cancelled waiters left in the condition queue
    if (node.nextWaiter != null)
        unlinkCancelledWaiters();
    // If the thread was interrupted, either throw an exception or re-interrupt,
    // depending on interruptMode; otherwise do nothing.
    if (interruptMode != 0)
        reportInterruptAfterWait(interruptMode);
}

signal

public final void signal() {
    // First check that the current thread holds the lock
    if (!isHeldExclusively())
        throw new IllegalMonitorStateException();
    // Get the first node in the condition queue
    Node first = firstWaiter;
    if (first != null)
        doSignal(first);
}

private void doSignal(Node first) {
    do {
        // If the first node's successor is null, the condition queue becomes empty
        if ((firstWaiter = first.nextWaiter) == null)
            lastWaiter = null;
        first.nextWaiter = null;
    } while (!transferForSignal(first) &&
             (first = firstWaiter) != null);
}

final boolean transferForSignal(Node node) {
    // If the waitStatus cannot be changed from CONDITION to 0, the node was cancelled.
    if (!compareAndSetWaitStatus(node, Node.CONDITION, 0))
        return false;
    // Move the node to the synchronization queue; enq returns its predecessor there.
    Node p = enq(node);
    int ws = p.waitStatus;
    // If the predecessor has been cancelled, or setting its status to SIGNAL fails
    // (SIGNAL means "my successor needs to be unparked"), wake up the node's thread directly.
    if (ws > 0 || !compareAndSetWaitStatus(p, ws, Node.SIGNAL))
        LockSupport.unpark(node.thread);
    return true;
}

Why thread pools?

Thread pools provide a way to limit and manage resources (including the threads used to execute tasks). Each thread pool also maintains some basic statistics, such as the number of completed tasks.

Here are some of the benefits of using thread pools, borrowed from The Art of Concurrent Programming in Java:

  • Reduce resource consumption. Reduce the cost of thread creation and destruction by reusing created threads.
  • Improve response speed. When a task arrives, it can be executed immediately without waiting for the thread to be created.
  • Improve thread manageability. Threads are scarce resources. If they are created without limit, they will not only consume system resources, but also reduce system stability. Thread pools can be used for unified allocation, tuning, and monitoring.

Use of thread pools

Implicit coupling between tasks and execution policies

  • Dependent task
  • Tasks that use a thread-blocking mechanism
  • Response time sensitive tasks
  • Tasks that use ThreadLocal

Thread starvation deadlock (pending)
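A minimal sketch of a thread starvation deadlock (names are made up): a task running in a single-thread pool submits a second task to the same pool and waits for its result; the second task can never start because the pool's only thread is busy waiting for it, so both wait forever.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class StarvationDeadlockDemo {
    private static final ExecutorService pool = Executors.newSingleThreadExecutor();

    public static void main(String[] args) {
        pool.submit(() -> {
            // The inner task is queued behind the outer one, but the pool's only
            // thread is busy right here waiting for it: starvation deadlock.
            Future<String> inner = pool.submit(() -> "done");
            return inner.get();   // blocks forever
        });
    }
}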

Tasks that take longer to run

Setting the size of the thread pool

The ideal size of a thread pool depends on the type of tasks being submitted and the characteristics of the system it is deployed on. The pool size usually should not be hard-coded; it should come from a configuration mechanism or be computed dynamically from Runtime.getRuntime().availableProcessors().
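A rough sizing sketch based on the common rule of thumb (CPU-bound tasks: about Ncpu + 1 threads; IO-bound tasks: roughly Ncpu * (1 + wait time / compute time)); the ratio passed in is an assumed example value, not a measurement.

public class PoolSizing {
    public static int cpuBoundPoolSize() {
        int nCpu = Runtime.getRuntime().availableProcessors();
        return nCpu + 1;                        // keep every core busy, plus one spare
    }

    public static int ioBoundPoolSize(double waitToComputeRatio) {
        int nCpu = Runtime.getRuntime().availableProcessors();
        // e.g. with waitToComputeRatio = 9 (90% of the time waiting), 8 cores -> 80 threads
        return (int) (nCpu * (1 + waitToComputeRatio));
    }
}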

Managing queued tasks

What kind of thread pools does Java provide? What are their respective usage scenarios?

Java provides the following four main thread pools

  • FixedThreadPool: This method returns a thread pool with a fixed number of threads. The number of threads in this thread pool is always the same. When a new task is submitted, it is executed immediately if there are idle threads in the thread pool. If no, the new task is temporarily stored in a task queue. When a thread is idle, the task in the task queue will be processed.
  • SingleThreadExecutor: this method returns a thread pool with only one thread. If additional tasks are submitted to the pool, they are stored in a task queue and executed in first-in, first-out order when the thread becomes idle.
  • CachedThreadPool: This method returns a thread pool that can adjust the number of threads as needed. The number of threads in the thread pool is uncertain, but if there are free threads that can be reused, reusable threads are preferred. If all threads are working and a new task is submitted, a new thread is created to process the task. All threads will return to the thread pool for reuse after completing the current task.
  • ScheduledThreadPoolExecutor: mainly used to run tasks after a given delay or to execute tasks periodically. It comes in two kinds: ScheduledThreadPoolExecutor (multiple threads) and SingleThreadScheduledExecutor (a single thread).

This section describes the application scenarios of various thread pools

FixedThreadPool: applies to scenarios where the number of threads needs to be limited to meet resource-management requirements; suitable for heavily loaded servers;

SingleThreadExecutor: suitable for scenarios where tasks must be executed sequentially and no more than one thread is active at any point in time;

CachedThreadPool: suitable for small programs that execute many short-lived asynchronous tasks, or for lightly loaded servers;

ScheduledThreadPoolExecutor: suitable for scenarios that require multiple background threads to execute periodic tasks while limiting the number of background threads to meet resource-management requirements;

SingleThreadScheduledExecutor: suitable for scenarios that need a single background thread to execute periodic tasks while guaranteeing that the tasks are executed sequentially.

Ways to create a thread pool

(1) Create by using Executors

Thread pools can be created conveniently through the Executors utility class. However, it is generally recommended not to create thread pools through Executors but through ThreadPoolExecutor instead: this way developers are clearer about the thread pool's running rules and avoid the risk of resource exhaustion.

The drawbacks of the thread pool objects returned by Executors are as follows:

  • FixedThreadPool and SingleThreadExecutor: the allowed length of the request queue is Integer.MAX_VALUE, which may let a large number of requests pile up and cause an OOM.
  • CachedThreadPool and ScheduledThreadPool: the number of threads allowed to be created is Integer.MAX_VALUE, which may create a large number of threads and cause an OOM.

(2) Create through the ThreadPoolExecutor constructor

We can create our own thread pool by calling the ThreadPoolExecutor constructor directly and specify the capacity of the BlockingQueue when creating it. The following is an example:

private static ExecutorService executor = new ThreadPoolExecutor(13, 13,
        60L, TimeUnit.SECONDS,
        new ArrayBlockingQueue(13));

In this case, once the number of submitted tasks exceeds what the pool and its queue can currently handle, Java throws java.util.concurrent.RejectedExecutionException, because the thread pool uses a bounded queue and cannot keep accepting new requests once the queue is full. Still, an explicit exception is better than an error such as an OOM.
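If throwing is not acceptable, one option (a sketch, not from the original text) is to pass a different saturation policy, for example CallerRunsPolicy, which makes the submitting thread run the task itself and thereby throttles submissions instead of rejecting them:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ThrottlingPoolDemo {
    private static final ExecutorService executor = new ThreadPoolExecutor(
            13, 13,
            60L, TimeUnit.SECONDS,
            new ArrayBlockingQueue<Runnable>(13),
            // When the pool and queue are full, run the task in the caller's thread
            // instead of throwing RejectedExecutionException.
            new ThreadPoolExecutor.CallerRunsPolicy());
}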

(3) Use open source class libraries

Hollis noted this earlier in his article: "In addition to defining ThreadPoolExecutor yourself, there are other ways; open-source libraries such as Apache and Guava are the first that come to mind." He recommends using Guava's ThreadFactoryBuilder to create thread pools. Here is an example of his code:

public class ExecutorsDemo {
    private static ThreadFactory namedThreadFactory = new ThreadFactoryBuilder()
            .setNameFormat("demo-pool-%d").build();

    private static ExecutorService pool = new ThreadPoolExecutor(5, 200,
            0L, TimeUnit.MILLISECONDS,
            new LinkedBlockingQueue<Runnable>(1024), namedThreadFactory,
            new ThreadPoolExecutor.AbortPolicy());

    public static void main(String[] args) {
        for (int i = 0; i < Integer.MAX_VALUE; i++) {
            pool.execute(new SubThread());
        }
    }
}

When creating threads in the above way, you can not only avoid OOM problems, but also customize the thread name, which is more convenient to trace to the source when errors occur.

Reference:

  • Github.com/Snailclimb/…
  • Analysis and use of Java thread pools: www.sohu.com/a/116327822…
  • Juejin. Im/post / 684490…
  • Java concurrent programming practice
  • The beauty of Concurrent programming in Java
  • Reference: learning cloud.tencent.com/developer/a ReentranLock source code…
  • Assault concurrent programming JUC series package – word decryption cloud.tencent.com/developer/a JUC interview questions…