Disclaimer: This article is my personal summary of, and supplement to, the post “Java development post high frequency interview question full analysis” by the blogger “I am the flower of the motherland”!

Preface

The links below are top-quality material from the experts. Highly recommended, orz! As a newbie 🐔 I have only just gotten started…

  • Thread pool parameters
  • Know the four rejection strategies for thread pools?
  • JAVA common blocking queue details
  • Six common thread pools in Java
  • Analysis and Use of Java thread pools
  • How does a thread pool reuse idle threads to perform tasks? Direct source! Explains the thread pool workflow in detail.

Advantages of thread pools

There are three benefits to using thread pools properly:

  1. Reduce resource consumption. Reduce thread creation and destruction costs by reusing created threads;
  2. Improve response speed. When a task arrives, it can be executed immediately without waiting for the thread to be created.
  3. Improve thread manageability. Threads are scarce resources. If they are created without limit, they will not only consume system resources, but also reduce system stability. Thread pools can be used for unified allocation, tuning, and monitoring.

Creation of a thread pool

We can create a thread pool through java.util.concurrent.ThreadPoolExecutor:

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler) 

The main parameters of the thread pool:

  • corePoolSize: the number of core threads, i.e., the threads that normally carry out the work. Once created, these threads are not destroyed; they stay resident in the pool.
  • maximumPoolSize: the maximum number of threads, which complements the core thread count and indicates the largest number of threads the pool is allowed to create. For example, when a burst of tasks has used up all the core threads, additional threads are created, but the total number of threads in the pool never exceeds this maximum.
  • keepAliveTime / unit: the idle lifetime of the threads beyond the core thread count. Core threads are never reclaimed, but the extra threads above the core count are destroyed once they have been idle for this long. The idle time can also be changed with setKeepAliveTime.
  • workQueue: holds the tasks waiting to be executed. When all core threads are busy, new tasks are queued here; only when the queue is full and tasks keep arriving are additional threads created.
  • threadFactory: the factory that produces the threads which execute the tasks. We can use the default factory, whose threads all belong to the same group, share the same priority, and are not daemon threads; or we can supply a custom factory, typically chosen per business scenario.
  • handler: the task rejection policy, which applies in two cases. The first is after we call shutdown to close the pool: even though unfinished tasks may remain in it, any further submissions are rejected because the pool has been closed. The second is when the maximum number of threads has been reached and the pool has no capacity left to process newly submitted tasks, so they are rejected.
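
To make the parameters concrete, here is a minimal sketch that wires all seven of them together; the pool sizes, queue capacity, and thread-name prefix are made-up values chosen only for illustration.

import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolCreationDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                     // corePoolSize: resident threads
                4,                                     // maximumPoolSize: upper bound on threads
                60L, TimeUnit.SECONDS,                 // keepAliveTime + unit for non-core threads
                new ArrayBlockingQueue<>(10),          // workQueue: bounded queue of waiting tasks
                new ThreadFactory() {                  // threadFactory: names threads for easier debugging
                    private final AtomicInteger n = new AtomicInteger();
                    @Override
                    public Thread newThread(Runnable r) {
                        return new Thread(r, "demo-pool-" + n.incrementAndGet());
                    }
                },
                new ThreadPoolExecutor.AbortPolicy()); // handler: the default rejection policy

        pool.execute(() -> System.out.println(Thread.currentThread().getName() + " running"));
        pool.shutdown();
    }
}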

Common BlockingQueue implementations

  • ArrayBlockingQueue: a bounded blocking queue based on arrays, sorting elements according to FIFO;
  • LinkedBlockingQueue: a blocking queue based on a linked list that sorts elements in FIFO order (its default capacity is Integer.MAX_VALUE, which is so large that it is almost never reached, so it can be treated as an unbounded queue);
  • SynchronousQueue: a blocking queue that does not store elements. Each insert operation must wait until another thread calls the remove operation, otherwise the insert operation remains blocked;
  • PriorityBlockingQueue: an unbounded blocking queue with priority.
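
For reference, a small sketch of how the four queues above are typically constructed; the capacities are arbitrary example values.

import java.util.concurrent.*;

public class QueueExamples {
    public static void main(String[] args) {
        BlockingQueue<Runnable> array    = new ArrayBlockingQueue<>(100);   // bounded, FIFO
        BlockingQueue<Runnable> linked   = new LinkedBlockingQueue<>();     // capacity defaults to Integer.MAX_VALUE
        BlockingQueue<Runnable> handoff  = new SynchronousQueue<>();        // stores nothing; hands tasks off directly
        BlockingQueue<Runnable> priority = new PriorityBlockingQueue<>();   // unbounded, orders elements by priority
        System.out.println(array.remainingCapacity() + " / " + linked.remainingCapacity());
    }
}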

Thread pool rejection policy

A thread pool rejection policy is the mechanism used when the thread pool can no longer accept tasks. There are two situations in which a thread pool rejects tasks:

  1. When we shut the thread pool down normally, for example by calling shutdown, any further task submissions are rejected because the pool has been closed, even though unfinished tasks may still remain in it.
  2. The thread pool is saturated and unable to process newly submitted tasks, for example when the queue is full and the maximum number of threads has been reached.
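
A minimal sketch of the first case, assuming the default AbortPolicy: submitting a task after shutdown throws a RejectedExecutionException.

import java.util.concurrent.*;

public class RejectAfterShutdownDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.execute(() -> System.out.println("task before shutdown"));
        pool.shutdown();                      // the pool no longer accepts new tasks
        try {
            pool.execute(() -> System.out.println("task after shutdown"));
        } catch (RejectedExecutionException e) {
            System.out.println("rejected: the pool is already shut down");
        }
    }
}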

Rejection policies

  • AbortPolicy: when a task submitted to the thread pool is rejected, a RejectedExecutionException is thrown (this is the default policy);
  • CallerRunsPolicy: when a task submitted to the thread pool is rejected, the rejected task is executed in the caller's own thread, i.e., the thread that submitted it;
  • DiscardOldestPolicy: when a task submitted to the thread pool is rejected, the pool discards the oldest unprocessed task in the wait queue and then tries to add the rejected task to the queue again;
  • DiscardPolicy: when a task submitted to the thread pool is rejected, the pool silently discards the rejected task.
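
As an illustration, the policy is passed as the last constructor argument, and a custom handler is just any RejectedExecutionHandler implementation; the sizes below are made up so that rejection is easy to trigger.

import java.util.concurrent.*;

public class RejectionPolicyDemo {
    public static void main(String[] args) {
        // A custom "log and drop" handler; built-in policies such as
        // new ThreadPoolExecutor.CallerRunsPolicy() can be passed in the same position.
        RejectedExecutionHandler logAndDrop =
                (task, executor) -> System.out.println("rejected: " + task);

        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),      // tiny queue so that rejection happens quickly
                logAndDrop);

        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.execute(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}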

Submit tasks to the thread pool

  1. Use execute. The execute method has no return value, so we cannot tell whether the task was executed successfully by the thread pool.
  2. Use submit. The submit method returns a Future, which can be used to determine whether the task completed successfully. The Future's get() method blocks until the task is done, whereas get(long timeout, TimeUnit unit) blocks only for the given time and then returns, in which case the task may not have finished yet.
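
A small sketch contrasting the two submission styles; the task bodies are placeholders.

import java.util.concurrent.*;

public class SubmitVsExecuteDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // execute: fire-and-forget, no way to inspect the outcome.
        pool.execute(() -> System.out.println("executed without a result"));

        // submit: returns a Future that can be queried.
        Future<Integer> quick = pool.submit(() -> 1 + 1);
        System.out.println("blocking get: " + quick.get());                // waits until the task is done

        Future<Integer> slow = pool.submit(() -> { Thread.sleep(100); return 42; });
        System.out.println("timed get: " + slow.get(1, TimeUnit.SECONDS)); // waits at most one second

        pool.shutdown();
    }
}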

Closing the thread pool

We can shut down a thread pool by calling either its shutdown or its shutdownNow method, but the two work differently:

  • shutdown simply sets the thread pool's state to SHUTDOWN and then interrupts all threads that are not currently executing tasks.
  • shutdownNow first sets the thread pool's state to STOP, then iterates over the worker threads and interrupts them one by one via Thread#interrupt, attempting to stop threads that are executing or waiting on tasks, and returns the list of tasks still waiting to be executed. Tasks that cannot respond to interruption may therefore never terminate.

Calling either of the two shutdown methods makes isShutdown return true. The pool is fully closed only after all of its tasks have finished, at which point isTerminated returns true. Which method to call depends on the nature of the tasks submitted to the pool: shutdown is usually used, and shutdownNow is chosen when the remaining tasks do not necessarily have to finish.
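
A common pattern (my own addition, not from the original article) is to try shutdown first and fall back to shutdownNow if the pool does not terminate within a bounded wait:

import java.util.List;
import java.util.concurrent.*;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        pool.execute(() -> System.out.println("working..."));

        pool.shutdown();                                   // stop accepting new tasks, let queued tasks finish
        if (!pool.awaitTermination(5, TimeUnit.SECONDS)) { // wait a bounded time for completion
            List<Runnable> dropped = pool.shutdownNow();   // interrupt workers, collect tasks never started
            System.out.println("forced shutdown, unstarted tasks: " + dropped.size());
        }
        System.out.println("isTerminated = " + pool.isTerminated());
    }
}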

Process analysis of thread pools
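
Putting the parameters above together, a submitted task is handled in a fixed order: core threads first, then the work queue, then extra threads up to the maximum, and finally the rejection policy. Here is a small sketch that makes this order observable; the sizes and sleep time are made-up values.

import java.util.concurrent.*;

public class WorkflowDemo {
    public static void main(String[] args) {
        // core = 1, max = 2, queue capacity = 1: of 4 tasks submitted at once,
        // task 1 runs on the core thread, task 2 waits in the queue,
        // task 3 triggers a non-core thread, and task 4 is rejected.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy());

        for (int i = 1; i <= 4; i++) {
            final int id = i;
            try {
                pool.execute(() -> {
                    try { Thread.sleep(500); } catch (InterruptedException ignored) { }
                    System.out.println("task " + id + " on " + Thread.currentThread().getName());
                });
            } catch (RejectedExecutionException e) {
                System.out.println("task " + id + " rejected");
            }
        }
        pool.shutdown();
    }
}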

Common thread pools

Common thread pools in Java: FixedThreadPool (the core thread count equals the maximum thread count), SingleThreadExecutor (a pool with a single thread), and CachedThreadPool (zero core threads and a maximum thread count of Integer.MAX_VALUE).

FixedThreadPool

The FixedThreadPool has the same core thread count and maximum thread count. It is created with Executors#newFixedThreadPool(int):

public static ExecutorService newFixedThreadPool(int nThreads) {
    return new ThreadPoolExecutor(nThreads, nThreads,
                                  0L, TimeUnit.MILLISECONDS,
                                  new LinkedBlockingQueue<Runnable>());
}

  • This pool can be thought of as having a fixed number of threads: starting from 0, they are created only while the pool warms up, and once created they are never destroyed; all of them are resident threads.
  • For this type of pool, the third and fourth parameters (the idle thread lifetime) are meaningless, since every thread is resident and never destroyed. When all threads are busy, new tasks join the blocking queue, a linked-list-based LinkedBlockingQueue whose capacity is Integer.MAX_VALUE, making it effectively unbounded.
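
A brief usage sketch; the pool size and tasks are illustrative only.

import java.util.concurrent.*;

public class FixedPoolDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(3); // never more than 3 threads
        for (int i = 0; i < 6; i++) {
            final int id = i;
            // With 6 tasks and 3 threads, the extra tasks wait in the (effectively unbounded) queue.
            pool.execute(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}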

SingleThreadExecutor

The SingleThreadExecutor is characterized by a core thread count and a maximum thread count that are both 1; in other words, it is a single-thread pool. It is created with Executors#newSingleThreadExecutor():

public static ExecutorService newSingleThreadExecutor() {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>()));
}

public static ExecutorService newSingleThreadExecutor(ThreadFactory threadFactory) {
    return new FinalizableDelegatedExecutorService
        (new ThreadPoolExecutor(1, 1,
                                0L, TimeUnit.MILLISECONDS,
                                new LinkedBlockingQueue<Runnable>(),
                                threadFactory));
}
  • In the code above we can see an overload that takes a ThreadFactory parameter. In development we usually pass in a custom thread-creation factory; otherwise the default ThreadFactory is used.
  • It differs from the FixedThreadPool only in that the core thread count and the maximum thread count are both 1, meaning that no matter how many tasks there are, only a single thread ever executes them.
  • If an exception during execution causes the thread to be destroyed, the thread pool creates a new thread to carry out the subsequent tasks.
  • This thread pool is ideal for scenarios where all tasks must be executed serially, in the order they were submitted.
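
A short usage sketch showing the serial, submission-order execution; the tasks are placeholders.

import java.util.concurrent.*;

public class SingleThreadDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        for (int i = 1; i <= 3; i++) {
            final int id = i;
            // All three tasks run on the same thread, strictly in submission order.
            pool.execute(() -> System.out.println("task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();
    }
}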

CachedThreadPool

The CachedThreadPool has a resident core thread count of 0; as its name suggests, all of its threads are created temporarily and cached. Its implementation is as follows:

public static ExecutorService newCachedThreadPool() {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>());
}

public static ExecutorService newCachedThreadPool(ThreadFactory threadFactory) {
    return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                  60L, TimeUnit.SECONDS,
                                  new SynchronousQueue<Runnable>(),
                                  threadFactory);
}
  • As you can see from the code above, the maximum number of threads in the CachedThreadPool is Integer.MAX_VALUE, which means its thread count can grow almost without limit.
  • Since every thread it creates is temporary, all of them can be destroyed. Here an idle thread is reclaimed after 60 seconds, i.e., a thread is destroyed once it has gone 60 seconds without executing a task.
  • Note that it uses a SynchronousQueue as its blocking queue. This queue cannot actually store tasks, since its capacity is zero; it is only responsible for handing tasks off to threads. Because the core thread count is zero, this direct hand-off keeps the pool efficient.
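
A brief usage sketch; within the 60-second window an idle thread is typically reused for a new task.

import java.util.concurrent.*;

public class CachedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newCachedThreadPool();
        pool.execute(() -> System.out.println("first task on " + Thread.currentThread().getName()));
        Thread.sleep(100); // the first thread is now idle but still alive (kept for up to 60 seconds)
        pool.execute(() -> System.out.println("second task on " + Thread.currentThread().getName()));
        // The two printed thread names are usually the same, showing the idle thread being reused.
        pool.shutdown();
    }
}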

Configure thread pools properly

To configure a thread pool properly, first analyze the characteristics of the tasks, which can be done from the following perspectives:

  1. Nature of tasks: CPU intensive tasks, IO intensive tasks and hybrid tasks.
  2. Task priority: high, medium and low.
  3. Task execution time: long, medium and short.
  4. Task dependencies: Whether they depend on other system resources, such as database connections.
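
As an illustration of the first point, a common rule of thumb (my own addition, not stated in the original) is to size CPU-intensive pools close to the number of processors and IO-intensive pools larger, for example twice that number:

import java.util.concurrent.*;

public class PoolSizingSketch {
    public static void main(String[] args) {
        int cpus = Runtime.getRuntime().availableProcessors();

        // CPU-intensive tasks: roughly one thread per core (plus one is a common heuristic).
        ExecutorService cpuBound = Executors.newFixedThreadPool(cpus + 1);

        // IO-intensive tasks: threads spend time blocked, so more of them can help, e.g. 2 * cores.
        ExecutorService ioBound = Executors.newFixedThreadPool(cpus * 2);

        System.out.println("available processors = " + cpus);
        cpuBound.shutdown();
        ioBound.shutdown();
    }
}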

The water is too deep; Lao Wang can't handle it… 😭