I. Background

Java thread pools — how to create them and what their parameters mean — are basic questions that come up frequently in interviews. The more basic the topic, the more a candidate's answer, especially for a senior position, needs to reflect the level of the role. I can't claim to know the single best answer; I can only describe how I would answer, for reference. Here's a question you might get from an interviewer: "Talk about thread pools." For a broad question like this, the answer needs to show structured thinking, which is a must. On top of that, you can show depth, which is a plus.

II. The answer

1. Thread pool design objectives

The mainstream Java thread implementation maps each Java thread to a kernel-level thread, so creating a thread triggers a switch into the operating system's kernel mode. To avoid paying this cost repeatedly, threads should be reused to execute multiple tasks. A thread pool is essentially a thread cache that is responsible for uniformly allocating, tuning, and monitoring threads.

2. Thread pool implementation

The Java thread pool implementations all derive from the Executor interface in JUC (java.util.concurrent). This interface has a single execute method, which represents the pool's core behavior. The ExecutorService interface extends Executor and adds lifecycle-management methods. The common implementations — ThreadPoolExecutor, ScheduledThreadPoolExecutor, and ForkJoinPool — can be instantiated with default arguments directly through the Executors utility class. However, the Alibaba Java development manual recommends against using Executors.
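To make the Alibaba-manual point concrete, here is a minimal sketch contrasting the Executors factory with explicit construction. The pool sizes and the queue capacity of 100 are illustrative assumptions, not recommendations:

```java
import java.util.concurrent.*;

public class ExecutorsCaveat {
    public static void main(String[] args) throws Exception {
        // Executors.newFixedThreadPool(n) is convenient, but its internal
        // LinkedBlockingQueue is unbounded, so a flood of tasks can pile up
        // in memory without any rejection.
        ExecutorService implicit = Executors.newFixedThreadPool(2);

        // Constructing ThreadPoolExecutor directly forces us to choose a
        // bounded queue (capacity 100 here, an arbitrary example) and be
        // aware of the rejection behavior (AbortPolicy is the default).
        ExecutorService explicit = new ThreadPoolExecutor(
                2, 2,                      // core == max: a fixed-size pool
                0L, TimeUnit.MILLISECONDS, // excess threads would die at once
                new ArrayBlockingQueue<>(100),
                Executors.defaultThreadFactory());

        Future<Integer> f = explicit.submit(() -> 21 * 2);
        System.out.println(f.get()); // prints 42

        implicit.shutdown();
        explicit.shutdown();
    }
}
```

Both pools behave the same for light load; the difference only shows up when producers outpace consumers, which is exactly when the unbounded queue becomes a risk.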

So we need to learn for ourselves what ThreadPoolExecutor's initialization parameters mean and how to use them.

/**
 * Creates a new {@code ThreadPoolExecutor} with the given initial
 * parameters and default rejected execution handler.
 *
 * @param corePoolSize the number of threads to keep in the pool, even
 *        if they are idle, unless {@code allowCoreThreadTimeOut} is set
 * @param maximumPoolSize the maximum number of threads to allow in the
 *        pool
 * @param keepAliveTime when the number of threads is greater than
 *        the core, this is the maximum time that excess idle threads
 *        will wait for new tasks before terminating.
 * @param unit the time unit for the {@code keepAliveTime} argument
 * @param workQueue the queue to use for holding tasks before they are
 *        executed.  This queue will hold only the {@code Runnable}
 *        tasks submitted by the {@code execute} method.
 * @param threadFactory the factory to use when the executor
 *        creates a new thread
 * @throws IllegalArgumentException if one of the following holds:<br>
 *         {@code corePoolSize < 0}<br>
 *         {@code keepAliveTime < 0}<br>
 *         {@code maximumPoolSize <= 0}<br>
 *         {@code maximumPoolSize < corePoolSize}
 * @throws NullPointerException if {@code workQueue}
 *         or {@code threadFactory} is null
 */
public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue<Runnable> workQueue,
                          ThreadFactory threadFactory) {
    this(corePoolSize, maximumPoolSize, keepAliveTime, unit, workQueue,
         threadFactory, defaultHandler);
}

This ThreadPoolExecutor constructor takes six arguments. The first is the number of core threads, which are kept alive even when the pool is idle. The second is the maximum number of threads; any thread beyond the core count is reclaimed after the maximum idle lifetime determined by the third and fourth parameters combined. The fifth parameter is the blocking queue, where tasks wait when all core threads are busy. The sixth is the thread factory used to create new worker threads. A seventh parameter (defaulted here to defaultHandler) is the rejection handler, which decides what to do with tasks that can be neither executed nor queued; the default policy is to throw a RejectedExecutionException.
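The interplay of core size, maximum size, queue, and rejection can be demonstrated directly. The tiny sizes below (1 core, 2 max, queue of 1) are hypothetical, chosen only so each stage is visible:

```java
import java.util.concurrent.*;

public class PoolSizing {
    public static void main(String[] args) throws Exception {
        CountDownLatch release = new CountDownLatch(1);
        // Illustrative sizing: 1 core thread, up to 2 threads total,
        // and a queue that holds only 1 waiting task.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 2, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                Executors.defaultThreadFactory());

        Runnable blocker = () -> {
            try { release.await(); } catch (InterruptedException ignored) { }
        };

        pool.execute(blocker); // 1st task: runs on the core thread
        pool.execute(blocker); // 2nd task: waits in the queue
        pool.execute(blocker); // 3rd task: queue full -> a non-core thread starts
        try {
            pool.execute(blocker); // 4th task: pool and queue full -> rejected
        } catch (RejectedExecutionException e) {
            System.out.println("task rejected");
        }

        System.out.println("threads: " + pool.getPoolSize()); // prints "threads: 2"

        release.countDown();
        pool.shutdown();
    }
}
```

Note the order of events: the queue is tried before extra threads are created, and rejection only happens once both the queue and the maximum pool size are exhausted.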

3. Design of the 5 states of the thread pool

ThreadPoolExecutor defines five states: RUNNING (accepts new tasks and processes queued ones), SHUTDOWN (accepts no new tasks but finishes queued ones), STOP (accepts no new tasks and interrupts in-progress ones), TIDYING (all tasks have terminated and the terminated() hook is about to run), and TERMINATED (terminated() has completed).

4. Basic principles of thread pools

Java threads are implemented by calling the operating system's pthread API through native methods, so threads are managed uniformly by the kernel. A class that implements Runnable merely marks a task as runnable by some thread; it is the kernel calls triggered by new Thread() that cause most of the overhead. Thread pool technology caches worker threads so they can be reused.
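The reuse claim is easy to verify: with a single-threaded pool, consecutive tasks report the same worker thread rather than a freshly created one. A minimal sketch:

```java
import java.util.concurrent.*;

public class ThreadReuse {
    public static void main(String[] args) throws Exception {
        // A single-threaded pool: both tasks must run on the same cached worker.
        ExecutorService pool = Executors.newSingleThreadExecutor();

        String first  = pool.submit(() -> Thread.currentThread().getName()).get();
        String second = pool.submit(() -> Thread.currentThread().getName()).get();

        // The worker thread was reused, not recreated.
        System.out.println(first.equals(second)); // prints true

        pool.shutdown();
    }
}
```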

III. Supplementary Q&A

1. Closing the thread pool

A thread pool can be shut down by calling the shutdownNow or shutdown method. shutdownNow: issues interrupt() to all running tasks, stops processing, cancels all tasks that have not yet started, and returns the list of tasks that were still awaiting execution. shutdown: after shutdown is called, the pool accepts no new tasks, but it does not forcibly terminate tasks that are already submitted or executing.
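The difference can be seen by queuing one long-running task and one waiting task, then calling shutdownNow. This sketch uses a 60-second sleep purely as a stand-in for a long task:

```java
import java.util.List;
import java.util.concurrent.*;

public class ShutdownDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        CountDownLatch started = new CountDownLatch(1);

        // A running task that reacts to interrupt().
        pool.execute(() -> {
            started.countDown();
            try {
                Thread.sleep(60_000); // stand-in for long work
            } catch (InterruptedException e) {
                System.out.println("running task interrupted");
            }
        });
        pool.execute(() -> System.out.println("never starts"));

        started.await();
        // shutdownNow interrupts the running task and hands back the queued one.
        List<Runnable> pending = pool.shutdownNow();
        System.out.println("pending tasks: " + pending.size()); // prints "pending tasks: 1"

        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

With plain shutdown instead, the queued task would still run to completion and the returned list would not exist; shutdownNow is the only one of the two that interrupts in-flight work.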

2. What kinds of work queues are there in the thread pool

  • ArrayBlockingQueue

A bounded blocking queue based on an array structure that orders elements FIFO (first in, first out).

  • LinkedBlockingQueue

A linked-list-based blocking queue that orders elements FIFO (first in, first out) and typically has higher throughput than ArrayBlockingQueue. The static factory method Executors.newFixedThreadPool() uses this queue.

  • SynchronousQueue

A blocking queue that stores no elements. Each insert operation must wait until another thread performs a corresponding remove operation, otherwise the insert blocks. Its throughput is usually higher than LinkedBlockingQueue's. The static factory method Executors.newCachedThreadPool() uses this queue.

  • PriorityBlockingQueue

An unbounded blocking queue that orders elements by priority.
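The behavioral differences among these queues can be observed without a pool at all, by calling them directly. A small sketch (capacities chosen arbitrarily for illustration):

```java
import java.util.concurrent.*;

public class QueueChoices {
    public static void main(String[] args) throws Exception {
        // SynchronousQueue has no capacity: offer() fails unless a consumer
        // is already waiting in take().
        SynchronousQueue<String> sync = new SynchronousQueue<>();
        System.out.println(sync.offer("task")); // false: nobody is waiting

        // ArrayBlockingQueue is bounded: offers succeed until capacity is hit.
        BlockingQueue<String> bounded = new ArrayBlockingQueue<>(1);
        System.out.println(bounded.offer("a")); // true
        System.out.println(bounded.offer("b")); // false: queue full

        // PriorityBlockingQueue is unbounded and orders by priority, not FIFO.
        PriorityBlockingQueue<Integer> prio = new PriorityBlockingQueue<>();
        prio.add(3); prio.add(1); prio.add(2);
        System.out.println(prio.take()); // 1: smallest element first
    }
}
```

This is why newCachedThreadPool (SynchronousQueue) spawns a thread per handed-off task while newFixedThreadPool (LinkedBlockingQueue) lets tasks accumulate instead.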

  • ForkJoinPool

Essentially it subdivides a task into ever-smaller subtasks and internally uses a “work-stealing” algorithm so that work is distributed across CPUs as evenly as possible.

  • Each worker thread has its own WorkQueue;
  • This work queue is a double-ended queue (deque) and is private to its thread;
  • Subtasks forked from a ForkJoinTask are pushed onto the head of the deque of the worker thread running the task, and the worker processes the tasks in its own queue in LIFO order;
  • To maximize CPU utilization, idle threads “steal” tasks from other threads’ queues to execute them;
  • Tasks are stolen from the tail of the deque, which reduces contention;
  • Deque operations: push()/pop() are called only by the owning worker thread, while poll() is called when another thread steals a task;
  • When only one task remains in a queue, there is still contention between the owner and a stealer, which is resolved via CAS.
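The fork/push-to-own-deque/steal-from-tail cycle described above is what RecursiveTask drives. A minimal sketch summing a range; the class name SumTask and the split threshold of 1,000 are illustrative choices:

```java
import java.util.concurrent.*;

public class WorkStealingSum {
    // A RecursiveTask splits itself until the range is small enough;
    // the forked halves land on this worker's deque, where idle workers
    // can steal them from the tail.
    static class SumTask extends RecursiveTask<Long> {
        final long from, to;
        SumTask(long from, long to) { this.from = from; this.to = to; }

        @Override protected Long compute() {
            if (to - from <= 1_000) {            // small enough: compute directly
                long s = 0;
                for (long i = from; i < to; i++) s += i;
                return s;
            }
            long mid = (from + to) / 2;
            SumTask left = new SumTask(from, mid);
            left.fork();                          // pushed onto this worker's deque
            long right = new SumTask(mid, to).compute(); // work on the other half
            return left.join() + right;           // join may run or await the stolen half
        }
    }

    public static void main(String[] args) {
        long sum = ForkJoinPool.commonPool().invoke(new SumTask(0, 100_000));
        System.out.println(sum); // prints 4999950000
    }
}
```

Computing one half directly instead of forking both is a common idiom: it keeps the current worker busy while leaving exactly one subtask available for stealing.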

IV. Summary

The answer section of this article explains thread pools from four aspects — design goals, implementation and key parameters, state design, and underlying principles — so it has a certain structure. The supplementary Q&A section is for when the interviewer probes deeper or signals that you should go further. If you found this article helpful, please don't hesitate to give me a thumbs-up; your support is the biggest encouragement for me. If you want more Java fundamentals and interview answers, I have organized my own GitHub repository, the Java Beginner's Training Manual — check it out if you need it.