First, let’s look at the code that creates the four thread pools:

    ExecutorService fixedThreadPool = Executors.newFixedThreadPool(10);
    ExecutorService cachedThreadPool = Executors.newCachedThreadPool();
    ExecutorService scheduledThreadPool = Executors.newScheduledThreadPool(10);
    ExecutorService singleThreadPool = Executors.newSingleThreadExecutor();


All four thread pools are created through factory methods of the Executors class. Looking at the internal implementation of each method in turn, we find that they all end up calling a ThreadPoolExecutor constructor, just with different arguments. Let’s look at the parameter list of ThreadPoolExecutor:

    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory,
                              RejectedExecutionHandler handler) {

It is the differences in these parameters that give the four thread pools their different behaviors. Based on the parameter comments in the source code, their meanings are:

  • corePoolSize: the number of core threads. Core threads stay resident in the pool and are not destroyed even when idle, unless allowCoreThreadTimeOut is set.
  • maximumPoolSize: the maximum number of threads allowed in the pool.
  • keepAliveTime: the maximum idle time for threads beyond the core count (i.e., non-core threads) before they are destroyed. (For example, if the time is 5s, the core count is 2, and there are currently 4 threads, the two threads exceeding the core count are destroyed after being idle for 5 seconds.)
  • unit: the time unit of keepAliveTime.
  • workQueue: the wait queue for tasks.
  • threadFactory: the factory used to create new threads.
  • handler: how to handle a new task when the wait queue is full and the pool has reached its maximum thread count.
    • AbortPolicy (default): throws a RejectedExecutionException.
    • CallerRunsPolicy: the task is executed by the caller’s thread. (If the submitting thread is main, then main runs the task itself.)
    • DiscardOldestPolicy: discards the longest-waiting unprocessed task and then submits the current task. (It calls the queue’s poll() method, which removes the head of the queue, i.e., the oldest unprocessed task.)
    • DiscardPolicy: silently discards the task without throwing an exception.
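
The rejection policies can be demonstrated with a small sketch (the class name RejectionDemo and the helper runThirdTaskWith are made up for this illustration): a pool with one thread and a one-slot queue receives a third task, and the configured handler decides its fate.

```java
import java.util.concurrent.*;

public class RejectionDemo {
    // Submits three tasks to a 1-thread pool with a 1-slot queue and reports
    // what happened to the third one, which must be handled by the policy.
    public static String runThirdTaskWith(RejectedExecutionHandler handler) throws Exception {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1), Executors.defaultThreadFactory(), handler);
        // First task occupies the single worker thread until released.
        pool.execute(() -> { try { release.await(); } catch (InterruptedException ignored) {} });
        // Second task fills the one-slot queue.
        pool.execute(() -> {});
        // Third task cannot be queued or given a new thread, so the handler decides.
        String[] outcome = {"discarded"};
        try {
            pool.execute(() -> outcome[0] = Thread.currentThread().getName());
        } catch (RejectedExecutionException e) {
            outcome[0] = "aborted";
        }
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return outcome[0];
    }

    public static void main(String[] args) throws Exception {
        // CallerRunsPolicy: the submitting thread ("main") runs the task itself.
        System.out.println(runThirdTaskWith(new ThreadPoolExecutor.CallerRunsPolicy()));
        // AbortPolicy (the default): execute() throws RejectedExecutionException.
        System.out.println(runThirdTaskWith(new ThreadPoolExecutor.AbortPolicy()));
    }
}
```

With CallerRunsPolicy the program prints `main`, because the caller thread executes the task synchronously; with AbortPolicy it prints `aborted`.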

How thread pools work:

As tasks keep arriving, if the current thread count is below the core count, a new core thread is created for each task. Once the thread count reaches the core count, new tasks are placed in the wait queue. When the wait queue is full, non-core threads are created, up to the maximum thread count. When the pool has reached its maximum thread count and the wait queue is full, the rejection policy is applied.
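
This growth sequence can be observed directly via getPoolSize() (the class name PoolGrowthDemo and the observeGrowth helper are made up for this sketch). With core = 2, max = 4, and a 2-slot queue, blocked tasks force the pool through each stage:

```java
import java.util.Arrays;
import java.util.concurrent.*;

public class PoolGrowthDemo {
    // Returns the pool size after 2, 4, and 6 blocked submissions.
    public static int[] observeGrowth() throws Exception {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 1L, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2));
        Runnable blocker = () -> { try { release.await(); } catch (InterruptedException ignored) {} };
        int[] sizes = new int[3];
        pool.execute(blocker);
        pool.execute(blocker);
        sizes[0] = pool.getPoolSize(); // 2: core threads are created first
        pool.execute(blocker);
        pool.execute(blocker);
        sizes[1] = pool.getPoolSize(); // still 2: tasks 3 and 4 wait in the queue
        pool.execute(blocker);
        pool.execute(blocker);
        sizes[2] = pool.getPoolSize(); // 4: queue full, so non-core threads are added
        release.countDown();
        pool.shutdown();
        return sizes;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(Arrays.toString(observeGrowth())); // [2, 2, 4]
    }
}
```

A seventh blocked submission here would trigger the rejection policy, since both the queue and the maximum thread count are exhausted.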

Let’s analyze the different thread pools based on the parameters:

FixedThreadPool

    public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }

We can see that the core thread count (corePoolSize) equals the maximum thread count (maximumPoolSize), and keepAliveTime is 0. The workQueue is a LinkedBlockingQueue, a linked-list-based blocking queue that is unbounded by default. So this pool has a fixed number of threads and an unbounded wait queue. We can deduce that it suits scenarios with a steady workload, for example an average of 10 tasks per second with no sharp spikes in the incoming task volume.

Suitable scenario: a small number of large tasks. (Large tasks are slow to process; with too many threads, time is lost to thread context switching, so the thread count is capped at a fixed number.)
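
As a minimal usage sketch (class name FixedPoolDemo and the task itself are invented for illustration), a fixed pool can fan out independent computations and gather the results through Futures:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class FixedPoolDemo {
    // Computes 1^2 + 2^2 + ... + n^2 using a fixed pool of 4 threads.
    public static int sumSquares(int n) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<Future<Integer>> futures = new ArrayList<>();
        for (int i = 1; i <= n; i++) {
            final int x = i;                       // capture the loop value
            futures.add(pool.submit(() -> x * x)); // each square runs as a task
        }
        int total = 0;
        for (Future<Integer> f : futures) {
            total += f.get();                      // blocks until the task finishes
        }
        pool.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumSquares(10)); // 385
    }
}
```

No matter how many tasks are submitted, at most 4 threads run them; the rest simply wait in the unbounded queue.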

CachedThreadPool

    public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

corePoolSize is 0, meaning the pool has no core threads: every thread is reclaimable, so the pool can sometimes be empty. maximumPoolSize is Integer.MAX_VALUE, so the pool can create a practically unlimited number of threads. keepAliveTime is 60 seconds, so idle threads are reclaimed after 60 seconds. The workQueue is a SynchronousQueue, which has no capacity: a submitted task can only be handed off directly to a thread that is already waiting to consume it; otherwise a new thread is created for it. We can deduce that this pool suits scenarios where there are usually few tasks but the task count sometimes spikes.

Suitable scenario: a large number of small tasks. (Each task finishes quickly, so threads rarely get switched out halfway through processing a task.)
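
The hand-off behavior of the SynchronousQueue can be seen in a small sketch (class name CachedPoolDemo and the threadsForBurst helper are invented here): when every submitted task blocks, no worker is ever waiting on the queue, so each submission starts a fresh thread.

```java
import java.util.concurrent.*;

public class CachedPoolDemo {
    // Submits `tasks` blocking tasks at once and returns the resulting pool size.
    public static int threadsForBurst(int tasks) throws Exception {
        ThreadPoolExecutor pool = (ThreadPoolExecutor) Executors.newCachedThreadPool();
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            // Every task blocks, so the SynchronousQueue hand-off always fails
            // and the pool must create a new thread for each submission.
            pool.execute(() -> { try { release.await(); } catch (InterruptedException ignored) {} });
        }
        int size = pool.getPoolSize();
        release.countDown();
        pool.shutdown();
        return size;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(threadsForBurst(5)); // one thread per concurrent task
    }
}
```

Once the burst subsides, each of those threads is reclaimed after 60 seconds of idleness, shrinking the pool back toward empty.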

ScheduledThreadPool

    public ScheduledThreadPoolExecutor(int corePoolSize) {
        super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
              new DelayedWorkQueue());
    }

We can see that the biggest difference in this pool’s parameters is that the workQueue is a DelayedWorkQueue. The queue is a priority heap ordered by remaining delay, from smallest to largest, and the head node is only handed out once its remaining delay has dropped to zero or below. So this pool can run tasks at a specified time. Periodic tasks are also supported: after each run, the task’s delay is advanced and it is re-added to the queue.

Applicable scenario: Scheduled task or periodic task.
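
A quick sketch of the delayed behavior (class name ScheduledDemo and the delayedEcho helper are invented for this example): a task scheduled with a delay sits in the DelayedWorkQueue and is guaranteed not to run before the delay expires.

```java
import java.util.concurrent.*;

public class ScheduledDemo {
    // Schedules a task with the given delay and returns how long it
    // actually waited before running, in milliseconds.
    public static long delayedEcho(long delayMillis) throws Exception {
        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);
        long start = System.nanoTime();
        // The task sits in the DelayedWorkQueue until its delay expires.
        ScheduledFuture<Long> f = pool.schedule(
                () -> System.nanoTime(), delayMillis, TimeUnit.MILLISECONDS);
        long elapsedMillis = (f.get() - start) / 1_000_000;
        pool.shutdown();
        return elapsedMillis;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(delayedEcho(200) >= 200); // never runs earlier than the delay
    }
}
```

For periodic work, scheduleAtFixedRate and scheduleWithFixedDelay use the same queue, re-enqueueing the task with a new delay after each run.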

SingleThreadExecutor

    public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1,
                                    0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

Both corePoolSize and maximumPoolSize are 1, meaning the pool holds exactly one fixed thread. The workQueue is a LinkedBlockingQueue, a linked-list-based blocking queue. Therefore, this pool executes the tasks in its queue serially, one at a time.

Suitable scenario: Tasks that are processed sequentially.
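
The serial guarantee can be shown with a small sketch (class name SingleThreadDemo and the runInOrder helper are invented here): tasks submitted from the same thread always execute in submission order, because there is only one worker and a FIFO queue.

```java
import java.util.concurrent.*;

public class SingleThreadDemo {
    // Appends one character per task; the result shows execution order.
    public static String runInOrder() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        StringBuilder log = new StringBuilder(); // safe: only the single worker touches it
        for (char c : "abcde".toCharArray()) {
            pool.execute(() -> log.append(c));
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return log.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runInOrder()); // "abcde": strictly in submission order
    }
}
```

A side effect of the single worker is that tasks never run concurrently, so shared state touched only by the tasks needs no extra synchronization.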

Readers may wonder what a keepAliveTime of 0 means: should an idle thread be reclaimed immediately, or never?

The keepAliveTime parameter comment explicitly states that it only applies to non-core threads. If 0 meant "never reclaim", then once a ScheduledThreadPool created a non-core thread, that thread would never be reclaimed, which would be very unreasonable. So I believe 0 means immediate reclamation.

    public ScheduledThreadPoolExecutor(int corePoolSize) {
        super(corePoolSize, Integer.MAX_VALUE, 0, NANOSECONDS,
              new DelayedWorkQueue());
    }
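
This reading can be checked empirically (class name KeepAliveZeroDemo and the poolSizeAfterIdle helper are invented for this sketch, and the 1-second sleep is just a generous margin): with corePoolSize 0 and keepAliveTime 0, a worker exits as soon as it finds the queue empty, so the pool size drops back to 0 shortly after its only task completes.

```java
import java.util.concurrent.*;

public class KeepAliveZeroDemo {
    // Returns the pool size shortly after the only task finishes.
    public static int poolSizeAfterIdle() throws Exception {
        // corePoolSize = 0, keepAliveTime = 0: every thread is non-core and
        // its zero-timeout poll of the empty queue makes it exit at once.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                0, 2, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<>());
        CountDownLatch done = new CountDownLatch(1);
        pool.execute(done::countDown);
        done.await();
        Thread.sleep(1000); // give the worker time to exit its poll loop
        int size = pool.getPoolSize();
        pool.shutdown();
        return size;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(poolSizeAfterIdle()); // 0: the thread was reclaimed immediately
    }
}
```

If 0 meant "never reclaim", the pool size would stay at 1 here instead of returning to 0.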

The author’s knowledge is limited; if there are mistakes, please point them out in the comments.

My personal blog, vc2x.com