1. The Executors factory class provides four kinds of thread pool

newCachedThreadPool – a cached thread pool (used, for example, in OkHttp)

newFixedThreadPool – a thread pool with a fixed number of threads

newSingleThreadExecutor – a single-threaded thread pool

newScheduledThreadPool – a thread pool for scheduled and periodic tasks

All four of these thread pools are created, directly or indirectly, through ThreadPoolExecutor, and if none of them fits, we can build our own. The ThreadPoolExecutor constructor is shown below; the main goal of this article is to clarify its seven parameters

public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory,
                              RejectedExecutionHandler handler)

2. The seven parameters of the thread pool

1. corePoolSize: the number of core threads

A thread pool always keeps a minimum number of threads alive, and these are not destroyed even when idle (unless allowCoreThreadTimeOut is set). That minimum is corePoolSize.

Here you might ask: if I create a thread pool with a corePoolSize of 3, will the pool contain 3 threads as soon as it is created?

No. Core threads are not created by default; they are created as tasks come in. As long as the number of threads is below corePoolSize, a new thread is created for each arriving task, regardless of whether existing threads are idle.

Then you might wonder: creating a thread is relatively expensive, so what if I want the core threads created as soon as the pool itself is created?

You can call prestartCoreThread to start a single core thread, or prestartAllCoreThreads to start all of them
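A minimal sketch of pre-starting the core threads (the class name and pool sizes here are my own choices for illustration):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PrestartDemo {
    public static int prestartedCount() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                3, 6, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>());
        // Without prestart, the pool holds 0 threads until tasks arrive.
        int before = pool.getPoolSize();
        pool.prestartAllCoreThreads();   // eagerly create all core threads
        int after = pool.getPoolSize();
        pool.shutdown();
        return after - before;           // 3 core threads created up front
    }

    public static void main(String[] args) {
        System.out.println(prestartedCount());
    }
}
```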

2. maximumPoolSize: the maximum number of threads in the pool

When a task is submitted to the pool, the pool first checks whether a free thread is available; if so, that thread executes the task directly. If not, the task is cached in the work queue. If the work queue is also full, a new thread is created to run the newly submitted task. The pool does not create threads without limit: the upper bound is maximumPoolSize

3. keepAliveTime: the survival time of idle threads

If a thread is idle and the current number of threads is greater than corePoolSize, the idle thread is destroyed after a specified time. That time is keepAliveTime

Here you might be asking, what if I want a lifetime for the core thread as well?

This can be done by calling allowCoreThreadTimeOut(true). Note that it only takes effect when keepAliveTime is greater than 0 (otherwise the call throws an exception). By default, only non-core threads are released when idle; with this flag set, core threads time out and are released as well

Core and non-core threads are treated differently here. Core threads are like a company's permanent employees, while non-core threads are temporary workers hired when there is too much work: when the work is done, firing the permanent staff would be inappropriate, but the temporary workers are, after all, temporary~
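A quick sketch of enabling the flag (class name and pool sizes are my own; note keepAliveTime is set above 0, as required):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreTimeoutDemo {
    public static boolean coreThreadsCanTimeOut() {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                3, 6, 1L, TimeUnit.SECONDS,   // keepAliveTime must be > 0 for the flag
                new LinkedBlockingQueue<Runnable>());
        pool.allowCoreThreadTimeOut(true);    // core threads now obey keepAliveTime
        boolean enabled = pool.allowsCoreThreadTimeOut();
        pool.shutdown();
        return enabled;
    }

    public static void main(String[] args) {
        System.out.println(coreThreadsCanTimeOut());
    }
}
```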

4. unit: the time unit of keepAliveTime

5. workQueue: the work queue – four blocking queues are commonly used in Java

(1) ArrayBlockingQueue

A bounded blocking queue backed by an array, ordered FIFO. When the number of threads reaches corePoolSize, new tasks are appended to the tail of the queue to wait for scheduling. If the queue is full, a new thread is created; if the number of threads has reached maximumPoolSize, the rejection policy is executed

(2) LinkedBlockingQueue

An optionally bounded blocking queue backed by a linked list (the default capacity is Integer.MAX_VALUE, so it is effectively unbounded), ordered FIFO. When the number of threads reaches corePoolSize, new tasks stay in the queue instead of triggering new threads, so in this configuration maximumPoolSize effectively has no effect

(3) SynchronousQueue

A blocking queue that caches no tasks: a producer putting a task must wait until a consumer takes it. In other words, once the number of threads reaches corePoolSize, each new task causes a new thread to be created; if the number of threads has reached maximumPoolSize, the rejection policy is executed

(4) PriorityBlockingQueue

An unbounded blocking queue with priority ordering, determined by a Comparator parameter
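For illustration, a small sketch of a PriorityBlockingQueue ordered by a Comparator (here I order strings by length as a stand-in for task priority; names are my own):

```java
import java.util.Comparator;
import java.util.concurrent.PriorityBlockingQueue;

public class PriorityQueueDemo {
    public static String drainOrder() throws InterruptedException {
        // Order by string length: "shorter" tasks come out first.
        PriorityBlockingQueue<String> queue =
                new PriorityBlockingQueue<>(10, Comparator.comparingInt(String::length));
        queue.put("medium");
        queue.put("a");
        queue.put("longest-task");
        StringBuilder order = new StringBuilder();
        while (!queue.isEmpty()) {
            order.append(queue.take()).append(' ');  // take() honors the Comparator
        }
        return order.toString().trim();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(drainOrder()); // a medium longest-task
    }
}
```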

All of these are blocking queues, so what is the difference between a blocking queue and an ordinary queue?

A blocking queue is a queue that supports two additional operations:

Blocking insertion: when the queue is full, the thread inserting an element blocks until space becomes available.

Blocking removal: when the queue is empty, the thread fetching an element waits until the queue becomes non-empty.

6. threadFactory: the thread factory

The factory used to create new threads; it can be used to specify the thread name, whether the thread is a daemon, and so on
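A sketch of a custom ThreadFactory that names its threads and marks them as daemons (the `worker-` prefix and class name are my own):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;
import java.util.concurrent.atomic.AtomicInteger;

public class NamedFactoryDemo {
    public static String firstThreadName() throws Exception {
        AtomicInteger counter = new AtomicInteger();
        // Factory that gives each thread a readable name and daemon status.
        ThreadFactory factory = runnable -> {
            Thread t = new Thread(runnable, "worker-" + counter.incrementAndGet());
            t.setDaemon(true);
            return t;
        };
        ExecutorService pool = Executors.newFixedThreadPool(2, factory);
        // The first submitted task runs on the first factory-created thread.
        String name = pool.submit(() -> Thread.currentThread().getName()).get();
        pool.shutdown();
        return name;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(firstThreadName()); // worker-1
    }
}
```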

7. handler: the rejection policy

When the work queue has reached its capacity and the number of threads has reached maximumPoolSize, a newly arriving task triggers the rejection policy. Java provides four rejection policies

(1) CallerRunsPolicy

Under this policy, the rejected task's run method is executed directly in the calling thread, unless the pool has been shut down, in which case the task is discarded

(2) AbortPolicy

Under this policy, the task is discarded and a RejectedExecutionException is thrown. This is the default policy

(3) DiscardPolicy

Under this strategy, the task is simply discarded and nothing is done.

(4) DiscardOldestPolicy

Under this policy, the oldest task at the head of the queue is discarded and the rejected task is resubmitted
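To see a policy in action, here is a sketch of CallerRunsPolicy with a deliberately tiny pool (one thread, no queue capacity; class name and sizes are my own). The first task occupies the only thread, so the second is rejected and CallerRunsPolicy runs it on the submitting thread:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.SynchronousQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CallerRunsDemo {
    public static String whoRanTheRejectedTask() throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new SynchronousQueue<Runnable>(),
                new ThreadPoolExecutor.CallerRunsPolicy());
        CountDownLatch block = new CountDownLatch(1);
        // First task occupies the single thread; SynchronousQueue caches nothing.
        pool.execute(() -> { try { block.await(); } catch (InterruptedException ignored) { } });
        String[] runner = new String[1];
        // Second task is rejected, so CallerRunsPolicy runs it right here, inline.
        pool.execute(() -> runner[0] = Thread.currentThread().getName());
        block.countDown();
        pool.shutdown();
        return runner[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(whoRanTheRejectedTask()); // main
    }
}
```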

3. Summary

1. The overall flow of a thread pool's execution

Assuming none of the special settings mentioned above (pre-created core threads, a survival time for core threads, etc.):

When a thread pool is created, there are no threads in it; the task queue is passed in as a constructor parameter. Even if the queue already contains tasks, the pool will not execute them immediately.

When the execute() method is called to add a task, the thread pool makes the following judgments:

1. If the number of running threads is smaller than corePoolSize, create a thread to run the task immediately

2. If the number of running threads is greater than or equal to corePoolSize, queue the task.

3. If the queue is full and the number of running threads is smaller than maximumPoolSize, create a thread to run the task.

4. If the queue is full and the number of running threads is greater than or equal to maximumPoolSize, the rejection policy is executed (by default, AbortPolicy throws an exception telling the caller "I can't accept any more tasks").

5. When a thread completes a task, it takes the next task from the queue and executes it.

6. When a thread has been idle for longer than keepAliveTime, the pool checks the current thread count: if it is greater than corePoolSize, the thread is stopped. So after all tasks complete, the pool eventually shrinks back to corePoolSize threads.
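The steps above all hang off the seven constructor parameters, so a fully spelled-out custom pool can serve as a summary (the concrete sizes and policy here are my own example values):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CustomPoolDemo {
    public static ThreadPoolExecutor build() {
        return new ThreadPoolExecutor(
                3,                                    // corePoolSize
                6,                                    // maximumPoolSize
                60L,                                  // keepAliveTime
                TimeUnit.SECONDS,                     // unit
                new ArrayBlockingQueue<>(10),         // workQueue (bounded, capacity 10)
                Executors.defaultThreadFactory(),     // threadFactory
                new ThreadPoolExecutor.AbortPolicy()  // handler (the default policy)
        );
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = build();
        System.out.println(pool.getCorePoolSize() + " " + pool.getMaximumPoolSize());
        pool.shutdown();
    }
}
```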

2. Case: can a task that was added later run first?

Given a queue size of 10, corePoolSize of 3, and maximumPoolSize of 6, when 20 tasks are added the order of execution looks like this: tasks 1, 2, 3 are executed first, then tasks 4-13 are put into the queue. With the queue full, tasks 14, 15, and 16 are executed immediately on newly created threads, and tasks 17-20 are rejected. The final order is: 1, 2, 3, 14, 15, 16, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13.
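The accounting in this case can be checked with a sketch that blocks every task on a latch so none finishes early and then counts rejections via a custom handler (class name and latch trick are my own):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class RejectionCountDemo {
    public static int rejectedOutOf20() {
        AtomicInteger rejected = new AtomicInteger();
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                3, 6, 60L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(10),
                (task, executor) -> rejected.incrementAndGet()); // count instead of throwing
        CountDownLatch block = new CountDownLatch(1);
        for (int i = 0; i < 20; i++) {
            pool.execute(() -> { try { block.await(); } catch (InterruptedException ignored) { } });
        }
        // 3 core threads + 10 queued + 3 extra threads = 16 accepted, 4 rejected.
        block.countDown();
        pool.shutdown();
        return rejected.get();
    }

    public static void main(String[] args) {
        System.out.println(rejectedOutOf20()); // 4
    }
}
```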

3. A few words on queuing strategies

1. No queuing, a good default

Use SynchronousQueue + an effectively unbounded maximumPoolSize. SynchronousQueue caches nothing, so every submitted task is executed right away (on a reused idle thread or a newly created one) as long as the thread count has not reached maximumPoolSize. An example is the implementation of newCachedThreadPool

    public static ExecutorService newCachedThreadPool() {
        return new ThreadPoolExecutor(0, Integer.MAX_VALUE,
                                      60L, TimeUnit.SECONDS,
                                      new SynchronousQueue<Runnable>());
    }

newCachedThreadPool uses the SynchronousQueue + unbounded maximumPoolSize combination (Integer.MAX_VALUE can be treated as infinite).

corePoolSize is also set to zero, so every thread follows the 60-second idle-thread survival rule.

For executing large numbers of short-lived asynchronous tasks this improves performance: if no free thread is available when a task arrives, a new thread is created and added to the pool, and after a long enough idle period all threads are released and no resources are consumed.

2. Unbounded queuing

Use LinkedBlockingQueue + a fixed number of threads (maximumPoolSize set equal to corePoolSize). When a task arrives and no free thread is available, it is queued, and the unbounded queue ensures no task is lost. In fact, newFixedThreadPool and newSingleThreadExecutor are implemented this way

    public static ExecutorService newFixedThreadPool(int nThreads) {
        return new ThreadPoolExecutor(nThreads, nThreads,
                                      0L, TimeUnit.MILLISECONDS,
                                      new LinkedBlockingQueue<Runnable>());
    }
    public static ExecutorService newSingleThreadExecutor() {
        return new FinalizableDelegatedExecutorService
            (new ThreadPoolExecutor(1, 1, 0L, TimeUnit.MILLISECONDS,
                                    new LinkedBlockingQueue<Runnable>()));
    }

3. Controlled queue and pool size – for fine tuning

Use ArrayBlockingQueue + a carefully chosen maximumPoolSize. For long-running, time-consuming tasks, a large queue with a small pool reduces CPU usage, operating-system resources, and context-switching overhead. If tasks are long and frequently block, use a larger pool so the CPU does not sit idle.
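A sketch of such a configuration (the concrete sizes, class name, and choice of CallerRunsPolicy for back-pressure are my own example values, not a prescription):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedQueueDemo {
    public static ThreadPoolExecutor boundedPool() {
        // Large bounded queue, modest thread count: fewer context switches for
        // long-running tasks, and the bound keeps memory usage under control.
        return new ThreadPoolExecutor(
                4, 8, 30L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<Runnable>(100),
                new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure on overload
    }

    public static void main(String[] args) {
        ThreadPoolExecutor pool = boundedPool();
        System.out.println(pool.getQueue().remainingCapacity()); // 100
        pool.shutdown();
    }
}
```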