How to create a thread

1 Extend the Thread class and override the run method. This is the simplest implementation, but it does not conform to the Liskov substitution principle, and the subclass cannot extend any other class. Steps:

(1) Extend the Thread class and override the run method. The body of the run method represents the task the thread is to perform, so run() is called the thread body.

(2) Create a thread object and call its start() method to start the thread
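The two steps above can be sketched like this (the class name MyThread and the flag field are illustrative, added only so the example is self-checking):

```java
// Method 1 sketch: extend Thread and override run(); the body of run()
// is the task the thread performs.
public class MyThread extends Thread {
    volatile boolean ran = false;   // illustrative flag: records that run() executed

    @Override
    public void run() {
        ran = true;
        System.out.println("running in thread: " + getName());
    }

    public static void main(String[] args) throws InterruptedException {
        MyThread t = new MyThread();
        t.start();   // start() schedules the thread; calling run() directly would not start a new thread
        t.join();    // wait for the thread to finish
    }
}
```

Note that start() may be called only once per Thread object; calling it again throws IllegalThreadStateException.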

2 Implement the Runnable interface and override the run method. This avoids the single-inheritance limitation and makes the code more flexible and decoupled. Steps:

(1) Implement the Runnable interface and override the run method

(2) Create a Thread object wrapping the Runnable and call its start() method to start it
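A minimal sketch of method 2 (the class name MyTask is illustrative):

```java
// Method 2 sketch: implement Runnable, then hand the Runnable to a
// Thread object and start it.
public class MyTask implements Runnable {
    @Override
    public void run() {
        System.out.println("task running in: " + Thread.currentThread().getName());
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new MyTask());   // the Thread wraps the Runnable
        t.start();
        t.join();

        // Since Java 8, a lambda is the usual shorthand for a Runnable:
        Thread t2 = new Thread(() -> System.out.println("lambda task"));
        t2.start();
        t2.join();
    }
}
```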

**3 Implement the Callable interface and override the call method.** This way you can obtain the return value of the thread's execution, and the call method can throw an exception. Steps:

(1) Define a class that implements the Callable interface and implement its call() method, which acts as the thread body and returns a value.

(2) Wrap the Callable object in a FutureTask, create a thread with it, and call start(), e.g. FutureTask<Integer> ft = new FutureTask<>(mc); where mc is the Callable instance

(3) Call the get() method of the FutureTask to obtain the return value after the child thread finishes executing
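Put together, the three steps look roughly like this (MyCallable and the returned value are illustrative):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.FutureTask;

// Method 3 sketch: a Callable acts as the thread body and returns a value.
public class MyCallable implements Callable<Integer> {
    @Override
    public Integer call() throws Exception {
        return 21 + 21;   // the thread body returns a value (and may throw)
    }

    public static void main(String[] args) throws Exception {
        FutureTask<Integer> ft = new FutureTask<>(new MyCallable()); // wrap the Callable
        new Thread(ft).start();        // FutureTask is itself a Runnable
        System.out.println(ft.get()); // get() blocks until the result is ready
    }
}
```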

4 Use Executors to create a thread pool

Why have thread pools

Remember when we didn't have a thread pool: we created each thread with new Thread(() -> { … }) and called start() to run it. This causes a number of problems: thread creation and destruction are time-consuming, resource-wasting operations. Creating two or three threads this way is fine, but what about hundreds? And when they finish, they all have to be destroyed one by one. The cost of creating and destroying hundreds of threads is terrible!

Thread pools were created to solve these problems. The core idea is thread reuse: a thread is not destroyed when its task finishes; it goes back into the pool and waits for new tasks, so N threads are reused to execute all old and new tasks. The overhead is only the creation of those N threads, rather than a full create-to-destroy life cycle for every request.

So the benefits and advantages of using thread pools:

  • Reduce resource consumption. Reduce the cost of thread creation and destruction by reusing created threads.

  • Improve response speed. When a task arrives, it can be executed immediately without waiting for the thread to be created.

  • Improve thread manageability. Threads can be uniformly allocated, tuned, and monitored using thread pools.

Create a thread pool

3.1 Seven Parameters

You can create a thread pool with the static factory methods of Executors, but it is better not to: use the ThreadPoolExecutor constructor instead. Doing so makes the running rules of the thread pool explicit and avoids the risk of resource exhaustion.

Source code (the seven parameters):

public ThreadPoolExecutor(
    int corePoolSize,
    int maximumPoolSize,
    long keepAliveTime,
    TimeUnit unit,
    BlockingQueue<Runnable> workQueue,
    ThreadFactory threadFactory,
    RejectedExecutionHandler handler) {}

① corePoolSize: number of core threads

By default, after a thread pool is created the number of threads in it is zero; when tasks arrive, threads are created to execute them. corePoolSize defines the minimum number of threads that can run simultaneously. Once the number of threads in the pool reaches corePoolSize, further incoming tasks are placed on the work queue. Core threads are not reclaimed by default, but if allowCoreThreadTimeOut(true) is set, idle core threads are reclaimed as well.

② maximumPoolSize: specifies the maximum number of threads

When the number of tasks in the queue reaches the queue capacity, the number of threads that can run simultaneously grows up to this maximum. If it is set equal to the core thread count, the pool has a fixed size.

③ workQueue: the work queue

A task enters the work queue when the number of running threads is greater than or equal to corePoolSize. A blocking queue is used to store the tasks waiting to be executed: a newly submitted task first enters the work queue and is removed from it when it is scheduled. There are several choices of blocking queue:

  • ArrayBlockingQueue: array-based bounded blocking queue, sorted by FIFO;

  • LinkedBlockingQueue: an unbounded blocking queue based on a linked list, sorted in FIFO;

  • SynchronousQueue: a blocking queue that does not buffer tasks; a newly arriving task is not queued but handed directly to a thread for execution;

  • PriorityBlockingQueue: An unbounded blocking queue with a priority implemented by the Comparator parameter.

④ unit: the time unit of keepAliveTime, for example TimeUnit.MILLISECONDS or TimeUnit.SECONDS

⑤ keepAliveTime: indicates the idle time of a thread

When a thread's idle time reaches this value, idle threads are destroyed until only corePoolSize threads remain, to avoid wasting memory resources.

⑥ threadFactory: the thread factory

When the thread pool needs a new thread, threadFactory is used to create it.

The default is DefaultThreadFactory, which creates threads via its newThread() method. The threads it creates all belong to the same thread group and have the same priority; that is, they are intended to run a set of similar tasks. Threads can be given names to make error analysis easier.

⑦ handler: the rejection policy.

  • AbortPolicy: discards the task and throws an exception;

**Function:** when the rejection policy is triggered, a RejectedExecutionException is thrown directly. "Abort" means the current submission flow is interrupted.

**Usage scenarios:** there is no special scenario for this one; just make sure the thrown exception is handled correctly.

AbortPolicy is the default policy of ThreadPoolExecutor. The Executors factory methods do not expose a rejection policy, so their pools use this default. Note, however, that the queues in those factory-created pools are effectively unbounded, meaning memory may run out before the rejection policy is ever triggered. When customizing your own thread pool with this policy, handle the exception it throws, because it interrupts the current submission flow.

  • CallerRunsPolicy: the calling thread runs the task itself. This strategy slows down the submission of new tasks and affects overall performance; in effect it behaves like an increase in queue capacity. Choose it if your application can tolerate the delay and cannot afford to drop any task requests;

    **Function:** when the rejection policy is triggered, the task is executed by the thread that submitted it, provided the thread pool has not been shut down.

    **Usage scenarios:** suitable when losing tasks is not acceptable, performance requirements are not high, and concurrency is low. Since the thread pool is usually not shut down, the submitted task does get run; but because the caller's own thread executes it, repeated submissions block subsequent task submission, so performance and throughput naturally drop.

  • DiscardOldestPolicy: discards the task that has waited longest in the queue and adds the current task to the queue.

    **Function:** if the thread pool is not shut down, pops the element at the head of the queue and then tries to submit the current task again.

    **Usage scenarios:** this policy also drops tasks, again silently, but it drops old, not-yet-executed tasks in favor of newly submitted ones with higher priority. Based on this, one scenario is publishing versioned messages: a message is published, then modified; if the updated message arrives while the original is still queued, the unexecuted lower-version message can be discarded. Because lower-version messages may still be queued, compare message versions before actually processing them.

  • DiscardPolicy: simply discards the current task without throwing an exception.

    **Function:** silently discards the task without triggering any action.

    **Usage scenarios:** usable only if the submitted tasks are unimportant, because it is an empty implementation that will silently swallow your tasks. So this policy is rarely used.

3.2 Example
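A minimal sketch wiring all seven constructor parameters explicitly; the sizes and queue capacity below are arbitrary illustrations, not recommendations for any particular workload:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                      // corePoolSize
                4,                                      // maximumPoolSize
                60L, TimeUnit.SECONDS,                  // keepAliveTime + unit
                new ArrayBlockingQueue<>(10),           // workQueue (bounded)
                Executors.defaultThreadFactory(),       // threadFactory
                new ThreadPoolExecutor.AbortPolicy());  // rejection handler

        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.execute(() -> System.out.println(
                    "task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();                                // stop accepting new tasks
        pool.awaitTermination(5, TimeUnit.SECONDS);     // wait for submitted tasks
    }
}
```

Using a bounded queue here is deliberate: it is what makes the rejection policy reachable at all.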

3.3 Built-in thread pools

They can be created like this: ExecutorService myExecutorService = Executors.newCachedThreadPool();

  • newFixedThreadPool: a fixed-size thread pool. The maximum thread count equals the core thread count, there are no extra idle threads, and keepAliveTime = 0. Its work queue is the unbounded blocking queue LinkedBlockingQueue. Suitable for heavily loaded servers.

  • newSingleThreadExecutor: uses a single thread to execute all tasks sequentially. Suitable for scenarios that require tasks to run in order. Compared with a bare single thread, a single-thread pool is more efficient even though the same one thread is doing the work.

  • newCachedThreadPool: returns a thread pool that adjusts its thread count as needed. The number of threads is not fixed; if idle threads are available for reuse, they are preferred. If all threads are busy when a new task is submitted, a new thread is created to handle it. Every thread returns to the pool for reuse after completing its current task.

  • newScheduledThreadPool: supports delayed and periodic task execution; suitable when multiple background threads must run periodic tasks and the number of threads needs to be limited. The difference from newCachedThreadPool is that its worker threads are not reclaimed. Principle: ScheduledThreadPoolExecutor first puts the task into a DelayQueue, then starts a thread that repeatedly takes from the queue the task whose scheduled time is closest to the current time. The DelayQueue holding the waiting tasks is backed by a PriorityQueue ordered by time, with the earliest time at the head. DelayQueue is also an unbounded queue; its initial capacity is 16 and it grows when that is exceeded. There are three ways to submit tasks:

    • schedule: executes a task once after a specified delay
    • scheduleAtFixedRate: executes a task at a fixed rate (the interval between starts is fixed, regardless of how long each run takes)
    • scheduleWithFixedDelay: executes a task with a fixed delay (related to the task's execution time: the delay is counted from the completion of the previous run)
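A small sketch of these submission methods; the delays below are arbitrary, chosen only to keep the demo short:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class ScheduleDemo {
    public static void main(String[] args) throws Exception {
        ScheduledExecutorService ses = Executors.newScheduledThreadPool(1);

        // schedule: run a one-shot Callable after a 100 ms delay
        ScheduledFuture<String> once =
                ses.schedule(() -> "done", 100, TimeUnit.MILLISECONDS);
        System.out.println(once.get());   // blocks until the task has run

        // scheduleWithFixedDelay: fixed gap measured from each completion
        // (scheduleAtFixedRate would instead fix the interval between starts)
        ScheduledFuture<?> periodic = ses.scheduleWithFixedDelay(
                () -> System.out.println("tick"), 0, 50, TimeUnit.MILLISECONDS);
        Thread.sleep(200);
        periodic.cancel(false);           // stop the repeating task
        ses.shutdown();
    }
}
```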

4 Thread pool task processing flow

① If the core thread pool is not full, i.e. workCount < corePoolSize, create a new thread to execute the task; this step requires acquiring the global lock.

② If the core thread pool is full (workCount >= corePoolSize) but the work queue is not full, put the task into the work queue.

③ If the work queue is full but workCount < maximumPoolSize, create a new (non-core) thread to process the task.

④ If the maximum thread count is also reached (workCount >= maximumPoolSize), the task is handed to the rejection policy.

When the thread pool creates a thread, it wraps it as a Worker; after completing its current task, the Worker loops to fetch the next task from the work queue.

For example:

Thread pool parameter configuration: 5 core threads, 10 maximum threads, and 100 queue length.

So when the thread pool starts, no threads are created. If six requests come in, five core threads are created to handle five of them, and the one not being processed is put on the queue. Then 99 more requests arrive; the pool sees that the core threads are full and the queue still has 99 empty slots, so those 99 go into the queue, which together with the earlier one makes exactly 100. Five more requests come in, and the pool opens five more non-core threads to handle them. The current situation: 10 threads in the RUNNING state and a full queue of 100 tasks. If one more request arrives now, the rejection policy is applied directly.
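The walk-through above can be reproduced in code (a sketch; the one-second sleeps exist only to keep every thread busy while the 111 tasks are submitted):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// 5 core threads, 10 max, queue of 100: submitting 111 slow tasks fills
// the core threads (5), the queue (100), then the non-core threads (5);
// the 111th task hits the default AbortPolicy.
public class FlowDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                5, 10, 60, TimeUnit.SECONDS, new ArrayBlockingQueue<>(100));
        Runnable slowTask = () -> {
            try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
        };
        int rejected = 0;
        for (int i = 0; i < 111; i++) {
            try {
                pool.execute(slowTask);
            } catch (RejectedExecutionException e) {
                rejected++;   // AbortPolicy throws for the 111th task
            }
        }
        System.out.println("threads:  " + pool.getPoolSize());    // 10
        System.out.println("queued:   " + pool.getQueue().size()); // 100
        System.out.println("rejected: " + rejected);               // 1
        pool.shutdownNow();
    }
}
```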

5 Executing tasks and shutting down the thread pool

5.1 Executing Tasks

What is the difference between submit() and execute() methods in a thread pool?

  • Accepted parameters: execute() can run only Runnable tasks; submit() accepts both Runnable and Callable tasks.

  • Return value: execute() is for submitting tasks that need no return value, so there is no way to know whether the task was executed successfully. submit() is for tasks that do need a return value: the thread pool returns a Future object, from which you can tell whether the task succeeded and retrieve the result via get(). get() blocks the current thread until the task completes; get(long timeout, TimeUnit unit) blocks for at most the given time, after which the task may not yet be finished.

  • Exception handling: submit() makes exception handling easier, since exceptions thrown by the task surface when Future.get() is called.
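A minimal sketch of the contrast (pool sizing arbitrary):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SubmitVsExecute {
    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // execute(): fire-and-forget, Runnable only, no way to get a result
        pool.execute(() -> System.out.println("execute: no return value"));

        // submit(): accepts a Callable and returns a Future
        Future<Integer> f = pool.submit(() -> 6 * 7);
        System.out.println("submit result: " + f.get()); // get() blocks until done

        pool.shutdown();
    }
}
```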

5.2 Shutting down a thread pool

A thread pool can be shut down by calling the shutdown() or shutdownNow() method. Both work by iterating over the worker threads in the pool and interrupting them one by one via interrupt(); tasks that cannot respond to interruption may therefore never terminate.

The difference between them:

  • shutdownNow: first sets the thread pool state to STOP, then attempts to stop all threads that are executing or waiting on tasks, and returns the list of tasks that were waiting to be executed.
  • shutdown: merely sets the thread pool state to SHUTDOWN and interrupts the threads that are not executing tasks. Normally call shutdown() to close the pool; call shutdownNow() when it is not necessary for every task to finish.
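A hedged sketch of the difference; the sleeps are only there to keep tasks in flight long enough to observe it:

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(1);
        for (int i = 0; i < 3; i++) {
            pool.execute(() -> {
                try {
                    Thread.sleep(100);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // respond to the interrupt
                }
            });
        }
        Thread.sleep(50); // let the single worker start the first task

        // shutdownNow() interrupts the running task and returns the queued ones;
        // shutdown() would instead let all three tasks run to completion.
        List<Runnable> pending = pool.shutdownNow();
        System.out.println("tasks never started: " + pending.size()); // 2
        pool.awaitTermination(1, TimeUnit.SECONDS);
    }
}
```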

How should the number of threads be designed? That is, the relationship between the thread pool size and CPU-intensive vs. I/O-intensive tasks.

CPU-intensive: this type of task generally involves little I/O, so the backend server can process it quickly and the pressure falls on the CPU.

I/O-intensive: there are often large-volume queries and bulk inserts, where the pressure falls mainly on the I/O.

  • CPU-intensive tasks: in general, number of CPU cores == maximum number of truly concurrent threads. In this case (with N CPU cores), a large number of clients may send requests, but the server can execute at most N threads simultaneously, so there is no need to make the thread pool or its work queue very large (thread count = N or N + 1).

  • I/O-intensive tasks: if a thread spends a long time blocked on I/O, it does not occupy the CPU, leaving a CPU idle. In this case you should increase the number of threads (and the work queue length) so that the CPU is not left idle, improving CPU utilization.

In general, how should the size of the thread pool (the default core thread count) be set? Let N be the number of CPUs:

  • For CPU-intensive applications, set the thread pool size to N + 1;

  • For I/O-intensive applications, set the thread pool size to 2N + 1.


Please point out more deficiencies!