A constructor

Without further ado, let's start with the most basic piece: the constructor.

    public ThreadPoolExecutor(int corePoolSize,
                              int maximumPoolSize,
                              long keepAliveTime,
                              TimeUnit unit,
                              BlockingQueue<Runnable> workQueue,
                              ThreadFactory threadFactory,
                              RejectedExecutionHandler handler) {
        if (corePoolSize < 0 ||
            maximumPoolSize <= 0 ||
            maximumPoolSize < corePoolSize ||
            keepAliveTime < 0)
            throw new IllegalArgumentException();
        if (workQueue == null || threadFactory == null || handler == null)
            throw new NullPointerException();
        this.corePoolSize = corePoolSize;
        this.maximumPoolSize = maximumPoolSize;
        this.workQueue = workQueue;
        this.keepAliveTime = unit.toNanos(keepAliveTime);
        this.threadFactory = threadFactory;
        this.handler = handler;
    }

Seven parameters

In short, they are all configuration parameters; a small usage sketch follows the list.

  • corePoolSize: core pool size. Whether the threads are idle or not, the pool keeps corePoolSize threads alive unless allowCoreThreadTimeOut is set.
  • maximumPoolSize: maximum pool size. The thread pool will create at most maximumPoolSize threads.
  • keepAliveTime: keep-alive time. If a thread beyond the core count goes keepAliveTime without receiving a new task, it is reclaimed.
  • unit: the time unit of keepAliveTime.
  • workQueue: the queue holding tasks waiting to be executed. When submitted tasks exceed what the core threads can pick up, they are stored here. It only holds Runnable tasks submitted via the execute method, so don't dig yourself a hole with it.
  • threadFactory: thread factory, used to create the pool's threads. For example, you can customize the thread name so that when reading a thread dump you can tell at a glance where a thread came from.
  • handler: rejection policy. When the queue is full and the maximum number of threads are all busy, the pool cannot accept further submissions; this decides which rejection policy is applied.
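
To make the seven parameters concrete, here is a minimal usage sketch; the sizes, queue capacity, rejection policy, and the "order-pool" thread-name prefix are made-up values purely for illustration.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadFactory;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicInteger;

    public class PoolDemo {
        public static void main(String[] args) {
            // Factory that gives threads a recognizable name for thread dumps.
            ThreadFactory namedFactory = new ThreadFactory() {
                private final AtomicInteger seq = new AtomicInteger(1);
                @Override
                public Thread newThread(Runnable r) {
                    return new Thread(r, "order-pool-" + seq.getAndIncrement());
                }
            };

            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    5,                                    // corePoolSize
                    10,                                   // maximumPoolSize
                    60L, TimeUnit.SECONDS,                // keepAliveTime + unit
                    new LinkedBlockingQueue<>(100),       // workQueue (bounded)
                    namedFactory,                         // threadFactory
                    new ThreadPoolExecutor.CallerRunsPolicy()); // handler

            pool.execute(() -> System.out.println(Thread.currentThread().getName() + " running"));
            pool.shutdown();
        }
    }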

JDK Implementation Process

A thread pool is designed to process tasks. When a task comes in, the basic execution flow of the thread pool is as follows:

  • In the JDK thread pool, once the core threads are all busy, subsequent tasks are placed in the blocking queue; only when the blocking queue is full are extra threads created, up to the maximum. If the queue is full and the pool is already at the maximum, the rejection policy kicks in. The demo below walks through these stages.
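
A small runnable demo of that order, using deliberately tiny, made-up sizes (2 core threads, a queue of 2, a maximum of 4 threads) so every stage is visible:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.Executors;
    import java.util.concurrent.RejectedExecutionException;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class JdkFlowDemo {
        public static void main(String[] args) {
            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    2, 4, 60L, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<>(2),
                    Executors.defaultThreadFactory(),
                    new ThreadPoolExecutor.AbortPolicy());

            // Tasks 1-2 occupy the core threads, 3-4 wait in the queue,
            // 5-6 force extra threads up to the maximum, and 7 is rejected.
            for (int i = 1; i <= 7; i++) {
                final int id = i;
                try {
                    pool.execute(() -> {
                        try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
                    });
                    System.out.printf("task %d accepted, poolSize=%d, queued=%d%n",
                            id, pool.getPoolSize(), pool.getQueue().size());
                } catch (RejectedExecutionException e) {
                    System.out.println("task " + id + " rejected");
                }
            }
            pool.shutdown();
        }
    }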

Tomcat Execution Process

If Tomcat is used, things work a little differently.

  • The Tomcat thread pool works the other way around: if the core threads are all busy, it first creates threads up to the maximum, and only then submits the task to the queue. This is done to prioritize response time; the sketch below shows the trick behind it.

If 300 is the maximum and requests keep coming in, that is certainly more than the threads alone can handle; the overflow then waits in the queue.
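
Tomcat achieves this ordering with its own executor and task queue classes (in org.apache.tomcat.util.threads). The sketch below only illustrates the underlying idea with a hypothetical EagerTaskQueue: the queue declines a task while the pool can still grow, which makes the JDK executor start a new thread before queueing. Tomcat additionally handles the rejection that can occur when the pool hits the maximum between the check and the thread creation by re-queueing the task; that part is omitted here for brevity.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;

    // Hypothetical sketch of the Tomcat-style idea, not Tomcat's actual TaskQueue.
    class EagerTaskQueue extends LinkedBlockingQueue<Runnable> {
        private transient ThreadPoolExecutor executor; // set once the pool is built

        EagerTaskQueue(int capacity) {
            super(capacity);
        }

        void setExecutor(ThreadPoolExecutor executor) {
            this.executor = executor;
        }

        @Override
        public boolean offer(Runnable task) {
            // While the pool can still grow, decline the task so the executor
            // creates a new thread first; only queue once the maximum is reached.
            if (executor != null && executor.getPoolSize() < executor.getMaximumPoolSize()) {
                return false;
            }
            return super.offer(task);
        }
    }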

The problem

Suppose we have ten machines handling 1,000 concurrent requests, i.e. an average of 100 requests per instance, and each server has 4 cores. How should the thread pool be sized?

  • For CPU-intensive tasks, we want to minimize context switching, so the core pool size can be set to 5 (number of cores + 1), the queue length to 100, and the maximum pool size kept equal to the core pool size.
  • For IO-intensive tasks, we can allocate a few more core threads to keep the CPU busy while threads wait on IO, so the core pool size can be set to 8, the queue length again to 100, and the maximum pool size to 10 (see the sketch after this list).
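
Under those assumptions (a 4-core machine, a queue of 100), the two configurations might look like the sketch below; the numbers mirror the ones above and are starting points rather than final answers.

    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    public class SizingDemo {
        // CPU-intensive: core = max = 5 (cores + 1), so the thread count never churns.
        static ThreadPoolExecutor cpuBoundPool() {
            return new ThreadPoolExecutor(5, 5,
                    0L, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<>(100));
        }

        // IO-intensive: core 8, max 10, so the CPU stays busy while threads block on IO.
        static ThreadPoolExecutor ioBoundPool() {
            return new ThreadPoolExecutor(8, 10,
                    60L, TimeUnit.SECONDS,
                    new LinkedBlockingQueue<>(100));
        }
    }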

Of course, these are all theoretical values; the most appropriate setting should be determined by comparing load-test results.

Consider it from four perspectives:

  • The CPU-intensive case.
  • The IO-intensive case.
  • Deriving a reasonable parameter configuration from load testing.
  • Dynamic adjustment of the thread pool, with early warning via thread pool monitoring (see the sketch below).
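
ThreadPoolExecutor does expose the counters and setters needed for the last two points. Here is a minimal monitoring-and-adjustment sketch; the queue threshold and the new sizes are made-up example values.

    import java.util.concurrent.ThreadPoolExecutor;

    public class PoolMonitor {
        // Call this periodically (e.g. from a scheduled task) to watch and resize the pool.
        static void checkAndAdjust(ThreadPoolExecutor pool) {
            int queued = pool.getQueue().size();
            System.out.printf("active=%d, poolSize=%d, queued=%d, completed=%d%n",
                    pool.getActiveCount(), pool.getPoolSize(), queued,
                    pool.getCompletedTaskCount());

            // Early warning: the queue is filling up, so grow the pool at runtime.
            if (queued > 80) {
                pool.setMaximumPoolSize(20); // raise the ceiling first
                pool.setCorePoolSize(10);    // both setters take effect immediately
            }
        }
    }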

Tuning strategy

Under heavy traffic, the database is usually the first component to give way.

  • Optimize system parameters.
  • Spread the load: split databases and tables (sharding), and separate reads from writes.
  • Introduce caching to intercept requests at the entry point.
  • Shave peaks and fill valleys: make sensible use of MQ.
  • Process asynchronously.
  • Circuit breaking and service degradation.
  • Scale out with more servers.