Preface

This article comes with a detailed mind map of Java multithreaded concurrent programming; readers who want a quick overview can open it and take a look.

Compared with other Java topics, multithreading has a real learning threshold and takes more effort to understand. In everyday work, using it improperly leads to problems such as corrupted data, poor performance (sometimes worse than a single-threaded version), or a deadlocked, hung program, so understanding multithreading well is very important.

What follows covers threads, from the basic concepts through to the concurrency model.

1. Concurrency and parallelism

Parallelism means two threads are doing work at the same instant.

Concurrency means doing a bit of one thing, then a bit of another, with a scheduler switching between them. On a single-core CPU, true (microscopic) parallelism is impossible; only concurrency is.

Critical sections

A critical section represents a shared resource, data that multiple threads can use, but only one thread at a time. Once the resource is occupied, any other thread that needs it must wait.
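As an illustration of mutual exclusion in a critical section, here is a minimal sketch (the class and method names are made up for this example) in which two threads increment a shared counter through a synchronized method:

```java
// Illustrative sketch: a shared counter treated as a critical section.
public class CriticalSectionDemo {
    private int count = 0;

    // Only one thread may execute this method on a given instance at a time.
    public synchronized void increment() {
        count++; // read-modify-write: unsafe without mutual exclusion
    }

    public synchronized int get() {
        return count;
    }

    public static int runDemo() throws InterruptedException {
        CriticalSectionDemo demo = new CriticalSectionDemo();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                demo.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        return demo.get(); // always 20000 with synchronized in place
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```

Without the synchronized keyword, the two read-modify-write sequences could interleave and the final count would often be less than 20000.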

Blocking and non-blocking

Blocking and non-blocking describe how threads interact. For example, if one thread occupies a critical section, every other thread that needs the resource must wait at its boundary, and waiting suspends them. This is called blocking.

If the occupying thread never releases the resource, all the threads blocked on that critical section stay stuck. Blocking suspends a thread at the operating-system level; it generally performs poorly, since a scheduling round trip costs roughly 80,000 clock cycles.

Non-blocking, by contrast, allows multiple threads to enter the critical section simultaneously.

2. Locks

Deadlock

Deadlock refers to the situation in which several threads or processes each hold a resource while waiting, in a cycle, for a resource held by another, so none of them can ever proceed.
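The classic deadlock pattern arises when two threads acquire two locks in opposite orders. The sketch below (names are illustrative) shows the standard fix: both threads acquire the locks in the same global order, so no circular wait can form:

```java
// Sketch of deadlock avoidance via lock ordering. If thread 1 locked A then
// B while thread 2 locked B then A, each could end up holding one lock and
// waiting forever for the other. A single global acquisition order removes
// the possibility of a cycle.
public class DeadlockDemo {
    private static final Object LOCK_A = new Object();
    private static final Object LOCK_B = new Object();

    // Both threads take LOCK_A before LOCK_B, so no circular wait can form.
    static void doWorkInOrder() {
        synchronized (LOCK_A) {
            synchronized (LOCK_B) {
                // critical work needing both resources goes here
            }
        }
    }

    public static boolean runDemo() throws InterruptedException {
        Thread t1 = new Thread(DeadlockDemo::doWorkInOrder);
        Thread t2 = new Thread(DeadlockDemo::doWorkInOrder);
        t1.start();
        t2.start();
        t1.join(5000);
        t2.join(5000);
        return !t1.isAlive() && !t2.isAlive(); // both finished: no deadlock
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```

If doWorkInOrder were split into two variants that nested the synchronized blocks in opposite orders, the same program could hang forever.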


Livelock

Suppose threads 1 and 2 both need resources A and B, and thread 1 currently holds A while thread 2 holds B. To avoid a deadlock, thread 1 releases its lock on A and thread 2 releases its lock on B. Now both resources are free, both threads grab a lock at the same time, and the original situation repeats. That is a livelock.

The simple analogy is two people meeting at an elevator door, one going in and one coming out, each blocking the other's path: both step aside in the same direction at the same time, back and forth, and the way stays blocked.

If your production application ever runs into a livelock, congratulations, you have hit the jackpot: these problems are very hard to troubleshoot.

Starvation

Starvation is when one or more threads, for one reason or another, cannot obtain the resources they need and therefore never get to execute.

3. The thread life cycle

During the life cycle of a thread, it goes through several states: created, runnable, and non-runnable.

The created state

When a new thread object is created using the new operator, the thread is in the created state.

A thread in the created state is just an empty thread object; no system resources have been allocated to it yet.

The runnable state

Calling the thread's start() method allocates the necessary system resources to it, schedules it to run, and arranges for the thread body's run() method to be invoked, making the thread runnable.

The state is called runnable rather than running because the thread may not actually be running at any given moment.

The non-runnable state

A running thread moves to the non-runnable state when one of the following events occurs:

The sleep() method is called;

The thread calls the wait() method to wait for a particular condition to be met;

Thread input/output blocking;

A thread returns to the runnable state when:

A sleeping thread's specified sleep time has elapsed;

A thread waiting on a condition has been notified of a change in that condition by another object calling notify() or notifyAll();

A thread that was blocked on I/O has had the I/O operation complete.
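The wait()/notify() transition described above can be sketched as follows; this is a minimal illustrative example, not production code:

```java
// Minimal wait/notify sketch of the non-runnable state: the waiter thread
// calls wait() and suspends until another thread changes the condition and
// calls notify(). Names are illustrative.
public class WaitNotifyDemo {
    private final Object monitor = new Object();
    private boolean ready = false;

    public boolean runDemo() throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (monitor) {
                while (!ready) {          // guard against spurious wakeups
                    try {
                        monitor.wait();   // thread leaves the runnable state
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
            }
        });
        waiter.start();
        Thread.sleep(100);                // give the waiter time to block
        synchronized (monitor) {
            ready = true;
            monitor.notify();             // wake the waiting thread
        }
        waiter.join(5000);
        return !waiter.isAlive();         // true: waiter resumed and finished
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(new WaitNotifyDemo().runDemo());
    }
}
```

Note the while loop around wait(): the condition must be re-checked after waking, because wakeups can be spurious.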

4. Thread priority

Thread priorities and how to set them

Thread priorities exist to help schedule threads in a multithreaded environment: higher-priority threads are executed first. Setting a thread's priority follows these principles:

When a thread is created, the child inherits its parent’s priority;

After a thread is created, the priority can be changed by calling setPriority();

The priority of a thread is a positive integer between 1 and 10.
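A minimal sketch of these rules (the class name is made up): a newly created thread inherits its creator's priority, and setPriority() moves it within the 1..10 range:

```java
// Priority sketch: a child thread inherits its creator's priority, and
// setPriority() accepts values from Thread.MIN_PRIORITY (1) up to
// Thread.MAX_PRIORITY (10); Thread.NORM_PRIORITY is 5.
public class PriorityDemo {
    public static int[] runDemo() {
        Thread child = new Thread(() -> { });
        int inherited = child.getPriority();     // same as creating thread's
        child.setPriority(Thread.MAX_PRIORITY);  // explicitly raise to 10
        return new int[] { inherited, child.getPriority() };
    }

    public static void main(String[] args) {
        int[] p = runDemo();
        System.out.println("inherited=" + p[0] + " after setPriority=" + p[1]);
    }
}
```

Keep in mind that priority is only a hint to the scheduler; it does not guarantee execution order.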

Thread scheduling policy

The thread scheduler selects the highest-priority runnable thread to run. That thread keeps the CPU until one of the following occurs:

The thread body calls yield(), giving up the CPU;

The thread body calls sleep(), putting the thread to sleep;

The thread is blocked due to I/O operations.

Another thread of higher priority appears;

In systems that support time slicing, the thread runs out of time slices.

5. Creating threads directly

Creating an individual thread is relatively simple; there are essentially two ways: extend the Thread class, or implement the Runnable interface. Both are common in demos, but newcomers should pay attention to the following:

Whether you extend Thread or implement Runnable, the business logic goes in the run() method, and the thread is started by calling start();

Starting a new thread does not affect the main thread's execution order and does not block the main thread;

There is no guarantee about the relative execution order of code in the new thread and the main thread;

At the microscopic level, only one thread is executing on a given core at any instant; the purpose of multithreading is to keep the CPU busy;

Reading Thread's source shows that the Thread class itself implements the Runnable interface, so the two approaches are essentially one;

PS: this code structure is worth borrowing in your own work, offering multiple entry points to upper-level callers while maintaining a single core implementation as the service provider.
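The two creation styles side by side, as a minimal sketch (names are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Both creation styles: extending Thread, and passing a Runnable. Business
// logic lives in run(); start() is what actually launches the thread
// (calling run() directly would execute it on the current thread).
public class CreateThreadDemo {
    static final AtomicInteger counter = new AtomicInteger();

    // Style 1: subclass Thread and override run()
    static class MyThread extends Thread {
        @Override
        public void run() {
            counter.incrementAndGet();
        }
    }

    public static int runDemo() throws InterruptedException {
        counter.set(0);
        Thread t1 = new MyThread();
        // Style 2: pass a Runnable (here a lambda) to the Thread constructor
        Thread t2 = new Thread(() -> counter.incrementAndGet());
        t1.start();
        t2.start();
        t1.join(); // wait for both threads to finish
        t2.join();
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```

The Runnable form is generally preferred: it separates the task from the thread and leaves the class free to extend something else.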

Why use thread pools

With the above you can already write a multithreaded program, so why introduce thread pools? Mainly because creating threads one at a time has the following problems:

The working cycle of a thread: creation takes time T1, execution takes T2, and destruction takes T3. Usually T1 + T3 exceeds T2, so creating threads frequently wastes a great deal of overhead.

Creating a thread whenever a task arrives is inefficient; obtaining an available thread directly from a pool is faster. The thread pool saves time and improves efficiency by removing the create-then-execute-then-destroy cycle.

Thread pools can be used to manage and control threads, because threads are scarce resources. If created without limit, they will not only consume system resources, but also reduce system stability. Thread pools can be used for uniform allocation, tuning, and monitoring.

Thread pools provide queues to buffer tasks waiting to be executed.

Summing up these reasons, the conclusion is: in day-to-day work, multithreaded programs should create and manage threads through a thread pool whenever possible.

From the caller's perspective there are two ways to create a thread pool: construct a native ThreadPoolExecutor directly, or use the factory methods provided by the java.util.concurrent package (Executors). The latter is simpler, being a convenience wrapper over the native thread pool, but the underlying mechanics are identical, so it is important to understand how the native thread pool works.

ThreadPoolExecutor

To create a thread pool using ThreadPoolExecutor, the API looks like this:

public ThreadPoolExecutor(int corePoolSize,
                          int maximumPoolSize,
                          long keepAliveTime,
                          TimeUnit unit,
                          BlockingQueue&lt;Runnable&gt; workQueue,
                          ThreadFactory threadFactory,
                          RejectedExecutionHandler handler);

First, the meaning of each parameter (if some feel vague at first, a general impression is enough for now).

corePoolSize

Size of the core pool.

After a thread pool is created, it contains no threads by default. It waits for tasks to arrive and creates threads to execute them, unless prestartAllCoreThreads() or prestartCoreThread() is called; as the names suggest, these pre-create corePoolSize threads, or one thread, before any task arrives. Once the number of threads in the pool reaches corePoolSize, newly submitted tasks are placed in the work queue instead.

maximumPoolSize

The maximum number of threads the pool can ever create; this is also an important parameter.

keepAliveTime

Indicates how long an idle thread may survive before terminating. By default, keepAliveTime takes effect only while the pool holds more than corePoolSize threads: a thread idle for keepAliveTime is terminated, until the pool is back down to corePoolSize threads.

However, if allowCoreThreadTimeOut(boolean) has been called, keepAliveTime also applies when the pool holds no more than corePoolSize threads, until the pool shrinks to zero threads.

unit

The time unit of the keepAliveTime parameter.

workQueue

A blocking queue that stores tasks waiting to be executed. This choice also matters and can significantly affect how the pool behaves. The usual options are ArrayBlockingQueue, LinkedBlockingQueue, and SynchronousQueue.

threadFactory

Thread factory, which is used to create threads.

handler

The policy for rejecting tasks. The options are:

ThreadPoolExecutor.AbortPolicy: discard the task and throw RejectedExecutionException (the default);

ThreadPoolExecutor.DiscardPolicy: also discard the task, but without throwing an exception;

ThreadPoolExecutor.DiscardOldestPolicy: discard the task at the head of the queue, then retry submitting the current task (repeating if necessary);

ThreadPoolExecutor.CallerRunsPolicy: run the task on the calling thread.
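A small sketch of the default AbortPolicy in action (the sizes here are arbitrary illustration values): a pool with one thread and a one-slot queue is saturated, so a third submission is rejected with RejectedExecutionException:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// AbortPolicy demo: core = max = 1 thread, queue capacity 1. The first task
// occupies the only thread, the second fills the queue, the third has
// nowhere to go and is rejected.
public class RejectionDemo {
    public static boolean runDemo() throws InterruptedException {
        CountDownLatch release = new CountDownLatch(1);
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.AbortPolicy()); // the default handler
        boolean rejected = false;
        try {
            pool.execute(() -> {
                try {
                    release.await();      // hold the only worker thread busy
                } catch (InterruptedException ignored) { }
            });
            pool.execute(() -> { });      // sits in the one-slot queue
            pool.execute(() -> { });      // no thread, no queue slot: rejected
        } catch (RejectedExecutionException e) {
            rejected = true;
        } finally {
            release.countDown();          // let the first task finish
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.SECONDS);
        }
        return rejected;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```

Swapping in CallerRunsPolicy instead would make the third execute() call run the task on the submitting thread rather than throw.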

How do these parameters work together? See below:


How the thread pool parameters cooperate

How the parameters cooperate can be summarized in the following steps:

Tasks are submitted to the core pool first;

When the core pool is full, tasks are submitted to the work queue, where they wait for a pool thread to become free;

When the queue is also full and no core thread is free, extra threads are created up to maxPool; if maxPool is full as well, the task rejection policy is executed.
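Putting the seven parameters together, here is a sketch of a fully spelled-out constructor call (the sizes are arbitrary illustration values, not recommendations):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// All seven ThreadPoolExecutor constructor parameters spelled out.
public class PoolConfigDemo {
    public static ThreadPoolExecutor newPool() {
        return new ThreadPoolExecutor(
                2,                                  // corePoolSize
                4,                                  // maximumPoolSize
                60L,                                // keepAliveTime
                TimeUnit.SECONDS,                   // unit
                new ArrayBlockingQueue<>(8),        // workQueue (bounded)
                Executors.defaultThreadFactory(),   // threadFactory
                new ThreadPoolExecutor.CallerRunsPolicy()); // handler
    }

    public static void main(String[] args) throws InterruptedException {
        ThreadPoolExecutor pool = newPool();
        for (int i = 0; i < 6; i++) {
            // first 2 tasks start core threads, the rest wait in the queue
            pool.execute(() -> { });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(pool.getCorePoolSize() + "/" + pool.getMaximumPoolSize());
    }
}
```

With this configuration, tasks 1-2 start core threads, tasks 3-10 would queue, tasks 11-12 would start non-core threads, and a 13th concurrent task would trigger the handler.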


This is the core of how a native thread pool is created. Beyond the native pool, the java.util.concurrent package also provides simpler factory methods. As mentioned above, they wrap the native thread pool so that developers can create the pool they need quickly and easily.

6. Executors

newSingleThreadExecutor

Creates a pool of threads in which only one thread always exists. If a thread in the thread pool exits due to an exception, a new thread will replace it. This thread pool ensures that all tasks are executed in the order they were committed.

newFixedThreadPool

Create a thread pool of fixed size. Each time a task is submitted, a thread is created until it reaches the maximum size of the thread pool. The size of the thread pool remains constant once it reaches its maximum, and if a thread terminates due to an exception, the pool is replenished with a new thread.

newCachedThreadPool

This pool adjusts its number of threads to the load, so the thread count is not fixed; idle threads are reused first. It is not recommended for everyday development because, in extreme cases, newCachedThreadPool can create so many threads that it exhausts CPU and memory.

newScheduledThreadPool

This pool runs tasks periodically with a fixed number of threads, using scheduleAtFixedRate or scheduleWithFixedDelay to specify the period.

PS: when writing scheduled tasks (if not using the Quartz framework), this pool is usually the best choice, because it guarantees that live threads are always available.
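A minimal usage sketch of one of these factories, newFixedThreadPool (the task here is trivial by design):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// newFixedThreadPool usage: two worker threads drain four submitted tasks.
public class ExecutorsDemo {
    public static int runDemo() throws InterruptedException {
        ExecutorService fixed = Executors.newFixedThreadPool(2);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 4; i++) {
            fixed.execute(done::incrementAndGet); // tasks queue behind 2 threads
        }
        fixed.shutdown();                          // stop accepting new tasks
        fixed.awaitTermination(5, TimeUnit.SECONDS);
        return done.get();                         // all 4 tasks completed
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());
    }
}
```

The other factories are used the same way; only the pool's sizing and queuing behavior differ.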

ThreadPoolExecutor is recommended

In other words, prefer constructing ThreadPoolExecutor directly over using the Executors factory methods.

The main reason: the Executors factories do not ask for the core parameters but fill in defaults, so we tend to ignore what those parameters mean; if the business load is demanding, there is a risk of resource exhaustion. In addition, constructing ThreadPoolExecutor directly gives a clearer understanding of how thread pools run, which pays off both in interviews and in technical growth.
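One concrete way to see the hidden default (a small sketch): newFixedThreadPool is backed by an unbounded LinkedBlockingQueue, which is exactly the resource-exhaustion risk mentioned above:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;

// newFixedThreadPool returns a ThreadPoolExecutor whose work queue is an
// unbounded LinkedBlockingQueue, so a burst of submissions can queue
// without limit and eventually exhaust memory.
public class DefaultQueueDemo {
    public static int queueCapacity() {
        ExecutorService fixed = Executors.newFixedThreadPool(2);
        ThreadPoolExecutor pool = (ThreadPoolExecutor) fixed;
        int capacity = pool.getQueue().remainingCapacity(); // Integer.MAX_VALUE
        pool.shutdown();
        return capacity;
    }

    public static void main(String[] args) {
        System.out.println(queueCapacity());
    }
}
```

Spelling out the ThreadPoolExecutor constructor with a bounded queue makes this choice, and its failure mode, explicit.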

7. Memory visibility

Visibility means that when one thread changes a variable, other threads learn of the change immediately. There are several ways to guarantee visibility:

volatile

A variable declared volatile is written with a lock-prefixed instruction at the assembly level, which acts as a memory barrier and enforces the ordering of memory operations. A write to a volatile variable is flushed to main memory.

Because processors implement a cache-coherence protocol, the write to main memory invalidates other processors' cached copies; each thread's working copy of the variable becomes stale and must be re-read from main memory.
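A minimal visibility sketch (names are illustrative): the worker spins until the volatile flag is flipped by the main thread; with a plain boolean, the worker might never observe the write and loop forever:

```java
// Visibility via volatile: the worker's loop condition re-reads the flag
// from main memory on every iteration, so the main thread's write is seen.
public class VolatileDemo {
    private volatile boolean running = true;

    public boolean runDemo() throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait until another thread publishes running = false
            }
        });
        worker.start();
        Thread.sleep(100);
        running = false;       // volatile write: flushed to main memory,
                               // other caches of the flag are invalidated
        worker.join(5000);
        return !worker.isAlive(); // true: the worker saw the write and exited
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(new VolatileDemo().runDemo());
    }
}
```

Note that volatile guarantees visibility and ordering only; it does not make compound operations like count++ atomic.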

This is the end of the article
