This article outlines the main knowledge points of Java concurrent programming.

1. Three elements of concurrent programming

  • Atomicity: just as an atom was once thought to be indivisible, atomicity in Java means that one or more operations either all execute successfully or all fail.
  • Ordering: the program executes in the order the code is written (note that the processor may reorder instructions).
  • Visibility: when multiple threads access the same variable and one thread modifies it, the other threads can immediately see the latest value.

2. Five states of threads

  • New: the thread has just been created with the new operator.
  • Ready: start() has been called. A thread in the ready state does not necessarily execute the run method immediately; it must wait for the CPU to schedule it.
  • Running: the CPU has scheduled the thread and the run method is executing.
  • Blocked: thread execution is suspended for one of several reasons, such as calling sleep or trying to acquire a lock.
  • Dead: the run method has finished, or an uncaught exception was thrown during execution.
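Two of these states can be observed directly via java.lang.Thread.State; note that Java's own enum actually uses a six-state model in which RUNNABLE covers both ready and running. A minimal sketch (class name StateDemo is ours):

```java
// A thread is NEW before start() and TERMINATED after it finishes;
// join() waits for that termination.
public class StateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {});
        System.out.println(t.getState()); // NEW
        t.start();
        t.join();
        System.out.println(t.getState()); // TERMINATED
    }
}
```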

3. Pessimistic and optimistic locks

  • Pessimistic locking: every operation takes a lock, which causes threads to block while they wait for it.
  • Optimistic locking: an operation is performed without locking, on the assumption that there is no conflict. If the operation fails because of a conflict, it is retried until it succeeds, so threads are never blocked.
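The optimistic pattern can be sketched with AtomicInteger (the class OptimisticCounter is an illustrative name of ours): read the current value, compute the new one, and compareAndSet; on conflict, retry instead of blocking.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    public int increment() {
        int current;
        do {
            current = value.get();            // read the current value
        } while (!value.compareAndSet(current, current + 1)); // retry on conflict
        return current + 1;
    }

    public int get() { return value.get(); }

    public static void main(String[] args) {
        OptimisticCounter c = new OptimisticCounter();
        for (int i = 0; i < 5; i++) c.increment();
        System.out.println(c.get()); // 5
    }
}
```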

4. Collaboration between threads

4.1 wait/notify/notifyAll

This group of methods belongs to the Object class. It is important to note that all three methods must be called while holding the object's monitor, i.e. within a synchronized block or method; otherwise an IllegalMonitorStateException is thrown.

  • wait blocks the current thread until notify or notifyAll wakes it up

    wait() must be woken by notify or notifyAll, while wait(long timeout) wakes up automatically if neither notify nor notifyAll is called within the specified time. wait(long timeout, int nanos) is defined in the JDK in terms of wait(long timeout):

```java
public final void wait(long timeout, int nanos) throws InterruptedException {
    if (timeout < 0) {
        throw new IllegalArgumentException("timeout value is negative");
    }
    if (nanos < 0 || nanos > 999999) {
        throw new IllegalArgumentException(
                "nanosecond timeout value out of range");
    }
    if (nanos > 0) {
        timeout++;
    }
    wait(timeout);
}
```

  • notify wakes up a single thread that is waiting on the object's monitor
  • notifyAll wakes up all threads waiting on the object's monitor
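As a hypothetical illustration of the pattern, a one-slot mailbox (class Mailbox is our own name): both methods hold the monitor before calling wait() or notifyAll(), and each wait sits in a loop to guard against spurious wakeups.

```java
public class Mailbox {
    private Integer slot = null;

    public synchronized void put(int v) throws InterruptedException {
        while (slot != null) wait();   // wait until the slot is empty
        slot = v;
        notifyAll();                   // wake up any waiting consumer
    }

    public synchronized int take() throws InterruptedException {
        while (slot == null) wait();   // wait until a value arrives
        int v = slot;
        slot = null;
        notifyAll();                   // wake up any waiting producer
        return v;
    }

    public static void main(String[] args) throws Exception {
        Mailbox box = new Mailbox();
        Thread producer = new Thread(() -> {
            try { box.put(42); } catch (InterruptedException ignored) {}
        });
        producer.start();
        System.out.println(box.take()); // 42
        producer.join();
    }
}
```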

4.2 sleep/yield/join

This set of methods belongs to the Thread class.

  • sleep suspends the current thread for a specified time; it gives up the CPU but does not release any locks it holds

  • yield pauses the current thread, i.e. gives up its current use of the CPU, to let other threads execute; when it runs again is unspecified. It moves the current thread from the running state back to the ready state. This method is rarely used in production, as the official Javadoc explains:

```java
/**
 * A hint to the scheduler that the current thread is willing to yield
 * its current use of a processor. The scheduler is free to ignore this
 * hint.
 *
 * <p> Yield is a heuristic attempt to improve relative progression
 * between threads that would otherwise over-utilise a CPU. Its use
 * should be combined with detailed profiling and benchmarking to
 * ensure that it actually has the desired effect.
 *
 * <p> It is rarely appropriate to use this method. It may be useful
 * for debugging or testing purposes, where it may help to reproduce
 * bugs due to race conditions. It may also be useful when designing
 * concurrency control constructs such as the ones in the
 * {@link java.util.concurrent.locks} package.
 */
```
  • join is used when one thread (typically the parent) needs to wait for another thread to finish executing before continuing
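A minimal join sketch (class name JoinDemo is ours): the main thread waits for the worker to finish before reading its result, so the read is guaranteed to see the final value.

```java
public class JoinDemo {
    static int result = 0;

    public static int compute() throws InterruptedException {
        Thread worker = new Thread(() -> {
            int sum = 0;
            for (int i = 1; i <= 100; i++) sum += i;
            result = sum;
        });
        worker.start();
        worker.join();   // block until the worker thread terminates
        return result;   // safe: join() establishes a happens-before edge
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(compute()); // 5050
    }
}
```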

5. The volatile keyword

5.1 Definition

The Java programming language allows threads to access shared variables. To ensure that a shared variable is updated accurately and consistently, threads would ordinarily have to guard it with an exclusive lock. The Java language provides volatile, which in some cases is more convenient than locking: if a field is declared volatile, the Java memory model ensures that all threads see a consistent value for that variable.

volatile is a lightweight form of synchronized: it does not cause thread context switching and scheduling, so its execution overhead is lower.

5.2 Principle

1. At the assembly level, writes to a variable declared volatile gain a lock-prefixed instruction.
2. The lock prefix acts as a memory barrier: instruction reordering cannot place subsequent instructions before the barrier, nor earlier instructions after it. In other words, by the time the memory barrier instruction executes, all operations before it have completed.
3. It forces the modification in the CPU cache to be written back to main memory immediately.
4. If the operation is a write, it invalidates the corresponding data cached in other CPUs.

5.3 Effects

  • Memory visibility: in multi-threaded operation, when one thread changes the value of the variable, other threads can immediately see the changed value.
  • Prevention of reordering: the program executes in the order of the code (the processor may otherwise reorder instructions to run more efficiently).

volatile does not guarantee that an operation is atomic (for example, the following code will usually not print 100000):

```java
public class TestVolatile {
    public volatile int inc = 0;

    public void increase() {
        inc = inc + 1;   // not atomic: read, add, write back
    }

    public static void main(String[] args) {
        final TestVolatile test = new TestVolatile();
        for (int i = 0; i < 100; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }
        // make sure all the worker threads have finished executing
        while (Thread.activeCount() > 2) {
            Thread.yield();
        }
        System.out.println(test.inc);
    }
}
```
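One common fix, offered here as a suggestion rather than part of the original, is to replace the volatile int with an AtomicInteger, whose incrementAndGet is atomic; the count then reliably reaches 100000 (class name AtomicIncrement is ours):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicIncrement {
    private static final AtomicInteger inc = new AtomicInteger(0);

    public static int run() throws InterruptedException {
        inc.set(0);
        Thread[] threads = new Thread[100];
        for (int i = 0; i < 100; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) inc.incrementAndGet(); // atomic
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join(); // wait for all workers to finish
        return inc.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // 100000
    }
}
```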

6. Synchronized

Ensures that threads access the synchronized code mutually exclusively.

6.1 Definition

synchronized is a lock implemented by the JVM, in which lock acquisition and release are performed by the monitorenter and monitorexit instructions respectively. The lock is implemented in three forms: biased lock, lightweight lock, and heavyweight lock. Biased locking is enabled by default since Java 1.6. A lightweight lock can inflate into a heavyweight lock under multi-threaded contention, and the data about the lock is stored in the object header.

6.2 Principle

The synchronized keyword adds monitorenter and monitorexit instructions to the bytecode (you can inspect them by running javap -verbose on the compiled class file). The JVM specification describes monitorenter and monitorexit as follows:

  • monitorenter

    Each object is associated with a monitor. A monitor is locked if and only if it has an owner. The thread that executes monitorenter attempts to gain ownership of the monitor associated with objectref, as follows:

    • If the entry count of the monitor associated with objectref is zero, the thread enters the monitor and sets its entry count to one. The thread is then the owner of the monitor.
    • If the thread already owns the monitor associated with objectref, it reenters the monitor, incrementing its entry count.
    • If another thread already owns the monitor associated with objectref, the thread blocks until the monitor's entry count is zero, then tries again to gain ownership.

  • monitorexit

    The thread that executes monitorexit must be the owner of the monitor associated with the instance referenced by objectref.

    The thread decrements the entry count of the monitor associated with objectref. If as a result the value of the entry count is zero, the thread exits the monitor and is no longer its owner. Other threads that are blocking to enter the monitor are allowed to attempt to do so.

For synchronized methods, the invocation instruction checks whether the method's ACC_SYNCHRONIZED access flag is set. If it is, the executing thread first acquires the monitor, executes the method body only after acquiring it successfully, and releases the monitor when the method completes. While the method is executing, no other thread can acquire the same monitor object. There is essentially no difference from the block form, except that method synchronization is done implicitly, without explicit monitorenter/monitorexit bytecodes.

6.3 Usage

  • Modifying an ordinary method: the synchronization object is the instance (this)
  • Modifying a static method: the synchronization object is the Class object itself
  • Modifying a code block: the synchronization object can be chosen explicitly
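A compact sketch of the three forms (the class SyncForms is our illustrative name); the comments note which object each form locks on:

```java
public class SyncForms {
    private final Object lockObj = new Object();
    private int count = 0;

    public synchronized void instanceMethod() {      // locks on `this`
        count++;
    }

    public static synchronized void staticMethod() { // locks on SyncForms.class
    }

    public void block() {
        synchronized (lockObj) {                     // locks on the chosen object
            count++;
        }
    }

    public int getCount() { return count; }

    public static void main(String[] args) {
        SyncForms s = new SyncForms();
        s.instanceMethod();
        s.block();
        System.out.println(s.getCount()); // 2
    }
}
```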

6.4 Disadvantages

Threads that fail to acquire the lock enter the Blocked state and return to the Running state only after winning the contended lock. This process involves switching between the user mode and kernel mode of the operating system, which is relatively expensive. Java 1.6 optimized synchronized by adding the escalation from biased lock to lightweight lock to heavyweight lock, but once the lock finally inflates into a heavyweight lock, performance is still low.

7. CAS

AtomicBoolean, AtomicInteger, AtomicLong, and lock-related classes are implemented using CAS, and to some extent perform better than synchronized.

7.1 What is CAS

CAS stands for Compare And Swap and is a technique used to implement concurrent applications. The operation involves three operands: the memory location (V), the expected old value (A), and the new value (B). If the value at the memory location matches the expected old value, the processor atomically updates the location to the new value; otherwise, the processor does nothing.
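The three-operand semantics map directly onto AtomicInteger.compareAndSet (the demo class name is ours): the call succeeds only when the current value matches the expected old value.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static String demo() {
        AtomicInteger v = new AtomicInteger(10);
        boolean ok1 = v.compareAndSet(10, 20); // expected value matches -> true
        boolean ok2 = v.compareAndSet(10, 30); // value is now 20 -> false
        return ok1 + " " + ok2 + " " + v.get();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // true false 20
    }
}
```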

7.2 Why does CAS exist

synchronized is a pessimistic lock and can cause performance problems: under multi-threaded contention, acquiring and releasing locks causes many context switches and scheduling delays. Moreover, one thread holding the lock suspends all other threads that need it.

7.3 Implementation Principles

Java cannot access the underlying operating system directly; it does so through native methods (JNI). Under the hood, CAS performs atomic operations through the Unsafe class.

7.4 Existing Problems

  • ABA problem

    What is the ABA problem? Suppose an int variable N is 1, and three threads want to change it: thread A wants to set N to 2, thread B also wants to set N to 2, and thread C wants to set N to 1. Threads A and B both read N's value, 1, at the same time. Thread A gets CPU time first and sets N to 2, while thread B is blocked for some reason. After thread A finishes, thread C reads the current value of N, 2, and successfully sets N back to 1. Finally thread B resumes: it compares the value 1 it read before blocking with the current value of N, finds them equal (both 1), and successfully sets N to 2. In this process the value thread B read was stale: N looked unchanged, but it had actually gone from 1 to 2 and back to 1. This is a typical ABA problem.

    How to solve it: add a version number to the variable, so that a CAS compares not only the variable's current value but also its current version. AtomicStampedReference in Java solves this problem.
  • Long spin time and high overhead: under high concurrency, if many threads repeatedly try to update a variable but keep failing, the continuous spinning puts great pressure on the CPU.

  • CAS can only guarantee an atomic operation on a single shared variable.
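The version-stamp fix mentioned above, AtomicStampedReference, can be sketched like this (the class name AbaDemo and the concrete values are illustrative): the reference carries a stamp, so a 1 -> 2 -> 1 change is detected because the stamp has advanced.

```java
import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static String demo() {
        AtomicStampedReference<Integer> ref = new AtomicStampedReference<>(1, 0);
        int staleStamp = ref.getStamp(); // "thread B" records stamp 0

        // meanwhile: 1 -> 2 -> 1, bumping the stamp on each change
        ref.compareAndSet(1, 2, ref.getStamp(), ref.getStamp() + 1);
        ref.compareAndSet(2, 1, ref.getStamp(), ref.getStamp() + 1);

        // a plain value comparison would succeed; the stale stamp makes it fail
        boolean ok = ref.compareAndSet(1, 2, staleStamp, staleStamp + 1);
        return ok + " " + ref.getReference();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // false 1
    }
}
```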

8. AbstractQueuedSynchronizer(AQS)

AQS, the AbstractQueuedSynchronizer, is a state-based, linked-list queue manager in which the state is modified via CAS. It is the most important building block of the java.util.concurrent package and the key to understanding the classes in it: ReentrantLock, CountDownLatch, and Semaphore are all based on AQS. To learn how it is implemented and its underlying principles, see this article: www.cnblogs.com/waterystone…

9. Future

In concurrent programming we usually use Runnable to perform asynchronous tasks; however, that way we cannot get the task's return value, whereas with a Future we can. Using a Future is as simple as replacing the Runnable with a FutureTask. It is relatively simple to use and will not be covered in depth here.
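A minimal FutureTask sketch (class name FutureDemo is ours): wrap a Callable, run it on a thread, and retrieve the value with get(), which blocks until the task completes.

```java
import java.util.concurrent.FutureTask;

public class FutureDemo {
    public static int sum() throws Exception {
        FutureTask<Integer> task = new FutureTask<>(() -> {
            int s = 0;
            for (int i = 1; i <= 10; i++) s += i;
            return s;            // the Callable's return value
        });
        new Thread(task).start();
        return task.get();       // blocks until the result is available
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sum()); // 55
    }
}
```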

10. Thread pools

Creating a thread each time one is needed is simple but problematic: if there are many concurrent tasks, and each thread executes a short task and then terminates, frequently creating and destroying threads greatly reduces system efficiency because of the time that creation and destruction take. Thread pools reuse threads and thereby greatly reduce this performance cost.

Java's thread pool implementation class is ThreadPoolExecutor, whose constructor documents each parameter very clearly. Here is a brief explanation of a few key ones.

  • corePoolSize: the number of core threads, which remain in the thread pool at all times and are not destroyed even when idle. They are destroyed only if allowCoreThreadTimeOut is set to true.
  • maximumPoolSize: the maximum number of threads allowed in the thread pool.
  • keepAliveTime: the maximum time a non-core thread may stay idle before it is destroyed.
  • workQueue: the queue that holds waiting tasks.
    • SynchronousQueue: this queue has no capacity; a newly submitted task is handed directly to a thread, and if all threads in the pool are busy, a new thread is created to execute it. When using this queue, maximumPoolSize is normally set to an effectively unbounded value such as Integer.MAX_VALUE.
    • LinkedBlockingQueue: an unbounded queue. If the number of threads in the pool is smaller than corePoolSize, a new thread is created to execute the task; once the thread count equals corePoolSize, new tasks are queued. Because the queue size is unlimited, maximumPoolSize never takes effect with this queue (the thread count never exceeds corePoolSize), so it is commonly set equal to corePoolSize.
    • ArrayBlockingQueue: a bounded queue whose maximum size can be set. Once the thread count reaches corePoolSize, new tasks are placed in this queue; if the queue is full, new threads are created up to maximumPoolSize; and if the queue is full and the thread count has reached maximumPoolSize, further tasks are handed to the RejectedExecutionHandler.
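As an illustrative sketch (all parameter values and the class name PoolDemo are arbitrary choices of ours), wiring these parameters into a ThreadPoolExecutor: 2 core threads, up to 4 total, non-core threads idle out after 60 seconds, and a bounded queue of 10 tasks.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static long runTasks(int n) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                              // corePoolSize
                4,                              // maximumPoolSize
                60, TimeUnit.SECONDS,           // keepAliveTime for non-core threads
                new ArrayBlockingQueue<>(10));  // bounded work queue
        for (int i = 0; i < n; i++) {
            pool.execute(() -> { /* short task */ });
        }
        pool.shutdown();                        // accept no new tasks, finish queued ones
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return pool.getCompletedTaskCount();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(5)); // 5
    }
}
```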

Finally, this article gave a brief explanation of the main knowledge points of Java concurrent programming. Each of these topics could fill an article of its own, and length constraints prevented covering every point in detail, but hopefully this overview has brought you a closer understanding of Java concurrency. If you find any gaps or errors, please point them out in the comments. Thank you.

Related links

  • Java interview: the knowledge you should prepare mp.weixin.qq.com/s/0JVy3W9uX…
  • Java concurrent programming: the volatile keyword explained www.importnew.com/18126.html
  • In-depth analysis of the volatile principle blog.csdn.net/lc0817/arti…
  • Java concurrent programming: synchronized and its implementation principle www.cnblogs.com/paddix/p/53…
  • JVM source code analysis of synchronized www.jianshu.com/p/c5058b6fe…
  • Java concurrent programming: collaboration between threads www.cnblogs.com/paddix/p/53…
  • Comic: what is the CAS mechanism? mp.weixin.qq.com/s/f9PYMnpAg…
  • The Unsafe class in Java www.cnblogs.com/mickole/art…
  • In-depth analysis of ReentrantLock blog.csdn.net/jiangjiajia…