This post first appeared on my personal blog: Tail-tail-fall

This article summarizes the multithreading topics I collected while working through dozens of Java back-end campus-recruitment interviews at first-tier Internet companies. It is a little long, but reading it before an interview should help you crack the tough nut of multithreading with ease.

What is a process and what is a thread? Why use multithreaded programming?

A process is an executing application; a thread is a path of execution within a process, and a process can contain multiple threads, which is why threads are also called lightweight processes. More formally, a process is a running activity of a program with certain independent functions over a data set, and it is the independent unit of resource allocation and scheduling in the operating system; a thread is an entity within a process, the basic unit of CPU scheduling and dispatch, and a smaller independently runnable unit than a process. Because threads are a finer-grained division than processes, multithreaded programs achieve higher concurrency. Processes have separate memory spaces during execution, whereas threads within the same process share memory. Multithreaded programming generally yields better performance and a better user experience, but a multithreaded program is less friendly to other programs because it consumes more CPU resources.

Communication between processes

  • Pipe: A pipe is a half-duplex communication mechanism: data flows in only one direction, and it can only be used between related processes, where kinship usually means a parent-child relationship.
  • Named pipe (FIFO): also a half-duplex communication mechanism, but it allows communication between unrelated processes.
  • Semaphore: A semaphore is a counter that can be used to control access to a shared resource by multiple processes. It is often used as a locking mechanism to prevent other processes from accessing a shared resource while one process is using it, so it mainly serves as a means of synchronization between processes and between threads within the same process.
  • Message queue: A message queue is a linked list of messages stored in the kernel and identified by a message queue identifier. Message queues overcome the limitations of signals (which carry little information), pipes (which carry only plain byte streams), and bounded buffer sizes.
  • Signal: A signal is a relatively sophisticated communication mechanism used to notify the receiving process that some event has occurred.
  • Shared memory: Shared memory maps a segment of memory that other processes can access; it is created by one process but can be accessed by many. Shared memory is the fastest IPC method and was designed specifically to address the inefficiency of other inter-process communication methods. It is often used together with other mechanisms, such as semaphores, to achieve synchronization and communication between processes.
  • Socket: A socket is also an inter-process communication mechanism; unlike the others, it can be used between processes on different machines. A minimal sketch follows this list.
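
A minimal sketch of socket-based IPC in Java, run as two separate processes (two source files). The class names and port 9000 are illustrative assumptions, not from the original post:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Process 1: listens on a port and prints whatever a peer process sends.
public class SocketIpcServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9000);      // port 9000 is an arbitrary choice
             Socket client = server.accept();                   // blocks until a client connects
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()))) {
            System.out.println("Received: " + in.readLine());
        }
    }
}

// Process 2 (separate file): connects to the server process and sends one line of text.
public class SocketIpcClient {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 9000);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            out.println("hello from another process");
        }
    }
}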

How threads communicate

  • Lock mechanisms: including mutexes, condition variables, and read/write locks
    • A mutex provides exclusive access, preventing a data structure from being modified concurrently.
    • Read/write locks allow multiple threads to read shared data at the same time, while write operations are mutually exclusive.
    • Condition variables atomically block a thread until a particular condition becomes true. The condition is tested under the protection of a mutex; condition variables are always used together with a mutex.
  • Semaphore mechanism: includes unnamed thread semaphores and named thread semaphores
  • Signal: similar to inter-process signal handling

The purpose of communication between threads is mainly synchronization, so threads do not need the kind of data-exchange mechanism that inter-process communication requires.

Three ways to achieve multithreading

  • Inherit Thread and override the run() method
public class thread1 extends Thread {
    public void run() {
        for (int i = 0; i < 10000; i++) {
            System.out.println("I'm a thread." + this.getId());
        }
    }

    public static void main(String[] args) {
        thread1 th1 = new thread1();
        thread1 th2 = new thread1();
        th1.start();
        th2.start();
    }
}
  • Implement the Runnable interface
public class thread2 implements Runnable {
    public String ThreadName;

    public thread2(String tName) {
        ThreadName = tName;
    }

    public void run() {
        for (int i = 0; i < 10000; i++) {
            System.out.println(ThreadName);
        }
    }

    public static void main(String[] args) {
        thread2 th1 = new thread2("Thread A.");
        thread2 th2 = new thread2("Thread B.");
        // Wrap each Runnable in a Thread object
        Thread myth1 = new Thread(th1);
        Thread myth2 = new Thread(th2);
        // Call the start() method to start the thread and execute the run() method
        myth1.start();
        myth2.start();
    }
}
  • Create threads through Callable and Future
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.FutureTask;
 
public class CallableThreadTest implements Callable<Integer>
{
	@Override
	public Integer call() throws Exception{
		int i = 0;
		for (; i < 100; i++) {
			System.out.println(Thread.currentThread().getName() + " " + i);
		}
		return i;
	}
	
	public static void main(String[] args){
		CallableThreadTest ctt = new CallableThreadTest();
		FutureTask<Integer> ft = new FutureTask<>(ctt);
		for (int i = 0; i < 100; i++) {
			System.out.println(Thread.currentThread().getName() + " loop variable i = " + i);
			if(i==20){
				new Thread(ft,"Thread with return value").start();
			}
		}
		try {
			System.out.println("Return value of child thread: " + ft.get());
		} catch (InterruptedException e) {
			e.printStackTrace();
		} catch (ExecutionException e) {
			e.printStackTrace();
		}
	}
}

A comparison of the three ways to create threads

1. When threads are created by implementing the Runnable or Callable interface, the thread class merely implements an interface and can still extend another class. The downside is that programming is slightly more involved; to access the current thread you must call Thread.currentThread(). When threads are created by extending the Thread class, the code is simpler to write, and you can refer to the current thread directly with this. The downside is that the class already extends Thread and therefore cannot extend any other parent class.
2. The differences between Callable and Runnable: (1) Callable overrides call(), Runnable overrides run(). (2) A Callable task returns a value after execution, while a Runnable task does not. (3) The call() method can throw exceptions; the run() method cannot. (4) Running a Callable task yields a Future object, which represents the result of the asynchronous computation. It provides methods to check whether the computation is complete, to wait for its completion, and to retrieve its result. Through the Future object you can query the task's execution status, cancel the task, and obtain its result.

Thread states

  • New: the thread object has been created but its start() method has not been called.
  • Ready: the thread enters the ready state once start() is called, but it does not become the current thread immediately; it stays ready until it is scheduled. Note that a thread also returns to the ready state when it wakes from sleeping or from being suspended.
  • Running: the thread is selected as the current thread and, having acquired the CPU, executes its run() method.
  • Blocked: unless it runs for only a very short time, a running thread will at some point be switched out by the scheduler into a non-running state; for example, a thread enters the blocked state after calling the sleep() method.
  • Dead: a running thread dies when it ends, either actively or passively. A thread can end because: 1. it finishes executing; 2. an exception or error occurs during execution; 3. its stop() method is called to terminate it (deprecated and not recommended).

Thread control

  • join(): wait. Makes one thread wait until another thread completes before continuing. If thread A calls thread B's join() method within its own execution body, thread A blocks until thread B finishes executing.
  • sleep(): sleep. Pauses the currently executing thread for the specified time; the thread enters the blocked state.
  • yield(): thread yield. Moves the thread from the running state back to the ready state. After a thread calls yield(), the CPU will only pick a thread of the same or higher priority from the ready-state queue to execute.
  • setPriority(): changes the thread's priority. Every thread has a priority during execution, and higher-priority threads get more chances to execute. By default a thread has the same priority as the thread that created it; the main thread has normal priority by default. The priority value ranges from 1 to 10, and three static constants are commonly used: MAX_PRIORITY = 10, MIN_PRIORITY = 1, NORM_PRIORITY = 5.

PS: a higher thread priority only means the thread gets more chances to execute, not that it is guaranteed to run first.

  • setDaemon(true): marks the thread as a background thread, also called a "daemon thread". Daemon threads exist mainly to serve other (foreground) threads; the garbage collection thread in the JVM is an example. Daemon threads die automatically once all foreground threads have died.

The differences between sleep() and yield(): ① sleep() gives other threads a chance to run regardless of their priority, so lower-priority threads also get a chance; yield() only gives threads of the same or higher priority a chance to run. ② After calling sleep(long millis), a thread enters the blocked state for the time specified by the millis parameter; after calling yield(), a thread goes directly to the ready state. ③ sleep() declares that it throws InterruptedException, while yield() declares no exceptions. The sketch below ties several of these control methods together.
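
A small sketch of join(), sleep(), yield(), setPriority(), and setDaemon() working together; the class name and timings are illustrative assumptions:

public class ThreadControlDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                Thread.sleep(500);   // sleep(): pause this thread for 500 ms (blocked state)
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("worker done");
        });

        Thread daemon = new Thread(() -> {
            while (true) {           // loops "forever", but dies when all foreground threads finish
                Thread.yield();      // yield(): voluntarily go back to the ready state
            }
        });
        daemon.setDaemon(true);                  // must be set before start()
        daemon.setPriority(Thread.MIN_PRIORITY); // priority 1: fewer chances to run

        daemon.start();
        worker.start();
        worker.join();               // main blocks here until worker finishes
        System.out.println("main exits; the daemon thread dies with it");
    }
}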

The difference between wait, notify, and notifyAll

wait, notify, and notifyAll are important parts of Java's synchronization mechanism; combined with the synchronized keyword, many elegant synchronization models can be built. These three methods do not belong to the Thread class or the Runnable interface; they are three native methods of Object. To call wait and notify/notifyAll on an object, the calling code must be synchronized on that object, i.e. only inside synchronized(obj){ ... } can you call obj's wait and notify/notifyAll; otherwise the error java.lang.IllegalMonitorStateException: current thread not owner is thrown.

Let me start with two concepts: the lock pool and the wait pool. Lock pool: suppose thread A already holds the lock on an object (note: an object, not a class), and other threads want to call some synchronized method (or block) of that object. Since these threads must first acquire ownership of the object's lock before entering its synchronized methods, and the lock is currently owned by thread A, these threads enter the object's lock pool. Wait pool: if thread A calls the wait() method of an object, thread A releases the object's lock and enters the object's wait pool.

  • If a thread calls an object's wait() method, it enters that object's wait pool; threads in the wait pool do not compete for the object's lock.
  • When a thread calls an object's notify() or notifyAll() method, the awakened threads enter the object's lock pool and compete for the lock. That is, after notify is called, only one thread moves from the wait pool into the lock pool, while notifyAll moves all threads in the wait pool into the lock pool to wait for lock contention.
  • A thread with higher priority has a higher probability of winning the object's lock. A thread that fails to win the lock stays in the lock pool, and only returns to the wait pool if it calls wait() again. The thread that wins the lock runs until its synchronized block finishes, which releases the lock; the threads in the lock pool then compete for it again.

Summary

  • wait: the thread automatically releases the object lock it holds and waits for notify.
  • notify: wakes up one thread that is waiting for the current object's lock and lets it go acquire the lock.
  • notifyAll: wakes up all threads waiting for the lock. The main difference from notify is that notify wakes only one waiting thread, while notifyAll wakes them all. Note that notify is a native method, and the virtual machine decides which thread to wake. After notifyAll, the threads cannot all execute immediately; they merely leave the wait state and then compete for the lock, as the sketch below illustrates.
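
A minimal wait/notify sketch following the lock pool / wait pool description above; the class and field names are illustrative assumptions:

public class WaitNotifyDemo {
    private final Object lock = new Object();
    private boolean ready = false;

    public void consumer() throws InterruptedException {
        synchronized (lock) {        // must own the monitor before calling wait()
            while (!ready) {         // wait in a loop to guard against spurious wakeups
                lock.wait();         // releases the lock and enters the wait pool
            }
            System.out.println("consumer proceeds");
        }
    }

    public void producer() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();        // moves waiting threads from the wait pool to the lock pool
        }                            // the lock is released only when this block exits
    }

    public static void main(String[] args) throws InterruptedException {
        WaitNotifyDemo demo = new WaitNotifyDemo();
        Thread t = new Thread(() -> {
            try { demo.consumer(); } catch (InterruptedException ignored) { }
        });
        t.start();
        Thread.sleep(100);           // give the consumer time to start waiting
        demo.producer();
        t.join();
    }
}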

What is the difference between sleep() and wait()?

sleep() is a static method of the Thread class that pauses the current thread for the specified time and yields execution to other threads, but the thread's monitor state is retained; when the time elapses the thread resumes automatically (returns to the ready state), because calling sleep does not release any object lock. wait() is a method of the Object class. Calling wait() on an object makes the current thread give up that object's lock (suspending execution) and enter the object's wait pool; only after notify() (or notifyAll()) is called on that object does the thread move into the object's lock pool and get ready to compete for the lock.

Lock types

  • Reentrant lock: broadly speaking, a reentrant lock is one that can be taken repeatedly and recursively: after the lock is acquired in outer code, inner code can still take it without deadlocking (provided it is the same object or class). In other words, while executing, the object's other synchronized methods do not need to acquire the lock again. Both ReentrantLock and synchronized are reentrant locks. A simple example: while a thread executes a synchronized method, say method1, and method1 calls another synchronized method, method2, the thread does not have to re-apply for the lock; it can execute method2 directly.
  • Interruptible lock: a lock that can respond to interruption while a thread waits to acquire it. synchronized is not an interruptible lock, whereas Lock is.
  • Fair lock: the lock is granted according to how long threads have waited; the longest-waiting thread acquires it first. An unfair lock gives no guarantee about the order in which waiting threads acquire the lock, which may leave one or more threads never acquiring it. synchronized is an unfair lock and does not guarantee the order in which waiting threads acquire the lock. ReentrantLock and ReentrantReadWriteLock are unfair by default but can be configured as fair locks.
  • Read/write lock: splits access to a resource into a read lock and a write lock. Multiple threads can hold the read lock and read together, while writes are exclusive. ReadWriteLock is the read/write lock interface, and ReentrantReadWriteLock implements it; readLock() returns the read lock and writeLock() returns the write lock. A sketch of these lock types follows this list.
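
A short sketch of reentrancy and read/write locking; the class and method names are illustrative assumptions:

import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockTypesDemo {
    private int value;
    private final ReentrantReadWriteLock rwLock = new ReentrantReadWriteLock();
    // new ReentrantReadWriteLock(true) would create the fair variant

    // Reentrancy: method1 holds this object's lock and can still call method2,
    // which needs the same lock, without re-applying for it.
    public synchronized void method1() {
        method2();
    }

    public synchronized void method2() {
        value++;
    }

    public int read() {
        rwLock.readLock().lock();    // many readers may hold the read lock at once
        try {
            return value;
        } finally {
            rwLock.readLock().unlock();
        }
    }

    public void write(int v) {
        rwLock.writeLock().lock();   // the write lock is exclusive
        try {
            value = v;
        } finally {
            rwLock.writeLock().unlock();
        }
    }
}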

What are optimistic locks and pessimistic locks?

(1) Optimistic locking: very optimistic. Every time it reads the data it assumes others will not modify it, so it does not lock; only when updating does it check whether anyone modified the data in the meantime (using a version number or similar mechanism), and it retries if the update fails because of a conflict. Optimistic locking suits scenarios with few writes, i.e. few conflicts; it saves the overhead of locking and increases the system's overall throughput. Databases provide optimistic locking through mechanisms such as write_condition. In Java, the atomic variable classes in the java.util.concurrent.atomic package are implemented with CAS, one form of optimistic locking.
(2) Pessimistic locking: always assumes the worst case. Every time it reads the data it assumes others will modify it, so it locks on every access, blocking anyone else who wants the data until the lock is released; this is relatively inefficient. Traditional relational databases use many such locks, for example row locks, table locks, read locks, and write locks, all acquired before the operation. The synchronized keyword in Java is another example of this approach.

Implementation of Optimistic Locking (CAS)

Implementing optimistic locking involves two main steps: conflict detection and data update. A typical implementation is Compare and Swap (CAS). CAS is an optimistic locking technique: when multiple threads try to update the same variable with CAS, only one thread succeeds in updating the value and all the others fail. A CAS operation takes three operands: the memory location to read and write (V), the expected old value to compare against (A), and the new value to write (B). If the value at location V matches the expected old value A, the processor atomically updates that location to the new value B; otherwise it does nothing. In either case, it returns the value that was at the location before the CAS instruction (some variants of CAS only report whether the CAS succeeded rather than returning the current value). CAS effectively says: "I think location V should contain value A; if it does, put B there; otherwise, leave it unchanged and just tell me its current value." This is exactly the optimistic-locking principle of conflict detection plus data update.

Optimistic locking is an idea; CAS is one implementation of that idea.
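
A small sketch of the CAS retry loop using AtomicInteger from java.util.concurrent.atomic; the class name is an illustrative assumption:

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    private final AtomicInteger count = new AtomicInteger(0);

    // A CAS-based increment: read the expected old value A, compute the new
    // value B, and write B only if location V still holds A; retry on conflict.
    public void increment() {
        for (;;) {
            int expected = count.get();                   // A: the expected old value
            int update = expected + 1;                    // B: the new value
            if (count.compareAndSet(expected, update)) {  // atomic CAS on V
                return;                                   // success: no other thread intervened
            }
            // failure: another thread updated the value first, so loop and retry
        }
    }
}

In practice AtomicInteger's incrementAndGet() already performs this loop internally; the explicit version above just makes the conflict detection visible.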

The disadvantages of CAS

  1. ABA problem

If memory location V is first read as value A and still holds A when the update is about to be applied, can we conclude that no other thread changed it? Not necessarily: if its value was changed to B in between and then changed back to A, the CAS operation will believe it was never changed. This loophole is called the "ABA" problem of CAS operations. To solve it, Java provides a stamped atomic reference class, AtomicStampedReference, which guarantees the correctness of CAS by versioning the variable's value. Before using CAS, it is therefore important to consider whether the ABA problem affects the concurrency correctness of the program; if the ABA problem must be solved, switching to traditional mutual-exclusion synchronization may be more efficient than atomic classes.
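
A sketch of AtomicStampedReference detecting an A -> B -> A change; the values and stamps are illustrative assumptions:

import java.util.concurrent.atomic.AtomicStampedReference;

public class AbaDemo {
    public static void main(String[] args) {
        // Value "A" with initial version stamp 0
        AtomicStampedReference<String> ref = new AtomicStampedReference<>("A", 0);
        int startStamp = ref.getStamp();

        // Another thread performs A -> B -> A; each CAS bumps the stamp
        ref.compareAndSet("A", "B", startStamp, startStamp + 1);
        ref.compareAndSet("B", "A", startStamp + 1, startStamp + 2);

        // A plain value comparison would still see "A", but the stamped CAS
        // fails because the version has moved from 0 to 2
        boolean swapped = ref.compareAndSet("A", "C", startStamp, startStamp + 1);
        System.out.println(swapped);   // false: the ABA change was detected
    }
}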

  2. Long spin times are expensive

Spin CAS (looping until success) can impose a very large CPU execution cost if it keeps failing for a long time.

  3. Atomicity is guaranteed for only one shared variable.

When operating on a single shared variable, a CAS loop guarantees atomicity; but when the operation spans multiple shared variables, a CAS loop cannot make them atomic, and in that case locks must be used to guarantee atomicity.

Implement a deadlock

What is a deadlock? A deadlock occurs when two processes each wait for the other to finish before proceeding, with the result that both are stuck waiting forever. There are four necessary conditions for deadlock. Mutual exclusion: a resource can be used by only one process at a time. Hold and wait: a process blocked while requesting resources holds on to the resources it has already acquired. No preemption: a resource a process has acquired cannot be forcibly taken away before the process is done with it. Circular wait: several processes form a circular chain in which each waits for a resource held by the next. These four conditions are all necessary: whenever a deadlock occurs they all hold, and no deadlock can occur if any one of them is broken. Consider the following situation: (1) thread A currently holds mutex lock1, and thread B currently holds mutex lock2; (2) thread A tries to acquire lock2; because thread B holds lock2, thread A blocks waiting for B to release it; (3) thread B likewise tries to acquire lock1 and blocks the same way; (4) each side waits for the lock held by the other and neither releases its own, so both block forever, forming a deadlock. Deadlock resolution: (a) abort all deadlocked processes; (b) abort the deadlocked processes one by one until the deadlock disappears; (c) forcibly strip occupied resources from the deadlocked processes one by one until the deadlock disappears; (d) forcibly take enough resources from other processes and give them to a deadlocked process to break the deadlock. The sketch below reproduces the lock1/lock2 scenario.
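
A sketch that implements the deadlock described above; the class name and sleep timings are illustrative assumptions:

public class DeadlockDemo {
    private static final Object lock1 = new Object();
    private static final Object lock2 = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lock1) {          // thread A holds lock1
                sleepQuietly(100);          // give thread B time to take lock2
                synchronized (lock2) {      // A blocks waiting for lock2
                    System.out.println("A got both locks");
                }
            }
        }).start();

        new Thread(() -> {
            synchronized (lock2) {          // thread B holds lock2
                sleepQuietly(100);
                synchronized (lock1) {      // B blocks waiting for lock1
                    System.out.println("B got both locks");
                }
            }
        }).start();
        // Each thread now waits on the lock the other holds: a deadlock
    }

    private static void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}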

How do I ensure that N threads can access N resources without causing deadlocks?

When using multiple threads, a very simple way to avoid deadlock is to specify a global order in which locks are acquired and force threads to acquire them in that order. If all threads acquire and release the locks in the same order, no circular wait can form and there will be no deadlock, as the sketch below shows.
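
Applying lock ordering to the scenario from the DeadlockDemo sketch above removes the deadlock; the class name is an illustrative assumption:

public class LockOrderingDemo {
    private static final Object lock1 = new Object();
    private static final Object lock2 = new Object();

    public static void main(String[] args) {
        Runnable task = () -> {
            synchronized (lock1) {        // always take lock1 first...
                synchronized (lock2) {    // ...then lock2
                    System.out.println(Thread.currentThread().getName() + " got both locks");
                }
            }
        };
        // Both threads acquire the locks in the same global order, so no
        // circular wait can form and no deadlock is possible.
        new Thread(task).start();
        new Thread(task).start();
    }
}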

The volatile keyword

For visibility, ordering, and atomicity problems, the synchronized keyword is the usual solution, but if you know anything about synchronized you know it is a heavyweight operation with a significant impact on system performance, so we generally avoid it when another solution exists. The volatile keyword is the other solution Java provides for visibility and ordering problems. One important and often misunderstood point about atomicity: volatile guarantees the atomicity of a single read or write of a variable (including long and double), but not of a compound operation such as i++, which is essentially a read followed by a write.

  • Reordering prevention

Problem: the compiler and processor may reorder instructions, and in a multithreaded environment this can expose a reference to a not-yet-initialized object, leading to unexpected results. The volatile keyword prevents instruction reordering by providing "memory barriers". To implement volatile's memory semantics, when generating bytecode the compiler inserts memory barriers into the instruction sequence to forbid particular kinds of processor reordering: a StoreStore barrier is inserted before each volatile write and a StoreLoad barrier after it; a LoadLoad barrier and a LoadStore barrier are inserted after each volatile read.

  • Ensuring visibility

Problem: a visibility problem occurs when one thread modifies the value of a shared variable and another thread does not see the change. Solution with volatile: (1) modifying a volatile variable forces the new value to be flushed to main memory; (2) modifying a volatile variable invalidates the copies of that variable in other threads' working memory, so reading the variable requires fetching its value from main memory again.

  • Note: volatile does not guarantee the atomicity of variable updates; the sketch below shows its intended use as a visibility flag.
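
A sketch of the canonical volatile use case, a status flag; the class name and timing are illustrative assumptions:

public class VolatileFlagDemo {
    // Without volatile, the worker thread might never observe the update to
    // 'running' made by the main thread and could loop forever.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // do work; every read of 'running' sees the value in main memory
            }
            System.out.println("worker stopped");
        });
        worker.start();

        Thread.sleep(1000);
        running = false;     // this volatile write becomes visible to the worker
        worker.join();
    }
}

Note that running = false is a pure write that does not depend on the old value; a counter updated with count++ would still need a lock or an atomic class.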

Recommended use of volatile

Compared with synchronized blocks, volatile provides a lightweight lock on a shared variable, and it is worth considering when threads communicate through shared variables. volatile is a weaker synchronization mechanism: accessing a volatile variable takes no lock and therefore never blocks, which makes it lighter than the synchronized keyword. Usage advice: use volatile on member variables that two or more threads access; it is unnecessary when the variable is only accessed inside synchronized blocks or is a constant. Because volatile suppresses some necessary code optimizations in the JVM, it is relatively inefficient, so use the keyword only when needed.

The difference between volatile and synchronized

1. volatile does not lock: volatile variables are a weaker synchronization mechanism; accessing them takes no lock and therefore does not block the executing thread, making them lighter than synchronized. 2. volatile reads and writes act like synchronized reads and writes: from a memory-visibility perspective, writing a volatile variable is equivalent to exiting a synchronized block, and reading one is equivalent to entering a synchronized block. 3. volatile is less safe than synchronized: code that relies heavily on volatile variables to control state visibility is generally more fragile and harder to understand than code that uses locks. volatile variables should be used only when they simplify the implementation of the code and the verification of the synchronization policy; in general, a synchronization mechanism is safer. 4. volatile cannot guarantee both memory visibility and atomicity: locking (or synchronization) guarantees both, whereas volatile variables guarantee only visibility. The reason is that declaring a simple variable volatile does not help if its new value depends on its previous value; in other words, expressions such as count++ and count = count + 1 are not atomic operations.

volatile variables should be used if and only if: 1. writes to the variable do not depend on its current value, or you can ensure that only a single thread ever updates it; 2. the variable does not participate in invariants together with other variables. Conclusion: when synchronization is involved, the first choice should be the synchronized keyword, which is the safest approach; trying anything else carries risk. Moreover, since JDK 1.5 the synchronized mechanism has been heavily optimized, with adaptive spin locks, lock coarsening, lock elimination, lightweight locks, and so on, significantly improving its performance.

synchronized

At runtime, synchronized ensures that at most one thread at a time can enter the critical section of a method or code block, and it also guarantees the memory visibility of shared variables. synchronized therefore provides three main guarantees: mutual exclusion, visibility, and ordering.

Three applications of synchronized

  • Modifying an instance method locks the current instance; the instance's lock must be acquired before entering the synchronized code. How it works: method invocation checks whether the method's ACC_SYNCHRONIZED access flag is set; if so, the executing thread first acquires the monitor (the "monitor" of the JVM specification), then executes the method, and finally releases the monitor when the method completes, whether normally or abnormally.

    public synchronized void increase(){
        i++;
    }
  • Modifying a static method locks the current class object; the class lock must be acquired before entering the synchronized code

    public static synchronized void increase(){
        i++;
    }
  • Modifying a code block specifies a lock object: it locks the given object, and the lock on that object must be acquired before entering the synchronized block. How it works: the monitorenter and monitorexit instructions are used; monitorenter marks the start of the synchronized block and monitorexit marks its end.

    static AccountingSync instance = new AccountingSync();
    synchronized (instance) {
        for (int j = 0; j < 1000000; j++) {
            i++;
        }
    }

Lock

Lock is an interface whose implementation classes provide locking in a broader sense than synchronized, allowing more flexible code structure and different characteristics. Commonly used implementations include ReentrantLock, and the related ReentrantReadWriteLock provides the read/write variant.

Lock lock = new ReentrantLock();
lock.lock();
try {
    // doSomething
    // even if there is a return here, the finally block still runs
} finally {
    lock.unlock();
}

Methods of the Lock interface for acquiring a lock

  • void lock(): the most commonly used way to acquire the lock; if the lock is already held by another thread, wait. It does not release the lock automatically when an exception occurs, so remember to release the lock in a finally block to guarantee it is always released and to prevent deadlock.
  • void lockInterruptibly(): like lock(), but the thread can respond to interruption while it is waiting to acquire the lock.
  • boolean tryLock(): has a return value; it attempts to acquire the lock and returns true on success, or false if the acquisition failed (i.e. the lock is held by another thread).
  • boolean tryLock(long time, TimeUnit unit): similar to tryLock(), except that it waits up to the given time for the lock to become available, as the sketch below shows.
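
A short sketch of the timed tryLock pattern; the class and method names are illustrative assumptions:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final Lock lock = new ReentrantLock();

    public void doWork() throws InterruptedException {
        // Wait up to 2 seconds for the lock instead of blocking indefinitely
        if (lock.tryLock(2, TimeUnit.SECONDS)) {
            try {
                // critical section
            } finally {
                lock.unlock();   // always release in finally
            }
        } else {
            // could not acquire the lock in time; take another path
        }
    }
}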

Condition class

Condition implements wait/notify-style functionality in Java and provides richer features than wait/notify. A Condition object is created from a Lock object, and the same lock can create multiple Condition objects, i.e. multiple object monitors. The advantage is that you can specify which threads to wake up, whereas threads woken with notify are chosen at random. Condition factors the Object monitor methods (wait, notify, and notifyAll) into distinct objects so that, by combining them with an arbitrary Lock implementation, each object can have multiple wait sets. Where Lock replaces synchronized methods and statements, Condition replaces the Object monitor methods. With Condition, await() replaces wait(), signal() replaces notify(), and signalAll() replaces notifyAll(). A Condition is bound to a Lock, and it must be created with that lock's newCondition() method.

How Condition differs from Object's wait, notify, and notifyAll

1. Condition's await() method is equivalent to Object's wait(), Condition's signal() is equivalent to Object's notify(), and Condition's signalAll() is equivalent to Object's notifyAll(). The difference is that the Object methods are bound to the synchronized lock, while Condition is bound to a mutex/shared lock (a Lock). 2. Condition is more powerful: it can control the sleeping and waking of multiple threads more precisely. We can create multiple Conditions on the same lock and use different Conditions in different situations. For example, suppose multiple threads read and write the same buffer: wake the reader threads after writing data into the buffer, wake the writer threads after reading data out of it, make the writer threads wait when the buffer is full, and make the reader threads wait when it is empty. If the buffer were implemented with Object's wait(), notify(), and notifyAll(), it would be impossible to wake specifically the reader threads after a write; you could only use notifyAll to wake every thread (and notifyAll cannot tell whether an awakened thread is a reader or a writer). With Condition, however, you can explicitly wake just the reader threads, as the sketch below shows.
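
A sketch of the buffer just described, with separate notFull and notEmpty conditions on one lock; the class name and capacity are illustrative assumptions:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class BoundedBuffer {
    private final Lock lock = new ReentrantLock();
    private final Condition notFull = lock.newCondition();    // writer threads wait on this
    private final Condition notEmpty = lock.newCondition();   // reader threads wait on this

    private final Object[] items = new Object[16];
    private int putIdx, takeIdx, count;

    public void put(Object x) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length) {
                notFull.await();           // buffer full: only writers sleep here
            }
            items[putIdx] = x;
            putIdx = (putIdx + 1) % items.length;
            count++;
            notEmpty.signal();             // wake exactly one waiting reader
        } finally {
            lock.unlock();
        }
    }

    public Object take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) {
                notEmpty.await();          // buffer empty: only readers sleep here
            }
            Object x = items[takeIdx];
            takeIdx = (takeIdx + 1) % items.length;
            count--;
            notFull.signal();              // wake exactly one waiting writer
            return x;
        } finally {
            lock.unlock();
        }
    }
}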

The difference between synchronized and lock

| | synchronized | Lock |
| --- | --- | --- |
| Level | A Java keyword | An interface |
| Lock release | When an exception occurs, the JVM makes the thread release the lock | The lock must be released in finally, otherwise threads may deadlock |
| Lock acquisition | Suppose thread A acquires the lock and thread B waits; if thread A blocks, thread B keeps waiting | A Lock lets the thread waiting for the lock respond to an interrupt |
| Lock state | Cannot be determined | You can test whether the lock was acquired successfully |
| Lock type | Reentrant, non-interruptible, unfair | Reentrant, interruptible, fair or unfair |

In terms of performance, synchronized was inefficient in JDK 1.5: it is a heavyweight operation, and its biggest cost is that blocking, suspending, and resuming threads must be done in kernel mode, which puts great pressure on the system's concurrency. By contrast, the Lock objects Java provides performed better. In a heavily multithreaded environment the throughput of synchronized drops severely, while ReentrantLock can basically hold a relatively stable level.

With JDK 1.6, synchronized gained many optimizations, including adaptive spinning, lock elimination, lock coarsening, lightweight locks, and biased locks, so on JDK 1.6 synchronized performs no worse than Lock. The official line also favors synchronized, which can be optimized further in future releases, so prefer synchronized whenever it meets your requirements.

Lock states

Java SE 1.6 introduced "biased locks" and "lightweight locks" to reduce the performance cost of acquiring and releasing locks, so in Java SE 1.6 a lock has four states: unlocked, biased, lightweight, and heavyweight, which escalate gradually as contention grows. Locks can be upgraded but not downgraded: a biased lock that has been upgraded to a lightweight lock cannot be downgraded back. The purpose of this policy is to improve the efficiency of acquiring and releasing locks.

Biased locking

Some scenarios with no actual contention can be optimized further. If there is not only no actual contention but also only a single thread that ever uses the lock, even maintaining a lightweight lock is wasteful. The goal of biased locking is to reduce the cost of using lightweight locks when there is no contention and only one thread uses the lock: a lightweight lock requires at least one CAS on every acquire and release, while a biased lock requires only one CAS at initialization. "Biased" means the lock assumes that only the first thread to request it will ever use it (no other thread will ever request it). Therefore only one CAS is needed to record the owner in the Mark Word (essentially an update whose initial value is empty); if it succeeds, the biased lock is acquired and the lock state is recorded as biased, and from then on the current thread, being the owner, can re-acquire the lock at zero cost. Otherwise, other threads are competing, and the lock inflates to a lightweight lock. Biased locks cannot be optimized with spinning, because the biasing assumption is broken as soon as another thread applies for the lock.

Lightweight lock

A lightweight lock is upgraded from a biased lock: the biased lock works while one thread enters the synchronized block, and it upgrades to a lightweight lock when a second thread joins the lock contention. Lightweight locks are designed to reduce the performance cost of traditional heavyweight locks when there is no real multithreaded competition; they suit scenarios in which threads execute synchronized blocks alternately. If the same lock is contended at the same moment, the lightweight lock inflates into a heavyweight lock. Using a lightweight lock requires no mutex; only part of the Mark Word is CAS-updated to point to the Lock Record in the thread's stack. If the update succeeds, the lightweight lock is acquired and the lock state recorded as lightweight; otherwise it means some thread has already acquired the lightweight lock, there is lock contention (the lightweight lock is no longer suitable), and the lock inflates into a heavyweight lock.

Heavyweight lock

A heavyweight lock is also called the object monitor (Monitor) in the JVM. It is like a mutex in C: besides the basic mutual-exclusion (0|1) function of a mutex, it also implements semaphore functionality, meaning it contains at least one contention queue for the lock and one signal-blocking queue (wait queue); the former is used for mutual exclusion and the latter for thread synchronization.

Spin locks

The principle of a spin lock is very simple: if the thread holding the lock will release it in a very short time, then the threads waiting to compete for the lock do not need to switch between kernel mode and user mode to enter the blocked/suspended state; they only need to wait a while (spin) and can acquire the lock immediately after the holder releases it, avoiding the cost of switching between user and kernel threads. However, a spinning thread occupies the CPU while doing no useful work, so if the lock cannot be acquired the spin is wasted CPU time; a thread must not spin forever. Therefore, if the thread holding the lock runs longer than the maximum spin wait time without releasing it, and the contending threads still cannot acquire the lock within that maximum wait time, they stop spinning and enter the blocked state.

Adaptive spin lock

Adaptive means that the spin time is no longer fixed, but is determined by the previous spin time on the same lock and the state of the lock owner:

  • If a spin wait has just successfully acquired the same lock object, and the thread holding the lock is running, the virtual machine assumes the spin wait is likely to succeed again and allows it to last relatively long, for example 100 loop iterations.
  • Conversely, if the spin is rarely successfully acquired for a lock, it is possible to reduce the spin time or even omit the spin process when acquiring the lock in the future to avoid wasting processor resources.

Adaptive spinning addresses the problem that "the duration of lock contention is uncertain". The JVM can hardly sense the exact lock contention time, and handing the analysis over to the user would go against the JVM's design. Adaptive spinning assumes that different threads hold the same lock object for roughly the same time and that the degree of competition tends to be stable, so the duration of the next spin can be adjusted according to the duration and result of the last one.

Biased locking, lightweight locking, and heavyweight locking are suitable for different concurrent scenarios

Biased lock: no actual contention, and only the first thread to request the lock will use it in the future. Lightweight lock: no actual contention; multiple threads use the lock alternately, and short periods of contention are tolerated. Heavyweight lock: there is actual contention, and it lasts long. Additionally, if lock contention is brief, spin locks can be used to further optimize lightweight and heavyweight locks and reduce thread switching; if contention increases gradually (slowly), gradual inflation from biased lock to heavyweight lock can improve the overall performance of the system.

The process of lock inflation: only one thread enters the critical section (biased lock) → multiple threads enter the critical section alternately (lightweight lock) → multiple threads enter the critical section simultaneously (heavyweight lock).

AQS

AQS is AbstractQueuedSynchronizer, a framework for building locks and synchronization tools; commonly used classes such as ReentrantLock, CountDownLatch, and Semaphore are built on it. AbstractQueuedSynchronizer is an abstract class that mainly maintains a state property of type int and a non-blocking FIFO thread wait queue; state is volatile to guarantee visibility between threads, and enqueueing and dequeueing are lock-free operations based on spinning and CAS. AQS supports two modes, exclusive and shared: for example, ReentrantLock is implemented on the exclusive mode, while CountDownLatch and CyclicBarrier are implemented on the shared mode.
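
A minimal exclusive-mode synchronizer built on AQS, a sketch modeled on the non-reentrant mutex pattern from the AbstractQueuedSynchronizer documentation, with state == 0 meaning unlocked and state == 1 meaning locked:

import java.util.concurrent.locks.AbstractQueuedSynchronizer;

public class Mutex {
    private static class Sync extends AbstractQueuedSynchronizer {
        @Override
        protected boolean tryAcquire(int arg) {
            // CAS the volatile state from 0 to 1; only one thread can succeed
            return compareAndSetState(0, 1);
        }

        @Override
        protected boolean tryRelease(int arg) {
            setState(0);   // the owner resets state; AQS then wakes a queued thread
            return true;
        }
    }

    private final Sync sync = new Sync();

    public void lock()   { sync.acquire(1); }    // enqueues and parks the thread on contention
    public void unlock() { sync.release(1); }
}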

The thread pool

If there are many concurrent tasks and each thread only executes a short task before terminating, frequently creating and destroying threads greatly reduces the system's efficiency because thread creation and destruction take time. A thread pool works like a database connection pool: starting a thread is relatively expensive, so if the program initializes a certain number of threads at startup and keeps them in a pool, taking one out when needed and returning it to the pool afterwards, application performance improves greatly. Moreover, the pool's initial configuration can effectively cap the number of concurrent threads in the system, preventing excessive memory consumption from exhausting the server.

You can create various types of thread pools with the Executors utility class. The following four types are common; a usage sketch follows the list:

  • newCachedThreadPool: unbounded in size; when a thread is released it can be reused for new tasks;
  • newFixedThreadPool: fixed size; when no thread is available, tasks must wait until a thread frees up;
  • newSingleThreadExecutor: creates a single thread, on which tasks are executed sequentially;
  • newScheduledThreadPool: creates a fixed-size thread pool that supports delayed and periodic task execution
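
A sketch of two of these factory methods in use; the class name, pool sizes, and delays are illustrative assumptions:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ExecutorsDemo {
    public static void main(String[] args) {
        ExecutorService fixed = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 10; i++) {
            final int n = i;
            // With 4 worker threads, the extra tasks wait in the pool's queue
            fixed.execute(() -> System.out.println(
                    Thread.currentThread().getName() + " runs task " + n));
        }
        fixed.shutdown();    // finish the queued tasks, then stop the workers

        ScheduledExecutorService scheduled = Executors.newScheduledThreadPool(1);
        // Run a heartbeat every 5 seconds after a 1-second initial delay
        scheduled.scheduleAtFixedRate(
                () -> System.out.println("heartbeat"), 1, 5, TimeUnit.SECONDS);
    }
}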

The benefits of using thread pools

  • The creation and destruction of threads are reduced, and each worker thread can be reused to execute multiple tasks.
  • A thread pool effectively caps the maximum number of concurrent threads; the number of worker threads in the pool can be tuned to the system's capacity, preventing the server from being exhausted by excessive memory consumption (each thread takes roughly 1 MB of memory, so the more threads are opened, the more memory is consumed, until the application finally crashes).
  • Simple thread management, such as delayed execution and periodic execution policies, can be implemented well with a thread pool

What kinds of work queues do thread pools have

1. ArrayBlockingQueue: a bounded blocking queue backed by an array that orders elements FIFO (first in, first out).
2. LinkedBlockingQueue: a blocking queue backed by a linked list that orders elements FIFO; its throughput is usually higher than ArrayBlockingQueue's. The static factory method Executors.newFixedThreadPool() uses this queue.
3. SynchronousQueue: a blocking queue that stores no elements; each insert operation must wait until another thread calls a remove operation, otherwise the insert stays blocked. Its throughput is usually higher than LinkedBlockingQueue's, and the static factory method Executors.newCachedThreadPool() uses this queue.
4. PriorityBlockingQueue: an unbounded blocking queue with priority ordering.
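
A sketch of constructing a pool directly with ThreadPoolExecutor and a bounded ArrayBlockingQueue; the class name, sizes, and rejection policy are illustrative assumptions:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BoundedPoolDemo {
    public static void main(String[] args) {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2,                                    // core pool size
                4,                                    // maximum pool size
                60, TimeUnit.SECONDS,                 // idle timeout for non-core threads
                new ArrayBlockingQueue<>(100),        // bounded FIFO work queue
                new ThreadPoolExecutor.CallerRunsPolicy());  // queue full: the caller runs the task

        pool.execute(() -> System.out.println(
                "task runs on " + Thread.currentThread().getName()));
        pool.shutdown();
    }
}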

