1. Optimistic lock

Optimistic locking reflects an optimistic assumption: reads greatly outnumber writes and concurrent write conflicts are unlikely, so data is read without taking a lock. Only at update time does the thread check whether anyone else has modified the data in the meantime: it reads the current version first, then performs the update only if the version is still the same as the one it read, and retries the whole read-compare-write cycle if the check fails. In Java, optimistic locking is essentially implemented with CAS (compare-and-swap), an atomic update operation: CAS compares the current value in memory with the expected value passed in, updates it if they are equal, and fails otherwise.
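
A minimal sketch of this read-compare-write loop, using AtomicInteger's compareAndSet as the CAS primitive (the class and field names are illustrative, not from the original text):

import java.util.concurrent.atomic.AtomicInteger;

public class OptimisticCounter {
    private final AtomicInteger value = new AtomicInteger(0);

    // Read the current value, compute the new value, and retry until the CAS
    // succeeds, i.e. until nobody else changed the value in between.
    public int increment() {
        while (true) {
            int current = value.get();                 // read the current "version"
            int next = current + 1;                    // compute the update
            if (value.compareAndSet(current, next)) {  // CAS: write only if unchanged
                return next;
            }
            // CAS failed: another thread updated the value; loop and retry
        }
    }
}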

2. Pessimistic lock

Pessimistic locking reflects the opposite assumption: concurrent writes are likely, and every time a thread fetches data it assumes someone else may modify it, so it takes a lock on every read and every write. Other threads that try to read or write the same data then block until the lock is released. In Java, synchronized is a pessimistic lock. The AQS framework first tries to acquire the lock optimistically with CAS; if that fails, it falls back to pessimistic (blocking) behavior, as in ReentrantLock.
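
A minimal sketch of the pessimistic approach (class and method names are illustrative): every read and write acquires the intrinsic lock, so concurrent access blocks:

public class PessimisticCounter {
    private int value = 0;

    // Both the read and the write acquire the intrinsic lock on "this",
    // so a concurrent reader or writer blocks until the lock is released.
    public synchronized int get() {
        return value;
    }

    public synchronized void increment() {
        value++;
    }
}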

3. Spin locks

The principle of a spin lock is simple: if the thread holding the lock can release it within a very short time, the threads competing for the lock do not need to switch between user mode and kernel mode and enter the blocked state; they only need to wait for a moment (spin) until the holder releases the lock, which avoids the cost of the user/kernel switch. Of course a thread cannot spin forever doing useless work, so a maximum spin wait time must be set. If the thread holding the lock runs longer than the maximum spin wait time without releasing it, the contending threads stop spinning and enter the blocked state.
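
A hand-rolled spin lock illustrates the idea (a simplified sketch, not the JVM's internal spinning; the class name is made up): the contending thread loops on a CAS instead of blocking:

import java.util.concurrent.atomic.AtomicReference;

public class SpinLock {
    private final AtomicReference<Thread> owner = new AtomicReference<>();

    public void lock() {
        Thread current = Thread.currentThread();
        // Busy-wait (spin) until the CAS from null -> current succeeds.
        while (!owner.compareAndSet(null, current)) {
            // spinning: no blocking, no user/kernel mode switch
        }
    }

    public void unlock() {
        Thread current = Thread.currentThread();
        // Only the owner may release the lock.
        owner.compareAndSet(current, null);
    }
}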

Advantages and disadvantages of spin locks

A spin lock reduces thread blocking as much as possible. For code blocks where lock contention is light and the lock is held only very briefly, this is a big performance win, because the cost of spinning is lower than the cost of blocking, suspending and waking the thread, which forces the thread through two context switches.

However, if lock contention is heavy, or the thread holding the lock needs to hold it for a long time to execute the synchronized block, a spin lock is not appropriate: the spinning threads keep occupying the CPU doing useless work before they can acquire the lock. With a large number of threads contending for one lock, acquiring it takes a long time, the cost of spinning exceeds the cost of blocking and suspending the thread, and other threads that need the CPU cannot get it, which wastes CPU. In that case we would rather turn spinning off.

Spin time threshold (adaptive spinning was introduced in 1.6): the purpose of spinning is to keep the CPU and handle the lock immediately once it is obtained, but how long should the spin last? If the spin lasts too long, a large number of spinning threads occupy CPU resources and degrade overall system performance, so choosing the spin duration is particularly important.

JDK 1.6 introduced adaptive spinning. With adaptive spinning, the spin time for a given lock is no longer fixed; it is determined by the outcome of previous spins on the same lock and by the state of the lock's owner. The spin duration is generally taken to be on the order of a thread context switch, and the JVM also tunes it for the current CPU load: if the average load is below the number of CPUs, the thread spins; if more than (CPUs/2) threads are already spinning, the thread blocks directly; if a spinning thread notices that the lock's owner has changed, it delays its spin (spin count) or blocks; if the CPU is in power-saving mode, spinning stops. The worst-case spin time is the CPU's memory latency (the time for CPU A to store a value and CPU B to then read it directly). Differences in thread priority are appropriately ignored while spinning.

Enabling spin locks

In JDK 1.6, spinning is enabled with -XX:+UseSpinning, and the spin count is set with -XX:PreBlockSpin=10.

After JDK 1.7 these parameters were removed, and spinning is controlled by the JVM itself.

4. Synchronized

Synchronized can use any non-null object as a lock. It is an exclusive, pessimistic lock, and it is reentrant.

Scope of synchronized (a sketch of the three forms follows the list below):

  1. When applied to an instance method, it locks the object instance (this);
  2. When applied to a static method, it locks the Class instance. Because the data related to the Class is stored in the permanent generation (the metaspace in JDK 1.8), which is shared globally, a static method lock is effectively a global lock on that Class and blocks every thread that calls the method;
  3. When applied to an object instance in a synchronized block, it locks all code blocks that synchronize on that object; the object's monitor maintains multiple queues.
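
A sketch of the three forms described above (class, method and field names are illustrative):

public class SyncScopes {
    // 1. Instance method: locks the current instance (this)
    public synchronized void instanceMethod() { /* ... */ }

    // 2. Static method: locks the Class object (SyncScopes.class),
    //    i.e. one lock shared by all instances
    public static synchronized void staticMethod() { /* ... */ }

    // 3. Synchronized block: locks the given object; all blocks that
    //    synchronize on the same object exclude each other
    private final Object monitor = new Object();

    public void blockMethod() {
        synchronized (monitor) { /* ... */ }
    }
}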

When multiple threads access an object's monitor together, the monitor stores these threads in different containers.

Synchronized core components

  1. Wait Set: where threads blocked by calling the wait method are placed;
  2. Contention List: the contention queue; all threads requesting the lock are placed here first;
  3. Entry List: threads from the Contention List that qualify as candidates are moved to the Entry List;
  4. OnDeck: at any given time at most one thread is actually competing for the lock; that thread is called OnDeck;
  5. Owner: the thread that currently holds the lock is called the Owner;
  6. !Owner: the thread that has just released the lock.

Synchronized implementation

  1. The JVM fetches one thread at a time from the tail of the queue as the candidate (OnDeck) for lock contention, but under concurrency the ContentionList is accessed with CAS by a large number of threads. To reduce contention on the tail element, the JVM moves a subset of the threads into the EntryList as candidate contention threads.

  2. The Owner thread migrates some of the ContentionList threads into the EntryList when it unlocks, and designates one of the EntryList threads as the OnDeck thread (usually the thread that entered first).

  3. Instead of handing the lock directly to the OnDeck thread, the Owner thread hands over the right to compete for the lock, and OnDeck must re-contend for it. This sacrifices some fairness but greatly improves throughput; in the JVM this choice is also called "competitive switching."

  4. The OnDeck thread becomes the Owner thread after acquiring the lock resource, while the thread that does not acquire the lock resource remains in the EntryList. If the Owner thread is blocked by wait, it is placed in the WaitSet queue until it is awakened by notify or notifyAll and re-entered into the EntryList.

  5. The ContentionList, EntryList, and WaitSet threads are all blocked by the operating system (pthread_mutex_lock in Linux).

  6. Synchronized is an unfair lock. When a thread arrives, it first tries to acquire the lock by spinning; only if that fails does it enter the ContentionList, which is obviously unfair to the threads already waiting in the queue. Another unfairness is that a thread acquiring the lock by spinning may directly preempt the lock from the OnDeck thread. Reference:

Blog.csdn.net/zqz_zqz/art…

  1. Each object has a monitor object, and locking is a competition for that monitor. Block-level synchronization is implemented with the monitorenter and monitorexit instructions, while method-level synchronization is indicated by a flag bit (see the sketch after this list).

  2. Synchronized is a heavyweight operation that needs to call into the operating system, so its performance is relatively low; in the worst case the time spent on locking can exceed the time spent on the useful work itself.

  3. In Java 1.6, synchronized was heavily optimized with adaptive spinning, lock elimination, lock coarsening, lightweight locks and biased locks, and its efficiency improved substantially. The implementation of the keyword was optimized further in Java 1.7 and 1.8. Biased locks and lightweight locks both rely on flag bits in the object header and do not need the operating system to perform the locking.

  4. A lock can be upgraded from a biased lock to a lightweight lock and then to a heavyweight lock; this escalation process is called lock inflation.

  5. Biased locking and lightweight locking are enabled by default in JDK 1.6. Biased locking can be disabled with -XX:-UseBiasedLocking.
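
As a sketch of point 1 above (class and method names are illustrative): disassembling the compiled class with javap -c -v shows monitorenter/monitorexit for the block form and the ACC_SYNCHRONIZED flag for the method form:

public class MonitorDemo {
    private final Object lock = new Object();

    public void blockSync() {
        synchronized (lock) {   // compiles to a monitorenter instruction
            // critical section
        }                       // compiles to monitorexit (plus one more on the exception path)
    }

    // A synchronized method carries the ACC_SYNCHRONIZED flag in its
    // method metadata instead of explicit monitor instructions.
    public synchronized void methodSync() {
    }
}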

5. ReentrantLock

ReentrantLock implements the Lock interface and the methods defined in it. It is a reentrant lock. Besides doing everything that synchronized can do, ReentrantLock also provides methods such as interruptible lock acquisition, polling lock requests and timed lock requests, which help avoid multi-thread deadlocks.

Main methods of the Lock interface

  1. void lock(): when this method executes, if the lock is free, the current thread acquires it. Conversely, if the lock is already held by another thread, the current thread is disabled for thread scheduling until it acquires the lock.

  2. boolean tryLock(): acquires the lock and returns true immediately if it is available, otherwise returns false. The difference from lock() is that tryLock() only "tries" to acquire the lock; if the lock is unavailable, it does not block the current thread, which simply continues executing. lock(), on the other hand, must acquire the lock: if the lock is unavailable it waits, and the current thread does not proceed until the lock is acquired.

  3. void unlock(): when this method executes, the lock held by the current thread is released. A lock can only be released by its holder; if a thread that does not hold the lock executes this method, an exception may be thrown.

  4. Condition newCondition(): returns a Condition object, the wait/notify component bound to this lock. The current thread can call the component's await() method only after it has obtained the lock; calling await() releases the lock.

  5. getHoldCount(): queries the number of times the current thread holds this lock, that is, the number of times it has called lock().

  6. getQueueLength(): returns an estimate of the number of threads waiting to acquire this lock. For example, if 10 threads are started and 1 thread holds the lock, this method returns 9.

  7. getWaitQueueLength(Condition condition): returns an estimate of the number of threads waiting on the given Condition associated with this lock. For example, if 10 threads share the same Condition object and all 10 are executing its await method, this method returns 10.

  8. hasWaiters(Condition condition): queries whether any threads are waiting on the given Condition associated with this lock, i.e. whether any thread has executed await on the specified Condition object.

  9. hasQueuedThread(Thread thread): queries whether the given thread is waiting to acquire this lock.

  10. hasQueuedThreads(): queries whether any threads are waiting to acquire this lock.

  11. isFair(): queries whether this lock is fair.

  12. isHeldByCurrentThread(): queries whether the current thread holds this lock; it returns false before the thread calls lock() and true afterwards.

  13. isLocked(): queries whether this lock is held by any thread.

  14. lockInterruptibly(): acquires the lock unless the current thread is interrupted.

  15. tryLock(): acquires the lock only if it is not held by another thread at the time of invocation.

  16. tryLock(long timeout, TimeUnit unit): acquires the lock if it is not held by another thread within the given waiting time. A usage sketch follows this list.
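
A usage sketch of the timed tryLock (the class and method names are made up for illustration):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    private final ReentrantLock lock = new ReentrantLock();

    public boolean updateIfAvailable() throws InterruptedException {
        // Wait at most 500 ms for the lock instead of blocking indefinitely.
        if (lock.tryLock(500, TimeUnit.MILLISECONDS)) {
            try {
                // critical section
                return true;
            } finally {
                lock.unlock();   // always release in finally
            }
        }
        return false;            // could not acquire the lock in time
    }
}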

6. Unfair locking

The JVM's mechanism of allocating the lock randomly and opportunistically, to whichever thread happens to grab it, is called an unfair lock. ReentrantLock provides a constructor parameter that chooses whether the lock is fair, and it defaults to an unfair lock. In practice unfair locks execute far more efficiently than fair locks, so unless a program has a specific need for fairness, the unfair allocation mechanism is most commonly used.

7. Fair lock

A fair lock means that the lock allocation mechanism is fair: normally the thread that requested the lock first is granted it first. ReentrantLock defines a fair lock through the same constructor parameter that specifies whether the lock is fair.
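
A minimal sketch of the two constructor forms (the class name is illustrative):

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class FairnessDemo {
    // true = fair lock (waiting threads acquire the lock roughly in FIFO order)
    private final Lock fairLock = new ReentrantLock(true);
    // no-arg constructor (or false) = unfair lock, which is the default
    private final Lock unfairLock = new ReentrantLock();
}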

8. ReentrantLock and synchronized

  1. ReentrantLock uses the lock() and unlock() methods to lock and unlock. Unlike synchronized, which is automatically released by the JVM, ReentrantLock must be unlocked manually. To make sure the lock is released even when an exception occurs, unlock() must be called in a finally block.

  2. The advantages of ReentrantLock over synchronized are interruptible lock waiting, fair locking, and binding multiple Condition objects to one lock. ReentrantLock is required when any of these is needed.

ReentrantLock example

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class MyService {
    private final Lock lock = new ReentrantLock();
    // Lock lock = new ReentrantLock(true);  // fair lock
    // Lock lock = new ReentrantLock(false); // unfair lock (the default)
    private final Condition condition = lock.newCondition(); // create the Condition

    // 1: wait. The lock must be held (lock.lock()) before condition.await() may be called.
    public void awaitMethod() {
        lock.lock();
        try {
            System.out.println("began to wait");
            condition.await();   // releases the lock and waits until signalled
            for (int i = 0; i < 5; i++) {
                System.out.println("ThreadName=" + Thread.currentThread().getName() + " " + (i + 1));
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
        } finally {
            lock.unlock();
        }
    }

    // 2: signal. The condition's signal method wakes up a thread waiting on it.
    public void signalMethod() {
        lock.lock();
        try {
            condition.signal();
        } finally {
            lock.unlock();
        }
    }
}
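
A possible driver for the class above (the thread name and sleep duration are illustrative): one thread parks on the condition and another wakes it:

public class MyServiceDemo {
    public static void main(String[] args) throws InterruptedException {
        MyService service = new MyService();

        // This thread acquires the lock, then releases it inside await() and parks.
        Thread waiter = new Thread(service::awaitMethod, "waiter");
        waiter.start();

        Thread.sleep(1000);       // give the waiter time to reach await()
        service.signalMethod();   // wakes the waiting thread, which then prints 1..5
        waiter.join();
    }
}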

Condition and Object lock methods are different

  1. Condition's await() method is equivalent to Object's wait() method
  2. Condition's signal() method is equivalent to Object's notify() method
  3. Condition's signalAll() method is equivalent to Object's notifyAll() method
  4. The ReentrantLock class can wake up threads waiting on a specific Condition, whereas Object's notify wakes up a waiting thread at random
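
For comparison, a sketch of the same wait/notify pattern using the Object monitor methods (class and method names are illustrative):

public class ObjectMonitorService {
    private final Object monitor = new Object();

    public void awaitMethod() throws InterruptedException {
        synchronized (monitor) {      // must hold the monitor before calling wait()
            monitor.wait();           // counterpart of condition.await()
        }
    }

    public void signalMethod() {
        synchronized (monitor) {
            monitor.notify();         // counterpart of condition.signal()
            // monitor.notifyAll() corresponds to condition.signalAll()
        }
    }
}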

Differences between tryLock, lock and lockInterruptibly

  1. tryLock(long timeout, TimeUnit unit): returns true if the lock is acquired within the given waiting time, and false if it is not.

  2. lock(): acquires the lock if it is available; if not, it waits until the lock becomes available.

  3. lock() and lockInterruptibly() differ in that, if the thread is interrupted while waiting for the lock, lock() does not throw an exception, whereas lockInterruptibly() throws an InterruptedException.
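
A sketch of point 3 (the thread name and sleep duration are illustrative): a thread blocked in lockInterruptibly() responds to interrupt() by throwing InterruptedException, whereas a thread blocked in lock() would simply keep waiting:

import java.util.concurrent.locks.ReentrantLock;

public class InterruptDemo {
    private static final ReentrantLock lock = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        lock.lock();   // the main thread holds the lock so the worker must wait

        Thread worker = new Thread(() -> {
            try {
                lock.lockInterruptibly();   // waits for the lock, but can be interrupted
                try {
                    System.out.println("worker got the lock");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                System.out.println("worker interrupted while waiting for the lock");
            }
        }, "worker");

        worker.start();
        Thread.sleep(1000);
        worker.interrupt();   // with lock() this interrupt would not end the wait
        worker.join();
        lock.unlock();
    }
}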