What are pessimistic locks and optimistic locks?

An optimistic lock is like the optimists in life who always expect things to go well, while a pessimistic lock is like the pessimists who always expect things to go badly. Each has its own advantages and disadvantages, and neither can be said to be simply better than the other.

Pessimistic locking

Pessimistic locking always assumes the worst case: every time it fetches data, it assumes someone else will modify it, so it locks the data on every read. Anyone else who wants that data then blocks until the lock is released (the shared resource is used by only one thread at a time; other threads block, and the resource is handed over after use). Traditional relational databases use many such locking mechanisms, such as row locks, table locks, read locks, and write locks, all of which lock before the operation. Exclusive locks such as synchronized and ReentrantLock in Java are also implementations of the pessimistic-locking idea.
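As a minimal sketch of the pessimistic idea with ReentrantLock (class and field names here are illustrative, not from the original text), the whole read-modify-write on the balance happens while holding the lock, so concurrent withdrawals can never interleave:

```java
import java.util.concurrent.locks.ReentrantLock;

// Pessimistic locking: lock first, then touch the shared data.
public class PessimisticAccount {
    private final ReentrantLock lock = new ReentrantLock();
    private int balance = 100;

    public boolean withdraw(int amount) {
        lock.lock();                 // block until we own the lock
        try {
            if (balance < amount) return false;
            balance -= amount;       // safe: no other thread can run this now
            return true;
        } finally {
            lock.unlock();           // always release, even on exception
        }
    }

    public int getBalance() {
        lock.lock();
        try {
            return balance;
        } finally {
            lock.unlock();
        }
    }
}
```

The same effect could be achieved by marking the methods synchronized; ReentrantLock is used here only because the text names both.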

Optimistic locking

Optimistic locking always assumes the best case: when fetching data, it assumes others will not modify it, so it takes no lock. When updating, however, it checks whether anyone else has updated the data in the meantime, which can be implemented with a version-number mechanism or the CAS algorithm. Optimistic locking suits read-heavy applications and can improve throughput. Mechanisms such as the write_condition provided by some databases are actually optimistic locks. In Java, the atomic variable classes in the java.util.concurrent.atomic package are implemented using CAS, one form of optimistic locking.
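A minimal sketch of that optimistic style using AtomicInteger from java.util.concurrent.atomic (the counter class itself is illustrative): no lock is taken; the write succeeds only if nobody changed the value since it was read, otherwise the thread re-reads and retries.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Optimistic locking via CAS: read freely, validate at write time.
public class OptimisticCounter {
    private final AtomicInteger count = new AtomicInteger(0);

    public int increment() {
        for (;;) {                       // spin until the CAS succeeds
            int current = count.get();   // read without locking
            int next = current + 1;
            if (count.compareAndSet(current, next)) {
                return next;             // nobody modified it in between
            }
            // CAS failed: another thread updated count first; retry
        }
    }

    public int get() { return count.get(); }
}
```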

Use scenarios of the two types of locks

From the introductions above, we know that each type of lock has its advantages and disadvantages, and one cannot be regarded as simply better than the other. Optimistic locking suits write-light (read-heavy) scenarios: when conflicts genuinely occur rarely, it saves the cost of locking and increases the overall throughput of the system. In write-heavy scenarios, however, conflicts occur frequently, causing the upper-layer application to retry repeatedly, which hurts performance, so pessimistic locking is more appropriate there.

There are two common ways to implement optimistic locking

Optimistic locking is generally implemented using the version number mechanism or the CAS algorithm.

1. Version number mechanism

Generally, a version field is added to the data table to indicate the number of times the data has been modified; when the data is modified, the version value is incremented by one. When thread A wants to update the data, it reads the version value along with the data value. When submitting the update, it updates only if the version it just read matches the version currently in the database; otherwise the update operation is retried until it succeeds.

Here’s a simple example:

Assume that the account information table in the database has a version field whose current value is 1, and the account balance field is currently $100.

Operator A reads the record (version=1) and deducts $50 from the balance ($100 - $50).

While operator A is working, operator B also reads the record (version=1) and deducts $20 from the balance ($100 - $20).

Operator A completes the modification first, increments the version number (version=2), and submits the updated balance (balance=$50) to the database. Because the submitted version is greater than the current version of the database record, the data is updated and the record's version becomes 2.

Operator B then completes the operation and also tries to submit the data (balance=$80, version=2) to the database. Operator B's submission is rejected because it does not satisfy the optimistic-locking policy that "the record's current version must equal the version the operator first read."

In this way, operator B is prevented from overwriting operator A's result with a modification based on the stale version=1 data.
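The scenario above can be sketched with a tiny in-memory simulation (all names here are illustrative; a real database would instead run something like `UPDATE account SET balance = ?, version = version + 1 WHERE id = ? AND version = ?`, where synchronized below merely stands in for the database's atomic row update):

```java
// Version-number mechanism: a commit succeeds only if the version
// the writer read earlier is still the current one.
public class VersionedAccount {
    private int balance = 100;
    private int version = 1;

    // Snapshot {balance, version} returned to an "operator" on read.
    public synchronized int[] read() {
        return new int[] { balance, version };
    }

    public synchronized boolean update(int newBalance, int readVersion) {
        if (version != readVersion) {
            return false;           // someone committed first: reject
        }
        balance = newBalance;
        version++;                  // bump version so stale writers fail
        return true;
    }

    public synchronized int getBalance() { return balance; }
}
```

Replaying the story: operators A and B both read (version=1); A's commit succeeds and bumps the version to 2; B's commit is rejected because its remembered version 1 is stale.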

2. CAS algorithm

Compare-and-swap (CAS) is a well-known lock-free algorithm. Lock-free programming achieves synchronization of a variable between multiple threads without using locks; because no thread is blocked, it is also called non-blocking synchronization. The CAS algorithm involves three operands:

The memory value V to be read and written

The expected value A for the comparison

The new value B to write

CAS atomically updates V to the new value B if and only if the current value of V equals A; otherwise it does nothing (the compare and the replace together form one atomic operation). In general it is used as a spin operation, that is, with repeated retries.
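The three operands map directly onto AtomicInteger.compareAndSet (the demo class name is illustrative): V is the atomic variable, A the expected value, B the new value, and the call reports whether the swap happened.

```java
import java.util.concurrent.atomic.AtomicInteger;

// CAS semantics spelled out: write B into V only when V == A.
public class CasDemo {
    static final AtomicInteger v = new AtomicInteger(5); // memory value V

    public static void main(String[] args) {
        boolean ok = v.compareAndSet(5, 7);     // A = 5 matches V: write B = 7
        System.out.println(ok + " " + v.get()); // prints "true 7"

        ok = v.compareAndSet(5, 9);             // A = 5 no longer matches V = 7
        System.out.println(ok + " " + v.get()); // prints "false 7": nothing done
    }
}
```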

Disadvantages of optimistic locking

The ABA problem is a common problem with optimistic locking.

1 ABA problem

If a variable V is first read with value A and still holds A when we are ready to update it, can we conclude that no other thread has modified it? Obviously not: in the meantime its value could have been changed to something else and then changed back to A, and the CAS operation would conclude it was never modified. This is known as the "ABA" problem of CAS.

Since JDK 1.5, the AtomicStampedReference class has addressed this problem. Its compareAndSet method first checks whether the current reference equals the expected reference and whether the current stamp equals the expected stamp; only if both are equal does it atomically set the reference and the stamp to the given new values.
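A short sketch of how the stamp defeats ABA (the demo class is illustrative): even though the value returns to 100, the stamp has moved on, so a CAS that remembers the old stamp fails.

```java
import java.util.concurrent.atomic.AtomicStampedReference;

// ABA defense: the stamp records *how many* changes happened,
// not just the current value.
public class AbaDemo {
    public static void main(String[] args) {
        AtomicStampedReference<Integer> ref =
                new AtomicStampedReference<>(100, 0); // value 100, stamp 0

        int[] stampHolder = new int[1];
        Integer seen = ref.get(stampHolder);          // thread 1 reads (100, 0)
        int seenStamp = stampHolder[0];

        // Meanwhile another thread does A -> B -> A, bumping the stamp twice.
        ref.compareAndSet(100, 50, 0, 1);
        ref.compareAndSet(50, 100, 1, 2);

        // A plain value comparison would succeed, but the stamp check fails.
        boolean ok = ref.compareAndSet(seen, 200, seenStamp, seenStamp + 1);
        System.out.println(ok);                       // prints "false"
    }
}
```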

2 Long spin time and high overhead

Spinning CAS (that is, retrying in a loop until it succeeds) can impose a heavy CPU cost if it fails to succeed for a long time. If the JVM can use the pause instruction provided by the processor, efficiency improves somewhat. The pause instruction does two things: first, it delays the pipeline's execution so the CPU does not consume too many resources (the amount of delay depends on the implementation version); second, it avoids the CPU pipeline flush caused by a memory-order violation when exiting the loop, improving CPU execution efficiency.

3 Can only guarantee atomic operations on a single shared variable

CAS works only on a single shared variable and does not help when an operation involves multiple shared variables. Starting with JDK 1.5, however, the AtomicReference class guarantees atomicity when swapping a reference, so multiple variables can be placed in a single object and updated with one CAS operation. In other words, we can either use locks or use AtomicReference to combine several shared variables into one.
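A sketch of that combining trick (the Position holder and MultiVarCas names are illustrative): two related values live in one immutable object, so a single CAS on the reference updates both atomically.

```java
import java.util.concurrent.atomic.AtomicReference;

// Bundle several shared variables into one object so one CAS covers them all.
public class MultiVarCas {
    // Immutable holder for two values that must change together.
    static final class Position {
        final int x, y;
        Position(int x, int y) { this.x = x; this.y = y; }
    }

    private final AtomicReference<Position> pos =
            new AtomicReference<>(new Position(0, 0));

    // Atomically move both coordinates; retry if another thread won the race.
    public void move(int dx, int dy) {
        for (;;) {
            Position cur = pos.get();
            Position next = new Position(cur.x + dx, cur.y + dy);
            if (pos.compareAndSet(cur, next)) return; // both fields swap at once
        }
    }

    public Position get() { return pos.get(); }
}
```

Because Position is immutable, a reader always sees a consistent (x, y) pair, never a half-updated one.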

Usage scenarios of CAS and synchronized

To put it simply, CAS suits scenarios with few writes (read-heavy, so conflicts are usually rare), and synchronized suits scenarios with many writes (write-heavy, so conflicts are usually frequent).

When resource contention is light (few thread conflicts), using a synchronized lock means switching threads between blocked and awake states and switching between user mode and kernel mode, which wastes CPU resources. CAS, by contrast, is implemented with hardware support: it does not need to enter the kernel or switch threads, and rarely has to spin for long, so it achieves higher performance.

When resource contention is severe (many thread conflicts), CAS spins with high probability, wasting more CPU resources and ending up less efficient than synchronized.

Supplement: synchronized has long been a fixture of Java concurrent programming, and for a long time many called it a "heavyweight lock." After Java SE 1.6, however, biased locking and lightweight locking, introduced mainly to reduce the performance cost of acquiring and releasing locks, along with various other optimizations, made it less heavy in some cases. The underlying implementation of synchronized relies mainly on a lock-free queue; its basic idea is to spin briefly and then block, and to keep competing after a context switch, slightly sacrificing fairness in exchange for high throughput. With few thread conflicts it can achieve performance similar to CAS, and with severe thread conflicts its performance is far better than that of CAS.