Background

Prior to JDK 1.5, synchronized was the go-to solution for Java concurrency problems, and it comes in three forms:

  1. A normal synchronized instance method locks the current instance object
  2. A static synchronized method locks the current Class object
  3. A synchronized block locks the object given in parentheses

Take synchronized blocks as an example:

public void test() {
  synchronized (object) {
    i++;
  }
}

Decompiling the result with javap -v gives the following:

After compilation, a monitorenter instruction is inserted at the start of the synchronized block, and monitorexit instructions are inserted at the normal exit and at the exception exit (effectively a hidden try-finally). Every object has a monitor associated with it; when a thread executes monitorenter, it attempts to take ownership of the monitor that corresponds to the object, that is, to acquire the object's lock
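A rough sketch of what javap -v prints for the block above (actual output varies with JDK version, and the constant-pool indices here are illustrative):

```text
public void test();
  Code:
     0: aload_0
     1: getfield      #2    // Field object:Ljava/lang/Object;
     4: dup
     5: astore_1
     6: monitorenter        // acquire the monitor
     7: aload_0
     8: dup
     9: getfield      #3    // Field i:I
    12: iconst_1
    13: iadd
    14: putfield      #3    // Field i:I
    17: aload_1
    18: monitorexit         // release on the normal path
    19: goto          27
    22: astore_2
    23: aload_1
    24: monitorexit         // release on the exception path (the hidden try-finally)
    25: aload_2
    26: athrow
    27: return
```

Note the two monitorexit instructions: one for the normal exit, one reached only through the exception table.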

When another thread reaches the synchronized block, it blocks because it does not own the corresponding monitor, and control is handed to the operating system. The thread switches from user mode to kernel mode, and the OS takes care of scheduling and thread state changes, so lock contention requires frequent switching between the two modes (context switches). Competing through the kernel like this is expensive, which is why people call this mechanism a heavyweight lock, and why synchronized left many readers with the impression that it performs poorly compared with other synchronization mechanisms


The evolution of the lock

Come JDK 1.6, the question became: how do you optimize to make locking lighter? The answer:

Lightweight lock: a CPU-level CAS (compare-and-swap)

If the lock can be acquired and released with a simple CAS on the CPU, there is no context switch, and locking becomes much lighter than a heavyweight lock. But when contention is fierce, repeated CAS attempts just waste CPU, so at that point it is better to upgrade the lightweight lock to a heavyweight one
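To picture what a CAS-based lock buys you, here is a toy spinlock in plain Java. The AtomicBoolean stands in for the CAS on the Mark Word; this is an illustration of the idea, not the JVM's actual implementation:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Toy spinlock: acquire and release are one CAS each, with no kernel involvement.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Spin until the CAS succeeds: cheap when contention is rare,
        // wasteful when contention is fierce (hence the upgrade to a mutex).
        while (!locked.compareAndSet(false, true)) { /* spin */ }
    }

    void unlock() {
        locked.set(false);
    }
}

public class SpinLockDemo {
    static int counter = 0;

    public static void main(String[] args) throws InterruptedException {
        SpinLock lock = new SpinLock();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) {
                lock.lock();
                try { counter++; } finally { lock.unlock(); }
            }
        };
        Thread t1 = new Thread(task), t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter); // 200000: no lost updates
    }
}
```

With contention this low, the CAS almost always succeeds on the first try; under heavy contention the spin loop is exactly the wasted CPU the text describes.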

The pursuit of perfection never ends. The HotSpot authors found that in most cases a lock is not only free of multi-threaded contention, it is acquired repeatedly by the same thread. If even the lightweight path of acquiring the lock (a CAS) carries a cost, can that cost be reduced further for this common case?

Biased locking

With biased locking, the lock object quietly "favors" the thread that accesses it: the object remembers that thread's ID, and when the same thread acquires the lock again it only has to show its identity. If the ID matches, the lock is granted at almost no cost

However, in a multi-threaded environment the same thread cannot hold the lock forever; other threads need to work too. Once multiple threads compete, the biased lock goes through an upgrade process

Consider this for a moment: can a biased lock skip the lightweight stage and be upgraded straight to a heavyweight lock?

The purpose of having multiple lock states for the same lock object is obvious:

The fewer resources consumed, the faster the program executes

Biased locks and lightweight locks never call the operating system mutex (Mutex Lock); they are two lock states that exist purely to improve performance, so that the most appropriate strategy can be adopted in each scenario. We can therefore conclude:

  • Biased lock: no contention at all, only one thread ever enters the critical section; a biased lock is used

  • Lightweight lock: multiple threads enter the critical section, but alternately, never at the same time; a lightweight lock is used

  • Heavyweight lock: multiple threads enter the critical section at the same time, and the operating system mutex arbitrates

At this point you should see the big picture, but plenty of questions remain:

  1. Where does the lock object store the thread ID used to recognize the same thread?
  2. How does the upgrade process actually work?

To answer these questions, you need to know the structure of the Java object header

Learn about Java object headers

You might expect recognizing a thread by its ID to require a separate mapping, but a separately maintained map would itself need to be thread-safe. Following Occam's razor: everything in Java is an object, and any object can serve as a lock, so rather than maintain a mapping on the side, it is simpler to keep the lock information on the Java object itself

A Java object header consists of up to three parts:

  1. MarkWord
  2. ClassMetadata Address
  3. Array Length (present only when the object is an array)

The MarkWord is where the lock state lives. An object's lock can be upgraded from biased to lightweight and then to heavyweight, which together with the initial lock-free state makes four states to represent. Packing that much information into one object header naturally means storing it bitwise; on a 64-bit JVM it is laid out as shown below (note the color coding). For the authoritative comments, see the HotSpot (1.8) source file hotspot/src/share/vm/oops/markOop.hpp, around line 30
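For reference, here is the gist of that comment, paraphrased from memory (consult markOop.hpp itself for the authoritative version):

```text
64-bit Mark Word layouts (paraphrased from the markOop.hpp comment):

 unused:25 | hash:31 | unused:1 | age:4 | biased_lock:1 | lock:2   (normal object)
 thread:54 | epoch:2 | unused:1 | age:4 | biased_lock:1 | lock:2   (biased object)

 low-bit patterns (biased_lock + lock):
   101  biasable / biased
   001  unlocked
   x00  lightweight locked (rest of the word points to a stack lock record)
   x10  heavyweight locked (rest of the word points to an ObjectMonitor)
   x11  marked for GC
```

Notice that the biased layout has no room for a hash: the thread ID and epoch occupy the bits where the identity hashcode would otherwise live. That detail matters later in this article.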

With that basic information in place, all we need to figure out is how the lock information in the MarkWord changes

Getting to know biased locking

As programmers, we like to let code do the talking. The OpenJDK project provides JOL (Java Object Layout), a tool for inspecting the memory layout of objects.

Maven dependency

<dependency>
  <groupId>org.openjdk.jol</groupId>
  <artifactId>jol-core</artifactId>
  <version>0.14</version>
</dependency>

Gradle dependency

implementation 'org.openjdk.jol:jol-core:0.14'

Let’s take a closer look at biased locking through the code

Note:

The MarkWord diagrams above read (left to right) from high bits to low bits

The JOL output reads (left to right) from low bits to high bits
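The difference is just byte order: JOL prints the header byte by byte, and on a little-endian machine the low-order byte comes first. A small sketch (the value 0x05 here is the "biasable" bit pattern 101 sitting in the low byte; this is an illustration, not JOL code):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class EndianDemo {
    // Render a 64-bit value the way a little-endian machine stores it in memory:
    // low-order byte first, which is the order JOL walks the header bytes.
    static String littleEndianHex(long v) {
        ByteBuffer buf = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
        buf.putLong(v);
        StringBuilder sb = new StringBuilder();
        for (byte b : buf.array()) sb.append(String.format("%02x ", b));
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        // 0x05 = binary 101: biased_lock=1, lock=01 (biasable, unlocked)
        System.out.println(littleEndianHex(0x05L)); // 05 00 00 00 00 00 00 00
    }
}
```

So a MarkWord that the table draws as "...0101" at the far right shows up at the far left of JOL's byte dump.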

Look at the test code

Scenario 1

	public static void main(String[] args) {
		Object o = new Object();
		log.info("Not in sync block, MarkWord is:");
		log.info(ClassLayout.parseInstance(o).toPrintable());
		synchronized (o) {
			log.info("Enter sync block, MarkWord is:");
			log.info(ClassLayout.parseInstance(o).toPrintable());
		}
	}

Look at the output:

The JOL version used above is 0.14, so the output shows raw bit values. From here on we will use version 0.16, because that version prints a much friendlier description.

Biased locking is enabled by default from JDK 1.6 on. Why, then, is the object initialized in the non-biasable, unlocked state?

Although biased locking is enabled by default, it kicks in only after a delay of roughly 4 seconds. The JVM itself uses synchronized in many places during startup; if biasing were active immediately, those locks would go through bias revocation and upgrade as soon as contention appeared, costing performance. Hence the delay policy

We can set the delay to 0 with the flag -XX:BiasedLockingStartupDelay=0, though this is not recommended. To understand what is going on, here is a diagram:

Scenario 2

Let's delay creating the object by 5 seconds to see whether biasing takes effect

	public static void main(String[] args) throws InterruptedException {
		// sleep 5s so the biased-locking startup delay has passed
		Thread.sleep(5000);
		Object o = new Object();
		log.info("Not in sync block, MarkWord is:");
		log.info(ClassLayout.parseInstance(o).toPrintable());
		synchronized (o) {
			log.info("Enter sync block, MarkWord is:");
			log.info(ClassLayout.parseInstance(o).toPrintable());
		}
	}

Review the run results:

The result is as expected, but the biasable state shown in the output does not appear in the MarkWord table above. It is in fact an anonymously biased state that the JVM sets up for us during object initialization

So when a thread enters the synchronized block:

  1. If the object is biasable: CAS the thread ID into the MarkWord directly; on success, the thread holds the biased lock
  2. If the object is not biasable: fall back to a lightweight lock
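The two cases can be sketched as toy logic in Java. This is a deliberately simplified model: the "mark word" below holds nothing but a fake thread ID, with 0 meaning anonymously biased; real HotSpot packs far more state into the word and handles revocation at safepoints:

```java
import java.util.concurrent.atomic.AtomicLong;

// Toy model of biased-lock acquisition; all names are illustrative.
public class BiasedAcquireSketch {
    static final AtomicLong markWord = new AtomicLong(0); // 0 = anonymously biased

    enum Outcome { BIAS_ACQUIRED, FAST_PATH, UPGRADE_TO_LIGHTWEIGHT }

    static Outcome enterSyncBlock(long threadId) {
        long owner = markWord.get();
        if (owner == threadId) {
            return Outcome.FAST_PATH;            // already biased to us: no CAS at all
        }
        if (owner == 0 && markWord.compareAndSet(0, threadId)) {
            return Outcome.BIAS_ACQUIRED;        // CAS our ID in: bias acquired
        }
        return Outcome.UPGRADE_TO_LIGHTWEIGHT;   // biased to someone else: revoke/upgrade
    }

    public static void main(String[] args) {
        System.out.println(enterSyncBlock(1)); // BIAS_ACQUIRED
        System.out.println(enterSyncBlock(1)); // FAST_PATH
        System.out.println(enterSyncBlock(2)); // UPGRADE_TO_LIGHTWEIGHT
    }
}
```

The point of the fast path is the first branch: once biased, re-entry by the same thread is a plain read and compare, with no CAS.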

Which raises the next question: now that the lock object is biased toward a specific thread, if a new thread comes along and executes the synchronized block, will the lock re-bias to the new thread?

Scenario 3

	public static void main(String[] args) throws InterruptedException {
		// sleep 5s so the biased-locking startup delay has passed
		Thread.sleep(5000);
		Object o = new Object();
		log.info("Not in sync block, MarkWord is:");
		log.info(ClassLayout.parseInstance(o).toPrintable());
		synchronized (o) {
			log.info("Enter sync block, MarkWord is:");
			log.info(ClassLayout.parseInstance(o).toPrintable());
		}

		Thread t2 = new Thread(() -> {
			synchronized (o) {
				log.info("New thread acquires lock, MarkWord is:");
				log.info(ClassLayout.parseInstance(o).toPrintable());
			}
		});
		t2.start();
		t2.join();

		log.info("The main thread looks at the lock object again, MarkWord is:");
		log.info(ClassLayout.parseInstance(o).toPrintable());

		synchronized (o) {
			log.info("The main thread enters the sync block again, MarkWord is:");
			log.info(ClassLayout.parseInstance(o).toPrintable());
		}
	}

Looking at the results, something interesting happens:

  • Marker 1: the initial anonymously biased state

  • Marker 2: the lock is biased to the main thread, which then exits the synchronized block

  • Marker 3: the new thread enters the synchronized block, and the lock is upgraded to a lightweight lock

  • Marker 4: the new thread's lightweight lock exits the synchronized block; viewed from the main thread, the object is now non-biasable

  • Marker 5: since the object can no longer be biased, when the main thread enters the synchronized block again (as in scenario 1), a lightweight lock is naturally used

At this point, scenarios 1 through 3 can be summarized in one diagram:

From these results, biased locking looks like a one-shot deal: once the lock is biased to one thread, every later acquisition attempt by another thread turns it into a lightweight lock, which seems very limiting. In fact that is not the whole story. Look closely at marker 2 (the biased state): there is an epoch field we have not mentioned yet, and that value is the key to breaking the limitation. Before we get to the epoch, though, we need the concept of bias revocation


Bias revocation

Before going into bias revocation, we need to be clear about one thing: revoking a biased lock and releasing a biased lock are two different matters

  1. Revocation: telling the lock object to stop using biased mode, generally because multiple threads have started competing for it
  2. Release: the ordinary exit from a synchronized method or the end of a synchronized block, as you would normally understand it

What is bias revocation?

Reverting the biased state, that is, changing the biased_lock bit (the third-lowest bit) of the MarkWord from 1 back to 0
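As a tiny sketch of that bit flip (using the low-bit positions of the MarkWord described earlier; this is an illustration, not JVM code):

```java
public class BiasBitSketch {
    // Low three bits of the Mark Word: [biased_lock | lock:2].
    // The biasable/biased state ends in 101; revocation clears bit 2, giving 001.
    static final long BIASED_LOCK_BIT = 1L << 2;

    static long revokeBias(long markWord) {
        return markWord & ~BIASED_LOCK_BIT; // clear the biased_lock bit
    }

    public static void main(String[] args) {
        long biasable = 0b101; // anonymously biased, unlocked
        System.out.println(Long.toBinaryString(revokeBias(biasable))); // 1 (i.e. 001)
    }
}
```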

If only one thread ever acquires the lock, the bias mechanism works exactly as intended and there is no reason to revoke it; revocation can therefore only happen under contention

Revoking a biased lock must not disturb the thread that holds it, so the JVM waits for all threads to reach a safepoint (the safe state the JVM establishes, for example during garbage collection, to guarantee that reference relationships do not change; every thread pauses there). The thread holding the biased lock is suspended at that safepoint

At the safepoint, the holding thread may still be in different states. Stating the conclusions first (this is simply how the source code is written; any confusion will be explained later):

  1. If the thread is no longer alive, or is alive but has already exited the synchronized block, the bias is simply revoked

  2. If the thread is alive and still inside the synchronized block, the lock is upgraded to a lightweight lock

So far none of this involves the epoch, because these are not all the scenarios. Biased locking is a scheme that improves performance only in specific situations, and programmers do not always write programs that fit them. Consider these scenarios (with biased locking enabled):

  1. One thread creates a large number of objects and performs the initial synchronized operations on them, after which a different thread uses those objects as locks. This produces a large number of bias revocations

  2. Using biased locks where multi-threaded contention is known to exist (a producer/consumer queue, say) likewise triggers revocation after revocation

Both scenarios inevitably lead to bias revocation. The cost of a single revocation is negligible; the cost of revocations in bulk is not. So what can be done? Rather than disabling biased locking outright, or tolerating the cost of mass revocation, the solution is a tiered fallback design

Bulk rebias

This addresses the first scenario. The JVM maintains a bias-revocation counter for each class; every time an object of that class has its bias revoked, the counter is incremented by 1. When the value reaches the rebias threshold (default 20):

BiasedLockingBulkRebiasThreshold = 20

the JVM decides the class has a biased-locking problem and performs a bulk rebias, which is implemented using the epoch

Epoch means just what it says: an era, a kind of timestamp. Each class has an epoch field, and every object of that class in the biased-lock state also carries an epoch in its Mark Word, whose initial value is the class's epoch at the time the object was created (the two start out equal). Each time a bulk rebias occurs, the class's value is incremented by one, and the stacks of all threads in the JVM are walked:

  1. For objects of the class that are currently locked, the epoch field in their Mark Word is updated to the new value
  2. For objects of the class that are not in the locked state (held by no thread right now, though previously biased to one, so their Mark Word is still in the biased state), the epoch field is left unchanged

The next time the lock is acquired, if the object's epoch differs from the class's, the bias is not revoked even though the object is still biased toward another thread. Instead, a CAS operation simply replaces the thread ID in the Mark Word with the current thread's ID. That is itself a degree of optimization, since no lock upgrade is needed;

If the epochs are equal, no bulk rebias has occurred: if the Mark Word already records another thread's ID and contention arises, the lock is upgraded (as in the earlier example, where epoch=0).
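The comparison in those two paragraphs can be sketched as follows (a toy model with invented names; the real checks live in HotSpot's biased-locking code):

```java
public class EpochCheckSketch {
    static int classEpoch = 1; // incremented on each bulk rebias

    // ownerThreadId == 0 means anonymously biased (no owner recorded yet)
    static String tryAcquire(int objectEpoch, long ownerThreadId, long selfThreadId) {
        if (objectEpoch != classEpoch) {
            // Stale epoch: the bias is treated as invalid even though the Mark Word
            // still carries another thread's ID, so we just CAS our own ID in.
            return "REBIAS_TO_SELF";
        }
        if (ownerThreadId != 0 && ownerThreadId != selfThreadId) {
            return "REVOKE_OR_UPGRADE"; // a still-valid bias held by another thread
        }
        return "BIASED_FAST_PATH";
    }

    public static void main(String[] args) {
        System.out.println(tryAcquire(0, 42, 7)); // REBIAS_TO_SELF
        System.out.println(tryAcquire(1, 42, 7)); // REVOKE_OR_UPGRADE
        System.out.println(tryAcquire(1, 7, 7));  // BIASED_FAST_PATH
    }
}
```

The first case is the payoff of bulk rebias: a stale epoch lets a new thread take the bias over cheaply instead of forcing a revocation.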

Bulk rebias is the first tier of the fallback; the second tier is

Bulk revoke

Suppose the class counter keeps growing past the rebias threshold and reaches the bulk-revoke threshold (default 40),

BiasedLockingBulkRevokeThreshold = 40

The JVM then concludes that the class is genuinely contended by multiple threads and marks it non-biasable; from that point on, locking on that class goes straight to the lightweight-lock logic

That is the second tier. But between the first tier and the second, before biased locking is disabled for the class entirely, there is one more chance, governed by another timer:

BiasedLockingDecayTime = 25000
  1. If the cumulative revocation count reaches 40 within 25 seconds of the last bulk rebias, a bulk revoke occurs.
  2. If more than 25 seconds have passed since the last bulk rebias, the count is reset within the [20, 40) range: the class gets another chance
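The tiered thresholds can be sketched as a small counter state machine. All names here are illustrative; the real logic lives in HotSpot's biased-locking code, and this model only mirrors the default thresholds described above:

```java
// Toy model of the per-class revocation counter with the default thresholds
// BiasedLockingBulkRebiasThreshold=20, BiasedLockingBulkRevokeThreshold=40,
// BiasedLockingDecayTime=25000ms. Names and structure are illustrative only.
public class RevocationCounterSketch {
    static final int BULK_REBIAS_THRESHOLD = 20;
    static final int BULK_REVOKE_THRESHOLD = 40;
    static final long DECAY_TIME_MS = 25_000;

    int revocationCount = 0;
    long lastBulkRebiasTime = 0;

    /** Called on each single bias revocation; returns the action taken. */
    String onRevocation(long nowMs) {
        // More than 25s since the last bulk rebias: decay the count in [20, 40)
        // back to zero, giving the class another chance.
        if (lastBulkRebiasTime != 0 && nowMs - lastBulkRebiasTime > DECAY_TIME_MS
                && revocationCount >= BULK_REBIAS_THRESHOLD
                && revocationCount < BULK_REVOKE_THRESHOLD) {
            revocationCount = 0;
        }
        revocationCount++;
        if (revocationCount == BULK_REVOKE_THRESHOLD) {
            return "BULK_REVOKE";   // class marked non-biasable from now on
        }
        if (revocationCount == BULK_REBIAS_THRESHOLD) {
            lastBulkRebiasTime = nowMs;
            return "BULK_REBIAS";   // increment the class epoch, rebias live locks
        }
        return "SINGLE_REVOKE";
    }

    public static void main(String[] args) {
        RevocationCounterSketch c = new RevocationCounterSketch();
        String last = "";
        for (int i = 0; i < 20; i++) last = c.onRevocation(1_000);
        System.out.println(last); // BULK_REBIAS
        for (int i = 0; i < 20; i++) last = c.onRevocation(2_000);
        System.out.println(last); // BULK_REVOKE
    }
}
```

Twenty revocations trigger the bulk rebias; twenty more within 25 seconds trigger the bulk revoke. Had the second batch arrived later than 25 seconds, the counter would have decayed first.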

If you are interested, you can write code to test the critical point and observe the change of the lock object Markword

So far, the entire workflow of biased locking can be represented by a graph:

By now you should have a basic understanding of biased locking, but there are still plenty of questions. Let's keep going:

Where did the hashCode go?

In scenario 1 above, the object header in the unlocked state contains no hashcode, and after entering the synchronized block it still contains none. So where is our hashCode?

The first thing to know about the hashcode is that it is not written into the object header when the object is created. It is stored there only after the first call to Object::hashCode() or System::identityHashCode(Object). The value must remain the same once generated, yet biased locking rewrites the lock object's MarkWord back and forth, which is bound to interact with where the hashcode lives. Let's verify with code:

Scenario 1

	public static void main(String[] args) throws InterruptedException {
		// sleep 5s so the biased-locking startup delay has passed
		Thread.sleep(5000);

		Object o = new Object();
		log.info("Hashcode not generated, MarkWord is:");
		log.info(ClassLayout.parseInstance(o).toPrintable());

		o.hashCode();
		log.info("Hashcode generated, MarkWord is:");
		log.info(ClassLayout.parseInstance(o).toPrintable());

		synchronized (o) {
			log.info("Enter sync block, MarkWord is:");
			log.info(ClassLayout.parseInstance(o).toPrintable());
		}
	}

Let’s see what the results are

The conclusion: even though the object is initialized in the biasable state, once Object::hashCode() or System::identityHashCode(Object) has been called, entering the synchronized block uses a lightweight lock directly

Scenario 2

What if the lock is first biased to a thread, the hashcode is then generated, and the same thread enters the synchronized block again? Look at the code:

	public static void main(String[] args) throws InterruptedException {
		// sleep 5s so the biased-locking startup delay has passed
		Thread.sleep(5000);

		Object o = new Object();
		log.info("Hashcode not generated, MarkWord is:");
		log.info(ClassLayout.parseInstance(o).toPrintable());

		synchronized (o) {
			log.info("Enter sync block, MarkWord is:");
			log.info(ClassLayout.parseInstance(o).toPrintable());
		}

		o.hashCode();
		log.info("Generate hashcode");
		synchronized (o) {
			log.info("The same thread enters the synchronized block again, MarkWord is:");
			log.info(ClassLayout.parseInstance(o).toPrintable());
		}
	}

View the run result:

Conclusion: same as scenario 1, a lightweight lock is used directly

Scenario 3

And what happens if those two methods are called inside the synchronized block while the object is biased? More code to verify:

	public static void main(String[] args) throws InterruptedException {
		// sleep 5s so the biased-locking startup delay has passed
		Thread.sleep(5000);

		Object o = new Object();
		log.info("Hashcode not generated, MarkWord is:");
		log.info(ClassLayout.parseInstance(o).toPrintable());

		synchronized (o) {
			log.info("Enter sync block, MarkWord is:");
			log.info(ClassLayout.parseInstance(o).toPrintable());
			o.hashCode();
			log.info("In biased state, generate hashcode, MarkWord is:");
			log.info(ClassLayout.parseInstance(o).toPrintable());
		}
	}

To see the results:

The conclusion: if the hashcode is generated while the object is in the biased state, the lock is upgraded directly to a heavyweight lock

Finally, here is a passage from the book that sums up the relationship between locks and hashcode

What happens when Object.wait() is called?

Besides the hashCode method discussed above, Object also provides wait(), which is commonly used inside synchronized blocks. How does it affect the lock? Look at the code:

	public static void main(String[] args) throws InterruptedException {
		// sleep 5s so the biased-locking startup delay has passed
		Thread.sleep(5000);

		Object o = new Object();
		log.info("Hashcode not generated, MarkWord is:");
		log.info(ClassLayout.parseInstance(o).toPrintable());

		synchronized (o) {
			log.info("Enter sync block, MarkWord is:");
			log.info(ClassLayout.parseInstance(o).toPrintable());

			log.info("wait 2s");
			o.wait(2000);

			log.info("After invoking wait, MarkWord is:");
			log.info(ClassLayout.parseInstance(o).toPrintable());
		}
	}

View the run result:

The conclusion: wait() belongs to the mutex (heavyweight lock) mechanism; once it is called, the lock is upgraded to a heavyweight lock.

Finally, we can enrich the lock-object transition diagram:


Farewell, biased locking

JEP 374: Deprecate and Disable Biased Locking

This change is fairly recent, taking effect in JDK 15

The stated reason: the maintenance cost is too high

In short: prior to JDK 15, biased locking was enabled by default; starting with JDK 15 it is disabled by default, unless explicitly enabled with -XX:+UseBiasedLocking

An article from the Quarkus team was even more direct

Biased locking adds a huge amount of complexity to the JVM; only a few very experienced engineers understand the code, the maintenance cost is high, and it significantly drags on the development of new features (to put it another way: got that? Are you one of those few experienced engineers? Ha ha)

Conclusion

Biased locking may have run its course. Some readers will ask outright: it is deprecated and the JDK is at 17, so why spend so much time on it?

  1. Java being Java, many of us are still on Java 8, which remains very much mainstream; biased locking is not deprecated in the version you are probably using
  2. Interviews will still ask about it, a lot
  3. If a better design comes along, biased locking may return in a new form; understanding this evolution helps you understand the design thinking behind it
  4. Occam's razor again: do not multiply entities beyond necessity. Real-world optimization works the same way; if an addition brings a great deal of cost, it is better to boldly remove it and accept a small gap

Before writing this article I had only a superficial, theoretical understanding of biased locking. To write it, I read a great deal of material and revisited the HotSpot source. Even so, this article cannot cover every detail of the whole process; you will need to trace it in practice. Here are a few key entry points in the source to help you:

  1. Biased lock acquisition entry: hg.openjdk.java.net/jdk8u/jdk8u…
  2. Bias revocation entry: hg.openjdk.java.net/jdk8u/jdk8u…
  3. Biased lock release entry: hg.openjdk.java.net/jdk8u/jdk8u…

If you have any questions about this article, please leave a comment to discuss; if there are mistakes, please help correct them

One soul-searching question

  1. Where does hashCode exist for lightweight and heavyweight locks?

References

Thanks to all the predecessors for their excellent summaries, which I consulted:

  1. www.oracle.com/technetwork…
  2. www.oracle.com/technetwork…
  3. wiki.openjdk.java.net/display/Hot…
  4. github.com/farmerjohng…
  5. zhuanlan.zhihu.com/p/440994983
  6. mp.weixin.qq.com/s/G4z08Hfiq…
  7. www.jianshu.com/p/884eb5126…

Personal blog: https://dayarch.top

Day arch a soldier | original