preface

In this article, we will work through synchronized by way of some common questions. Here are the questions we will focus on:

  • Optimistic locks and pessimistic locks?
  • What is the underlying implementation of synchronized?
  • How is synchronized reentrant implemented?
  • Synchronized lock upgrade?
  • Is synchronized fair or unfair?
  • What’s the difference between synchronized and volatile?
  • What should be paid attention to when using synchronized?

Note: the analysis in this article is based on JDK 1.8.

Optimistic locks and pessimistic locks?

For the definition and usage scenarios of optimistic and pessimistic locks, please refer to Mysql InnoDB’s Various locks.

Pessimistic locks are what the rest of this article is about: both synchronized and Lock are pessimistic locks. First, let's look at optimistic locks in detail.

Implementation of optimistic locking: CAS

The core of optimistic locking is CAS (Compare And Swap, a non-blocking synchronization method), which is a lock-free algorithm. The CAS algorithm involves three operands:

  • The memory value V to be read or written.
  • The expected value A to compare against.
  • The new value B to write.

If the current memory value V equals the expected value A, the new value B is written. Could B then overwrite a value that someone else just wrote? No, because the comparison and the write are performed as one atomic operation, here via the CPU's CMPXCHG instruction: it compares the expected value A in the register with the memory value V; if they are equal, V is set to B; if they are not equal, the current value of V is loaded back into the register as the new expected value, and the caller can either keep spinning and retry or give up and throw the appropriate error.
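To make the three operands concrete, here is a minimal runnable sketch using AtomicInteger.compareAndSet (the variable names are only for illustration):

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger v = new AtomicInteger(5);   // memory value V = 5

        // Expected value A = 5, new value B = 6: V equals A, so the write succeeds.
        boolean first = v.compareAndSet(5, 6);

        // Expected value A = 5, but V is now 6, so the write fails.
        boolean second = v.compareAndSet(5, 7);

        System.out.println(first + " " + second + " " + v.get()); // true false 6
    }
}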

Let’s take a look at how a common AtomicInteger spins.

AtomicInteger

An int value that may be updated atomically. For properties of atomic variables, see the {@link java.util.concurrent.atomic} package description. An AtomicInteger is used in applications such as atomically incremented counters, and cannot be used as a replacement for an Integer. However, this class does extend Number to allow uniform access by tools and utilities that deal with numerically based classes.

Fields and constructors

package java.util.concurrent.atomic;

public class AtomicInteger extends Number implements java.io.Serializable {
    private static final long serialVersionUID = 6214790243416807050L;

    // Set up to use Unsafe.compareAndSwapInt for updates.
    private static final Unsafe unsafe = Unsafe.getUnsafe();
    private static final long valueOffset;

    // Get the memory offset of the value field so CAS can locate it.
    static {
        try {
            valueOffset = unsafe.objectFieldOffset(
                AtomicInteger.class.getDeclaredField("value"));
        } catch (Exception ex) {
            throw new Error(ex);
        }
    }

    // volatile guarantees visibility and forbids instruction reordering.
    private volatile int value;

    public AtomicInteger(int initialValue) {
        value = initialValue;
    }

    public AtomicInteger() {
    }
    // ... remaining methods omitted
}

incrementAndGet

public final int incrementAndGet() {
    return unsafe.getAndAddInt(this, valueOffset, 1) + 1;
}

The parameters of getAndAddInt are:

  • var1: the AtomicInteger object itself; together with valueOffset, Unsafe uses it to read the latest value from the object.
  • var2: the memory offset of the value field within the AtomicInteger object.
  • var4: the increment, here 1.

// sun.misc.Unsafe
public final int getAndAddInt(Object var1, long var2, int var4) {
    int var5;
    do {
        // Read the current value of the AtomicInteger (volatile read).
        var5 = this.getIntVolatile(var1, var2);
        // Spin until the CAS succeeds.
    } while (!this.compareAndSwapInt(var1, var2, var5, var5 + var4));
    return var5;
}

Disadvantages of optimistic locking

  • Under high contention, CAS keeps spinning and burns CPU; if too many threads spin, CPU usage soars.
  • CAS only guarantees atomicity for a single shared variable; it cannot make an operation that spans multiple shared variables atomic.
    • The JDK provides AtomicReference, which guarantees atomic updates of a reference: multiple variables can be placed in a single object and the reference to that object swapped with one CAS (see the sketch below).
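As a sketch of the second point above (the Range class and field names are made up for illustration), related fields can be wrapped in one immutable object and the reference to it swapped with a single CAS:

import java.util.concurrent.atomic.AtomicReference;

public class AtomicReferenceDemo {

    // Immutable holder so both fields always change together.
    static class Range {
        final int low;
        final int high;
        Range(int low, int high) { this.low = low; this.high = high; }
    }

    private static final AtomicReference<Range> RANGE = new AtomicReference<>(new Range(0, 10));

    // Atomically replace both bounds with one CAS on the reference.
    static boolean setRange(int low, int high) {
        Range old = RANGE.get();
        return RANGE.compareAndSet(old, new Range(low, high));
    }

    public static void main(String[] args) {
        System.out.println(setRange(1, 20)); // true (no contention in this demo)
    }
}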

What is the underlying implementation of synchronized?

Synchronized is implemented via the lock flag bits in the Mark Word of the object header, plus a monitor.

Java object header

The object header consists of the Mark Word and the Klass pointer; on a 64-bit JVM without compressed pointers, each is 8 bytes.

  • Mark Word: the mark field, holding runtime data such as the hash code, GC information, and lock information.
  • Klass: a metadata pointer to the class of the object.

The lock flag bits together with the biased_lock bit encode the several lock states of an object: unlocked, biased, lightweight, and heavyweight.
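If you want to look at these bits yourself, one option is the OpenJDK JOL tool. The sketch below assumes the org.openjdk.jol:jol-core dependency is on the classpath; it simply prints the object layout before and during a synchronized block so the changing Mark Word can be observed.

import org.openjdk.jol.info.ClassLayout;

public class ObjectHeaderDemo {
    public static void main(String[] args) {
        Object lock = new Object();

        // Object layout before locking: the Mark Word shows the unlocked state.
        System.out.println(ClassLayout.parseInstance(lock).toPrintable());

        synchronized (lock) {
            // Inside the synchronized block the lock bits in the Mark Word change.
            System.out.println(ClassLayout.parseInstance(lock).toPrintable());
        }
    }
}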

monitor

Synchronized implements thread synchronization and collaboration through Monitor.

  • Synchronization relies on the operating system's mutex: only the thread holding the mutex can enter the critical section, while the others block and wait in a queue maintained by the monitor.

  • Cooperation relies on the object that synchronized locks on, which makes it easy for multiple threads to coordinate. A thread can call wait() on the object to release the lock and enter the wait set, and other threads can call notify() or notifyAll() on it to wake waiting threads so they compete for the lock again.

The monitor tracks which thread holds the lock and schedules the threads contending for it; the locked object can be thought of as the medium through which threads synchronize and cooperate.
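A minimal sketch of this cooperation (the class and field names are just for illustration): one thread waits on the monitor object until another thread changes the shared state and notifies it.

public class MonitorCoopDemo {
    private static final Object LOCK = new Object();
    private static boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread waiter = new Thread(() -> {
            synchronized (LOCK) {
                while (!ready) {
                    try {
                        LOCK.wait();      // releases the lock and enters the wait set
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                        return;
                    }
                }
                System.out.println("woken up, lock re-acquired");
            }
        });
        waiter.start();

        Thread.sleep(100);
        synchronized (LOCK) {
            ready = true;
            LOCK.notifyAll();             // wakes waiting threads; they re-compete for the lock
        }
        waiter.join();
    }
}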

For specific examples, see "Thread Source Code Reading and Related Issues".

Monitoring and scheduling lock ownership consumes resources, including the underlying mutex. To reduce the performance cost of acquiring and releasing locks, JDK 6 introduced biased locks and lightweight locks, which are described below.

unlocked

When the lock flag bits in the object header are 01 and the biased_lock bit is 0, the object is unlocked. To synchronize without locking, use the optimistic approach described above: CAS.

Biased locking

Biased locking is a product of lock optimization: when only one thread is accessing the critical section, the object header simply records that thread, so neither CAS nor a heavyweight monitor is needed.

A thread does not actively release a biased lock. Only when another thread tries to compete for it does the JVM wait for a global safepoint (a point at which no bytecode is being executed), suspend the thread holding the biased lock, and check whether it is still alive. If it is dead, the object reverts to the unlocked state so another thread can take the bias; if it is still in the critical section, the lock is upgraded to a lightweight lock.

When a thread enters the critical section and sees from the object header that the lock is biased, it checks whether the thread ID in the header is its own. If it is, it proceeds directly. If not, it attempts a CAS to replace the thread ID (the previous owner may have just finished with it). If the CAS fails, the JVM suspends the thread holding the biased lock and checks whether it still needs the lock: if not, the lock is released to the new thread; if it is still using it, the lock is upgraded to a lightweight lock.

Biased locking can be turned off with the JVM flag -XX:-UseBiasedLocking, in which case the program goes straight to lightweight locking.

The author's view

Biased locking avoids the mutex overhead when only one thread enters the critical section, but notice that threads do not actively release the biased lock. Why not release it when leaving the critical section? Why wait for another thread to arrive, wait for a global safepoint, suspend the owning thread, and only then inspect its state? These questions are not fully answered here; either it is a deliberate design trade-off, or there is a reason buried in JVM source code that the author is not familiar with. Perhaps it will be optimized in later versions.

So the key point is: biased locking is a lock optimization, not a real lock; it only marks the object header. It reduces overhead when there is no concurrent access to the critical section, and it is upgraded as soon as more than one thread competes.

Lightweight lock

A lightweight lock appears when a biased lock is upgraded or when biased locking is disabled with -XX:-UseBiasedLocking. Before a thread executes a synchronized block, the JVM creates space in the current thread's stack frame to store a lock record and copies the Mark Word from the object header into it; this copy is officially called the Displaced Mark Word. The thread then attempts to use CAS to replace the Mark Word in the object header with a pointer to the lock record. If the CAS succeeds, the current thread acquires the lock; if it fails, the thread synchronizes by spinning.

Heavyweight lock

A heavyweight lock has lock flag bits 10 and uses the monitor mechanism introduced above, the most expensive of the three.

Synchronized lock upgrade?

See the section "What is the underlying implementation of synchronized?" above: as contention increases, the lock is upgraded from no lock to biased lock, to lightweight lock, and finally to heavyweight lock.

How is synchronized reentrant implemented?

Let’s prove it with code:

public class SynchronizedReentrantTest extends Father {

    public synchronized void doSomeThing1() {
        System.out.println("doSomeThing1");
        doSomeThing2();
    }

    public synchronized void doSomeThing2() {
        System.out.println("doSomeThing2");
        super.fatherDoSomeThing();
    }

    public static void main(String[] args) {
        SynchronizedReentrantTest synchronizedReentrantTest = new SynchronizedReentrantTest();
        synchronizedReentrantTest.doSomeThing1();
    }
}

class Father {
    public synchronized void fatherDoSomeThing() {
        System.out.println("fatherDoSomeThing");
    }
}

Output:

doSomeThing1
doSomeThing2
fatherDoSomeThing

Synchronized is reentrant.

Heavyweight locks implement reentrancy with the count field in the monitor object. A biased lock only needs to record that the current thread holds it, with no counter; a lightweight lock adds a Lock Record each time the lock is entered, so the number of Lock Records reflects the number of reentries.
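To illustrate the owner-plus-count idea behind reentrancy, here is a simplified sketch built on wait/notify; it is not the JVM's actual ObjectMonitor implementation, just the same bookkeeping in plain Java.

public class SimpleReentrantLock {
    private Thread owner;   // the thread currently holding the lock
    private int count;      // reentry count, analogous to the monitor's count field

    public synchronized void lock() throws InterruptedException {
        Thread current = Thread.currentThread();
        if (owner == current) {    // reentrant acquire: just bump the counter
            count++;
            return;
        }
        while (owner != null) {    // another thread holds the lock: wait
            wait();
        }
        owner = current;
        count = 1;
    }

    public synchronized void unlock() {
        if (owner != Thread.currentThread()) {
            throw new IllegalMonitorStateException();
        }
        if (--count == 0) {        // only the outermost unlock really releases
            owner = null;
            notifyAll();
        }
    }
}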

The author's view

Why doesn't a biased lock record a reentry count? For reentrancy it only needs to check that the current thread is the owner, and the object header has nowhere to store a count. That is also why a biased lock is not actively released when leaving the critical section (judging how deeply the lock is nested would be troublesome); instead, another thread has to check whether the owner has died before the lock can be released, and may even have to suspend that thread. In this respect it is not as graceful as lightweight and heavyweight locks.

Is synchronized fair or unfair?

Unfair. Let's go straight to the cafeteria example below:

import lombok.SneakyThrows;

public class SyncUnFairLockTest {

    // The dining hall: only one student can be served at a time.
    private static class DiningRoom {
        @SneakyThrows
        public void getFood() {
            System.out.println(Thread.currentThread().getName() + ": queuing");
            synchronized (this) {
                System.out.println(Thread.currentThread().getName() + ": @@@@@@ getting food @@@@@@");
                Thread.sleep(200);
            }
        }
    }

    public static void main(String[] args) {
        DiningRoom diningRoom = new DiningRoom();
        for (int i = 0; i < 5; i++) {
            new Thread(() -> diningRoom.getFood(), "student 00" + (i + 1)).start();
        }
    }
}

Output:

student 001: queuing
student 001: @@@@@@ getting food @@@@@@
student 005: queuing
student 003: queuing
student 004: queuing
student 002: queuing
student 002: @@@@@@ getting food @@@@@@
student 004: @@@@@@ getting food @@@@@@
student 003: @@@@@@ getting food @@@@@@
student 005: @@@@@@ getting food @@@@@@

Notice that I added a sleep. For a fair lock it would not matter: first come, first served. With an unfair lock, a thread that arrives later first tries to grab the lock directly, and only joins the queue if that fails. This avoids the cost of suspending and then waking an already queued thread, so it improves efficiency, but an unfair lock can starve threads.
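For contrast, an explicitly fair lock is available in java.util.concurrent. The sketch below (class and method names are mine, not from the example above) swaps the synchronized block for a ReentrantLock constructed in fair mode, so waiting threads acquire the lock roughly in arrival order.

import java.util.concurrent.locks.ReentrantLock;

public class FairLockDemo {
    // true = fair: waiting threads acquire the lock roughly in arrival order.
    private static final ReentrantLock FAIR_LOCK = new ReentrantLock(true);

    static void getFood() {
        System.out.println(Thread.currentThread().getName() + ": queuing");
        FAIR_LOCK.lock();
        try {
            System.out.println(Thread.currentThread().getName() + ": getting food");
            Thread.sleep(200);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } finally {
            FAIR_LOCK.unlock();
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) {
            new Thread(FairLockDemo::getFood, "student 00" + (i + 1)).start();
        }
    }
}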

What’s the difference between synchronized and volatile?

Synchronized guarantees atomicity because it is a lock; volatile does not guarantee atomicity.

Visibility mainly refers to whether a thread's working memory and main memory are synchronized in time when a variable is shared between threads; both volatile and synchronized provide it.

The JMM has two rules about synchronized:

  • Before a thread releases the lock, it must flush the latest values of shared variables back to main memory.
  • When a thread acquires the lock, it clears the values of shared variables cached in its working memory, so that it must re-read the latest values from main memory (see the sketch below).
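A minimal sketch of the difference (names are for illustration only): the volatile flag makes the writer's update visible to the reader thread, while synchronized makes the read-modify-write of the counter atomic.

public class VolatileVsSynchronizedDemo {
    private static volatile boolean running = true;   // visibility: the reader sees the write promptly
    private static int counter = 0;

    private static synchronized void increment() {    // atomicity: read-modify-write under one lock
        counter++;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {        // without volatile, this loop might never observe running = false
                increment();
            }
        });
        worker.start();

        Thread.sleep(100);
        running = false;             // volatile write, visible to the worker thread
        worker.join();
        System.out.println("counter = " + counter);
    }
}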
