The preface

In Java today, synchronized is a non-fair, mutually exclusive lock.

The synchronized keyword can be applied to methods or to code blocks.

When applied to an instance method, it locks on the current object (this); when applied to a static method, it locks on the Class object.
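The three forms can be sketched as follows (class, field, and method names here are illustrative, not from any particular library):

```java
public class SyncForms {
    private int count = 0;
    private static int staticCount = 0;
    private final Object lock = new Object();

    // Instance method: locks on this object's monitor.
    public synchronized void incInstance() {
        count++;
    }

    // Static method: locks on SyncForms.class (the class monitor).
    public static synchronized void incStatic() {
        staticCount++;
    }

    // Code block: locks on an explicitly chosen object.
    public void incBlock() {
        synchronized (lock) {
            count++;
        }
    }

    public int getCount() {
        synchronized (this) {  // same monitor as incInstance()
            return count;
        }
    }
}
```

Note that incInstance() and incBlock() use different monitors (this vs. lock), so they do not exclude each other; only threads contending for the same monitor are serialized.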

Synchronized guarantees thread safety under concurrency and provides the following properties:

  • Atomicity: synchronized is mutually exclusive, so only one thread executes the protected code at a time, which guarantees atomicity
  • Visibility: when a thread acquires the lock it reads the latest values from main memory, and when it releases the lock its changes are flushed back, which guarantees visibility
  • Ordering: the compiler and CPU may reorder instructions to improve efficiency; synchronized prevents harmful reordering across the lock boundary by means of memory barriers
  • Reentrancy: synchronized is reentrant, implemented through the monitor's recursions counter
  • Non-interruptibility: a thread blocked waiting to acquire the lock cannot be interrupted out of that blocked state
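A minimal demonstration of the atomicity guarantee: two threads each increment a shared counter 10,000 times. Because count++ is a read-modify-write protected by the monitor, the final value is always 20,000 (without synchronized, updates could be lost). The class name is illustrative.

```java
public class SyncCounter {
    private int count = 0;

    private synchronized void increment() {
        count++;  // the read-modify-write is atomic under the monitor
    }

    // Runs two threads that each increment the counter 10,000 times.
    public static int run() throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) c.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return c.count;  // 20000: no increments are lost
    }
}
```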

Synchronized low-level implementation

Under the hood, synchronized is implemented by a C++ object in the HotSpot JVM: the monitor (ObjectMonitor).

The monitor has four important fields:

  • owner: the thread that currently holds the lock
  • recursions: the number of times the owner has (re)acquired the lock
  • EntryList: the queue of threads blocked waiting for the lock
  • WaitSet: the set of threads in the waiting state (threads that have called wait())

Implementation of synchronized code blocks

Locking: monitorenter

When locking, the JVM checks whether a monitor object already exists for the lock object; if not, it creates one first.

It then checks the monitor's state. If recursions is 0, the thread tries to set owner to itself with a CAS; if the CAS succeeds, recursions is set to 1 and the lock is acquired. If owner already points to the current thread, the reentrant acquisition succeeds and recursions is incremented. Otherwise, the thread fails to acquire the lock and blocks in the EntryList.

Releasing the lock: monitorexit

Each monitorexit decrements recursions by 1; when it reaches 0, the lock is released and the threads blocked in the EntryList are notified to compete for it.
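The enter and exit bookkeeping described above can be sketched in Java as a simplified model. This is not the real ObjectMonitor (which parks threads in the EntryList, manages the WaitSet, and so on); it only shows the owner/recursions handling, with field names following the description above.

```java
import java.util.concurrent.atomic.AtomicReference;

// Simplified model of the monitor's enter/exit bookkeeping.
class MonitorSketch {
    private final AtomicReference<Thread> owner = new AtomicReference<>();
    private int recursions = 0;

    // monitorenter
    boolean tryEnter() {
        Thread self = Thread.currentThread();
        if (owner.get() == self) {              // reentrant acquisition
            recursions++;
            return true;
        }
        if (owner.compareAndSet(null, self)) {  // CAS owner from null to self
            recursions = 1;
            return true;
        }
        return false;  // the real monitor would block the thread in the EntryList
    }

    // monitorexit
    void exit() {
        if (owner.get() != Thread.currentThread())
            throw new IllegalMonitorStateException();
        if (--recursions == 0)
            owner.set(null);  // fully released; waiters may now compete
    }

    int recursions() { return recursions; }
}
```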

Implementation of synchronized methods

Synchronized methods are marked with the ACC_SYNCHRONIZED access flag instead of explicit monitorenter/monitorexit bytecodes; when the JVM invokes such a method, it performs the same monitor enter and exit around the call.

Lock optimization

Synchronized has been greatly optimized since JDK 1.6 and is no longer the purely heavyweight lock it once was.

Optimization process:

Biased locks -> Lightweight locks -> Spin locks -> Heavyweight locks

Biased locking

Biased locking helps when a single thread repeatedly enters a synchronized block; it stops helping once multiple threads access the block.

The lock state is stored in the lock object's layout in heap memory. An object's in-memory layout has three parts: ① the object header (Mark Word and Klass Pointer), ② instance data, and ③ alignment padding.

Under biased locking, the object header's Mark Word stores the owning thread's ID (54 bits), the biased flag (1 bit, set to 1), and the lock flag bits (01). When a thread tries to acquire the lock, it only needs to compare the stored thread ID with its own.

Lightweight lock

Lightweight locks suit scenarios where multiple threads enter a code block alternately, but not scenarios where threads actively compete for the lock at the same time.

Acquiring a lightweight lock means copying the lock object's Mark Word into a LockRecord in the thread's stack frame and then CAS-updating the Mark Word to point to that LockRecord; if the update succeeds, the lock is acquired.

Spin locks

Before upgrading to a heavyweight lock, the JVM first tries to acquire the lock by spinning, because a heavyweight lock involves expensive switches between user mode and kernel mode (blocking and waking threads via park and unpark), which wastes system resources. By default, if the lock still cannot be acquired after 10 spins, it is upgraded to a heavyweight lock.

JDK 1.6 introduced adaptive spinning, which decides whether to spin again based on how the previous spin on the same lock went: if the lock took many attempts to acquire last time, the JVM spins less and upgrades to a heavyweight lock sooner.
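The idea of spinning can be illustrated with a simple spin lock built on CAS. This is a teaching sketch, not how HotSpot's adaptive spinning is actually implemented:

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A minimal spin lock: threads busy-wait on a CAS instead of blocking
// in the kernel, avoiding the user-mode/kernel-mode switch.
class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Spin until the CAS from false -> true succeeds.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();  // hint to the CPU that we are busy-waiting (JDK 9+)
        }
    }

    void unlock() {
        locked.set(false);
    }
}
```

Spinning wins when critical sections are short, because a waiting thread avoids being descheduled; it loses when critical sections are long, because the spinning thread burns CPU the whole time, which is exactly why the JVM caps the spin count and then upgrades.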

Heavyweight lock

Heavyweight locks are implemented with monitorenter and monitorexit on the monitor, which involves many kernel operations for blocking and waking threads.

Some optimizations for locks

  • Lock elimination: the JVM automatically determines whether a synchronized block can actually be contended and removes the lock if not.
  • Lock coarsening: if the same lock is acquired repeatedly by a series of adjacent operations, the scope of the lock is enlarged so that it is acquired only once.
  • Read/write locks: when reads greatly outnumber writes, a ReadWriteLock can replace synchronized for better throughput.
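For read-heavy workloads, java.util.concurrent.locks.ReentrantReadWriteLock lets many readers proceed in parallel while writers remain exclusive. A small read-mostly cache sketch (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

// A simple read-mostly cache: concurrent reads, exclusive writes.
class RwCache {
    private final Map<String, String> map = new HashMap<>();
    private final ReadWriteLock rw = new ReentrantReadWriteLock();

    String get(String key) {
        rw.readLock().lock();      // shared: many readers may hold this at once
        try {
            return map.get(key);
        } finally {
            rw.readLock().unlock();
        }
    }

    void put(String key, String value) {
        rw.writeLock().lock();     // exclusive: blocks all readers and writers
        try {
            map.put(key, value);
        } finally {
            rw.writeLock().unlock();
        }
    }
}
```

With synchronized, readers would also exclude each other; here they only wait when a writer holds the write lock.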

Extended knowledge

Deadlock

How deadlock arises: a deadlock typically occurs when two threads each want the lock the other holds, and neither releases the resources it currently occupies. For example: thread 1 holds lock A and wants lock B, while thread 2 holds lock B and wants lock A; neither thread can obtain the lock it needs.
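The scenario above can be reproduced and detected with the JDK's built-in deadlock detector, ThreadMXBean.findDeadlockedThreads(). The sleeps make the deadly interleaving deterministic for the demo, and the threads are daemons so the JVM can still exit:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Thread 1 holds lockA and wants lockB; thread 2 holds lockB and wants lockA.
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static boolean run() throws InterruptedException {
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {
                sleep(100);               // give t2 time to take lockB
                synchronized (lockB) { }  // blocks forever waiting for lockB
            }
        });
        Thread t2 = new Thread(() -> {
            synchronized (lockB) {
                sleep(100);               // give t1 time to take lockA
                synchronized (lockA) { }  // blocks forever waiting for lockA
            }
        });
        t1.setDaemon(true);               // let the JVM exit despite the deadlock
        t2.setDaemon(true);
        t1.start();
        t2.start();

        // Poll the JVM's deadlock detector for up to ~5 seconds.
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        for (int i = 0; i < 50; i++) {
            long[] ids = mx.findDeadlockedThreads();
            if (ids != null && ids.length >= 2) return true;  // deadlock found
            Thread.sleep(100);
        }
        return false;
    }

    private static void sleep(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }
}
```

The same detector backs the deadlock report you see in a jstack thread dump.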

How to break a deadlock:

  • Break mutual exclusion: allow a lock to be held by multiple threads at once (not feasible in practice, since locks exist precisely to enforce mutual exclusion).
  • Break the hold-and-wait condition: acquire all needed resources at once.
  • Break the circular-wait condition: acquire resources in a fixed global order.
  • Break the no-preemption condition: when a thread cannot acquire a lock it needs, it releases the resources it already holds.

Conclusion

That covers the basics of synchronized. If there are any mistakes or omissions, please point them out in the comments, thank you O(∩_∩)O