preface

Hi, I’m Jack Xu, and today we’re talking about synchronized. This is the first article in a series on concurrent programming. Concurrency involves so many topics that it is hard to grasp all at once — it would take at least ten articles to cover the subject properly. I have organized the knowledge points by category so I can walk you through them clearly, in an easy-to-understand way.

Why Synchronized

The problem is simple. First, let’s look at the following code
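The original snippet did not survive extraction, so here is a minimal sketch of the kind of code being described, with the class and constant names assumed: two threads each increment a shared counter 50 times, so you would expect 100.

```java
public class Demo {
    static int i = 0;                     // shared counter (name taken from the text)
    static final int PER_THREAD = 50;     // 2 threads x 50 increments = 100 expected

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int n = 0; n < PER_THREAD; n++) {
                i++;                      // read, add 1, write back: three separate steps
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        // At this small scale the race rarely fires, but i can end up below 100.
        System.out.println("i = " + i);
    }
}
```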

To explain why you don’t get 100: i++ is not a single operation. The computer performs it in three steps: (1) read the value of i from memory; (2) add 1 to i; (3) write the result back to memory. So suppose thread A reads i = 0, and thread B also reads i = 0. Then A adds 1 and writes i = 1 to memory. Next, thread B adds 1 to its own copy, also getting 1, and writes i = 1 to memory. In other words, thread A and thread B both incremented i, but the end result is 1, not 2.

After all, these operations are not atomic. So how do we solve the problem? By adding synchronized.
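As a hedged sketch of the fix, assuming the same shape as the counter demo above: wrapping the increment in a synchronized block makes the three steps execute as one atomic unit.

```java
public class SafeDemo {
    static int i = 0;                      // shared counter, now protected by a lock
    static final Object LOCK = new Object();
    static final int PER_THREAD = 50;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int n = 0; n < PER_THREAD; n++) {
                synchronized (LOCK) {      // only one thread at a time runs i++
                    i++;
                }
            }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("i = " + i);    // always prints "i = 100"
    }
}
```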

The three major characteristics

The above example demonstrates atomicity. Synchronized also ensures visibility: by virtue of the happens-before rule, all changes to variables made inside a synchronized block become visible to other threads once the thread finishes executing the block. Finally, synchronized ensures ordering: since only one thread can hold the lock at a time, the code in a synchronized block appears to execute serially and in program order from the point of view of other threads synchronizing on the same lock.
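The visibility guarantee can be sketched like this (class and method names are my own): a worker thread spins until it observes a flag set by the main thread, and the synchronized accessors guarantee the write becomes visible.

```java
public class VisibilityDemo {
    private boolean flag = false;

    synchronized void setFlag() { flag = true; }
    synchronized boolean getFlag() { return flag; }

    // Returns true if the worker observed the flag and exited.
    static boolean run() throws InterruptedException {
        VisibilityDemo d = new VisibilityDemo();
        Thread worker = new Thread(() -> {
            while (!d.getFlag()) { /* spin until the write becomes visible */ }
        });
        worker.start();
        d.setFlag();              // happens-before the worker's next getFlag()
        worker.join(2000);        // should return almost immediately
        return !worker.isAlive();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run() ? "done" : "stuck");   // prints "done"
    }
}
```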

Method of use

Syntactically, Synchronized can be used in three ways:

  • Modifying an instance method

public synchronized void eat() {
    // ...
}

  • Modifying a static method

public static synchronized void eat() {
    // ...
}

  • Modifying a code block

public void eat() {
    synchronized (this) {
        // ...
    }
}

public void eat() {
    synchronized (Eat.class) {
        // ...
    }
}

The first form is equivalent to the third (both lock on the current instance), and the second is equivalent to the fourth (both lock on the Class object) — simple enough. Here’s a summary of using synchronized:

  • Select a lock object, which can be any object;
  • The lock object locks the synchronized code block, not itself;
  • If multiple threads may execute the code simultaneously, the lock object must be the same object shared by all of those threads;
  • Put only the code that needs synchronization — for atomicity, visibility, or ordering — inside the braces. Code that does not need it should stay outside, so it does not hurt efficiency.
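The summary above can be sketched as follows (the class and its names are assumed for illustration): a private final lock object shared by all callers, with only the critical section inside the synchronized block.

```java
import java.util.ArrayList;
import java.util.List;

public class Basket {
    private final Object lock = new Object();     // one lock object shared by all threads
    private final List<String> items = new ArrayList<>();

    public void add(String item) {
        String cleaned = item.trim();             // touches no shared state: stays outside
        synchronized (lock) {                     // only the shared mutation is locked
            items.add(cleaned);
        }
    }

    public int size() {
        synchronized (lock) {
            return items.size();
        }
    }
}
```

Because every thread goes through the same `lock` object, two threads adding concurrently never lose an update.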

Lock escalation

In the early days of the JDK, synchronized was called a heavyweight lock: acquiring a lock resource meant going through the kernel via a system call and switching from user mode to kernel mode, which was relatively inefficient. JDK 1.6 made some improvements. To reduce the performance overhead of acquiring and releasing locks, it introduced the concepts of biased locking and lightweight locking. Since then, a synchronized lock has four states, from lowest to highest: lock-free, biased, lightweight, and heavyweight.

We know that what synchronized locks is an object, and the lock state is recorded in the object’s header. The layout of an object in the heap is shown below.

Biased locking

(1) The first thread to reach the lock tries to record its own thread ID in the object’s mark word; if successful, the mark word stores the current thread ID and the thread executes the synchronized block

(2) If the same thread locks again, there is no contention: it only needs to check that the thread ID in the mark word is its own, and it can directly execute the synchronized block

(3) If another thread tries to acquire the biased lock, the lock is under contention: the bias held by the original thread must be revoked and the lock upgraded to a lightweight lock (this revocation can only happen at a global safepoint, i.e. when no thread is executing bytecode)

In real application development there are usually two or more threads contending, so with biased locking enabled, the revocation step actually increases the cost of acquiring the lock. Biased locking can therefore be turned on or off with the JVM flag -XX:+UseBiasedLocking / -XX:-UseBiasedLocking

Lightweight lock

(1) By default a thread spins 10 times before giving up (changeable with -XX:PreBlockSpin); the other condition is the number of spinning threads exceeding half the number of CPU cores

(2) JDK 1.6 also introduced adaptive spinning. Adaptive means the spin count is not fixed, but depends on how previous spins on the same lock fared and on the state of the lock’s owner. If a spin on the same lock object recently succeeded in acquiring the lock, and the thread holding the lock is running, the virtual machine assumes the spin is likely to succeed again and allows it to last relatively longer. If spinning rarely succeeds for a given lock, future attempts may skip the spin entirely and block the thread directly, avoiding wasted processor time

If one of these two conditions is met, the lock is upgraded to a heavyweight lock

Heavyweight lock

That’s when the big boss — the operating system — gets called in: the JVM requests a lock resource from the OS (a mutex on Linux), the CPU switches from ring 3 to ring 0, the thread is suspended and placed in a wait queue, waits for the operating system to schedule it, and is then mapped back to user space.

Let’s write a simple piece of code with the synchronized keyword, compile it to a .class file, and disassemble it with javap -c xxx.class. In the resulting bytecode we can find the following two instructions.

Every object in Java is associated with a monitor, which is locked while it is occupied. When a thread executes the monitorenter instruction, it attempts to acquire ownership of the monitor as follows:

  • If the entry count of monitor is 0, the thread enters monitor, sets the entry count to 1, and the thread is the owner of Monitor.
  • If a thread already owns the monitor and just re-enters, the number of entries into the monitor is increased by one.
  • If another thread has already occupied Monitor, the thread blocks until the number of monitor entries is zero, and then tries again to acquire monitor ownership.

The monitor is reentrant and maintains an entry counter; it is not a fair lock.
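Reentrancy can be demonstrated with a small sketch (the class is my own): the same thread can enter two synchronized methods on the same object without deadlocking, because the monitor’s entry count is simply incremented on re-entry.

```java
public class Reentrant {
    private int depth = 0;

    public synchronized int outer() {
        depth++;
        return inner();          // re-enters the same monitor: entry count 1 -> 2
    }

    public synchronized int inner() {
        depth++;
        return depth;
    }

    public static void main(String[] args) {
        int d = new Reentrant().outer();       // completes instead of deadlocking
        System.out.println("depth = " + d);    // prints "depth = 2"
    }
}
```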

When a thread blocks, scheduling moves into the kernel (on Linux), and the resulting switching back and forth between user mode and kernel mode seriously hurts lock performance. The flow is as follows:

Lock elimination

We know that StringBuffer is thread safe because its key methods are modified by synchronized. But if we look at the code below, we see that the sb reference is only used inside the add method and cannot be reached by other threads (it is a local variable, private to the stack). Therefore sb is not a shared resource, and the JVM automatically eliminates the locking inside the StringBuffer object.

public void add(String str1, String str2) {
    StringBuffer sb = new StringBuffer();
    sb.append(str1).append(str2);
}

conclusion

Well, this article should make it pretty clear what synchronized covers. Synchronized is the most common means of achieving thread safety in Java concurrent programming, and it’s relatively simple to use. Before the JDK 1.6 optimizations, synchronized performed much worse than ReentrantLock, but since biased locking and lightweight (spin) locks were introduced, the performance of the two has been similar. In cases where either would work, synchronized is even the recommended choice. My feeling is that the optimization of synchronized draws on the CAS technique in ReentrantLock: both try to resolve locking in user mode and avoid blocking threads in kernel mode.