Contents

• Foreword

• Thread safety

Thread safety in the Java language

Thread-safe implementation

• Lock optimization

Spin-locking and adaptive spin

Lock elimination

Lock coarsening

Lightweight lock

Biased locking


• Foreword

Before discussing thread safety and lock optimization, we need to understand what a thread is. I will not go into the low-level concepts here. A thread is a scheduling and execution unit that is more lightweight than a process; introducing threads separates a process's resource allocation from its execution scheduling. Threads of the same process share its resources, yet each thread can be scheduled independently — the thread is the basic unit of CPU scheduling. A thread passes through five states: new, running, waiting, blocked, and terminated. It is also worth mentioning that threads are implemented in three main ways: kernel threads, user threads, and a hybrid of user threads and lightweight processes. The specifics are beyond the scope of this article and can be looked up elsewhere. Note that thread scheduling is the process by which the system assigns processor time to threads. There are two main scheduling approaches, cooperative thread scheduling and preemptive thread scheduling; Java uses preemptive thread scheduling.

• Thread safety

The definition is abstract, but it can be stated like this: an object is thread-safe if, when multiple threads access it, calls on it always produce correct results — without any consideration of how those threads are scheduled or interleaved by the runtime environment, and without any additional synchronization or other coordination on the caller's side.

Thread safety in the Java language

Now that we have an abstract definition of thread safety, let's discuss how it manifests in the Java language: which operations are thread-safe? Ranking thread safety from strongest to weakest, we can classify the ways various operations share data in Java into five categories: immutable, absolutely thread-safe, relatively thread-safe, thread-compatible, and thread-antagonistic.

1. Immutable: as the name suggests, an immutable object is always thread-safe; neither the object's method implementations nor its callers need any thread-safety measures. Examples include objects whose state is declared final (provided `this` does not escape during construction) and java.lang.String, whose substring(), replace(), and concat() methods do not modify the original value but build and return a new immutable one. Here, for example, is the code for the Integer constructor, which guarantees immutable state by declaring the internal state variable value as final.

private final int value;

/**
 * Constructs a newly allocated {@code Integer} object that
 * represents the specified {@code int} value.
 *
 * @param   value   the value to be represented by the
 *                  {@code Integer} object.
 */
public Integer(int value) {
    this.value = value;
}

2. Absolutely thread-safe: regardless of the runtime environment, callers never need any additional synchronization measures. This usually comes at a great cost: many classes in the Java API declare themselves thread-safe, but most of them are not absolutely thread-safe. Take Vector, for example, every method of which is modified with synchronized — yet compound operations on it, such as the ones below, still need caller-side synchronization.

private static final Vector<Integer> vector = new Vector<>();

Thread removeThread = new Thread(new Runnable() {
    @Override
    public void run() {
        synchronized (vector) {
            for (int i = 0; i < vector.size(); i++) { vector.remove(i); }
        }
    }
});

Thread printThread = new Thread(new Runnable() {
    @Override
    public void run() {
        synchronized (vector) {
            for (int i = 0; i < vector.size(); i++) { System.out.println(vector.get(i)); }
        }
    }
});

3. Relatively thread-safe: individual operations on the object need no additional synchronization, but certain sequences of consecutive calls do. Examples are Vector, HashTable, and collections wrapped by Collections.synchronizedCollection().
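For instance (a minimal sketch of my own, not from the original text): every single method call on a list wrapped by Collections.synchronizedList() is internally synchronized, but its Javadoc requires the caller to synchronize manually while iterating — exactly the "synchronization for certain sequential calls" described above.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class RelativeSafetyDemo {
    public static void main(String[] args) {
        // Every single method call on this wrapper is synchronized internally.
        List<Integer> list = Collections.synchronizedList(new ArrayList<>());
        list.add(1);
        list.add(2);

        // Iteration is a sequence of calls, so the caller must hold the lock
        // for the whole sequence, as the Collections Javadoc requires.
        synchronized (list) {
            for (Integer i : list) {
                System.out.println(i);
            }
        }
    }
}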

4. Thread-compatible: the object itself is not thread-safe, but it can be used safely in a concurrent environment if the calling side applies synchronization correctly. Most classes in the Java API are thread-compatible, such as ArrayList and HashMap.

5. Thread-antagonistic: code that cannot be used concurrently in a multithreaded environment regardless of whether the calling side takes synchronization measures. Such code is harmful and should be avoided. A typical example is the Thread class's suspend() and resume() methods: if two threads hold the same Thread object and one tries to suspend the thread while the other tries to resume it, then when they run concurrently the target thread risks deadlock whether or not the calls are synchronized. For this reason both methods have been declared deprecated.

Thread-safe implementation

1. Mutually exclusive synchronization

  • Principle: ensure that shared data is accessed by only one thread at a time

  • Mutual exclusion is the means; synchronization is the end

  • synchronized: reentrant for the same thread, so a thread will not deadlock on a lock it already holds

  • java.util.concurrent.locks: ReentrantLock adds three advanced features

    • Interruptible waiting: while the thread holding the lock has not released it, a waiting thread may give up waiting
    • Fair lock: when multiple threads wait for the same lock, they must acquire it in the order in which they requested it
    • Binding multiple conditions: a single ReentrantLock can call newCondition() several times to bind multiple Condition objects
  • With similar performance, synchronized is preferred; the sketch below illustrates the three ReentrantLock features
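A minimal sketch (class, field, and thread names are my own) of the three ReentrantLock features listed above — fair locking, interruptible waiting, and multiple bound conditions:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ReentrantLockDemo {
    // true requests a fair lock: waiting threads acquire it in request order.
    private static final ReentrantLock lock = new ReentrantLock(true);
    // One lock may bind several Condition objects, each with its own wait set.
    private static final Condition notEmpty = lock.newCondition();
    private static final Condition notFull  = lock.newCondition();

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            try {
                // Unlike blocking on a synchronized monitor, a thread waiting
                // in lockInterruptibly() can give up when it is interrupted.
                lock.lockInterruptibly();
                try {
                    System.out.println("worker acquired the lock");
                } finally {
                    lock.unlock();
                }
            } catch (InterruptedException e) {
                System.out.println("worker gave up waiting");
            }
        });

        lock.lock();            // main holds the lock first
        try {
            worker.start();
            Thread.sleep(100);  // let the worker block waiting for the lock
            worker.interrupt(); // the worker abandons the wait
        } finally {
            lock.unlock();
        }
        worker.join();
    }
}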

2. Non-blocking synchronization

  • Principle: optimistically perform the operation first without suspending the thread; if contention makes it fail, take a compensating measure (usually retrying until it succeeds) — see the CAS sketch below
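As an illustration (my own sketch, not from the original text): AtomicInteger's compareAndSet() is a typical CAS-based non-blocking operation — the thread is never suspended, and if the swap fails because another thread got there first, the compensating measure is simply to retry. (incrementAndGet() performs the same loop internally.)

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    private static final AtomicInteger counter = new AtomicInteger(0);

    public static void increment() {
        // Optimistic retry loop: read the current value, then try to swap in
        // the new one with a CAS; if another thread changed it meanwhile,
        // compensate by retrying instead of blocking.
        int current;
        do {
            current = counter.get();
        } while (!counter.compareAndSet(current, current + 1));
    }

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) { increment(); }
            });
            threads[i].start();
        }
        for (Thread t : threads) { t.join(); }
        System.out.println(counter.get()); // always 10000, with no locks
    }
}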

3. No synchronization

  • Thread safety does not necessarily require synchronization;

  • Naturally thread-safe code:

    • Reentrant code: a method whose result is predictable, i.e. it depends only on its inputs and returns the same result for the same inputs, is naturally thread-safe
    • Thread-local storage: java.lang.ThreadLocal confines data to a single thread (see the sketch below)
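A small sketch of thread-local storage (names are mine): each thread reading the same ThreadLocal sees its own independent copy, so no synchronization is needed at all.

public class ThreadLocalDemo {
    // Each thread gets its own StringBuilder; the data is never shared
    // between threads, so no synchronization is needed.
    private static final ThreadLocal<StringBuilder> buffer =
            ThreadLocal.withInitial(StringBuilder::new);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            buffer.get().append(Thread.currentThread().getName());
            System.out.println(buffer.get()); // only this thread's data
            buffer.remove(); // good hygiene, especially in thread pools
        };
        Thread t1 = new Thread(task, "thread-1");
        Thread t2 = new Thread(task, "thread-2");
        t1.start(); t2.start();
        t1.join(); t2.join();
    }
}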

• Lock optimization

Spin-locking and adaptive spin

The biggest performance impact in mutual-exclusion synchronization is blocking: suspending a thread and resuming it must both be done in kernel mode, and these operations put great pressure on the system's concurrent performance. At the same time, virtual machine development teams noticed that in many applications shared data stays locked only for a very short period — too short to be worth suspending and resuming threads for. On a physical machine with more than one processor, two or more threads can run in parallel, so we can let the thread requesting the lock "wait a moment" without giving up its processor time, and see whether the thread holding the lock releases it soon. To make the thread wait, we simply have it execute a busy loop (spin); this technique is known as spin locking. Spin waiting is not a substitute for blocking: leaving aside the requirement of multiple processors, spin waiting itself consumes processor time even though it avoids the overhead of thread switching. There must therefore be a limit on the spin-wait time: if the spin exceeds the limit and the lock has still not been acquired, the thread should be suspended in the traditional way. The default spin count is 10.

Adaptive spinning means that the spin time is no longer fixed but is determined by the previous spin time on the same lock and by the state of the lock's owner. If, on the same lock object, a spin wait has just successfully acquired the lock and the thread holding the lock is running, the virtual machine assumes that this spin is also likely to succeed and allows the spin wait to last relatively longer, say 100 loop iterations. The summary is as follows:

  • Spin lock: make the thread execute a busy loop (i.e. keep it busy for a while instead of suspending it)
  • Adaptive spinning: the spin time is determined by the previous spin time on the same lock and by the state of the lock's owner; a user-level sketch of the busy-loop idea follows
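The JVM's spin locks live inside the runtime and cannot be shown directly, but the busy-loop idea can be sketched at the user level with an AtomicBoolean (note this toy omits the spin limit and the fallback to suspending the thread):

import java.util.concurrent.atomic.AtomicBoolean;

public class ToySpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    public void lock() {
        // Busy loop: keep the processor instead of suspending the thread,
        // betting that the current holder releases the lock very soon.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait(); // hint to the CPU that we are spinning (Java 9+)
        }
    }

    public void unlock() {
        locked.set(false);
    }
}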

Lock elimination

Lock elimination means that the virtual machine's just-in-time compiler removes, at runtime, locks on code that requires synchronization but where contention on shared data is detected to be impossible. Lock elimination is mainly based on data from escape analysis: if the compiler can prove that none of the data on the heap in a piece of code will escape and be accessed by other threads, that data can be treated as thread-private, and locking it is unnecessary. The summary is as follows:

  • When the just-in-time compiler runs, it finds locks that are declared but cannot actually be contended (there is no shared data contention) and removes them directly; see the example below
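A classic illustration (a sketch along the lines of the usual escape-analysis example): string concatenation through a StringBuffer that never leaves the method.

// sb is a local object that never escapes concatString(), so escape analysis
// lets the JIT compiler prove no other thread can contend for it and remove
// the synchronization performed inside StringBuffer.append().
public static String concatString(String s1, String s2, String s3) {
    StringBuffer sb = new StringBuffer();
    sb.append(s1); // append() is synchronized, but this lock can be eliminated
    sb.append(s2);
    sb.append(s3);
    return sb.toString();
}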

Lock coarsening

In principle, when writing code it is always recommended to keep the scope of a synchronized block as small as possible — synchronize only over the actual scope of the shared data. This keeps the number of operations that need synchronization as small as possible, so that if there is lock contention, a thread waiting for the lock can acquire it as quickly as possible. In most cases this principle holds. But if a series of consecutive operations repeatedly locks and unlocks the same object — even when the locking operations appear inside a loop body — the frequent mutual-exclusion synchronization causes unnecessary performance loss even when there is no thread contention.

  • Even when the lock's scope is properly small, if the same lock is repeatedly acquired and released inside it (as in a loop), the virtual machine coarsens (expands) the scope; see the example below
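For example (a common sketch of the situation above): every append() below locks and unlocks the same StringBuffer inside the loop, so the virtual machine may coarsen the lock to a single acquire before the loop and a single release after it.

// Each append() call locks and unlocks sb. Because the operations are
// consecutive on the same object, the JIT compiler may coarsen the lock:
// one acquire before the loop and one release after it, instead of
// 100 separate lock/unlock pairs.
public static String repeat(String s) {
    StringBuffer sb = new StringBuffer();
    for (int i = 0; i < 100; i++) {
        sb.append(s);
    }
    return sb.toString();
}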

Lightweight lock

The word "lightweight" is relative to traditional locks implemented with operating system mutexes, which are accordingly called "heavyweight" locks. First, it is important to note that lightweight locks are not intended to replace heavyweight locks; their purpose is to reduce the performance cost of traditional heavyweight locks — the cost of using operating system mutexes — when there is no multithreaded contention. Lightweight locks can improve application synchronization performance because of the empirical rule that "for the vast majority of locks, there is no contention during the entire synchronization period."

  • Without multithreaded contention, reduce the performance cost of traditional heavyweight locks, which use operating system mutexes

Biased locking

The purpose of biased locking is to eliminate synchronization primitives when data is uncontended, improving performance further. If a lightweight lock uses CAS operations to eliminate the mutex used by synchronization in the uncontended case, a biased lock eliminates the entire synchronization in the uncontended case — even the CAS operations. A biased lock is "biased" in favor of the first thread to acquire it: if the lock is never acquired by another thread during subsequent execution, the thread holding the biased lock never needs to synchronize on it again.

  • The lock is biased in favor of the first thread to acquire it
  • The bias mode ends when another thread attempts to acquire it

Thread safety means that multiple threads can share data and still get correct results without additional synchronization or coordination. In Java development, synchronized and ReentrantLock are used to achieve thread safety; also remember that ThreadLocal can store thread-private data. We use locks to guarantee synchronization, and for locks that are not truly needed, the virtual machine applies the optimization techniques above to reduce their performance overhead.