
This article focuses on the use of the volatile keyword: it first outlines volatile's three characteristics, then introduces the JMM model, and finally explains why volatile is needed in the singleton pattern (lazy initialization).

Characteristics of the volatile keyword

Volatile is a lightweight synchronization mechanism provided by the JVM

  • It guarantees visibility
  • It does not guarantee atomicity
  • It forbids instruction reordering

Overview of the JMM

The JMM is outlined below

Thread-safety model

The Java Memory Model (JMM) itself is an abstract concept that doesn’t really exist. It is a set of rules or specifications that define how variables in a program (including instance fields, static fields, and elements that make up array objects) can be accessed.

JMM rules on synchronization:

  • Before a thread releases a lock (unlock), it must flush the value of the shared variable back to main memory
  • Before a thread acquires a lock (lock), it must read the latest value of the variable from main memory into its own working memory
  • Locking and unlocking must use the same lock

The JVM runs a program as threads, and for each thread it creates, the JVM also creates a working memory (in some places called the stack space), which is the private data area of that thread. The Java Memory Model specifies that all variables are stored in main memory, a shared region that every thread can access. However, a thread's operations on a variable (reads and assignments) must happen in its working memory: the thread first copies the variable from main memory into its own working memory, operates on that copy, and writes the variable back to main memory when the operation is complete. A thread cannot operate directly on variables in main memory; each thread's working memory holds only a copy of the variables in main memory. As a result, different threads cannot access each other's working memory, and communication between threads (passing values) must go through main memory. A simplified access process follows.

Schematic diagram of variable modification in the JMM model (using an initFlag variable operated on by a multi-core CPU as an example)
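The following is a minimal sketch of the flow the diagram describes (the field name initFlag comes from the diagram; the rest of the class is illustrative only): a writer thread updates its working-memory copy of initFlag and flushes it back to main memory, while a reader thread spins on its own copy and, without volatile, may never notice the change.

class InitFlagDemo {
    // without volatile, the reader thread may keep using the stale copy of
    // initFlag in its own working memory and never see the writer's update
    static boolean initFlag = false;

    public static void main(String[] args) throws InterruptedException {
        new Thread(() -> {
            while (!initFlag) {
                // reads the copy of initFlag held in this thread's working memory
            }
            System.out.println("reader observed initFlag == true");
        }, "reader").start();

        Thread.sleep(1000);
        // the writer assigns in its working memory and writes the value back to main memory
        initFlag = true;
        System.out.println("writer set initFlag to true");
    }
}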

There are three major features of the JMM

1. Visibility

We know from the previous JMM introduction

Each thread operates on a shared variable in main memory by first copying it into its own working memory, modifying the copy there, and then writing it back to main memory.

That is, thread A may have changed the value of the shared variable X in its working memory but not yet written it back to main memory, while another thread B operates on the same variable X from main memory; in that case the modified X in thread A's working memory is not visible to thread B.

This delay in synchronizing working memory with main memory creates visibility problems.

import java.util.concurrent.TimeUnit;

class MyData {
    int number = 0;

    public void addTo60() {
        this.number = 60;
    }
}

public class VolatileDemo {

    public static void main(String[] args) {
        MyData myData = new MyData(); // resource class

        new Thread(() -> {
            System.out.println(Thread.currentThread().getName() + "\t enter");
            try {
                TimeUnit.SECONDS.sleep(3);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            myData.addTo60();
            System.out.println(Thread.currentThread().getName() + "\t update");
        }, "AAA").start();

        // The second thread is our main thread
        while (myData.number == 0) {
            // main spins here until number is no longer zero
        }
        System.out.println(Thread.currentThread().getName() + "\t changed to 60 successfully");
    }
}
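Run as written (number is not volatile), the main thread usually spins forever because it never re-reads the updated value from main memory. A minimal fix, assuming the rest of the demo stays the same, is to declare the field volatile:

class MyData {
    // volatile forces the write by thread AAA to be flushed to main memory and
    // forces the main thread to re-read it, so the spin loop can exit
    volatile int number = 0;

    public void addTo60() {
        this.number = 60;
    }
}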

2. Atomicity

What does atomicity mean?

Indivisible and complete: while a thread is carrying out a specific operation, it cannot be interrupted or split in the middle. The operation must either succeed as a whole or fail as a whole.

MyData myData = new MyData();
for (int i = 0; i < 20; i++) {
    new Thread(() -> {
        for (int j = 0; j < 1000; j++) {
            myData.addPlusPlus(); // internally performs this.number++
        }
    }, "T" + i).start();
}
// wait until only the main thread and the GC thread remain
while (Thread.activeCount() > 2) {
    Thread.yield();
}
System.out.println(myData.number);

The this.number++ operation is not thread-safe, so the result is not necessarily 20000. To guarantee atomicity, you can synchronize the operation with the synchronized keyword (see the sketch below), or use the thread-safe AtomicInteger class provided by JUC.
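A minimal sketch of the synchronized variant (the class and method names mirror the demo above; only the synchronized keyword is new):

public class OnePlus {
    int number = 0;

    // synchronized makes the read-add-write sequence execute as one indivisible unit,
    // so 20 threads x 1000 increments reliably produce 20000
    public synchronized void addPlusPlus() {
        this.number++;
    }
}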

Why is the value less than 20000? Let’s use javap -c to find out

To make the bytecode easier to read, here is the ++ method in isolation:

public class OnePlus {
    volatile int number = 0;

    public void addPlusPlus() {
        this.number++;
    }
}

Bytecode instruction analysis and viewing
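The javap output itself is not reproduced here, but what it shows is that this.number++ compiles into several separate bytecode instructions (roughly a getfield that reads the value, an iadd that adds 1, and a putfield that writes it back). In Java terms the increment behaves approximately like the following sketch, which is why two threads can interleave between the read and the write and lose an update:

public void addPlusPlus() {
    int tmp = this.number; // getfield: read the current value
    tmp = tmp + 1;         // iconst_1 + iadd: add one
    this.number = tmp;     // putfield: write the result back; a concurrent write by another thread can be lost here
}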


How can we solve the atomicity problem without using synchronized?

import java.util.concurrent.atomic.AtomicInteger;

public class OnePlus {
    AtomicInteger number = new AtomicInteger(0);

    public void addPlusPlus() {
        number.getAndIncrement();
    }
}
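A minimal sketch of re-running the earlier 20-thread test against this AtomicInteger version (the driver loop is the same as before); it should now reliably print 20000:

OnePlus onePlus = new OnePlus();
for (int i = 0; i < 20; i++) {
    new Thread(() -> {
        for (int j = 0; j < 1000; j++) {
            onePlus.addPlusPlus(); // getAndIncrement is CAS-based, no lock needed
        }
    }, "T" + i).start();
}
while (Thread.activeCount() > 2) {
    Thread.yield();
}
System.out.println(onePlus.number.get()); // 20000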

3. Ordering and instruction reordering

When a computer executes a program, to improve performance, the compiler and processor often rearrange instructions, generally divided into three situations.

Source code ==> compiler optimization reordering ==> instructions parallel reordering ==> Memory system reordering ==> instructions finally executed.

However, in a single-threaded environment, the final execution result of the program is guaranteed to be consistent with the result of executing the code in program order.

When reordering, the processor must respect data dependencies between instructions.

In a multi-threaded environment, threads execute alternately. Because of compiler reordering optimizations, there is no guarantee that the variables used by two threads stay consistent, so the results become unpredictable.

Reordering example 1

public class MySort {

    public void mySort() {
        int x = 11;  // 1
        int y = 12;  // 2
        x = x + 5;   // 3
        y = x * x;   // 4
    }
}

Possible order of execution:

1234

2134

1324

Question: can statement 4 be reordered to run first (before statement 1)? No, because statement 4 depends on the value of x computed earlier, so the data dependency forbids it.

Reordering example 2

Suppose the shared variables are declared as

int a = 0, b = 0, x = 0, y = 0;

and the two threads execute:

Thread 1        Thread 2
x = a;          y = b;
b = 1;          a = 2;

Normal result: x = 0, y = 0

If the compiler reorders and optimizes this code, the following may happen:

Thread 1        Thread 2
b = 1;          a = 2;
x = a;          y = b;

Possible result: x = 2, y = 1

Reordering example 3

public class MyReSortSeqDemo {

    int a = 0;
    boolean flag = false;

    public void method1() {
        a = 1;          // statement 1
        flag = true;    // statement 2
    }

    public void method2() {
        if (flag) {
            a = a + 5;   // statement 3
            System.out.println("*** retValue: " + a);
        }
    }
}

Statements 1 and 2 have no data dependency, so the compiler or processor may reorder them. If thread 1 executes flag = true before a = 1 and thread 2 then runs method2, it may see flag as true while a is still 0 and print 5 instead of the expected 6: each thread's single-threaded result is unchanged, but the multi-threaded semantics are broken.

Disallow instruction reordering summary

Volatile disables instruction reordering optimization to avoid out-of-order execution in multithreaded environments.

A Memory Barrier (also known as a memory fence) is a CPU instruction that serves two purposes:

1. Ensure the order in which certain operations are performed.

2. Ensure memory visibility of certain variables (volatile relies on this feature for its memory visibility)

Because both the compiler and the processor can perform instruction reordering optimizations, inserting a Memory Barrier between instructions tells the compiler and the CPU that no instruction may be reordered with the barrier instruction; in other words, inserting the barrier prevents instructions before and after it from being reordered across it. Another function of the memory barrier is to force the CPU caches to be flushed, so that any CPU core reads the latest version of the data.
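As a concrete illustration, here is a minimal sketch of applying this to the MyReSortSeqDemo class above (only the volatile modifier is new; everything else is unchanged). The volatile write to flag and the volatile read of flag act as barriers, so statement 1 cannot be reordered after statement 2, and whenever method2 sees flag == true it is also guaranteed to see a = 1:

public class MyReSortSeqDemo {

    int a = 0;
    volatile boolean flag = false;   // the volatile write/read act as memory barriers

    public void method1() {
        a = 1;          // statement 1: cannot be reordered after the volatile write below
        flag = true;    // statement 2: volatile write, flushed to main memory
    }

    public void method2() {
        if (flag) {                  // volatile read: sees the latest value of flag
            a = a + 5;               // a = 1 is also visible here, so this prints 6
            System.out.println("*** retValue: " + a);
        }
    }
}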

Thread-safe access

  • Visibility problems caused by the delay in synchronizing working memory with main memory: these can be solved with synchronized or volatile, either of which makes changes made by one thread immediately visible to other threads.
  • Visibility and ordering problems caused by instruction reordering: these can be solved with the volatile keyword, because another effect of volatile is to prevent reordering optimizations.

Volatile application scenario

Where have you seen volatile used?

Singleton DCL

A singleton DCL without the volatile keyword

public class SingletonDemo {

    private static SingletonDemo singletonDemo;

    private SingletonDemo() {
        System.out.println(Thread.currentThread().getName() + "\t invoke SingletonDemo()");
    }

    public static SingletonDemo getSingleton() {
        if (singletonDemo == null) {
            // synchronized code block: lock on the class object
            synchronized (SingletonDemo.class) {
                if (singletonDemo == null) {
                    singletonDemo = new SingletonDemo();
                }
            }
        }
        return singletonDemo;
    }
}

Singleton pattern volatile analysis

Singleton DCL with volatile keyword

public class SingletonDemo {

    private static volatile SingletonDemo singletonDemo;

    private SingletonDemo() {
        System.out.println(Thread.currentThread().getName() + "\t invoke SingletonDemo()");
    }

    public static SingletonDemo getSingleton() {
        if (singletonDemo == null) {
            synchronized (SingletonDemo.class) {
                if (singletonDemo == null) {
                    singletonDemo = new SingletonDemo();
                }
            }
        }
        return singletonDemo;
    }
}

Summary of singleton patterns

The DCL (double-checked locking) mechanism is not necessarily thread safe, because instruction reordering can occur; adding volatile fixes this, since volatile prohibits instruction reordering.

The reason is that when a thread performs the first null check and reads a non-null instance, the object that instance refers to may not have been fully initialized yet.

Simulation code:

instance = new SingletonDemo();

// can be decomposed into roughly these three steps (pseudocode):
memory = allocate();    // 1. allocate memory space
ctorInstance(memory);   // 2. initialize the object (run the constructor)
instance = memory;      // 3. point instance to the allocated memory (instance is now != null)

There is no data dependency between steps 2 and 3, and within a single thread the result of execution is the same whether or not they are reordered, so this reordering optimization is allowed:

instance = new SingletonDemo();

// after reordering (pseudocode):
memory = allocate();    // 1. allocate memory space
instance = memory;      // 3. point instance to the allocated memory (instance is now != null, but the object has not been initialized yet!)
ctorInstance(memory);   // 2. initialize the object

However, instruction reordering only guarantees that serial (single-threaded) execution produces consistent results; it does not care about semantic consistency across multiple threads.

So when another thread sees that instance is not null, the object it refers to may not have been fully initialized yet, which causes the thread-safety problem. Declaring instance as volatile forbids the reordering of steps 2 and 3 and therefore avoids it.
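To round things off, a minimal sketch of a test harness (the thread count of 10 is arbitrary) that calls getSingleton() from several threads; with the volatile version above, the constructor message should be printed exactly once no matter how the threads interleave:

public class SingletonTest {
    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            new Thread(() -> {
                // every thread receives the same, fully initialized instance
                SingletonDemo.getSingleton();
            }, "Thread-" + i).start();
        }
    }
}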