Three characteristics of volatile:

  • It guarantees visibility between threads
  • It does not guarantee atomicity
  • It prevents instruction reordering

Visibility:

First, besides main memory, each thread has its own working memory, which holds copies of the variables it uses from main memory. A thread cannot operate on main memory directly: it must first copy a variable's value from main memory into its working memory. When it modifies the variable, the change is made in working memory first and then flushed back to main memory.

Note: when does a thread need to copy a value from main memory into working memory?

  • When a lock is released in a thread
  • When a thread switch occurs
  • When the CPU has idle time (for example, when a thread sleeps)

Assume a shared variable flag is initially false, and thread A changes it to true in its own working memory. Thread B, performing its own operations on flag, does not know that A has modified it; the change is said to be invisible to thread B. We therefore need a mechanism that notifies other threads when a value in main memory changes, so that they can see the change.

public class ReadWriteDemo {
	
    // Flag is not volatile
    public boolean flag = false;
    public void change() {
        flag = true;
        System.out.println("flag has changed:" + flag);
    }

    public static void main(String[] args) {

        ReadWriteDemo readWriteDemo = new ReadWriteDemo();
        // Create a thread to modify the flag, such as thread A described above
        new Thread(new Runnable() {
            @Override
            public void run() {
                try {
                    Thread.sleep(3000);
                    readWriteDemo.change();
                } catch (InterruptedException e) {
                    e.printStackTrace();
                }
            }
        }).start();

        // The main thread, such as thread B described above
        while (!readWriteDemo.flag) { }
        System.out.println("flag:" + readWriteDemo.flag);
    }
}

Without volatile, the main thread (thread B) does not see its child thread (thread A) change flag. From the main thread's point of view, flag stays false, so the condition of while (!readWriteDemo.flag) remains true and System.out.println("flag:" + readWriteDemo.flag) is never executed.

To rule out coincidence, I let the program run for six minutes. As you can see, the child thread did change the value of flag, but the main thread did not see the change as we expected, so it stayed in the endless loop. If we declare flag as volatile, the expected result is that the child thread's change becomes visible to the main thread, and the main thread exits the loop.
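
The only change needed for that experiment is the field declaration; a minimal sketch, with everything else in ReadWriteDemo unchanged:

    // Declaring the field volatile makes thread A's write visible to the main thread
    public volatile boolean flag = false;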

This time, in less than a minute, the main thread exits the loop as soon as the child thread changes the value of flag, showing that it immediately sees the change to the flag variable.

What's interesting: if the gap between thread A and thread B is not too long, you will find that the change is visible between the two threads even without volatile, for example when B waits 10 seconds before reading. Why? As explained on Stack Overflow, when the CPU running a thread is idle, shared variables are re-read from main memory to update the values in working memory. More interestingly, whether this happens depends on how busy the CPU and memory are at the time; the fact that it cannot be relied upon is exactly why volatile should be used.

How to ensure visibility:

The Java memory model defines the protocol for interaction between working memory and main memory. It defines eight atomic operations:

  1. Lock: Locks variables in main memory and makes them exclusive to one thread.
  2. Unlock: Unlocks the variable, allowing other threads to access it.
  3. Read: Transfers the value of a variable from main memory to the thread's working memory.
  4. Load: Stores the value read to a copy of the variable in working memory.
  5. Use: Passes the value to the code execution engine of the thread.
  6. Assign: Reassigns the value returned by the execution engine’s processing to the copy of the variable.
  7. Store: Stores the value of a copy of a variable into main memory.
  8. Write: Writes the value stored in store to a shared variable in main memory.
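
To make these operations less abstract, here is a rough sketch of my own (a conceptual annotation, not JVM output) showing how an ordinary read and write of a shared field map onto them:

class MemoryOpsSketch {
    static int shared = 0;      // the master copy lives in main memory

    void increment() {
        int local = shared;     // read -> load -> use: main memory -> working memory -> execution engine
        shared = local + 1;     // assign -> store -> write: execution engine -> working memory -> main memory
    }
}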

I looked around online and read several blogs, and the common explanation is that volatile is implemented at the low level with a lock-prefixed instruction. What happens when you write to a volatile variable? The JVM sends a lock-prefixed instruction to the processor: thread A locks the variable in main memory, modifies it, and flushes the new value back to main memory. Thread B also locks the variable in main memory, but finds that the value in main memory differs from the one in its working memory, so it reads the latest value from main memory. This ensures that changes to the variable are visible to every thread.

Atomicity:

In the programming world, atomicity refers to an indivisible operation, the smallest unit of execution: the operation either executes completely or does not execute at all.

public class TestAutomic {
    volatile int num = 0;
    void add() {
        num++;
    }

    public static void main(String[] args) throws InterruptedException {
        TestAutomic testAutomic = new TestAutomic();
        for (int i = 0; i < 1000; i++) {
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(10);
                        testAutomic.add();
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }
                }
            }).start();
        }
        // Wait 12 seconds for the child threads to complete
        Thread.sleep(12000);
        System.out.println(testAutomic.num);
    }
}

Expected result: we said atomicity is not guaranteed, so the final value should not always equal 1000.

The result varies from machine to machine. Mine was 886; yours may be different, but either way it shows that volatile does not guarantee atomicity.

Why atomicity is not guaranteed:

This starts with the num++ operation, which can be divided into three steps:

  • Read the value of num from main memory and load it into working memory
  • Increment num by one
  • Write the new value of num back to working memory and flush it to main memory

We know that thread scheduling is unpredictable. Assume num = 0 is in the working memory of both thread A and thread B. Thread A gets the CPU first and increments the value to 1 in its working memory, but has not yet flushed it to main memory. Thread B then gets the CPU and also increments its copy to 1. Thread A flushes to main memory, so num = 1; thread B flushes to main memory, and num is still 1, even though after two increments it should be 2.
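
In code terms, the interleaving above is easier to see if num++ in the add() method is written out as its three steps (a conceptual sketch, not real bytecode):

    void add() {
        int tmp = num;   // step 1: read num into working memory (A and B can both read 0 here)
        tmp = tmp + 1;   // step 2: increment the local copy
        num = tmp;       // step 3: write back and flush to main memory (the second flush overwrites the first)
    }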

Solution:

  • Use the synchronized keyword
  • Use an atomic class such as AtomicInteger (see the sketch below)
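
As an illustration of the second option, here is a minimal sketch of the counter rewritten with AtomicInteger (the class name is mine; the synchronized alternative would simply declare add() as a synchronized method):

import java.util.concurrent.atomic.AtomicInteger;

public class TestAtomicFix {
    // incrementAndGet() performs the read-modify-write as a single atomic operation
    static final AtomicInteger num = new AtomicInteger(0);

    void add() {
        num.incrementAndGet();
    }
}

With this change, 1000 threads calling add() always end with num equal to 1000.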

Reordering:

What does this mean? When we write a program, the CPU may reorder the instructions we wrote in order to make the program run more efficiently. For example:

a = 2;
b = new B();
c = 3;
d = new D();

After optimization, the possible true order of instructions is:

a = 2;
c = 3;
b = new B();
d = new D();

Not all instructions are reordered. Reordering is all about making instructions more efficient.

a = 2;
b = a;

These two lines of code will not be reordered in any case, because the second instruction depends on the first instruction, and reordering is based on the fact that the final result remains the same. The following is an example of how volatile prevents reordering:

public class TestReorder {
    private static int a = 0, b = 0, x = 0, y = 0;

    public static void main(String[] args) throws InterruptedException {
        while (true) {
            a = 0; b = 0; x = 0; y = 0;
            // thread A
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(10);
                        a = 1;
                        x = b;
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }

                }
            }).start();

            // thread B
            new Thread(new Runnable() {
                @Override
                public void run() {
                    try {
                        Thread.sleep(10);
                        b = 1;
                        y = a;
                    } catch (InterruptedException e) {
                        e.printStackTrace();
                    }

                }
            }).start();

            // The main thread sleeps 100ms to ensure that all child threads finish executing
            Thread.sleep(100);
            System.out.println("a=" + a + "; b=" + b + "; x=" + x + "; y="+ y); }}}Copy the code

Recall from the note above: because the two threads sleep for the same amount of time, each thread's changes are visible to the other, so visibility is not the issue in this experiment.

Expected Results:

  • If thread A is executed first (a = 1, x = b = 0) and then thread B (b = 1, y = a = 1), the final result is a = 1; b = 1; x = 0; y = 1
  • If thread B is executed first (b = 1, y = a = 0) and thread A (a = 1, x = b = 1), the final result is a = 1; b = 1; x = 1; y = 0
  • If thread A is executed (a = 1), thread B is executed (b = 1, y = a = 1), and thread A is executed (x = b = 1), the final result is a = 1; b = 1; x = 1; y = 1

Besides the three cases expected above, there is a fourth case: a = 1; b = 1; x = 0; y = 0. As you have probably guessed, this is caused by reordering: either thread A executed x = b before a = 1, or thread B executed y = a before b = 1, or both threads were reordered.

What happens if we add the volatile keyword, i.e. private volatile static int a = 0, b = 0, x = 0, y = 0;?

To make sure, I let the program run for another 5 minutes, and the case x = 0; y = 0 never appeared again.

How to prevent reordering:

Let’s start with the four memory barriers

Memory barrier | Role
StoreStore barrier | Forbids reordering of the ordinary writes above it with the volatile write below it
StoreLoad barrier | Forbids reordering of the volatile write above it with the volatile reads/writes below it
LoadLoad barrier | Forbids reordering of the volatile read above it with the ordinary reads below it
LoadStore barrier | Forbids reordering of the volatile read above it with the ordinary writes below it

The roles may look abstract, so let's go straight to examples:

  • For S1; StoreStore; S2: the writes of S1 are guaranteed to be visible to other threads before S2 and any subsequent writes execute.
  • For S; StoreLoad; L: the write of S is guaranteed to be visible to other threads before L and any subsequent reads/writes execute.
  • For L1; LoadLoad; L2: L1 is guaranteed to finish reading its data before L2 and any subsequent reads execute.
  • For L; LoadStore; S: L is guaranteed to finish reading its data before S and any subsequent operations execute.

So how does volatile guarantee order?

  • A StoreStore barrier is inserted before each volatile write, and a StoreLoad barrier is inserted after each volatile write.
  • A LoadLoad barrier and a LoadStore barrier are inserted after each volatile read.

For example, what if we had a volatile write S and a volatile read L?

  • For the write: S1; StoreStore; S; StoreLoad. The barrier before the volatile write S keeps the ordinary write S1 from being reordered below it, and the barrier after S keeps any later reads or writes from being reordered above it.
  • For the read: L; LoadLoad; LoadStore. The barriers after the volatile read L keep the ordinary reads and writes that follow from being reordered above it.
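
To tie the rules back to real code, here is a minimal sketch (class and field names are my own) of the classic writer/reader pattern, with comments marking roughly where the barriers conceptually sit:

class BarrierSketch {
    int data = 0;                    // ordinary field
    volatile boolean ready = false;  // volatile flag

    void writer() {
        data = 42;       // ordinary write
                         // StoreStore barrier: the write to data cannot move below the volatile write
        ready = true;    // volatile write
                         // StoreLoad barrier: the write to ready is visible before any later read/write
    }

    void reader() {
        if (ready) {     // volatile read
                         // LoadLoad / LoadStore barriers: the reads and writes below cannot move above this read
            System.out.println(data);   // guaranteed to print 42 once ready has been seen as true
        }
    }
}
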

So much for volatile.