Volatile: visibility and ordering are guaranteed, atomicity is not

Reference: blog.csdn.net/TZ845195485…

  • JMM and happens-before: the JMM is a specification, and its happens-before rule is what guarantees the ordering semantics of volatile and synchronized
  • volatile guarantees ordering and visibility by using memory barriers: LoadLoad, StoreLoad, LoadStore, and StoreStore
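The happens-before rule for volatile can be shown in a minimal sketch (the class and field names here are illustrative, not from the referenced post): the volatile write to a flag publishes every plain write that precedes it.

```java
// Sketch: the volatile write to `ready` happens-before the read that observes true,
// so the plain write `data = 42` is guaranteed to be visible to the reader thread.
public class HappensBeforeDemo {
    static int data = 0;                    // plain (non-volatile) field
    static volatile boolean ready = false;  // volatile publication flag
    static int observed = -1;               // what the reader thread saw

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the volatile write becomes visible */ }
            observed = data;  // happens-before guarantees this reads 42
        });
        reader.start();

        data = 42;    // ordinary write...
        ready = true; // ...published by the volatile write
        reader.join();
        System.out.println("reader observed data = " + observed);
    }
}
```

Without volatile on `ready`, the reader could spin forever or see a stale `data`.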

1. Variables modified by volatile have two characteristics

Features: visibility and ordering are guaranteed; atomicity is not

Memory semantics for volatile

  • When a volatile variable is written, the JMM immediately flushes the value of the shared variable from the thread’s local memory back to main memory.
  • When a volatile variable is read, the JMM invalidates the thread’s local memory and reads the shared variable directly from main memory.
  • In short, the write memory semantics of volatile are “flush directly to main memory”, and the read memory semantics are “read directly from main memory”.

2. Volatile

2.1. Ensure visibility

  • Ensure that the variable is visible to different threads when they operate on it, meaning that any change to the variable is immediately visible to all threads
  • The code below demonstrates this: with a plain int number the main thread never sees thread A’s update; marking number volatile fixes it.

import java.util.concurrent.TimeUnit;

/**
 * 1. Without volatile on number, the write in thread A is not visible to main.
 * 2. Adding volatile solves the visibility problem.
 */
class Resource {
    //int number = 0;        // no volatile: no visibility guarantee
    volatile int number = 0;

    public void addNumber() {
        this.number = 60;
    }
}

public class Volatile_demo1 {
    public static void main(String[] args) {
        Resource resource = new Resource();
        new Thread(() -> {
            System.out.println(Thread.currentThread().getName() + "\t coming");
            try {
                TimeUnit.SECONDS.sleep(4);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
            resource.addNumber();
            System.out.println(Thread.currentThread().getName() + "\t update " + resource.number);
        }, "Thread A").start();

        // As long as the main thread still sees resource.number == 0, keep spinning
        while (resource.number == 0) {
        }
        // Reaching this line proves main now sees resource.number == 60
        System.out.println(Thread.currentThread().getName() + "\t" + resource.number);
    }
}

Result

Without volatile the loop never sees the update and the program cannot stop; with volatile, the program stops.

Why the code above behaves this way

  • Without volatile, thread A’s change to the shared variable (number = 60) is not visible to the main thread (thread B), which keeps reading number == 0
  • With volatile, once thread A changes the shared data, the main thread’s next read sees number == 60

2.2 Atomicity is not guaranteed

We start 20 threads, each incrementing the counter 1,000 times, so the expected total is 20,000.

/**
 * Atomicity is not guaranteed: the printed total is usually less than 20,000.
 */
public class VDemo02 {

    private static volatile int number = 0;

    public static void add() {
        // number++ is not a single atomic operation: it is a read,
        // an increment, and a write, so concurrent threads can lose updates
        number++;
    }

    public static void main(String[] args) {
        // Expected result: number == 20000

        for (int i = 1; i <= 20; i++) {
            new Thread(() -> {
                for (int j = 1; j <= 1000; j++) {
                    add();
                }
            }).start();
        }

        // Wait for all 20 worker threads to finish before the main thread reads the result.
        // By default two threads remain: the main thread and a background GC thread.
        while (Thread.activeCount() > 2) {
            Thread.yield();
        }
        System.out.println(Thread.currentThread().getName() + ",num=" + number);
    }
}

A Lock or synchronized solves this problem, but how can we get atomicity without locking?
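The locking half of that sentence can be sketched as follows (class name is illustrative): making the increment synchronized restores atomicity because the whole read-modify-write runs under one intrinsic lock.

```java
// Sketch: synchronized makes the read-increment-write of number atomic.
public class SyncCounter {
    private int number = 0;

    public synchronized void add() {
        number++;  // atomic under the intrinsic lock of this object
    }

    public synchronized int get() {
        return number;
    }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter counter = new SyncCounter();
        Thread[] threads = new Thread[20];
        for (int i = 0; i < 20; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) counter.add();
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();  // deterministic wait, no activeCount() trick
        System.out.println("num=" + counter.get());  // always 20000
    }
}
```

Note the use of join() instead of polling Thread.activeCount(), which is a more reliable way to wait for the workers.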

Solution: use the atomic classes from the java.util.concurrent.atomic package (JUC):

import java.util.concurrent.atomic.AtomicInteger;

public class VDemo02 {

    // Use AtomicInteger (the atomic wrapper for int) to guarantee atomicity
    private static volatile AtomicInteger number = new AtomicInteger();

    public static void add() {
        //number++;
        number.incrementAndGet();  // backed by CAS, which guarantees atomicity
    }

    public static void main(String[] args) {
        // Expected result: number == 20000

        for (int i = 1; i <= 20; i++) {
            new Thread(() -> {
                for (int j = 1; j <= 1000; j++) {
                    add();
                }
            }).start();
        }

        // Two threads remain by default: main and a background GC thread
        while (Thread.activeCount() > 2) {
            Thread.yield();
        }
        System.out.println(Thread.currentThread().getName() + ",num=" + number);
    }
}

Underlying principle:

These atomic classes are built on the Unsafe class, which issues CPU-level compare-and-swap (CAS) instructions to modify the value in memory directly. Unsafe is a very special class.
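The CAS retry loop inside incrementAndGet() can be sketched with the public compareAndSet API (the real implementation goes through Unsafe; this reproduces only the loop's logic in user code):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasIncrement {
    // Sketch of the retry loop behind incrementAndGet():
    // read the current value, compute the next, CAS it in, retry on failure.
    public static int casIncrement(AtomicInteger value) {
        int current;
        int next;
        do {
            current = value.get();  // read the current value
            next = current + 1;     // compute the new value
            // CAS succeeds only if no other thread changed `value` in between
        } while (!value.compareAndSet(current, next));
        return next;
    }

    public static void main(String[] args) {
        AtomicInteger n = new AtomicInteger(0);
        System.out.println(casIncrement(n));  // prints 1
    }
}
```

If another thread wins the race, compareAndSet returns false and the loop simply re-reads and retries, so no update is ever lost.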

A write followed by a read causes no data problem

(Suppose number = 1 in main memory and number++ must be performed. Take two threads t1 and t2. In the write-then-read case, no data is lost: at some point t1 gets the CPU, reads the shared value 1 into t1’s working memory, performs number++ so that number = 2, and writes 2 from working memory back to main memory. Immediately after the write-back, t2 is notified and reads number = 2 from main memory into t2’s working memory.)

Two concurrent writes do cause a data problem

(Suppose number = 0 and number++ is applied 10 times against main memory; the expected result is 10. Take two threads t1 and t2 that both write, which can lose data: t1 and t2 each read the shared value 0 from main memory into their own working memory. At some point t1 gets the CPU and performs number++, and at that same moment t2 also performs number++ on its stale copy, likewise computing 1. t1 writes number = 1 back to main memory, and t2 is then told to re-read number = 1 into its working memory; but t2 had already spent one of its increments, and now increments the same value again. One increment is lost, so 10 number++ operations do not yield 10.)

With volatile, read-load-use and assign-store-write each become indivisible atomic groups, but a very small window remains between use and assign. The variable may be written by another thread inside that window, resulting in a lost update.

2.3 Instruction reordering is forbidden

Reordering is a process by which compilers and processors reorder instruction sequences to optimize program performance, sometimes changing the order of statements (statements with no data dependence may be reordered; statements with a data dependence may not).

The classification and execution process for reordering

  • Compiler reordering: the compiler may reorder the execution order of instructions as long as single-threaded (as-if-serial) semantics are unchanged
  • Instruction-level parallel reordering: the processor overlaps the execution of multiple instructions; if there is no data dependence, it may change the order in which the corresponding machine instructions execute
  • Memory-system reordering: because the processor uses caches and read/write buffers, loads and stores can appear to execute out of order

Data dependency

If two operations access the same variable and at least one of them is a write, there is a data dependence between the two operations.
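A tiny sketch of the distinction (method names are illustrative): in the first method the second statement reads what the first wrote, so the pair cannot be reordered; in the second method the two writes are independent and may legally execute in either order.

```java
public class ReorderDemo {
    public static int dependent() {
        int a = 1;      // write a
        int b = a + 1;  // read a: data dependence, cannot be reordered above the write
        return b;
    }

    public static int independent() {
        int x = 1;      // no dependence between these two writes,
        int y = 2;      // so compiler/CPU may execute them in either order
        return x + y;   // the single-threaded result is the same either way
    }
}
```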

Volatile prevents instruction reordering:

  • volatile inserts memory barriers that forbid the instructions before and after a barrier from being reordered across it, which guarantees ordering
  • A memory barrier is a CPU instruction with two functions:
    • it guarantees the execution order of certain operations;
    • it guarantees the memory visibility of certain variables (these properties are what allow volatile to implement visibility)

  • write

    • Insert a StoreStore barrier before each volatile write
    • Insert a StoreLoad barrier after each volatile write
  • read

    • Insert a LoadLoad barrier after each volatile read
    • Insert a LoadStore barrier after each volatile read
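The four rules can be pictured together for one volatile field v (barriers shown as comments; this is conceptual, the JIT emits the actual fence instructions, and the class name is illustrative):

```java
public class BarrierSketch {
    int a;            // plain field
    volatile int v;   // volatile field

    void writer() {
        a = 1;
        // StoreStore barrier: a = 1 cannot be reordered below the volatile write
        v = 2;  // volatile write
        // StoreLoad barrier: the volatile write completes before any later read
    }

    void reader() {
        int r1 = v;  // volatile read
        // LoadLoad barrier: later reads cannot be reordered above the volatile read
        // LoadStore barrier: later writes cannot be reordered above the volatile read
        int r2 = a;  // guaranteed to see a = 1 if r1 observed 2
    }
}
```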

Conclusion

  • volatile guarantees visibility;
  • atomicity is not guaranteed;
  • thanks to memory barriers, instruction reordering is prevented, so ordering is guaranteed

Interviewer: where do you see this memory barrier used most in practice? In the singleton pattern.
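This is the classic double-checked-locking singleton: without volatile, `instance = new Singleton()` may be reordered (allocate, publish the reference, then run the constructor), so another thread can observe a non-null but half-constructed object. The volatile write forbids that reordering:

```java
public class Singleton {
    // volatile forbids reordering of "allocate -> construct -> publish reference"
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, without locking
            synchronized (Singleton.class) {
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();  // volatile write publishes safely
                }
            }
        }
        return instance;
    }
}
```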

3. Memory barriers

A memory barrier (also known as a memory fence or barrier instruction) is a type of synchronization barrier: a point in the CPU’s or compiler’s accesses to memory at which all reads and writes before the point are guaranteed to execute before any operation after it, which prevents code reordering. Memory barriers are realized as JVM instructions: the reordering rules of the Java Memory Model require the Java compiler to insert specific memory barrier instructions when generating JVM instructions. This is how volatile implements visibility and ordering under the Java Memory Model; volatile still does not guarantee atomicity.

All writes before the barrier are written back to main memory, and all reads after the barrier get the latest results of all writes before the barrier (visibility achieved).

In a word: a write to a volatile field happens-before any subsequent read of that volatile field; in other words, the write comes first and the read comes after.

The four memory barrier instructions are StoreStore, StoreLoad, LoadLoad, and LoadStore.