“This is the fifth day of my participation in the Gwen Challenge in November. Check out the details: The Last Gwen Challenge in 2021.”

Multithreading visibility problem analysis

Multiple threads share a variable: one thread modifies its value while another reads it. Without volatile, the reading thread may keep reading a stale value. This is, at its core, a multithreading visibility problem.

Main memory and CPU cache model

In a computer, the CPU fetches data and instructions from main memory to perform computations. Because the CPU is far faster than main memory, fetching from main memory on every access would degrade performance. Modern CPUs therefore have their own CPU caches.

A CPU has multiple cores; we call each one a physical core, and each physical core can run applications. Each physical core has a private Level 1 cache (L1 cache), consisting of an L1 instruction cache and an L1 data cache, as well as a private Level 2 cache (L2 cache).

Because L1 and L2 caches are private to each physical core, when data or instructions are stored in L1 and L2 caches, the physical core can access them with a latency of no more than 10 nanoseconds, ensuring very high CPU computing efficiency.

Since each core has its own local cache, each core reads and computes on its own cached copy of the data. When one thread modifies a value in its core's cache, a thread running on another core may still read the stale copy from its own cache, which is why visibility problems arise when multiple threads run concurrently.

So the multithreaded visibility problem ultimately originates at the CPU cache level.

Bus locking mechanism and MESI cache consistency protocol

The bus locking mechanism means that when a CPU processes data, it locks the system bus or memory bus so that other CPUs cannot access memory, thus ensuring cache consistency.

The bus locking mechanism is inefficient and has largely been abandoned.

The mainstream approach today is the MESI protocol, a cache coherence protocol. MESI ensures cache consistency by flushing modified data back to main memory, having each core sniff the bus for changes, and invalidating a core's own cached copy when another core has modified the data.

MESI is an acronym for the four cache-line states: Modified, Exclusive, Shared, Invalid.

Java memory model

The Java memory model (JMM) is modeled on the CPU cache architecture. It is standardized to mask the underlying differences between computers, defining how a thread's working memory interacts with main memory through the following operations:

  • Read: reads the value of a variable from main memory
  • Load: puts the value read from main memory into working memory
  • Use: passes the value from working memory to the execution engine for computation
  • Assign: assigns the computed value back into working memory
  • Store: transfers the working-memory value to main memory
  • Write: writes the value passed by store into the variable in main memory

Visibility problems occur because working memory does not access main memory every time.
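The six operations above can be sketched as comments on a simple increment. This is a conceptual illustration of the JMM's abstract operations, not a trace of actual JVM behavior, and the class name is my own:

```java
public class JmmStepsDemo {
    // Shared variable; conceptually it lives in main memory.
    static int i = 0;

    public static void main(String[] args) {
        // i = i + 1 conceptually goes through the JMM operations:
        // read   - the value of i is read from main memory
        // load   - the value is placed into the thread's working memory
        // use    - the value is handed to the execution engine (i + 1)
        // assign - the result is assigned back into working memory
        // store  - the result is transferred toward main memory
        // write  - main memory's copy of i is updated
        i = i + 1;

        System.out.println("i = " + i); // prints: i = 1
    }
}
```

If the thread stops after assign but before store/write, main memory still holds the old value, which is exactly the visibility gap described above.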

volatile

What is volatile?

Three kinds of problems may arise in multithreaded concurrent programming: visibility, atomicity, and ordering.

Visibility: When one thread changes the value of a shared variable, other threads are immediately aware of the change.

Ordering: In concurrent execution, the program may run out of order, because compilers and CPUs sometimes reorder instructions to make code run more efficiently.

Atomicity: An operation is indivisible; once it has started, it cannot be interrupted by other threads. For example, suppose two threads concurrently execute i++ on the shared variable int i = 0. If atomicity is guaranteed, i ends up equal to 2; if it is not, i is not necessarily 2 and could be 1, because i++ is really a read-modify-write sequence and the two threads' sequences can interleave.

Volatile guarantees visibility and order, but not atomicity.

How to use volatile

Volatile is easy to use: simply add the volatile keyword to the shared variable's declaration.

public class VolatileDemo {

    public volatile static int i = 0;
    
}

How does volatile guarantee visibility

Volatile ensures that data in the modified working memory is flushed into main memory immediately and that caches of that data in other working memory expire. This ensures visibility.

For volatile variables, the JVM emits an instruction with a lock prefix to the CPU, which writes the value back to main memory as soon as it is computed; under the MESI cache coherence protocol, each CPU sniffs the bus to detect whether its locally cached data has been modified by another core.

If a CPU finds that another core has modified data it has cached, it marks its local copy invalid; the thread executing on that CPU then reloads the latest value from main memory the next time it reads the variable.
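As an illustrative sketch of this effect (the class and field names are my own), a volatile boolean flag lets a writer thread signal a reader thread; declared volatile, the writer's update is flushed to main memory and the reader's cached copy is invalidated, so the reader exits its loop promptly:

```java
public class VisibilityDemo {
    // volatile: writes are flushed to main memory immediately,
    // and other cores' cached copies are invalidated.
    static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (running) {
                // spin until the writer's update becomes visible
            }
            System.out.println("reader observed running = false");
        });
        reader.start();

        Thread.sleep(100);   // let the reader enter its spin loop
        running = false;     // volatile write: guaranteed visible to the reader
        reader.join(1000);   // with volatile, the reader exits promptly
    }
}
```

Without volatile, the JIT compiler may hoist the read of running out of the loop, and the reader can spin forever on its stale cached value.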

How does volatile guarantee ordering

The compiler and the processor may reorder code, but they must obey the happens-before principle.

One of the happens-before rules is the volatile variable rule: a write to a volatile variable happens-before any subsequent read of that variable.

Reads and writes of volatile variables insert memory barriers, which prevent the code before and after the barriers from being reordered across them.

Why does volatile not guarantee atomicity

Both threads can execute the full sequence — use, assign, store, write back to main memory — at the same time on their own working copies. Each may read the same old value, increment it in its own working memory, and write the result back, so one thread's update overwrites the other's. Volatile makes each write immediately visible, but it cannot merge two concurrent read-modify-write sequences, so atomicity cannot be guaranteed.
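The lost-update effect can be demonstrated with a volatile counter (a sketch with my own names; replacing the field with a java.util.concurrent.atomic.AtomicInteger and calling incrementAndGet() would restore atomicity):

```java
public class AtomicityDemo {
    static volatile int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int n = 0; n < 100_000; n++) {
                count++;  // read-modify-write: NOT atomic, even with volatile
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        // Frequently prints less than 200000 because increments interleave.
        System.out.println("count = " + count);
    }
}
```

Every individual write is visible to the other thread, yet the final count usually falls short of 200000 — visibility alone cannot prevent two threads from basing their increments on the same stale value.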