This article is by Coke Coke; author’s homepage: Coke Coke’s personal homepage

Here’s a quick review of caches in operating systems.

The operating system divides main memory and the cache into blocks of equal size, and each memory block is mapped to a cache block according to the number of cache blocks (for example, block number modulo cache size). On an access, the cache is checked first; if the required memory block is not in the cache, the cache is updated from memory.

But the operating system cache has a key property: each memory block maps to exactly one cache block, while one cache block serves multiple memory blocks.
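A direct-mapped cache makes this concrete: taking the memory block number modulo the number of cache lines picks the single line a block can occupy, so many memory blocks share one line. A minimal sketch, with the block numbers and cache size chosen purely for illustration:

```java
// Direct-mapped lookup: memory block -> cache line = block % lineCount.
public class DirectMappedCache {
    public static void main(String[] args) {
        int cacheLines = 8;                       // assumed cache size in blocks
        for (int block : new int[]{3, 11, 19}) {  // assumed memory block numbers
            // all three map to the same line, illustrating that one
            // cache line corresponds to multiple memory blocks
            System.out.println("block " + block + " -> line " + (block % cacheLines));
        }
    }
}
```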

In the Java memory model, however, the situation seems to be reversed, as shown below

Java memory model

In Java,

  1. Instance fields, static fields, and array elements are stored in heap memory; the heap holds the variables that threads share (shared variables)
  2. Local variables, method definition parameters, and exception handling parameters are not shared between threads (these are syntactically restricted from being shared)

This leads to the problem of thread visibility if two threads wish to communicate:

In a computer, processes or threads can communicate in only two ways:

  • Put the information in a public place for the target to read (shared memory)
  • Knock on the target’s door and deliver the message directly (message passing)

In the JMM, however, the essence of communication is still the interaction of shared variables. Since threads are destined to communicate through shared variables, they go through these steps:

  1. Threads A and B read the shared variable into their local caches
  2. A updates its cached copy
  3. A’s cache is written back to main memory
  4. B re-reads from main memory, updating its cache and obtaining the new value
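The four steps above can be sketched with two threads and a shared field. Here `volatile` forces the write-back to main memory (step 3) and the fresh read (step 4); the class and field names are made up for the example:

```java
// Thread A writes a shared variable; thread B reads it afterwards.
public class SharedVariableDemo {
    // shared variable on the heap, visible to both threads;
    // volatile guarantees the write is flushed to main memory
    static volatile int sharedValue = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> sharedValue = 42);  // steps 2-3: update, write back
        a.start();
        a.join();                                       // wait for A's write to finish

        Thread b = new Thread(() ->                     // step 4: B reads the new value
            System.out.println("B sees: " + sharedValue));
        b.start();
        b.join();
    }
}
```

Without `volatile` (or some other synchronization), B could legally keep reading a stale cached value.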

Instruction reordering

Instruction reordering is mentioned in operating systems, computer composition, and so on.

Suppose you have a task like this:

  1. Read variable 1
  2. Compute variable 1 × 500
  3. Read variable 2
  4. Compute variable 1 + variable 2

In a program, to make full use of the hardware, instructions are reordered so that every unit stays busy and the work finishes sooner.

A possible execution order is 1, 3, 2, 4: reading variable 2 does not depend on the multiplication, so it can be moved earlier.
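The task can be written out as code. Steps 2 and 3 touch independent data, so the compiler or processor may execute them in either order without changing the single-threaded result; the `readVar1`/`readVar2` helpers are hypothetical stand-ins for memory loads:

```java
// Steps 2 and 3 are independent, so order 1-3-2-4 gives the same result.
public class ReorderDemo {
    public static void main(String[] args) {
        int v1 = readVar1();      // 1. read variable 1
        int product = v1 * 500;   // 2. compute v1 * 500
        int v2 = readVar2();      // 3. read variable 2 (independent of step 2)
        int sum = v1 + v2;        // 4. compute v1 + v2
        System.out.println(product + " " + sum);
    }

    // hypothetical loads standing in for memory reads
    static int readVar1() { return 3; }
    static int readVar2() { return 4; }
}
```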

The idea of instruction reordering is applied at every level of the computer, from the hardware up to the compiler.

There are three kinds of reordering:

  1. Compiler optimized reordering
  2. Instruction-level parallel reordering
  3. Memory system reordering

But in multithreaded programs, instruction reordering can cause memory visibility problems.

To ensure cache consistency and memory visibility:

  • The JMM requires the compiler to disable certain kinds of reordering
  • The JMM inserts memory barriers to prohibit certain kinds of processor reordering

Here’s a new term: memory barriers.

What is a memory barrier? A memory barrier is a read/write fence that forces two operations to be performed in order. There are four flavours, one per ordering pair: read-read (LoadLoad), write-write (StoreStore), read-write (LoadStore), and write-read (StoreLoad).
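Since Java 9, these barrier flavours are exposed directly as static fence methods on `java.lang.invoke.VarHandle`. The single-threaded sketch below only shows where the fences would sit; ordinarily the JMM inserts them for you around volatile accesses, and the field names here are invented for the example:

```java
import java.lang.invoke.VarHandle;

// Illustrates where explicit fences sit relative to plain reads/writes.
public class FenceDemo {
    static int data = 0;
    static boolean ready = false;

    public static void main(String[] args) {
        data = 42;
        VarHandle.storeStoreFence();  // write-write barrier: data is flushed before ready
        ready = true;

        VarHandle.loadLoadFence();    // read-read barrier: ready is read before data
        if (ready) {
            System.out.println("data = " + data);
        }
    }
}
```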

The rules governing which instruction reorderings are allowed come from happens-before.

The happens-before rules most relevant to programmers are the following (and they are important):

  1. Program order rule: every action in a thread happens-before any subsequent action in that thread.
  2. Monitor lock rule: an unlock of a monitor lock happens-before every subsequent lock of that same monitor.
  3. Volatile variable rule: a write to a volatile field happens-before any subsequent read of that field; a read of a volatile variable always sees the last write to it (visibility).
  4. Transitivity: If A happens-before B, and B happens-before C, then A happens-before C.

It boils down to four things: program order within a single thread, locking, volatile, and transitivity.
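Rules 1, 3, and 4 combine in the classic flag pattern sketched below (names invented for the example): the write to `data` happens-before the volatile write to `ready` (program order), which happens-before the volatile read of `ready` in the reader (volatile rule), which happens-before the read of `data` (program order again). By transitivity, the reader is guaranteed to see `data == 42`:

```java
// Writer publishes data via a volatile flag; reader waits on the flag.
public class HappensBeforeDemo {
    static int data = 0;                    // plain shared field
    static volatile boolean ready = false;  // volatile flag

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> {
            data = 42;        // program order: before the volatile write
            ready = true;     // volatile write
        });
        Thread reader = new Thread(() -> {
            while (!ready) { }  // volatile read: spins until the write is seen
            System.out.println("data = " + data);  // guaranteed 42 by transitivity
        });
        reader.start();
        writer.start();
        writer.join();
        reader.join();
    }
}
```

If `ready` were not volatile, the reader could spin forever or observe `data == 0`.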

Each happens-before rule corresponds to one or more compiler and processor reordering rules. Java programmers need only be familiar with these rules, not with the underlying implementation.