What is the memory model

A memory model defines the specification for multi-threaded reads and writes of shared memory, in order to guarantee correctness properties such as visibility, ordering, and atomicity. These rules standardize memory access and ensure that instructions execute correctly. The memory model involves processors, caches, concurrency, and compilers: it solves the memory-access problems caused by multi-level CPU caches, processor optimizations, and instruction reordering, and it guarantees consistency, atomicity, and ordering in concurrent scenarios.

What is the Java Memory model

We know that Java programs run on the Java Virtual Machine. The Java Memory Model (JMM) is Java's realization of the memory-model specification: it shields the access differences caused by different hardware and operating systems, so that a Java program's memory accesses behave consistently on every platform.

Visibility issues:

In today’s multi-core era, each CPU has its own cache, so keeping the CPU caches and main memory consistent is no longer simple to solve. When threads run on different CPUs, they operate on different CPU caches. As shown in the figure, thread A works in CPU 1’s cache and thread B works in CPU 2’s cache; if thread A modifies variable a, that change is not visible to thread B.
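A minimal sketch of the standard fix for this stale-value problem, assuming the shared data is published through a volatile flag (class and field names here are illustrative, not from the original text): the reader spins on the volatile flag, and once it observes the flag it is also guaranteed to see the ordinary write made before it.

```java
public class VisibilityDemo {
    static volatile boolean ready = false; // volatile: this write is published to all threads
    static int payload = 0;                // plain field, published via the volatile flag
    static int observed = -1;

    public static int run() {
        Thread reader = new Thread(() -> {
            while (!ready) { }  // spin until the volatile write becomes visible
            observed = payload; // guaranteed to see 42: volatile rule + program order
        });
        reader.start();
        payload = 42;           // ordinary write...
        ready = true;           // ...made visible by the volatile write that follows it
        try {
            reader.join();      // join() makes the reader's write to observed visible here
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return observed;
    }

    public static void main(String[] args) {
        System.out.println("reader observed payload = " + run()); // prints 42
    }
}
```

Without volatile on the flag, the reader thread might spin forever on a stale cached value; with it, termination and the value 42 are both guaranteed.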

Atomicity problem:

Early on, operating systems scheduled the CPU based on processes. Different processes do not share memory address space, so a task switch between processes requires switching the memory-address mapping. All threads created by a process, however, share its address space, so task switching between threads is very cheap. The operating system can perform a task switch after any CPU instruction completes.

Suppose count = 0. If thread A is switched out after instruction 1, and thread A and thread B then interleave in the sequence shown above, both threads compute count + 1 = 1 and write back 1 instead of the expected 2.
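The lost update happens because count++ is really three instructions (load, add, store) and a thread switch can fall between them. A hedged sketch of one common fix, AtomicInteger, which performs the read-modify-write as a single indivisible operation (the class name and counts below are illustrative):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicityDemo {
    // count++ is load, add, store; a thread switch between those steps
    // loses an update. AtomicInteger makes the three steps one atomic operation.
    public static int countWithAtomic() {
        AtomicInteger count = new AtomicInteger(0);
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                count.incrementAndGet(); // indivisible read-modify-write
            }
        };
        Thread a = new Thread(task);
        Thread b = new Thread(task);
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return count.get();
    }

    public static void main(String[] args) {
        System.out.println(countWithAtomic()); // always 20000
    }
}
```

With a plain int and count++ instead, two threads doing 10,000 increments each would routinely produce a total below 20,000.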

Ordering problem:

To optimize performance, the compiler sometimes changes the order of program statements. For example, the program “a = 1; b = 2” may become “b = 2; a = 1” after compiler optimization. Although this does not affect the final result of the program, compiler and interpreter optimizations can lead to unexpected bugs (double-checked locking when creating a singleton is the classic example).
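The double-checked-locking singleton mentioned above is worth writing out, since the volatile modifier is exactly what forbids the harmful reordering. This is the standard textbook sketch, not code from any particular project:

```java
public class Singleton {
    // Without volatile, "instance = new Singleton()" can be reordered so the
    // reference is published before the constructor finishes; another thread
    // could then see a non-null but partially constructed object.
    private static volatile Singleton instance;

    private Singleton() { }

    public static Singleton getInstance() {
        if (instance == null) {              // first check, no locking
            synchronized (Singleton.class) {
                if (instance == null) {      // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```

The first null check avoids taking the lock on every call; the second ensures only one thread ever constructs the instance.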

In Java, from the source code to the instruction sequence that actually executes, a program goes through three kinds of reordering: compiler-optimization reordering, instruction-level-parallelism reordering, and memory-system reordering.

These reorderings can cause memory-visibility problems in multi-threaded programs. On the compiler side, the JMM’s compiler reordering rules prohibit certain kinds of compiler reordering.

The design of the JMM

To understand happens-before, we first need to introduce the JMM. The JMM distinguishes two cases in its design. First, reorderings that would change the result of program execution: the JMM requires the compiler and processor to forbid these. Second, reorderings that do not change the result of program execution: the JMM places no requirement on the compiler and processor here (it allows such reorderings).

As the figure above shows, on one hand the JMM gives the programmer a sufficiently strong memory-visibility guarantee: every visibility guarantee needed for correct execution results is preserved. On the other hand, the JMM imposes as few constraints on the compiler and processor as possible: they may optimize however they like, as long as they do not change the result of the program’s execution.

The happens-before principle

How should we understand the happens-before rule? Happens-before does not mean that one action literally occurs before another in time; it means that the result of the first action is visible when the subsequent action reads it. Because the JMM presents the programmer with a view of sequential execution, if one operation happens-before another, the result of the first operation is visible to the second, and the first operation is ordered before the second.

1. “Program order rule:” within a single thread, following the order of the code, earlier code happens-before later code. Strictly speaking this is control-flow order rather than textual order, since branches and loops must be taken into account.

2. “Monitor lock rule:” an unlock operation on a lock happens-before every subsequent lock operation on that same lock.

3. “Volatile variable rule:” a write to a volatile variable happens-before every subsequent read of that variable.

4. “Thread start rule:” a call to a thread’s start() method happens-before every action of the started thread.

5. “Thread termination rule:” all operations of a thread happen-before the detection that this thread has terminated. Termination can be detected by Thread.join() returning, by Thread.isAlive() returning false, and so on.

6. “Thread interruption rule:” a call to Thread.interrupt() happens-before the point where the interrupted thread’s code detects the interrupt.

7. “Object finalization rule:” the completion of an object’s initialization (the end of its constructor) happens-before the start of its finalize() method.

8. “Transitivity:” if operation A happens-before operation B and operation B happens-before operation C, then operation A happens-before operation C.

Let’s look at the eight rules above in detail.

“Program order rule:” within a single thread, execution results appear ordered even though the virtual machine and processor may reorder instructions. Because the reordering does not affect the outcome, the result matches that of sequential execution. This rule therefore holds only within a single thread; it guarantees nothing in a multi-threaded environment.

“Monitor lock rule:” a lock that is held must be unlocked before it can be locked again, and that unlock happens-before the subsequent lock on the same monitor.
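The monitor lock rule can be sketched as follows (class and field names are illustrative): the writer’s unlock happens-before the reader’s later lock on the same monitor, so a plain, non-volatile field written inside one synchronized block is visible inside another.

```java
public class MonitorLockDemo {
    static final Object LOCK = new Object();
    static int value = 0; // plain field: the lock, not volatile, publishes it

    public static int run() {
        Thread writer = new Thread(() -> {
            synchronized (LOCK) {
                value = 99;
            } // unlock: happens-before every subsequent lock of LOCK
        });
        writer.start();
        int seen;
        do {
            synchronized (LOCK) { // lock: sees writes made before prior unlocks
                seen = value;
            }
        } while (seen == 0);      // poll until the writer's update is observed
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 99
    }
}
```

Once the writer releases the monitor, the very next acquisition by the reader is guaranteed to see 99, so the loop terminates.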

“Volatile variable rule:” this rule states that volatile guarantees visibility across threads: a write to a volatile variable is visible to every subsequent read of it.

“The rule of transitivity:” This rule reflects the transitivity of the happens-before principle, that is, A happens-before B, B happens-before C, then A happens-before C.

“Thread start rule:” if thread A starts thread B during its execution, then the changes thread A made to shared variables before calling B’s start() are visible to thread B once it starts.
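An illustrative sketch of the start rule (names are made up for the demo): a plain field written before start() needs no volatile to be visible in the child thread.

```java
public class StartRuleDemo {
    static int config = 0;      // plain field: start() supplies the happens-before edge
    static int seenByChild = -1;

    public static int run() {
        config = 7; // written before start(): guaranteed visible to the child
        Thread child = new Thread(() -> seenByChild = config);
        child.start();
        try {
            child.join(); // join() then makes the child's write visible here
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seenByChild;
    }

    public static void main(String[] args) {
        System.out.println("child saw config = " + run()); // prints 7
    }
}
```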

“Thread termination rule:” if thread A waits for thread B to terminate by calling join() during its execution, then the changes thread B made to shared variables before terminating are visible to thread A after join() returns.
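The termination rule can be sketched the same way (names again illustrative): after join() returns, thread B’s write is visible to thread A without any volatile or lock.

```java
public class JoinRuleDemo {
    static int shared = 0; // plain field: join() supplies the happens-before edge

    public static int run() {
        Thread b = new Thread(() -> shared = 42);
        b.start();
        try {
            b.join(); // all of b's writes are visible once join() returns
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return shared;
    }

    public static void main(String[] args) {
        System.out.println(run()); // prints 42
    }
}
```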