Preface


I’m not sure why, but whenever I write an article I like to put a picture at the top; maybe it’s a touch of obsessive-compulsiveness. I recently bought a few books on Java concurrency and have been reading through them, and if all goes well the next several articles will cover the Java memory model (pictured). There is quite a lot of material, so I plan to split it across separate posts. The Java memory model is the foundation of Java concurrency.

Two key problems with the concurrent programming model


How do threads communicate with each other

Communication refers to the mechanism by which threads exchange information. In imperative programming, there are two communication mechanisms between threads: shared memory and message passing.

Shared memory: Threads share the program’s common state and communicate implicitly by writing and reading that shared state in memory.

Message passing: There is no common state between threads; threads must communicate explicitly by sending messages to each other.

Concurrency in Java uses the shared memory model.
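As a concrete contrast (a minimal sketch using standard java.util.concurrent classes; the class and variable names are my own, not from the original article), the first half below communicates implicitly by writing and reading a shared field, while the second half communicates explicitly by passing a message through a BlockingQueue:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class CommunicationStyles {

    // Shared memory: threads communicate implicitly through a shared field.
    // volatile is used here so the write is guaranteed to be visible to the reader.
    static volatile int sharedValue;

    public static void main(String[] args) throws InterruptedException {
        // 1) Implicit communication via shared memory (the model Java concurrency uses)
        Thread writer = new Thread(() -> sharedValue = 42);
        writer.start();
        writer.join();
        System.out.println("read from shared memory: " + sharedValue);

        // 2) Explicit communication via message passing, for contrast
        BlockingQueue<Integer> channel = new ArrayBlockingQueue<>(1);
        Thread sender = new Thread(() -> {
            try {
                channel.put(42); // explicitly send a message
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        sender.start();
        System.out.println("received message: " + channel.take()); // explicitly receive it
        sender.join();
    }
}
```

Note that even the BlockingQueue in this sketch is built on shared memory internally; at the memory-model level, Java threads still communicate through shared variables, which is exactly the point made in the conclusion below.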

How are threads synchronized

Synchronization is the mechanism used in a program to control the relative order in which operations occur between different threads.

Shared memory: In the shared memory model, synchronization is explicit; the programmer must explicitly specify that a method or piece of code is to be executed mutually exclusively between threads (for example with the synchronized keyword, as sketched after this list).

Message passing: In the message-passing model, synchronization is implicit, because a message must be sent before it can be received.
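Here is a minimal sketch of explicit synchronization in Java’s shared memory model (the Counter class is my own illustration): the programmer marks the critical section with synchronized so that only one thread at a time can execute it.

```java
public class Counter {
    private int count;

    // Explicit synchronization: the programmer marks this method as a critical
    // section, so increments from different threads are mutually exclusive.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) {
                counter.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Always prints 20000; without synchronized, updates could be lost.
        System.out.println(counter.get());
    }
}
```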

Conclusion

Concurrency in Java uses a shared memory model, where communication between Java threads is always implicit and completely transparent to the programmer.

The abstract structure of the Java memory model


In Java, all instance fields, static fields, and array elements are stored in heap memory, and heap memory is shared between threads (this article uses the term “shared variables” to refer to instance fields, static fields, and array elements). Local variables, method parameters, and exception-handler parameters are not shared between threads, so they have no memory visibility problems and are not affected by the memory model.
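A small sketch of the distinction (the class and member names are illustrative, not from the article):

```java
public class SharedVsLocal {
    static int staticField;            // shared: stored on the heap, visible to all threads
    int instanceField;                 // shared: stored on the heap with the object
    int[] arrayElements = new int[8];  // shared: array elements also live on the heap

    void compute(int parameter) {          // not shared: parameters are private to the calling thread
        int localVariable = parameter + 1; // not shared: lives on the thread's stack
        instanceField = localVariable;     // writing a shared variable: JMM rules apply
        arrayElements[0] = localVariable;  // also a shared-variable write
    }
}
```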

Communication between Java threads is controlled by the Java Memory Model (JMM), which determines when a write to a shared variable by one thread becomes visible to another thread. From an abstract point of view, the JMM defines an abstract relationship between threads and main memory: shared variables are stored in main memory, and each thread has a private local memory holding that thread’s copies of the shared variables it reads and writes. Local memory is an abstraction of the JMM and does not physically exist; it covers caches, write buffers, registers, and other hardware and compiler optimizations. An abstract representation of the Java memory model looks like this:

For thread A and thread B to communicate, two steps must occur: 1) Thread A flushes the updated shared variables from its local memory to main memory. 2) Thread B reads from main memory the shared variables that thread A updated.

The following figure illustrates these two steps:

As shown, local memory A and local memory B each hold a copy of the shared variable x from main memory. Suppose that initially the value of x is 0 in all three memories. While thread A executes, it temporarily stores its updated x (say, with the value 1) in its own local memory. When thread A and thread B need to communicate, thread A first flushes the modified x from its local memory to main memory, at which point the value of x in main memory becomes 1. Thread B then reads thread A’s updated x from main memory, and the copy of x in thread B’s local memory also becomes 1.

Taken as a whole, these two steps amount to thread A sending a message to thread B, and this communication must pass through main memory. The JMM provides Java programmers with memory visibility guarantees by controlling the interaction between main memory and each thread’s local memory.
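To see why the JMM needs to control that interaction, here is a hedged sketch of what can go wrong when nothing forces those two steps to happen (the class and field names are mine): thread B may keep spinning on the stale copy of the flag in its local memory, because nothing guarantees that thread A’s write is ever flushed to main memory and re-read.

```java
public class VisibilityHazard {
    // A plain (non-volatile) shared variable: the JMM gives no guarantee about
    // when, or even whether, thread A's write becomes visible to thread B.
    static boolean ready;

    public static void main(String[] args) throws InterruptedException {
        Thread b = new Thread(() -> {
            // Thread B may spin forever on the stale copy of 'ready' in its local memory.
            while (!ready) {
                // busy-wait
            }
            System.out.println("thread B saw the update");
        });
        b.setDaemon(true); // let the JVM exit even if thread B never sees the write
        b.start();

        Thread.sleep(100);
        ready = true;      // thread A's write may sit in its local memory / caches

        b.join(1000);      // on some JVMs and platforms, B never prints anything
        // Declaring 'ready' as volatile forces the two steps described above:
        // the write is flushed to main memory and the reader re-reads it from there.
    }
}
```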

Reordering from source code to instruction sequence


When executing a program, the compiler and processor often reorder instructions to improve performance. There are three types of reordering:

1) Compiler reordering: the compiler can rearrange the execution order of statements without changing the semantics of a single-threaded program. 2) Instruction-level parallel reordering: modern processors use instruction-level parallelism (ILP) to overlap the execution of multiple instructions; if there is no data dependency, the processor can change the order in which the machine instructions corresponding to the statements are executed. 3) Memory system reordering: because the processor uses caches and read/write buffers, load and store operations can appear to execute out of order.

From Java source code to the instruction sequence that is actually executed, a program goes through these three kinds of reordering, as shown in the figure below:

Here, 1 is a compiler reordering, while 2 and 3 are processor reorderings. These reorderings can cause memory visibility problems in multithreaded programs. For the compiler, the JMM’s compiler reordering rules disallow certain types of compiler reordering (not all compiler reordering is prohibited). For processor reordering, the JMM’s processor reordering rules require the Java compiler, when generating the instruction sequence, to insert memory barrier instructions of specific types that prohibit particular kinds of processor reordering.
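A commonly used illustration of the hazard (this particular class is my own sketch, not code from the article): the writer’s two statements have no data dependency on each other, so the compiler or processor may reorder them, and the reader can then observe flag == true while still reading the old value of a.

```java
public class ReorderExample {
    static int a;
    static boolean flag;

    static void writer() {
        a = 1;          // (1)
        flag = true;    // (2) no data dependency on (1), so (1) and (2) may be reordered
    }

    static void reader() {
        if (flag) {         // (3)
            int r = a * a;  // (4) may compute 0 if (1) and (2) were reordered
            System.out.println(r);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(ReorderExample::writer);
        Thread t2 = new Thread(ReorderExample::reader);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Declaring 'flag' as volatile makes the JMM insert the memory barriers
        // that forbid this reordering (see the happens-before rules below).
    }
}
```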

The JMM is a language-level memory model that ensures consistent memory visibility for programmers across compilers and processor platforms by disallowing certain types of compiler reordering and processor reordering.

Happens-before overview


Starting with JDK 5, Java uses the new JSR-133 memory model, which uses the concept of happens-before to describe memory visibility between operations. In the JMM, if the result of one operation needs to be visible to another operation, there must be a happens-before relationship between the two. The two operations can be within a single thread or in different threads.

The happens-before rules most relevant to programmers are as follows:

1) Program order rule: every operation in a thread happens-before any subsequent operation in that thread. 2) Monitor lock rule: unlocking a lock happens-before every subsequent locking of that same lock. 3) Volatile variable rule: a write to a volatile field happens-before every subsequent read of that volatile field. 4) Transitivity: if A happens-before B, and B happens-before C, then A happens-before C.

A happens-before relationship between two operations does not mean that the first operation must physically execute before the second! Happens-before only requires that the result of the first operation be visible to the second operation, and that the first operation be ordered before the second.
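A hedged sketch of how these rules combine (the class and field names are mine): the write to data happens-before the volatile write to ready by the program order rule, the volatile write happens-before the volatile read by the volatile variable rule, and by transitivity the write to data is visible wherever the read of ready sees true.

```java
public class HappensBeforeExample {
    static int data;                 // a plain shared variable
    static volatile boolean ready;   // a volatile shared variable

    static void writer() {
        data = 42;      // (1)
        ready = true;   // (2) program order rule: (1) happens-before (2)
    }

    static void reader() {
        if (ready) {    // (3) volatile variable rule: (2) happens-before (3)
            // (4) program order rule: (3) happens-before (4);
            // by transitivity, (1) happens-before (4), so data is guaranteed to be 42 here.
            System.out.println(data);
        }
    }

    public static void main(String[] args) {
        new Thread(HappensBeforeExample::writer).start();
        new Thread(HappensBeforeExample::reader).start();
    }
}
```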

The relationship between happens-before and JMM is shown below:

As the figure shows, one happens-before rule corresponds to one or more compiler and processor reordering rules. The happens-before rules are simple and easy for programmers to use; they save programmers from having to learn the complex reordering rules and their concrete implementations in order to understand the memory visibility guarantees that the JMM provides.
