To get new content earlier, visit my personal blog

A few words up front

One of the happens-before rules covered in the previous article is the volatile variable rule:

A write to a volatile field happens-before every subsequent read of that same volatile field

By definition, knowing this rule should be enough to use volatile correctly. But interviewers like to dig into the memory and read-write semantics behind volatile to gain the upper hand in an interview, and the previous article left a few gaps to fill. Hence this article was, somewhat awkwardly, born

Happens-before rules for volatile variables

Do you remember the table below? (Yes, you remember 😂)

| Can they be reordered? | Second operation: ordinary read/write | Second operation: volatile read | Second operation: volatile write |
| --- | --- | --- | --- |
| First operation: ordinary read/write | | | NO |
| First operation: volatile read | NO | NO | NO |
| First operation: volatile write | | NO | NO |
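
To connect the table to actual code, here is a tiny illustrative snippet (the class and field names are made up for this example); each comment points at the table cell that forbids the reordering:

class ReorderTableExample {

	private int plain = 0;
	private volatile int vol = 0;

	void sample() {
		plain = 1;      // ordinary write: may NOT be reordered after the volatile write below
		                // (row "ordinary read/write", column "volatile write")
		vol = 1;        // volatile write: may NOT be reordered with the volatile read below
		                // (row "volatile write", column "volatile read")
		int r = vol;    // volatile read: may NOT be reordered with anything that follows it
		                // (row "volatile read", all columns)
		plain = r;      // so this ordinary write stays after the volatile read
	}
}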

That table is the set of volatile reordering rules the JMM specifies for compilers. So how does the JMM actually prohibit those reorderings? The answer: memory barriers

Memory Barriers/Fences

It doesn’t matter whether you’ve heard the term before; it’s simple. Just read on:

To implement volatile’s memory semantics, the compiler inserts memory barriers into the instruction sequence when generating bytecode, preventing particular kinds of processor reordering

If there is a barrier between two operations, they cannot be reordered across it. Memory operations are either loads (reads) or stores (writes), and a barrier constrains what comes before and after it. The JMM’s barrier insertion strategy boils down to four rules:

  1. Insert a StoreStore barrier before each volatile write
  2. Insert a StoreLoad barrier after each volatile write
  3. Insert a LoadLoad barrier after each volatile read
  4. Insert a LoadStore barrier after each volatile read

Rules 1 and 2 (the barriers around a volatile write) are shown graphically in the figure, alongside the corresponding table rules:

Rules 3 and 4 (the barriers after a volatile read) are shown graphically in the figure, alongside the corresponding table rules:
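
Since the figures themselves aren’t reproduced here, the following comment-annotated sketch (an illustrative class written for this article, not taken from the figures) marks where the four rules place their barriers around a standalone volatile write and a standalone volatile read:

class BarrierPositions {

	private int plain;
	private volatile int v;

	void volatileWrite() {
		plain = 1;      // ordinary write
		                // StoreStore barrier  (rule 1: before the volatile write)
		v = 2;          // volatile write
		                // StoreLoad barrier   (rule 2: after the volatile write)
	}

	void volatileRead() {
		int r = v;      // volatile read
		                // LoadLoad barrier    (rule 3: after the volatile read)
		                // LoadStore barrier   (rule 4: after the volatile read)
		plain = r;      // ordinary write
	}
}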

The figures are really just another presentation of the table; they simply show how memory barriers prevent instruction reordering. So what you really have to remember is the table, right?

A real program’s reads and writes are usually not as simple as the two cases above, so how do these barriers work together? It’s not difficult at all: map each operation onto the table from the beginning of this article, then join the required barriers together in order

Take a look at a short program:

public class VolatileBarrierExample {

	private int a;
	private volatile int v1 = 1;
	private volatile int v2 = 2;

	void readAndWrite() {
		int i = v1;     // The first volatile read
		int j = v2;     // The second volatile read
		a = i + j;      // Ordinary write
		v1 = i + 1;     // The first volatile write
		v2 = j * 2;     // The second volatile write
	}
}

With the barrier instructions inserted, the program looks like this:
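
The figure isn’t reproduced here either, so as a stand-in, here is the same readAndWrite method from VolatileBarrierExample with the barriers of the conservative strategy marked as comments (only the full conservative set is shown; the dashed “optimizable” barriers from the figure are discussed below):

	void readAndWrite() {
		int i = v1;     // first volatile read
		                // LoadLoad barrier
		                // LoadStore barrier
		int j = v2;     // second volatile read
		                // LoadLoad barrier
		                // LoadStore barrier
		a = i + j;      // ordinary write
		                // StoreStore barrier (before the first volatile write)
		v1 = i + 1;     // first volatile write
		                // StoreLoad barrier
		                // StoreStore barrier (before the second volatile write)
		v2 = j * 2;     // second volatile write
		                // StoreLoad barrier
	}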

Let’s look at that figure from a couple of angles:

  1. The colored barriers are everything generated by the conservative insertion strategy; this is the “safest” code the compiler can produce
  2. Many of these barriers are clearly redundant; the dashed boxes on the right mark the barriers that can be “optimized” away

Now you know how volatile uses memory barriers to keep the program from being reordered “arbitrarily”. But how does volatile guarantee visibility?

Memory semantics of volatile write-read

To review the program from the previous article, assume thread A executes writer first and thread B then executes reader:

public class ReorderExample {

	private int x = 0;
	private int y = 1;
	private volatile boolean flag = false;

	public void writer() {
		x = 42;         // 1
		y = 50;         // 2
		flag = true;    // 3
	}

	public void reader() {
		if (flag) {                         // 4
			System.out.println("x:" + x);   // 5
			System.out.println("y:" + y);   // 6
		}
	}
}

Do you remember the JMM structure from earlier? (Yes, you remember 😂.) When thread A executes the writer method, things look like the following figure:

Thread A flushes the variables it changed in its local memory back to main memory; that is exactly the memory semantics of a volatile write

Memory semantics for volatile reads:

When a volatile variable is read, the JMM invalidates the thread’s local memory. The thread will next read the shared variable from main memory.

So when thread B executes the reader method, the structure looks like this:

Thread B’s local copies are invalidated, so it reads the shared variables from main memory into its local memory and sees the changes thread A made

If you read the previous article, you will understand the two figures above. Taken together, they say:

  1. When thread A writes a volatile variable, it is essentially sending a message to any thread that will subsequently read that volatile variable
  2. When thread B reads a volatile variable, it is essentially receiving that message, i.e. the changes some thread made to shared variables before it wrote the volatile variable
  3. Thread A writes a volatile variable and thread B then reads it; in essence, thread A sends a message to thread B through main memory (see the sketch below)
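
A minimal runnable sketch of this “message passing” view, reusing the ReorderExample class above (the class name VolatileMessageDemo and the thread setup are my own illustration; whether thread B sees flag == true at all depends on scheduling, but if it does, it is guaranteed to also see x = 42 and y = 50):

public class VolatileMessageDemo {

	public static void main(String[] args) throws InterruptedException {
		ReorderExample example = new ReorderExample();

		// Thread A "sends the message": it updates x and y, then writes the volatile flag
		Thread a = new Thread(example::writer, "A");
		// Thread B "receives the message": if it reads flag == true, the volatile rule
		// guarantees it also sees the writes to x and y that happened before it
		Thread b = new Thread(example::reader, "B");

		a.start();
		b.start();
		a.join();
		b.join();
	}
}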

At this point you should have plenty to say about volatile in an interview, and a deeper understanding of its memory semantics

Easter egg

As mentioned in a previous post:

From the point of view of memory semantics, a volatile write-read pair has the same memory effect as a lock release-acquire pair: a volatile write has the same memory semantics as releasing a lock, and a volatile read has the same memory semantics as acquiring a lock
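
To make the analogy concrete, here is a sketch of the same idiom written with a lock instead of volatile (a made-up class for illustration; the point is only that the monitor release in writer plays the role of the volatile write, and the monitor acquire in reader plays the role of the volatile read):

public class LockedReorderExample {

	private int x = 0;
	private int y = 1;
	private boolean flag = false;   // no volatile needed: the lock provides the memory semantics

	public synchronized void writer() {     // lock acquire
		x = 42;
		y = 50;
		flag = true;
	}                                       // lock release: like a volatile write, the changes become visible

	public synchronized void reader() {     // lock acquire: like a volatile read, stale local values are discarded
		if (flag) {
			System.out.println("x:" + x);
			System.out.println("y:" + y);
		}
	}                                       // lock release
}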

Keep the last two JMM figures in mind and you’ll quickly grasp the memory semantics of synchronized when we get to it. If you’re interested, work out the write-read semantics of synchronized for yourself

Next we’ll talk about lock-related content; stay tuned…

Soul-searching questions

  1. If a volatile write is immediately followed by a method return, will the StoreLoad barrier still be generated?
  2. How is synchronized progressively optimized?

Efficiency tools

tool.lu

Tool.lu is an online toolbox that bundles a great many utilities and covers most day-to-day development needs


Recommended reading

  • Don’t miss this step into the world of concurrency
  • A thorough understanding of these three cores is the key to learning concurrent programming
  • There are three sources of concurrency bugs, so keep your eyes open for them
  • Visibility order, happens-before
  • Solve the atomic problem? The first thing you need is a macro understanding

You’re welcome to keep following the public account: “One soldier of The Japanese Arch”

  • Cutting-edge Java technology content shared regularly
  • Reply “tool” for a summary of efficiency tools
  • Interview question analysis and answers
  • Reply “data” to receive technical resources

Learn the Java technology stack in an easy, fun, detective-novel style of thinking: complex problems simplified, abstract problems made concrete, and technical problems broken down step by step with diagrams. Content is continuously updated, so stay tuned……