Because there is too much content to cover in full, this is only a rough introduction to some of the knowledge points; each small node has more detailed content behind it. Here we go!


Why the volatile keyword

In most cases, an interviewer who asks about volatile knows a thing or two about it. volatile is an entry point into the Java Memory Model (JMM), and the JMM in turn is an entry point into Java concurrent programming. The underlying workings of the JVM, bytecode, and the singleton pattern can all come up along the way.

So people who know how to ask questions have a knack for it. Let’s take a look at what volatile is designed to do: memory visibility (JMM), atomicity (JMM), prohibition of instruction reordering, thread concurrency, differences from synchronized… Dig deeper, and bytecode, the JVM, and so on might be involved.

Fortunately, if you have already studied the JVM series of articles on the WeChat public account “program new Horizon”, none of the above is a problem; treat it as a review. So, let’s try answering in the form of an interviewer’s questions, without looking at the answers, and see how well you have learned. Here we go…




Interviewer: Tell me about the characteristics of the volatile keyword

Shared variables that are volatile have two characteristics:

  • It guarantees memory visibility of different threads’ operations on the variable;
  • It disallows instruction reordering.

These two features are the heart of the volatile keyword, so let’s continue to focus on them.

Interviewer: What is memory visibility? Can you give some examples?

This question relates to the Java Memory Model (JMM) and its memory visibility feature, and is answered in part from the earlier articles “The Java Memory Model (JMM) In Detail” and “Principles of the Java Memory Model in Detail.”

First, the memory model: the Java Virtual Machine specification defines a Java Memory Model (JMM) to shield the differences in memory access across hardware and operating systems, so that Java programs achieve consistent memory access effects on all platforms.

The Java memory model synchronizes a variable’s new value to main memory after it is modified, and refreshes the variable’s value from main memory before it is read, using main memory as the transfer medium between threads. The process of memory visibility works like this:

Local memory A and local memory B each hold a copy of the shared variable x in main memory, all with an initial value of 0. Thread A updates x to 1 and stores it in local memory A. When thread A and thread B need to communicate, thread A first flushes x = 1 from its local memory to main memory, so the value of x in main memory becomes 1. Thread B then reads the updated x from main memory, and the copy of x in thread B’s local memory also becomes 1.

Finally, visibility: Visibility means that when one thread changes the value of a shared variable, other threads immediately know about the change.

This holds for both plain and volatile variables; the difference is that volatile guarantees that a new value is immediately synchronized to main memory and that the variable is re-read from main memory just before each use, ensuring visibility for multithreaded operations. Ordinary variables offer no such guarantee.
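A minimal sketch of volatile visibility (the class and field names here are illustrative, not from the original): a writer thread flips a volatile flag, and the reader is guaranteed to see the new value and exit its loop. Were `ready` not volatile, the reader could in principle spin forever on a stale cached value.

```java
public class VisibilityDemo {
    static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the write becomes visible */ }
            System.out.println("saw ready = true");
        });
        reader.start();
        Thread.sleep(100);   // give the reader a head start
        ready = true;        // volatile write: flushed to main memory immediately
        reader.join(1000);   // reader terminates once it sees the new value
        System.out.println(reader.isAlive() ? "still spinning" : "done");
    }
}
```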

Interviewer: Speaking of the JMM and visibility, can you talk about other features of the JMM?

We know that, in addition to visibility, the JMM also has atomicity and orderliness.

Atomicity means that an operation, or a series of operations, cannot be broken up. Even with multiple threads, once such an operation has started it cannot be disturbed by other threads.

For example, suppose a static variable int x is assigned by two threads at the same time: thread A assigns 1 and thread B assigns 2. No matter how the threads are scheduled, the final value of x is either 1 or 2; the assignments by thread A and thread B are atomic and cannot be interrupted.
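The example above can be sketched in code (the class name AssignDemo is hypothetical): two threads race to assign x, and after both finish the value is always exactly 1 or 2, never a torn intermediate, because a 32-bit int write is atomic in Java.

```java
public class AssignDemo {
    static int x = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> x = 1);
        Thread b = new Thread(() -> x = 2);
        a.start(); b.start();
        a.join(); b.join();   // join() also guarantees visibility of the writes
        // The assignment itself is atomic: x is exactly 1 or 2
        System.out.println(x == 1 || x == 2);  // prints "true"
    }
}
```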

Orderliness in the Java memory model can be summarized as follows: if observed within a single thread, all operations are ordered; if one thread is observed from another thread, all operations are out of order.

Orderliness refers to the sequential execution of single-threaded code. In a multi-threaded environment, however, disorder may appear, because the compiler may perform “instruction reordering”, and the reordered instructions may not run in the same order as the original ones.

Therefore, the first half of the sentence above refers to serial semantics within a thread, while the second half refers to the “instruction reordering” phenomenon and the “working memory and main memory synchronization delay” phenomenon.

Interviewer: You have mentioned reordering several times. Can you give an example?

To improve execution efficiency, the CPU and the compiler are allowed to optimize instructions according to certain rules. However, there may be dependencies between statements in the code, and executing them in a different order under concurrency can produce different results.

Here is an example of a possible reordering under multithreading:

class ReOrderDemo {
    int a = 0;
    boolean flag = false;

    public void write() {
        a = 1;           // 1
        flag = true;     // 2
    }

    public void read() {
        if (flag) {          // 3
            int i = a * a;   // 4
            // ...
        }
    }
}

In the code above, when executed in a single thread, the read method checks flag and produces the expected result. But with multiple threads, different results can occur. For example, when thread A calls write, instruction reordering may cause the code in the write method to execute in this order:

flag = true;     // 2
a = 1;           // 1

That is, flag may be assigned first, and then a. In a single thread this does not affect the final output.

However, if thread B is calling the read method at the same time, then it is possible that flag is true but a is still 0, and the result of the step 4 operation will be 0 instead of 1 as expected.

Volatile variables, on the other hand, prevent instruction reordering, thereby avoiding multithreading problems to a certain extent.

Interviewer: Does volatile guarantee atomicity?

Volatile guarantees visibility and order (disallows instruction reordering), but does it guarantee atomicity?

Volatile does not guarantee atomicity; it is atomic only for reads/writes of individual volatile variables, but not for compound operations such as i++.

Intuitively, the following code feels like it should output 10000, but that is not guaranteed, because inc++ is a compound operation (read, add, write).

public class Test {
    public volatile int inc = 0;

    public void increase() {
        inc++;
    }

    public static void main(String[] args) {
        final Test test = new Test();
        for (int i = 0; i < 10; i++) {
            new Thread() {
                public void run() {
                    for (int j = 0; j < 1000; j++)
                        test.increase();
                }
            }.start();
        }
        // Ensure that all the threads above have finished executing
        while (Thread.activeCount() > 1)
            Thread.yield();
        System.out.println(test.inc);
    }
}

Suppose thread A reads inc with a value of 10 and is then suspended; since it has not yet modified the variable, the volatile write rule is not triggered. Thread B also reads inc as 10 from main memory, increments it to 11, and immediately writes it back to main memory. Now thread A resumes; since its working memory already holds 10, it increments to 11 and writes back, so 11 is written to main memory again. Thus even though the two threads called increase() twice, the result is only a single increment.

You might ask: doesn’t volatile invalidate cache lines? If thread B writes 11 back to main memory, wouldn’t thread A’s cached copy be invalidated, forcing thread A to re-read from main memory? The catch is that the invalid cache line is only noticed on a read: thread A had already completed its read from main memory before thread B wrote, so thread A simply goes on to increment its stale value.

In this case, correctness can only be guaranteed with synchronized, Lock, or the atomic classes in the java.util.concurrent package.
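For instance, a sketch of the atomic-class fix (class name AtomicDemo is illustrative): replacing the volatile int with an AtomicInteger makes the increment a single atomic read-modify-write, so the program below always prints 10000.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    static final AtomicInteger inc = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[10];
        for (int i = 0; i < 10; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++)
                    inc.incrementAndGet();  // atomic read-modify-write
            });
            threads[i].start();
        }
        for (Thread t : threads) t.join();  // wait for all threads to finish
        System.out.println(inc.get());      // always prints 10000
    }
}
```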

Interviewer: You mentioned synchronized. Can you tell me the difference between volatile and synchronized?

  • volatile essentially tells the JVM that the value of the variable in the register (working memory) is indeterminate and must be read from main memory; synchronized locks the variable so that only the current thread can access it while other threads are blocked.
  • volatile can only be used at the variable level; synchronized can be used at the variable, method, and class levels.
  • volatile only guarantees the visibility of changes to a variable, not atomicity; synchronized guarantees both the visibility and the atomicity of changes.
  • volatile does not cause threads to block; synchronized can cause threads to block.
  • variables marked volatile are not optimized by the compiler; variables marked synchronized can be optimized by the compiler.
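To illustrate the synchronized side of the comparison, a minimal counter sketch (the class name SyncCounter is made up for this example): because increment() is synchronized, both visibility and atomicity are guaranteed, and ten threads of one thousand increments always yield 10000, unlike the volatile version above.

```java
public class SyncCounter {
    private int count = 0;

    // synchronized guarantees visibility AND atomicity of the increment
    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }

    public static void main(String[] args) throws InterruptedException {
        SyncCounter c = new SyncCounter();
        Thread[] ts = new Thread[10];
        for (int i = 0; i < 10; i++) {
            ts[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) c.increment();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.get());  // always prints 10000
    }
}
```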

Interviewer: Can you give other examples of how volatile is used?

A typical example is the double-checked locking (DCL) implementation of the singleton pattern:

class Singleton {
    private volatile static Singleton instance = null;

    private Singleton() {}

    public static Singleton getInstance() {
        if (instance == null) {                  // 1
            synchronized (Singleton.class) {
                if (instance == null)
                    instance = new Singleton();  // 2
            }
        }
        return instance;
    }
}

This is a lazy singleton pattern, in which the object is created only when it is first used; instance is declared volatile to prevent reordering of the initialization instructions.

Why is volatile needed when we already have synchronized? Because synchronized guarantees atomicity, but it does not prevent instruction reordering. Creating an object (operation 2 in the code) is not a single step: memory is allocated, the constructor runs, and the reference is assigned. With reordering, the reference may be assigned before the constructor has finished, so thread A’s instance may already be non-null while the object is not yet fully constructed.

Thread B then sees a non-null instance at check 1, mistakenly thinks it has been instantiated, and uses an object that has not been fully initialized. Keep in mind that although each thread’s own operations appear ordered to itself, the program may be running on a multi-core CPU.

Summary

Of course, the volatile keyword can be extended further: from the JMM to the differences between the JVM runtime memory structure and the Java memory model, from atomicity to how to inspect class bytecode, from concurrency to the various ways threads can cooperate.

In fact, this applies not only to interviews; you can take the same approach when studying: ask a few more whys. Take one point and expand it into a web of knowledge through those whys.

From the above introduction, I believe you now have a deeper understanding of volatile. I hope it helps!

