GitHub 4.1K Star: a Java engineer’s road to mastery. Sure you don’t want to check it out?

Recently I have interviewed quite a few senior Java developers, and I often ask about the Java memory model. Many candidates start their answer like this:

The Java memory model consists of several parts: the heap, the native method stack, the virtual machine stack, the method area…

Each time, I don’t want to interrupt them, even though I know it is yet another candidate who has misunderstood my question.

The Java memory model I actually want to ask about is related to concurrent programming. What the candidate is describing is the JVM memory structure, which is something completely different.

Most of the time, if I don’t interrupt, some of them slowly drift into talking about GC. In that case, I just have to bite the bullet and keep asking about the JVM.

Even though what I really wanted to find out was how much the candidate knows about Java concurrency.

Often, I follow up with a few more questions about the “Java memory model” and kindly point out that this is not the memory model I was asking about…

Sometimes I go a step further and hint: it has to do with concurrent programming, with main memory and each thread’s working memory…

So, this article will briefly explain how to answer this interview question about the Java memory model.

Why the misunderstanding

First, let’s analyze why so many people, perhaps even most people, get this question wrong.

I think there are several main reasons:

1. The term “Java memory model” sounds too much like it is about how memory is laid out, and nothing like something related to concurrent programming.

2. A lot of the information on the Internet is simply wrong. Search the web for “Java memory model” and you will find plenty of articles describing the JVM memory structure under that title.

By the way, I tried a Google search for “Java memory model” and looked at the first page of results.

Of the top five articles on the first page, one is actually about the JVM memory structure.

PS: Fortunately, two of the top five articles on the first page were written by me, and at least those two are not misleading!

3. There is also a rarer case: some interviewers themselves believe the memory model is all about the heap, the stack, and the method area. As a result, candidates sometimes don’t know which answer is expected.

So what exactly is the Java memory model? How do you answer this interview question?

What is the memory model

I went over the ins and outs of the Java memory model in detail in “Send this article to the next person who asks what it is.” Here’s a recap.

The Java Memory Model is usually abbreviated as JMM. Unlike the JVM memory structure, the JMM is not something “real”; it is only an abstract set of rules.

The Java Memory Model is described in JSR-133: Java Memory Model and Thread Specification. The JMM is all about multithreading: it defines a set of rules that determine when a write to a shared variable by one thread becomes visible to another thread.

The JMM is a mechanism and specification that shields the memory-access differences of various hardware and operating systems, ensuring that a Java program sees the same memory behavior on every platform. Its goal is to solve the atomicity, visibility (cache consistency), and ordering problems that arise when multiple threads communicate through shared memory.

So what exactly does the memory model specify, and what do atomicity, visibility, and ordering mean?

Atomicity

A thread is the basic unit of CPU scheduling. The CPU schedules threads in time slices according to its scheduling algorithm, and this is where atomicity problems come from in multithreaded scenarios: when a thread is in the middle of a read-modify-write operation and its time slice runs out, it must give up the CPU and wait to be rescheduled. The read-modify-write is therefore not a single atomic operation, and other threads can interleave with it.
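Here is a minimal sketch of the problem (the class and counter below are hypothetical, purely for illustration): count++ is really a read, an add, and a write back, and when two threads interleave those steps, updates get lost.

```java
public class AtomicityDemo {
    private static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[2];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    count++; // read-modify-write: three steps, not one atomic operation
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // Expected 200000, but the printed value is usually smaller, because
        // increments from the two threads interleave and overwrite each other.
        System.out.println("count = " + count);
    }
}
```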

Cache consistency

In a multi-core CPU running multiple threads, each core has at least one L1 cache. If several threads of a process access the same shared memory but run on different cores, each core keeps its own cached copy of that shared memory. Since the cores run in parallel, multiple threads may write to their own caches at the same time, and the cached copies of the same data may then differ.

Adding caches between the CPU and main memory therefore introduces cache-consistency problems in multithreaded scenarios: on a multi-core CPU, each core’s cache may hold different contents for the same data.
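From Java code this shows up as stale reads. A minimal sketch, with hypothetical class and field names: without any synchronization, the write made by the main thread may stay in its own cache (and the JIT may keep the worker reading a cached value), so the worker may spin forever, depending on the JVM and hardware.

```java
public class StaleReadDemo {
    // Plain (non-volatile) field: the worker may keep using a stale cached value.
    private static boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // Busy loop. Without synchronization, the JIT may also hoist the
                // read of `stop` out of the loop, so the worker can spin forever.
            }
            System.out.println("worker observed stop = true");
        });
        worker.start();

        Thread.sleep(1000);
        stop = true; // written by the main thread, possibly never seen by the worker
        System.out.println("main wrote stop = true");
    }
}
```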

Ordering

Besides time slicing, processor optimizations and instruction reordering mean that the CPU may execute code out of order: for example, load -> add -> save may be optimized into load -> save -> add. This is the ordering problem.
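A commonly used sketch to demonstrate this (variable names are arbitrary): if neither the writes nor the reads were reordered, the result (x, y) = (0, 0) would be impossible, yet it can show up on real hardware. It may take many iterations, or a different JVM or CPU, to reproduce, since this is a data race whose outcome is not guaranteed either way.

```java
public class ReorderingDemo {
    private static int a, b, x, y;

    public static void main(String[] args) throws InterruptedException {
        for (long i = 0; ; i++) {
            a = 0; b = 0; x = 0; y = 0;

            Thread t1 = new Thread(() -> { a = 1; x = b; });
            Thread t2 = new Thread(() -> { b = 1; y = a; });
            t1.start();
            t2.start();
            t1.join();
            t2.join();

            // Without reordering, at least one of the two reads must observe the
            // other thread's write, so (x, y) == (0, 0) implies reordering happened.
            if (x == 0 && y == 0) {
                System.out.println("reordering observed at iteration " + i);
                break;
            }
        }
    }
}
```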

The consistency problems caused by multi-core CPUs and multi-level caches, the atomicity problems caused by the CPU time-slice mechanism, and the ordering problems caused by processor optimization and instruction reordering are all consequences of hardware that keeps getting more sophisticated. So, is there a good mechanism to solve these problems?

The simplest and most direct approach would be to abandon processor optimization techniques and CPU caches altogether and let the CPU interact with main memory directly. That would indeed avoid these concurrency problems, but it would be throwing the baby out with the bathwater.

Therefore, to guarantee atomicity, visibility, and ordering in concurrent programming, an important concept was introduced: the memory model.

To guarantee the correctness (visibility, ordering, atomicity) of shared memory, the memory model defines rules for how a multithreaded program may read and write memory in a shared-memory system. These rules constrain the processor, the caches, the compiler, and concurrent execution: they address the memory-access problems caused by multi-level CPU caches, processor optimization, and instruction reordering, and they guarantee consistency, atomicity, and ordering in concurrent scenarios.

Different hardware and operating systems solve these problems in different ways. To shield those underlying differences, the Java language defines its own memory-model specification: the Java Memory Model.

The Java memory model stipulates that all variables are stored in main memory and that each thread has its own working memory. A thread’s working memory holds copies of the main-memory variables the thread uses, and all of the thread’s operations on variables must be carried out in its working memory rather than by reading and writing main memory directly. Threads cannot access each other’s working memory directly; passing a variable’s value from one thread to another requires synchronizing it through main memory.

The JMM is used to synchronize data between working memory and main memory. It specifies how and when to synchronize data.

Implementation of the Java memory model

Those of you familiar with Java multithreading know that Java provides a series of keywords and facilities for concurrency, such as volatile, synchronized, final, and the java.util.concurrent package. These are the primitives the Java memory model offers to programmers by encapsulating the underlying implementation.

When writing multithreaded code, we can simply use a keyword like synchronized to control concurrency and never need to worry about underlying compiler optimizations, cache consistency, and so on. So in addition to defining a set of rules, the Java memory model provides a set of primitives that wrap the underlying implementation for developers to use directly.

This article does not cover the use of each keyword individually; there is plenty of material about them online, and readers can study them on their own. The important point here is the one made earlier: concurrent programming must solve the problems of atomicity, visibility, and ordering. Let’s look at how Java guarantees each of them.

Atomicity

At the bytecode level, Java provides two instructions, monitorenter and monitorexit, to guarantee atomicity. The synchronized keyword is the language-level construct that corresponds to these two bytecode instructions.

Therefore, synchronized can be used in Java to ensure that operations within methods and code blocks are atomic.
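A minimal sketch (the class below is hypothetical): wrapping the same kind of count++ as before in a synchronized block makes the read-modify-write atomic, so the final result is deterministic.

```java
public class SynchronizedCounter {
    private final Object lock = new Object();
    private int count = 0;

    public void increment() {
        synchronized (lock) { // compiled to monitorenter
            count++;          // only one thread at a time can execute this
        }                     // monitorexit (also emitted for the exceptional exit path)
    }

    public int get() {
        synchronized (lock) {
            return count;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        SynchronizedCounter counter = new SynchronizedCounter();
        Thread[] threads = new Thread[2];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 100_000; j++) {
                    counter.increment();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        System.out.println("count = " + counter.get()); // reliably 200000
    }
}
```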

Visibility

For visibility, the Java memory model relies on main memory as the transfer medium: after a variable is modified, the new value is synchronized back to main memory, and before a variable is read, its value is refreshed from main memory.

The volatile keyword guarantees exactly this: a modified volatile variable is synchronized to main memory immediately after the write, and a volatile variable is re-read from main memory on every use. volatile can therefore be used to guarantee the visibility of variables in multithreaded code.
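Revisiting the earlier stop-flag sketch (names again hypothetical), adding volatile is enough to guarantee that the worker eventually sees the write and exits:

```java
public class VolatileVisibilityDemo {
    // volatile: every write is flushed to main memory, and every read goes
    // back to main memory instead of using a stale cached copy.
    private static volatile boolean stop = false;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (!stop) {
                // spins until the write from the main thread becomes visible
            }
            System.out.println("worker observed stop = true and exited");
        });
        worker.start();

        Thread.sleep(1000);
        stop = true; // guaranteed to become visible to the worker
    }
}
```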

In addition to volatile, the Java keywords synchronized and final can also provide visibility; they just implement it differently. We won’t expand on that here.

Ordering

In Java, synchronized and volatile can both be used to guarantee ordering between threads, but they achieve it differently:

The volatile keyword forbids instruction reordering around it, while the synchronized keyword guarantees that only one thread at a time executes the guarded code.
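A commonly cited example where the two keywords work together is double-checked locking for a lazily initialized singleton. Without volatile, the write instance = new Singleton() could be reordered so that the reference is published before the constructor finishes, letting another thread observe a half-constructed object. (A sketch, not the only correct way to write a lazy singleton.)

```java
public class Singleton {
    // volatile forbids reordering the publication of the reference
    // with the construction of the object.
    private static volatile Singleton instance;

    private Singleton() {
    }

    public static Singleton getInstance() {
        if (instance == null) {                  // first check, without locking
            synchronized (Singleton.class) {     // only one thread initializes at a time
                if (instance == null) {          // second check, under the lock
                    instance = new Singleton();
                }
            }
        }
        return instance;
    }
}
```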

That is a brief overview of the keywords that address atomicity, visibility, and ordering in Java concurrent programming. As you may have noticed, synchronized seems to be all-purpose: it satisfies all three properties at once, which is why so many people overuse it.

synchronized, however, comes at a cost in performance, and although the compiler and JVM apply many lock optimizations, overusing it is still not recommended.

How to answer in an interview

The above covers some of the basics of the Java memory model, and only the basics, because each topic could be explored much further: How does volatile implement visibility? How does synchronized achieve ordering?

So, when the interviewer asks, “Can you briefly describe the memory model as you understand it?”

First, confirm with the interviewer: “By memory model, do you mean the JMM, the one related to concurrent programming?”

Once you get a yes, start your introduction (if not, well, you may indeed need to talk about the heap, the stack, the method area, and so on… sorry…):

The Java memory model is a mechanism and specification that ensures Java programs access memory consistently across platforms. Its goal is to solve the atomicity, visibility (cache consistency), and ordering problems that arise when multiple threads communicate through shared memory.

In addition, the Java memory model provides a set of primitives that wrap the underlying implementation for developers to use directly, such as the synchronized and volatile keywords and the java.util.concurrent package.

That is enough for a first answer. The interviewer will probably follow up, and you can then go deeper depending on what they ask.

So the next time someone asks you about the Java memory model, don’t blurt out “heap, stack, method area” or even GC. That’s unprofessional!