preface

The holiday is over and it's back to work, a day full of frustration, but at least you have Huagie's article for company, otherwise I'd be lonely to death (shamelessly).

The multi-threading series has already covered a lot of knowledge points across three articles. If you haven't read them yet, take some time to catch up. This series goes from shallow to deep, step by step, so don't try to become a fatty in one bite, and be careful not to choke (manual dog head).

Dog Leftover: Huagie~, you're here so early.

Me: The article isn't finished yet, and a bunch of readers are waiting for me, so of course I came early.

Dog Leftover: Then why are you still squatting in the bathroom?

Me: …

With this chapter, the explanation of the Java memory model is coming to an end. Although this part is wrapping up, related knowledge points will keep coming up later on. The basics of multi-threading are now almost fully covered; the next few chapters will cover thread pools, CAS, ThreadLocal, atomic classes, AQS, concurrent collections, and so on. After that, let's see who can still beat you.

Pa pa pa… watch me get slapped in the face for that flag.

main text

Me: Dog, yesterday you mentioned main memory and working memory, but the introduction was a bit rough. Can you go into more detail today?

Dog Leftover: OK, whatever you say.

When it comes to the JMM's main memory and working memory, we first have to look at the CPU cache structure.

Core0 and Core1 represent two cores

L1: each core has two L1 caches, one Data Cache and one Instruction Cache.

Looking at the diagram above, the CPU has three levels of cache: L1, L2, and L3. You might wonder: does the CPU have nothing better to do? Why design multiple levels of cache instead of reading and writing data directly from main memory? But think about it: the CPU is extremely fast, and its processing speed is not remotely on the same level as physical memory. If the CPU had to go to main memory directly on every read and write, instruction execution would slow down dramatically. That is why the three-level cache exists.

a = a + 1

To take a simple example, when a thread executes this statement, it reads the value of variable a from main memory and copies it into the cache. The CPU then executes the increment instruction on a and writes the result to the cache. Finally, the modified value of a is flushed from the cache back to main memory.

When a thread needs data, it looks in L1 first. If it misses in L1, it looks in L2; if it misses in L2, it looks in L3; and if it misses in L3 as well, it finally reads the data from main memory.

Me: So what does this have to do with the JMM's memory structure?

As a high-level language, Java shields us from these low-level details, but the JMM defines a specification for how memory is read and written. In the JMM, main memory and working memory are not a real physical separation but an abstraction: L1, L2, and the registers are abstracted as working memory, which is private to each processor, while L3 and RAM are abstracted as main memory, which is shared among processors.

The JMM's constraints on main memory and working memory:

  • All variables are stored in main memory. Each thread has its own working memory, and the variables in working memory are copies of those in main memory.
  • A thread cannot operate on main memory directly; it can only modify its working memory and then synchronize the working memory back to main memory.
  • Threads cannot communicate with each other directly; they can only communicate through main memory as an intermediary.
  • Because threads communicate this way, and because that communication has latency, visibility problems arise.

Me: Is there anything we can do to solve the visibility problem?

We can solve the visibility problem through the happens-before principle.

Me: (flustered, since I've never heard of it) So… can you tell me exactly what this means?

For example, if action A happens-before action B, then action B must be able to see the results of action A. That is the happens-before principle.

If this still sounds abstract, let's look at a counterexample: two threads, Thread 1 and Thread 2. Thread 2 sometimes sees what Thread 1 has executed and sometimes doesn't; in that case there is no happens-before relationship. Readers of the Huagie series may remember the visibility case explained in the previous article on the Java Memory Model: the fourth outcome there, b = 3 and a = 1, occurs precisely because there is no happens-before relationship.
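To make the counterexample concrete, here is a minimal sketch (the class and field names are my own, not from the original article). Thread 1 writes a then b with plain, non-volatile fields, so there is no happens-before edge between the threads, and Thread 2 may legally observe the write to b without the write to a:

```java
public class NoHappensBefore {
    int a = 0;
    int b = 0;

    void writer() {            // Thread 1
        a = 1;
        b = 3;
    }

    void reader() {            // Thread 2
        int rb = b;            // may observe 3 ...
        int ra = a;            // ... while still observing 0 here
        System.out.println("b=" + rb + ", a=" + ra);
    }

    public static void main(String[] args) throws InterruptedException {
        NoHappensBefore d = new NoHappensBefore();
        Thread t1 = new Thread(d::writer);
        Thread t2 = new Thread(d::reader);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```

The anomalous interleaving is rare and hardware-dependent, so running this once proves nothing; the point is that without a happens-before edge the JMM permits the stale read.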

Me: So what are the applications of happens-before?

Here is a brief list; for now it is enough to get a general sense of the scope of happens-before. We will explain each item in detail later.

It is widely used. Take a look at the following categories; you have probably come across most of them:

  • The single-thread principle:

    Within a single thread, following program order, later operations can always see the results of earlier operations.

  • start():

    Main thread A starts thread B, and in thread B you can see what the main thread did before it started B.

  • join():

    Main thread A waits for child thread B to complete, and when child thread B completes, main thread A can see all the operations of thread B.

  • volatile
  • synchronized, Lock
  • Tools:

    • Thread-safe containers: for example, ConcurrentHashMap
    • CountDownLatch
    • Semaphore
    • Thread pools
    • Future
    • CyclicBarrier
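As an illustration of the start() and join() rules above, here is a small sketch (class and field names are my own): everything the main thread writes before t.start() is visible inside t, and everything t writes is visible to the main thread after t.join() returns, with no volatile or lock needed.

```java
public class StartJoinDemo {
    static int beforeStart;   // written before start(), read inside the thread
    static int insideThread;  // written inside the thread, read after join()

    public static void main(String[] args) throws InterruptedException {
        beforeStart = 42;                 // happens-before t.start()
        Thread t = new Thread(() -> {
            // guaranteed to see 42 here, thanks to the start() rule
            insideThread = beforeStart + 1;
        });
        t.start();
        t.join();                         // t's writes happen-before this return
        System.out.println(insideThread); // guaranteed to print 43
    }
}
```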

Due to space limitations, synchronized, thread pools, and the other knowledge points above will be explained one by one in future articles and updated gradually. Interested friends, stay tuned (Huagie, I've already run today's ad for you, please pay up my salary).

Me: (old face blushing) We can talk about wages later. First tell me about volatile; I still have bricks to move.

First, volatile is a synchronization mechanism. Once a shared variable (a member variable or static member variable) is declared volatile, it serves two purposes:

  • Visibility: when one thread changes the value of the variable, other threads immediately see the changed value.
  • It disallows instruction reordering.

Here's an example of visibility:

```java
static boolean flag = true;

public static void main(String[] args) throws InterruptedException {
    // Thread 1: spins while flag is true
    new Thread(new Runnable() {
        @Override
        public void run() {
            while (flag) {
                System.out.println("!!!!");
            }
        }
    }).start();

    Thread.sleep(10);

    // Thread 2: flips the flag to stop thread 1
    new Thread(new Runnable() {
        @Override
        public void run() {
            flag = false;
        }
    }).start();
}
```

This code is meant to stop a thread, but it is not a reliable way to do so, because there is a very small chance the thread will never stop. If thread 2 modifies flag in its own working memory but is scheduled away before writing the new value back to main memory, or if thread 1 keeps reading the stale copy of flag from its own working memory, then thread 1 never notices that thread 2 changed flag and keeps running.

This can be completely avoided by declaring the flag variable volatile, for several reasons:

  • Using the volatile keyword forces the modified value to be written to main memory immediately;
  • The use of volatile will invalidate the cache line of the flag variable in thread 1’s working memory when thread 2 modifies it (i.e., the L1 or L2 cache line in the CPU mentioned above).
  • Since thread 1’s working memory cache line for the flag variable is invalid, thread 1 will read the flag variable in main memory when it reads the value again.

Putting it together: thread 2 writes the stop value (it modifies flag in its working memory and writes the modified value back to main memory), which invalidates the cache line for flag in thread 1's working memory. When thread 1 next reads flag and finds its cache line invalid, it waits for the corresponding main-memory address to be updated and then reads the latest value from main memory.


  • Disallow instruction reordering

In the previous chapter, we mentioned that when the compiler and processor handle your code, the order in which it actually executes may differ from the order in which it is written. Put plainly, the compiler only guarantees that the execution result matches what you expect; which statement runs first and which runs later is up to it. That works well enough in a single thread, but once you introduce multiple threads, all sorts of weird things can happen.

Here's a simple example:

```java
// flag is a volatile variable
a = 2;        // statement 1 (the value here was garbled in the original; 2 is illustrative)
b = 0;        // statement 2
flag = true;  // statement 3
c = 4;        // statement 4
d = -1;       // statement 5
```

Since the flag variable is volatile, the reorder process does not place statement 3 before statements 1 and 2, nor does it place statement 3 after statements 4 and 5. Note, however, that the order of statements 1 and 2, and the order of statements 4 and 5 is not guaranteed.

Moreover, the volatile keyword guarantees that statements 1 and 2 must have completed when statement 3 is executed, and that the results of statements 1 and 2 are visible to statements 3, 4, and 5.

Me: So now I understand. Can volatile be used to solve the a++ problem?

Let’s look at the following code first.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class VolatileDemo implements Runnable {

    volatile int a;
    AtomicInteger realCount = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable r = new VolatileDemo();
        Thread thread1 = new Thread(r);
        Thread thread2 = new Thread(r);
        thread1.start();
        thread2.start();
        thread1.join();
        thread2.join();
        System.out.println(((VolatileDemo) r).a);
        System.out.println(((VolatileDemo) r).realCount.get());
    }

    @Override
    public void run() {
        for (int i = 0; i < 1000; i++) {
            a++;                          // 1: not atomic, updates can be lost
            realCount.incrementAndGet();  // atomic reference count
        }
    }
}
```

The results are as follows: a usually comes out less than 2000, while realCount is exactly 2000.

Why is this? I clearly added volatile! You're scamming me, refund!

Calm down… Let's take a closer look at a++. It is not an atomic operation; it involves several steps: reading the value of a, adding 1, and writing the result back to a.

It’s not surprising that volatile doesn’t guarantee atomicity.

Consider the following process:

  • Thread 1 reads the value of a and performs the +1 step (but has not yet performed the final write-back);
  • Thread 2 also reads the value of a and performs the +1 step;
  • Thread 1 and thread 2 then each complete the assignment and write the new value back to main memory;
  • As you can see, thread 2 used the value of a from before thread 1's modification, so after thread 2 finishes, a ends up incremented one time fewer than expected.
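The fix is the same trick the demo's realCount field already uses: an atomic class. A minimal sketch (the class name is my own), where the increment becomes a single atomic read-modify-write, so no updates are lost:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicFixDemo {
    static final AtomicInteger count = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 1000; i++) {
                count.incrementAndGet();   // atomic: read, +1, write as one step
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(count.get());   // always prints 2000
    }
}
```

Unlike volatile int a, the AtomicInteger version never loses an increment, because the compare-and-swap inside incrementAndGet() retries until the update applies to the latest value.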

Me: Great, this deserves a drumstick. Can you summarize volatile for everyone?

In summary, there are the following points:

  • volatile provides visibility: for a variable shared by multiple threads, once any thread modifies it, other threads immediately see the modified value;
  • volatile cannot replace synchronized; it provides neither atomicity nor mutual exclusion;
  • volatile works only on variables (fields) and prevents operations on them from being reordered;
  • volatile provides a happens-before guarantee: once a volatile variable is modified, the new value is visible to other threads.

conclusion

This chapter took another, deeper look at the JMM. Do you have a new understanding of it? We also introduced a new knowledge point, volatile, which is a fairly fundamental and very common part of multi-threading and well worth mastering. Another day of grinding, and the article ran a bit long, so respect to the friends who struggled through to the end.

In the next chapter, Huagie will continue with synchronized, which everyone is very familiar with. Is it different from what you know? I will see you in the next chapter. I hope you keep following along; for the big-company dream, let's keep grinding.

Keep your feet on the ground and keep moving forward.

That is all for this issue. If there are any mistakes, please point them out in the comments, thank you very much. I'm Giegie. Feel free to leave a comment if you have any questions. See you next time 🦮.

The articles are continuously updated. You can search WeChat for "Java development zero to one" to read them as soon as they come out. I will keep covering Java interviews and all kinds of knowledge points; interested friends are welcome to follow along and learn together 🐮.

If you think this article is useful to you, please like, comment, or share it, because that is what motivates me to produce more good articles. Thank you!