1. Foreword

We have already learned some basic concepts of concurrent programming. This article continues to summarize and review that foundational material.

2. Processes and threads

2.1 What is a process?

A process is a program in execution and the basic unit by which the system runs programs, so a process is dynamic in nature. When the system runs a program, that process goes through a life cycle from creation, through execution, to termination.

In Java, when we start the main method, we actually start a JVM process, and the main thread is one of the threads inside that process.
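We can verify this directly: when the JVM launches a program, the thread that executes main is named "main". A minimal sketch (the class name is arbitrary):

```java
public class MainThreadDemo {
    public static void main(String[] args) {
        // The JVM runs main on a thread named "main" inside the JVM process
        System.out.println(Thread.currentThread().getName()); // prints "main"
    }
}
```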

As shown in the figure below, we can clearly see the processes running on Windows by opening Task Manager:

2.2 What is a thread?

A thread is similar to a process, but a thread is a smaller unit of execution than a process. A process can spawn multiple threads during its execution. Unlike separate processes, the threads of one process share that process's heap and method area, while each thread has its own program counter, virtual machine stack, and native method stack. For this reason, the overhead of creating a thread or switching between threads is much smaller than for processes, which is also why a thread is called a lightweight process. Let's look at the thread/process relationship from the JVM perspective.

2.3 Diagram the relationship between threads and processes

Here's a simplified schematic:


Summary: Threads are the smaller units of execution into which a process is divided. The main difference between threads and processes is that processes are essentially independent of each other, whereas threads are not necessarily so: threads in the same process are very likely to affect one another. Thread execution costs little, but threads are not conducive to resource management and protection; processes are the opposite.

3. Java memory model

JMM stands for Java Memory Model. Concurrent programs are far more complex than serial ones, and one important reason is that data consistency and safety are seriously challenged under concurrency. How do you ensure that a thread sees the correct data? This sounds like a silly question: for a serial program it is trivial — if you read a variable whose value is 1, you get 1. But this simple thing becomes complicated in a parallel program. In fact, if threads are allowed to run in parallel without any control, you might read 2 even though the value was originally 1. Therefore, building on a deep understanding of the parallelism mechanism, we need to define a set of rules that let multiple threads cooperate correctly and in an orderly fashion. That is what the JMM was born for.

The key technical points of the JMM are built around the atomicity, visibility, and ordering of multithreaded operations. We need to understand these concepts first.

3.1 Atomicity

Atomicity means that an operation is indivisible: either all of it executes or none of it does. In Java, this means that certain operations on shared variables cannot be split up and must complete as a whole. For example, a++ on a shared variable a actually performs three steps:

1. Read the value of variable a; suppose a = 1

2. Add 1 to the value: 1 + 1 = 2

3. Assign the value 2 back to variable a, so a is now 2

If another thread tampers with the value of a between any of these three steps, the result will be wrong. Therefore, the three steps must be atomic: while a++ is in progress, no other thread may change the value of a. If another thread does change a during the process, the operation should fail under the principle of atomicity.

There are roughly two ways to implement atomic operations in Java: the locking mechanism and the lock-free CAS mechanism, both of which will be described in a later chapter.
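The CAS approach can be previewed with the JDK's AtomicInteger, whose incrementAndGet performs the read-add-write sequence above as one indivisible operation. A minimal sketch (the class name and thread/iteration counts are arbitrary):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicCounterDemo {
    static int run() throws InterruptedException {
        AtomicInteger counter = new AtomicInteger(0);
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 10_000; j++) {
                    counter.incrementAndGet(); // atomic read-modify-write via CAS
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join(); // wait for all increments to finish
        }
        return counter.get();
    }

    public static void main(String[] args) throws InterruptedException {
        // 4 threads x 10,000 increments: always 40000, because each
        // increment is atomic. With a plain int and counter++ instead,
        // lost updates would make the result nondeterministic.
        System.out.println(run());
    }
}
```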

3.2 Visibility

Visibility refers to whether changes made by one thread to a shared variable are visible to another thread.

Let’s start with the Java thread memory model:


Shared variable visibility is implemented as follows:

1. After modifying a variable in its working memory, thread A flushes the new value of the variable to main memory

2. Thread B then reads the updated value of the variable from main memory into its own working memory

Thread visibility can be controlled using volatile, synchronized, and explicit locks, as described in the following sections.
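The volatile option can be illustrated with a flag shared between two threads: the volatile keyword guarantees that the writer's update is flushed to main memory and that the reader re-reads it from main memory, exactly the two steps listed above. A minimal sketch (the class name, field name, and sleep duration are arbitrary):

```java
public class VisibilityDemo {
    // Without volatile, the reader thread might cache 'ready' in its working
    // memory and spin forever; volatile forces reads/writes through main memory.
    static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) {
                // busy-wait until the main thread's write becomes visible
            }
            System.out.println("ready seen");
        });
        reader.start();
        Thread.sleep(100); // give the reader time to start spinning
        ready = true;      // this write is guaranteed visible to the reader
        reader.join();     // terminates because the reader observes the write
    }
}
```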

3.3 Ordering

Ordering refers to the order in which a program's statements are executed. To optimize performance, compilers and processors reorder instructions, sometimes changing the order of the program's statements.

Take the following example:

	int a = 1;  // 1
	int b = 2;  // 2
	int c = a + b;  // 3

After compiler and processor optimization, it is possible to change to the following order:

	int b = 2;  // 1
	int a = 1;  // 2
	int c = a + b;  // 3

This example tweaks the order of code execution, but does not affect the final result of program execution.

Now let's look at another example (a singleton implementation using double-checked locking):

package com.MyMineBug.demoRun.test;

public class Singleton {
	static Singleton instance;

	static Singleton getInstance() {
		if (instance == null) {
			synchronized (Singleton.class) {
				if (instance == null)
					instance = new Singleton();
			}
		}
		return instance;
	}
}

Operations not optimized by the compiler:

Instruction 1: Allocate a section of memory H

Instruction 2: Initialize the Singleton object on memory H

Instruction 3: Assign the address of H to the instance variable

Instructions after compiler optimization:

Instruction 1: Allocate a block of memory W

Instruction 2: Assign the address of W to the instance variable

Instruction 3: Initialize the Singleton object on memory W

If multiple threads execute this code under the optimized ordering, one thread may see a non-null instance that has not yet been initialized, producing unexpected results.
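This reordering hazard is commonly closed by declaring the field volatile, which forbids the "assign address before initialization" ordering described above. A minimal sketch of that variant (the class name is arbitrary, and this is just an illustration of the reordering fix, not a full treatment of singletons):

```java
public class SafeSingleton {
    // volatile forbids reordering instruction 2 before instruction 3:
    // no thread can ever observe a non-null but uninitialized instance.
    private static volatile SafeSingleton instance;

    private SafeSingleton() {
    }

    public static SafeSingleton getInstance() {
        if (instance == null) {                       // first check, no lock
            synchronized (SafeSingleton.class) {
                if (instance == null) {               // second check, under lock
                    instance = new SafeSingleton();
                }
            }
        }
        return instance;
    }
}
```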

So what is the best way to create a singleton pattern? We’ll talk about that later.

If you like it, please give it a thumbs up!!

Share Technology And Love Life