Java memory model

The Java Memory Model (JMM) is a memory model defined by the JVM specification to mask the differences in memory access across different hardware and operating systems.

  • Main memory: all variables are stored in main memory (analogous to physical RAM).
  • Working memory: each thread has its own working memory (analogous to a processor cache). A thread's working memory holds copies of the main-memory variables that the thread uses. All of a thread's operations on a variable (read, assignment, etc.) must be performed in working memory; a thread cannot read or write variables in main memory directly. Threads also cannot access the variables in each other's working memory, so variable values are passed between threads through main memory.

Interoperation between memory

  • Lock: acts on a main-memory variable; marks the variable as exclusively held by one thread.
  • Unlock: acts on a main-memory variable; releases a locked variable so that it can be locked by another thread.
  • Read: acts on a main-memory variable; transfers the value of the variable from main memory into the thread's working memory for a subsequent load operation.
  • Load: acts on a working-memory variable; puts the value obtained from main memory by the read operation into the variable's copy in working memory.
  • Use: acts on a working-memory variable; passes the value of the variable in working memory to the execution engine. The virtual machine performs this operation whenever it reaches a bytecode instruction that needs the variable's value.
  • Assign: acts on a working-memory variable; assigns a value received from the execution engine to the variable in working memory. The virtual machine performs this operation whenever it reaches a bytecode instruction that assigns to the variable.
  • Store: acts on a working-memory variable; transfers the value of the variable in working memory to main memory for a subsequent write operation.
  • Write: acts on a main-memory variable; puts the value obtained from the store operation into the variable in main memory.

Read and load, and store and write, must be executed in pairs and in order, but not necessarily consecutively: other instructions may be inserted between a read and its load, or between a store and its write. Lock and unlock also come in pairs, and a variable can be locked by only one thread at a time.
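At the Java language level, the lock/unlock pairing is exposed through the synchronized keyword (compiled to monitorenter/monitorexit). A minimal sketch (class and field names here are illustrative):

```java
public class LockPairDemo {
    private static int counter = 0;
    private static final Object LOCK = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable inc = () -> {
            for (int i = 0; i < 10_000; i++) {
                synchronized (LOCK) {   // lock: only one thread may hold LOCK at a time
                    counter++;          // read/load/use/assign/store/write happen inside
                }                       // unlock: releases LOCK so the other thread can lock it
            }
        };
        Thread t1 = new Thread(inc), t2 = new Thread(inc);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(counter);    // 20000: the lock pairing prevents lost updates
    }
}
```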

  1. For ordinary variables: a variable is initialized in main memory when it is created. When a thread uses the variable, its value is read from main memory and loaded into working memory, where it is used and assigned. The new value is then stored from working memory and written back to the variable in main memory.

    Ordinary variable values are passed between threads through main memory. For example, if thread A modifies the value of an ordinary variable and writes it back to main memory, and thread B reads from main memory after thread A's write-back, the new value will be visible to thread B.

  2. For volatile variables: the operations are the same as for ordinary variables, but volatile guarantees that the variable is visible to all threads and prohibits instruction-reordering optimization.

    The special rules for volatile ensure that a new value is synchronized to main memory immediately after each write, and that the value is refreshed from main memory immediately before each use. Thus volatile guarantees the visibility of a variable across threads in a way that ordinary variables do not.
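The visibility guarantee can be demonstrated with a simple stop flag (a minimal sketch; the class and field names are illustrative):

```java
public class VolatileVisibility {
    // Without volatile, the worker thread could keep a stale copy of `running`
    // in its working memory and never observe the main thread's write.
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            while (running) {
                // busy-wait; each volatile read fetches a fresh value from main memory
            }
            System.out.println("worker observed running = false");
        });
        worker.start();
        Thread.sleep(100);   // let the worker enter its loop
        running = false;     // volatile write is flushed to main memory immediately
        worker.join(1000);   // the worker should terminate promptly
    }
}
```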

Happens-before principle

It is the primary basis for determining whether a data race exists and whether code is thread-safe. With this principle, a small package of rules is enough to decide whether two operations can conflict in a concurrent environment.

Happens-before is a partial-order relation between two operations defined in the Java Memory Model. If operation A happens-before operation B, then the effects of operation A are observable by operation B; "effects" include modifying the values of shared variables in memory, sending messages, calling methods, and so on.
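For example, the volatile variable rule combined with the program-order rule yields a concrete happens-before chain (a sketch; the field names `data` and `ready` are illustrative):

```java
public class HappensBeforeDemo {
    private static int data = 0;                   // plain, non-volatile field
    private static volatile boolean ready = false;

    public static void main(String[] args) throws InterruptedException {
        Thread reader = new Thread(() -> {
            while (!ready) { /* spin until the volatile write becomes visible */ }
            // The write to `data` happens-before the volatile write to `ready`,
            // which happens-before this volatile read, so `data` must be 42 here.
            System.out.println(data);
        });
        reader.start();
        data = 42;      // 1. ordinary write
        ready = true;   // 2. volatile write publishes it
        reader.join();  // prints 42
    }
}
```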

Thread

A thread is a more lightweight scheduling and execution unit than a process. Introducing threads separates a process's resource allocation from its execution scheduling. Threads share the process's resources (memory address space, file I/O, etc.) but can be scheduled independently (the thread is the basic unit of CPU scheduling).

There are three ways to implement threads:

  • Implemented using kernel threads
    • Kernel-Level Threads (KLT) are threads directly supported by the operating system kernel. The kernel performs thread switching, schedules the threads through its scheduler, and is responsible for mapping the threads' tasks onto the individual processors. Each kernel thread can be regarded as a clone of the kernel, allowing the operating system to handle more than one task at a time. A kernel that supports multithreading is called a multi-threaded kernel.
    • Programs typically do not use kernel threads directly but rather lightweight processes (threads in the common sense), which have a 1:1 correspondence with kernel threads; this is the one-to-one threading model. Each lightweight process consumes certain kernel resources (such as the kernel thread's stack space), so a system can support only a limited number of lightweight processes.
  • Implemented using user threads
    • Broadly speaking, any thread that is not a kernel thread can be considered a user thread. By this definition lightweight processes are also user threads, but their implementation is always built on top of the kernel, and many operations require system calls, which is inefficient.
    • In the narrow sense, a user thread is one implemented entirely in a user-space thread library, with the kernel unaware that the threads exist. The creation, synchronization, destruction, and scheduling of user threads are completed entirely in user mode without the kernel's help. If implemented properly, such threads never need to switch into kernel mode, so operations can be very fast and cheap, and a much larger number of threads can be supported; some high-performance databases implement their multithreading with user threads. The 1:N relationship between a process and its user threads is called the one-to-many threading model.
    • The advantage of user threads is that no kernel support is needed; the disadvantage is also that there is no kernel support: all threading operations must be handled by the user program itself, which makes the implementation complicated, so this model is rarely used today.
  • Use a mix of user threads and lightweight processes
    • Both user threads and lightweight processes exist. User threads are still built entirely in user space, so creating, switching, and destroying them remains cheap, and large-scale user-thread concurrency is supported. The lightweight processes supported by the operating system act as a bridge between the user threads and kernel threads: the thread-scheduling and processor-mapping functions provided by the kernel can still be used, and a user thread's system calls go through a lightweight process, greatly reducing the risk of the whole process being blocked. In this hybrid mode, the ratio of user threads to lightweight processes is variable, i.e. N:M; this is the many-to-many threading model.

Java thread implementation: user threads before JDK 1.2; since JDK 1.2, the operating system's native thread model (kernel threads) is used.
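Whatever the underlying model, Java code creates threads through java.lang.Thread; on mainstream JVMs since JDK 1.2, starting one maps to a native kernel thread (the thread name "worker-1" below is illustrative):

```java
public class NativeThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(
            () -> System.out.println("running in: " + Thread.currentThread().getName()),
            "worker-1");
        t.start();  // the JVM asks the OS to create and schedule a native thread here
        t.join();   // wait for the worker to finish
    }
}
```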

Java thread scheduling

Thread scheduling refers to the process by which the system allocates processor time to threads. There are two main scheduling methods:

  • Cooperative Threads-scheduling: The execution time of a thread is controlled by the thread itself. When the thread completes its own work, it actively notifies the system to switch to another thread.
    • Advantages: simple implementation, and because a thread only switches after finishing its own work, the switch point is known to the thread itself, so there are no thread-synchronization problems.
    • Disadvantages: thread execution time is out of the system's control; if a thread is badly written and never tells the system to switch, the program will block forever.
  • Preemptive thread scheduling: the system allocates execution time to each thread, and thread switching is not decided by the thread itself (in Java, Thread.yield() can give up execution time, but a thread has no way to obtain extra execution time). Because the system controls execution time, no single thread can cause the whole process to block. Java uses preemptive thread scheduling.
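Thread.yield() is only a hint to the preemptive scheduler; it may be ignored entirely. A small sketch (the thread names and loop count are arbitrary):

```java
public class YieldDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 3; i++) {
                System.out.println(Thread.currentThread().getName() + ": " + i);
                Thread.yield(); // suggest a switch; the scheduler is free to ignore it
            }
        };
        Thread a = new Thread(task, "A");
        Thread b = new Thread(task, "B");
        a.start(); b.start();
        a.join(); b.join();  // interleaving of A and B output is not guaranteed
    }
}
```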

Although Java thread scheduling is done automatically by the system, we can still "recommend" that the system allocate more execution time to some threads and less to others by setting thread priorities (when two threads are in the Ready state at the same time, the one with the higher priority is more likely to be chosen to run). This approach is not very reliable, however, because the operating system's thread priorities do not necessarily correspond one-to-one to Java's 10 priority levels.
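Priorities are set through Thread.setPriority, between Thread.MIN_PRIORITY (1) and Thread.MAX_PRIORITY (10). A sketch (the thread names are illustrative, and the priority effect is only a hint):

```java
public class PriorityDemo {
    public static void main(String[] args) {
        Runnable task = () -> System.out.println(
            Thread.currentThread().getName() + " priority=" + Thread.currentThread().getPriority());
        Thread low = new Thread(task, "low");
        low.setPriority(Thread.MIN_PRIORITY);   // 1
        Thread high = new Thread(task, "high");
        high.setPriority(Thread.MAX_PRIORITY);  // 10
        // Only a recommendation: the JVM maps Java's 10 levels onto the OS
        // scheduler's priorities, which may have fewer distinct levels.
        high.start();
        low.start();
    }
}
```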

Thread state

A thread is in exactly one of the following states at any point in time:

  • New: created but not yet started.
  • Runnable: executing, or waiting for the CPU to allocate execution time.
  • Waiting:
    • Indefinite waiting: the thread is not allocated CPU execution time and waits to be explicitly woken up by another thread.
    • Timed waiting: the thread is not allocated CPU execution time, but it does not need to be explicitly woken up by another thread; the system wakes it automatically after a certain amount of time.
  • Blocked: the thread is blocked.
    • The difference between blocking and waiting: a thread in the Blocked state is waiting to acquire an exclusive lock, an event that happens when another thread releases the lock, whereas a thread in a Waiting state is waiting for a period of time to elapse or for a wake-up action to occur. A thread enters the Blocked state while waiting to enter a synchronized region.
  • Terminated: the thread has finished execution.
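These states can be observed directly through Thread.getState(); a small sketch (the sleep durations are arbitrary):

```java
public class ThreadStateDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(500);  // puts the thread into TIMED_WAITING
            } catch (InterruptedException ignored) { }
        });
        System.out.println(t.getState()); // NEW: created but not started
        t.start();
        Thread.sleep(100);                // give t time to enter its sleep
        System.out.println(t.getState()); // TIMED_WAITING: timed waiting
        t.join();
        System.out.println(t.getState()); // TERMINATED: finished execution
    }
}
```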