Implementation of threads

Threads are the basic unit of CPU scheduling. The Thread class differs noticeably from most Java APIs in that all of its key methods are declared native, meaning that they are not, or cannot be, implemented by platform-independent means.
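As a quick way to check this on your own JDK, the reflection sketch below prints every method declared by java.lang.Thread whose modifiers include native. Exactly which methods appear (start0, currentThread, and so on) depends on the JDK version and vendor.

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;

// Lists which declared methods of java.lang.Thread are native on the
// running JDK; the exact set varies between JDK versions and vendors.
public class NativeThreadMethods {
    public static void main(String[] args) {
        for (Method m : Thread.class.getDeclaredMethods()) {
            if (Modifier.isNative(m.getModifiers())) {
                System.out.println(m);
            }
        }
    }
}
```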

Kernel-Level Thread (KLT)

A kernel-level thread is a thread directly supported by the operating system kernel. The kernel performs thread switching through its scheduler and is responsible for mapping each thread's work onto the available processors.

Each kernel thread can be regarded as a clone of the kernel, which allows the operating system to handle more than one task at the same time. A kernel that supports multithreading is called a multi-threaded kernel.

Programs usually do not use KLTs directly; instead they use a higher-level interface built on top of them, the Light Weight Process (LWP), which is what is generally referred to as a thread. Since each LWP is backed by a KLT, LWPs can only be created where KLTs are supported. This 1:1 relationship between LWPs and KLTs is called the one-to-one threading model.

  • Limitations

    Since the implementation is based on KLTs, thread operations such as creation, destruction, and synchronization require system calls. System calls are relatively expensive, because they switch back and forth between user mode and kernel mode. In addition, each LWP needs a KLT behind it, so every LWP consumes some kernel resources (such as the KLT's stack space), which limits the number of LWPs a system can support; the sketch below gives a rough feel for the creation cost.
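The rough timing sketch below compares starting and joining a fresh thread per task against reusing a small pool of already-created threads. It is not a rigorous benchmark, and the absolute numbers vary widely by OS and JVM; the point is only that the per-thread version pays the creation and destruction cost on every task.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Rough, unscientific timing sketch: a fresh kernel-backed thread per task
// versus reusing threads from a pool. Numbers depend heavily on the OS and JVM.
public class ThreadCreationCost {
    static final int TASKS = 10_000;

    public static void main(String[] args) throws Exception {
        Runnable task = () -> { };            // empty task: we only measure overhead

        long t0 = System.nanoTime();
        for (int i = 0; i < TASKS; i++) {
            Thread t = new Thread(task);
            t.start();
            t.join();                         // create + schedule + destroy each time
        }
        long perThread = System.nanoTime() - t0;

        ExecutorService pool = Executors.newFixedThreadPool(4);
        long t1 = System.nanoTime();
        for (int i = 0; i < TASKS; i++) {
            pool.submit(task);
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        long pooled = System.nanoTime() - t1;

        System.out.printf("new Thread per task: %d ms%n", perThread / 1_000_000);
        System.out.printf("fixed thread pool : %d ms%n", pooled / 1_000_000);
    }
}
```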

User threads

Because user threads receive no kernel support, all the details of their creation, switching, and scheduling have to be handled by the user program itself. This makes the implementation extremely difficult, and languages such as Java and Ruby, which once used user threads, have abandoned them.

Hybrid implementation: user threads plus lightweight processes

In the hybrid implementation, user threads are still built entirely in user space, so creating, switching, and destroying them remains cheap, and large-scale user-thread concurrency is still possible. The lightweight processes supported by the operating system act as a bridge between user threads and kernel threads: the kernel provides thread scheduling and the mapping onto processors, and system calls made by user threads go through the lightweight processes, which greatly reduces the risk of the entire process being completely blocked.

In this hybrid model, the ratio of user threads to lightweight processes is variable, i.e., N:M. Many UNIX-family operating systems, such as Solaris and HP-UX, provide implementations of the N:M threading model.

Java thread implementation

Before JDK 1.2, Java threads were implemented with user threads known as "green threads"; from JDK 1.2 onward they have been implemented on top of the operating system's native threading model. This means the way virtual machine threads are mapped is largely determined by the platform and cannot be made consistent across platforms, and the Java Virtual Machine Specification does not mandate which threading model Java threads must use.

The threading model only affects the concurrency scale and the cost of thread operations; these differences are transparent to the code and execution of a Java program.

For the Sun JDK, both the Windows and Linux versions use the one-to-one threading model, in which one Java thread maps to one lightweight process, because that is the threading model Windows and Linux themselves provide. On Solaris, because of the operating system's threading characteristics, both one-to-one (via Bound Threads or the Alternate Libthread) and many-to-many (via LWP/Thread Based Synchronization) can be supported at the same time.

The Solaris JDK also provides two platform-specific virtual machine parameters:

-XX:+UseLWPSynchronization (the default) and -XX:+UseBoundThreads, which specify which threading model the virtual machine uses.
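On a Solaris JDK, selecting the bound-threads (one-to-one) model might look like the invocation below. The class name is only a placeholder, and these flags are Solaris-specific, so other HotSpot builds will reject them as unrecognized options.

```
java -XX:+UseBoundThreads YourMainClass
```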

Java thread scheduling

  • Thread scheduling

    There are two main ways in which the system assigns the right to use the processor to threads:

  • Cooperative thread scheduling

  • Preemptive thread scheduling

In a multithreaded system with cooperative scheduling, the execution time of a thread is controlled by the thread itself. After a thread has finished its own work, it must actively notify the system to switch to another thread; a minimal Java sketch of this hand-off style follows the list below.

Cooperative multithreading

  • The biggest advantage

    The implementation is simple, and because a thread does not switch until it has finished its own work, the switch points are known to the thread itself, so there are no thread synchronization problems.

  • The disadvantages are also obvious

    Thread execution time is out of the system's control: if a badly written thread never tells the system to switch, the program can stay blocked on that thread indefinitely.
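Java itself does not implement cooperative scheduling, but Thread.yield() gives a flavour of the hand-off idea: in the minimal sketch below, each worker hints to the scheduler that it is willing to give up the processor after every unit of work. The hint may be ignored, and the interleaving you see is not guaranteed.

```java
// Cooperative-style hand-off sketch: each thread "offers" to give up the CPU
// after finishing a unit of work. Thread.yield() is only a hint to the scheduler.
public class CooperativeStyle {
    public static void main(String[] args) {
        Runnable worker = () -> {
            for (int i = 0; i < 5; i++) {
                System.out.println(Thread.currentThread().getName() + " finished unit " + i);
                Thread.yield();   // "my work for now is done, someone else may run"
            }
        };
        new Thread(worker, "worker-1").start();
        new Thread(worker, "worker-2").start();
    }
}
```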

In a multithreaded system that uses preemptive scheduling, each thread's execution time is allocated by the system, and thread switching is not decided by the thread itself, so thread execution time is under the system's control. The scheduling used by Java is preemptive. Although Java thread scheduling is performed automatically, we can still "suggest" that the system allocate more execution time to certain threads by setting thread priorities. The Java language defines 10 thread priorities (Thread.MIN_PRIORITY through Thread.MAX_PRIORITY). When two threads are in the Ready state at the same time, the thread with the higher priority is more likely to be selected by the system for execution.
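A minimal illustration of this "suggestion" character: the sketch below runs two busy counters at Thread.MIN_PRIORITY and Thread.MAX_PRIORITY. On a busy single core the higher-priority counter usually advances further, but on a machine with spare cores, or on platforms that collapse several Java priorities into one native level, the two counts may end up almost identical.

```java
// Thread priority is only a hint to the scheduler; nothing about the
// resulting interleaving or the final counts is guaranteed.
public class PrioritySuggestion {
    static class Counter implements Runnable {
        volatile long count;
        public void run() {
            while (!Thread.currentThread().isInterrupted()) {
                count++;
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter low = new Counter();
        Counter high = new Counter();
        Thread lowT = new Thread(low, "low");
        Thread highT = new Thread(high, "high");
        lowT.setPriority(Thread.MIN_PRIORITY);    // 1
        highT.setPriority(Thread.MAX_PRIORITY);   // 10
        lowT.start();
        highT.start();

        Thread.sleep(2000);                       // let both counters run for a while
        lowT.interrupt();
        highT.interrupt();
        System.out.println("low  priority count: " + low.count);
        System.out.println("high priority count: " + high.count);
    }
}
```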

Java threads are mapped to the operating system's native threads, so thread scheduling ultimately depends on the OS. Although many operating systems provide the concept of thread priority, it does not necessarily correspond one-to-one to Java thread priorities. Solaris, for example, has 2147483648 (2^31) priority levels, while Windows has only seven. On systems with more priority levels than Java, there is room to keep each Java priority distinct, but on systems with fewer levels, several Java priorities have to be mapped to the same native priority. So, besides the fact that different Java priorities can effectively become the same on some platforms, there is another reason not to rely too heavily on priorities: a thread's priority can be changed by the system itself.

For example, Windows has a feature called Priority Boosting (which can, of course, be turned off): when the system detects a thread that is working particularly "hard," it may grant it execution time beyond what its priority would suggest. Therefore, we cannot determine with complete accuracy in a program, based on priority alone, which of a set of threads in the Ready state will execute first.