Thread scheduling generally refers to the process by which the system grants threads access to the processor. This process produces context switches: using time-slice rotation, the operating system assigns each task a slice of CPU time, and when that slice expires it saves the state of the current task, loads the state of the next task, and runs it. Scheduling is thus a repeated cycle of saving and restoring state.

There are two common thread scheduling modes: preemptive scheduling and cooperative scheduling.

Preemptive scheduling means that each thread's execution time, and the switches between threads, are controlled by the system rather than by the threads themselves. Depending on the system's scheduling mechanism, every thread may get an equal time slice, some threads may get longer slices, and some threads may get no time slice at all. Under this mechanism, the blocking of one thread does not cause the whole process to block.
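A minimal sketch of this last point, assuming ordinary Java threads (which the OS schedules preemptively): one thread blocks in `Thread.sleep` while another keeps making progress, because the scheduler simply switches the CPU to whichever threads are runnable.

```java
// Sketch: under preemptive scheduling, a blocked thread does not stop
// the others. "sleeper" blocks for 200 ms while "worker" keeps making
// progress; the OS keeps giving time slices to runnable threads.
public class PreemptiveDemo {
    static volatile long progress = 0;   // written only by "worker"

    public static void main(String[] args) throws InterruptedException {
        Thread sleeper = new Thread(() -> {
            try {
                Thread.sleep(200);       // blocks, giving up the CPU
            } catch (InterruptedException ignored) { }
        }, "sleeper");

        Thread worker = new Thread(() -> {
            long end = System.currentTimeMillis() + 200;
            while (System.currentTimeMillis() < end) {
                progress++;              // keeps running while sleeper blocks
            }
        }, "worker");

        sleeper.start();
        worker.start();
        sleeper.join();
        worker.join();

        System.out.println("worker made progress: " + (progress > 0));
    }
}
```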

Cooperative scheduling means that when a thread finishes its work, it actively notifies the system to switch to another thread; execution time is controlled by the thread itself. This mode is like a relay race: one runner finishes a leg and passes the baton to the next, who continues running. Because each thread controls its own execution time, switches are predictable and there are no multi-thread synchronization problems, but it has an Achilles' heel: if a badly written thread blocks partway through and never yields, the whole system may hang.
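The relay-race idea can be sketched with an explicit handoff. This is only a model of cooperative handoff built on `SynchronousQueue` (plain Java threads are still preemptively scheduled underneath): each "runner" works only while it holds the baton, then explicitly passes control on, and if one runner never hands off, the next never runs.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.SynchronousQueue;

// Sketch of cooperative handoff: runner 2 cannot proceed until
// runner 1 explicitly passes the baton.
public class RelayDemo {
    static final List<String> order = new CopyOnWriteArrayList<>();

    public static void main(String[] args) throws InterruptedException {
        SynchronousQueue<String> baton = new SynchronousQueue<>();

        Thread second = new Thread(() -> {
            try {
                baton.take();            // blocks until runner 1 hands off
                order.add("runner2");
            } catch (InterruptedException ignored) { }
        });
        second.start();

        order.add("runner1");            // runner 1 finishes its leg first
        baton.put("baton");              // explicit handoff to runner 2
        second.join();

        System.out.println(order);       // runner1 always before runner2
    }
}
```

If `baton.put("baton")` were never called, `second` would block forever in `take()` — the cooperative analogue of one thread stalling the whole relay.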

Consider three threads that need to run. Under preemptive scheduling, the processor runs a time slice on thread one, then forcibly switches to thread two, then to thread three, then back to thread one, and so on until all three threads have finished. Cooperative scheduling works differently: thread one runs to completion and then notifies thread two to execute; thread two in turn notifies thread three, and so on until thread three is finished.

Which scheduling mode does Java use? This depends on the JVM implementation. The JVM specification says that every thread has a priority and that higher-priority threads take precedence, but high priority does not guarantee more execution time: a high-priority thread may receive more CPU time, and a low-priority thread may receive less, yet the specification does not strictly define the scheduling policy. In general, Java uses preemptive scheduling, reflected in the JVM giving the CPU to the highest-priority thread in the runnable pool; if several runnable threads share the highest priority, one is chosen at random. Note, however, that at any absolute point in time only one thread is actually running (per CPU), and it runs until it enters a non-runnable state or a thread of higher priority enters the runnable pool.
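Priorities are set through the standard `Thread.setPriority` API. A minimal sketch: Java priorities range from `Thread.MIN_PRIORITY` (1) to `Thread.MAX_PRIORITY` (10), with `NORM_PRIORITY` (5) as the default, and — as the paragraph above notes — they are only a hint to the scheduler, not a guarantee of execution time.

```java
// Sketch: setting thread priorities in Java. A priority is a hint to
// the scheduler; it may earn a thread more CPU time, but the JVM spec
// does not guarantee any particular scheduling outcome.
public class PriorityDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread low  = new Thread(() -> { }, "low");
        Thread high = new Thread(() -> { }, "high");

        low.setPriority(Thread.MIN_PRIORITY);    // 1
        high.setPriority(Thread.MAX_PRIORITY);   // 10
        // Unset threads inherit the creator's priority, normally
        // Thread.NORM_PRIORITY (5).

        low.start();
        high.start();
        low.join();
        high.join();

        System.out.println(low.getPriority() + " vs " + high.getPriority());
    }
}
```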
