








The thread states exposed to us at the Java virtual machine level sit at a different layer from the thread states of the underlying operating system. Specifically, a Java thread’s state comes from the enum `State` defined inside the `Thread` class:
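The values of that enum can be listed directly. A minimal sketch (the class name is my own):

```java
public class ThreadStates {
    public static void main(String[] args) {
        // Print the six states defined by java.lang.Thread.State,
        // in their declaration order
        for (Thread.State state : Thread.State.values()) {
            System.out.println(state);
        }
        // NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED
    }
}
```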

What is RUNNABLE?

Go straight to the description in its Javadoc:

A thread executing in the Java virtual machine is in this state.

The traditional process (thread) states are generally divided as follows:

Note: the process here refers to the early single-threaded process; the so-called process state is essentially a thread state.

So what’s the difference between RUNNABLE and the traditional Ready and Running states?

Differences from the traditional ready state

To be more specific, this is what Javadoc says:

A thread in the RUNNABLE state is executing in the Java virtual machine, but it may be waiting for other resources from the operating system, such as the processor.

Obviously, the Runnable state essentially includes the Ready state.

It may even cover part of the waiting state shown in the figure above, as we’ll see later.

The difference from the traditional running state

It is often thought that the Java thread states are missing a running state, but this confuses two different layers of state. The Java thread states simply have no running state; the RUNNABLE state covers it.
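We can see this from code (a minimal sketch of my own; the class and thread names are made up): whether this busy-spinning thread currently holds a core (running) or is queued for one (ready), `getState()` reports the same value:

```java
public class RunnableSpinDemo {
    public static void main(String[] args) throws InterruptedException {
        // A busy-spinning thread: at the OS level it alternates between
        // running and ready, but the JVM reports only RUNNABLE.
        Thread spinner = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // burn CPU doing nothing
            }
        }, "spinner");
        spinner.start();
        Thread.sleep(100); // give it time to start spinning
        System.out.println(spinner.getState()); // prints RUNNABLE
        spinner.interrupt();
        spinner.join();
    }
}
```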

Why, we might ask, is there no distinction between these two states in the JVM?

Modern time-sharing, multitasking operating systems almost universally use preemptive round-robin scheduling based on a so-called time quantum or time slice.

More complex ones might also include a priority mechanism.

This time slice is usually very small: a thread can run on the CPU (in the running state) for at most, say, 10-20 ms at a time, i.e., about 0.01 seconds. Once its slice is used up, it is switched off the CPU and placed at the end of the scheduling queue to await its next turn (i.e., it returns to the ready state).

Note: if the thread performs an I/O operation during its slice, it will release the time slice early and enter the waiting queue.

It can also be preempted before its time slice expires, in which case it likewise returns to the ready state.

This process is called a context switch. Of course, the CPU doesn’t simply kick the thread off: it must also save the thread’s execution state in memory so that execution can resume later.

Obviously, 10-20 ms is imperceptibly fast to a human.

Even excluding switching overhead (under 1 ms per switch), that works out to 50-100 switches per second. In practice the time slice is often not fully used up: threads are interrupted for various reasons, so the actual number of switches can be even higher.
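The arithmetic above can be spelled out (a trivial sketch, assuming the 10-20 ms slice bounds from the text):

```java
public class SwitchMath {
    public static void main(String[] args) {
        // Assumed time-slice bounds from the text: 10-20 ms
        int minSliceMs = 10, maxSliceMs = 20;
        // If every slice were fully used, switches per second fall in this range
        System.out.println(1000 / maxSliceMs + " to " + 1000 / minSliceMs
                + " switches per second");
        // prints: 50 to 100 switches per second
    }
}
```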

This is the rationale behind so-called “concurrency” on a single-core CPU: it is really an illusion of rapid switching, a bit like a fast juggler who keeps several balls in the air at once.

The time slice is also configurable: if you are not pursuing fast responses across many threads, you can make it larger to reduce switching overhead.

True simultaneous execution is only possible on multi-core CPUs. This is often called parallelism, but you may see “concurrent” and “parallel” used interchangeably, so I won’t dwell on the distinction here.

In general, Java’s thread states exist for monitoring purposes, and if threads switch this quickly, distinguishing ready from running isn’t very meaningful.

By the time the monitor shows running, the thread may already have been switched off the CPU, or even switched back on again; you would only see ready and running flash back and forth.

Of course, it is necessary to obtain accurate running times for accurate performance assessments.

Current mainstream JVM implementations map Java threads one-to-one onto operating system threads and delegate scheduling to the operating system, so the states we see at the virtual-machine level are essentially a mapping and repackaging of the underlying states. The JVM does no real scheduling of its own, so mapping the underlying ready and running states separately wouldn’t mean much; a unified RUNNABLE state is a good choice.

As we’ll see, changes in a Java thread’s state are usually related only to mechanisms the JVM itself explicitly introduces.

When I/O is blocked

We know that traditional I/O is blocking, because I/O operations are often orders of magnitude slower than CPU operations. If the CPU were simply left to wait, the time slice would likely be exhausted before the I/O completed. Either way, this would lead to very low CPU utilization.

So the solution is: as soon as a thread executes I/O code, it is immediately switched off the CPU, and another thread from the ready queue is scheduled to run.

The thread that issued the I/O is no longer running; we say it is blocked. It is not put back on the scheduling queue, because the I/O would most likely still be incomplete when it was scheduled again.

The thread will be placed in what is called a waiting queue and will be in the waiting state shown above:

Of course, “blocked” only means the CPU is not paying attention to the thread for a while; another component, such as the hard disk, is doing its best to serve it. The CPU and the disk work concurrently: if you view the thread as a job, the job is completed alternately by the CPU and the disk, and while it waits for the CPU it is “running” on the disk. But when we discuss thread states at the operating-system level, we usually do so centered on the CPU.

When I/O is complete, the CPU is notified by a mechanism called interrupt:

This is also known as “interrupt-driven,” and it is a mechanism that is widely used in modern operating systems.

In a sense, this is also an inversion of control (IoC) mechanism: the CPU doesn’t have to repeatedly poll the hard disk. This is also known as the “Hollywood Principle”: don’t call us, we’ll call you, as Hollywood agents famously say to actors.

In this case, the interaction between the hard disk and the CPU is similar. The hard disk says to the CPU, “Don’t keep asking me if I’m done with my IO. I’ll let you know when I’m done.”

Of course, the CPU still has to check for interrupts periodically, just as the actor still has to answer the phone, but that beats asking over and over, mostly in vain.

The CPU receives an interrupt signal, say from the hard disk, and enters the interrupt handling routine, thus interrupting the executing thread at hand and returning to the ready queue. A thread that was previously waiting for I/O returns to the ready queue as the I/O completes, and the CPU may select it to execute.

On the other hand, time-slice round-robin scheduling itself is essentially driven by a timer interrupt, which moves a thread from running back to ready:

For example, a 10 ms countdown is set; when it expires, an interrupt is sent, as if a deadline had arrived; the countdown is then reset, and so on.

The thread currently enjoying the CPU’s attention probably doesn’t want to hear this interrupt signal, because it means its time with the CPU is coming to an end…

Now let’s look again at the thread states defined in Java. It also has BLOCKED, it also has WAITING, and it goes further with TIMED_WAITING:

So here’s the question: what is a Java thread’s state during blocking I/O? Is it BLOCKED? Or WAITING?

As you might have guessed, since this is all under the topic of RUNNABLE, the state is still RUNNABLE. We can verify this with a test:

```java
@Test
public void testInBlockedIOState() throws InterruptedException {
    Scanner in = new Scanner(System.in);
    // Create a thread named "input-output"
    Thread t = new Thread(new Runnable() {
        @Override
        public void run() {
            try {
                // Blocking read from the command line
                String input = in.nextLine();
                System.out.println(input);
            } catch (Exception e) {
                e.printStackTrace();
            } finally {
                IOUtils.closeQuietly(in); // Apache Commons IO
            }
        }
    }, "input-output"); // thread name
    // Start the thread
    t.start();
    // Make sure run() has started executing
    Thread.sleep(100);
    assertThat(t.getState()).isEqualTo(Thread.State.RUNNABLE);
}
```

Add a breakpoint to the last statement, which is also reflected on the monitor:

The same applies to blocking network calls such as socket.accept, which we call a “blocking” method, yet the thread state is still RUNNABLE:

```java
@Test
public void testBlockedSocketState() throws Exception {
    Thread serverThread = new Thread(new Runnable() {
        @Override
        public void run() {
            ServerSocket serverSocket = null;
            try {
                serverSocket = new ServerSocket(10086);
                while (true) {
                    // The blocking accept method
                    Socket socket = serverSocket.accept();
                    // TODO
                }
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                try {
                    serverSocket.close();
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }
    }, "socket-thread"); // thread name
    serverThread.start();
    // Make sure run() has started executing
    Thread.sleep(500);
    assertThat(serverThread.getState()).isEqualTo(Thread.State.RUNNABLE);
}
```

Monitoring display:

Of course, Java introduced the so-called NIO (New I/O) package long ago, but I won’t go into what the thread states look like under NIO here.

At least we’ve seen that with traditional I/O, we colloquially say the thread is “blocked,” but this “blocking” is not the same thing as the thread’s BLOCKED state!

How, then, should we understand the RUNNABLE state?

First of all, as mentioned above, note the distinction between the two layers:

The virtual machine rides on top of your operating system, and the operating system beneath it exists as a resource to meet the virtual machine’s needs.

While the underlying operating system thread may indeed be blocked when performing blocking IO operations, we care about the thread state of the JVM.

The JVM doesn’t care about such underlying implementation details, whether it’s the time slice expiring or the thread being switched off for I/O.

As stated earlier, “A thread in the RUNNABLE state is executing in the Java virtual machine, but it may be waiting for other resources from the operating system, such as the processor.”

The JVM treats all of them as resources, whether CPU, hard disk, or network card: as long as something is serving the thread, it considers the thread to be “executing.”

It doesn’t care which underlying resource is satisfying the thread’s needs at any given moment.

Even when the CPU is not executing the thread, the network card may still be listening on its behalf, even if no data has arrived yet:

It’s like the receptionist or the security guard sitting in their place. They may not be serving anyone, but can you say they’re not working?

So the JVM thinks the thread is still executing. The thread state of the operating system revolves around the CPU core, which is different from the JVM’s focus.

We also emphasized earlier that changes in a Java thread’s state are usually related only to mechanisms the JVM itself explicitly introduces; if a thread’s state changes in the JVM, it is usually one of those mechanisms that caused it.

For example, synchronized can send a thread into the BLOCKED state, and methods such as sleep and wait can send it into the WAITING or TIMED_WAITING states.
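A small sketch (class and variable names are my own) showing both mechanisms at once: one thread sleeps while holding a monitor, and a second thread blocks trying to enter it:

```java
public class BlockedWaitingDemo {
    public static void main(String[] args) throws InterruptedException {
        final Object lock = new Object();

        // holder grabs the monitor and sleeps while holding it
        Thread holder = new Thread(() -> {
            synchronized (lock) {
                try {
                    Thread.sleep(1000);
                } catch (InterruptedException ignored) {
                }
            }
        }, "holder");

        // contender then tries to enter the same synchronized block
        Thread contender = new Thread(() -> {
            synchronized (lock) {
                // no-op; we only care about the state while waiting
            }
        }, "contender");

        holder.start();
        Thread.sleep(100);  // let holder acquire the monitor first
        contender.start();
        Thread.sleep(100);  // let contender hit the monitor

        System.out.println(holder.getState());    // TIMED_WAITING (in sleep)
        System.out.println(contender.getState()); // BLOCKED (waiting for the monitor)

        holder.join();
        contender.join();
    }
}
```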

This corresponds to the traditional thread state as follows:

The RUNNABLE state corresponds to the traditional ready and running states, plus part of the traditional waiting state.