This is the 24th day of my participation in the August Text Challenge.

Today we’re going to cover one of the most basic topics, and one that must be understood. Before we get into it, I’d like to ask the site staff why my first two articles weren’t recommended to the home page: was the content not good enough, or was I just overlooked? I hope they can give me a reply when they see this. Haha, just kidding. Today we are talking about threads. I wrote about thread safety in my last article, and you’re welcome to read it!

Today we’re talking about threads

How many states can threads have? Three? Five?

Some readers said my last two articles were a bit hard to follow, so this article was born! (Explanations + code)

Without further ado, let’s get to the point

What is a thread

When a program runs, a modern operating system creates a process for it. For example, when you start a Java program, the operating system creates a Java process. The smallest unit the operating system schedules on the CPU is the thread, also known as a Light Weight Process.

Multiple threads can be created within a process. Each thread has its own program counter, stack, and local variables, and all of them can access shared memory variables. The processor switches between these threads at high speed, giving the user the impression that they execute simultaneously.
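To make this concrete, here is a minimal sketch (the class and variable names are my own, for illustration): two threads each keep a local counter on their own stack, while both update a single shared variable on the heap.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class SharedVsLocal {
    // Shared memory: one heap variable visible to every thread in the process
    static final AtomicInteger shared = new AtomicInteger(0);

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            // 'local' lives on each thread's own stack;
            // the two threads never see each other's copy
            int local = 0;
            for (int i = 0; i < 1000; i++) {
                local++;
                shared.incrementAndGet(); // both threads update the shared variable
            }
            System.out.println(Thread.currentThread().getName() + " local=" + local);
        };
        Thread t1 = new Thread(task, "t1");
        Thread t2 = new Thread(task, "t2");
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Each thread's local counter reached 1000, and both sets of
        // increments landed in the one shared variable: 2000 total
        System.out.println("shared=" + shared.get());
    }
}
```

Each thread prints `local=1000`, but the shared counter ends at 2000, which is exactly the "own stack, shared heap" split described above.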

Thread implementations fall into two categories:

  • User-level threads

  • Kernel-level threads

Before we can understand this classification, we need to understand the system's user space and kernel space. Take a 32-bit system with a 4 GB address space as an example.

As you can see in the figure above, Linux reserves several page frames for kernel code and data structures; these are never swapped out to disk.

Linear addresses from 0x00000000 to 0xC0000000 (PAGE_OFFSET) can be referenced by both user code and kernel code (user space). Linear addresses from 0xC0000000 (PAGE_OFFSET) to 0xFFFFFFFF can only be accessed by kernel code (kernel space). The kernel code and its data structures must reside in this 1 GB address space, but the larger consumer of this space is the virtual mapping of physical addresses.

This means that of the 4 GB address space, only 3 GB is available to user applications. A process can run in either user mode or kernel mode. User programs run in user mode, while system calls run in kernel mode. The two modes use different stacks: user mode uses a normal stack, while kernel mode uses a fixed-size stack (typically the size of one memory page).
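A quick arithmetic check of the split described above (assuming the classic 32-bit Linux 3G/1G layout; the class name is illustrative):

```java
public class AddressSplit {
    public static void main(String[] args) {
        long pageOffset = 0xC0000000L;     // PAGE_OFFSET: user/kernel boundary
        long top        = 0x1_0000_0000L;  // top of the 4 GB address space

        long userBytes   = pageOffset - 0x00000000L; // user space size
        long kernelBytes = top - pageOffset;         // kernel space size

        System.out.println("user space:   " + userBytes / (1L << 30) + " GB");
        System.out.println("kernel space: " + kernelBytes / (1L << 30) + " GB");
    }
}
```

This prints 3 GB of user space and 1 GB of kernel space, matching the layout in the figure.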

Each process has its own 3 GB of user space, and all processes share the 1 GB of kernel space. When a process enters kernel space from user space, it is no longer running in its own process space. This is why we say that a thread context switch involves switching from user mode to kernel mode.

User-level threads

User-level threads are implemented in user programs without kernel support. They are independent of the operating system core and are controlled by the application process through a thread library, which provides functions to create, synchronize, schedule, and manage threads. Because no switch between user mode and kernel mode is needed, operations on them are fast. However, the operating system kernel is unaware that multiple threads exist, so if one thread blocks, the entire process (including all of its threads) blocks. And because processor time slices are allocated per process, the execution time each thread receives is correspondingly reduced.

Kernel-level threads

All management of threads is done by the operating system kernel. The kernel keeps thread state and context information, and when one thread makes a blocking system call, the kernel can schedule other threads of that process to execute. On multiprocessor systems, the kernel can dispatch multiple threads belonging to the same process to run on multiple processors, increasing the parallelism of process execution. Because the kernel is required to create, schedule, and manage threads, these operations are much slower than user-level threads, but they are still faster than process creation and management. Most operating systems on the market, such as Windows and Linux, support kernel-level threading.
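Java threads on mainstream JVMs such as HotSpot are exactly such kernel-level threads: each started java.lang.Thread is backed by its own native thread that the OS can schedule, possibly on different cores in parallel. A small sketch (class and method names are my own):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class KernelThreadDemo {
    // Start n Java threads; each is a separate schedulable entity
    // with its own thread id, and returns how many distinct ids ran.
    static long distinctThreadIds(int n) throws InterruptedException {
        Set<Long> ids = ConcurrentHashMap.newKeySet();
        Thread[] workers = new Thread[n];
        for (int i = 0; i < n; i++) {
            workers[i] = new Thread(() -> ids.add(Thread.currentThread().getId()));
            workers[i].start();
        }
        for (Thread w : workers) w.join();
        return ids.size();
    }

    public static void main(String[] args) throws InterruptedException {
        // How many processors the kernel can spread these threads across
        System.out.println("available processors: "
                + Runtime.getRuntime().availableProcessors());
        System.out.println("distinct threads started: " + distinctThreadIds(4));
    }
}
```

On a multi-core machine the kernel is free to run those four threads simultaneously, which is the parallelism advantage described above.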

The difference between kernel-level and user-level threads

First look at the figure below, and read it together with the text above.

Relationship between Java threads and system kernel threads

There are two ways to create threads in the JVM

  • new java.lang.Thread().start()

  • Attach a native thread to the JVM using JNI (this one is more abstract)

new Thread().start()

// Implement the Runnable interface
public class RunnableThread implements Runnable {
    @Override
    public void run() {
        System.out.println("Implementing a thread by implementing the Runnable interface");
    }
}

// Inherit the Thread class
public class ExtendsThread extends Thread {
    @Override
    public void run() {
        System.out.println("Implementing a thread by extending the Thread class");
    }
}

// There is also the familiar thread factory used by thread pools
// (from java.util.concurrent.Executors)
static class DefaultThreadFactory implements ThreadFactory {
    private static final AtomicInteger poolNumber = new AtomicInteger(1);
    private final ThreadGroup group;
    private final AtomicInteger threadNumber = new AtomicInteger(1);
    private final String namePrefix;

    DefaultThreadFactory() {
        SecurityManager s = System.getSecurityManager();
        group = (s != null) ? s.getThreadGroup()
                            : Thread.currentThread().getThreadGroup();
        namePrefix = "pool-" + poolNumber.getAndIncrement() + "-thread-";
    }

    public Thread newThread(Runnable r) {
        Thread t = new Thread(group, r,
                namePrefix + threadNumber.getAndIncrement(),
                0);
        if (t.isDaemon())
            t.setDaemon(false);
        if (t.getPriority() != Thread.NORM_PRIORITY)
            t.setPriority(Thread.NORM_PRIORITY);
        return t;
    }
}

I’ve only listed a few approaches here, but there is really only one way to create a thread: they all boil down to new Thread().start().
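The point above can be sketched as follows (class and method names are my own, for illustration): however the run() body is supplied, whether as a Runnable or a Thread subclass, nothing executes until start() is called on a Thread object.

```java
public class OneWayOnly {
    static String demo() throws InterruptedException {
        StringBuilder log = new StringBuilder();

        // Style 1: a Runnable wrapped in a Thread
        Thread a = new Thread(() -> log.append("A"));

        // Style 2: a Thread subclass overriding run()
        Thread b = new Thread() {
            @Override
            public void run() {
                log.append("B");
            }
        };

        // Nothing has run yet; both bodies execute only after start()
        a.start();
        a.join();
        b.start();
        b.join();
        return log.toString();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(demo()); // AB
    }
}
```

Both styles go through exactly the same door, Thread.start(), which is why we say there is only one way to create a thread.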

Why is there only one way to implement threads

Now let’s look at the second way.

Attach a native Thread to the JVM using JNI

For new java.lang.Thread().start(), a thread is actually created in the JVM only when the start() method is called. The main lifecycle steps are:

1. Create the corresponding JavaThread instance

2. Create the corresponding OSThread instance

3. Create native threads for the actual underlying operating system

4. Prepare appropriate JVM states, such as ThreadLocal storage space allocation, etc

5. The underlying native thread starts running and calls the run() method of the Object generated by java.lang.Thread

6. Terminate the Native Thread after the run() method of the Object generated by java.lang.Thread is executed and returned, or after an exception is thrown

7. Release resources associated with JVM threads and clear the corresponding JavaThread and OSThread
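The lifecycle steps above are visible from the Java side through Thread.getState(): before start() no native thread exists yet (state NEW), and after run() returns the native thread has been terminated and cleaned up (state TERMINATED). A small sketch (class name is illustrative):

```java
public class LifecycleDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> { /* run() body */ });

        // Before start(): only the java.lang.Thread object exists,
        // no JavaThread/OSThread/native thread yet
        System.out.println(t.getState()); // NEW

        t.start(); // steps 1-5: JVM creates the native thread, run() executes
        t.join();  // wait for run() to return

        // Steps 6-7: native thread terminated, resources released
        System.out.println(t.getState()); // TERMINATED
    }
}
```

In total java.lang.Thread.State defines six states (NEW, RUNNABLE, BLOCKED, WAITING, TIMED_WAITING, TERMINATED); this sketch only shows the two endpoints of the lifecycle.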

To attach a native thread to the JVM through JNI, the main steps are:

  • Request a connection to the running JVM instance through the JNI call AttachCurrentThread

  • The JVM creates the corresponding JavaThread and OSThread objects

  • The JVM creates the corresponding java.lang.Thread object

  • Once the java.lang.Thread object is created, JNI can call into Java code

  • When JNI calls DetachCurrentThread, the connection to the JVM instance is broken

  • The JVM clears the corresponding JavaThread, OSThread, and java.lang.Thread objects

Here’s a full life cycle diagram:

OK, that’s the end of today’s lesson. Threads are the most basic part, and we need to understand them well. Later I will cover the topic of the JVM.

Conclusion

Thank you for reading. If you feel you’ve learned something, please like and follow. You’re also welcome to ask questions in the comments below.

Come on! See you next time!