The Master said, “A man who loves boldness but hates poverty will cause disorder; and a man who is not benevolent, if pressed too hard, will also cause disorder.” — The Analects of Confucius, Tai Bo

Hundred-blog series. This article is v25.xx HongMeng kernel source code analysis (concurrency and parallelism) | Two concepts you have heard countless times

Related articles on task management:

  • v03.xx HongMeng kernel source code analysis (clock) | Who makes the biggest contribution to triggering scheduling
  • v04.xx HongMeng kernel source code analysis (task scheduling) | What is the kernel's scheduling unit
  • v05.xx HongMeng kernel source code analysis (task management) | How is the task pool managed
  • v06.xx HongMeng kernel source code analysis (scheduling queues) | How many scheduling queues does the kernel have
  • v07.xx HongMeng kernel source code analysis (scheduling mechanism) | How are tasks scheduled
  • v21.xx HongMeng kernel source code analysis (thread concept) | Who is constantly competing for the CPU
  • v25.xx HongMeng kernel source code analysis (concurrency and parallelism) | Two concepts you have heard countless times
  • v37.xx HongMeng kernel source code analysis (system calls) | The eternal theme
  • v41.xx HongMeng kernel source code analysis (task switching) | Seeing how tasks switch, from assembly

This article covers concurrency and parallelism.

It is recommended to read the process/thread series first to gain a better understanding of concurrency and parallelism.

Understanding concurrency

  • Concurrency: multiple threads take turns running on a single core; only one thread is actually running at any instant.

  • A popular analogy is a one-way highway: the number of lanes is the number of CPU cores, a running car is a thread (task), and a process is the company that manages the cars; a company can own many cars. Whether you get concurrency or parallelism depends on the number of CPU cores. Only one car can occupy a single lane at a time, but the dispatch system is so good that it swaps cars within milliseconds, so fast that you never notice the switch. From the outside everything seems to happen at the same time: serial at the micro level, parallel at the macro level.

  • The essence of thread switching is that the CPU has to change work sites. The site is the task stack: each task stack holds all the materials needed to work, so work can resume the moment the CPU arrives. Those materials are the task context; simply put, a record of where the last job stopped, so that on return the task continues from there. The context is saved by the task stack itself; the CPU does not care. It only takes the materials each task hands over, and moves bricks wherever the materials tell it to.

If you remember just one word for each, you have the difference: concurrency is "taking turns"; parallelism is "at the same time".

Understanding parallelism

In parallelism, each thread is allocated to a separate CPU core, and the threads really do run at the same time.

The popular analogy is a multi-lane highway: simultaneous at both the micro and the macro level. Parallelism is fast, of course, and many hands make light work, but more workers inevitably bring management problems, which make things more complicated. Think about it: what problems will arise?

Understanding coroutines

Take coroutines: the Go language, for example, supports them natively. Coroutines actually have nothing to do with the kernel layer; they are an application-layer concept, a higher level of encapsulation on top of threads. In the popular metaphor, it is the people inside the car taking turns at the wheel. To the kernel it is nothing new: the kernel is only responsible for scheduling the car, and what you do inside the car is up to the application. The essential difference is that for a coroutine switch the CPU does not move at all, whereas concurrency/parallelism switches do move the CPU.

How does the kernel describe the CPU

    typedef struct {
        SortLinkAttribute taskSortLink;             /* task sort link */ // Each CPU core has a linked list of tasks to sort
        SortLinkAttribute swtmrSortLink;            /* swtmr sort link */ // Each CPU core has a linked list of timer sorts

        UINT32 idleTaskID;                          /* idle task id */  // Idle task ids are seen in OsIdleTaskCreate
        UINT32 taskLockCnt;                         /* task lock flag */ // Task lock count; while > 0, scheduling on this CPU is locked
        UINT32 swtmrHandlerQueue;                   /* software timer timeout queue id */ // Handle to the soft clock timeout queue
        UINT32 swtmrTaskID;                         /* software timer task id */ // ID of the soft clock task

        UINT32 schedFlag;                           /* pending scheduler flag */ // INT_NO_RESCH or INT_PEND_RESCH
    #if (LOSCFG_KERNEL_SMP == YES)
        UINT32 excFlag;                             /* cpu halt or exc flag */ // Indicates the CPU is halted or in an exception
    #endif
    } Percpu;

    Percpu g_percpu[LOSCFG_KERNEL_CORE_NUM];// Global CPU array

This is the kernel's description of the CPU: mainly two sorted linked lists, one sorting tasks and one sorting timers. What does that mean? As mentioned several times in this series, the task, not the process, is the kernel's scheduling unit. Scheduling does involve processes, and it does switch processes and user spaces, but the core of scheduling is switching tasks. Each task's code instructions are the CPU's food, consumed one instruction at a time, and every task must specify its pickup address: the entry function.

The other thing that provides an entry function is the timer task. Timers are so important and so common that without them your 9:00 a.m. alarm would never go off. In the kernel, each CPU has its own separate task list and timer list.

Each time a tick arrives, the handler scans the two linked lists to see whether any timer has expired. If one has, the timer task runs its callback immediately; the software timer (swtmr) task has the highest priority of all tasks, priority 0.

LOSCFG_KERNEL_SMP

#if (LOSCFG_KERNEL_SMP == YES)
#define LOSCFG_KERNEL_CORE_NUM                          LOSCFG_KERNEL_SMP_CORE_NUM // Number of CPU cores in multi-core mode
#else
#define LOSCFG_KERNEL_CORE_NUM                          1 // Single-core configuration
#endif

A multi-core operating system can be organized in three modes (AMP, SMP, BMP); HongMeng implements SMP.

  • Asymmetric Multiprocessing (AMP): each CPU core runs a separate operating system, or a separate instance of the same operating system.

  • Symmetric Multiprocessing (SMP) An instance of an operating system can manage all CPU cores at the same time. Applications are not bound to a single CPU core.

  • Bound Multiprocessing (BMP) An instance of an operating system can manage all CPU cores simultaneously, but each application is locked to a specific core.

The macro LOSCFG_KERNEL_SMP indicates support for multiple CPU cores; it is enabled by default.

Multi-cpu core support

The Hongmeng kernel's multi-core operations can be seen in los_mp.c; the file is small, so the whole code is posted here.

    #if (LOSCFG_KERNEL_SMP == YES)
    // Send a scheduling signal to the target CPUs
    VOID LOS_MpSchedule(UINT32 target) // Each bit of target corresponds to a CPU core
    {
        UINT32 cpuid = ArchCurrCpuid();
        target &= ~(1U << cpuid); // Exclude the current CPU
        HalIrqSendIpi(target, LOS_MP_IPI_SCHEDULE); // Send a scheduling signal to the target CPUs via an inter-processor interrupt (IPI)
    }

    // Hard-interrupt wakeup handler
    VOID OsMpWakeHandler(VOID)
    {
        /* generic wakeup ipi, do nothing */
    }

    // Hard-interrupt scheduling handler
    VOID OsMpScheduleHandler(VOID)
    {
        /*
         * Set the schedule flag, differing from the wake function,
         * so that the scheduler can be triggered at the end of the irq.
         */
        OsPercpuGet()->schedFlag = INT_PEND_RESCH; // Attach a pending-schedule flag to the current CPU
    }

    // Hard-interrupt halt handler
    VOID OsMpHaltHandler(VOID)
    {
        (VOID)LOS_IntLock();
        OsPercpuGet()->excFlag = CPU_HALT; // Halt the current CPU

        while (1) {} // Fall into an empty loop, i.e. an idle state
    }

    // MP timer handler: check all available tasks
    VOID OsMpCollectTasks(VOID)
    {
        LosTaskCB *taskCB = NULL;
        UINT32 taskID = 0;
        UINT32 ret;

        /* checking all the available tasks */
        for (; taskID <= g_taskMaxNum; taskID++) {
            taskCB = &g_taskCBArray[taskID];

            if (OsTaskIsUnused(taskCB) || OsTaskIsRunning(taskCB)) {
                continue;
            }

            /*
             * Though task status is not atomic, this check may succeed while the
             * deletion does not complete; the deletion will be handled on the next run.
             */
            if (taskCB->signal & SIGNAL_KILL) { // The task received a kill signal
                ret = LOS_TaskDelete(taskID); // Return the task to the task pool
                if (ret != LOS_OK) {
                    PRINT_WARN("GC collect task failed err:0x%x\n", ret);
                }
            }
        }
    }

    // Multi-core processor (MP) initialization
    UINT32 OsMpInit(VOID)
    {
        UINT16 swtmrId;

        (VOID)LOS_SwtmrCreate(OS_MP_GC_PERIOD, LOS_SWTMR_MODE_PERIOD, // Create a periodic 100-tick timer
                              (SWTMR_PROC_FUNC)OsMpCollectTasks, &swtmrId, 0); // OsMpCollectTasks is the timeout callback
        (VOID)LOS_SwtmrStart(swtmrId); // Start the timer

        return LOS_OK;
    }
    #endif

The code above is annotated line by line; a few extra notes:

1. OsMpInit

In the multi-core case each CPU has its own number, and the kernel distinguishes a primary CPU from secondary CPUs. Number 0 is the primary CPU by default; OsMain() runs on the primary CPU and is called from assembly code. Initialization only starts one periodic timer task, and the only thing that task does is reclaim killed tasks. The condition for reclaiming is whether a task has received a kill signal. For example, the shell command kill -9 14 sends signal 9 (kill) to thread 14, and the signal is stored on the thread: a task can kill itself or wait to be killed. Note that two kinds of tasks cannot be killed: system tasks, and tasks that are currently running.

2. Initialize the secondary CPU

Also called from assembly code, each secondary CPU core is initialized with the following function:

    // Executed once per secondary CPU; in the quad-core case it runs three times,
    // since core 0 is by convention the primary CPU that executes main
    LITE_OS_SEC_TEXT_INIT VOID secondary_cpu_start(VOID)
    {
    #if (LOSCFG_KERNEL_SMP == YES)
        UINT32 cpuid = ArchCurrCpuid();
        OsArchMmuInitPerCPU(); // Each CPU must initialize its MMU

        OsCurrTaskSet(OsGetMainTask()); // Set the current task for this CPU

        /* increase cpu counter */
        LOS_AtomicInc(&g_ncpu); // Count this CPU in

        /* store each core's hwid */
        CPU_MAP_SET(cpuid, OsHwIDGet()); // Store the hardware id of each CPU
        HalIrqInitPercpu(); // Initialize this CPU's hardware interrupts

        OsCurrProcessSet(OS_PCB_FROM_PID(OsGetKernelInitProcessID())); // Set the kernel process as this CPU's current process
        OsSwtmrInit(); // Initialize software timers; each CPU maintains its own timer queue
        OsIdleTaskCreate(); // Create the idle task; each CPU maintains its own task queue
        OsStart(); // This CPU officially starts working at the kernel layer
        while (1) {
            __asm volatile("wfi"); // WFI: Wait For Interrupt, wait for the next interrupt
        } // Similarly WFE: Wait For Event, wait for the next event
    #endif
    }

You can see the initialization steps for a secondary CPU:

  • Initialize the MMU: OsArchMmuInitPerCPU

  • Set the current task: OsCurrTaskSet

  • Initialize hardware interrupts: HalIrqInitPercpu

  • Initialize the timer queue: OsSwtmrInit

  • Create the idle task: OsIdleTaskCreate; when there is nothing else to run, the CPU stays in the idle task, spinning by itself

  • Start its own workflow: OsStart; the CPU officially starts working and running tasks

What are the problems with multi-CPU cores?

  • How are resource conflicts between CPUs handled?

  • How is communication between CPUs (also called inter-core communication) solved?

  • How do we ensure that two CPUs never run the same task at the same time?

  • How does assembly code mobilize the CPUs?

These are covered elsewhere in the series, or you can go straight to the annotated kernel code. I won't explain them here.

Intensive reading of the kernel source code

Four repositories keep the annotated kernel source code in sync; >> view the Gitee repository

A hundred blogs of analysis, digging deep into the kernel's foundations

While annotating the HongMeng kernel source code, I organized the following articles. The content is based on the source code and, wherever possible, uses everyday-life analogies to turn kernel knowledge into scenes with a pictorial feel that are easy to understand and remember. It matters to explain things in a way others can understand! These 100 blogs are definitely not a pile of forbiddingly difficult concepts copied off the web; that would be no fun. The hope is rather to make the kernel come alive and feel more approachable. It is hard, very hard, but there is no turning back. 😛 Just as code bugs need constant debugging, the articles and annotations will contain many errors and omissions; please bear with them. They will be revised repeatedly and updated continuously; "xx" marks the number of revisions, refined, concise, and comprehensive, striving for high-quality content.


HongMeng station | A little progress every day. Original content; reprints are welcome, please indicate the source.