Public account “code nongxiaokui”: operating system – process scheduling algorithms

The last article talked about what a process is and how the CPU needs to switch back and forth between processes in a multi-process system.

So how does the CPU perform context switching? Suppose process A needs a long computation time while process B needs only a short one, but the user is eager to get a response from process B.

At this time, how to balance the efficiency and fairness of process switching is very important.

In fact, if you only optimize for one of the two, the algorithm is relatively easy to implement; but if you want to take care of both, the difficulty rises sharply.

Before we talk about algorithms, let’s look at what scheduling is.

Scheduling

Every process, of course, wants to be executed by the CPU as soon as possible, but there is only one CPU and hundreds of processes; conditions simply do not allow any process to have exclusive use of the CPU.

The operating system therefore has to implement a series of scheduling policies so that the CPU takes care of every process in turn. (CPU: life is so hard)

As mentioned earlier, a process has three basic states. Each state switch actually means that a process has been switched, that is, the operating system has performed a process schedule.

When scheduling occurs:

A process's basic state moves back and forth among running, blocked, and ready, and each state change triggers a process dispatch.

  • Ready -> Running: after a process is created, it enters the ready queue and waits to be selected by the operating system for execution.
  • Running -> Blocked: when an I/O event occurs, the process enters the blocking queue, and the operating system selects another process from the ready queue to execute.
  • Running -> End: after a process finishes and exits, the operating system selects another process from the ready queue to execute.

Whenever a process changes state, the operating system must decide whether to cede the CPU to another process, when to cede it, and which process to cede it to. These decisions make up the so-called process scheduling policy.

There are many process scheduling policies. According to how they handle the periodic clock interrupt, scheduling algorithms fall into two categories:

Supplement: a clock interrupt is an interrupt generated periodically by a hardware timer, and each tick gives the operating system a chance to intervene. A related notion is the CPU's clock frequency: the number of digital pulse oscillations per second, where each cycle corresponds to one CPU operation. In other words, the higher the clock frequency, the more operations the CPU performs per unit of time. This is the principle behind the “overclocking” that hardware enthusiasts use to improve CPU performance.

  • Non-preemptive scheduling algorithms ignore clock interrupts and switch to another process only when the running process blocks or exits.
  • Preemptive scheduling algorithms let each process run for at most one time slice. When a clock interrupt arrives at the end of the slice and the process is still running, it is suspended, and the operating system selects another process from the ready queue to execute. Because the CPU executes each process for only a short period at a time, this is known as the time slice mechanism.

Before we talk about algorithms in detail, we need to understand the metrics of the process scheduling algorithm so that we can understand why the algorithm is thinking this way.

  • CPU utilization: when a process makes an I/O request, it blocks waiting for the disk to return data, and the CPU would sit idle. Frequent I/O requests would drag down CPU utilization, so the scheduler picks another process from the ready queue to keep the CPU busy.
  • System throughput: the number of tasks the CPU completes per unit time. If one long-running process hogs the CPU, subsequent short jobs are treated unfairly and pile up in the ready queue, and throughput drops.
  • Turnaround time: the running time of a process plus its waiting time. If a process needs only 1s to finish but waits 5s, that kind of turnaround time is exactly what we want to avoid.
  • Waiting time: how long a process waits in the ready queue. A process that waits indefinitely without ever getting the CPU is said to be “starving”.
  • Response time: the time from a user action, such as a mouse click or key press, to the process's response. Interactive systems weight this metric very heavily, and of course the shorter the better.
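The turnaround and waiting time metrics above can be sketched in a few lines of Python. This is a minimal illustration, assuming jobs run non-preemptively in the given order and are described by hypothetical (arrival, burst) pairs:

```python
# Minimal sketch of turnaround/waiting time, assuming non-preemptive
# execution of jobs in list order; (arrival, burst) pairs are illustrative.
def metrics(jobs):
    """jobs: list of (arrival_time, burst_time) in execution order.
    Returns (turnaround_times, waiting_times)."""
    clock = 0
    turnaround, waiting = [], []
    for arrival, burst in jobs:
        clock = max(clock, arrival)               # CPU may idle until arrival
        finish = clock + burst
        turnaround.append(finish - arrival)       # turnaround = run + wait
        waiting.append(finish - arrival - burst)  # wait = turnaround - run
        clock = finish
    return turnaround, waiting

t, w = metrics([(0, 1), (0, 5)])
# → t == [1, 6], w == [0, 1]: the second job's turnaround includes 1s of waiting
```

Running the 1s-job first keeps both turnaround times low; running the 5s-job first would give the short job a turnaround of 6s, the bad case described above.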

Once we understand these metrics, we know what each scheduling algorithm is optimizing for. Next, let's look at the specific algorithms.

Process scheduling algorithm

There are many process scheduling algorithms. They did not all appear at once but evolved to meet changing needs, so the following presents them in historical order.

  1. First come, first served scheduling algorithm

    The simplest algorithm is First Come First Served (FCFS). As the name implies, whoever arrives first runs first: the scheduler picks the process at the head of the ready queue, runs it until it finishes or blocks, and then picks the next process from the ready queue.

    That sounds fair, but when a long job runs first, the short jobs behind it wait a long time. It's like queuing for a meal: there are only three people in line, but the first person is buying food for five roommates, so everyone behind has to wait far too long.
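    A minimal FCFS sketch makes the long-job problem concrete. The (name, arrival, burst) job format and the numbers are illustrative assumptions:

    ```python
    # Minimal FCFS sketch: run jobs in arrival order, each to completion.
    # The (name, arrival, burst) format and the numbers are illustrative.
    def fcfs(jobs):
        """jobs: list of (name, arrival_time, burst_time).
        Returns {name: waiting_time}."""
        clock, waits = 0, {}
        for name, arrival, burst in sorted(jobs, key=lambda j: j[1]):
            clock = max(clock, arrival)   # CPU idles until the job arrives
            waits[name] = clock - arrival # time spent in the ready queue
            clock += burst                # run to completion, non-preemptive
        return waits

    fcfs([("long", 0, 100), ("short", 1, 2)])
    # → {"long": 0, "short": 99}: the short job waits almost the
    #   long job's entire burst, the "five roommates" problem above
    ```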

  2. Shortest job first scheduling algorithm

    Shortest Job First (SJF) is also easy to understand: pick the process with the shortest execution time to run first. In the example above, the person buying food for five roommates goes to the end of the line; whoever buys less comes first.

    But a new problem arises. If the ready queue keeps sending in an endless stream of short jobs, then under this policy the long jobs never get the CPU, and those processes starve.
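    Non-preemptive SJF can be sketched with a min-heap keyed on burst time. The job format and the example jobs are illustrative assumptions:

    ```python
    import heapq

    # Minimal non-preemptive SJF sketch: at each decision point, the ready
    # job with the shortest burst runs next. Job fields are illustrative.
    def sjf(jobs):
        """jobs: list of (name, arrival_time, burst_time).
        Returns the order in which jobs run."""
        jobs = sorted(jobs, key=lambda j: j[1])  # by arrival time
        ready, order, clock, i = [], [], 0, 0
        while ready or i < len(jobs):
            # admit everything that has arrived by now
            while i < len(jobs) and jobs[i][1] <= clock:
                heapq.heappush(ready, (jobs[i][2], jobs[i][0]))  # key: burst
                i += 1
            if not ready:            # CPU idle until the next arrival
                clock = jobs[i][1]
                continue
            burst, name = heapq.heappop(ready)
            order.append(name)
            clock += burst
        return order

    sjf([("long", 0, 10), ("s1", 0, 2), ("s2", 0, 2)])
    # → ["s1", "s2", "long"]: short jobs jump ahead; with a steady
    #   stream of new short jobs, "long" would keep getting pushed back
    ```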

  3. High response ratio priority algorithm

    First come, first served favors long jobs, while shortest job first favors short jobs; neither strikes a balance between the two.

    The high response ratio priority algorithm schedules processes according to a priority computed as:

    Priority = (waiting time + requested service time) / requested service time

    According to the formula, if two processes request the same service time, the one that has waited longer takes precedence.

    If the waiting times are the same, the one requesting the shorter service time takes precedence.
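    The formula above is a one-liner; the waiting and service times plugged in below are illustrative numbers:

    ```python
    # The high response ratio formula from above; inputs are illustrative.
    def response_ratio(waiting, service):
        # Priority = (waiting time + requested service time) / requested service time
        return (waiting + service) / service

    # Same service time: the longer waiter wins.
    response_ratio(6, 3)  # → 3.0
    response_ratio(3, 3)  # → 2.0
    # Same waiting time: the shorter job wins.
    response_ratio(4, 2)  # → 3.0
    response_ratio(4, 4)  # → 2.0
    ```

    Note that a long job's ratio keeps growing as it waits, so unlike SJF it cannot starve forever.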

  4. Time slice rotation scheduling algorithm

    Round Robin (RR) is pure, simple fairness: each process is allocated a time slice, and the CPU switches between processes in turn.

    If a time slice ends and the process has not finished executing, it suspends and switches to another process.

    If the process finishes or blocks before its time slice ends, the CPU switches to another process immediately.
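    The two rules above can be sketched with a simple queue. This assumes all jobs are ready at time 0 and ignores I/O blocking; the (name, burst) format is illustrative:

    ```python
    from collections import deque

    # Minimal round-robin sketch: every process gets the same time slice;
    # an unfinished process goes to the back of the queue. All jobs are
    # assumed ready at t=0; the (name, burst) format is illustrative.
    def round_robin(jobs, quantum):
        """jobs: list of (name, burst_time). Returns a (name, ran) trace."""
        queue = deque(jobs)
        trace = []
        while queue:
            name, remaining = queue.popleft()
            ran = min(quantum, remaining)   # finish early or use the full slice
            trace.append((name, ran))
            if remaining > ran:             # slice expired: back of the line
                queue.append((name, remaining - ran))
        return trace

    round_robin([("A", 3), ("B", 1)], quantum=2)
    # → [("A", 2), ("B", 1), ("A", 1)]: B responds after at most one
    #   slice instead of waiting for A to finish entirely
    ```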

  5. Highest priority scheduling algorithm

    Time slice rotation is arguably too fair: in reality, tasks differ in urgency. Some processes can crunch away slowly in the background, while others urgently need to respond to the user.

    Therefore, priority is added on top of time slice rotation: the scheduler still rotates through processes, but on each rotation it tries to select the process with the highest priority.
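    One way to sketch this is a heap keyed on priority: each rotation pops the highest-priority ready process, runs it for one slice, and requeues it if unfinished. The priority values, names, and bursts are all illustrative assumptions:

    ```python
    import heapq

    # Minimal sketch of priority-plus-time-slice scheduling: each rotation
    # runs the highest-priority ready job for one slice. Lower number =
    # higher priority; all values here are illustrative.
    def priority_rr(jobs, quantum):
        """jobs: list of (priority, name, burst_time). Returns run trace."""
        heap = list(jobs)
        heapq.heapify(heap)
        trace = []
        while heap:
            prio, name, remaining = heapq.heappop(heap)
            ran = min(quantum, remaining)
            trace.append(name)
            if remaining > ran:  # unfinished: requeue at the same priority
                heapq.heappush(heap, (prio, name, remaining - ran))
        return trace

    priority_rr([(2, "bg", 2), (1, "ui", 3)], quantum=2)
    # → ["ui", "ui", "bg"]: the urgent "ui" job gets every slice
    #   until it finishes; the background job runs afterward
    ```

    Note the trade-off this sketch exposes: with strict priorities, a low-priority job runs only when nothing more urgent is ready, which is exactly the starvation risk the multi-level feedback queue below is designed to soften.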

  6. Multi-level feedback queue scheduling algorithm

    Multilevel Feedback Queue (MLFQ).

    “Multi-level” means there is no longer a single ready queue but multiple queues, each with a different priority and time slice length.

    “Feedback” means the algorithm is dynamic, adjusting according to each process's behavior.

Here is how the mechanism works:

  • There are multiple queues with different priorities; the higher a queue's priority, the shorter its time slice.
  • A new process is first placed in the highest-priority queue. If it does not finish within that queue's time slice, it is demoted to the second-level queue, and so on until it finishes.
  • Processes in a lower queue run only when all higher queues are empty. If a process from a lower-level queue is running and a new process enters a higher-level queue, the current process is stopped and the higher-level process runs instead.

This algorithm balances fairness and efficiency. First, short jobs entering the high-priority queue finish quickly without ever reaching the lower queues, which guarantees fast response for short jobs. Second, a long job may not finish within the high-priority queue's time slice and gets moved down to a lower queue; although the lower queue has lower priority, its time slice is longer. In other words, a low-level queue is selected less often, but when it is selected, its processes get to run for longer.
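The rules above can be sketched as follows. This minimal version assumes all jobs arrive at time 0 (so the preemption-by-new-arrival rule never fires), and the queue count, slice lengths, and job values are illustrative:

```python
from collections import deque

# Minimal multi-level feedback queue sketch: higher queues have shorter
# slices, a job that exhausts its slice is demoted, and lower queues run
# only when higher ones are empty. All jobs are assumed ready at t=0;
# slice lengths and job values are illustrative.
def mlfq(jobs, slices=(1, 2, 4)):
    """jobs: list of (name, burst_time). Returns completion order."""
    queues = [deque() for _ in slices]
    for job in jobs:
        queues[0].append(job)          # new jobs enter the top queue
    finished = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, remaining = queues[level].popleft()
        remaining -= min(slices[level], remaining)          # run one slice
        if remaining == 0:
            finished.append(name)
        else:                          # demote; the bottom queue recycles
            queues[min(level + 1, len(queues) - 1)].append((name, remaining))
    return finished

mlfq([("short", 1), ("long", 6)])
# → ["short", "long"]: the short job finishes within the top queue's
#   slice, while the long job sinks to lower queues with longer slices
```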

END

Each process state switch means that a process schedule has occurred.

The process scheduling policy matters a great deal, and the metrics of a scheduling algorithm boil down to a trade-off between fairness and efficiency.

Each scheduling algorithm has its own strengths, and an appropriate policy can be chosen for different scenarios and different systems.


Welcome to follow “code nongxiaokui”, where more technology is shared with you.