Process scheduling

  • Considerations

    1. To build a scheduling policy, we first make some simplifying assumptions about the processes running on the system, known collectively as the workload.

      (1) Each job runs for the same amount of time

      (2) All jobs arrive at the same time (when the gaps between arrivals are very small, the jobs can be treated as arriving simultaneously)

      (3) Once started, each job runs to completion

      (4) All jobs only use the CPU, i.e., they perform no I/O operations

      (5) The running time of each job is known

    2. To measure the pros and cons of different scheduling policies, we need a metric: turnaround time.

      (1) Definition: a task's completion time minus its arrival time, namely:

          T_turnaround = T_completion - T_arrival

      (2) When all the assumptions hold, every job arrives at time 0, so the turnaround time equals the completion time

      (3) Turnaround time is a performance metric. Performance and fairness are often at odds in scheduling: a scheduler can optimize performance, but at the cost of preventing some tasks from running, which reduces fairness
  • First in, first out (FIFO)

    1. First In, First Out: whichever job is ready first is executed first

    2. Suppose there are three jobs A, B, and C. A arrives slightly before B, and B slightly before C. Under our assumptions, A, B, and C can be treated as arriving at the same time, but in practice A is executed first, then B, then C. If each job runs for 10s, what is the average turnaround time?

    The turnaround time of A is 10s, B's is 20s, and C's is 30s

    The average turnaround time is (10+20+30)/3 = 20s

    3. Now relax assumption 1 and let A, B, and C run for different lengths of time. Does FIFO still achieve a good average turnaround time?

    4. Suppose the running time of A is 100s while B and C each run for 10s. If A still arrives slightly before B, and B before C (still treated as arriving at the same time), the average turnaround time becomes much longer: (100+110+120)/3 = 110s

    5. This problem with FIFO is called the convoy effect: relatively short potential consumers of a resource get queued behind a heavyweight resource consumer. Think of a grocery store with a single line: three full carts ahead of you can make you wait a long time. A sketch reproducing both FIFO results appears below.
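    A minimal Python sketch reproducing both FIFO averages above (the function name and job lists are illustrative, not from the source):

```python
def fifo_avg_turnaround(run_times):
    """Jobs arrive together at t=0 and run in list order, so each job's
    turnaround time equals its completion time."""
    t, total = 0, 0
    for run in run_times:
        t += run      # this job completes `run` seconds after the previous one
        total += t
    return total / len(run_times)

print(fifo_avg_turnaround([10, 10, 10]))   # 20.0
print(fifo_avg_turnaround([100, 10, 10]))  # 110.0 -- the convoy effect
```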

  • Shortest Job First (SJF)

    1. Shortest Job First: run the shortest job first, then the next shortest, and so on

    2. In the scenario of item 4 above, SJF gives an average turnaround time of (10+20+120)/3 = 50s, a significant improvement over FIFO. But the premise is that assumption 2 is met: all jobs arrive at the same time

    3. Now relax assumption 2 so that jobs can arrive at any time, and consider SJF's average turnaround time

    4. Say A arrives at t=0 and runs for 100s, while B and C arrive at t=10 and run for 10s each. Because SJF is non-preemptive, B and C must wait for A to finish: the turnaround time of A is 100s, that of B is 110-10 = 100s, and that of C is 120-10 = 110s. The average turnaround time is (100+100+110)/3 ≈ 103.33s

    5. Clearly, SJF can suffer a long average turnaround time when jobs may arrive at any time; a sketch of this case follows
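    A sketch of non-preemptive SJF with staggered arrivals (Python; representing jobs as (arrival, run_time) pairs is an assumption made for illustration):

```python
def sjf_avg_turnaround(jobs):
    """Non-preemptive SJF. jobs: list of (arrival, run_time) pairs."""
    pending, t, total = sorted(jobs), 0, 0
    while pending:
        # Among jobs that have already arrived, run the shortest;
        # if none has arrived yet, take the earliest arrival.
        ready = [j for j in pending if j[0] <= t] or [min(pending)]
        job = min(ready, key=lambda j: j[1])
        pending.remove(job)
        t = max(t, job[0]) + job[1]   # finish time of this job
        total += t - job[0]           # turnaround = completion - arrival
    return total / len(jobs)

# A arrives at t=0 and runs 100s; B and C arrive at t=10 and run 10s each
print(sjf_avg_turnaround([(0, 100), (10, 10), (10, 10)]))  # 103.33...
```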

  • Shortest Time-to-Completion First (STCF)

    1. Shortest Time-to-Completion First: relax assumption 3, so the scheduler may preempt the CPU from the running job and give it to another job.

    2. STCF adds preemption to SJF: whenever a new job enters the ready state, the scheduler determines which of the remaining jobs (including the new one) has the least time left to completion, and schedules that job

    3. In the scenario of item 4 above, STCF preempts A, runs B and C to completion, and only then resumes A. The turnaround time of A is then 120s, that of B 10s, and that of C 20s. The average turnaround time is (120+10+20)/3 = 50s, a significant improvement over SJF in the same scenario (see the sketch below)
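    The same scenario under preemptive STCF, simulated in 1s ticks (a Python sketch; the tick-based loop is a simplification, not how a real scheduler is built):

```python
def stcf_avg_turnaround(jobs):
    """Preemptive STCF. jobs: list of (arrival, run_time) pairs."""
    remaining = {i: run for i, (arrival, run) in enumerate(jobs)}
    finish, t = {}, 0
    while remaining:
        ready = [i for i in remaining if jobs[i][0] <= t]
        if not ready:
            t += 1                    # nothing has arrived yet; CPU idles
            continue
        i = min(ready, key=lambda i: remaining[i])  # least time left wins
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:         # job i just completed
            del remaining[i]
            finish[i] = t
    turnarounds = [finish[i] - arrival for i, (arrival, _) in enumerate(jobs)]
    return sum(turnarounds) / len(jobs)

# STCF preempts A as soon as the shorter B and C arrive
print(stcf_avg_turnaround([(0, 100), (10, 10), (10, 10)]))  # 50.0
```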

  • Additional considerations

    1. STCF is a good policy when assumptions 4 and 5 hold, i.e., job lengths are known, jobs only use the CPU, and turnaround time is the only metric. But with the introduction of time-sharing systems, a problem arises: users need to interact with running tasks, and turnaround time does not measure interactivity

    2. Response time: measures the interactivity of a task; defined as the time from when the task arrives in the system until the first time it runs, i.e., T_response = T_firstrun - T_arrival

  • Round-Robin (RR)

    1. Round-Robin: run a job for a time slice, then switch to the next task in the run queue, rather than running each job to completion; repeat until all tasks are complete.

    2. The time-slice length must be a multiple of the timer-interrupt period; otherwise the CPU would sit idle waiting for the next interrupt before a context switch could happen, wasting CPU resources

    3. Suppose three tasks A, B, and C arrive in the system at the same time and each wants to run for 5s. An SJF scheduler must run each task to completion before starting the next, while RR with a 1s time slice cycles through them quickly. RR's average response time is (0+1+2)/3 = 1s (note: since they arrive together, the arrival time is 0), while SJF's average response time is (0+5+10)/3 = 5s

    4. The time-slice length is critical for RR. The shorter the time slice, the better RR performs on response time, but it must not be too short: frequent context switches hurt overall performance, because the cost of a context switch is more than just the operating system saving and restoring a few registers. While running, a program also builds up a great deal of state in CPU caches, the TLB, branch predictors, and other on-chip hardware, which a switch flushes and which must then be rebuilt. So the time-slice length needs to be balanced carefully: long enough to amortize the context-switch cost, without making the system unresponsive

    5. Amortization: paying a fixed cost less often (performing the operation fewer times) reduces the system's total overhead. For example, if the time slice is 10ms and a context switch costs 1ms, about 10% of the time is wasted on switching. Raising the time slice to 100ms drops the overhead to less than 1%.

    6. Item 3 above considered only response time, not turnaround time. Computing turnaround time for RR in the same scenario: A is 13s, B is 14s, and C is 15s, averaging 14s. For SJF: A is 5s, B is 10s, and C is 15s, averaging 10s. So RR has good response time but poor turnaround time (the sketch after this list reproduces both numbers).

    7. So far, there are two classes of schedulers:

      (1) SJF and STCF optimize turnaround time, but at the cost of response time, i.e., poor interactivity

      (2) RR optimizes response time, but its turnaround time is poor
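    A round-robin sketch reproducing both numbers (Python; jobs arrive together at t=0, and the 1s time slice matches the example):

```python
from collections import deque

def rr_metrics(run_times, slice_len=1):
    """Return (average response time, average turnaround time) under RR."""
    q = deque(enumerate(run_times))   # (job index, remaining time)
    first_run, finish, t = {}, {}, 0
    while q:
        i, left = q.popleft()
        first_run.setdefault(i, t)    # record the first time each job runs
        ran = min(slice_len, left)
        t += ran
        if left > ran:
            q.append((i, left - ran))  # not finished: back of the queue
        else:
            finish[i] = t
    n = len(run_times)
    return sum(first_run.values()) / n, sum(finish.values()) / n

print(rr_metrics([5, 5, 5]))  # (1.0, 14.0): good response, poor turnaround
```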

  • Incorporating I/O

    1. Relax assumption 4 so that jobs perform I/O. The scheduler then faces two decisions:

      (1) When a job initiates an I/O request: the currently running task will not use the CPU during the I/O and is blocked waiting for it to complete. The scheduler must decide whether to wait for this task or to schedule another one

      (2) When an I/O completes: an interrupt is raised, and the operating system runs and moves the process that issued the I/O from blocked back to ready. The scheduler must then decide whether to run that task or stick with the current one

    2. Suppose there are two jobs A and B, each requiring 50ms of CPU time. A issues an I/O request after every 10ms of running, while B simply uses the CPU for 50ms. Build an STCF scheduler and treat each 10ms burst of A as an independent job: the scheduler first runs a 10ms sub-job of A, then runs B while A's I/O is in flight; when the I/O completes, A preempts B and runs for another 10ms. This overlap makes full use of the system.

    3. While interactive jobs (those with frequent I/O requests) are waiting on their I/O, other CPU-intensive tasks (those with little I/O) run, making better use of the processor; the timeline sketch below illustrates the example from item 2

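    A timeline sketch of this overlap (Python; the printed schedule follows the example's burst/I/O structure and is a simplification that ignores interrupt handling):

```python
# A: five 10ms CPU bursts, each followed by 10ms of I/O; B: 50ms of pure CPU.
# Treating each burst of A as a short job, STCF always runs A when it is
# ready, and B fills the gaps while A's I/O is in flight.
def overlap_timeline():
    t, b_left = 0, 50
    for burst in range(1, 6):
        print(f"{t:3d}-{t + 10:3d}ms: A runs (burst {burst})")
        if b_left > 0:                       # B runs during A's 10ms of I/O
            print(f"{t + 10:3d}-{t + 20:3d}ms: B runs")
            b_left -= 10
        t += 20
    # the CPU stays busy for the whole 100ms, instead of the 150ms
    # it would take to run A (with idle I/O gaps) and then B back-to-back

overlap_timeline()
```
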
  • Multi-level Feedback Queue (MLFQ)

    1. The problems Multi-level Feedback Queue (MLFQ) aims to solve:

      (1) Optimize turnaround time

      (2) Relax assumption 5: the running time of a task is not known in advance

      (3) Reduce response time, giving a better interactive experience

    2. Basic structure: There are many independent queues, each with a different priority level

    3. Basic rules:

      (1) Rule 1: If the priority of A is greater than that of B, run A (and not B)

      (2) Rule 2: If the priority of A equals the priority of B, run A and B in round-robin fashion

    4. How to change priority (attempt 1)?

      (1) The tasks to be performed by the system can be divided into the following two categories

      A. Interactive tasks, which run briefly and frequently give up the CPU

      B. Long, computationally intensive tasks that require a lot of CPU time and whose response time is not important

      (2) Priority adjustment algorithm

      A. Rule 3: When a task enters the system, it is placed at the highest priority (the topmost queue)

      B. Rule 4a: If a task uses up an entire time slice while running, its priority is reduced (it moves down one queue)

      C. Rule 4b: If a task gives up the CPU before its time slice is up, its priority stays the same

      (3) Example 1: Single long work

      Consider a scheduler with three queues (the source illustrates this with a figure). The job first enters the highest-priority queue (Q2); each time it uses up a 10ms time slice its priority drops one level, dropping first to Q1 and finally to the lowest queue, where it runs until completion

      (4) Example 2: A short job comes up

      There are two jobs: A, a long-running CPU-intensive task, and B, a short-running interactive task. Suppose A has already been executing for some time when B arrives. In the source's figure, A (black) sits in the lowest-priority queue (as example 1 shows, long-running tasks end up there), while B (gray) arrives at time T=100 and joins the highest-priority queue. Because its running time is short, B completes within two time slices, before it would be demoted to the lowest queue; then A resumes running.

      (5) The goal of the MLFQ algorithm: since it cannot know whether a task is short or long, it first assumes the task is short and gives it the highest priority. If the task really is short, it completes quickly; otherwise it slowly sinks to the lower-priority queues and is treated as a long job. In this way, MLFQ approximates SJF.

      (6) Example 3: What about I/O?

      According to rule 4b, interactive work that performs a lot of I/O (for example, waiting for keyboard or mouse input) gives up the CPU before its time slice runs out, in which case its priority remains unchanged

      Suppose interactive job B (gray) performs an I/O every 1ms and competes for the CPU with long-running job A (black). MLFQ keeps B at the highest priority because B always releases the CPU before its slice expires. Since B is interactive work, MLFQ thus achieves its goal of running interactive work quickly.

    5. MLFQ now shares the CPU fairly among long-running tasks and gives short or interactive work good response times. Is that perfect? No, there are problems:

      A. Starvation. If the system has a lot of interactive work, it will constantly seize the CPU from long-running tasks, which then never get to run

      B. Gaming the scheduler. A process can deliberately give up the CPU just before its time slice runs out by issuing an I/O (such as accessing an irrelevant file), thereby staying at high priority and consuming more than its share of CPU time (a small calculation after this list illustrates the effect)

      C. A program may behave differently at different times: a compute-intensive process may turn interactive at some point, but with the rules so far its priority is never raised again
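      A back-of-the-envelope calculation of problem B (Python; the 10ms slice and the 9ms/1ms split are assumed values for illustration):

```python
# Under rules 4a/4b, demotion is decided per time slice, and giving up the
# CPU resets the check. A job that runs for 9ms of every 10ms slice and then
# yields via a throwaway 1ms I/O is never demoted, yet keeps most of the CPU.
SLICE_MS = 10
run_ms, fake_io_ms = 9, 1      # run just under the slice, then yield
share = run_ms / (run_ms + fake_io_ms)
print(f"gaming job stays at top priority with ~{share:.0%} of the CPU")  # ~90%
```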

  • How to boost priority (attempt 2)?

    1. How to solve the problems in item 5 above?

      Periodically boost the priority of all tasks. The simplest approach is to periodically move all tasks to the highest-priority queue:

      Rule 5: After some period of time S, move all tasks in the system back to the highest-priority queue.

    2. The new rules solve the problem

      (1) Processes no longer starve: once in the highest-priority queue, a job shares the CPU with other high-priority jobs in round-robin fashion and is eventually executed

      (2) If a CPU-intensive job becomes interactive, the scheduler treats it correctly once its priority has been boosted

    3. The source compares two graphs: on the left, without priority boosting, the long-running job starves once two short jobs arrive; on the right, with a boost every 50ms (50ms is just an example), the long job is moved up to the highest priority at least every 50ms and so makes regular progress.

    4. The remaining problems

      (1) How to set the value of S? If S is too large, long-running tasks may starve; if it is too small, interactive work will not get its appropriate share of the CPU

      (2) How to prevent the scheduler from being gamed? Rules 4a and 4b still allow a job to release the CPU just inside its time slice in order to keep its priority

  • Better accounting of CPU time

    1. Better accounting of CPU time: the scheduler should keep track of the total time a process consumes at a given level and demote it to a lower-priority queue as soon as it uses up its allotment, however that time was spent.

    2. Rewrite rules 4a and 4b as a single rule:

      Rule 4: Once a job uses up its time allotment at a given level (no matter how many times it has voluntarily given up the CPU in between), its priority is reduced (it moves down one queue)

    3. The source compares a process that tries to game the scheduler under the old rules 4a and 4b (left) versus the new rule 4 (right). Without the protection of rule 4, the process can issue an I/O just before the end of each time slice and thereby monopolize the CPU. With the protection, its priority steadily decreases regardless of its I/O behavior, and it gains no advantage

  • MLFQ tuning and other issues

    Questions:

    (1) How many queues should there be?

    (2) How long should the time slice be in each queue?

    There are no obvious answers to these questions; only experience with the workload and subsequent tuning of the scheduler leads to a satisfactory balance

    For example, high-priority queues are usually given short time slices so that interactive work switches quickly, while low-priority queues hold CPU-intensive work, for which longer time slices are a better fit

  • Summary of MLFQ rules

    (1) Rule 1: If the priority of A is greater than that of B, run A (and not B)

    (2) Rule 2: If the priority of A equals the priority of B, run A and B in round-robin fashion

    (3) Rule 3: When a task enters the system, it is placed at the highest priority (the topmost queue)

    (4) Rule 4: Once a job uses up its time allotment at a given level (no matter how many times it has voluntarily given up the CPU in between), its priority is reduced (it moves down one queue)

    (5) Rule 5: After some period of time S, move all tasks in the system back to the highest-priority queue. A sketch putting all five rules together follows.
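    A minimal sketch tying the five rules together (Python; the three levels, 10ms allotment, 50ms boost period, and the 1ms tick standing in for the per-queue time slice are all assumed values):

```python
from collections import deque

NUM_LEVELS, ALLOTMENT_MS, BOOST_EVERY_MS = 3, 10, 50

def mlfq(jobs):
    """jobs: list of (name, run_time_ms). Returns the finish time of each job."""
    queues = [deque() for _ in range(NUM_LEVELS)]  # queues[0] = highest priority
    for name, run in jobs:
        queues[0].append([name, run, 0])  # rule 3: [name, remaining, used at level]
    finish, t = {}, 0
    while any(queues):
        if t and t % BOOST_EVERY_MS == 0:          # rule 5: periodic boost
            for level in range(1, NUM_LEVELS):
                while queues[level]:
                    job = queues[level].popleft()
                    job[2] = 0
                    queues[0].append(job)
        level = next(l for l, q in enumerate(queues) if q)  # rules 1 and 2
        job = queues[level].popleft()
        job[1] -= 1                                # run this job for one tick
        job[2] += 1
        t += 1
        if job[1] == 0:
            finish[job[0]] = t                     # job is done
        elif job[2] >= ALLOTMENT_MS and level + 1 < NUM_LEVELS:
            job[2] = 0                             # rule 4: allotment used up,
            queues[level + 1].append(job)          # demote one level
        else:
            queues[level].append(job)              # rule 2: rotate within level
    return finish

print(mlfq([("long", 100), ("short", 10)]))  # {'short': 20, 'long': 110}
```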