Reading notes for Modern Operating Systems and Operating Systems: Three Easy Pieces

1. Turnaround Time?

Turnaround time = Completion time − Arrival time

That is, the time from request to completion for each process

2. First Come, First Served (FCFS) algorithm?

Processes are given the CPU in the order in which they become ready

Advantages: simple to implement, fair

Disadvantages: a short process that arrives just behind a long one must wait a long time, which is bad for the user experience

Three processes A, B, and C arrive simultaneously at t = 0; A needs 100 s of CPU time, while B and C need 10 s each

The order of service is A -> B -> C, so

The turnaround times are calculated as follows:

A: T = 100-0 = 100

B: T = 110-0 = 110

C: T = 120-0 = 120

Then, the average turnaround time = (100 + 110 + 120) / 3 = 110
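The FCFS arithmetic above can be reproduced with a short sketch (the run times of 100 s, 10 s, and 10 s are taken from the example):

```python
# FCFS: processes run to completion in arrival order.
# Run times follow the example: A=100, B=10, C=10, all arriving at t=0.
jobs = [("A", 100), ("B", 10), ("C", 10)]

t = 0
turnaround = {}
for name, runtime in jobs:
    t += runtime              # job finishes at current time + its run time
    turnaround[name] = t - 0  # arrival time is 0 for all three

avg = sum(turnaround.values()) / len(turnaround)
print(turnaround, avg)  # {'A': 100, 'B': 110, 'C': 120} 110.0
```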

3. Shortest Job First (SJF) algorithm?

If run times are known in advance, the process with the shortest run time is served first, non-preemptively

Advantages: minimal average turnaround time when all jobs arrive at the same time

Disadvantages: long processes can be starved by an endless stream of short processes

But let’s look at another example:

A arrives at t = 0 and takes 100 s to execute

B and C arrive at t = 10 and each take 10 s to execute

SJF’s first example had an average turnaround time of 50, but here it rises to 103.33, because B and C must wait behind A even though they are short
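A minimal non-preemptive SJF simulation of this workload, as a sketch (the dictionary layout and tick loop are just for illustration):

```python
# Non-preemptive SJF: at each scheduling point pick the ready job with the
# shortest total run time. Example above: A (100 s) arrives at t=0;
# B and C (10 s each) arrive at t=10.
jobs = {"A": (0, 100), "B": (10, 10), "C": (10, 10)}  # name: (arrival, runtime)

t = 0
done = {}
while len(done) < len(jobs):
    ready = [n for n, (arr, _) in jobs.items() if arr <= t and n not in done]
    if not ready:
        t += 1
        continue
    name = min(ready, key=lambda n: jobs[n][1])  # shortest job among the ready
    t += jobs[name][1]                           # run it to completion
    done[name] = t - jobs[name][0]               # turnaround = finish - arrival

avg = sum(done.values()) / 3
print(done, round(avg, 2))  # {'A': 100, 'B': 100, 'C': 110} 103.33
```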

4. Preemptive Shortest Job First (STCF) algorithm?

Shortest Time-to-completion First

That is, the shortest remaining time takes precedence: the scheduler always runs the process in the ready queue with the least remaining time

Looking at the same example again, we see that unlike SJF

When B and C arrive, the scheduler compares the remaining times of all processes and decides which one runs next

So after A has run for 10 s, B and C arrive with 10 s each against A’s remaining 90 s; the scheduler preempts A, runs B and then C to completion, and only afterwards finishes the rest of A

As you can see, this is a preemptive algorithm
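The same workload under STCF can be simulated tick by tick; this is a sketch, not a real scheduler:

```python
# STCF (preemptive SJF): simulate second by second; always run the ready
# process with the least *remaining* time. Same workload as above:
# A (100 s) at t=0, B and C (10 s each) at t=10.
jobs = {"A": (0, 100), "B": (10, 10), "C": (10, 10)}  # name: (arrival, runtime)
remaining = {n: rt for n, (_, rt) in jobs.items()}

t = 0
finish = {}
while remaining:
    ready = [n for n in remaining if jobs[n][0] <= t]
    name = min(ready, key=lambda n: remaining[n])  # least remaining time wins
    remaining[name] -= 1                           # run one 1 s tick
    t += 1
    if remaining[name] == 0:
        del remaining[name]
        finish[name] = t

turnaround = {n: finish[n] - jobs[n][0] for n in jobs}
avg = sum(turnaround.values()) / 3
print(turnaround, round(avg, 2))  # {'A': 120, 'B': 10, 'C': 20} 50.0
```

B and C preempt A at t = 10 and finish quickly, which is why the average drops back to 50 here.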

5. Response time?

Response time = Time of first run − Arrival time

Response time reflects how interactive execution feels

Clearly, none of the three algorithms above performs well on response time and interactivity

6. Round Robin (RR) algorithm?

Processes are switched periodically: each process is allocated a time slice, and clock interrupts drive the rotation

Advantages: fair, fast response time

Disadvantages: frequent process switching increases system overhead

Let’s look at this example: three processes A, B, and C, each needing 5 s of CPU time, arrive at t = 0

In the RR algorithm, the time slice is set to 1 s (a switch every second)

The average response times of the two algorithms are:

RR: response time = (0 + 1 + 2) / 3 = 1

SJF: response time = (0 + 5 + 10) / 3 = 5

The time slice should be neither too small (context-switch overhead becomes too expensive) nor too large (RR degenerates into FCFS).
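The RR response-time numbers can be reproduced with a toy simulation (three 5 s jobs arriving at t = 0, a 1 s slice, as in the example):

```python
from collections import deque

# Round robin with a 1 s slice over three 5 s jobs arriving at t=0.
run_time = {"A": 5, "B": 5, "C": 5}
queue = deque(run_time)          # ready queue in arrival order: A, B, C
remaining = dict(run_time)

t = 0
first_run = {}
while queue:
    name = queue.popleft()
    first_run.setdefault(name, t)   # record when each job first touches the CPU
    remaining[name] -= 1            # run one 1 s slice
    t += 1
    if remaining[name] > 0:
        queue.append(name)          # unfinished: back to the tail of the queue

rr_response = sum(first_run.values()) / 3   # (0 + 1 + 2) / 3
sjf_response = (0 + 5 + 10) / 3             # SJF runs each job to completion
print(rr_response, sjf_response)  # 1.0 5.0
```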

7. Highest-priority scheduling algorithm?

Select the process with the highest priority to run

Generally, system processes have a higher priority than user processes

Foreground processes have a higher priority than background processes

Operating systems tend to favor I/O-bound processes

Priorities can be static or dynamic. With dynamic priorities (aging), for example, a process running on the CPU has its priority lowered as its run time grows, while a process waiting in the ready queue has its priority raised the longer it waits
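A crude sketch of the aging idea; the one-point-per-tick adjustment is an invented policy for illustration, not from the text:

```python
# Toy aging sketch (the 1-point-per-tick adjustment is an assumption):
# each tick the running process loses priority and the waiting process
# gains priority, so a long waiter eventually overtakes the runner
# and starvation is avoided.
priority = {"running": 10, "waiting": 1}

ticks = 0
while priority["waiting"] <= priority["running"]:
    priority["running"] -= 1
    priority["waiting"] += 1
    ticks += 1

print(ticks, priority)  # 5 {'running': 5, 'waiting': 6}
```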

Advantages: simple to implement, highly flexible, reasonably good on all metrics

Disadvantages: unfair; low-priority processes may starve

[Note]

With preemptive priority scheduling, priority inversion can occur: a high-priority process A and a low-priority process B share a critical resource. B runs first and acquires the lock; while B is inside the critical section, A becomes ready and preempts it. A then blocks trying to enter the critical section, and if other medium-priority processes keep B off the CPU, B never gets to finish and release the lock, so the high-priority A is effectively stalled behind lower-priority work

Solutions: set a priority ceiling, use dynamic priority (priority inheritance, temporarily raising B to A’s priority), or disable interrupts/preemption inside the critical section

8. Multi-level feedback queue (MLFQ)?

A combination of priority and time slice rotation

Multiple ready queues are maintained; the first level has the highest priority, and priority decreases at each level below

Processes in different queues get time slices of different lengths: the first queue has the smallest slice, and the slice grows as the priority drops. (The rationale: rotating CPU-bound processes too quickly adds a lot of switching overhead, so the lower-priority, CPU-bound queues get larger slices to reduce rotation.)

Only when the first queue is empty does the scheduler rotate through the second queue, and so on

Queues are sorted by priority

Queues at all levels are scheduled by time slice rotation

All newly created processes enter the tier 1 queue

If a process uses up its whole time slice and is preempted by the clock interrupt, it is demoted to the next-level queue. (This reflects the operating system’s preference for I/O-bound processes, which need short slices for good response times, while CPU-bound processes want larger slices; a process that keeps exhausting its slice is likely CPU-bound and should be demoted.)

If a process gives up the CPU because it blocks before its slice expires, it returns to the ready queue at the same level when it becomes ready again
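The demotion rules above can be sketched as a toy MLFQ; the three levels and the 1/2/4-tick slices are assumptions for illustration, and blocking and priority boosts are left out:

```python
from collections import deque

# Toy MLFQ: three levels with slices of 1, 2, and 4 ticks (assumed values).
# A job that uses up its slice is demoted; the highest non-empty queue
# is always served first.
SLICES = [1, 2, 4]
queues = [deque(), deque(), deque()]

def add_job(name, run_time):
    # Every newly created job enters the top-level queue.
    queues[0].append([name, run_time])

def schedule():
    order = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        job = queues[level].popleft()
        used = min(SLICES[level], job[1])
        job[1] -= used
        order.append((job[0], level, used))
        if job[1] > 0:  # slice fully used with work left: demote one level
            queues[min(level + 1, 2)].append(job)
    return order

add_job("cpu_bound", 7)
add_job("short", 1)
order = schedule()
print(order)
```

The CPU-bound job drifts down through the levels and ends up with the big slice, while the short job finishes at the top level, which is exactly the bias the rules are meant to produce.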

9. Integrating I/O?

When a process requests I/O, it blocks until the I/O completes

During the I/O the blocked job does not use the CPU at all, so if nothing else runs, efficiency drops

If we use preemption and treat each CPU burst as a separate job, the CPU’s idle time during job A’s I/O can be used to run another job B

Overlapping computation with I/O in this way greatly increases system utilization
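A back-of-the-envelope illustration with made-up numbers: job A alternates 10 ms of CPU with 10 ms of I/O five times, and job B needs 50 ms of pure CPU:

```python
# Job A: five bursts of 10 ms CPU, each followed by 10 ms of I/O (100 ms total).
# Job B: 50 ms of pure CPU. If the CPU sits idle during A's I/O, the whole
# workload takes 100 + 50 = 150 ms; if B runs during A's I/O, B fits exactly
# inside A's five 10 ms idle gaps and everything completes in 100 ms.
a_cpu, a_io, bursts = 10, 10, 5
b_cpu = 50

serial = bursts * (a_cpu + a_io) + b_cpu   # no overlap: B waits for A
overlapped = bursts * (a_cpu + a_io)       # B fills A's I/O gaps (5 * 10 = 50 ms)
print(serial, overlapped)  # 150 100
```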