  • The function of the operating system: to improve the efficiency of the computer system through resource management, and to improve the human-computer interface so as to provide users with a friendly working environment.
  • Characteristics of operating system: concurrency, sharing, virtuality, uncertainty.
  • Operating system functions: process management, storage management, file management, device management, job management.
  • Classification of operating systems: batch operating system, time-sharing operating system (allocates CPU time slices to users in rotation), real-time operating system (rapid response), network operating system, distributed operating system (a system of physically dispersed, interconnected computers), microcomputer operating system (e.g., Windows), embedded operating system.
  • The initialization of an embedded system proceeds bottom-up, from hardware to software, in this order:
  1. Chip level initialization
  2. Board level initialization
  3. System initialization
  • Chip-level initialization is the initialization of the microprocessor; board-level is the initialization of the other hardware devices; system-level is the initialization of the software and operating system.
  • The basic boot process of a computer is BIOS → master boot record → operating system.

I. Process Management

1.1 Process Composition and States

  • Process composition: the process control block (PCB, the unique identifier of a process), the program (describes what the process is to do), and the data (stores the data the process needs in order to execute).
  • The basic states of a process form the three-state diagram shown in the lower-left figure: there are only three states as long as the system runs without human intervention. The five-state diagram in the lower-right figure adds two more states, static ready and static blocked, which a process enters only through human operation. Active ready is the ordinary ready state, and active blocked is the ordinary waiting state.

The figure above shows that after human intervention a process is suspended and enters a static state; human activation is then required to restore it to the active state, after which the behavior of the three-state diagram applies again.

1.2 Precedence Graph and Process Resource Graph

1.2.1 Precedence Graph

A precedence graph indicates which tasks can be executed in parallel and which tasks have a sequential relationship, as shown in the following figure:

1.2.2 Process Resource Graph

A process resource graph represents the allocation and request relationships between processes and resources, as shown in the following figure:

  • In the figure above, R1 pointing to P1 indicates that one unit of R1 has been allocated to P1, and P1 pointing to R2 indicates that P1 is requesting one unit of R2 in order to execute.

  • Blocked node: A process cannot obtain required resources because all requested resources have been allocated. The process is blocked and cannot continue. See P2 in the figure above.

  • Non-blocking node: Resources requested by a process are still available and can be allocated to the process to continue running. For example, P1 and P3 in the figure above.

  • Deadlock occurs when all processes in a process resource graph are blocking nodes.

  • Simplifying a process resource graph: first count how many units of each resource have been allocated and how many the system still has free, and find a process that is not blocked (all of its requests can be satisfied by the free resources). Remove all edges of that non-blocked process, turning it into an isolated node, and return the resources it held to the system, so the system's free resources increase. Then look again at which of the remaining processes are not blocked and turn them into isolated nodes one by one. If eventually all resources and processes become isolated nodes, there is no deadlock; if at some point no non-blocked process can be found, the remaining processes are deadlocked. A minimal sketch of this reduction follows.
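Below is a minimal sketch of this reduction in Python; the process and resource names are hypothetical, not taken from the figure:

```python
# Minimal sketch of process-resource-graph reduction (deadlock detection).
# "available" is what the system still holds free; "alloc"/"request" are per process.
def is_deadlocked(available, alloc, request):
    finished = {p: False for p in alloc}
    changed = True
    while changed:
        changed = False
        for p in alloc:
            # A non-blocked node: every pending request fits in what is free.
            if not finished[p] and all(request[p].get(r, 0) <= available.get(r, 0)
                                       for r in request[p]):
                # Turn p into an isolated point: recycle its resources.
                for r, n in alloc[p].items():
                    available[r] = available.get(r, 0) + n
                finished[p] = True
                changed = True
    return not all(finished.values())   # processes left unreduced => deadlock

# Example: R1 and R2 each have 2 units, all currently allocated.
available = {"R1": 0, "R2": 0}
alloc = {"P1": {"R1": 1, "R2": 1}, "P2": {"R1": 1}, "P3": {"R2": 1}}
request = {"P1": {}, "P2": {"R2": 1}, "P3": {"R1": 1}}
print(is_deadlocked(available, alloc, request))  # False: P1 can finish first
```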

1.3 Synchronization and Mutual Exclusion Between Processes

  • Mutual exclusion and synchronization are not antonyms. Mutual exclusion means that a resource can be used by only one task at a time: the resource must be locked while in use and unlocked after use so that other tasks can use it.

  • Synchronization means that two cooperating tasks execute concurrently but at different speeds, so their speeds must be matched; this holds regardless of whether resources are exclusive or shared.

1.4 PV Operations and Semaphores

1.4.1 Basic Concepts

  • Critical resource: a resource that can be used by only one process at a time and can be used by other processes once released.
  • Critical section: The section of code in each process that accesses critical resources.
  • Semaphore: a special variable.
    1. Mutual-exclusion semaphore: controls access to a critical resource; while one process uses the resource, other processes cannot access it. Its initial value is 1.
    2. Synchronization semaphore: controls access to shared resources; its initial value is the number of shared resources.

1.4.2 PV Operations

PV operations were invented by the Dutch computer scientist Dijkstra and embody an algorithmic idea: P comes from the Dutch passeren (to pass, here: to apply for a resource), and V from the Dutch vrijgeven (to release).

  • PV operations solve both mutual exclusion and synchronization problems. Consider the P and V operations separately:
  • P operation: apply for a resource and set S = S − 1. If S < 0, the process is blocked and placed in the waiting queue; if S >= 0, execution continues. The possible outcomes are therefore to wait or to continue (to pass).
  • V operation: release a resource and set S = S + 1. If S <= 0, a signal is sent to wake up one process in the waiting queue. The possible outcome is therefore to wake a waiter (a sketch of P and V follows this list).
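To make the boundary conditions (S < 0, S <= 0) concrete, here is a minimal teaching-model sketch of P and V in Python; it follows the definitions above rather than any production semaphore API:

```python
import threading

# A teaching-model semaphore following the P/V definitions above:
# P: S = S - 1; if S < 0 the caller waits.  V: S = S + 1; if S <= 0 wake one waiter.
class Semaphore:
    def __init__(self, s):
        self.s = s
        self.cond = threading.Condition()

    def P(self):
        with self.cond:
            self.s -= 1
            if self.s < 0:          # no resource left: join the waiting queue
                self.cond.wait()

    def V(self):
        with self.cond:
            self.s += 1
            if self.s <= 0:         # someone is waiting: wake one process up
                self.cond.notify()

mutex = Semaphore(1)                # mutual-exclusion semaphore, initial value 1

def critical():
    mutex.P()                       # apply for the resource (may wait)
    # ... critical section: only one thread at a time runs here ...
    mutex.V()                       # release the resource (may wake a waiter)

threads = [threading.Thread(target=critical) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
```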

1.5 Process Scheduling

Process scheduling determines how the CPU is allocated when a higher-priority process arrives. There are two types: preemptive and non-preemptive. Preemptive means that when a higher-priority process arrives, the CPU is forcibly taken from the running process and allocated to the higher-priority process. Non-preemptive means that the higher-priority process must wait until the current process releases the CPU of its own accord.

Three-level scheduling: high-level scheduling (also called long-term or job scheduling; it decides which jobs may be admitted into the system), medium-level scheduling (also called swapping scheduling; it decides which ready processes may be swapped into memory), and low-level scheduling (also called process scheduling; it decides which ready process in memory gets the CPU).

1.5.1 Scheduling Algorithms

  • First come, first served (FCFS): the process that arrives first is allocated the CPU first; used for macro-level (job) scheduling.
  • Time-slice round robin: each process is allocated a CPU time slice of equal size, and the processes use the CPU in turn; it is fair and used for micro-level (process) scheduling (see the sketch after this list).
  • Priority scheduling: each process has a priority, and the CPU is allocated to the process with the highest priority first.
  • Multilevel feedback queue scheduling: combines time-slice round robin with priority scheduling. Multiple ready queues 1, 2, 3, …, n are set up, each with a different priority and a different time-slice length. A new process first enters the tail of queue 1 and, following the FCFS principle, runs for queue 1's time slice; if it does not finish, it is moved to the tail of queue 2, and so on.
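Below is a minimal sketch of time-slice round robin, the simplest of these policies; the process names and CPU burst times are hypothetical:

```python
from collections import deque

# Minimal round-robin simulation: each process gets an equal time slice in turn.
def round_robin(bursts, quantum):
    queue = deque(bursts.items())       # (name, remaining time), FCFS order
    order = []
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        order.append((name, run))
        if remaining - run > 0:         # unfinished: back to the end of the queue
            queue.append((name, remaining - run))
    return order

# Hypothetical processes with CPU bursts of 5, 3 and 2 time units, quantum 2.
print(round_robin({"P1": 5, "P2": 3, "P3": 2}, quantum=2))
# [('P1', 2), ('P2', 2), ('P3', 2), ('P1', 2), ('P2', 1), ('P1', 1)]
```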

1.6 Deadlocks

A process is deadlocked when it is waiting for an event that will never happen. The system is in deadlock when multiple processes are in such a state.

There are four necessary conditions for a deadlock to occur:

  1. Mutual exclusion: resources are exclusive;
  2. Hold and wait: each process holds resources while waiting for others;
  3. No preemption: the system cannot forcibly take resources away from a process;
  4. Circular wait: the process resource graph contains a cycle.

Deadlock prevention: A policy is adopted to restrict resource requests of concurrent processes to destroy one of the four conditions for deadlock generation, so that the system does not meet the conditions for deadlock at any time.

1.6.1 Deadlock Avoidance

Deadlock avoidance generally uses the banker's algorithm: before allocating resources, compute whether the allocation leaves the system in a state from which every process can still finish, and allocate only if it does; otherwise do not allocate. It is like lending money: the banker lends only after confirming that the borrower can repay, and by thinking ahead in this way, deadlock is avoided.
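A minimal sketch of the banker's safety check in Python; a single resource type is used for brevity, and the Max/Allocation numbers are hypothetical:

```python
# Banker's algorithm safety check (single resource type for brevity).
# Grant a request only if the resulting state is safe, i.e. some order exists
# in which every process can obtain its maximum need and then finish.
def is_safe(available, max_need, alloc):
    need = [m - a for m, a in zip(max_need, alloc)]
    finished = [False] * len(alloc)
    while True:
        for i, done in enumerate(finished):
            if not done and need[i] <= available:
                available += alloc[i]    # process i finishes, returns its resources
                finished[i] = True
                break
        else:
            return all(finished)         # no runnable process found: safe iff all done

# Hypothetical state: 3 free units; three processes with max needs 7, 3, 9
# currently holding 3, 2, 2 units respectively.
print(is_safe(available=3, max_need=[7, 3, 9], alloc=[3, 2, 2]))  # True
```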

  • Deadlock detection: deadlocks are allowed to occur, but the system periodically runs a deadlock detection program; if a deadlock is detected, the system tries to remove it.
  • Deadlock release: methods for breaking a deadlock, such as forcibly depriving processes of resources or aborting processes.
  • Deadlock resource calculation: if the system has n processes and each process needs R resources, the maximum number of resources with which deadlock can still occur is n × (R − 1), and the minimum number that guarantees no deadlock is n × (R − 1) + 1. For example, with 3 processes each needing 2 resources, deadlock is possible with 3 resources (each process holds one and waits), but 3 × (2 − 1) + 1 = 4 resources guarantee that at least one process can obtain both of its resources and finish.

1.7 Threads

Traditional processes have two attributes:

  • an independent unit that owns resources;
  • a basic unit that can be independently scheduled and dispatched.

After threads are introduced, the thread becomes the smallest unit of independent scheduling, while the process remains the smallest unit of resource ownership. Threads share the process's data, global variables, code, files, and other resources, but each thread keeps its own private resources, such as its stack and identifying data like the stack pointer.
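A minimal Python sketch of what the threads of one process share (a global variable) versus what stays private (each thread's locals, which live on its own stack):

```python
import threading

counter = 0                     # process-global data: shared by all threads
lock = threading.Lock()         # shared data needs mutual exclusion

def worker(n):
    local_sum = n * 10          # a local variable on this thread's own stack
    global counter
    with lock:
        counter += local_sum    # every thread updates the same global

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                  # 60 = (0 + 1 + 2 + 3) * 10
```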

II. Storage Management

Memory hierarchy: registers – Cache – main memory – external storage. Address relocation: the process of converting a logical address into the actual physical address in main memory. It can be static relocation (the conversion is completed as the program is loaded into main memory) or dynamic relocation (the conversion is performed during execution).

2.1 Partition Storage Management

Partitioned storage organization allocates, in one contiguous piece, all the memory a process needs to run before executing it. There are three partitioning methods:

  1. Fixed partitioning: a static method that divides main memory into several fixed partitions, into which the jobs to be run are loaded. Because the partition sizes are fixed and rarely match the size a job needs, internal fragmentation arises.
  2. Variable partitioning: a dynamic method that carves out a partition of exactly the size a job needs when the job is loaded. There is no internal fragmentation, but the method easily cuts main memory into many small pieces, producing external fragmentation. The allocation algorithms for variable partitioning are as follows:

There are several algorithms by which the system allocates memory, as shown in the figure below. Starting from the memory state before allocation, 9K of space must be allocated; the results of the different algorithms are as follows:

  • First fit: scan memory in address order from the beginning, find the first free block >= 9K, cut 9K out of it and allocate that to the process.
  • Best fit: consider the free blocks in order of increasing size and find the first free block >= 9K; this finds the free block closest in size to the requested 9K.
  • Worst fit: the opposite of best fit; the 9K is allocated from the largest free block in memory, to avoid leaving behind too many tiny free blocks.
  • Next fit (circular first fit): scan in address order and find the first free block >= 9K, but each new search continues from where the previous one stopped rather than starting from the beginning; this is what distinguishes it from first fit. (A sketch of first fit and best fit follows this section.)
  3. Relocatable partitioning: solves the fragmentation problem by moving all allocated areas together into one contiguous region, so that the scattered small external fragments merge into a larger partition that can satisfy a job's requirements. Compaction is performed only when a job's space request cannot otherwise be satisfied.
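Below is a minimal sketch of first fit and best fit in Python; the free-block sizes are hypothetical:

```python
# Minimal sketch of first fit vs. best fit over hypothetical free blocks (in K).
def first_fit(free_blocks, size):
    for i, block in enumerate(free_blocks):
        if block >= size:
            free_blocks[i] -= size       # cut the request out of this block
            return i
    return None                          # no block is large enough

def best_fit(free_blocks, size):
    candidates = [(block, i) for i, block in enumerate(free_blocks) if block >= size]
    if not candidates:
        return None
    _, i = min(candidates)               # smallest block that still fits
    free_blocks[i] -= size
    return i

# Free blocks of 25K, 10K, 30K, 12K; allocate 9K as in the example above.
print(first_fit([25, 10, 30, 12], 9))   # 0: first block scanned that is >= 9K
print(best_fit([25, 10, 30, 12], 9))    # 1: the 10K block is the closest fit
```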

2.2 Page Storage Management

With partitioned storage the process is loaded as a whole, which causes a problem: when the memory a process needs exceeds the memory the system has, the whole process cannot be loaded and therefore cannot run. Paged and segmented storage organizations solve this problem; paged storage was proposed on the basis of variable partitioning.

First, the paged storage organization: the process space is divided into pages, say 4K each, and the system's physical space is likewise divided into 4K physical blocks. Each time, only the logical pages that need to run are loaded into physical blocks, and other pages are loaded when they in turn need to run. Several processes can thus run at once, and a process's whole logical space need not be loaded into physical memory, which solves the problem of running processes whose space is larger than memory.

As shown in the figure below, a logical address is divided into a page number and an in-page address; the in-page address is the physical offset. Page numbers and physical block numbers do not correspond in order, so the page table must be queried to obtain the physical block number for a page number; the physical block number combined with the offset then gives the real physical address at run time.

  • Advantages: high utilization, little fragmentation, simple allocation and management.
  • Disadvantages: increased system overhead; may produce thrashing (jitter).

2.2.1 Address Representation and translation

  • Address composition: page number + in-page offset; the page number occupies the high-order bits and the offset the low-order bits.
  • Physical address: physical block number + in-page offset.
  • Logical address: page number + in-page offset.

The in-page offset of a physical address is identical to the in-page offset of the logical address; only the mapping from page number to physical block number is needed. First determine how many bits the offset occupies in order to extract the page number, then look up the corresponding physical block number in the page table. A sketch of this translation follows.
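Below is a minimal sketch of the translation in Python, assuming 4K pages (so the low 12 bits are the in-page offset) and a hypothetical page table:

```python
PAGE_SIZE = 4096                            # 4K pages: offset is the low 12 bits

# Hypothetical page table: logical page number -> physical block number.
page_table = {0: 5, 1: 2, 2: 7}

def translate(logical_addr):
    page = logical_addr // PAGE_SIZE        # high-order bits: page number
    offset = logical_addr % PAGE_SIZE       # low-order bits: unchanged offset
    block = page_table[page]                # page table lookup
    return block * PAGE_SIZE + offset       # physical block + same offset

# Logical address 0x1A3C sits in page 1 (mapped to block 2), offset 0xA3C.
print(hex(translate(0x1A3C)))               # 0x2a3c
```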

2.2.2 Page Replacement Algorithms

Sometimes a process's space is divided into, say, 100 pages while the system has only 10 physical blocks, so not all pages can be allocated at once. Pages must be allocated as execution requires them and evicted according to a replacement algorithm, so that all 100 pages can pass through the physical blocks in execution order.

A page fault means that the page to be executed is not in a physical block of memory and must be loaded from external storage, which lengthens execution time; the more page faults, the lower the system's efficiency. The page replacement algorithms are as follows (a small FIFO/LRU simulation follows the list):

  • Optimal (OPT): a theoretical algorithm that cannot be implemented; it is computed after the fact, as the best possible result, and is used to measure how far other algorithms fall short. Its principle is to replace the page that will not be accessed for the longest time in the future, which yields the fewest possible page faults.
  • First in, first out (FIFO): the page that was loaded into memory earliest is replaced first. It can exhibit jitter (Belady's anomaly): allocating more physical blocks can paradoxically produce more page faults (i.e., lower efficiency).
  • Least recently used (LRU): the page least recently used during execution is replaced. By the principle of locality this is efficient and does not exhibit Belady's anomaly; it needs counters to implement, though not as many as LFU.
  • Eviction principle: first evict pages that have not been recently accessed, then pages that have not been recently modified.
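Below is a minimal sketch in Python comparing FIFO and LRU page-fault counts on a classic reference string, with 3 physical blocks:

```python
def fifo_faults(refs, frames):
    mem, faults = [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)              # evict the page loaded earliest
            mem.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = [], 0                 # mem is kept ordered: front = least recent
    for p in refs:
        if p in mem:
            mem.remove(p)               # hit: refresh, p becomes most recent
        else:
            faults += 1
            if len(mem) == frames:
                mem.pop(0)              # evict the least recently used page
        mem.append(p)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]        # classic reference string
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # 9 10
```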

2.2.3 Fast Table (TLB)

The fast table is a small-capacity associative memory built from high-speed memory. It is accessed by content, is very fast, and the hardware guarantees parallel search by content; it generally stores the page numbers of the few most frequently accessed active pages.

A fast table keeps the page table in the Cache; a slow table keeps the page table in main memory. With a slow table, fetching data requires two memory accesses (one to read the page table, one to read the data), whereas with a fast table the page-table lookup hits the Cache and only one memory access is needed, so it is faster.

2.3 Segmented Storage Management

In segmented storage, the process space is divided into segments by logical function; each segment has a segment number and an in-segment address. Unlike paged storage, the segments differ in size, because segmentation follows complete logical units. The segment table therefore differs from the page table: a page-table entry maps a page number directly to a physical block number, while a segment-table entry, as shown below, has two fields, segment length and base address, which together determine the position of a logical segment in physical memory.

Paging divides by physical space, and every page is the same size; segmentation divides by logical space, and each segment is a complete function, which makes sharing easy, but segments differ in size.

2.3.1 Address Representation

(Segment number, in-segment offset): the in-segment offset must not exceed the segment length recorded for that segment number; otherwise an out-of-bounds error occurs. The real memory address corresponding to this address is: the base address recorded for the segment number + the in-segment offset. A sketch of this check follows.
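Below is a minimal sketch of the check and translation in Python, with a hypothetical segment table of (base address, segment length) entries:

```python
# Hypothetical segment table: segment number -> (base address, segment length).
segment_table = {0: (1000, 500), 1: (3000, 200)}

def translate(seg, offset):
    base, length = segment_table[seg]
    if offset >= length:                 # offset must be within the segment
        raise IndexError("out-of-bounds: offset exceeds segment length")
    return base + offset                 # real address = base + in-segment offset

print(translate(1, 150))   # 3150
# translate(1, 250) would raise: 250 >= segment 1's length of 200
```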

2.4 Segment-page Storage Management

The process space is first divided into segments, and each segment is then paged. The schematic diagram and the advantages and disadvantages are as follows: