Chapter 1 Computer System Overview

1.1 Basic Concepts of the Operating System

1.1.1 Operating System Concepts

Definition of an operating system: it manages all the hardware resources of the computer system, as well as its software and data resources; it controls the execution of programs, improves the human-machine interface, and provides support for other application software, so that all resources of the computer system are used as fully as possible and users are given a convenient, effective, and friendly service interface.

1.1.2 Functions of the Operating System

1.1.3 Features of the operating system

Concurrency, sharing, virtualization, and asynchrony

1.2 Development and classification of operating systems

1.3 Operating mechanism and architecture of the operating system

1.3.1 Operating Mechanism and Architecture of the Operating System (Monolithic Kernel vs. Microkernel)

Monolithic (large) kernel: fast, but hard to maintain (e.g., Linux)

Microkernel (small kernel): slower, but with clearly separated modules (less widely used)

1.3.2 Interrupts and Exceptions (internal vs. external interrupts; the interrupt handling process)

Interrupt handling procedure (a conceptual sketch in Java follows the list):

  • Disable interrupts: the CPU responds to the interrupt with further interrupts disabled
  • Save the breakpoint: save the contents of the PC; this is done by the implicit interrupt instruction, so that the interrupted program knows where to resume when the interrupt returns
  • Locate the interrupt service routine: fetch the entry address of the interrupt service routine and load it into the PC
  • Save the context: mainly save the program status word register and the contents of some general-purpose registers
  • Enable interrupts: allow higher-priority interrupts to be responded to
  • Execute the interrupt service routine
  • Disable interrupts: ensure that restoring the context and the interrupt mask word is not itself interrupted
  • Restore the context and the mask word
  • Enable interrupts: return to the user program that was originally interrupted
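To make the sequence concrete, here is a purely conceptual Java sketch of the save/restore discipline; the CpuContext record, the register array, and the interruptsEnabled flag are illustrative assumptions, not a real hardware or OS interface.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Conceptual simulation of the interrupt handling steps (not real hardware behavior).
public class InterruptDemo {
    // Hypothetical CPU context: program counter plus a few general-purpose registers.
    record CpuContext(int pc, int[] registers) {}

    private static final Deque<CpuContext> savedContexts = new ArrayDeque<>();
    private static boolean interruptsEnabled = true;

    static void handleInterrupt(CpuContext current, Runnable serviceRoutine) {
        interruptsEnabled = false;                 // 1. disable interrupts
        savedContexts.push(current);               // 2./4. save breakpoint (PC) and context
        interruptsEnabled = true;                  // 5. allow higher-priority interrupts
        serviceRoutine.run();                      // 6. execute the interrupt service routine
        interruptsEnabled = false;                 // 7. disable interrupts before restoring
        CpuContext restored = savedContexts.pop(); // 8. restore context and mask word
        interruptsEnabled = true;                  // 9. enable interrupts, return to user program
        System.out.println("Resumed at PC = " + restored.pc());
    }

    public static void main(String[] args) {
        handleInterrupt(new CpuContext(42, new int[]{1, 2, 3}),
                        () -> System.out.println("servicing device interrupt"));
    }
}
```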

1.3.3 System Calls (execution process, trap instructions, library functions vs. system calls)

1. System call knowledge framework diagram

2. The difference between system calls and library functions

3. System call execution process

Chapter 2 Process Management

2.1 Processes and Threads

2.1.1 Definition, characteristics, composition and organization of the process

I. The definition of a process

(1) Concept of program

(2) The concept of process

(3) Definition of process

II. Process characteristics

III. Process composition

The most important component of a process is the Process Control Block (PCB).

A brief introduction to the PCB:

  • The PCB records all the information the operating system needs to describe the current state of the process and to control its execution.

  • The role of the PCB is to turn a program (together with its data), which cannot run independently in a multiprogramming environment, into a basic unit that can run independently: a process that can execute concurrently with other processes.

  • In other words, the OS controls and manages concurrent processes through their PCBs.

  • For example, when the OS wants to dispatch a process for execution, it checks the process's current state and priority in its PCB; once the process is selected, the OS restores its execution context from the processor state information saved in the PCB, and locates its program and data from the memory start addresses recorded in the PCB.

  • During execution, when a process needs to synchronize or communicate with other processes, or to access files, it also does so via the PCB.

  • When a process is suspended for some reason, the processor context at the point of interruption must be saved in its PCB.

  • It follows that throughout the entire life of a process, the system controls it through its PCB; the system is aware of the process's existence only through its PCB.

  • Therefore, the PCB is the sole indication that a process exists (a minimal sketch of such a structure follows).
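A minimal sketch, in Java, of the kind of information a PCB holds; the field names and the ProcessState enum are illustrative assumptions rather than the layout of any real operating system.

```java
// Illustrative Process Control Block: the fields an OS might keep per process.
public class ProcessControlBlock {
    enum ProcessState { NEW, READY, RUNNING, BLOCKED, TERMINATED }

    int pid;                     // process identifier
    ProcessState state;          // current state, consulted by the scheduler
    int priority;                // scheduling priority
    long programCounter;         // saved PC, restored when the process is dispatched
    long[] registers;            // saved general-purpose registers
    long codeStartAddress;       // where the program's code lives in memory
    long dataStartAddress;       // where the program's data lives in memory
    java.util.List<Integer> openFiles = new java.util.ArrayList<>(); // resources in use

    ProcessControlBlock(int pid, int priority) {
        this.pid = pid;
        this.priority = priority;
        this.state = ProcessState.NEW;
    }
}
```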

IV. Organization of processes

(1) Linked organization

(2) Indexed organization

2.1.2 Process State Transitions

2.1.3 Process Communication

2.1.4 Threads and multithreading models

1. Definition of threads

2. The difference between threads and processes

Threads are the basic unit of CPU scheduling, while processes are the basic unit of resource allocation (see the example below).
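A small Java example of this distinction: the two threads below are scheduled independently, but they share the resources (here, a single counter field) of the one process that owns them. The class and field names are illustrative.

```java
// Two threads of one process share its memory; the process owns the resource.
public class SharedCounterDemo {
    private static int counter = 0;                 // resource owned by the process
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                synchronized (lock) { counter++; }  // both threads touch the same field
            }
        };
        Thread t1 = new Thread(work);               // threads: units of CPU scheduling
        Thread t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println("counter = " + counter); // 200000: one address space, shared data
    }
}
```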

3. Thread implementation schemes

(1) User-level threads

(2) Kernel-level threads

(3) A combination of the two

4. Multithreading models

Based on the user-level and kernel-level thread implementations above, multithreading can be divided into the following models:

(1) Many-to-one model

(2) One-to-one model

(3) Many-to-many model

The JVM’s threading model is one-to-one

2.2 Processor scheduling

2.2.1 Concepts and Levels of Scheduling

1. The concept

Processor scheduling: selecting a process from the ready queue according to a certain algorithm and allocating the CPU to it.

2. Three kinds of scheduling

(1) High-level (job) scheduling (external storage to memory)

(2) Intermediate scheduling (external storage to memory)

The difference between high-level and intermediate scheduling: at high-level scheduling time the process has not yet been created (no PCB has been allocated), whereas intermediate scheduling operates on processes that already have PCBs.

(3) Low-level (process) scheduling (memory to CPU)

The seven-state model

The five-state process model becomes a seven-state model with the addition of the suspended states, in which temporarily unneeded processes are swapped out to external storage.

2.2.2 Processor scheduling timing

2.2.3 CPU Scheduling Algorithm

  • First Come, First Served (FCFS)
  • Shortest Job First (SJF)
  • Highest Response Ratio Next (HRRN)
  • Round Robin (time slice; a minimal sketch follows this list)
  • Priority scheduling
  • Multilevel feedback queue
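The minimal sketch referenced above: a round-robin (time-slice) scheduler that repeatedly dispatches the head of the ready queue for a fixed quantum and re-queues it if it still needs CPU time. The Proc class and the quantum value are assumptions for illustration.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Minimal round-robin scheduling simulation over a ready queue.
public class RoundRobinDemo {
    // Hypothetical process descriptor: name plus remaining CPU time needed.
    static final class Proc {
        final String name; int remaining;
        Proc(String name, int remaining) { this.name = name; this.remaining = remaining; }
    }

    public static void main(String[] args) {
        final int quantum = 2;                       // assumed time slice
        Queue<Proc> readyQueue = new ArrayDeque<>();
        readyQueue.add(new Proc("P1", 5));
        readyQueue.add(new Proc("P2", 3));
        readyQueue.add(new Proc("P3", 1));

        int time = 0;
        while (!readyQueue.isEmpty()) {
            Proc p = readyQueue.poll();              // dispatch the head of the ready queue
            int run = Math.min(quantum, p.remaining);
            time += run;
            p.remaining -= run;
            System.out.printf("t=%d: ran %s for %d%n", time, p.name, run);
            if (p.remaining > 0) readyQueue.add(p);  // not finished: back to the ready queue
        }
    }
}
```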

2.3 Process Synchronization and Mutual Exclusion

2.3.1 Concepts of process synchronization and mutual exclusion

1. Process synchronization

  • Synchronization is also called a direct constraint relationship.
  • In a multiprogramming environment, processes execute concurrently, and different processes constrain one another in different ways. The concept of process synchronization is introduced to coordinate these constraints, for example waiting for one another and passing information. Process synchronization addresses the asynchrony of processes.
  • B cannot start until A has finished (illustrated in the sketch below).

The difference between synchronous and asynchronous in network programming: synchronous means doing the work yourself and waiting for it; asynchronous can be understood as handing the work to someone else and being notified when it is done (a callback).
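A minimal Java sketch of the "B cannot start until A has finished" relation, using a CountDownLatch as the signaling mechanism; the thread roles A and B are taken from the bullet above.

```java
import java.util.concurrent.CountDownLatch;

// Synchronization: B waits until A has finished its step.
public class SyncDemo {
    public static void main(String[] args) {
        CountDownLatch aDone = new CountDownLatch(1);

        Thread a = new Thread(() -> {
            System.out.println("A: doing its work");
            aDone.countDown();                  // signal: A is finished
        });
        Thread b = new Thread(() -> {
            try {
                aDone.await();                  // B blocks here until A signals
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            System.out.println("B: running after A");
        });
        b.start();  // even if B starts first, it still waits for A
        a.start();
    }
}
```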

2. Process mutual exclusion

  • Mutual exclusion is also called an indirect constraint relationship. It means that while one process is accessing a critical resource, any other process that wants to access that resource must wait; it may access the critical resource only after the first process releases it (see the sketch below).
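A minimal Java sketch of mutual exclusion using a ReentrantLock; the shared counter stands in for a critical resource and is only an illustration.

```java
import java.util.concurrent.locks.ReentrantLock;

// Mutual exclusion: only one thread at a time may use the critical resource.
public class MutexDemo {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int criticalResource = 0;         // stand-in for a critical resource

    static void useResource(String who) {
        lock.lock();                                  // other threads now must wait
        try {
            criticalResource++;
            System.out.println(who + " is inside the critical section");
        } finally {
            lock.unlock();                            // release: the next waiter may enter
        }
    }

    public static void main(String[] args) {
        new Thread(() -> useResource("P1")).start();
        new Thread(() -> useResource("P2")).start();
    }
}
```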

2.3.2 Software implementations of critical-section mutual exclusion

1. Single-flag method

P0 must enter first before P1 can enter (strict alternation).

2. Double-flag check-first method

Before entering, a process checks whether any other process wants to enter the critical section; if not, it sets its own flag to true.

3. Double-flag check-last method

4. Peterson's algorithm
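A sketch of Peterson's algorithm for two threads in Java; AtomicBoolean and volatile are used so that writes to the flags and the turn variable are visible across threads (plain fields would not be safe on the JVM).

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Peterson's algorithm for two threads (ids 0 and 1): intention flags plus a turn variable.
public class Peterson {
    private final AtomicBoolean[] flag = { new AtomicBoolean(), new AtomicBoolean() };
    private volatile int turn;                       // whose turn it is to yield

    public void lock(int i) {
        int other = 1 - i;
        flag[i].set(true);                           // announce intention to enter
        turn = other;                                // politely let the other go first
        // spin while the other also wants in AND it is the other's turn
        while (flag[other].get() && turn == other) {
            Thread.onSpinWait();
        }
    }

    public void unlock(int i) {
        flag[i].set(false);                          // leave the critical section
    }
}
```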

2.3.3 Hardware implementations of critical-section mutual exclusion

1. Interrupt masking method

2. TestAndSet instruction

  • The internal logic of the TSL (TestAndSet) instruction is as follows (a sketch follows below):
  • If lock is currently false, critical resource A is free, so the process may access it; it sets lock = true to tell other processes that resource A is in use and that they must wait.
  • If lock is true, the critical resource is in use, so the process must wait; setting lock = true again changes nothing. The key point is that when lock is false the lock is acquired immediately, because checking and setting the lock happen atomically inside a single TSL instruction.
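The sketch referenced above: AtomicBoolean.getAndSet behaves like a TSL instruction, reading the old value and writing true in one atomic step. The TslLock class name is an illustrative assumption.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// A spin lock built on an atomic "test and set": getAndSet(true) returns the old value
// and writes true in a single atomic step, mirroring the TSL instruction's logic.
public class TslLock {
    private final AtomicBoolean lock = new AtomicBoolean(false); // false = resource free

    public void acquire() {
        // If the old value was false, we just locked it and may enter.
        // If it was true, someone else holds it; setting true again changes nothing, so spin.
        while (lock.getAndSet(true)) {
            Thread.onSpinWait();
        }
    }

    public void release() {
        lock.set(false);   // free the resource so another spinning thread can grab it
    }
}
```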

3. Swap instruction

2.3.4 Semaphore mechanism (integer semaphores, record semaphores, the P and V operations)

The key point: an integer semaphore must busy-wait, repeatedly testing the value (think of a CAS-style spin), whereas a record semaphore can be understood like Java's synchronized: it keeps a waiting queue, so blocked processes can sleep and later be woken up (a sketch of the record form follows below).

1. Integer semaphore

2. Record semaphore
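A minimal sketch of the record-style semaphore in Java, using a monitor's wait set as the waiting queue; in practice java.util.concurrent.Semaphore already provides this, and the class below is only an illustration.

```java
// Record-style semaphore: a counter plus a wait queue (here, the monitor's wait set).
public class RecordSemaphore {
    private int value;                       // remaining amount of the resource

    public RecordSemaphore(int initial) { this.value = initial; }

    // P (wait) operation: decrement; block if no resource is left.
    public synchronized void P() throws InterruptedException {
        while (value == 0) {
            wait();                          // join the waiting queue instead of spinning
        }
        value--;
    }

    // V (signal) operation: increment and wake up one waiting process.
    public synchronized void V() {
        value++;
        notify();                            // wake a waiter, if any
    }
}
```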

2.3.5 Monitors

A monitor is essentially what a lock is in Java; it exists to avoid the tedious, error-prone coding that comes with using semaphores directly (a sketch follows).
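A minimal sketch of the monitor idea in Java: the synchronized methods plus wait/notifyAll bundle the shared data with its mutual exclusion and condition waiting, which is exactly the bookkeeping that raw semaphores make error-prone. The bounded buffer is a standard textbook example used purely for illustration.

```java
// A bounded buffer written as a monitor: mutual exclusion plus condition synchronization
// are handled by the language (synchronized / wait / notifyAll), not by raw semaphores.
public class BoundedBuffer {
    private final int[] items = new int[4];
    private int count = 0, in = 0, out = 0;

    public synchronized void put(int x) throws InterruptedException {
        while (count == items.length) wait();   // buffer full: wait inside the monitor
        items[in] = x;
        in = (in + 1) % items.length;
        count++;
        notifyAll();                            // wake consumers waiting for data
    }

    public synchronized int take() throws InterruptedException {
        while (count == 0) wait();              // buffer empty: wait inside the monitor
        int x = items[out];
        out = (out + 1) % items.length;
        count--;
        notifyAll();                            // wake producers waiting for space
        return x;
    }
}
```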

2.4 Deadlock

1. The difference between deadlock, starvation, and an infinite loop

2. The four necessary conditions for deadlock (frequently asked)

  • Mutual exclusion (exclusive access to resources)
  • No preemption (resources cannot be forcibly taken away)
  • Hold and wait (requesting new resources while holding others)
  • Circular wait (the sketch after this list shows two threads that meet all four conditions)
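The sketch referenced in the list: two threads each hold one lock (mutual exclusion, no preemption, hold and wait) and then request the other's lock in the opposite order (circular wait), so both block forever when run. The lock names are illustrative.

```java
// Two threads acquire the same two locks in opposite orders: a classic deadlock.
public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {                    // holds A ...
                sleep();                              // give the other thread time to grab B
                synchronized (lockB) {                // ... and waits for B
                    System.out.println("thread 1 got both");
                }
            }
        }).start();
        new Thread(() -> {
            synchronized (lockB) {                    // holds B ...
                sleep();
                synchronized (lockA) {                // ... and waits for A: circular wait
                    System.out.println("thread 2 got both");
                }
            }
        }).start();
    }

    private static void sleep() {
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```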

3. Solutions to deadlocks

(1) Deadlock prevention (breaking one of the four conditions)

  • Break mutual exclusion: SPOOLing technology can turn an exclusive resource into a shared one
  • Break no-preemption: when resources are insufficient, the operating system may forcibly take resources away from other processes, or a process whose request cannot be satisfied may voluntarily release all the resources it holds
  • Break hold-and-wait: request all resources at once; if the request fails, block without holding anything
  • Break circular wait: processes may only request resources in increasing order of resource number (see the ordered-locking sketch after this list)
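The ordered-locking sketch referenced in the last item: if every thread acquires locks in increasing resource-number order, a circular wait cannot form, and the deadlock from the previous sketch disappears. The numbering of the two locks is an assumption for the example.

```java
// Breaking circular wait: all threads acquire locks in a fixed global order (A before B).
public class OrderedLockingDemo {
    private static final Object lockA = new Object();  // resource #1
    private static final Object lockB = new Object();  // resource #2

    static void doWork(String who) {
        synchronized (lockA) {          // always take the lower-numbered resource first
            synchronized (lockB) {
                System.out.println(who + " got both locks in order");
            }
        }
    }

    public static void main(String[] args) {
        new Thread(() -> doWork("thread 1")).start();
        new Thread(() -> doWork("thread 2")).start();   // same order: no circular wait
    }
}
```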

(2) Deadlock avoidance

1. Safe sequences

If the resources are given to B, then in the worst case none of the three processes can finish, so the state is unsafe.

==========================================

If you give the resources to A instead, you still have 20 in hand, which guarantees that in the worst case either A or T can finish; after that B can finish as well, so the state is safe.

2. The banker's algorithm

The method is the same as finding a safe sequence, i.e., deciding whether the "loan" can be granted; the difference is that the example above is one-dimensional (a single resource type) while the banker's algorithm is multidimensional (multiple resource types). A sketch of the safety check follows.
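A sketch of the banker's safety check in Java: repeatedly look for a process whose remaining need can be met by the currently available resources, assume it finishes and returns everything it holds, and declare the state safe only if every process can finish this way. The matrices in main are made-up illustrative data.

```java
import java.util.Arrays;

// Banker's algorithm safety check: is there an order in which all processes can finish?
public class BankerSafety {
    static boolean isSafe(int[] available, int[][] allocation, int[][] need) {
        int n = allocation.length, m = available.length;
        int[] work = Arrays.copyOf(available, m);     // resources currently free
        boolean[] finished = new boolean[n];

        for (int done = 0; done < n; ) {
            boolean progressed = false;
            for (int p = 0; p < n; p++) {
                if (!finished[p] && canSatisfy(need[p], work)) {
                    for (int r = 0; r < m; r++) work[r] += allocation[p][r]; // p finishes, returns its resources
                    finished[p] = true;
                    progressed = true;
                    done++;
                }
            }
            if (!progressed) return false;            // nobody can finish: unsafe state
        }
        return true;                                  // a safe sequence exists
    }

    static boolean canSatisfy(int[] need, int[] work) {
        for (int r = 0; r < need.length; r++) if (need[r] > work[r]) return false;
        return true;
    }

    public static void main(String[] args) {
        // Illustrative data: 3 processes, 2 resource types.
        int[] available = {3, 2};
        int[][] allocation = {{1, 0}, {2, 1}, {1, 1}};
        int[][] need = {{2, 2}, {1, 1}, {3, 1}};
        System.out.println("safe? " + isSafe(available, allocation, need));
    }
}
```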

(3) Deadlock detection and recovery

1. Deadlock detection

Draw a resource allocation graph, then reduce it: find a process whose request edges can all be satisfied, erase that process's allocation and request edges, and repeat. If all edges can be eliminated (the graph is completely reducible), there is no deadlock (a compact sketch follows).
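A compact Java sketch of the reduction idea, expressed with allocation and request matrices as the edge sets: a process whose outstanding requests can be satisfied is reduced (its edges erased, its resources returned); any process that can never be reduced is deadlocked. The data structures and method names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Deadlock detection by "reducing" the resource allocation graph, here encoded as matrices:
// a process whose outstanding requests can be met is removed along with its edges.
public class DeadlockDetection {
    static List<Integer> findDeadlocked(int[] available, int[][] allocation, int[][] request) {
        int n = allocation.length, m = available.length;
        int[] work = Arrays.copyOf(available, m);
        boolean[] reduced = new boolean[n];

        boolean progressed = true;
        while (progressed) {
            progressed = false;
            for (int p = 0; p < n; p++) {
                if (!reduced[p] && leq(request[p], work)) {
                    for (int r = 0; r < m; r++) work[r] += allocation[p][r]; // erase p's edges
                    reduced[p] = true;
                    progressed = true;
                }
            }
        }
        List<Integer> deadlocked = new ArrayList<>();
        for (int p = 0; p < n; p++) if (!reduced[p]) deadlocked.add(p); // edges remain: deadlocked
        return deadlocked;
    }

    static boolean leq(int[] a, int[] b) {
        for (int i = 0; i < a.length; i++) if (a[i] > b[i]) return false;
        return true;
    }
}
```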

2. Deadlock recovery
