Basic computer science notes for beginners. The source is the Crash Course Computer Science series; Bilibili link: www.bilibili.com/video/BV1EW…

The cover art ヾ(>∀<) has such a lovely drawing style!

18 Operating System

The operating system, or OS, is itself a program, but one with special privileges to operate the hardware, and it can run and manage other programs. It is the first thing to start at boot, and it launches all the other programs.

Early operating systems offered batch processing: when one program finished, the OS automatically loaded and ran the next one.

In the early days of computing, different external devices were often incompatible with programs, which was very inconvenient.

To solve this problem, the operating system acts as an intermediary between software and hardware, abstracting the hardware through what are called device drivers. Programmers can then use standardized mechanisms to interact with input/output (I/O) hardware.
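As a sketch of the idea (not any real OS's driver API; the class and method names here are invented for illustration), the OS can define one standard interface that every device driver implements:

```python
# Toy sketch: the OS defines one standard interface, and each device
# driver implements it for its own hardware. Programs never talk to
# hardware directly, only through this interface.

class Driver:
    """Standard interface the OS expects from every device driver."""
    def read(self) -> bytes:
        raise NotImplementedError
    def write(self, data: bytes) -> None:
        raise NotImplementedError

class PrinterDriver(Driver):
    def write(self, data: bytes) -> None:
        # "Drive" the printer; here we just simulate it.
        print("printing:", data.decode())

class KeyboardDriver(Driver):
    def __init__(self, buffered: bytes):
        self.buffered = buffered      # pretend keystrokes waiting in a buffer
    def read(self) -> bytes:
        return self.buffered

# A program uses any device through the same standardized calls:
devices = {"printer": PrinterDriver(), "keyboard": KeyboardDriver(b"hello")}
devices["printer"].write(devices["keyboard"].read())
```

The point is the indirection: a program written against `Driver` works with any device, and a new device only needs a new driver, not new programs.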

The Atlas system filled the gap between running one program and another: while one program waits to print data and another waits to read data, a third program can run on the CPU.

Multitasking: multiple programs appear to run simultaneously by sharing time on a single CPU.

With multitasking, each program's data has to stay around while other programs run, so the operating system allocates each program its own block of memory.

The catch is that a program's memory can end up split into discontiguous physical chunks that are hard to track, so memory addresses are "virtualized", called virtual memory. Each program can assume its memory always starts at address 0, and the operating system translates those addresses into actual physical locations.

The operating system automatically handles the mapping between virtual memory and physical memory. A program's memory can then grow or shrink flexibly, called dynamic memory allocation; from the program's point of view, its memory is contiguous.
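A toy sketch of that translation, with an invented page size and page table purely for illustration:

```python
# Toy virtual-to-physical address translation (illustrative, not a real
# OS API). Each program sees addresses starting at 0; the OS keeps a
# page table mapping the program's virtual pages to physical frames.

PAGE_SIZE = 256

def translate(page_table: dict, virtual_addr: int) -> int:
    page, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table[page]          # which physical frame holds this page
    return frame * PAGE_SIZE + offset

# Program A's pages 0 and 1 live in non-adjacent physical frames 7 and 3,
# but the program itself just sees contiguous addresses 0..511.
page_table_a = {0: 7, 1: 3}
print(translate(page_table_a, 0))    # physical 1792 (frame 7, offset 0)
print(translate(page_table_a, 300))  # physical 812  (frame 3, offset 44)
```

Growing the program's memory just means adding another entry to its page table; the new frame can be anywhere in physical memory.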

The advantages are flexible memory management and convenient isolation: an error in one program will not affect other programs, which is called memory protection.

The above solves the problem of running multiple programs at once. How do multiple users access one computer through terminals?

A "terminal" is just a keyboard plus a screen connected to the main computer; the terminal itself has no processing power.

Time-sharing operating systems: to ensure no single user monopolizes the machine, each user is limited to a small share of the processor, memory, and so on.
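A minimal sketch of the time-slicing idea, assuming a simple round-robin policy (the job names and the `run_round_robin` helper are made up for illustration):

```python
# Toy round-robin time-sharing: each user's job gets one small slice of
# CPU time in turn, instead of running to completion and blocking others.

from collections import deque

def run_round_robin(jobs: dict, slice_units: int = 1) -> list:
    """jobs maps name -> units of work left; returns the execution order."""
    queue = deque(jobs.items())
    order = []
    while queue:
        name, remaining = queue.popleft()
        order.append(name)                  # this job gets one time slice
        remaining -= slice_units
        if remaining > 0:
            queue.append((name, remaining))  # not finished: back of the line
    return order

print(run_round_robin({"alice": 2, "bob": 1, "carol": 2}))
```

Each user's job makes steady progress a little at a time, which is what makes a shared machine feel responsive to everyone.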

The Unix operating system is divided into two parts:

  • Core functionality: the kernel
  • A bunch of useful tools that are not part of the kernel

If a fatal runtime error occurs, the kernel "panics": it calls a function named "panic", prints the word "panic", and loops indefinitely.

Early versions of Windows, released by Microsoft starting in 1985 and popular through the 1990s, lacked memory protection; when a program misbehaved it could crash the whole system, producing the infamous blue screen.

Modern operating systems still rely on multitasking, virtual memory, and memory protection.

19 Memory & Storage Media

Generally speaking, computer memory is “non-permanent” and data may be lost when a power failure occurs, hence the name “volatile” memory.

Storage is a little different from memory: data in storage remains until it is deleted or overwritten, and it survives power outages.

The earliest storage medium was the punched paper card; programs were stored on cards for more than a decade because cards use no electricity and are cheap and durable. The disadvantages: reading is slow, and a card can only be written once. Paper cards also work poorly for storing temporary values.

In 1944, delay line memory was invented. It works like a tube: a speaker at one end emits a pulse, creating a pressure wave that takes time to travel to a microphone at the other end. The microphone converts the pressure wave back into an electrical signal, and that propagation delay can be used to store data.

If a pressure wave represents 1 and its absence represents 0, the speaker can output a signal like 1101.

One drawback of delay line memory is that only one bit can be read at a time. To access a particular bit, you have to wait for it to come around the loop, which is why it is also called "sequential memory" or "cyclic memory".
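The "wait for the bit to come around" behavior can be sketched like this (a toy model for illustration, not how the real hardware was built):

```python
# Toy model of sequential (cyclic) memory: bits circulate in a loop,
# and only the bit currently emerging can be read each time step.

from collections import deque

class DelayLine:
    def __init__(self, bits):
        self.loop = deque(bits)      # bits circulating in the line

    def step(self):
        """One time unit: the emerging bit is read and fed back in."""
        bit = self.loop.popleft()
        self.loop.append(bit)        # amplified and re-injected at the start
        return bit

    def read_bit(self, index):
        """Reading an arbitrary bit means waiting for it to come around."""
        for _ in range(index):
            self.step()              # bits we don't want go by first
        return self.loop[0]

line = DelayLine([1, 1, 0, 1])
print(line.read_bit(2))              # must wait 2 steps to see bit 2
```

Contrast this with random access memory, where `read_bit` would cost the same no matter which index you ask for.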

But what we want is "random access memory", where any location can be accessed at any time.

Increasing the density is hard: if the pulses are packed more tightly, the pressure waves are more likely to blur together. So other types of delay line memory emerged, such as magnetostrictive delay memory.

It used vibrations in a wire to represent data, winding the wire into coils to store more, but by the mid-1950s delay line memory had become obsolete.

A new technology then emerged: magnetic core memory. Wrapping a wire around a small ring of magnetic material and applying a current magnetizes the core in one direction. If the current is turned off, the core stays magnetized; applying a current in the opposite direction flips the magnetization, so cores can store 0s and 1s. The cores are arranged in a grid, with wires that select rows and columns and a wire threaded through every core to read or write one bit.

The core memory can be accessed at any time. It was popular for more than 20 years from the mid-1950s.

In 1951, a new approach appeared: magnetic tape. The tape moves back and forth in a tape drive past a write head; current through the head's coil creates a magnetic field that magnetizes a small patch of tape, and the direction of the current sets the polarity, representing 1 or 0. Magnetic tape is still used today.

The main disadvantage of tape is access speed. Tape is sequential: it must be rewound or fast-forwarded to a particular location, so reading is slow.

A similar technology is the magnetic drum memory, which has a metal cylinder covered with magnetic material to record data, rotating continuously and surrounded by dozens of reading and writing heads.

Magnetic drums led to the development of hard disks, which have thin magnetic surfaces and can be stacked on top of each other, providing more surface area to store data.

To access a particular bit, a read/write arm moves up or down to find the right disk, and the head slides in over the right spot.

In the 1970s, disks improved dramatically and became common.

Optical storage appeared in 1972 with the 12-inch "laser disc"; later came the more familiar compact disc (CD) and DVD. The disc surface has many tiny pits that reflect light differently; an optical sensor picks up the reflections and records them as 1s and 0s.

The first RAM integrated circuits appeared in 1972, making core memory rapidly obsolete.

Mechanical hard drives are increasingly being replaced by solid-state drives, or SSDs. SSDs have no moving parts, so there is no waiting for a disk to spin; access times are extremely fast, though still not as fast as RAM.

20 File System

Each file has a corresponding format. A TXT file, for example, is underneath just a long string of bits, which can be interpreted using ASCII encoding.
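For example, in Python, a file's raw bytes can be interpreted as ASCII text:

```python
# A TXT file is just bits on disk; an encoding like ASCII decides how
# those bits are read back as characters.

raw = bytes([72, 73, 33])       # the file's raw bytes
text = raw.decode("ascii")      # interpret those bytes as ASCII
print(text)                     # HI!
print(format(raw[0], "08b"))    # 01001000, the bit pattern behind 'H'
```

The same bits read under a different encoding, or as an image or number format, would mean something entirely different; the format is what gives the bits meaning.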

Storage has no concept of a file; it just holds a large number of bits. To store multiple files, we need a special file that records where the other files live. This special file is called the "directory file", and it is kept at a known place, position 0, so it is easy to find.

The directory file stores the names of all other files in the form filename + "." + extension, which hints at the file's type. It also stores metadata about each file, such as when it was created, when it was last modified, and who owns it.

For each file, the directory records a start location and a length. Adding or deleting a file, or changing its name, means updating the directory file.

The directory file, together with the logic that maintains it, is a very simple example of a file system: software that manages files.
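A toy version of such a directory-based flat file system, with names and layout invented for illustration:

```python
# Toy flat file system: one directory maps each filename to where its
# bytes start on "disk" and how long they are.

disk = bytearray(64)                       # the raw storage
directory = {}                             # name -> {"start", "length"}

def write_file(name, data, start):
    disk[start:start + len(data)] = data
    directory[name] = {"start": start, "length": len(data)}

def read_file(name):
    entry = directory[name]
    return bytes(disk[entry["start"]:entry["start"] + entry["length"]])

write_file("hello.txt", b"hello", 0)
write_file("note.txt", b"ok", 5)           # packed right after the previous file
print(read_file("hello.txt"))              # b'hello'
```

Note how the files are packed back to back, which is exactly what makes appending to `hello.txt` dangerous in a flat layout: it would overwrite the start of `note.txt`.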

This is a flat file system: every file sits at the same level, packed back to back, so appending to one file can overwrite the start of the next.

To solve this problem:

  • Divide the space into blocks, leaving some "reserved space" in each, so files are easy to change and manage;
  • Split files across multiple blocks; as long as new blocks can be allocated, a file can grow easily. (Similar to virtual memory.)
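A sketch of block-based allocation, with made-up block sizes and file names, showing how a file can grow into non-adjacent blocks without overwriting its neighbor:

```python
# Toy block allocation: storage is split into fixed-size blocks, and a
# file is a list of (possibly non-adjacent) block numbers, so it can
# grow without touching the file stored next to it.

BLOCK_SIZE = 4
disk = [bytearray(BLOCK_SIZE) for _ in range(8)]
free_blocks = list(range(8))
directory = {}                         # name -> list of block numbers

def append(name, data):
    blocks = directory.setdefault(name, [])
    for i in range(0, len(data), BLOCK_SIZE):
        block = free_blocks.pop(0)     # any free block will do
        chunk = data[i:i + BLOCK_SIZE]
        disk[block][:len(chunk)] = chunk
        blocks.append(block)

append("a.txt", b"AAAA")
append("b.txt", b"BBBB")
append("a.txt", b"aaaa")               # a.txt grows into block 2; b.txt untouched
print(directory)                       # {'a.txt': [0, 2], 'b.txt': [1]}
```

This is also why deletion is cheap in real file systems: dropping a file's entry from the directory returns its blocks to the free list, while the bits themselves remain until reused.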

Deleting a file just removes its record from the directory; the file's bits actually remain. The space is only marked as available, and the data stays there until it is overwritten, which is why deleted data can often be "recovered".

Because of modifications and deletions, files inevitably become fragmented, their blocks scattered across different locations in storage.

To make files faster to read, you "defragment": the computer moves the pieces around until each file's blocks are back in the correct order.

Everything above describes a flat system where all files sit in a single directory. That is workable for a small number of files, but with today's explosive growth in capacity, file counts are growing just as fast, and keeping every file at one level is impractical.

Instead, related files are kept together in folders, and folders can contain other folders: a hierarchical file system.

Directory files now point not only to files but also to other directories, so extra metadata is needed to distinguish the two. The directory file at the very top is called the root directory; all other files and folders sit beneath it.
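A hierarchical directory can be sketched as nested dictionaries, with a path lookup walking down from the root (the structure and names are invented for illustration):

```python
# Toy hierarchical file system: a directory maps names either to files
# or to further directories, with the root directory at the top.

root = {
    "docs": {                       # a directory inside the root
        "notes.txt": "file",
    },
    "readme.txt": "file",
}

def lookup(tree, path):
    """Walk a path like 'docs/notes.txt' down from the root."""
    node = tree
    for part in path.split("/"):
        node = node[part]           # each step descends one level
    return node

print(lookup(root, "docs/notes.txt"))
```

Moving a file in this model is just removing its entry from one dictionary and adding it to another, which mirrors why moving a file on a real disk does not move its data.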

To move a file, you only need to delete its record from one directory and add it to another; the file's location in storage stays unchanged.

File systems make it easier to organize and access files without worrying about their exact location on tape or disk.