Welcome to my blog

Process

Processes were created to make better use of CPU resources and to make concurrency possible. A process is the execution of a program: once a program has been loaded into memory and is ready to execute, it is a process. The process is the basic unit of resource allocation, scheduling, and concurrent execution in the system.
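
As a minimal sketch (assuming a Unix-like system), fork() is the classic way to create a new process that then runs concurrently with its parent:

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();            /* create a new process */
    if (pid == 0) {
        /* child: a separate process with its own copy of the address space */
        printf("child  pid=%d\n", getpid());
    } else if (pid > 0) {
        /* parent: continues to run concurrently with the child */
        printf("parent pid=%d, child=%d\n", getpid(), pid);
        wait(NULL);                /* reap the child when it exits */
    } else {
        perror("fork");
    }
    return 0;
}
```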

Process synchronization

Although multi-processing improves system resource utilization and throughput, the asynchrony of processes can also throw the system into disorder. The task of process synchronization is to coordinate the execution order of multiple related processes, so that concurrently executing processes can share resources effectively and cooperate with each other, and so that program execution remains reproducible. Common process synchronization mechanisms include **timers**, **semaphores**, **events**, and **mutexes**.

Semaphore

A semaphore is an integer value used to transmit signals between processes. Only three operations can be performed on a semaphore: initialization, P (decrement), and V (increment), all of which are atomic. The P operation can block a process, and the V operation can unblock a process. A semaphore is initialized to a non-negative value. The semWait (P) operation decrements the semaphore by one; if the value becomes negative, the process executing semWait is blocked, otherwise it continues. The semSignal (V) operation increments the semaphore by one; if the value is still less than or equal to zero, a process blocked by semWait is unblocked.
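
As a concrete counterpart to semWait/semSignal, here is a minimal sketch using POSIX semaphores (assuming a POSIX system; compile with -pthread), where sem_wait plays the role of P and sem_post the role of V:

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t sem;          /* semaphore initialized to 1: acts as a lock */
static int shared = 0;     /* resource protected by the semaphore */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&sem);    /* P: decrement, block if the value would go negative */
        shared++;
        sem_post(&sem);    /* V: increment, wake a blocked waiter if any */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&sem, 0, 1);  /* initialize to a non-negative value (here: 1) */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);   /* always 200000 thanks to P/V */
    sem_destroy(&sem);
    return 0;
}
```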

Process states

The typical state transitions are:

- Ready → Running: the scheduler selects the ready process with the highest priority and gives it the processor.
- Running → Ready: a running process returns to the ready state when the time slice allocated to it is used up and it must yield the processor.
- Running → Blocked: a running process that cannot continue because it is waiting for some event (for example, keyboard input) changes to the blocked state and sleeps.
- Blocked → Ready: when the event a blocked process is waiting for occurs (for example, an I/O operation completes and the interrupt handler wakes the process up), it changes from blocked to ready.

Process communication

The purposes of process communication are data transfer, data sharing, event notification, resource sharing, and process control. The main communication mechanisms are described below.

Pipe

A pipe is a half-duplex communication mechanism: data flows in only one direction, and it can only be used between related processes. "Related" usually means a parent-child relationship.
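
A minimal sketch (POSIX) of a pipe between a parent and its child; data flows in one direction only, from the parent's write end to the child's read end:

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    char buf[32];

    pipe(fd);                       /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {              /* child: reader */
        close(fd[1]);               /* close the unused write end */
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("child read: %s\n", buf);
        }
        close(fd[0]);
    } else {                        /* parent: writer */
        close(fd[0]);               /* close the unused read end */
        write(fd[1], "hello", 5);
        close(fd[1]);
        wait(NULL);
    }
    return 0;
}
```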

Named pipe (FIFO)

A named pipe is also a half-duplex communication mechanism, but it allows communication between unrelated processes.
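
A minimal writer-side sketch (POSIX; the path /tmp/demo_fifo is just an example name). Because the FIFO has a name in the file system, an unrelated process can open and read it, for example with `cat /tmp/demo_fifo`:

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    const char *path = "/tmp/demo_fifo";       /* example FIFO name */
    const char msg[] = "hello from an unrelated process\n";

    mkfifo(path, 0666);                        /* create the named pipe (ignore EEXIST) */
    int fd = open(path, O_WRONLY);             /* blocks until some reader opens the FIFO */
    if (fd < 0) { perror("open"); return 1; }
    write(fd, msg, sizeof msg - 1);
    close(fd);
    return 0;
}
```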

Message queue

A message queue is a linked list of messages stored in the kernel and identified by a message queue identifier. Message queues overcome the drawbacks of signals (which carry little information) and pipes (which carry only an unformatted byte stream and have a limited buffer size).
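
A minimal sketch using a System V message queue (assuming a system that provides the System V IPC API); unlike a pipe, what is sent is a typed message rather than a raw byte stream:

```c
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/msg.h>
#include <sys/wait.h>
#include <unistd.h>

struct msgbuf { long mtype; char mtext[64]; };

int main(void) {
    int qid = msgget(IPC_PRIVATE, IPC_CREAT | 0600);  /* kernel-resident queue */

    if (fork() == 0) {                                /* child: receiver */
        struct msgbuf m;
        msgrcv(qid, &m, sizeof m.mtext, 1, 0);        /* wait for a type-1 message */
        printf("child got: %s\n", m.mtext);
        return 0;
    }

    struct msgbuf m = { .mtype = 1 };                 /* parent: sender */
    strcpy(m.mtext, "hello via message queue");
    msgsnd(qid, &m, strlen(m.mtext) + 1, 0);          /* send one discrete message */
    wait(NULL);
    msgctl(qid, IPC_RMID, NULL);                      /* remove the queue */
    return 0;
}
```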

Shared memory

Shared memory maps a segment of memory into the address spaces of several processes: the segment is created by one process but can be accessed by many. Shared memory is the fastest IPC mechanism and was designed to address the low efficiency of the other inter-process communication methods. It is often used together with other mechanisms, such as semaphores, to achieve synchronization and communication between processes.
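
A minimal sketch (POSIX mmap with a shared anonymous mapping, available on Linux-like systems): parent and child access the same memory, so no data is copied between them. Real code would pair this with a semaphore for synchronization; here wait() is used as a crude substitute:

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* one page of memory shared between parent and child */
    char *shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    if (fork() == 0) {                        /* child writes into the shared segment */
        strcpy(shared, "written by the child");
        return 0;
    }

    wait(NULL);                               /* crude synchronization for the sketch */
    printf("parent reads: %s\n", shared);     /* parent sees the child's data directly */
    munmap(shared, 4096);
    return 0;
}
```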

Semaphore:

A semaphore is a counter that can be used to control access to a shared resource by multiple processes. It is often used as a locking mechanism that prevents other processes from accessing a shared resource while one process is using it, so it serves mainly as a means of synchronization between processes and between threads within the same process.
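
A minimal sketch of a semaphore used as a lock between two processes (POSIX named semaphore; the name "/demo_sem" is an arbitrary example; on Linux compile with -pthread):

```c
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    /* named semaphore with initial value 1: acts as an inter-process lock */
    sem_t *lock = sem_open("/demo_sem", O_CREAT, 0600, 1);

    if (fork() == 0) {                  /* child */
        sem_wait(lock);                 /* enter the critical section */
        printf("child holds the lock\n");
        sem_post(lock);                 /* leave the critical section */
        return 0;
    }

    sem_wait(lock);                     /* parent */
    printf("parent holds the lock\n");
    sem_post(lock);
    wait(NULL);
    sem_unlink("/demo_sem");            /* remove the name when done */
    return 0;
}
```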

Socket

A socket is also an inter-process communication mechanism; unlike the other mechanisms, it can be used for communication between processes on different machines.
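
As a minimal sketch (POSIX sockets over loopback TCP; the port 5555 is an arbitrary choice and error handling is omitted), the same socket API that connects two processes on one machine also works across machines:

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(5555) };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (fork() == 0) {                            /* child: server */
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        int yes = 1;
        setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 1);
        int conn = accept(srv, NULL, NULL);       /* wait for the client */
        char buf[32];
        ssize_t n = read(conn, buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("server got: %s\n", buf); }
        close(conn);
        close(srv);
        return 0;
    }

    sleep(1);                                     /* crude wait for the server to start */
    int cli = socket(AF_INET, SOCK_STREAM, 0);    /* parent: client */
    connect(cli, (struct sockaddr *)&addr, sizeof addr);
    write(cli, "hello over a socket", 19);
    close(cli);
    wait(NULL);
    return 0;
}
```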

Signal

Signals are a complex form of communication used to notify a receiving process that an event has occurred.
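
A minimal sketch (POSIX): the parent notifies the child of an event by sending SIGUSR1, and the child's handler runs asynchronously when the signal is delivered:

```c
#include <signal.h>
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void on_sigusr1(int signo) {
    (void)signo;
    got_signal = 1;                   /* only set a flag inside the handler */
}

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                   /* child: receiver */
        struct sigaction sa = { .sa_handler = on_sigusr1 };
        sigaction(SIGUSR1, &sa, NULL);
        while (!got_signal)
            pause();                  /* sleep until a signal arrives */
        printf("child: got SIGUSR1\n");
        return 0;
    }
    sleep(1);                         /* crude wait until the handler is installed */
    kill(pid, SIGUSR1);               /* notify the child of the "event" */
    wait(NULL);
    return 0;
}
```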

Thread

Each task executed within a process is a thread; a thread is the smallest unit of execution in a process. A thread can belong to only one process, while a process can have multiple threads, and multithreading allows several tasks to run concurrently inside a single process. A thread is a lightweight process: compared with a process, the burden of creation, maintenance, and management that it places on the operating system is lighter, which means the cost or overhead of a thread is relatively small.

A thread has no address space of its own; it is contained in the address space of its process. A thread's context consists only of a stack, a register set, and a priority; the thread's code is contained in its process's text segment, and all resources owned by the process are available to its threads. All threads share the process's memory and resources: threads of the same process share the code segment (code and constants), the data segment (global and static variables), and the heap, but each thread has its own stack and register contents, which form its runtime context and hold its local and temporary variables.

Where parent and child processes must use inter-process communication, threads of the same process can communicate simply by reading and writing the process's variables. Any thread can destroy the process by destroying the main thread: destroying the main thread destroys the process, and changes made by the main thread may affect all of its threads.
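
A minimal sketch (POSIX threads, compile with -pthread) showing what the paragraph above describes: both threads read and write the same global variable, while each keeps its own local variable on its own stack:

```c
#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;                      /* data segment: shared by all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    int local = 0;                                  /* lives on this thread's own stack */
    for (int i = 0; i < 100000; i++) {
        local++;
        pthread_mutex_lock(&lock);
        shared_counter++;                           /* shared: needs synchronization */
        pthread_mutex_unlock(&lock);
    }
    printf("worker done: local=%d\n", local);       /* each thread has its own 'local' */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter=%d\n", shared_counter);  /* 200000: both threads updated it */
    return 0;
}
```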

What processes and threads have in common:

Both processes and threads have an ID, a register set, a state, a priority, and an information block; both can change their own attributes after being created; both can share resources with their parent; and neither can directly access the resources of unrelated processes or threads.

Coroutines:

A program can contain multiple coroutines, much as a process can contain multiple threads, so it is natural to compare coroutines with threads. Threads are relatively independent and each has its own context, but switching between them is controlled by the system; coroutines are also relatively independent and have their own context, but switching is controlled by the program itself, which hands control from the current coroutine to another coroutine.
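
As a rough illustration (assuming Linux/glibc and the POSIX ucontext API; this is just one way to build coroutines in C), the program below switches between the main context and a coroutine explicitly, with no help from the system scheduler:

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, coro_ctx;
static char coro_stack[64 * 1024];              /* the coroutine's own stack */

static void coroutine(void) {
    printf("coroutine: step 1\n");
    swapcontext(&coro_ctx, &main_ctx);          /* yield control back to main */
    printf("coroutine: step 2\n");
}                                               /* uc_link returns to main on exit */

int main(void) {
    getcontext(&coro_ctx);
    coro_ctx.uc_stack.ss_sp = coro_stack;
    coro_ctx.uc_stack.ss_size = sizeof coro_stack;
    coro_ctx.uc_link = &main_ctx;               /* where to go when the coroutine ends */
    makecontext(&coro_ctx, coroutine, 0);

    printf("main: resume coroutine\n");
    swapcontext(&main_ctx, &coro_ctx);          /* switch into the coroutine */
    printf("main: coroutine yielded, resume again\n");
    swapcontext(&main_ctx, &coro_ctx);          /* switch back in; runs step 2 */
    printf("main: done\n");
    return 0;
}
```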

Parallelism:

Parallelism means two or more "units of work" executing at the same instant. From a hardware perspective, it means two or more instructions being executed simultaneously, so multiple cores are a prerequisite for parallelism.

Concurrency:

Concurrency means that multiple operations can take place in overlapping time periods.

Parallelism vs. concurrency:

Concurrent design makes concurrent execution possible, and parallelism is one of the modes of concurrent execution.