Parallelism and concurrency

Parallelism: multiple instructions executing on multiple processors at the same time.

Concurrency: only one instruction executes at any given instant, but the CPU switches between tasks so quickly that, at the macro level, multiple processes appear to run simultaneously; at the micro level they still run one at a time.

  • Parallelism is two queues using two coffee machines at the same time
  • Concurrency is two queues using a coffee machine alternately

Concurrency: different units of execution that appear to run together but, microscopically, still execute serially.

Parallelism: multiple CPU cores genuinely executing at the same time.

Processes, threads, and coroutines

Processes are the unit of resource allocation and threads are the unit of scheduling.

Coroutines live in user mode: the tasks to be executed are queued up and run alternately on the same thread, so they are concurrent, not parallel.

Multithreading requires context switches through the kernel whenever a time slice is switched.

Multi-process is costlier still: besides time-slice switching, the operating system must also switch resources (address space, open files, and so on).

The advantage of multiple processes is data isolation: if process A crashes, process B is unaffected.

Threads, by contrast, share data within one process, so communication between them is more convenient.

Interprocess communication mode:

Pipes, message queues, semaphores, sockets, streams, and so on.

(Traditionally, processes communicate by sharing memory, whereas Go's philosophy is to share memory by communicating.)

What is a coroutine

To reduce switching costs, Java programs can use asynchronous callbacks.

Asynchronous callbacks do not block, but they have two disadvantages:

  • First, the original business logic gets split apart.
  • Second, the code easily descends into callback hell and becomes hard to maintain.

Go borrows the idea from threads:

A thread can save its execution context when it encounters a block and switch to executing code elsewhere.

When the blocked request completes, it goes back and continues from where it left off.

Can a function exit halfway through and come back?

When a thread's time slice runs out, or it encounters I/O blocking in the middle of executing a function, the operating system saves its context, suspends it, and switches to another thread; when it later gets the chance, the thread resumes where it left off.

If the operating system can schedule and manage multiple threads, why can't a thread schedule and manage the execution of multiple functions in the same way?


A thread is an execution flow abstracted and managed by the operating system.

Within a thread, we can likewise abstract out several execution flows and have the thread schedule and manage them uniformly. Such an execution flow built on top of a thread is called a coroutine.

(Diagrams omitted: thread scheduling vs. coroutine scheduling.)

Thread scheduling is managed by the operating system and is preemptive.

Coroutines, on the other hand, must cooperate with each other and voluntarily yield execution, which is the origin of the name: cooperative routines.

Welcome to my Yuque space: www.yuque.com/lixin-fjgsf…