The difference between concurrency and parallelism

We all talk about designing parallel, highly concurrent programs, and most of the time we assume we know the difference between parallelism and concurrency. But when asked to spell the difference out explicitly, we often find we can't give a clear answer.

So what is concurrency, and what is parallelism? Parallelism is the simpler idea: many things literally executing at the same time. Concurrency, on the other hand, is about how you structure your program: organizing it into pieces that can be executed independently. Parallelism requires multiple cores; a single processor cannot run anything in parallel. Concurrency, by contrast, has nothing to do with the number of processors: a program can be concurrent on a single processor.
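
To make the distinction concrete, here is a minimal Go sketch (the function and label names are invented for illustration). With GOMAXPROCS set to 1, the two goroutines are concurrent but can never execute at the same instant, so nothing runs in parallel; raise the limit to the number of cores and the very same program may run in parallel.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// work simulates an independent task; "A" and "B" are just labels.
func work(name string, wg *sync.WaitGroup) {
	defer wg.Done()
	for i := 0; i < 3; i++ {
		fmt.Println(name, "step", i)
	}
}

func main() {
	// Only one OS thread executes Go code at a time: the program is still
	// concurrent, but it cannot be parallel. Pass runtime.NumCPU() instead
	// to allow the goroutines to run in parallel on a multi-core machine.
	runtime.GOMAXPROCS(1)

	var wg sync.WaitGroup
	wg.Add(2)
	go work("A", &wg)
	go work("B", &wg)
	wg.Wait()
}
```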

For example, to make tea, Hua Luogeng had to boil water, wash the cups, fetch the tea leaves, and so on. Now we want to finish as quickly as possible, which is the "many things to get done" situation, and there are many ways to arrange that. Having several people do the steps at the same time is one of them, and that is parallelism. Parallelism is one way to achieve concurrency, but it is not the only way: a single person can also work concurrently, for example by washing the cups while the water boils instead of waiting idly, that is, by rearranging how the steps are carried out.
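
Here is a rough sketch of the tea example in Go (the durations are made up): the kettle "boils" in its own goroutine while the single worker washes cups and fetches leaves, and we block only when the hot water is actually needed.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	boiled := make(chan struct{})

	// Start boiling the water; it heats up concurrently with the steps below.
	go func() {
		time.Sleep(300 * time.Millisecond) // pretend boiling takes this long
		close(boiled)
	}()

	fmt.Println("washing cups")        // done while the water heats
	fmt.Println("fetching tea leaves") // likewise

	<-boiled // block only when the hot water is actually needed
	fmt.Println("making tea")
}
```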

If that sounds too abstract, here is a short story based on a talk by Rob Pike, one of the creators of Go.

The story begins with a task: a group of gophers must cart a pile of discarded manuals to a furnace and burn them.

At first there is a single gopher with one cart: it loads the books onto the cart, hauls them to the stove, and unloads them into the fire. Doing everything alone, the task inevitably takes a long time.

At this point, simply adding another gopher would not help much, because while one gopher is working the other can only wait. (Granted, some argue that taking turns with the single cart lets each gopher rest, which might make the work a little faster.)

So we get a second cart. Now each gopher, with its own cart, loads books, hauls them to the fire, and unloads them. Transport is faster, but some of the gain is lost because the two gophers queue up at the single pile and the single stove to load and unload.

This is faster than before, but there is still a bottleneck: with only one pile of books and one stove, the two gophers must also coordinate their actions through messages. Fine, then: split the books into two piles and add a second stove.

The result is roughly twice as efficient as before. The model is now concurrent: each gopher works on its own task independently, which speeds up transport, and with no queue at the pile or the stove, loading and unloading are faster too. But the model is not necessarily parallel: it may be that only one gopher is actually working at any given moment.
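
As a sketch of this first model (the names and book counts are invented), each gopher owns its own pile and stove, so the two loops share nothing and need no queueing or coordination; whether they ever run at the same time is up to the scheduler and the number of cores.

```go
package main

import (
	"fmt"
	"sync"
)

// gopher burns its own pile of books end to end: load, haul, unload, burn.
func gopher(id, books int, wg *sync.WaitGroup) {
	defer wg.Done()
	for b := 0; b < books; b++ {
		fmt.Printf("gopher %d burned book %d\n", id, b)
	}
}

func main() {
	var wg sync.WaitGroup
	wg.Add(2)
	go gopher(1, 5, &wg) // first pile and stove
	go gopher(2, 5, &wg) // second pile and stove
	wg.Wait()            // concurrent by design; parallel only if both run at once
}
```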

This is the first concurrency model, and we can design more. Keep reading the comics.

This time we enlist three gophers: one loads the books into the cart, one hauls the cart, and one unloads the books into the stove and burns them. Each gopher performs an independent task, though of course the three have to coordinate through some form of message passing.

The gophers who load and burn the books have it easy, but the gopher hauling the cart is exhausted: transport is the system's bottleneck. So let's add another gopher whose only job is to bring the empty carts back.

We improved the system's performance by adding one more concurrent step (a fourth gopher) to the existing three-gopher design. With the work split across four gophers, each doing one task, the design, if well coordinated, can in theory run four times faster than the original single-gopher version.

There are four concurrent steps:

  1. Load the books into the cart;
  2. Haul the cart to the fire;
  3. Unload the books into the stove;
  4. Return the empty cart.
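
Here is one way the four-step pipeline might look in Go (the channel and type names are my own): each gopher is a goroutine, loaded carts flow forward through channels, and empty carts flow back to the loader.

```go
package main

import "fmt"

type cart struct{ book int }

func main() {
	const books = 5

	loaded := make(chan cart)       // loader -> hauler
	atFire := make(chan cart)       // hauler -> burner
	empty := make(chan struct{}, 2) // returner -> loader (two carts in play)
	returned := make(chan struct{}) // burner -> returner

	// Two empty carts are available at the start.
	empty <- struct{}{}
	empty <- struct{}{}

	// Gopher 1: load books into carts.
	go func() {
		for b := 0; b < books; b++ {
			<-empty // wait for an empty cart
			loaded <- cart{book: b}
		}
		close(loaded)
	}()

	// Gopher 2: haul loaded carts to the fire.
	go func() {
		for c := range loaded {
			atFire <- c
		}
		close(atFire)
	}()

	// Gopher 4: bring empty carts back to the loader.
	go func() {
		for range returned {
			empty <- struct{}{}
		}
	}()

	// Gopher 3: unload and burn, then hand the empty cart to gopher 4.
	for c := range atFire {
		fmt.Println("burned book", c.book)
		returned <- struct{}{}
	}
}
```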

To parallelize this concurrency model, just add another identical group of gophers working on a second pile and a second stove.

Let's look at yet another concurrency model. The gopher in charge of transport complains that the haul is too long, so we add a transfer station halfway.

This concurrency model is then parallelized by adding another group, and the two groups are executed in parallel.

That concurrency model can be improved further. Along with the transfer station, two more gophers are added: one unloads books from the pile onto the transfer station, and the other loads books from the transfer station into a cart to be hauled on to the stove.

Then, once again, add another group to parallelize this concurrency model.
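
A sketch of the transfer-station idea (names invented): the station is simply a buffered channel that decouples the two legs of the haul, and parallelizing the model amounts to running a second copy of the whole arrangement on its own pile and stove.

```go
package main

import "fmt"

func main() {
	const books = 6

	station := make(chan int, 3) // the transfer station holds a few books
	toFire := make(chan int)

	// First leg: carry books from the pile to the transfer station.
	go func() {
		for b := 0; b < books; b++ {
			station <- b
		}
		close(station)
	}()

	// Second leg: carry books from the station on to the stove.
	go func() {
		for b := range station {
			toFire <- b
		}
		close(toFire)
	}()

	// Burn everything that reaches the stove.
	for b := range toFire {
		fmt.Println("burned book", b)
	}
}
```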

The comic ends here, having introduced three concurrency models, each of which is easy to parallelize. As you can see, every time the concurrency model improves, the task is broken down into finer pieces. Once the problem is decomposed, concurrency arises naturally, with each gopher focusing on a single task.

Back in software terms, the books are the data, the gophers are CPUs, the cart is serialization, deserialization, networking, and so on, and the stove is the proxy, browser, or other consumer. The concurrency model above is, in effect, a scalable web service.
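
As a loose illustration (the /burn route and its response are made up), Go's net/http server already embodies this model: every incoming request is handled on its own goroutine, concurrently with all the others and in parallel when multiple cores are available.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Each request is a "book": its handler runs concurrently with every
	// other request's handler, and in parallel given more than one core.
	http.HandleFunc("/burn", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "book burned")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```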

Concurrency is Not Parallelism.

  • Lecture slides: talks.golang.org/2012/waza.s…
  • Presentation video: www.youtube.com/watch?v=cN_…

