What is a coroutine?

  1. A coroutine is similar to a thread, but much lighter. A running program occupies a process, a process can have multiple threads, and a thread can have multiple coroutines.
  2. A process contains at least one main thread, and the main thread can spawn additional child threads. Threads are scheduled with one of two strategies: time-sharing scheduling or preemptive scheduling.
  3. A thread is the smallest unit of execution scheduled by the operating system; a process is the smallest unit of resource management. A thread generally passes through five states: initialized, runnable, running, blocked, and destroyed.
  4. A coroutine runs in user mode and is not managed by the operating system kernel; it is scheduled and controlled entirely by the program itself.
  5. Creating, switching, suspending, and destroying coroutines are all in-memory operations.
  6. Coroutines belong to threads and execute inside threads. Coroutines use cooperative scheduling.

Go goroutine

Go is a very concurrency-friendly language. It provides simple syntax for its two major concurrency mechanisms: goroutines and channels.

A goroutine is a lightweight thread; Go supports coroutines natively at the language level. A goroutine costs far less than a thread: its initial stack is only about 2 kilobytes and grows as needed, whereas a thread's stack size must be specified up front and stays fixed for the life of the thread. Goroutines are scheduled by Go's GPM model (G: goroutine, M: OS thread, P: logical processor).
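As a small aside (not part of the original text), a minimal sketch that inspects the pieces the GPM scheduler works with, using the standard runtime package:

package main

import (
	"fmt"
	"runtime"
)

func main() {
	// P: how many logical processors the scheduler may use
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
	// CPUs available for M (the OS threads the runtime creates)
	fmt.Println("NumCPU:", runtime.NumCPU())
	// G: goroutines currently managed by the runtime
	fmt.Println("NumGoroutine:", runtime.NumGoroutine())
}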

A simple example of this native coroutine support:

package main

import (
	"fmt"
	"time"
)

func main() {
	fmt.Println("test 1")
	// start a goroutine; it runs asynchronously
	go func() {
		fmt.Println("test 2")
	}()
	fmt.Println("test 3")
	// delay the main goroutine's exit so the goroutine has a chance to run
	time.Sleep(time.Microsecond * 100)
}

Go runs goroutines on multiple OS threads and can use multiple CPU cores. Because multiple goroutines can be scheduled at the same time, this can cause concurrency problems.

What does the following code print? If the increments ran sequentially, it would print 1 through 21 in order.

package main

import (
	"fmt"
	"time"
)

var count = 0

func main() {
	for i := 0; i <= 20; i++ {
		go func() {
			count++
			fmt.Println(count)
		}()
	}
	time.Sleep(time.Microsecond * 100)
}
$ go run main.go   // first run
13 5 2 14 18 19 10 11 12 13 6 15 16 17 4 8 20 9 7 21
$ go run main.go   // second run
13 18 2 5 7 8 19 10 11 12 13 14 15 16 17 4 9 20 21 6

The results are different on each run. Reading a variable concurrently is safe: you can have as many readers as you want, but writes have to be synchronized. There are many ways to do this, including true atomic operations that rely on specific CPU instructions, but the most common approach is to use a mutex.
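Before the mutex version, here is a minimal sketch of the atomic alternative mentioned above (not from the original; it uses an int64 counter and a sync.WaitGroup instead of the sleep used in the article):

package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

var count int64

func main() {
	var wg sync.WaitGroup
	for i := 0; i <= 20; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			// atomic increment: safe without a mutex
			atomic.AddInt64(&count, 1)
		}()
	}
	wg.Wait()
	fmt.Println(atomic.LoadInt64(&count)) // always 21
}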

Take a lock around the write in each goroutine:

package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	lock  sync.Mutex
	count = 0
)

func main() {
	for i := 0; i <= 20; i++ {
		go func() {
			lock.Lock()
			defer lock.Unlock()
			count++
			fmt.Println(count)
		}()
	}
	time.Sleep(time.Microsecond * 100)
}

We lock around the increment of count so that only one goroutine writes to it at a time, which gives the expected result.

$ go run main.go
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21

It looks like we have solved the concurrency problem, but that is not all there is to concurrent programming: locks can also cause deadlocks. A single lock is fine, but once you use two or more locks in your code it is easy to end up in a dangerous situation where goroutine A holds lockA and wants to acquire lockB, while goroutine B holds lockB and needs lockA.
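A minimal sketch of that situation (the lockA/lockB names are hypothetical, not from the original); run it and both goroutines block forever:

package main

import (
	"fmt"
	"sync"
	"time"
)

var (
	lockA sync.Mutex
	lockB sync.Mutex
)

func main() {
	go func() {
		lockA.Lock()
		time.Sleep(time.Millisecond) // give the other goroutine time to grab lockB
		lockB.Lock()                 // blocks forever: lockB is held by the other goroutine
		fmt.Println("goroutine A done")
	}()
	go func() {
		lockB.Lock()
		time.Sleep(time.Millisecond)
		lockA.Lock() // blocks forever: lockA is held by the first goroutine
		fmt.Println("goroutine B done")
	}()
	time.Sleep(time.Second)
	fmt.Println("main exits; both goroutines are still deadlocked")
}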

channel

Channels are a powerful mechanism for sharing data between goroutines. A channel is a shared pipe through which goroutines pass data: one goroutine sends data into the channel and another receives it, so only one goroutine has access to the data at any given time.

Create a channel

c := make(chan int)

The type of this channel is chan int. Therefore, to pass the channel to a function, the function signature looks like this:

func worker(c chan int) { ... }

A channel supports two operations, sending and receiving:

// send data into the channel
channel <- data
// receive data from the channel
value := <-channel

Receiving from or sending to a channel is typically done inside a for loop.

For example

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	c := make(chan int)
	for i := 0; i < 5; i++ {
		worker := &Worker{id: i}
		go worker.process(c)
	}
	for {
		c <- rand.Int()
		time.Sleep(time.Millisecond * 50)
	}
}

type Worker struct {
	id int
}

func (w *Worker) process(c chan int) {
	for {
		data := <-c
		fmt.Printf("worker %d got %d\n", w.id, data)
	}
}

Buffered channels

Sends and receives on an unbuffered channel block until the other side is ready. It is also possible to create a buffered channel.

Sending to a buffered channel blocks only when the buffer is full, and receiving from a buffered channel blocks only when the buffer is empty.

You can create a buffered channel by passing an additional argument to make that specifies the capacity (the size of the buffer).

For a channel to be buffered, capacity in the syntax below should be greater than 0; the capacity of an unbuffered channel defaults to 0.

ch := make(chan type, capacity)
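For example, a minimal sketch (not from the original text) of a buffered channel with capacity 2; the first two sends complete immediately because they land in the buffer:

package main

import "fmt"

func main() {
	ch := make(chan int, 2) // buffered channel with capacity 2

	// these two sends do not block: the buffer has room
	ch <- 1
	ch <- 2
	fmt.Println(len(ch), cap(ch)) // 2 2

	// a third send here (ch <- 3) would block until a receive frees space
	fmt.Println(<-ch) // 1
	fmt.Println(<-ch) // 2
}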

select

Even with buffering, at some point we need to start dropping messages; we cannot use an unbounded amount of memory waiting for a worker to free up. For this, we use Go's select:

What select does

Go provides the select keyword. With select you can listen for data flowing over channels: select starts a selection, and each alternative is described by a case statement. select requires that every case be a channel operation (a send or a receive), so it is typically used together with goroutines and channels. An example:

for {
	select {
	case c <- rand.Int():
		// sent successfully
	default:
		fmt.Println("dropped")
	}
	time.Sleep(time.Millisecond * 50)
}
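The snippet above is a fragment of the earlier worker example. A self-contained sketch of the same idea (the buffer size and timings here are assumptions for illustration, not from the original):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	c := make(chan int, 5) // small buffer; a slow worker lets it fill up

	go func() {
		for data := range c {
			fmt.Println("worker got", data)
			time.Sleep(time.Millisecond * 500) // the worker is slower than the producer
		}
	}()

	for i := 0; i < 20; i++ {
		select {
		case c <- rand.Int():
			// sent: the buffer had room
		default:
			fmt.Println("dropped") // buffer full, drop the message instead of blocking
		}
		time.Sleep(time.Millisecond * 50)
	}
}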

timeout

for {
	select {
	case c <- rand.Int():
		// sent successfully
	case <-time.After(time.Millisecond * 100):
		fmt.Println("timed out")
	}
	time.Sleep(time.Millisecond * 50)
}
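As with the previous fragment, here is a self-contained sketch of the timeout pattern (the 100 ms limit, the unbuffered channel, and the worker's delay are assumptions for illustration):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	c := make(chan int) // unbuffered: a send blocks until a worker receives

	// a deliberately slow worker
	go func() {
		for data := range c {
			fmt.Println("worker got", data)
			time.Sleep(time.Millisecond * 300)
		}
	}()

	for i := 0; i < 5; i++ {
		select {
		case c <- rand.Int():
			// a worker picked the value up in time
		case <-time.After(time.Millisecond * 100):
			// no worker was ready within 100 ms; give up on this message
			fmt.Println("timed out")
		}
	}
}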