Programs often need data, and data I/O often takes time. With the usual serial approach, a new task is processed only after the previous one finishes, so the inefficiency is easy to imagine. Suppose there are three tasks to process, each of which blocks for a while; the serial version might look something like this:

main.go
package main

import (
    "fmt"
    "time"
)

type Task struct {
    Duration time.Duration
    Name string
}

func main() {
    // Declare the tasks to be processed
    taskList := []Task{
        {Duration: 1, Name: "Deal with 1"},
        {Duration: 2, Name: "Deal with 2"},
        {Duration: 3, Name: "Deal with 3"},
    }
    startTime := time.Now().Unix()
    for _, item := range taskList {
        goProcess(item)
    }
    fmt.Printf("Available: % ds \ n", time.Now().Unix() - starTime)
}

// goProcess handles a single task.
func goProcess(task Task) {
    time.Sleep(time.Second * task.Duration) // Simulate I/O blocking (network or disk I/O); execution resumes once the wait is over
    fmt.Printf("Task: %s processed\n", task.Name)
}

The printed result:

Task: Deal with 1 processed
Task: Deal with 2 processed
Task: Deal with 3 processed
Time taken: 6s

The downside of serial processing is that it wastes a lot of time whenever there is I/O blocking: the computation itself may take very little time, but the program spends most of it waiting on network or disk I/O, with each task holding up the queue while doing nothing useful. With asynchronous processing, the program does not have to sit idle during those blocking waits.

With goroutines, a blocked task no longer makes the others wait; everything runs concurrently, and once all goroutines have finished, the results are gathered. The code looks something like this:

package main

import (
	"fmt"
	"sync"
	"time"
)

type Task struct {
	Duration time.Duration
	Name string
}

func main() {
	// Declare the tasks to be processed
	taskList := []Task{
		{Duration: 1, Name: "Deal with 1"},
		{Duration: 2, Name: "Deal with 2"},
		{Duration: 3, Name: "Deal with 3"},
	}
	startTime := time.Now().Unix()
	var res []string // Process result collection
	resChang := make(chan string, len(taskList)) // buffered so senders never block
	wg := &sync.WaitGroup{}
	// Collect the results of asynchronous processing; data comes in through the channel, similar to a single subscriber
	wg.Add(1) // register the collector before starting it, so wg.Wait() below cannot pass too early
	go func() {
		defer wg.Done() // once the channel is closed and drained, signal that result collection is finished
		var lock sync.Mutex // mutex (not strictly needed here, since only this goroutine appends to res)
		for resItem := range resChang {
			lock.Lock()
			res = append(res, resItem)
			lock.Unlock()
		}
	}()
	taskWG := &sync.WaitGroup{}
	for _, item := range taskList {
		taskWG.Add(1) // task WaitGroup counter +1
		go goProcess(item, resChang, taskWG)
	}
	taskWG.Wait()   // block until every task has finished processing
	close(resChang) // close the result channel once processing is complete
	wg.Wait()       // block until the collector has gathered all the results
	// Prints the results of the batch collection
	for _, i := range res {
		fmt.Printf("%s", i)
	}
	fmt.Printf("Available: % ds \ n", time.Now().Unix() - starTime)
}


// goProcess handles a single task and reports its result through the channel.
func goProcess(task Task, resChan chan string, taskWG *sync.WaitGroup) {
	time.Sleep(time.Second * task.Duration) // Simulate I/O blocking (network or disk I/O); execution resumes once the wait is over
	res := fmt.Sprintf("Task: %s processed\n", task.Name)
	defer func() {
		resChan <- res // pass the result along
		taskWG.Done()  // task WaitGroup counter -1 to report completion
	}()
}
The printed result:

Task: Deal with 1 processed
Task: Deal with 2 processed
Task: Deal with 3 processed
Time taken: 3s

Compared with the serial version, the concurrent version overlaps the I/O blocking of all the tasks: the total time drops from the sum of the durations (6s) to roughly the longest single task (3s), because no task has to wait for the ones ahead of it to finish blocking before it can start.
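Because the channel is buffered with capacity len(taskList), the collector goroutine and mutex above are not strictly required: every worker can send its result without blocking, and the results can simply be drained after the channel is closed. Below is a minimal alternative sketch of the same fan-out/collect pattern, reusing the Task type and task names from the example above; it is an illustration of the idea under those assumptions, not a drop-in replacement for the code above.

package main

import (
	"fmt"
	"sync"
	"time"
)

type Task struct {
	Duration time.Duration
	Name     string
}

func main() {
	taskList := []Task{{1, "Deal with 1"}, {2, "Deal with 2"}, {3, "Deal with 3"}}
	start := time.Now()

	resChan := make(chan string, len(taskList)) // buffered: one slot per task, so senders never block
	wg := &sync.WaitGroup{}
	for _, item := range taskList {
		wg.Add(1)
		go func(t Task) {
			defer wg.Done()
			time.Sleep(time.Second * t.Duration) // simulated I/O blocking
			resChan <- fmt.Sprintf("Task: %s processed", t.Name)
		}(item) // pass the task as an argument to avoid capturing the loop variable
	}
	wg.Wait()      // all workers have finished and written their results
	close(resChan) // safe to close: there are no more senders

	for res := range resChan { // drain the buffered results
		fmt.Println(res)
	}
	fmt.Printf("Time taken: %ds\n", int(time.Since(start).Seconds()))
}

Here the buffered channel doubles as the result store, so no extra goroutine or mutex is needed; the trade-off is that results are only read after everything has finished, whereas the subscriber-style version above can consume them as they arrive.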

Relearning Golang: Serial Processing and Concurrent Processing