High concurrency is one of the most important considerations when designing a distributed Internet system architecture. It usually means designing a system that can handle many requests in parallel. Achieving high concurrency requires allocating resources sensibly: provision as much as the workload actually needs. Under-provisioning (a small horse pulling a big cart) means the system cannot keep up, while over-provisioning (a big horse pulling a tiny cart) wastes resources.

Strictly speaking, a single-core CPU cannot execute in parallel; only a multi-core CPU can, because one core can only do one thing at a time. Why, then, can a single-core CPU still achieve high concurrency? Because the operating system schedules processes and threads, switching between them so quickly that execution feels parallel. So as long as there are enough processes and threads, a server can handle C1K or even C10K requests; but the number of threads is limited by operating-system memory and other resources, since each thread is typically allocated an 8 MB stack whether it uses it or not. Of course, processing capacity depends not only on memory but also on blocking behavior, asynchronous processing, CPU, and so on. Could a language use a smaller processing unit that consumes less memory than a thread, so that its concurrent processing capacity is higher? Google did exactly that: Golang supports high concurrency at the language level.

Go’s high-concurrency core – Goroutine

The Goroutine is at the heart of Go's concurrency design. A Goroutine is essentially a coroutine: it is smaller than a thread and consumes fewer resources. Dozens of goroutines may be multiplexed onto perhaps five or six OS threads underneath, and goroutines inside a Go program can share memory. A goroutine needs only a tiny stack to start executing (about 4 to 5 KB, and the stack grows and shrinks as needed). Because of this, thousands of concurrent tasks can run simultaneously. Goroutines are easier to use, more efficient, and lighter than threads; this low memory footprint is the premise of their high concurrency.

How to achieve high concurrency in Go web development

Let’s start with a simple web service:


package main

import (
    "fmt"
    "log"
    "net/http"
)

// response writes the reply body to w, which is sent to the client.
func response(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintf(w, "Hello world! This is the Go")
}

func main() {
    http.HandleFunc("/", response)
    err := http.ListenAndServe(":8088", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}

This is a simple web service. As noted above, the Goroutine is at the heart of Go's support for high concurrency, so let's walk step by step through how this web service achieves it.

We follow the http.HandleFunc("/", response) call down into the standard library.

func HandleFunc(pattern string, handler func(ResponseWriter, *Request)) {
    DefaultServeMux.HandleFunc(pattern, handler)
}

var DefaultServeMux = &defaultServeMux
var defaultServeMux ServeMux

type ServeMux struct {
    mu    sync.RWMutex        // lock required for concurrent access
    m     map[string]muxEntry // routing rule map: one rule per muxEntry
    hosts bool                // whether any rule contains host information
}

// muxEntry pairs a routing pattern string with its handler.
type muxEntry struct {
    h       Handler
    pattern string
}

Above is the definition of DefaultServeMux. The ServeMux struct carries a read/write lock to handle concurrent use, and a map from route strings to muxEntry values, each containing a handler and its pattern. Now let's see what http.HandleFunc, i.e. DefaultServeMux.HandleFunc, actually does. Look first at the second argument to mux.Handle: HandlerFunc(handler).

func (mux *ServeMux) HandleFunc(pattern string, handler func(ResponseWriter, *Request)) {
    mux.Handle(pattern, HandlerFunc(handler))
}

type Handler interface {
    ServeHTTP(ResponseWriter, *Request)
}

type HandlerFunc func(ResponseWriter, *Request)

func (f HandlerFunc) ServeHTTP(w ResponseWriter, r *Request) {
    f(w, r)
}

As we can see, the custom response function we passed in is converted to the HandlerFunc type, which gives it a ServeHTTP method, so our response function satisfies the Handler interface.

Let’s move on to the first parameter of mux.Handle.

func (mux *ServeMux) Handle(pattern string, handler Handler) {
    mux.mu.Lock()
    defer mux.mu.Unlock()

    if pattern == "" {
        panic("http: invalid pattern")
    }
    if handler == nil {
        panic("http: nil handler")
    }
    if _, exist := mux.m[pattern]; exist {
        panic("http: multiple registrations for " + pattern)
    }

    if mux.m == nil {
        mux.m = make(map[string]muxEntry)
    }
    mux.m[pattern] = muxEntry{h: handler, pattern: pattern}

    if pattern[0] != '/' {
        mux.hosts = true
    }
}

The route string and its handler are stored in the ServeMux.m map; the muxEntry structure held in the map was described above.

Next, what does http.ListenAndServe(":8088", nil) do?

func ListenAndServe(addr string, handler Handler) error {
    server := &Server{Addr: addr, Handler: handler}
    return server.ListenAndServe()
}

func (srv *Server) ListenAndServe() error {
    addr := srv.Addr
    if addr == "" {
        addr = ":http"
    }
    ln, err := net.Listen("tcp", addr)
    if err != nil {
        return err
    }
    return srv.Serve(tcpKeepAliveListener{ln.(*net.TCPListener)})
}

net.Listen("tcp", addr) builds a TCP service listening on port addr, and tcpKeepAliveListener wraps the listener so that accepted connections have TCP keep-alive enabled.

Now comes the key code: the HTTP accept loop.

func (srv *Server) Serve(l net.Listener) error {
    defer l.Close()
    if fn := testHookServerServe; fn != nil {
        fn(srv, l)
    }
    var tempDelay time.Duration // how long to sleep on accept failure

    if err := srv.setupHTTP2_Serve(); err != nil {
        return err
    }

    srv.trackListener(l, true)
    defer srv.trackListener(l, false)

    baseCtx := context.Background() // base is always background, per Issue 16220
    ctx := context.WithValue(baseCtx, ServerContextKey, srv)
    for {
        rw, e := l.Accept()
        if e != nil {
            select {
            case <-srv.getDoneChan():
                return ErrServerClosed
            default:
            }
            if ne, ok := e.(net.Error); ok && ne.Temporary() {
                if tempDelay == 0 {
                    tempDelay = 5 * time.Millisecond
                } else {
                    tempDelay *= 2
                }
                if max := 1 * time.Second; tempDelay > max {
                    tempDelay = max
                }
                srv.logf("http: Accept error: %v; retrying in %v", e, tempDelay)
                time.Sleep(tempDelay)
                continue
            }
            return e
        }
        tempDelay = 0
        c := srv.newConn(rw)
        c.setState(c.rwc, StateNew) // before Serve can return
        go c.serve(ctx)
    }
}

At the end of the block above, we find the keyword go, which is used to launch a Goroutine.

go c.serve(ctx)

This line starts a goroutine to execute c.serve, and it is the key to Go's high concurrency: every request is handled in its own goroutine. Inside c.serve(ctx) lives the per-request logic: it parses out the URI, METHOD, and so on, matches the route, and executes serverHandler{c.server}.ServeHTTP(w, w.req).

So the overall flow of a Go web service looks like this: register the route and handler into DefaultServeMux, listen on a TCP port, accept connections in a loop, and hand each connection to its own goroutine, which matches the route and runs the handler.
Why does a separate goroutine per request make high concurrency possible? That deserves a full treatment of its own, so stay tuned for a follow-up post: Coroutines for Golang's High Concurrency Inquiry.

* This article draws on "Go High Concurrency Understanding" by Yu Feixiang Mannong.