One. Three load balancing solutions

Highly available, high-performance communication services are usually built on service registration and discovery, load balancing, and fault tolerance. Depending on where the load balancing logic sits, there are three common solutions:

  • 1. Centralized LB (Proxy Model)
  • 2. In-process LB (Balancing-aware Client)
  • 3. External Load Balancing Service

A very detailed write-up is available here: link address

Two. gRPC preparation

gRPC uses Protocol Buffers by default, a mature structured-data serialization mechanism open-sourced by Google (other data formats such as JSON can also be used). The client side provides Objective-C and Java interfaces, and the server side has Java, Golang, and C++ interfaces, which makes it a good fit for mobile (iOS/Android) to server communication. Details: link address

1. Install Homebrew (search Baidu or Google for instructions).

2. In the terminal, run:

brew install autoconf automake libtool

3. Install the Golang protobuf tools:

go get -u github.com/golang/protobuf/proto          # golang protobuf library
go get -u github.com/golang/protobuf/protoc-gen-go  # protoc --go_out tool

Simple protobuf

../proto/hello.proto

syntax = "proto3";

package proto;

message SayReq {
    string content = 1;
}

message SayResp {
    string content = 1;
}

service Test{
    rpc Say(SayReq) returns (SayResp) {}
}

protoc --go_out=plugins=grpc:. hello.proto

This generates the hello.pb.go file; protoc can generate the corresponding .pb files required by different languages.
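
As a quick sanity check of the generated code, a bare-bones server with no service registration or load balancing yet might look roughly like the sketch below. The import path of the generated package is a placeholder, and the Say handler simply echoes the request.

package main

import (
    "log"
    "net"

    "golang.org/x/net/context"
    "google.golang.org/grpc"

    proto "path/to/your/proto" // hypothetical import path of the generated hello.pb.go
)

// helloServer implements the TestServer interface generated from hello.proto.
type helloServer struct{}

func (h *helloServer) Say(ctx context.Context, req *proto.SayReq) (*proto.SayResp, error) {
    return &proto.SayResp{Content: "Hello " + req.Content}, nil
}

func main() {
    lis, err := net.Listen("tcp", ":8080")
    if err != nil {
        log.Fatalf("failed to listen: %v", err)
    }
    s := grpc.NewServer()
    proto.RegisterTestServer(s, &helloServer{}) // RegisterTestServer is generated by protoc-gen-go
    log.Println("gRPC server listening on :8080")
    s.Serve(lis)
}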


Three. The four kinds of gRPC methods (you can skip this if you don't need streaming):

1: Simple RPC

2: Server-side streaming RPC

3: Client-side streaming RPC

4: Bidirectional streaming RPC

The four kinds correspond to the following four methods (test.proto):

service test {
    rpc LZX1(SayReq) returns (SayResp) {}
    rpc LZX2(SayReq) returns (stream SayResp) {}
    rpc LZX3(stream SayReq) returns (SayResp) {}
    rpc LZX4(stream SayReq) returns (stream SayResp) {}
}

The generated test.pb.go file contains:

type TestClient interface {
    LZX1(ctx context.Context, in *SayReq, opts ...grpc.CallOption) (*SayResp, error)
    LZX2(ctx context.Context, in *SayReq, opts ...grpc.CallOption) (Test_LZX2Client, error)
    LZX3(ctx context.Context, opts ...grpc.CallOption) (Test_LZX3Client, error)
    LZX4(ctx context.Context, opts ...grpc.CallOption) (Test_LZX4Client, error)
}

So for the non-streaming case (the first, simple RPC), request and response map one to one. Whenever a stream is involved, the method returns a generated stream type named after the service and method (for example Test_LZX2Client). And whenever the client side streams, the method signature no longer takes a request message directly; requests are sent over the stream instead.
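
To make the streaming signatures concrete, here is a minimal sketch of a client consuming the server-streaming method LZX2. The import path of the generated test.proto package is hypothetical and the dial address is just an example.

package main

import (
    "io"
    "log"

    "golang.org/x/net/context"
    "google.golang.org/grpc"

    proto "path/to/generated/test" // hypothetical import path of the package generated from test.proto
)

// consumeLZX2 sends one SayReq and then reads SayResp messages from the
// server-side stream until the server closes it.
func consumeLZX2(conn *grpc.ClientConn) error {
    client := proto.NewTestClient(conn)
    stream, err := client.LZX2(context.Background(), &proto.SayReq{Content: "stream me"})
    if err != nil {
        return err
    }
    for {
        resp, err := stream.Recv()
        if err == io.EOF {
            return nil // server finished sending
        }
        if err != nil {
            return err
        }
        log.Println(resp.Content)
    }
}

func main() {
    conn, err := grpc.Dial("127.0.0.1:8080", grpc.WithInsecure()) // example address
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()
    if err := consumeLZX2(conn); err != nil {
        log.Fatal(err)
    }
}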


Four. Six load balancing algorithms

1. Round-robin

2. Random

3. Source-address hashing

4. Weighted round-robin

5. Weighted random

6. Least connections

See here for descriptions of these algorithms: link address
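
Although the detailed write-up is linked above, the core of these algorithms fits in a few lines. Below is a toy sketch of round-robin (1) and weighted random (5); it is only an illustration and is unrelated to the selectors shipped with grpc-lb.

package main

import (
    "fmt"
    "math/rand"
    "sync"
)

// roundRobin cycles through the backend addresses in order.
type roundRobin struct {
    mu    sync.Mutex
    addrs []string
    next  int
}

func (r *roundRobin) Pick() string {
    r.mu.Lock()
    defer r.mu.Unlock()
    addr := r.addrs[r.next]
    r.next = (r.next + 1) % len(r.addrs)
    return addr
}

// weightedRandom picks an address with probability proportional to its weight:
// a backend with weight 3 is chosen roughly three times as often as one with weight 1.
func weightedRandom(weights map[string]int) string {
    total := 0
    for _, w := range weights {
        total += w
    }
    n := rand.Intn(total)
    for addr, w := range weights {
        if n < w {
            return addr
        }
        n -= w
    }
    return "" // unreachable for a non-empty weights map
}

func main() {
    rr := &roundRobin{addrs: []string{"10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"}}
    fmt.Println(rr.Pick(), rr.Pick(), rr.Pick(), rr.Pick())
    fmt.Println(weightedRandom(map[string]int{"10.0.0.1:8080": 1, "10.0.0.2:8080": 3}))
}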

Five. gRPC example

Example source link: link address

Architecture:

Random load balancing is used as follows:

Client: etcd/client/random/main.go

package main

import (
    etcd "github.com/coreos/etcd/client"
    grpclb "github.com/liyue201/grpc-lb"
    "github.com/liyue201/grpc-lb/examples/proto"
    registry "github.com/liyue201/grpc-lb/registry/etcd"
    "golang.org/x/net/context"
    "google.golang.org/grpc"
    "log"
    "strconv"
    "time"
)

func main() {
    etcdConfg := etcd.Config{
        Endpoints: []string{"http://120.24.44.201:2379"},
    }
    // The resolver watches the "test" service under "/grpc-lb" in etcd;
    // the balancer picks one of the registered nodes at random per call.
    r := registry.NewResolver("/grpc-lb", "test", etcdConfg)
    b := grpclb.NewBalancer(r, grpclb.NewRandomSelector())

    // The balancer supplies the backend addresses, so no fixed target address is dialed here.
    c, err := grpc.Dial("", grpc.WithInsecure(), grpc.WithBalancer(b))
    if err != nil {
        log.Printf("grpc dial: %s", err)
        return
    }
    defer c.Close()

    client := proto.NewTestClient(c)
    var num int
    for i := 0; i < 1000; i++ {
        resp, err := client.Say(context.Background(), &proto.SayReq{Content: "random"})
        if err != nil {
            log.Println(err)
            time.Sleep(time.Second)
            continue
        }
        time.Sleep(time.Second)
        num++
        log.Printf(resp.Content + ", clientOfnum: " + strconv.Itoa(num))
    }
}

Server: etcd/server/main.go

package main import ( "flag" "fmt" etcd "github.com/coreos/etcd/client" "github.com/liyue201/grpc-lb/examples/proto" registry "github.com/liyue201/grpc-lb/registry/etcd" "golang.org/x/net/context" "google.golang.org/grpc" "log" "net" "sync" "time" ) var nodeID = flag.String("node", "node1", "node ID") var port = flag.Int("port", 8080, "listening port") type RpcServer struct { addr string s *grpc.Server } func NewRpcServer(addr string) *RpcServer { s := grpc.NewServer() rs := &RpcServer{ addr: addr, s: s, } return rs } func (s *RpcServer) Run() { listener, err := net.Listen("tcp", s.addr) if err ! = nil { log.Printf("failed to listen: %v", err) return } log.Printf("rpc listening on:%s", s.addr) proto.RegisterTestServer(s.s, s) s.s.Serve(listener) } func (s *RpcServer) Stop() { s.s.GracefulStop() } var num int func (s *RpcServer) Say(ctx context.Context, req *proto.SayReq) (*proto.SayResp, error) { num++ text := "Hello " + req.Content + ", I am " + *nodeID + ", serverOfnum: " + strconv.Itoa(num) log.Println(text) return &proto.SayResp{Content: text}, nil } func StartService() { etcdConfg := etcd.Config{ Endpoints: , string [] {" http://120.24.44.201:2379 "}} registry, err: = registry. NewRegistry (registry) Option {EtcdConfig: etcdConfg, RegistryDir: "/grpc-lb", ServiceName: "test", NodeID: *nodeID, NData: registry.NodeData{ Addr: Sprintf("127.0.0.1:%d", *port), //Metadata: map[string]string{"weight": "1"},}, Ttl: 10 * time.second,}) if err! = nil {log.Panic(err) return} server := NewRpcServer(FMT.Sprintf("0.0.0.0:%d", *port)) wg := sync.WaitGroup{} wg.Add(1) go func() { server.Run() wg.Done() }() wg.Add(1) go func() { registry.Register() wg.Done() }() //stop the server after one minute //go func() { // time.Sleep(time.Minute) // server.Stop() // registry.Deregister() //}() wg.Wait() } func main() { flag.Parse() StartService() }Copy the code

Run the following commands to start three server processes, then start the client:

go run main.go -node node1 -port 28544

go run main.go -node node2 -port 18562

go run main.go -node node3 -port 27772

Six. Final effect

(The client keeps randomly accessing the three servers.)

Client:


Server node1:


Server node2:


Server node3:


Shut down servers node1 and node3; the client keeps running and now only reaches server node2:

Client:

Server node2:


Start node1 and node3 again; the client reconnects to them automatically and goes back to randomly accessing all three servers:

Client:

Server node1:

Server node3:


Finally, when all servers are shut down, the client blocks.