Service registration and service discovery come up often in distributed service architectures. Common tools in this space are ZooKeeper, etcd, Consul, and Eureka. Consul's key features are service discovery, health checking, KV storage, secure service communication, and multi-data-center support. For how Consul differs from the other tools, you can check out Consul vs. Other Software here.

Why do we need service registration and service discovery?

Suppose a distributed system has two services, service-a and service-b. When service-a wants to call service-b, the first idea is to directly request the IP address of the server where service-b runs and the port it listens on. This works while the system is small, but breaks down once it grows and each service has more than one deployed instance. Say service-b has three instances, service-b-1, service-b-2, and service-b-3: which instance's IP should service-a call? Should we hard-code the IPs of all three instances into service-a and pick one on every call? That is inflexible, and it is exactly the problem service registration and service discovery solve. With a registration tool introduced, the architecture looks like this:

In this architecture:

  • First, each instance of service-b registers its service information (mainly the IP address and port it listens on) with the registration tool after it starts. Different registration tools have different registration methods; Consul's are covered later.
  • Once the service information is registered, the registration tool can perform health checks on the service to determine which instances are available and which are not.
  • After service-a starts, it can obtain the IPs and ports of all healthy service-b instances from the registration tool, keep this information in its own memory, and use it to call service-b.
  • service-a can keep its in-memory copy of service-b's information up to date by watching the registration tool. For example, if service-b-1 fails, the health check marks it unavailable; service-a detects the change and updates its in-memory record for service-b-1.

Therefore, besides service registration and service discovery themselves, a registration tool should at least provide health checks and status-change notification.

Consul

Consul is a distributed service tool deployed as a cluster to avoid a single point of failure. Consul nodes are either Servers or Clients (both kinds are also called Agents). Server nodes store the data; Client nodes perform health checks and forward data requests to the Servers. Among the Server nodes, one is the Leader and the rest are Followers; the Leader synchronizes data to the Followers, and if the Leader fails, a new Leader is elected.

The Client node is lightweight and stateless: it forwards read and write requests to the Server nodes via RPC. Read and write requests can also be sent directly to a Server node. Here's Consul's architecture:

(Architecture diagram from the official documentation.)

Let's use Docker to set up a Consul cluster with three Server nodes and one Client node. Start the Server nodes:

docker run -d --name=c1 -p 8500:8500 -e CONSUL_BIND_INTERFACE=eth0 consul agent --server=true --bootstrap-expect=3 --client=0.0.0.0 -ui
docker run -d --name=c2 -e CONSUL_BIND_INTERFACE=eth0 consul agent --server=true --client=0.0.0.0 --join 172.17.0.5
docker run -d --name=c3 -e CONSUL_BIND_INTERFACE=eth0 consul agent --server=true --client=0.0.0.0 --join 172.17.0.5

Start the Client node:

docker run -d --name=c4 -e CONSUL_BIND_INTERFACE=eth0 consul agent --server=false --client=0.0.0.0 --join 172.17.0.5

The CONSUL_BIND_INTERFACE=eth0 environment variable tells the containerized Consul which network interface to bind to; it is equivalent to passing the -bind flag yourself. For example, the command starting c1 could equally be written as:

docker run -d --name=c1 -p 8500:8500 consul agent --server=true --bootstrap-expect=3 --client=0.0.0.0 --bind='{{ GetInterfaceIP "eth0" }}' -ui

Consul can be operated through its CLI commands or its HTTP API. Enter any container and run consul members; if the output lists all four nodes, the cluster has been set up successfully:

Node          Address          Status  Type    Build  Protocol  DC   Segment
2dcf0c824cf0  172.17.0.7:8301  alive   server  1.4.4  2         dc1  <all>
64746cffa116  172.17.0.6:8301  alive   server  1.4.4  2         dc1  <all>
77af7d94a8ca  172.17.0.5:8301  alive   server  1.4.4  2         dc1  <all>
6c71148f0307  172.17.0.8:8301  alive   client  1.4.4  2         dc1  <default>
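The HTTP API exposes the same membership information. A minimal sketch in Go, assuming, as above, that c1's API port is published on localhost:8500:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// GET /v1/agent/members returns the gossip members this agent knows about.
	resp, err := http.Get("http://localhost:8500/v1/agent/members")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Decode only the fields we care about from each member entry.
	var members []struct {
		Name string
		Addr string
	}
	if err := json.NewDecoder(resp.Body).Decode(&members); err != nil {
		panic(err)
	}
	for _, m := range members {
		fmt.Println(m.Name, m.Addr)
	}
}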

Code practice

Suppose you now have a node-server service written in Node.js that needs to invoke a go-server service written in Go via gRPC. Here is hello.proto, the service and message definition written in Protobuf:

syntax = "proto3";

package hello;
option go_package = "hello";

// The greeter service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply) {}
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings.
message HelloReply {
  string message = 1;
}

Generate the Go code from the Protobuf definition with the command protoc --go_out=plugins=grpc:./hello ./*.proto, then implement the generated GreeterServer interface in hello.go:

// hello.go
package hello

import "context"

type GreeterServerImpl struct{}

// SayHello implements the Greeter service defined in hello.proto.
func (g *GreeterServerImpl) SayHello(c context.Context, h *HelloRequest) (*HelloReply, error) {
	result := &HelloReply{
		Message: "hello " + h.GetName(),
	}
	return result, nil
}

Here is the entry file main.go, which registers the service we defined with gRPC and exposes a /ping endpoint for Consul's health checks later on.

package main

import (
	"net"
	"net/http"

	"go-server/hello"
	"google.golang.org/grpc"
)

func main() {
	// Listen on two ports so one process can simulate two service instances.
	lis1, _ := net.Listen("tcp", ":8888")
	lis2, _ := net.Listen("tcp", ":8889")
	grpcServer := grpc.NewServer()
	hello.RegisterGreeterServer(grpcServer, &hello.GreeterServerImpl{})
	go grpcServer.Serve(lis1)
	go grpcServer.Serve(lis2)

	// /ping is the endpoint Consul's health check will call.
	http.HandleFunc("/ping", func(res http.ResponseWriter, req *http.Request) {
		res.Write([]byte("pong"))
	})
	http.ListenAndServe(":8080", nil)
}
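Before bringing Consul in, you can sanity-check the server with a throwaway Go client. A sketch, assuming the generated stub NewGreeterClient lives in the go-server/hello package and the server runs locally:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"go-server/hello"
	"google.golang.org/grpc"
)

func main() {
	// Dial one of the two listeners directly; no service discovery involved yet.
	conn, err := grpc.Dial("localhost:8888", grpc.WithInsecure())
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()
	reply, err := hello.NewGreeterClient(conn).SayHello(ctx, &hello.HelloRequest{Name: "jack"})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(reply.GetMessage()) // "hello jack"
}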

With that, the go-server code is complete. Notice that it contains no reference to Consul at all, so registering it with Consul can be completely non-invasive to the project code. The next step is to register go-server with Consul. This can be done either by calling Consul's REST API directly or, again without touching the project, through a configuration file. Details about Consul's service configuration files can be viewed here. Below is our configuration file services.json for registering the service:

{
  "services": [
    {
      "id": "hello1",
      "name": "hello",
      "tags": ["primary"],
      "address": "172.17.0.9",
      "port": 8888,
      "checks": [
        {
          "http": "http://172.17.0.9:8080/ping",
          "tls_skip_verify": false,
          "method": "GET",
          "interval": "10s",
          "timeout": "1s"
        }
      ]
    },
    {
      "id": "hello2",
      "name": "hello",
      "tags": ["second"],
      "address": "172.17.0.9",
      "port": 8889,
      "checks": [
        {
          "http": "http://172.17.0.9:8080/ping",
          "tls_skip_verify": false,
          "method": "GET",
          "interval": "10s",
          "timeout": "1s"
        }
      ]
    }
  ]
}

172.17.0.9 in the configuration file is the IP address of go-server, and port is each of the ports go-server listens on. The checks section defines the health check: Consul requests /ping every 10 seconds to decide whether the service is healthy. Copy this file into the c4 container's /consul/config directory and run consul reload; the hello service from the file is now registered with Consul. Running curl http://localhost:8500/v1/catalog/services?pretty will show our registered hello service.
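For comparison, registering through Consul's REST API instead of a file is a single PUT to the agent's /v1/agent/service/register endpoint. A minimal sketch in Go, run from the host against c1's API port on localhost:8500 (in practice you would usually register with the agent local to the service):

package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// The same instance as "hello1" in services.json, expressed as an API payload.
	payload := []byte(`{
		"ID": "hello1",
		"Name": "hello",
		"Tags": ["primary"],
		"Address": "172.17.0.9",
		"Port": 8888,
		"Check": {
			"HTTP": "http://172.17.0.9:8080/ping",
			"Method": "GET",
			"Interval": "10s",
			"Timeout": "1s"
		}
	}`)
	req, err := http.NewRequest(http.MethodPut,
		"http://localhost:8500/v1/agent/service/register", bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("register status:", resp.Status) // expect "200 OK"
}

Here is the code for the node-server service: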

const grpc = require('grpc');
const axios = require('axios');
const protoLoader = require('@grpc/proto-loader');

const packageDefinition = protoLoader.loadSync(
  './hello.proto',
  {
    keepCase: true,
    longs: String,
    enums: String,
    defaults: true,
    oneofs: true
  });
const hello_proto = grpc.loadPackageDefinition(packageDefinition).hello;

// Random integer in [min, max], used to pick a service instance.
function getRandNum (min, max) {
  min = Math.ceil(min);
  max = Math.floor(max);
  return Math.floor(Math.random() * (max - min + 1)) + min;
}

const urls = [];
async function getUrl() {
  if (urls.length) return urls[getRandNum(0, urls.length - 1)];
  // Ask Consul for all instances of the hello service, keep only healthy ones.
  const { data } = await axios.get('http://172.17.0.5:8500/v1/health/service/hello');
  for (const item of data) {
    for (const check of item.Checks) {
      if (check.ServiceName === 'hello' && check.Status === 'passing') {
        urls.push(`${item.Service.Address}:${item.Service.Port}`);
      }
    }
  }
  return urls[getRandNum(0, urls.length - 1)];
}

async function main() {
  const url = await getUrl();
  const client = new hello_proto.Greeter(url, grpc.credentials.createInsecure());

  client.sayHello({name: 'jack'}, function (err, response) {
    console.log('Greeting:', response.message);
  });
}

main();

172.17.0.5 in the code is the IP address of the c1 container. The node-server project fetches the addresses of the hello service straight from Consul's API, filters them down to the healthy instances, and then calls a randomly chosen one. The code does not yet watch Consul for changes; a watch could be implemented by continuously polling the API above and refreshing the urls array with the currently healthy addresses (see the sketch below). Start node-server now and it will invoke the go-server service. Service registration and discovery give services the ability to scale dynamically, at the cost of extra architectural complexity. Beyond service discovery and registration, Consul also sees wide use in configuration centers and distributed locks.
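As a closing illustration, here is a sketch of such a watch in Go, using Consul's blocking queries (the ?index= parameter together with the X-Consul-Index response header) plus the passing filter, so each request waits for a change instead of busy-polling; the endpoint and agent address are the same as above:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// entry mirrors just the fields we need from Consul's health API response.
type entry struct {
	Service struct {
		Address string
		Port    int
	}
}

func main() {
	index := ""
	for {
		// passing=true keeps only healthy instances; index + wait turns the
		// request into a blocking query that returns when something changes.
		url := "http://172.17.0.5:8500/v1/health/service/hello?passing=true"
		if index != "" {
			url += "&index=" + index + "&wait=30s"
		}
		resp, err := http.Get(url)
		if err != nil {
			time.Sleep(time.Second) // back off on errors and retry
			continue
		}
		var entries []entry
		json.NewDecoder(resp.Body).Decode(&entries)
		resp.Body.Close()
		index = resp.Header.Get("X-Consul-Index")

		urls := make([]string, 0, len(entries))
		for _, e := range entries {
			urls = append(urls, fmt.Sprintf("%s:%d", e.Service.Address, e.Service.Port))
		}
		fmt.Println("healthy hello instances:", urls)
	}
}

In node-server, the same loop would simply replace the contents of the urls array each time it fires.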