Consul Service Registry

Netflix Eureka 2.x (github.com/Netflix/eur…) has been discontinued in favor of the Eureka 1.x series, which the official team still actively maintains (github.com/Netflix/eur…).

The existing open source work on eureka 2.0 is discontinued. The code base and artifacts that were released as part of the existing repository of work on the 2.x branch is considered use at your own risk.

Eureka 1.x is a core part of Netflix’s service discovery system and is still an active project.


Although Eureka, Hystrix, and the other affected Netflix components will no longer receive new development or maintenance, this does not prevent their use for now. In any case, thanks to open source, and hats off to Netflix for open-sourcing them.

Besides, Spring Cloud supports many service discovery tools, and Eureka is just one of them. Below is a list of the service discovery tools supported by Spring Cloud and a comparison of their features.

Common registries

  • Netflix Eureka
  • Alibaba Nacos
  • HashiCorp Consul
  • Apache ZooKeeper
  • CoreOS Etcd
  • CNCF CoreDNS
| Feature | Eureka | Nacos | Consul | Zookeeper |
| --- | --- | --- | --- | --- |
| CAP | AP | CP + AP | CP | CP |
| Health check | Client Beat | TCP/HTTP/MySQL/Client Beat | TCP/HTTP/gRPC/Cmd | Keep Alive |
| Avalanche protection | Yes | Yes | No | No |
| Automatic instance deregistration | Supported | Supported | Not supported | Supported |
| Access protocol | HTTP | HTTP/DNS | HTTP/DNS | TCP |
| Listening support | Supported | Supported | Supported | Supported |
| Multi-data center | Supported | Supported | Supported | Not supported |
| Sync across registries | Not supported | Supported | Supported | Not supported |
| Spring Cloud integration | Supported | Supported | Supported | Supported |

Consul introduction

Consul is an open source tool from HashiCorp for service discovery and configuration in distributed systems. Compared with other distributed service registration and discovery solutions, Consul is more of a one-stop shop: it has a built-in service registration and discovery framework, a distributed consensus protocol implementation, health checking, Key/Value storage, and a multi-data-center solution, so it does not need to be combined with other tools (such as ZooKeeper) and is simpler to use.

Consul is written in Go, which makes it naturally portable (it supports Linux, Windows, and macOS). The installation package contains only a single executable file, so it is easy to deploy and works seamlessly with lightweight containers such as Docker.

Consul features

  • Raft algorithm

  • Service discovery

  • Health check

  • Key/Value store (see the example after this list)

  • Multi-data center

  • Supports HTTP and DNS interfaces

  • An official web management UI is provided
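
To make the HTTP interface and Key/Value store concrete, here is a minimal sketch using Consul's official Go client (github.com/hashicorp/consul/api). It assumes a local agent listening on the default address 127.0.0.1:8500, and the key config/demo/timeout is just an illustrative name.

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// Connect to the local Consul agent (default: 127.0.0.1:8500 over HTTP).
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	kv := client.KV()

	// Write a key/value pair (illustrative key and value).
	if _, err := kv.Put(&api.KVPair{Key: "config/demo/timeout", Value: []byte("30s")}, nil); err != nil {
		log.Fatal(err)
	}

	// Read it back.
	pair, _, err := kv.Get("config/demo/timeout", nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s = %s\n", pair.Key, pair.Value)
}

The same operations can also be performed directly over the HTTP interface via the /v1/kv/ endpoint, or looked up through the DNS interface for services.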

Consul roles

  • Client: a stateless client that forwards HTTP and DNS interface requests to the server cluster on the LAN.
  • Server: a server that stores configuration information. In a high availability cluster, you are advised to configure three or five servers in each data center.

First, the architecture diagram shows two data centers, Datacenter1 and Datacenter2. Consul supports multiple data centers, and each data center has its own clients and servers. There are typically three to five servers, which balances availability and performance, because data synchronization becomes slower as more servers are added. The number of clients, however, is unlimited; there can be thousands of them.

All nodes in a data center join a gossip protocol, which means there is a gossip pool containing every node in the data center. This brings several benefits: clients do not need to be configured with server addresses, since discovery happens automatically; the work of detecting failed nodes is not placed on the servers but is distributed, making failure detection more scalable than a centralized heartbeat mechanism; and the pool serves as a messaging layer for broadcasting events such as a leader election across the data center.

The servers in each data center form a single Raft peer set. This means they work together to elect a single leader, a server with additional responsibilities: the leader handles all queries and transactions, and as part of the consensus protocol, transactions must be replicated to all peers. Because of this requirement, when a non-leader server receives an RPC request, it forwards the request to the cluster leader.

The servers also participate in a WAN gossip pool, which, unlike the LAN pool, is optimized for high network latency and contains only the servers of other Consul data centers. The purpose of this pool is to let data centers discover each other at minimal cost: bringing a new data center online is as simple as joining the existing WAN gossip pool. Because all servers run in this pool, cross-data-center requests are also supported: when a server receives a request for a different data center, it forwards it to a random server in the correct data center, which may in turn forward it to its local leader.

The result is very low coupling between data centers, while cross-data-center requests remain relatively fast and reliable thanks to failure detection, connection caching, and connection reuse.

In general, data is not replicated between data centers. When a request arrives for a resource in another data center, the local Consul servers send an RPC request to the remote Consul servers and return the result. If the remote data center is unavailable, those resources are unavailable as well, but this does not affect the local data center. In special cases, limited subsets of data are replicated across data centers, for example via Consul's built-in ACL replication or external tools such as consul-replicate.
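
From a client's point of view, a cross-data-center request is just an ordinary query with the target data center set in the query options. Here is a minimal sketch using the official Go client, where the service name "web" and the data center name "dc2" are made-up examples:

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask the local agent for healthy "web" instances registered in data center "dc2".
	// The local servers forward this query over the WAN to the remote data center's servers.
	entries, _, err := client.Health().Service("web", "", true, &api.QueryOptions{Datacenter: "dc2"})
	if err != nil {
		log.Fatal(err)
	}
	for _, entry := range entries {
		fmt.Printf("%s:%d\n", entry.Service.Address, entry.Service.Port)
	}
}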

Service discovery and registration

When a service Producer starts, it registers itself with Consul by sending information such as its IP address and port. After receiving the registration, Consul sends a health check request to the Producer every 10 seconds (by default) to verify that the Producer is healthy.
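
As a sketch of what this registration step might look like in code, the snippet below uses the official Go client; the service name, address, port, and /health endpoint are hypothetical values, not ones defined in this article.

package main

import (
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Register the Producer with its IP address, port, and a health check
	// that Consul will call every 10 seconds, matching the interval described above.
	registration := &api.AgentServiceRegistration{
		ID:      "producer-1",
		Name:    "producer",
		Address: "192.168.1.10",
		Port:    8080,
		Check: &api.AgentServiceCheck{
			HTTP:     "http://192.168.1.10:8080/health",
			Interval: "10s",
			Timeout:  "5s",
		},
	}
	if err := client.Agent().ServiceRegister(registration); err != nil {
		log.Fatal(err)
	}
	log.Println("producer registered with Consul")
}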

Service invocation

When a Consumer wants to call the Producer, it first retrieves from Consul a temporary table containing the IP addresses and ports of the Producer service, selects one IP and port from that table, and sends its request there. The temp table contains only Producers that have passed the health check and is refreshed every 10 seconds (by default).
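
On the Consumer side, the lookup described above could be sketched as follows (again with the Go client; "producer" is the hypothetical service name registered earlier, and random selection stands in for a real load-balancing strategy):

package main

import (
	"fmt"
	"log"
	"math/rand"

	"github.com/hashicorp/consul/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Get only instances that have passed the health check (the "temp table").
	entries, _, err := client.Health().Service("producer", "", true, nil)
	if err != nil {
		log.Fatal(err)
	}
	if len(entries) == 0 {
		log.Fatal("no healthy producer instances found")
	}

	// Pick one instance at random as a simple load-balancing strategy.
	chosen := entries[rand.Intn(len(entries))]
	fmt.Printf("calling producer at %s:%d\n", chosen.Service.Address, chosen.Service.Port)
}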

Consul installation

Eureka is essentially a Servlet application that runs inside a Servlet container, whereas Consul is a standalone third-party tool written in Go that must be installed and run separately.

Download

Visit Consul’s website at www.consul.io to download the latest Consul version.


The download page offers builds for multiple operating systems and architectures.


Installation

To practice installing Consul in different environments, we will set up a single node on Windows and a cluster on Linux.

Single node

The downloaded package contains only a single executable file, consul.exe.

cd into the directory containing consul.exe and run the following command in CMD to start Consul in development mode (-dev starts a single all-in-one agent, and -client=0.0.0.0 makes it listen on all network interfaces; -server would instead run the agent in server mode):

consul agent -dev -client=0.0.0.0

To make startup easier, you can also create a .bat script in the same directory as consul.exe with the following contents:

consul agent -dev -client=0.0.0.0
pause

Visit the admin UI at http://localhost:8500/. If the Consul web console appears, our Consul service has started successfully.
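
Besides opening the web UI, you can also verify the dev agent through its HTTP API. A minimal sketch with the official Go client, assuming the agent listens on the default localhost:8500:

package main

import (
	"fmt"
	"log"

	"github.com/hashicorp/consul/api"
)

func main() {
	// DefaultConfig points at http://127.0.0.1:8500, where the dev agent listens.
	client, err := api.NewClient(api.DefaultConfig())
	if err != nil {
		log.Fatal(err)
	}

	// Ask the agent for the current Raft leader; a non-empty answer means the agent is up.
	leader, err := client.Status().Leader()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("Consul is running, leader:", leader)
}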