Some of the more popular registries are Eureka, ZooKeeper, Consul, and Nacos. Recently I studied the overall framework and implementation of these four registries, and looked at the concrete implementation of service registration and subscription at the source-code level, mainly for Nacos. Finally, the differences between the four registries are compared.

1. Eureka

The Eureka Client in the upper left corner is a service provider: it registers and updates its own information with the Eureka Server, and it gets information about other services from the Eureka Server registry. There are four specific operations as follows:

  • Register: the Client registers its metadata with the Server for service discovery;
  • Renew: the Client keeps its service instance metadata in the registry valid by sending heartbeats to the Server. If the Server does not receive a heartbeat from the Client within a certain period, it takes the instance offline by default and deletes the instance's information from the registry;
  • Cancel: the Client asks the Server to remove its service instance metadata on shutdown, after which the instance data is deleted from the Server's registry;
  • Get Registry: the Client requests registry information from the Server for service discovery and then initiates remote calls between services (a conceptual sketch of these operations follows the list).
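
The sketch below summarizes these four operations as a client-side interface. This is illustrative only and not the real Netflix Eureka client API (the actual client, com.netflix.discovery.DiscoveryClient, performs these operations internally against the Server's REST endpoints); the interface and method names are assumptions made for this article.

import com.netflix.appinfo.InstanceInfo;
import com.netflix.discovery.shared.Applications;

// Illustrative summary of the four client-side operations (not the real client API).
interface EurekaClientOperations {
    void register(InstanceInfo self);             // Register: send instance metadata to the Server
    void renew(String appId, String instanceId);  // Renew: periodic heartbeat to keep the lease valid
    void cancel(String appId, String instanceId); // Cancel: remove the instance on shutdown
    Applications getRegistry();                   // Get Registry: fetch the registry for discovery
}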

Eureka Server Service Registry: Provides service registration and discovery capabilities. Each Eureka Client registers its own information with Eureka Server, and can also obtain information of other services through Eureka Server to discover and call other services.

Eureka Client Service Consumer: retrieves the information that other services have registered on Eureka Server and uses it to find the required service and make a remote call.

Replicate: registry information is synchronously replicated between Eureka Servers so that service instance information is consistent across the registries in the Eureka Server cluster. Because replication between cluster members is carried out over HTTP and networks are unreliable, there are inevitably moments when the registry information held by different Eureka Servers differs, so Eureka does not satisfy the C (consistency) in CAP.

Make Remote Call: A Remote Call between service clients.

2. ZooKeeper

2.1 Overall framework of ZooKeeper

  • Leader: the core of the ZooKeeper cluster, the only scheduler and processor of transaction requests (write operations), which guarantees the ordering of transactions in the cluster; it also schedules the various services inside the cluster. Write requests such as CREATE, SET DATA, and DELETE must be forwarded to the Leader, which assigns them a sequence number and executes them in order; this process is called a transaction.
  • Follower: handles client non-transactional (read) requests, forwards transaction requests to the Leader, and participates in Leader election within the cluster.
  • Observer: a role introduced for ZooKeeper clusters with high read traffic. An Observer synchronizes the latest state of the cluster, handles non-transactional requests independently, and forwards transactional requests to the Leader; it does not take part in any form of voting. It is usually used to increase the cluster's non-transactional (read) capacity, and hence its concurrency, without affecting its transactional capacity.

2.2 ZooKeeper storage structure

The following diagram depicts the tree structure of ZooKeeper's in-memory file system. A ZooKeeper node is called a ZNode. Each ZNode is identified by a name, and names are separated by the path separator (/). In the diagram, the tree starts from the root ZNode "/". Under the root there are two logical namespaces: config and workers. The config namespace is used for centralized configuration management, and the workers namespace is used for naming.

Under the config namespace, each ZNode can store up to 1MB of data. This is similar to a UNIX file system, except that the parent ZNode can also store data. The main purpose of this structure is to store synchronized data and describe the ZNode metadata. This structure is called the ZooKeeper data model. Each node in the ZooKeeper namespace is identified by a path.

ZNode has both file and directory features. It not only maintains data structures such as data length, meta information, ACL and timestamp like a file, but also can be used as a part of path identification like a directory:

  • Version Number – Each ZNode has a version number, which means that whenever the data associated with a ZNode changes, its corresponding version number increases. The use of version numbers is important when multiple ZooKeeper clients are trying to perform operations on the same ZNode.
  • Access Control List (ACL) – The ACL is basically the authentication mechanism for accessing a ZNode. It governs all ZNode reads and writes.
  • Timestamp – The timestamp represents the time of creating and modifying a ZNode, usually measured in milliseconds. ZooKeeper identifies each change to a ZNode by its transaction ID (ZXID). The ZXID is unique and records the time of each transaction, so you can easily determine the elapsed time from one request to another.
  • Data Length – The total amount of data stored in ZNode is the data length. You can store up to 1MB of data.

ZooKeeper also has the concept of ephemeral nodes. Such ZNodes exist as long as the session that created them is active; when the session ends, the ZNode is deleted.
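
A minimal sketch of creating an ephemeral ZNode with the standard ZooKeeper Java client; the connection string, timeout, and paths are example values, and the parent /workers node is assumed to exist already:

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class EphemeralNodeExample {
    public static void main(String[] args) throws Exception {
        // Connect to a local ZooKeeper (address and session timeout are example values)
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 5000, event -> { });
        // The ephemeral node lives only as long as this session; it is removed when the session ends
        zk.create("/workers/worker-1", "192.168.0.10:8080".getBytes(),
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    }
}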

2.3 Monitoring function of ZooKeeper

ZooKeeper supports the concept of a watch: a client can set a watch on a ZNode. When the ZNode changes, the watch is triggered and then removed, and the client receives a packet indicating that the ZNode has changed. If the connection between the client and one of the ZooKeeper servers is broken, the client receives a local notification. New in 3.6.0: the client can also set persistent, recursive watches on a ZNode; these are not removed when triggered and recursively fire for changes to the registered ZNode and all of its child ZNodes.
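
A minimal sketch of both watch styles with the standard ZooKeeper Java client; the connection string and paths are example values, and addWatch requires ZooKeeper 3.6.0 or later:

import org.apache.zookeeper.AddWatchMode;
import org.apache.zookeeper.ZooKeeper;

public class WatchExample {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 5000, event -> { });

        // One-time watch: fires once when /config/db changes, then must be re-registered
        byte[] data = zk.getData("/config/db", event ->
                System.out.println("znode changed: " + event), null);

        // 3.6.0+ persistent recursive watch: stays registered and also covers all child znodes
        zk.addWatch("/config", event ->
                System.out.println("subtree changed: " + event), AddWatchMode.PERSISTENT_RECURSIVE);
    }
}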

2.4 ZooKeeper election process

A ZooKeeper cluster generally needs at least three nodes to tolerate failures, and a ZooKeeper node is usually considered to be in one of four states:

  • Looking: Represents the node that is in the process of election. In this state, it needs to enter the election process
  • Leading: The node in which the role is already the Leader
  • Following: The follower status indicates that the Leader has been elected and the current node role is follower
  • Observer: the current role of the node is Observer. An Observer does not take part in the election and only accepts its result; in other words, it does not become the Leader but provides services like a Follower node.

The Leader selection process is shown in the figure below:

In the cluster initialization stage, when only one server, Zk1, is started, the Leader election cannot be conducted and completed on its own. When the second server, Zk2, starts, the two machines can communicate with each other, and each tries to find the Leader, so they enter the Leader election process. The election proceeds as follows:

(1) Each Server issues one vote. Since it is the initial situation, Zk1 and Zk2 will vote themselves as the Leader server, and each vote will contain the server’s ID and ZxID (transaction ID), which are represented by (ID, ZxID). At this time, the vote of Zk1 is (1, 0), and the vote of Zk2 is (2, 0). Each then sends the vote to the other machines in the cluster.

(2) Accept votes from the other servers. After each server in the cluster receives a vote, it first checks the vote's validity, for example whether it belongs to the current round and whether it comes from a server in the LOOKING state.

(3) Processing of votes. For each vote, the server needs to compare others’ votes with its own. The rules are as follows:

  • Compare the ZXID first: the server with the larger ZXID is preferred as the Leader;
  • If the ZXIDs are the same, compare the server IDs: the server with the larger ID serves as the Leader (see the sketch below).
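
A minimal sketch of this comparison rule (not the actual FastLeaderElection code; a vote is reduced to a (serverId, zxid) pair for illustration):

public class VoteComparison {
    // Returns true if the received vote (otherId, otherZxid) should replace our current vote
    static boolean shouldUpdateVote(long myZxid, long myId, long otherZxid, long otherId) {
        if (otherZxid != myZxid) {
            return otherZxid > myZxid;   // the larger ZXID wins (more up-to-date data)
        }
        return otherId > myId;           // same ZXID: the larger server ID wins
    }
}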

For Zk1, its own vote is (1, 0) and the vote received from Zk2 is (2, 0). The ZXIDs are compared first; both are 0, so the server IDs are compared. Zk2's ID is larger, so Zk2 wins. Zk1 updates its own vote to (2, 0) and resends it to Zk2.

(4) Count the votes. After each round of voting, the server counts the votes to determine whether more than half of the machines have accepted the same vote. For Zk1 and Zk2, two machines in the cluster have accepted the vote (2, 0), so Zk2 is considered elected as the Leader.

(5) Change the server state. Once the Leader is determined, each server updates its state: FOLLOWING if it is a Follower, LEADING if it is the Leader. When a new ZooKeeper node Zk3 is later started and finds that a Leader already exists, it changes its state directly from LOOKING to FOLLOWING instead of starting an election.

3. Consul

3.1 Consul overall framework

Consul supports multiple data centers. There are two data centers in the figure above; they are connected over the Internet via WAN gossip, and to improve communication efficiency only Server nodes take part in cross-data-center communication. Consul can therefore support WAN-based synchronization between multiple data centers.

Within a single data center, Consul divides nodes into Client and Server nodes (all nodes are also referred to as agents).

  • Server nodes: participate in consensus arbitration, store cluster state (log storage), handle queries, and maintain relationships with neighboring (LAN/WAN) nodes
  • Client nodes: perform health checks for the microservices registered through them, turn registration requests and queries into RPC requests to the Server, and maintain relationships with surrounding (LAN/WAN) nodes

Clients and Servers communicate with each other via gRPC. In addition, there is LAN gossip between Servers and Clients, which is used to notify the remaining nodes of LAN topology changes; for example, when a Server node goes down, the Clients remove that Server from their list of available servers. All Server nodes together form a cluster that runs the Raft protocol and elects a Leader by consensus arbitration. All business data is written through the Leader for persistence; once more than half of the nodes have stored the data, the Server cluster returns an ACK, which guarantees strong consistency of the data. Of course, a large number of Servers also reduces write efficiency. All Followers follow the Leader so that they hold up-to-date copies of the data. Consul nodes within a cluster maintain membership through the gossip protocol, i.e. which nodes are still in the cluster and whether they are Clients or Servers.

The gossip protocol within a single data center uses both TCP and UDP on port 8301. The gossip protocol across data centers also uses both TCP and UDP, on port 8302. Read and write requests to the cluster can either be sent directly to a Server or forwarded to a Server by a Client via RPC; the request ultimately reaches the Leader node.
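
As an illustration of how a service is registered with the local agent, the sketch below calls Consul's HTTP API endpoint /v1/agent/service/register (default HTTP port 8500) using Java's built-in HttpClient; the service name, ID, address, and port are example values:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulRegisterExample {
    public static void main(String[] args) throws Exception {
        // Example payload describing the service instance to register
        String payload = "{\"ID\":\"order-1\",\"Name\":\"order-service\","
                + "\"Address\":\"192.168.0.10\",\"Port\":8080}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://127.0.0.1:8500/v1/agent/service/register"))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("register status: " + response.statusCode());
    }
}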

4. Nacos

4.1 Overall framework of NACOS

When a service registers, the client polls the configured registry cluster node addresses and registers with one of them. On the registry side, the Nacos Server keeps instance information in a Map in memory; instances configured as persistent are also saved to a database. On the caller side, to keep the local list of service instances up to date, Nacos differs from other registries in that it works in a combined Pull/Push manner.

4.2 Nacos election

The Nacos cluster is similar to ZooKeeper in that it is divided into Leader and Follower roles. As these role names suggest, the cluster has an election mechanism; if the roles were assigned statically rather than elected, they would typically be named Master/Slave instead.

Election algorithm:

Nacos clustering is implemented with the Raft algorithm, which is a relatively simple election algorithm compared with ZooKeeper's. The core of the election algorithm is in RaftCore, which also covers data processing and data synchronization.

In RAFT, nodes have three roles:

  • Leader: Responsible for receiving client requests
  • Candidate: A role used to elect a Leader (election status)
  • Followers: responsible for responding to requests from the Leader or Candidate

When all nodes start up, they are in the Follower state. If a Follower does not receive a heartbeat from the Leader for a period of time (either because there is no Leader yet or because the Leader has failed), it becomes a Candidate and launches an election; before the election it increments its term, which plays the same role as the epoch in ZooKeeper.

The Candidate votes for itself and sends its vote to the other nodes. When the other nodes reply, several situations may occur:

  • If it receives votes from more than half of the nodes, it becomes the Leader;
  • If it is told that another node has already become the Leader, it switches back to Follower;
  • If no one obtains more than half of the votes within a certain period, a new election is started. Constraint: a node can cast at most one vote in any given term.

In the first case, after winning the election, the Leader sends a message (heartbeat) to all nodes to prevent other nodes from triggering a new election.

In the second case, say there are three nodes A, B, and C. A and B start an election at the same time, but A's election message reaches C first, so C votes for A. When B's message reaches C, the constraint above can no longer be satisfied, so C does not vote for B, and A and B obviously do not vote for each other. After A wins, it sends a heartbeat message to B and C; node B sees that node A's term is not lower than its own, knows that there is already a Leader, and becomes a Follower.

In the third case, no node obtains a majority of the votes, which may be a tie. For example, with four nodes (A/B/C/D), nodes C and D are both Candidates, but node A votes for node D and node B votes for node C, resulting in a tie. At this point everyone waits until a timeout expires and the election is started again. If ties kept happening, the system would be unavailable for a long time, so Raft introduces randomized election timeouts to make ties unlikely (see the sketch below).
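
A minimal sketch of a randomized election timeout; the 150–300 ms range is the example range from the Raft paper, not a value taken from the Nacos source:

import java.util.concurrent.ThreadLocalRandom;

public class ElectionTimeout {
    // Each follower waits a different random time before becoming a Candidate,
    // so two nodes rarely start an election at exactly the same moment.
    static long nextTimeoutMillis() {
        return ThreadLocalRandom.current().nextLong(150, 300);
    }

    public static void main(String[] args) {
        System.out.println("this node's election timeout: " + nextTimeoutMillis() + " ms");
    }
}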

4.3 NACOS service registration process source code

The Nacos source code can be downloaded from https://github.com/alibaba/nacos; the version used here is the latest at the time of writing, 2.0.0-bugfix (Mar 30th, 2021).

When registration is required, Spring Cloud injects the NacosServiceRegistry instance:

@Override
public void registerInstance(String serviceName, String groupName, Instance instance) throws NacosException {
    NamingUtils.checkInstanceIsLegal(instance);
    String groupedServiceName = NamingUtils.getGroupedName(serviceName, groupName);
    // Add heartbeat information for ephemeral instances
    if (instance.isEphemeral()) {
        BeatInfo beatInfo = beatReactor.buildBeatInfo(groupedServiceName, instance);
        beatReactor.addBeatInfo(groupedServiceName, beatInfo);
    }
    // Call the service proxy class to register
    serverProxy.registerService(groupedServiceName, groupName, instance);
}

The registerService method is then called to register; it constructs the request parameters and initiates the request.

public void registerService(String serviceName, String groupName, Instance instance) throws NacosException {

        NAMING_LOGGER.info("[REGISTER-SERVICE] {} registering service {} with instance: {}", namespaceId, serviceName,
                instance);

        final Map<String, String> params = new HashMap<String, String>(16);
        params.put(CommonParams.NAMESPACE_ID, namespaceId);
        params.put(CommonParams.SERVICE_NAME, serviceName);
        params.put(CommonParams.GROUP_NAME, groupName);
        params.put(CommonParams.CLUSTER_NAME, instance.getClusterName());
        params.put("ip", instance.getIp());
        params.put("port", String.valueOf(instance.getPort()));
        params.put("weight", String.valueOf(instance.getWeight()));
        params.put("enable", String.valueOf(instance.isEnabled()));
        params.put("healthy", String.valueOf(instance.isHealthy()));
        params.put("ephemeral", String.valueOf(instance.isEphemeral()));
        params.put("metadata", JacksonUtils.toJson(instance.getMetadata()));

        reqApi(UtilAndComs.nacosUrlInstance, params, HttpMethod.POST);

    }

Entering the reqApi method, we can see that the service polls the configured registry address during registration:

public String reqApi(String api, Map<String, String> params, Map<String, String> body, List<String> servers,
        String method) throws NacosException {

    params.put(CommonParams.NAMESPACE_ID, getNamespaceId());

    if (CollectionUtils.isEmpty(servers) && StringUtils.isBlank(nacosDomain)) {
        throw new NacosException(NacosException.INVALID_PARAM, "no server available");
    }

    NacosException exception = new NacosException();
    // If a single Nacos domain is configured, retry against it up to maxRetry times
    if (StringUtils.isNotBlank(nacosDomain)) {
        for (int i = 0; i < maxRetry; i++) {
            try {
                return callServer(api, params, body, nacosDomain, method);
            } catch (NacosException e) {
                exception = e;
                if (NAMING_LOGGER.isDebugEnabled()) {
                    NAMING_LOGGER.debug("request {} failed.", nacosDomain, e);
                }
            }
        }
    } else {
        // Otherwise start from a random server and poll the configured server list
        Random random = new Random(System.currentTimeMillis());
        int index = random.nextInt(servers.size());

        for (int i = 0; i < servers.size(); i++) {
            String server = servers.get(index);
            try {
                return callServer(api, params, body, server, method);
            } catch (NacosException e) {
                exception = e;
                if (NAMING_LOGGER.isDebugEnabled()) {
                    NAMING_LOGGER.debug("request {} failed.", server, e);
                }
            }
            // Move on to the next server in the list
            index = (index + 1) % servers.size();
        }
    }

    throw new NacosException(exception.getErrCode(),
            "failed to req API:" + api + " after all servers(" + servers + ") tried: " + exception.getErrMsg());
}

Finally, the call is initiated through callServer(api, params, server, method):

public String callServer(String api, Map<String, String> params, Map<String, String> body, String curServer,
        String method) throws NacosException {
    long start = System.currentTimeMillis();
    long end = 0;
    injectSecurityInfo(params);
    Header header = builderHeader();

    String url;
    // Build the full URL before sending the HTTP request
    if (curServer.startsWith(UtilAndComs.HTTPS) || curServer.startsWith(UtilAndComs.HTTP)) {
        url = curServer + api;
    } else {
        if (!IPUtil.containsPort(curServer)) {
            curServer = curServer + IPUtil.IP_PORT_SPLITER + serverPort;
        }
        url = NamingHttpClientManager.getInstance().getPrefix() + curServer + api;
    }
    // ... the rest of the method sends the request and handles the response
}

Nacos server-side processing:

The server side provides an InstanceController class, which exposes the API for service registration:

@CanDistro
@PostMapping
@Secured(parser = NamingResourceParser.class, action = ActionTypes.WRITE)
public String register(HttpServletRequest request) throws Exception {

    final String namespaceId = WebUtils
            .optional(request, CommonParams.NAMESPACE_ID, Constants.DEFAULT_NAMESPACE_ID);
    final String serviceName = WebUtils.required(request, CommonParams.SERVICE_NAME);
    NamingUtils.checkServiceNameFormat(serviceName);

    final Instance instance = parseInstance(request);

    serviceManager.registerInstance(namespaceId, serviceName, instance);
    return "ok";
}

The ServiceManager is then invoked to register the service

public void registerInstance(String namespaceId, String serviceName, Instance instance) throws NacosException {
    // Create an empty service in the serviceMap if it does not exist yet
    createEmptyService(namespaceId, serviceName, instance.isEphemeral());

    Service service = getService(namespaceId, serviceName);

    if (service == null) {
        throw new NacosException(NacosException.INVALID_PARAM,
                "service not found, namespace: " + namespaceId + ", service: " + serviceName);
    }

    addInstance(namespaceId, serviceName, instance.isEphemeral(), instance);
}

The empty service is created if it does not exist yet (createEmptyService delegates to createServiceIfAbsent):

public void createServiceIfAbsent(String namespaceId, String serviceName, boolean local, Cluster cluster)
        throws NacosException {
    Service service = getService(namespaceId, serviceName);
    // Only create the service when it does not exist yet
    if (service == null) {
        Loggers.SRV_LOG.info("creating empty service {}:{}", namespaceId, serviceName);
        service = new Service();
        service.setName(serviceName);
        service.setNamespaceId(namespaceId);
        service.setGroupName(NamingUtils.getGroupName(serviceName));
        // now validate the service. if failed, exception will be thrown
        service.setLastModifiedMillis(System.currentTimeMillis());
        service.recalculateChecksum();
        if (cluster != null) {
            cluster.setService(service);
            service.getClusterMap().put(cluster.getName(), cluster);
        }
        service.validate();

        putServiceAndInit(service);
        if (!local) {
            addOrReplaceService(service);
        }
    }
}

The getService method uses a Map for storage:

private final Map<String, Map<String, Service>> serviceMap = new ConcurrentHashMap<>();

Nacos maintains services in different namespaces; within each namespace there are different groups, and the concrete Service is located by its group and service name. The first time a service comes in it is initialized, and after initialization putServiceAndInit is called (a conceptual sketch of the lookup follows).
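
A conceptual sketch (not the Nacos source) of how this two-level map is resolved; the inner key format "group@@serviceName" is an assumption made for illustration:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.alibaba.nacos.naming.core.Service; // Nacos naming Service class (package may differ between versions)

public class ServiceMapSketch {
    // namespace -> (grouped service name -> Service)
    private final Map<String, Map<String, Service>> serviceMap = new ConcurrentHashMap<>();

    public Service getService(String namespaceId, String groupedServiceName) {
        // e.g. namespaceId = "public", groupedServiceName = "DEFAULT_GROUP@@order-service"
        Map<String, Service> services = serviceMap.get(namespaceId);
        return services == null ? null : services.get(groupedServiceName);
    }
}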

private void putServiceAndInit(Service service) throws NacosException {
    // Put the service into the serviceMap
    putService(service);
    service = getService(service.getNamespaceId(), service.getName());
    // Set up the service (start its health-check task)
    service.init();
    // Register data-consistency listeners. ephemeral marks whether the instances are temporary
    // (default true): ephemeral = true uses the Distro protocol, ephemeral = false (persistent) uses Raft.
    consistencyService
            .listen(KeyBuilder.buildInstanceListKey(service.getNamespaceId(), service.getName(), true), service);
    consistencyService
            .listen(KeyBuilder.buildInstanceListKey(service.getNamespaceId(), service.getName(), false), service);
    Loggers.SRV_LOG.info("[NEW-SERVICE] {}", service.toJson());
}

After getting the service, the service instance is added to its instance collection, and the data is then synchronized according to the consistency protocol. addInstance is called next:

public void addInstance(String namespaceId, String serviceName, boolean ephemeral, Instance... ips)
        throws NacosException {
    // Assemble the key for the service's instance list
    String key = KeyBuilder.buildInstanceListKey(namespaceId, serviceName, ephemeral);

    Service service = getService(namespaceId, serviceName);

    synchronized (service) {
        List<Instance> instanceList = addIpAddresses(service, ephemeral, ips);

        Instances instances = new Instances();
        instances.setInstanceList(instanceList);

        // The consistency protocol implementation (Distro or Raft) persists and synchronizes the data
        consistencyService.put(key, instances);
    }
}

4.4 Nacos service subscription source code

Node subscription has different implementations in different registries, generally divided into pull and push.

Push means that when the subscribed node is updated, the change is actively pushed to the subscriber. ZK is a push implementation: the client and the server establish a long-lived TCP connection, the client registers a watcher, and when data is updated the server pushes the change over that connection. Maintaining long connections consumes server resources heavily, so when there are too many watchers and updates are frequent, ZooKeeper's performance drops sharply and the server may even crash.

Pull means that the subscriber actively fetches the server node's information at regular intervals and compares it locally, updating its copy if anything has changed. Consul also has a watch mechanism, but unlike ZK it is implemented via HTTP long polling: depending on whether the request URL carries a wait parameter, Consul Server either returns immediately or holds the request until the service changes or the specified wait time expires. This approach performs well, but its real-time behavior is weaker (a sketch of such a blocking query follows).
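
A hedged sketch of such a long-polling (blocking query) call against Consul's HTTP API using Java's built-in HttpClient; the service name and address are example values, and X-Consul-Index is the header Consul uses to version the result:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConsulBlockingQueryExample {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // The first call returns immediately and carries an X-Consul-Index header.
        // Passing that index plus a wait parameter turns the next call into a long poll:
        // Consul holds the request until the service changes or the wait time elapses.
        String index = "0";
        for (int i = 0; i < 3; i++) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://127.0.0.1:8500/v1/health/service/order-service"
                            + "?index=" + index + "&wait=30s"))
                    .GET()
                    .build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            index = response.headers().firstValue("X-Consul-Index").orElse(index);
            System.out.println("service list (index " + index + "): " + response.body());
        }
    }
}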

Nacos combines these two ideas, providing both pull and active push.

Fetching ServiceInfo from HostReactor

public ServiceInfo getServiceInfo(final String serviceName, final String clusters) {

    NAMING_LOGGER.debug("failover-mode: " + failoverReactor.isFailoverSwitch());
    String key = ServiceInfo.getKey(serviceName, clusters);
    if (failoverReactor.isFailoverSwitch()) {
        return failoverReactor.getService(key);
    }

    // Look up the provider list by key in serviceInfoMap, the client-side local cache of service addresses
    ServiceInfo serviceObj = getServiceInfo0(serviceName, clusters);

    if (null == serviceObj) {
        // Not cached yet: create a placeholder, put it into serviceInfoMap and updatingMap,
        // load the addresses immediately, then remove it from updatingMap
        serviceObj = new ServiceInfo(serviceName, clusters);

        serviceInfoMap.put(serviceObj.getKey(), serviceObj);

        updatingMap.put(serviceName, new Object());
        // Load the service address information from the Nacos server right away
        updateServiceNow(serviceName, clusters);
        updatingMap.remove(serviceName);

    } else if (updatingMap.containsKey(serviceName)) {
        // The cached entry is currently being updated: wait up to UPDATE_HOLD_INTERVAL for it to finish
        if (UPDATE_HOLD_INTERVAL > 0) {
            // hold a moment waiting for update finish
            synchronized (serviceObj) {
                try {
                    serviceObj.wait(UPDATE_HOLD_INTERVAL);
                } catch (InterruptedException e) {
                    NAMING_LOGGER
                            .error("[getServiceInfo] serviceName:" + serviceName + ", clusters:" + clusters, e);
                }
            }
        }
    }

    // Schedule a task (if not already present) that refreshes this service's addresses every 10s
    scheduleUpdateIfAbsent(serviceName, clusters);

    return serviceInfoMap.get(serviceObj.getKey());
}

For the push side, Nacos records the subscribers described above in its PushService.

PushService implements ApplicationListener<ServiceChangeEvent>, so it listens for service state change events and then traverses all the clients, broadcasting the message via the UDP protocol:

public void onApplicationEvent(ServiceChangeEvent event) {
    Service service = event.getService();
    String serviceName = service.getName();
    String namespaceId = service.getNamespaceId();
    // Schedule a UDP push task for the changed service
    Future future = GlobalExecutor.scheduleUdpSender(() -> {
        try {
            Loggers.PUSH.info(serviceName + " is changed, add it to push queue.");
            ConcurrentMap<String, PushClient> clients = clientMap
                    .get(UtilsAndCommons.assembleFullServiceName(namespaceId, serviceName));
            if (MapUtils.isEmpty(clients)) {
                return;
            }

            Map<String, Object> cache = new HashMap<>(16);
            long lastRefTime = System.nanoTime();
            for (PushClient client : clients.values()) {
                if (client.zombie()) {
                    Loggers.PUSH.debug("client is zombie: " + client.toString());
                    clients.remove(client.toString());
                    continue;
                }

                Receiver.AckEntry ackEntry;
                Loggers.PUSH.debug("push serviceName: {} to client: {}", serviceName, client.toString());
                String key = getPushCacheKey(serviceName, client.getIp(), client.getAgent());
                byte[] compressData = null;
                Map<String, Object> data = null;
                if (switchDomain.getDefaultPushCacheMillis() >= 20000 && cache.containsKey(key)) {
                    org.javatuples.Pair pair = (org.javatuples.Pair) cache.get(key);
                    compressData = (byte[]) (pair.getValue0());
                    data = (Map<String, Object>) pair.getValue1();

                    Loggers.PUSH.debug("[PUSH-CACHE] cache hit: {}:{}", serviceName, client.getAddrStr());
                }

                if (compressData != null) {
                    ackEntry = prepareAckEntry(client, compressData, data, lastRefTime);
                } else {
                    ackEntry = prepareAckEntry(client, prepareHostsData(client), lastRefTime);
                    if (ackEntry != null) {
                        cache.put(key, new org.javatuples.Pair<>(ackEntry.origin.getData(), ackEntry.data));
                    }
                }

                Loggers.PUSH.info("serviceName: {} changed, schedule push for: {}, agent: {}, key: {}",
                        client.getServiceName(), client.getAddrStr(), client.getAgent(),
                        (ackEntry == null ? null : ackEntry.key));
                // Execute the UDP push
                udpPush(ackEntry);
            }
        } catch (Exception e) {
            Loggers.PUSH.error("[NACOS-PUSH] failed to push serviceName: {} to client, error: {}", serviceName, e);

        } finally {
            futureMap.remove(UtilsAndCommons.assembleFullServiceName(namespaceId, serviceName));
        }

    }, 1000, TimeUnit.MILLISECONDS);

    futureMap.put(UtilsAndCommons.assembleFullServiceName(namespaceId, serviceName), future);
}

The service consumer needs to set up a UDP listener at this point, otherwise the server cannot push data to it. This listener is initialized in the HostReactor constructor (a conceptual sketch follows).
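
A conceptual sketch (not the Nacos client source, which does this inside HostReactor/PushReceiver) of what such a UDP listener looks like: receive the pushed data and reply with an ACK so the server stops resending. The ACK payload format here is purely illustrative:

import java.net.DatagramPacket;
import java.net.DatagramSocket;

public class UdpPushReceiverSketch {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket socket = new DatagramSocket(0)) { // the real client reports this port when subscribing
            byte[] buffer = new byte[64 * 1024];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet);                          // blocks until the server pushes
                String pushData = new String(packet.getData(), 0, packet.getLength());
                System.out.println("received push: " + pushData);
                byte[] ack = "{\"type\":\"push-ack\"}".getBytes(); // illustrative ACK payload
                socket.send(new DatagramPacket(ack, ack.length, packet.getSocketAddress()));
            }
        }
    }
}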

Compared with ZooKeeper's long-lived TCP connections, Nacos's UDP push saves a lot of resources; even a large number of node updates will not create a serious performance bottleneck for Nacos. In Nacos, when the client receives a UDP message it returns an ACK; if the Nacos server does not receive the ACK within a certain time it resends the message, and after a certain period it stops resending. Nacos also has the periodic polling as a fallback, so there is no need to worry about data not being updated.

Through these two mechanisms, Nacos ensures both real-time updates and that no data update is missed.

5. Comparison of the four registries

Each of the four registries has its own characteristics. Their differences can be compared clearly in the following table:
