Preface

This is the final chapter of the Nacos source-reading series. It summarizes the Nacos configuration center and registry.

  • Configuration center: configuration query, configuration listening, configuration publishing, configuration injection
  • Registry: Service registry, service discovery, Health check, Distro protocol

I. Configuration center

Namespace (Tenant): the namespace (tenant). The default namespace is public. A namespace can contain multiple Groups.

Group: Group. The default Group is DEFAULT_GROUP. A group can contain more than one dataId.

DataId: in Nacos, a DataId represents an entire configuration file and is the smallest unit of configuration.

1.4.X

Configuration query

From the client’s point of view.

The client first checks whether failover is enabled. If it is, the failover configuration on the local disk is read.

In most cases failover is disabled, and the real-time configuration is queried from the server (/v1/cs/configs). After each query, the configuration is written to the snapshot file on the local disk.

If the real-time query against the server fails and the returned code is not 403 Forbidden, the configuration in the local snapshot file is returned instead.
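
The client-side fallback order can be summarized with the following sketch. localStore, serverAgent, and ForbiddenException are simplified stand-ins for LocalConfigInfoProcessor, the HTTP agent, and the 403 error in nacos-client, not the exact API:

// A minimal sketch of the client-side query order described above (names are assumptions)
public String getConfig(String dataId, String group, String tenant) throws Exception {
    // 1. Failover first: if a failover file exists on the local disk, it wins
    String failover = localStore.getFailover(dataId, group, tenant);
    if (failover != null) {
        return failover;
    }
    try {
        // 2. Normal path: query the server in real time (/v1/cs/configs) and refresh the local snapshot
        String content = serverAgent.queryConfig(dataId, group, tenant);
        localStore.saveSnapshot(dataId, group, tenant, content);
        return content;
    } catch (ForbiddenException e) {
        throw e; // 403 Forbidden is not downgraded to the snapshot
    } catch (Exception e) {
        // 3. Any other failure falls back to the last local snapshot
        return localStore.getSnapshot(dataId, group, tenant);
    }
}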

From a server-side perspective, there are two scenarios.

// ConfigServletInner.doGetConfig
if (PropertyUtil.isDirectRead()) { // #1
    configInfoBase = persistService.findConfigInfo(dataId, group, tenant);
} else { // #2
    file = DiskUtil.targetFile(dataId, group, tenant);
}
// PropertyUtil
public static boolean isDirectRead() {
    return EnvUtil.getStandaloneMode() && isEmbeddedStorage();
}

The first case is -Dnacos.standalone=true -DembeddedStorage=true, i.e. standalone mode with the embedded data source. Here data is returned directly from the Derby data source.

In other cases, such as standalone MySQL, cluster MySQL, or cluster Derby, the configuration is read in real time from the current node's file system and returned.

Configuration listening

From the client’s point of view.

addListener only registers the listener on the in-memory configuration item CacheData; it does not actually communicate with the server (unlike the registry's service listening).

// CacheData.java
// Listeners registered on the tenant-group-dataId configuration
private final CopyOnWriteArrayList<ManagerListenerWrap> listeners;

public void addListener(Listener listener) {
    ManagerListenerWrap wrap =
            (listener instanceof AbstractConfigChangeListener) ? new ManagerListenerWrap(listener, md5, content)
                    : new ManagerListenerWrap(listener, md5);

    if (listeners.addIfAbsent(wrap)) {
        // LOGGER.info(...): log that the listener was registered
    }
}

When md5 in a client CacheData changes, all listeners corresponding to CacheData are notified.

// CacheData.java
// MD5 of the current configuration content
private volatile String md5;

void checkListenerMd5() {
    for (ManagerListenerWrap wrap : listeners) {
        // Compare the md5 in CacheData with the md5 last seen by the listener
        if (!md5.equals(wrap.lastCallMd5)) {
            // If they differ, trigger the listener
            safeNotifyListener(dataId, group, content, type, md5, wrap);
        }
    }
}

The real listening logic belongs to the client’s long polling logic.

  1. For every 3000 CacheData, the client starts a LongPollingRunnable long polling task that requests the server's /v1/cs/configs/listener to listen for configuration changes. The long polling timeout is 30s.

  2. After the configuration is changed, the server pushes the configuration item (groupKey) to the client through long polling.

  3. For each changed configuration item, the client queries the server's /v1/cs/configs for the real-time configuration, using the configuration query logic above.

  4. The configuration is updated into CacheData. If the configuration's MD5 changed, the corresponding Listener is triggered (a sketch of this loop follows).
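
A minimal sketch of one round of that loop, with simplified helper names (the real logic lives in ClientWorker and LongPollingRunnable):

// One round of the client long polling task, simplified (helper names are assumptions)
public void run() {
    try {
        // 1. Long poll /v1/cs/configs/listener with the md5 of every CacheData in this batch;
        //    the server holds the request for up to 30s and answers with the changed groupKeys
        List<String> changedGroupKeys = longPollListener(cacheDataBatch);

        // 2/3. For each changed groupKey, query the real-time configuration via /v1/cs/configs
        for (String groupKey : changedGroupKeys) {
            CacheData cache = cacheMap.get(groupKey);
            String content = queryServerConfig(cache.dataId, cache.group, cache.tenant);
            cache.setContent(content); // also recomputes the md5
        }

        // 4. A changed md5 triggers the listeners registered on each CacheData
        for (CacheData cache : cacheDataBatch) {
            cache.checkListenerMd5();
        }
    } finally {
        executor.execute(this); // schedule the next round; the 30s long poll paces the loop
    }
}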

From a server-side perspective.

/v1/cs/configs/listener leverages the Servlet 3.0 AsyncContext for long polling.

// LongPollingService
public void addLongPollingClient(HttpServletRequest req, HttpServletResponse rsp, Map<String, String> clientMd5Map, int probeRequestSize) {
    String str = req.getHeader(LongPollingService.LONG_POLLING_HEADER);
    String noHangUpFlag = req.getHeader(LongPollingService.LONG_POLLING_NO_HANG_UP_HEADER);
    String appName = req.getHeader(RequestUtil.CLIENT_APPNAME_HEADER);
    // Determine the actual timeout for long polling
    int delayTime = SwitchService.getSwitchInteger(SwitchService.FIXED_DELAY_TIME, 500);
    long timeout = Math.max(10000, Long.parseLong(str) - delayTime);
    if (isFixedPolling()) {
        timeout = Math.max(10000, getFixedPollingInterval());
    } else {
        // Check whether any configuration items have changed by using md5 in memory cache. If so, return immediately
        List<String> changedGroups = MD5Util.compareMd5(req, rsp, clientMd5Map);
        if (changedGroups.size() > 0) {
            generateResponse(req, rsp, changedGroups);
            return;
        } else if (noHangUpFlag != null && noHangUpFlag.equalsIgnoreCase(TRUE_STR)) {
            // If the long polling request header contains Long-Pulling-Timeout-No-Hangup, return 200 immediately
            return;
        }
    }
    String ip = RequestUtil.getRemoteIp(req);
    // Open the AsyncContext
    final AsyncContext asyncContext = req.startAsync();
    asyncContext.setTimeout(0L);
    // Submit the long polling task to another thread
    ConfigExecutor.executeLongPolling(
            new ClientLongPolling(asyncContext, clientMd5Map, ip, probeRequestSize, timeout, appName, tag));
}

/v1/cs/configs/listener

  1. Compare the MD5 of each listened configuration item carried in the request; if any has changed, immediately return the groupKeys of the changed items
  2. If no configuration item has changed, submit a ClientLongPolling task to an asynchronous thread

ClientLongPolling first submits a timeout-detection task whose delay corresponds to the long polling timeout. When the timeout fires, empty data is returned to the client, indicating that no configuration has changed.

class ClientLongPolling implements Runnable {
    @Override
    public void run() {
        // 1. Submit the timeout processing task
        asyncTimeoutFuture = ConfigExecutor.scheduleLongPolling(new Runnable() {
            @Override
            public void run() {
                // ...
            }
        }, timeoutTime, TimeUnit.MILLISECONDS);
        // ...
    }
}

Configuration publishing

There are two ways to respond to the client’s long polling: one is to wait for 30 seconds and directly return empty data to the client; the other is to respond to the client through publishing configurations.

When starting in non-clustered Derby mode, the configuration publishing process is as follows:

  1. POST /v1/cs/configs updates the database and responds to the client.
  2. Asynchronously request /v1/cs/communication/dataChange on every cluster node (including the current node) to perform configuration synchronization.
  3. Each synchronized node dumps the configuration to its local file system and updates the MD5 of the configuration item in memory.
  4. Each synchronized node checks whether any client long poll is listening on the changed configuration item; if so, it responds to that long poll with the changed groupKey (a conceptual sketch follows this list).
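
A conceptual sketch of steps 3 and 4 on each synchronized node. Method names here are assumptions, not the exact server code:

// What a node roughly does when /v1/cs/communication/dataChange arrives (names are assumptions)
public void onDataChange(String dataId, String group, String tenant) {
    // Step 3: dump: read the latest configuration from the database, write it to the local
    // file system, and refresh the md5 of this groupKey in the in-memory cache
    String content = findConfigFromDatabase(dataId, group, tenant);
    saveToLocalDisk(dataId, group, tenant, content);
    updateMd5InMemory(groupKey(dataId, group, tenant), md5(content));

    // Step 4: if any client long poll is listening on this groupKey, respond immediately
    // with the changed groupKey instead of letting it wait for the 30s timeout
    notifyLongPollingClients(groupKey(dataId, group, tenant));
}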

When started in clustered Derby mode, the configuration publication process introduces the JRaft framework, which uses the Raft consistency algorithm for read and write consistency.

For configuration publishing, clustered Derby differs from the above process in the write-to-database step. Since each node stores a copy of the data, the write goes through Raft: once the log is committed on more than half of the nodes, it is applied to the local state machine, the Derby database.

In addition, the dump process needs to read the real-time configuration by configuration item; this read is the linearizable read implemented by JRaft.

2.x

The main change in the 2.x configuration center is the introduction of long connections, replacing short-connection long polling.

Client changes:

Change Point 1:

In 1.x, the client starts one LongPollingRunnable long polling task for every 3000 CacheData. In 2.x, the client starts one RpcClient for every 3000 CacheData, and each RpcClient establishes a long connection with the server.

Change 2:

The client adds logic for a scheduled full pull of the configuration.

In 1.x, the Nacos configuration center updates the client configuration through a long polling mode, with only configuration push for the client.

2.x adds periodic full synchronization on the client side, so 2.x combines push and pull.

Pull: every 5 minutes, the client sends a ConfigBatchListenRequest covering all CacheData. If any configuration's MD5 has changed, the client receives the changed configuration items and sends a ConfigQuery request to fetch the real-time configuration.

Push: when a configuration changes on the server, the server sends a ConfigChangeNotifyRequest over the long connections of the current node to notify the connected clients of the changed configuration items.
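
A rough sketch of the client-side pull loop; the request/response class names follow the 2.x API, but the helper methods are assumptions:

// Runs every 5 minutes per RpcClient: batch-listen on all CacheData, then query what changed
public void fullSync(RpcClient rpcClient, List<CacheData> allCacheData) throws NacosException {
    // Carry dataId/group/tenant/md5 of every CacheData in one ConfigBatchListenRequest
    ConfigBatchListenRequest request = buildBatchListenRequest(allCacheData);
    ConfigChangeBatchListenResponse response = (ConfigChangeBatchListenResponse) rpcClient.request(request);

    // The server only returns the items whose md5 differs; query each of them in real time
    for (ConfigChangeBatchListenResponse.ConfigContext changed : response.getChangedConfigs()) {
        String content = queryConfig(rpcClient, changed.getDataId(), changed.getGroup(), changed.getTenant());
        updateCacheData(changed, content); // a changed md5 triggers the registered listeners
    }
}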

Server side changes:

Change Point 1:

Because 2.x uses long connections instead of long polling, a ConfigBatchListenRequest is no longer held open by the server; it returns immediately, and the server only keeps the listening relationship in memory for later notification.

The groupKey -> connectionId mapping makes it possible to find the client long connections from a changed configuration item; the connectionId -> groupKey mapping is used only for console display. Both relationships are stored in the ConfigChangeListenContext singleton on the server.

@Component
public class ConfigChangeListenContext {
    /** * groupKey-> connection set. */
    private ConcurrentHashMap<String, HashSet<String>> groupKeyContext = new ConcurrentHashMap<String, HashSet<String>>();
    
    /** * connectionId-> group key set. */
    private ConcurrentHashMap<String, HashMap<String, String>> connectionIdContext = new ConcurrentHashMap<String, HashMap<String, String>>();
}

Change 2:

Corresponding to change point 1: in 1.x, the server uses the groupKey to find the client AsyncContext that is still long polling; in 2.x, the server uses the groupKey to find the connectionIds, uses each connectionId to find the long connection, and then sends a ConfigChangeNotifyRequest over that connection to notify the client of the configuration change.

// RpcConfigChangeNotifier
public void configDataChanged(String groupKey, String dataId, String group, String tenant, boolean isBeta,
        List<String> betaIps, String tag) {
    // From the context of the registered listener, get all listener connectionids corresponding to the groupKey
    Set<String> listeners = configChangeListenContext.getListeners(groupKey);
    if (!CollectionUtils.isEmpty(listeners)) {
        for (final String client : listeners) {
            // Obtain the actual LONG gRPC connection using connectionId
            Connection connection = connectionManager.getConnection(client);
            if (connection == null) {
                continue;
            }
            // ...
            // Build synchronous configuration request parameters
            ConfigChangeNotifyRequest notifyRequest = ConfigChangeNotifyRequest.build(dataId, group, tenant);
            // To avoid blocking other event processing, submit a task to another thread pool for processing
            RpcPushTask rpcPushRetryTask = new RpcPushTask(notifyRequest, 50, client, clientIp,
                    connection.getMetaInfo().getAppName());
            push(rpcPushRetryTask);
        }
    }
}

Starter

The spring-cloud-starter-alibaba-nacos-config starter currently only ships with nacos-client 1.4.1, but supporting 2.x only involves the underlying changes described above and is transparent to client code: simply replace the original 1.4.1 nacos-client dependency with the 2.x nacos-client.

Configuration injection

Nacos uses PropertySourceBootstrapConfiguration, an ApplicationContextInitializer: before the application container is refreshed (the prepareContext phase), NacosPropertySourceLocator converts the Nacos configuration into a PropertySource and injects it into the Environment.

When NacosPropertySourceLocator reads the configuration, it ultimately calls the ConfigService methods of nacos-client.

Configuration priority

Spring's CompositePropertySource keeps property sources in an internal list; the earlier a property source sits in that list, the higher its priority.
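
The effect can be seen with plain Spring APIs (a standalone illustration, not Nacos code):

import java.util.Map;
import org.springframework.core.env.CompositePropertySource;
import org.springframework.core.env.MapPropertySource;

public class CompositeOrderDemo {
    public static void main(String[] args) {
        CompositePropertySource composite = new CompositePropertySource("NACOS");
        // addPropertySource appends to the end of the internal list
        composite.addPropertySource(new MapPropertySource("app-config", Map.<String, Object>of("timeout", "3s")));
        composite.addPropertySource(new MapPropertySource("shared-config", Map.<String, Object>of("timeout", "10s")));

        // Lookups walk the list in order, so the earlier "app-config" entry wins
        System.out.println(composite.getProperty("timeout")); // 3s
    }
}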

spring:
  application:
    name: nacos-config-example
  profiles:
    active: DEV
---
spring:
  profiles: DEV
  cloud:
    nacos:
      config:
        server-addr: 127.0.0.1:8848
        namespace: 789b5be0-0286-4cda-ac0c-e63f5bae3652
        group: DEFAULT_GROUP
        extension-configs:
          - data_id: arch.properties
            group: arch
            refresh: true
          - data_id: jdbc.properties
            group: data
            refresh: false
        shared-configs:
        	- data_id: share.properties
        	  group: DEFAULT_GROUP
        	  refresh: true

Nacos configurations in Spring Cloud fall into three categories:

  • Application configuration: corresponds to a dataId under a group under a namespace in Nacos. dataId = {prefix}-{spring.profiles.active}.{file-extension}. For the prefix, the priority is spring.cloud.nacos.config.name > spring.cloud.nacos.config.prefix > spring.application.name. Application configurations also have internal priorities, from low to high:
    • {prefix}
    • {prefix}-{spring.profiles.active}
    • {prefix}-{spring.profiles.active}.{file-extension}
  • Extension-configs: cannot be refreshed by default.
  • Shared-configs: cannot be refreshed by default.

Configuration listening

When the Spring container has fully started, NacosContextRefresher receives the ApplicationReadyEvent and starts listening.

NacosContextRefresher loops over all NacosPropertySources and calls ConfigService.addListener to register listeners.

// NacosContextRefresher
@Override
public void onApplicationEvent(ApplicationReadyEvent event) {
   if (this.ready.compareAndSet(false, true)) {
      this.registerNacosListenersForApplications();
   }
}

private void registerNacosListenersForApplications() {
   // The spring.cloud.nacos.config refresh switch is enabled by default
   if (isRefreshEnabled()) {
      // Get all Nacos configurations from the cache
      for (NacosPropertySource propertySource : NacosPropertySourceRepository.getAll()) {
         if (!propertySource.isRefreshable()) {
            continue;
         }
         String dataId = propertySource.getDataId();
         // Register the listener
         registerNacosListener(propertySource.getGroup(), dataId);
      }
   }
}

When the listener callback is triggered, a RefreshEvent is published. A change to any single dataId causes all beans in the RefreshScope to be destroyed and recreated; re-injecting the configuration of only the changed dataId is not supported.

// NacosContextRefresher
private void registerNacosListener(final String groupKey, final String dataKey) {
   String key = NacosPropertySourceRepository.getMapKey(dataKey, groupKey);
   Listener listener = listenerMap.computeIfAbsent(key,
         lst -> new AbstractSharedListener() {
            @Override
            public void innerReceive(String dataId, String group, String configInfo) {
               refreshCountIncrement();
               nacosRefreshHistory.addRefreshRecord(dataId, group, configInfo);
               applicationContext.publishEvent(
                      new RefreshEvent(this, null, "Refresh Nacos config"));
            }
         });
   configService.addListener(dataKey, groupKey, listener);
}
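
On the application side, a bean that should pick up the re-injected values is typically annotated with @RefreshScope, for example:

import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// When the RefreshEvent is processed, beans in the refresh scope are destroyed and lazily
// recreated, so the @Value field is re-resolved against the updated Environment on next access.
@RestController
@RefreshScope
public class ConfigController {

    @Value("${useLocalCache:false}")
    private boolean useLocalCache;

    @GetMapping("/config/get")
    public boolean get() {
        return useLocalCache;
    }
}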

II. Registry

1.4.X

Registry model

Namespace (Tenant): the namespace (tenant). The default namespace is public. A namespace can contain multiple Groups.

Group: Group. The default Group is DEFAULT_GROUP.

Service: indicates the application Service.

Cluster: indicates a Cluster. The DEFAULT Cluster is DEFAULT.

Instance: indicates a service Instance.

Service registration

The client sends a POST /nacos/v1/ns/instance request to register its instance information with the server.

The server stores and detects client health status in different ways for temporary and persistent instances.

Temporary instance

If the client Instance is temporary (the default), instance.ephemeral=true.

From the client’s point of view.

The client schedules a heartbeat task that sends PUT /nacos/v1/ns/instance/beat to the server to keep its registration alive. The heartbeat interval is preserved.heart.beat.interval and defaults to 5s. If the server has no registration information for the current instance when a heartbeat arrives, it returns RESOURCE_NOT_FOUND (20404) and the client re-registers with POST /nacos/v1/ns/instance.
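
A simplified sketch of the heartbeat task (the real logic lives in BeatReactor; the helper names and the BeatResult type are assumptions):

// Simplified 1.x client heartbeat loop (helper names are assumptions)
class BeatTask implements Runnable {
    @Override
    public void run() {
        // PUT /nacos/v1/ns/instance/beat carrying the instance information
        BeatResult result = sendBeat(beatInfo);
        if (result.getCode() == RESOURCE_NOT_FOUND) { // 20404
            // The server no longer knows this instance: register again
            registerInstance(beatInfo); // POST /nacos/v1/ns/instance
        }
        // Schedule the next heartbeat; preserved.heart.beat.interval defaults to 5s
        executor.schedule(this, beatInfo.getPeriod(), TimeUnit.MILLISECONDS);
    }
}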

From the server’s point of view.

The server processes the POST /nacos/v1/ns/instance registration request as follows:

  1. Write the Service registration into memory (ServiceManager.serviceMap).
  2. Start the heartbeat check (ClientBeatCheckTask): if no heartbeat arrives within 15 seconds, the Instance is marked unhealthy; if none arrives within 30 seconds, the Instance is removed from the registry, i.e. deregistered.
  3. DistroConsistencyServiceImpl writes to the in-memory KV storage component DataStore, using the Service as the key and all Instances under the Service as the value.
  4. Asynchronously, when the KV changes, update the registry (ServiceManager.serviceMap) and push the change via UDP to clients listening on the current service.
  5. Asynchronously, when the KV changes, wait 1s and then call PUT /v1/ns/distro/datum on the other cluster nodes to synchronize the service instance list.

The server processes the PUT /nacos/v1/ns/instance/beat heartbeat request as follows:

  1. Update the Instance health status in the in-memory registry: instance.healthy=true
  2. If the instance changes from unhealthy to healthy, push the change via UDP to the listening clients

Persistent instance

If the client Instance is persistent, instance.ephemeral=false. This was not covered earlier because it is not the default scenario, so it is only summarized briefly here.

Differences from temporary instances:

  1. Write requests such as service registration and deregistration go through the Raft write process: they are first written to the local file system (FileKvStorage) and then asynchronously written into the in-memory registry (ServiceManager.serviceMap).

  2. The client does not need to send heartbeats to the server; instead the server uses a TCP connection probe to check whether the client is alive. A failed Instance is not removed, only marked unhealthy.

Service discovery

From the client’s point of view.

HostReactor is responsible for service discovery.

From a storage perspective, the client-side service registry used for service subscription has three layers.

The service subscription process is as follows:

  1. When failover is enabled, the registry in the local file system is read and loaded into the FailoverReactor's serviceMap; queries then read the FailoverReactor's serviceMap to obtain service information.
  2. Normally failover is disabled, and the in-memory registry HostReactor.serviceMap is read first.
  3. If the service is not found in HostReactor.serviceMap, the Nacos server is queried. This both obtains the service registration information (updating HostReactor.serviceMap) and tells the server the UDP port the client opened locally, i.e. subscribes to the service.
  4. For each subscribed service, the client starts an UpdateTask that periodically asks the server for the latest registry. By default the client pulls every second, but the server's response instructs the client to pull every 10 seconds (a sketch of the read path follows this list).
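
A simplified sketch of the client read path; the helper names approximate HostReactor but are assumptions:

// Simplified client-side service read order (helper names are assumptions)
public ServiceInfo getServiceInfo(String serviceName, String clusters) {
    String key = serviceName + "@@" + clusters;
    // 1. Failover first: read the registry dumped to the local file system
    if (failoverReactor.isFailoverSwitch()) {
        return failoverReactor.getService(key);
    }
    // 2. Then the in-memory registry
    ServiceInfo serviceInfo = serviceInfoMap.get(key);
    if (serviceInfo == null) {
        // 3. Cache miss: query the server; the request also carries this client's UDP port,
        //    which registers the client as a subscriber on the server side
        serviceInfo = updateServiceNow(serviceName, clusters);
    }
    // 4. Make sure a periodic UpdateTask keeps this service fresh (1s by default, 10s as instructed by the server)
    scheduleUpdateIfAbsent(serviceName, clusters);
    return serviceInfo;
}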

For service queries, the user can take either the subscription logic or the real-time query logic, depending on the fourth parameter of NacosNamingService.getAllInstances: subscribe=true means the service subscription process is used; subscribe=false means the latest service registry is queried from the server directly, without going through the subscription process.

public List<Instance> getAllInstances(String serviceName, String groupName, List<String> clusters, boolean subscribe) 

Service subscriptions are primarily to update the client’s memory registry, and user code can listen for service changes.

nacosNamingService.subscribe("nacos.test.3", new AbstractEventListener() {
    @Override
    public void onEvent(Event event) {
        System.out.println(((NamingEvent) event).getServiceName());
        System.out.println(((NamingEvent) event).getInstances());
    }
});

InstancesChangeNotifier manages the client listening service and notifies all listeners when the service changes.

public class InstancesChangeNotifier extends Subscriber<InstancesChangeEvent> {
    // Listen on the registry
    // service unique key (groupName@@serviceName@@clusters) -> Listeners
    private final Map<String, ConcurrentHashSet<EventListener>> listenerMap = new ConcurrentHashMap<String, ConcurrentHashSet<EventListener>>();
}

From the server’s point of view.

Both service subscription and service query go through GET /nacos/v1/ns/instance/list. The difference is that for subscription the request carries the client's UDP port, while for a plain query the UDP port is 0.

In the case of service subscription, the PushService saves the subscribed service and the subscribing client (PushClient) in an in-memory map; when the service changes, PushService pushes the update to the client via UDP.

public class PushService implements ApplicationContextAware.ApplicationListener<ServiceChangeEvent> {
    // The first key is namespace + group service name, the second key is PushClient.toString()
    private static ConcurrentMap<String, ConcurrentMap<String, PushClient>> clientMap = new ConcurrentHashMap<>();
}

Whether it is a service subscription or a service query, a temporary or a persistent instance, the ServiceManager finds the Service, then the Clusters under the Service, then collects all Instances under those Clusters and returns them to the client.

// ServiceManager.java
// namespace - groupName@@serviceName - Service
private final Map<String, Map<String, Service>> serviceMap = new ConcurrentHashMap<>();
// Service.java
// key is the cluster name
private Map<String, Cluster> clusterMap = new HashMap<>();
// Cluster.java
// Persistent Instances
private Set<Instance> persistentInstances = new HashSet<>();
// Temporary (ephemeral) Instances
private Set<Instance> ephemeralInstances = new HashSet<>();

In addition, when a client queries only healthy instances, the server has a protection mode. This is a characteristic feature of AP-mode registries such as Eureka.

Whether protection mode kicks in depends on protectThreshold, which is configured in the console and stored on the Service.

If healthy instances / total instances <= protectThreshold, the service is considered to be failing and enters protection mode: all instances under the service are returned, healthy or not. With the default protectThreshold=0, all instances are returned only when no instance is healthy.
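
The check roughly amounts to the following sketch (not the exact server code):

// Protection-mode decision when the client asks for healthy instances only
List<Instance> selectInstances(Service service, List<Instance> allInstances) {
    if (allInstances.isEmpty()) {
        return allInstances;
    }
    List<Instance> healthy = allInstances.stream().filter(Instance::isHealthy).collect(Collectors.toList());
    float protectThreshold = service.getProtectThreshold(); // configured in the console, default 0
    if ((float) healthy.size() / allInstances.size() <= protectThreshold) {
        // Protection mode: too few healthy instances, return everything to avoid piling all
        // traffic onto the few remaining healthy ones
        return allInstances;
    }
    return healthy;
}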

Distro

The Nacos registry uses Distro protocol and belongs to AP.

For the client, all Nacos nodes are peers: a request to any node can handle both reads and writes. If one node fails to process a request, the client picks another node and retries.

But from the server side, it’s not that simple.

Read requests:

Because Distro protocol is not strongly consistent, each node can respond to the client based on data in the current node’s memory.

Write requests:

Client write requests, such as service registration and client heartbeats, are intercepted by the DistroFilter (based on the @CanDistro annotation). The filter hashes the groupServiceName in the request parameters to decide whether it belongs to the current node. If not, the request is forwarded to the responsible node and that node's response is returned to the client; if the current node is responsible, the request proceeds to the Controller (see the sketch below).
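
The responsibility check is essentially a hash of the service name modulo the number of healthy cluster nodes; a simplified sketch in the spirit of DistroMapper:

// Simplified: does the current node own this groupServiceName?
public boolean isResponsible(String groupServiceName, List<String> healthyServers, String currentServer) {
    int index = healthyServers.indexOf(currentServer);
    int target = distroHash(groupServiceName) % healthyServers.size();
    return target == index;
}

private int distroHash(String groupServiceName) {
    // Keep the hash non-negative so the modulo result is a valid index
    return Math.abs(groupServiceName.hashCode() % Integer.MAX_VALUE);
}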

Cluster data synchronization:

When a service instance is registered or deregistered (including removal after a 30s client heartbeat timeout), the responsible node synchronizes the service data to the other, non-responsible nodes.

If the server detects that the client heartbeat timed out for 15 seconds (less than 30 seconds), the server marks the instance as unhealthy on the current responsible node and does not synchronize the unhealthy instance to other nodes.

After the server receives the heartbeat from the client again (15 to 30 seconds), the instance is marked as healthy and data is not synchronized.

The responsible node runs a VERIFY task every 5 seconds (nacos.core.protocol.distro.data.verify_interval_ms defaults to 5000 ms), sending the MD5 of the Instance lists of all Services it is responsible for to the other nodes. If another node detects that an MD5 has changed, it queries the responsible node and updates its local data.

Cluster management:

By default, Nacos initializes the cluster node list from the ${nacos.home}/conf/cluster.conf configuration file.

After Tomcat starts, each node calls POST /v1/core/cluster/report every 2 seconds, sending its own node information to a random node in the cluster (including DOWN nodes). This both synchronizes the current node's information and serves as a health check.

The health check is bidirectional: every node both initiates and receives health checks. If a health check fails, the peer node is marked SUSPICIOUS, meaning it may be offline but can still act as the responsible node for write requests. If the health check fails more than three consecutive times, the peer node is marked DOWN and can no longer act as the responsible node for write requests.

2.x

Registry model

2.x changes the model significantly. The server-side model becomes Service, Instance, Client, and Connection, instead of Service, Cluster, and Instance.

  • Service: the service. namespace+group+name identifies a singleton Service, managed by ServiceManager.
  • Instance: the instance, represented as InstancePublishInfo, managed by Client.
  • Client: one Client corresponds to one long connection. A Client holds the Services and Instances registered and listened to by the corresponding client, i.e. it associates Service with Instance, and is managed by ClientManager.
  • Connection: a long connection. One Connection corresponds to one Client and is managed by ConnectionManager.

Model indexes: Service is not directly related to Instance; obtaining all Instances under a Service would require iterating over all registered Services and Instances. To speed up queries, two index services are provided (a shape sketch follows the list):

  • ClientServiceIndexesManager: Service -> Client; the relationship between a Service and the Clients that registered it, and between a Service and the Clients that subscribe to it.
  • ServiceStorage: Service -> Instance; the relationship between a Service and the Instances under it.
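
A rough sketch of what the two index services hold (field names approximate the 2.x source and may differ slightly):

public class ClientServiceIndexesManager {
    // Service -> clientIds of the Clients that registered instances of the service
    private final ConcurrentMap<Service, Set<String>> publisherIndexes = new ConcurrentHashMap<>();
    // Service -> clientIds of the Clients that subscribe to the service
    private final ConcurrentMap<Service, Set<String>> subscriberIndexes = new ConcurrentHashMap<>();
}

public class ServiceStorage {
    // Service -> the instance list (ServiceInfo) cached for queries
    private final ConcurrentMap<Service, ServiceInfo> serviceDataIndexes = new ConcurrentHashMap<>();
}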

Service registration

For clients, temporary instance registration goes through gRPC; persistent instance registration goes through HTTP.

For the server, whether gRPC or HTTP, the underlying flow is the same:

  1. Establish the Connection->Client->Service->Instance relationship
  2. Build indexes to aid in queries
  3. Notify subscription clients
  4. Cluster Data Synchronization

Service discovery

For service queries, the ServiceStorage index is used. If ServiceStorage has no data for the service, the full query logic is executed and the result is put into the ServiceStorage cache; the cached data in ServiceStorage is updated whenever the service changes.

Service subscription: the server registers the services subscribed by a client on that client's Client object.

public abstract class AbstractClient implements Client {
    // Services subscribed to by this Client
    protected final ConcurrentHashMap<Service, Subscriber> subscribers = new ConcurrentHashMap<>(16, 0.75f, 1);
}

The client's in-memory registry is still updated by a combination of push and pull.

Push: the 2.x server pushes service changes to the client through a gRPC NotifySubscriberRequest instead of UDP.

Pull: the client still periodically asks the server to update its in-memory registry, but the interval changes from 10s to 60s.

Health check

Persistent instance health check is the same as 1.x.

For temporary instances, the client and server keep a long connection and perform bidirectional health checks to confirm that each side is alive.

The server checks for connections that have been idle for 20s and sends a probe request to the client; if the client responds within 1s, the health check passes. If the check fails, the connection is unregistered, the instance goes offline, and the service is deregistered.

The client checks for connections that have been idle for 5s and sends a health check request to the server; if the server responds within 3s, the check passes. If it fails, the client picks the next Nacos node and establishes a new long connection.

Distro

Read/write request processing:

Both reads and writes are handled by the node that holds the long connection with the client; they no longer rely on the server-side DistroFilter.

Read requests, unlike in 1.x, are not sent to arbitrary nodes; they go to the node where the long connection is established.

Write requests also differ from 1.x, where a request might be forwarded a second time. In 2.x the node holding the long connection with the client is the responsible node. And unlike 1.x, where responsibility is determined per Service, 2.x determines responsibility per Client.

If the isNative property of a ConnectionBasedClient is true, the current node is the responsible node for that client; if isNative is false, the current node is a non-responsible node that only holds the client through synchronization.

public class ConnectionBasedClient extends AbstractClient {
    /**
     * {@code true} means this client is directly connected to the current server. {@code false} means this client is
     * synced from another server.
     */
    private final boolean isNative;
}

Cluster data synchronization:

As in 1.x, when a service changes, the change is synchronized to the other, non-responsible nodes at that moment.

Unlike 1.x, the VERIFY task of a 2.x responsible node no longer sends only the MD5 digest of the service instance lists to the non-responsible nodes.

Every 5s, the responsible node sends a VERIFY to the non-responsible nodes to renew the lease of its Clients; it carries the full Client data rather than just an MD5, which saves the non-responsible nodes a query back to the responsible node. Non-responsible nodes periodically scan Clients whose isNative is false and remove those whose lease has not been renewed within 30 seconds.

Cluster management:

Same as 1.x.