Preface

In this article, we take a look at Spring Cloud's DefaultEurekaServerContext.

EurekaServerAutoConfiguration

@Configuration
@Import(EurekaServerInitializerConfiguration.class)
@ConditionalOnBean(EurekaServerMarkerConfiguration.Marker.class)
@EnableConfigurationProperties({ EurekaDashboardProperties.class,
		InstanceRegistryProperties.class })
@PropertySource("classpath:/eureka/server.properties")
public class EurekaServerAutoConfiguration extends WebMvcConfigurerAdapter {
	//......
	@Bean
	public EurekaServerContext eurekaServerContext(ServerCodecs serverCodecs,
			PeerAwareInstanceRegistry registry, PeerEurekaNodes peerEurekaNodes) {
		return new DefaultEurekaServerContext(this.eurekaServerConfig, serverCodecs,
				registry, peerEurekaNodes, this.applicationInfoManager);
	}
	//......
}

DefaultEurekaServerContext

eureka-core-1.8.8-sources.jar!/com/netflix/eureka/DefaultEurekaServerContext.java

    @PostConstruct
    @Override
    public void initialize() throws Exception {
        logger.info("Initializing ...");
        peerEurekaNodes.start();
        registry.init(peerEurekaNodes);
        logger.info("Initialized");
    }

    @PreDestroy
    @Override
    public void shutdown() throws Exception {
        logger.info("Shutting down ...");
        registry.shutdown();
        peerEurekaNodes.shutdown();
        logger.info("Shut down");
    }

On initialization, peerEurekaNodes.start() and registry.init(peerEurekaNodes) are executed; before destruction, registry.shutdown() and peerEurekaNodes.shutdown() are executed.
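The call ordering can be sketched with a minimal stand-in (the class below is illustrative, not Eureka code; only the four method names come from the source above):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative stand-in: records the order in which the container-driven
// lifecycle callbacks invoke the four operations.
public class ContextLifecycleSketch {

    static List<String> runLifecycle() {
        List<String> calls = new ArrayList<>();
        // @PostConstruct initialize(): peers first, then the registry
        calls.add("peerEurekaNodes.start");
        calls.add("registry.init");
        // @PreDestroy shutdown(): reverse order on the way down
        calls.add("registry.shutdown");
        calls.add("peerEurekaNodes.shutdown");
        return calls;
    }

    public static void main(String[] args) {
        System.out.println(runLifecycle());
        // → [peerEurekaNodes.start, registry.init, registry.shutdown, peerEurekaNodes.shutdown]
    }
}
```

Note the symmetry: whatever is started first on the way up is shut down last on the way down.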

start

PeerEurekaNodes.start

eureka-core-1.8.8-sources.jar!/com/netflix/eureka/cluster/PeerEurekaNodes.java

    public void start() {
        taskExecutor = Executors.newSingleThreadScheduledExecutor(
                new ThreadFactory() {
                    @Override
                    public Thread newThread(Runnable r) {
                        Thread thread = new Thread(r, "Eureka-PeerNodesUpdater");
                        thread.setDaemon(true);
                        return thread;
                    }
                }
        );
        try {
            updatePeerEurekaNodes(resolvePeerUrls());
            Runnable peersUpdateTask = new Runnable() {
                @Override
                public void run() {
                    try {
                        updatePeerEurekaNodes(resolvePeerUrls());
                    } catch (Throwable e) {
                        logger.error("Cannot update the replica Nodes", e);
                    }
                }
            };
            taskExecutor.scheduleWithFixedDelay(
                    peersUpdateTask,
                    serverConfig.getPeerEurekaNodesUpdateIntervalMs(),
                    serverConfig.getPeerEurekaNodesUpdateIntervalMs(),
                    TimeUnit.MILLISECONDS
            );
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
        for (PeerEurekaNode node : peerEurekaNodes) {
            logger.info("Replica node URL: {}", node.getServiceUrl());
        }
    }

Here updatePeerEurekaNodes is executed once up front, and then a scheduled task is registered that triggers updatePeerEurekaNodes repeatedly, with an interval of serverConfig.getPeerEurekaNodesUpdateIntervalMs().
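The scheduling pattern can be sketched in isolation: a single-threaded daemon executor re-runs an update task with a fixed delay between runs, catching task errors so the schedule stays alive. The interval, timeout, and task body below are stand-ins, not Eureka's actual values:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the fixed-delay update schedule used by PeerEurekaNodes.start().
public class FixedDelayUpdaterSketch {

    // Returns true if the scheduled task ran at least twice within timeoutMs.
    static boolean ranAtLeastTwice(long intervalMs, long timeoutMs) {
        CountDownLatch latch = new CountDownLatch(2);
        ScheduledExecutorService taskExecutor = Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "PeerNodesUpdater-Sketch");
            t.setDaemon(true); // daemon thread: never blocks JVM exit
            return t;
        });
        Runnable peersUpdateTask = () -> {
            try {
                latch.countDown(); // stand-in for updatePeerEurekaNodes(resolvePeerUrls())
            } catch (Throwable e) {
                e.printStackTrace(); // log and keep the schedule alive, as start() does
            }
        };
        taskExecutor.scheduleWithFixedDelay(peersUpdateTask, intervalMs, intervalMs,
                TimeUnit.MILLISECONDS);
        try {
            return latch.await(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        } finally {
            taskExecutor.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println("ran at least twice: " + ranAtLeastTwice(20, 5000));
    }
}
```

scheduleWithFixedDelay measures the interval from the end of one run to the start of the next, so a slow peer refresh never produces overlapping runs.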

resolvePeerUrls

    /**
     * Resolve peer URLs.
     *
     * @return peer URLs with node's own URL filtered out
     */
    protected List<String> resolvePeerUrls() {
        InstanceInfo myInfo = applicationInfoManager.getInfo();
        String zone = InstanceInfo.getZone(clientConfig.getAvailabilityZones(clientConfig.getRegion()), myInfo);
        List<String> replicaUrls = EndpointUtils
                .getDiscoveryServiceUrls(clientConfig, zone, new EndpointUtils.InstanceInfoBasedUrlRandomizer(myInfo));

        int idx = 0;
        while (idx < replicaUrls.size()) {
            if (isThisMyUrl(replicaUrls.get(idx))) {
                replicaUrls.remove(idx);
            } else {
                idx++;
            }
        }
        return replicaUrls;
    }

resolvePeerUrls obtains the discovery service URLs for the node's zone via EndpointUtils.getDiscoveryServiceUrls, then filters out the node's own URL and returns the remaining replicaUrls.
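The remove-or-advance loop can be isolated as below; isThisMyUrl is reduced to a plain string comparison here for illustration, and the URLs are made up:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch of the loop resolvePeerUrls uses to drop the node's own URL.
public class OwnUrlFilterSketch {

    static List<String> filterOwnUrl(List<String> replicaUrls, String myUrl) {
        List<String> urls = new ArrayList<>(replicaUrls);
        int idx = 0;
        while (idx < urls.size()) {
            if (myUrl.equals(urls.get(idx))) { // stand-in for isThisMyUrl(...)
                urls.remove(idx);              // do not advance: the next element shifts into idx
            } else {
                idx++;
            }
        }
        return urls;
    }

    public static void main(String[] args) {
        List<String> peers = Arrays.asList(
                "http://peer1:8761/eureka/",
                "http://self:8761/eureka/",
                "http://peer2:8761/eureka/");
        System.out.println(filterOwnUrl(peers, "http://self:8761/eureka/"));
        // → [http://peer1:8761/eureka/, http://peer2:8761/eureka/]
    }
}
```

Only incrementing the index when nothing was removed is what keeps the loop correct when adjacent entries match.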

updatePeerEurekaNodes

    /**
     * Given new set of replica URLs, destroy {@link PeerEurekaNode}s no longer available, and
     * create new ones.
     *
     * @param newPeerUrls peer node URLs; this collection should have local node's URL filtered out
     */
    protected void updatePeerEurekaNodes(List<String> newPeerUrls) {
        if (newPeerUrls.isEmpty()) {
            logger.warn("The replica size seems to be empty. Check the route 53 DNS Registry");
            return;
        }

        Set<String> toShutdown = new HashSet<>(peerEurekaNodeUrls);
        toShutdown.removeAll(newPeerUrls);
        Set<String> toAdd = new HashSet<>(newPeerUrls);
        toAdd.removeAll(peerEurekaNodeUrls);

        if (toShutdown.isEmpty() && toAdd.isEmpty()) { // No change
            return;
        }

        // Remove peers no longer available
        List<PeerEurekaNode> newNodeList = new ArrayList<>(peerEurekaNodes);

        if (!toShutdown.isEmpty()) {
            logger.info("Removing no longer available peer nodes {}", toShutdown);
            int i = 0;
            while (i < newNodeList.size()) {
                PeerEurekaNode eurekaNode = newNodeList.get(i);
                if (toShutdown.contains(eurekaNode.getServiceUrl())) {
                    newNodeList.remove(i);
                    eurekaNode.shutDown();
                } else {
                    i++;
                }
            }
        }

        // Add new peers
        if (!toAdd.isEmpty()) {
            logger.info("Adding new peer nodes {}", toAdd);
            for (String peerUrl : toAdd) {
                newNodeList.add(createPeerEurekaNode(peerUrl));
            }
        }

        this.peerEurekaNodes = newNodeList;
        this.peerEurekaNodeUrls = new HashSet<>(newPeerUrls);
    }

updatePeerEurekaNodes mainly compares the newly resolved URLs against the previously stored peerEurekaNodeUrls. Nodes whose URLs are no longer present are removed, with PeerEurekaNode.shutDown() called on each; URLs that are newly present get a node created via createPeerEurekaNode.
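The two set differences at the heart of that comparison can be sketched on their own (the peer names here are made up):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Sketch of the diff updatePeerEurekaNodes computes: URLs present before
// but absent now are shut down; URLs absent before but present now are added.
public class PeerDiffSketch {

    static Set<String> toShutdown(Set<String> currentUrls, Set<String> newUrls) {
        Set<String> s = new HashSet<>(currentUrls);
        s.removeAll(newUrls); // kept only the URLs that disappeared
        return s;
    }

    static Set<String> toAdd(Set<String> currentUrls, Set<String> newUrls) {
        Set<String> s = new HashSet<>(newUrls);
        s.removeAll(currentUrls); // kept only the URLs that appeared
        return s;
    }

    public static void main(String[] args) {
        Set<String> current = new HashSet<>(Arrays.asList("peer1", "peer2"));
        Set<String> fresh = new HashSet<>(Arrays.asList("peer2", "peer3"));
        System.out.println("shutdown: " + toShutdown(current, fresh)); // [peer1]
        System.out.println("add: " + toAdd(current, fresh));           // [peer3]
    }
}
```

When both differences are empty the peer set is unchanged, which is why the real method returns early in that case.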

    protected PeerEurekaNode createPeerEurekaNode(String peerEurekaNodeUrl) {
        HttpReplicationClient replicationClient = JerseyReplicationClient.createReplicationClient(serverConfig, serverCodecs, peerEurekaNodeUrl);
        String targetHost = hostFromUrl(peerEurekaNodeUrl);
        if (targetHost == null) {
            targetHost = "host";
        }
        return new PeerEurekaNode(registry, targetHost, peerEurekaNodeUrl, replicationClient, serverConfig);
    }

createPeerEurekaNode builds a JerseyReplicationClient for the given peer URL and wraps it in a new PeerEurekaNode.

registry.init(peerEurekaNodes)

eureka-core-1.8.8-sources.jar!/com/netflix/eureka/registry/PeerAwareInstanceRegistryImpl.java

    public void init(PeerEurekaNodes peerEurekaNodes) throws Exception {
        this.numberOfReplicationsLastMin.start();
        this.peerEurekaNodes = peerEurekaNodes;
        initializedResponseCache();
        scheduleRenewalThresholdUpdateTask();
        initRemoteRegionRegistry();

        try {
            Monitors.registerObject(this);
        } catch (Throwable e) {
            logger.warn("Cannot register the JMX monitor for the InstanceRegistry :", e);
        }
    }

Here init stores the peer nodes, initializes the ResponseCache, schedules the RenewalThresholdUpdateTask, and initializes the RemoteRegionRegistry.

shutdown

registry.shutdown()

eureka-core-1.8.8-sources.jar!/com/netflix/eureka/registry/PeerAwareInstanceRegistryImpl.java

    /**
     * Perform all cleanup and shutdown operations.
     */
    @Override
    public void shutdown() {
        try {
            DefaultMonitorRegistry.getInstance().unregister(Monitors.newObjectMonitor(this));
        } catch (Throwable t) {
            logger.error("Cannot shutdown monitor registry", t);
        }
        try {
            peerEurekaNodes.shutdown();
        } catch (Throwable t) {
            logger.error("Cannot shutdown ReplicaAwareInstanceRegistry", t);
        }
        numberOfReplicationsLastMin.stop();

        super.shutdown();
    }

The main calls here are peerEurekaNodes.shutdown() and super.shutdown().

super.shutdown

eureka-core-1.8.8-sources.jar!/com/netflix/eureka/registry/AbstractInstanceRegistry.java

    /**
     * Perform all cleanup and shutdown operations.
     */
    @Override
    public void shutdown() {
        deltaRetentionTimer.cancel();
        evictionTimer.cancel();
        renewsLastMin.stop();
    }

Basically, this cancels the delta-retention and eviction timers and stops the renews-last-minute counter.

peerEurekaNodes.shutdown()

eureka-core-1.8.8-sources.jar!/com/netflix/eureka/cluster/PeerEurekaNodes.java

    public void shutdown() {
        taskExecutor.shutdown();
        List<PeerEurekaNode> toRemove = this.peerEurekaNodes;

        this.peerEurekaNodes = Collections.emptyList();
        this.peerEurekaNodeUrls = Collections.emptySet();

        for (PeerEurekaNode node : toRemove) {
            node.shutDown();
        }
    }

In addition to being called when the Eureka server shuts down, PeerEurekaNode.shutDown() is also called when a peer node disappears from the resolved URL list and is removed in updatePeerEurekaNodes.

PeerEurekaNode.shutDown()

eureka-core-1.8.8-sources.jar!/com/netflix/eureka/cluster/PeerEurekaNode.java

    /**
     * Shuts down all resources used for peer replication.
     */
    public void shutDown() {
        batchingDispatcher.shutdown();
        nonBatchingDispatcher.shutdown();
    }

Basically this shuts down the batching and non-batching dispatchers used for replication.

summary

EurekaServerContext mainly registers the operations performed when the bean is initialized and destroyed. On initialization it starts peerEurekaNodes and then initializes the registry; on destruction it shuts down the registry and then peerEurekaNodes. For the peer nodes, a scheduled task triggers updatePeerEurekaNodes at an interval of serverConfig.getPeerEurekaNodesUpdateIntervalMs(). That operation compares the newly resolved URLs against the previously stored peerEurekaNodeUrls, removing nodes that have disappeared and adding new ones: PeerEurekaNode.shutDown() is called when a node is removed, and createPeerEurekaNode is used to create a node when one is added.

doc

  • Understanding Eureka Peer to Peer Communication