One, Foreword


If you have read juejin.cn/post/698220… , you should already have a basic understanding of NameServer. Here is a quick summary:

  1. NameServer is the routing registry
  2. NameServer provides two main functions: registration and discovery, and route deletion

When I read the source code, I want to look at it with questions. I also think that the first step of learning is to learn to ask questions. Sometimes a good question is more important than a good answer.

After understanding the technical architecture of NameServer and its main functions, you might ask the following questions:

  1. How does NameServer implement registration discovery and route deletion?
  2. NameServer nodes are deployed as a cluster but do not communicate with each other. How is data inconsistency between NameServer nodes resolved?
  3. Why does RocketMQ use its own NameServer as the registry instead of ZooKeeper?

With these three questions in mind, let's try to find the answers in the source code……

Two, Startup Process

The first question we usually ask about something is: where does it come from? Where does its life cycle begin? For a component, the question becomes: how does it start?

Let’s first look at how NameServer is started.

Startup class: org/apache/rocketmq/namesrv/NamesrvStartup.java

You can refer to the flowchart above and walk through the corresponding parts of the source code yourself.

2.1 NamesrvStartup#main0

First look at the main0 method:

    public static NamesrvController main0(String[] args) {

        try {
            NamesrvController controller = createNamesrvController(args);
            start(controller);
            String tip = "The Name Server boot success. serializeType=" + RemotingCommand.getSerializeTypeConfigInThisServer();
            log.info(tip);
            System.out.printf("%s%n", tip);
            return controller;
        } catch (Throwable e) {
            e.printStackTrace();
            System.exit(-1);
        }

        return null;
    }

This method does two things:

  1. Create a NamesrvController instance
  2. Start that instance

NamesrvController is the core controller of NameServer, so starting the NamesrvController essentially starts the NameServer.

2.2 NamesrvStartup#createNamesrvController

Populate the properties of the configuration object

This method first populates the two configuration objects from two command-line options (a trimmed sketch follows the list):

  • -c: followed by the path of a configuration file whose properties are loaded into the configuration objects
  • -p: prints all effective configuration items and exits; startup properties themselves can also be passed on the command line in property name / property value form
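
Below is a trimmed sketch (not the complete source) of how the -c branch feeds those configuration objects; it assumes the MixAll and ServerUtil helpers behave as in RocketMQ 4.x:

    // Trimmed sketch of the -c handling in createNamesrvController (simplified, not verbatim)
    if (commandLine.hasOption('c')) {
        String file = commandLine.getOptionValue('c');
        if (file != null) {
            InputStream in = new BufferedInputStream(new FileInputStream(file));
            Properties properties = new Properties();
            properties.load(in);
            // Reflectively copy matching keys from the file onto both config objects
            MixAll.properties2Object(properties, namesrvConfig);
            MixAll.properties2Object(properties, nettyServerConfig);
            namesrvConfig.setConfigStorePath(file);
            in.close();
        }
    }
    // Properties given on the command line (property name / property value) are applied afterwards as well
    MixAll.properties2Object(ServerUtil.commandLine2Properties(commandLine), namesrvConfig);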

Related attributes are as follows:

public class NamesrvConfig {
    private static final InternalLogger log = InternalLoggerFactory.getLogger(LoggerName.NAMESRV_LOGGER_NAME);

    // The RocketMQ home directory, configured via ROCKETMQ_HOME (setting it is one step of the source-environment setup)
    private String rocketmqHome = System.getProperty(MixAll.ROCKETMQ_HOME_PROPERTY, System.getenv(MixAll.ROCKETMQ_HOME_ENV));
    // The path where the KV configuration is persisted
    private String kvConfigPath = System.getProperty("user.home") + File.separator + "namesrv" + File.separator + "kvConfig.json";
    // Default configuration file path; it is not loaded automatically. To configure NameServer startup properties, pass "-c <configuration file path>" at startup
    private String configStorePath = System.getProperty("user.home") + File.separator + "namesrv" + File.separator + "namesrv.properties";
    private String productEnvName = "center";
    private boolean clusterTest = false;
    // Whether ordered messages are supported (disabled by default). This flag only affects route queries from clients:
    // if ordered messages are supported, the ordered-message configuration of the topic is returned along with the route
    private boolean orderMessageEnable = false;

    // getters and setters omitted
}
public class NettyServerConfig implements Cloneable {
    private int listenPort = 8888; // Listen port; the field defaults to 8888, but NamesrvStartup overrides it to 9876
    private int serverWorkerThreads = 8; // Number of threads in the Netty business thread pool
    private int serverCallbackExecutorThreads = 0; // Number of threads in Netty's public task thread pool. Different thread pools are created
    // for different request types (message sending, message consumption, heartbeat, etc.); requests whose type has no registered thread pool fall back to this public pool
    private int serverSelectorThreads = 3; // Number of IO selector threads. These threads parse incoming requests on NameServer/Broker,
    // dispatch them to the business thread pools, and write the results back to the caller
    private int serverOnewaySemaphoreValue = 256; // Maximum concurrency for oneway requests (mainly a broker-side parameter)
    private int serverAsyncSemaphoreValue = 64; // Maximum concurrency for sending asynchronous messages
    private int serverChannelMaxIdleTimeSeconds = 120; // Maximum idle time of a network connection

    private int serverSocketSndBufSize = NettySystemConfig.socketSndbufSize; // Socket send buffer size
    private int serverSocketRcvBufSize = NettySystemConfig.socketRcvbufSize; // Socket receive buffer size
    private boolean serverPooledByteBufAllocatorEnable = true; // Whether to enable pooled ByteBuf allocation

    /**
     * make make install
     *
     * ../glibc-2.10.1/configure \ --prefix=/usr \ --with-headers=/usr/include \
     * --host=x86_64-linux-gnu \ --build=x86_64-pc-linux-gnu \ --without-gd
     */
    private boolean useEpollNativeSelector = false; // Whether to use the epoll native selector (Epoll IO model)

    // getters and setters omitted
}

Create a NamesrvController object

Create the controller from the configuration objects populated above, and back up the configuration into the NamesrvController:

final NamesrvController controller = new NamesrvController(namesrvConfig, nettyServerConfig);

// remember all configs to prevent discard
controller.getConfiguration().registerConfig(properties);

2.3 NamesrvStartup#start

The start method first calls controller.initialize(), which does several things:

    public boolean initialize() {

        // 1. Load the KV configuration
        this.kvConfigManager.load();

        // 2. Create the Netty server
        this.remotingServer = new NettyRemotingServer(this.nettyServerConfig, this.brokerHousekeepingService);

        // 3. Create a thread pool for handling client requests
        this.remotingExecutor =
            Executors.newFixedThreadPool(nettyServerConfig.getServerWorkerThreads(), new ThreadFactoryImpl("RemotingExecutorThread_"));

        this.registerProcessor();

        this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {

            @Override
            public void run() {
                // 4. Scheduled task: scan for inactive brokers every 10 seconds (with an initial delay of 5 seconds)
                NamesrvController.this.routeInfoManager.scanNotActiveBroker();
            }
        }, 5, 10, TimeUnit.SECONDS);

        ...
    }

The start method then does an important thing: it registers a shutdown hook that listens for JVM exit events and releases the controller's resources when the JVM exits.

After that, the controller is started:

// Register a hook method that listens for JVM exit events and releases controller resources when the JVM exits
Runtime.getRuntime().addShutdownHook(new ShutdownHookThread(log, new Callable<Void>() {
    @Override
    public Void call() throws Exception {
        controller.shutdown();
        return null;
    }
}));

controller.start();

“If a thread pool is used in your code, an elegant way to stop the thread pool is to register a JVM hook function and close the thread pool before the JVM is shut down, freeing resources in time.”

— Inside RocketMQ Technology
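
To make that pattern concrete, here is a minimal, self-contained example (plain JDK code, not RocketMQ code): register a JVM shutdown hook that closes a thread pool before the JVM exits.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class ShutdownHookDemo {
        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(4);

            // When the JVM receives a normal termination signal, the hook shuts the pool down
            // and waits briefly for in-flight tasks to finish.
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                pool.shutdown();
                try {
                    if (!pool.awaitTermination(5, TimeUnit.SECONDS)) {
                        pool.shutdownNow(); // force-stop tasks that did not finish in time
                    }
                } catch (InterruptedException e) {
                    pool.shutdownNow();
                    Thread.currentThread().interrupt();
                }
            }, "pool-shutdown-hook"));

            pool.execute(() -> System.out.println("working..."));
        }
    }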

That is not the end of it. Let's look at controller.start():

    public void start() throws Exception {
        // Start the Netty server for receiving client requests
        this.remotingServer.start();

        if (this.fileWatchService != null) {
            // Start the thread that watches the TLS configuration files
            this.fileWatchService.start();
        }
    }

As you can see, starting the NameServer essentially means starting the Netty server, which can then receive registration requests from brokers and route-discovery requests from clients.

As for fileWatchService, it watches the TLS (Transport Layer Security) configuration files and does not involve business logic, so I will not go into detail here; those interested in the security protocol can look it up online.

At this point, the startup process analysis is complete. Finally, to summarize:

  1. NamesrvConfig and NettyServerConfig are populated from command-line arguments, the configuration file, and default values
  2. A NamesrvController is created from these two configuration objects, and the configuration is backed up into the NamesrvController
  3. The NamesrvController instance is started, which essentially means starting the Netty server

So, it’s not that hard, right? Let’s move on.

What does NameServer do after launch? The answer was given at the beginning of this article: registration and discovery, and route deletion. That leads to the next question:

How does NameServer implement registration discovery and route deletion?

Keep looking for answers in the source code……

Three, Registration and Discovery

Registration and discovery are actually two actions, but they both target the broker. Registration means that brokers register with a NameServer, and discovery means that producers and consumers discover brokers through a NameServer.

3.1 Registration

Since it is "the broker registers with NameServer", we need to look at the broker's source code first. The question is: with so much broker source code, where do I start? Stop and think: if I were designing a system that needed to report its liveness to a registry, when would I register?

It should be easy to think that it would be better to register as soon as you have started, and then set up a heartbeat with the registry to keep telling the registry: I’m alive!

Whether this is true or not, at least now I have a direction, and I know I should go to the broker’s source code and look for its startup process source first.

I keep emphasizing reading the source code with questions in mind. Asking questions makes your reading more purposeful, and thinking about the problem lets you form your own guess. If the guess turns out to be right, congratulations, you gain a sense of achievement; if it is wrong, congratulations as well, because comparing your guess with the actual implementation shows you exactly where you can improve.

3.1.1 Broker Start process

BrokerStartup#main

The flowchart is as follows:

The general steps of the above flowchart are:

  1. Create the broker's core controller, BrokerController
  2. Start the BrokerController: this step starts many services, such as the message store, the Netty servers, fileWatchService, and the one we care about here, the service that sends heartbeats to NameServer

Here we focus on numbers 18 to 20 in the chart:

        // 1. Register the broker once during startup
        if (!messageStoreConfig.isEnableDLegerCommitLog()) {
            startProcessorByHa(messageStoreConfig.getBrokerRole());
            handleSlaveSynchronize(messageStoreConfig.getBrokerRole());
            this.registerBrokerAll(true, false, true);
        }

        // 2. Then register periodically: after an initial delay of 10s, once every
        // brokerConfig.getRegisterNameServerPeriod() (30s in my configuration), clamped to [10s, 60s]
        this.scheduledExecutorService.scheduleAtFixedRate(new Runnable() {

            @Override
            public void run() {
                try {
                    BrokerController.this.registerBrokerAll(true, false, brokerConfig.isForceRegister());
                } catch (Throwable e) {
                    log.error("registerBrokerAll Exception", e);
                }
            }
        }, 1000 * 10, Math.max(10000, Math.min(brokerConfig.getRegisterNameServerPeriod(), 60000)), TimeUnit.MILLISECONDS);

Why delay for 10 seconds?

Because the broker has just sent a registration request during startup, there is no need to register again immediately, so the scheduled task starts with a 10-second delay. It is a small design detail, but it effectively avoids unnecessary waste of resources.
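
On top of the initial delay, the period passed to scheduleAtFixedRate in the snippet above is clamped to the range [10s, 60s]:

    // With registerNameServerPeriod = 30000 (30s), the result is 30000;
    // values below 10000 are raised to 10000, values above 60000 are capped at 60000.
    long period = Math.max(10000, Math.min(brokerConfig.getRegisterNameServerPeriod(), 60000));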

Be careful about what registerBrokerAll means: it does not register "all brokers"; it registers this broker with all NameServer nodes, as the source code below confirms.

3.1.2 registerBrokerAll

This method takes three arguments:

boolean checkOrderConfig, // Whether to verify the ordered-message configuration
boolean oneway,           // Whether to send as oneway (no response is awaited)
boolean forceRegister     // Whether registration is forced
    public synchronized void registerBrokerAll(final boolean checkOrderConfig, boolean oneway, boolean forceRegister) {
        // topicConfigWrapper encapsulates the topic information and dataVersion on the broker
        TopicConfigSerializeWrapper topicConfigWrapper = this.getTopicConfigManager().buildTopicConfigSerializeWrapper();

        // This block rebuilds the topic configs so that each TopicConfig carries this broker's brokerPermission.
        // It is not an important detail; just remember that topicConfigWrapper contains at least the topic
        // information and dataVersion on the broker
        if (!PermName.isWriteable(this.getBrokerConfig().getBrokerPermission())
            || !PermName.isReadable(this.getBrokerConfig().getBrokerPermission())) {
            ConcurrentHashMap<String, TopicConfig> topicConfigTable = new ConcurrentHashMap<String, TopicConfig>();
            for (TopicConfig topicConfig : topicConfigWrapper.getTopicConfigTable().values()) {
                TopicConfig tmp =
                    new TopicConfig(topicConfig.getTopicName(), topicConfig.getReadQueueNums(), topicConfig.getWriteQueueNums(),
                        this.brokerConfig.getBrokerPermission());
                topicConfigTable.put(topicConfig.getTopicName(), tmp);
            }
            topicConfigWrapper.setTopicConfigTable(topicConfigTable);
        }

        if (forceRegister || needRegister(this.brokerConfig.getBrokerClusterName(),
            this.getBrokerAddr(),
            this.brokerConfig.getBrokerName(),
            this.brokerConfig.getBrokerId(),
            this.brokerConfig.getRegisterBrokerTimeoutMills())) {
            doRegisterBrokerAll(checkOrderConfig, oneway, topicConfigWrapper);
        }
    }

Look at the if statement at the bottom: because forceRegister == true in the startup path, the needRegister method is short-circuited and never executed. Still, here is a quick outline of its logic:

  1. The broker sends a request to each NameServer to query the dataVersion that the NameServer stores for this broker. The request type is QUERY_DATA_VERSION = 322
  2. The NameServer handles the request in DefaultRequestProcessor#processRequest and compares the dataVersion sent by the broker with the one it stores. If they differ, the NameServer also refreshes the broker's heartbeat time lastUpdateTimestamp; if they are equal, the result is changed == false
  3. The broker processes the results returned by all NameServer nodes; as long as one of them returns changed == true, then needRegister == true and doRegisterBrokerAll must be executed

Of course, in the startup path we are following here, forceRegister is true, so doRegisterBrokerAll is executed regardless.
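
For reference, here is a trimmed sketch of the decision logic described above (simplified from the 4.x source, not verbatim; the BrokerOuterAPI call is assumed to return one changed flag per NameServer):

    // Trimmed sketch of BrokerController#needRegister (simplified, not verbatim)
    private boolean needRegister(String clusterName, String brokerAddr, String brokerName,
                                 long brokerId, int timeoutMills) {
        TopicConfigSerializeWrapper topicConfigWrapper =
            this.getTopicConfigManager().buildTopicConfigSerializeWrapper();
        // QUERY_DATA_VERSION is sent to every NameServer; each one answers whether the dataVersion
        // it stores differs from the one the broker just sent.
        List<Boolean> changeList = brokerOuterAPI.needRegister(clusterName, brokerAddr, brokerName,
            brokerId, topicConfigWrapper, timeoutMills);
        boolean needRegister = false;
        for (Boolean changed : changeList) {
            if (changed) {              // a single "changed" answer is enough to trigger a registration
                needRegister = true;
                break;
            }
        }
        return needRegister;
    }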

3.1.3 doRegisterBrokerAll

This method does two main things:

  1. Call brokerOuterAPI.registerBrokerAll to perform the registration
  2. Process the registration results in registerBrokerResultList: update the master address and the ordered-message topic configuration

The important thing is the first step.

BrokerOuterAPI is the class the broker uses to talk to the outside world; it encapsulates a RemotingClient, which is the client that actually sends the registration request to NameServer.

Take a look at the core piece of code in registerBrokerAll

            // The CountDownLatch lets the subsequent logic run only after every NameServer has responded
            final CountDownLatch countDownLatch = new CountDownLatch(nameServerAddressList.size());
            // Iterate over all NameServer addresses and submit a registerBroker task to the brokerOuterExecutor thread pool
            for (final String namesrvAddr : nameServerAddressList) {
                brokerOuterExecutor.execute(new Runnable() {
                    @Override
                    public void run() {
                        try {
                            RegisterBrokerResult result = registerBroker(namesrvAddr, oneway, timeoutMills, requestHeader, body);
                            if (result != null) {
                                registerBrokerResultList.add(result);
                            }

                            log.info("register broker[{}]to name server {} OK", brokerId, namesrvAddr);
                        } catch (Exception e) {
                            log.warn("registerBroker Exception, {}", namesrvAddr, e);
                        } finally {
                            // Count down once for every NameServer, whether the call succeeded or failed
                            countDownLatch.countDown();
                        }
                    }
                });
            }

            try {
                // The calling thread blocks here until the latch reaches zero (or the timeout expires)
                countDownLatch.await(timeoutMills, TimeUnit.MILLISECONDS);
            } catch (InterruptedException e) {
            }

As for registerBroker itself, its logic is to call invokeSync or invokeOneway on the Netty client to send the request to NameServer. The underlying Netty communication process is not the focus of this article; I plan to cover the Netty source code in a follow-up, so stay tuned.

It is worth mentioning that the above code is multithreaded and uses:

  • CountDownLatch
  • BrokerFixedThreadPoolExecutor: a subclass of ThreadPoolExecutor
  • CopyOnWriteArrayList: stores the results of the worker threads; because multiple threads write to it, a concurrency-safe container is needed

Those interested in concurrent programming can learn how to use these classes, and if you want to understand how they work, check out the JDK source code.
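
Here is a standalone illustration of the same pattern (plain JDK code, not RocketMQ): fan a task out to several addresses on a thread pool, collect results in a concurrency-safe list, and block until every worker has counted the latch down.

    import java.util.Arrays;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    public class FanOutDemo {
        public static void main(String[] args) throws InterruptedException {
            List<String> addresses = Arrays.asList("ns-1:9876", "ns-2:9876", "ns-3:9876");
            ExecutorService pool = Executors.newFixedThreadPool(4);
            List<String> results = new CopyOnWriteArrayList<>(); // several worker threads append results
            CountDownLatch latch = new CountDownLatch(addresses.size());

            for (String addr : addresses) {
                pool.execute(() -> {
                    try {
                        results.add("registered with " + addr); // stand-in for the real RPC call
                    } finally {
                        latch.countDown(); // always count down, even if the call fails
                    }
                });
            }

            // Wait (with a timeout) until every worker has responded, mirroring countDownLatch.await above
            latch.await(3, TimeUnit.SECONDS);
            results.forEach(System.out::println);
            pool.shutdown();
        }
    }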

At this point, the broker is done registering, and the NameServer receives the request and processes it.

3.1.4 NameServer processes registration requests

Let's go back to NameServer and see how registration requests are handled. First, think for a moment: where is the entry point of the code that handles the request? (In practice I found the entry by debugging, but the debugging result is a bit like looking at the answer; let's try to reason about it first.)

Naturally, the Netty server receives the request first, so let's look at NettyRemotingServer. Browsing this class, there is no obvious request-handling method, but it extends NettyRemotingAbstract, and looking there we find a method called processRequestCommand.

That method eventually calls NameServer's DefaultRequestProcessor#processRequest, which handles the various requests from clients and brokers.

Flow chart:

The method first enters a switch-case and matches RequestCode.REGISTER_BROKER:

switch (request.getCode()) {
    ...
    case RequestCode.REGISTER_BROKER:
        Version brokerVersion = MQVersion.value2Version(request.getVersion());
        // The two methods differ little; which one is used depends on the broker version
        if (brokerVersion.ordinal() >= MQVersion.Version.V3_0_11.ordinal()) {
            return this.registerBrokerWithFilterServer(ctx, request);
        } else {
            return this.registerBroker(ctx, request);
        }
    ...
}

We will skip the intermediate steps (the CRC check of the request and the parsing of the request parameters). It is worth mentioning that the class managing routing information in NameServer is RouteInfoManager, which maintains five tables:

    private final HashMap<String/* topic */, List<QueueData>> topicQueueTable;   // Topic message-queue routing info, used for load balancing when sending messages
    private final HashMap<String/* brokerName */, BrokerData> brokerAddrTable;   // Broker address information
    private final HashMap<String/* clusterName */, Set<String/* brokerName */>> clusterAddrTable; // Cluster information
    private final HashMap<String/* brokerAddr */, BrokerLiveInfo> brokerLiveTable; // Live brokers (heartbeat state)
    private final HashMap<String/* brokerAddr */, List<String>/* Filter Server */> filterServerTable; // Filter servers associated with each broker

Take a look at the logic for updating brokerLiveTable (note that the other tables are updated as well):

  BrokerLiveInfo prevBrokerLiveInfo = this.brokerLiveTable.put(brokerAddr,
                    new BrokerLiveInfo(
                        System.currentTimeMillis(),
                        topicConfigWrapper.getDataVersion(),
                        channel,
                        haServerAddr));

The current system time is used as lastUpdateTimestamp, i.e. the time at which this broker last reported a heartbeat.

The processing result is then wrapped into the response, which Netty returns to the broker, closing the loop with the broker-side process analyzed above.
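
For reference, the tail of the registration handler looks roughly like this (a trimmed sketch based on the 4.x source, not verbatim):

    // Trimmed sketch: wrap the registration result into the response that Netty writes back to the broker
    final RemotingCommand response = RemotingCommand.createResponseCommand(RegisterBrokerResponseHeader.class);
    final RegisterBrokerResponseHeader responseHeader = (RegisterBrokerResponseHeader) response.readCustomHeader();
    responseHeader.setMasterAddr(result.getMasterAddr());     // tells a slave broker where its master is
    responseHeader.setHaServerAddr(result.getHaServerAddr());
    response.setCode(ResponseCode.SUCCESS);
    response.setRemark(null);
    return response;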

That completes the source analysis of registration. Finally, a summary:

3.1.5 Summary

  1. The broker registers with every NameServer at startup, establishing long-lived connections, and then sends a heartbeat every 30 seconds carrying the brokerId, broker address, broker name, cluster information, and so on
  2. After NameServer receives the heartbeat packet, it stores the data of the whole message cluster in the several HashMaps of RouteInfoManager and updates lastUpdateTimestamp

Some of the technical points involved are worth digging into further:

  1. Concurrent programming, including the use of thread pools, concurrent components (CountDownLatch, CopyOnWriteArrayList), and use of locks (when NameServer updates several routing tables)
  2. Netty related knowledge of network programming

3.2 Discovery

The client obtains broker information from the NameServer, a process known as route discovery. We take the producer as an example to analyze route discovery.

On the producer side, routing information is kept in DefaultMQProducerImpl#topicPublishInfoTable:

   private final ConcurrentMap<String/* topic */, TopicPublishInfo> topicPublishInfoTable =
        new ConcurrentHashMap<String, TopicPublishInfo>();

ConcurrentMap is a concurrency-safe container. It is used here because the table has two write entry points, which means the producer discovers routes at two moments:

  1. When a message is sent, the producer checks whether topicPublishInfoTable contains a usable entry for the topic; if not, it queries NameServer
    • DefaultMQProducer#send actually calls DefaultMQProducerImpl#tryToFindTopicPublishInfo
  2. When the producer starts, it also starts a scheduled task that periodically pulls topic routing information from NameServer
    • The code path is: DefaultMQProducerImpl#start -> mQClientFactory.start() -> MQClientInstance#this.startScheduledTask() -> MQClientInstance#this.updateTopicRouteInfoFromNameServer()

Why are there two entry points?

  • Because NameServer does not push routing information when it changes, the client has to pull it proactively to refresh its local copy. The request type is GET_ROUTEINFO_BY_TOPIC, which ends up calling RouteInfoManager's pickupTopicRouteData method. This design keeps NameServer simple, so the second entry point (the scheduled pull) is essential.
  • The first entry point is easier to understand: it updates on demand, where "demand" means the need to send a message (a trimmed sketch of that path follows this list).
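
A trimmed sketch of the on-demand path mentioned in the first point (simplified from DefaultMQProducerImpl, not verbatim):

    // Trimmed sketch of DefaultMQProducerImpl#tryToFindTopicPublishInfo (simplified, not verbatim)
    private TopicPublishInfo tryToFindTopicPublishInfo(final String topic) {
        TopicPublishInfo topicPublishInfo = this.topicPublishInfoTable.get(topic);
        if (null == topicPublishInfo || !topicPublishInfo.ok()) {
            // Nothing usable cached locally: pull the route for this topic from NameServer on demand
            this.topicPublishInfoTable.putIfAbsent(topic, new TopicPublishInfo());
            this.mQClientFactory.updateTopicRouteInfoFromNameServer(topic);
            topicPublishInfo = this.topicPublishInfoTable.get(topic);
        }

        if (topicPublishInfo != null && topicPublishInfo.ok()) {
            return topicPublishInfo;
        }

        // Still nothing: fall back to the default topic route so that a new topic can be auto-created
        this.mQClientFactory.updateTopicRouteInfoFromNameServer(topic, true, this.defaultMQProducer);
        return this.topicPublishInfoTable.get(topic);
    }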

Both entry points eventually call the following method of MQClientInstance:

public boolean updateTopicRouteInfoFromNameServer(final String topic, boolean isDefault,
    DefaultMQProducer defaultMQProducer)

which finally calls MQClientAPIImpl#getTopicRouteInfoFromNameServer:

    public TopicRouteData getTopicRouteInfoFromNameServer(final String topic, final long timeoutMillis,
        boolean allowTopicNotExist) throws MQClientException, InterruptedException, RemotingTimeoutException, RemotingSendRequestException, RemotingConnectException {
        GetRouteInfoRequestHeader requestHeader = new GetRouteInfoRequestHeader();
        requestHeader.setTopic(topic);

        // The request code GET_ROUTEINFO_BY_TOPIC is eventually matched in the processRequest method mentioned earlier
        RemotingCommand request = RemotingCommand.createRequestCommand(RequestCode.GET_ROUTEINFO_BY_TOPIC, requestHeader);

        RemotingCommand response = this.remotingClient.invokeSync(null, request, timeoutMillis);
        assert response != null;
        switch (response.getCode()) {
            case ResponseCode.TOPIC_NOT_EXIST: {
                if (allowTopicNotExist) {
                    log.warn("get Topic [{}] RouteInfoFromNameServer is not exist value", topic);
                }

                break;
            }
            case ResponseCode.SUCCESS: {
                byte[] body = response.getBody();
                if (body != null) {
                    return TopicRouteData.decode(body, TopicRouteData.class);
                }
            }
            default:
                break;
        }

        throw new MQClientException(response.getCode(), response.getRemark());
    }

The logic for NameServer to handle this request is simple: it looks up the routing information and returns it. A rough sketch is shown below.
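
For completeness, here is a rough sketch of that lookup, RouteInfoManager#pickupTopicRouteData (heavily trimmed and simplified, not verbatim):

    // Rough sketch: read the topic's queue data and the related broker data under a read lock
    public TopicRouteData pickupTopicRouteData(final String topic) {
        TopicRouteData topicRouteData = new TopicRouteData();
        this.lock.readLock().lock();
        try {
            List<QueueData> queueDataList = this.topicQueueTable.get(topic);
            if (queueDataList == null) {
                return null; // unknown topic -> the client sees TOPIC_NOT_EXIST
            }
            topicRouteData.setQueueDatas(queueDataList);

            // Collect the BrokerData of every broker that serves this topic
            Set<String> brokerNameSet = new HashSet<String>();
            for (QueueData qd : queueDataList) {
                brokerNameSet.add(qd.getBrokerName());
            }
            List<BrokerData> brokerDataList = new ArrayList<BrokerData>();
            for (String brokerName : brokerNameSet) {
                BrokerData brokerData = this.brokerAddrTable.get(brokerName);
                if (brokerData != null) {
                    brokerDataList.add(brokerData);
                }
            }
            topicRouteData.setBrokerDatas(brokerDataList);
            return topicRouteData;
        } finally {
            this.lock.readLock().unlock();
        }
    }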

At this point, the analysis of NameServer registration and discovery is complete.

Four, Route Deletion

Route deletion has two triggers:

  1. The broker shuts down abnormally. NameServer notices that the broker has stopped responding and deletes its routes. Detailed process:
    • The broker sends a heartbeat packet to NameServer every 30 seconds and updates its entry in brokerLiveTable, in particular lastUpdateTimestamp.
    • NameServer scans brokerLiveTable every 10 seconds. If lastUpdateTimestamp is more than 120s older than the current time, the broker is considered down and its routes are deleted.
  2. The broker shuts down normally: it sends the unregisterBroker instruction and disconnects from NameServer

Route deletion is relatively simple; a trimmed sketch of the first trigger is below, and you can trace the source code for both triggers yourself.
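
Here is a trimmed sketch of the scheduled scan behind the first trigger (based on RouteInfoManager#scanNotActiveBroker in 4.x; the 120s threshold is the BROKER_CHANNEL_EXPIRED_TIME constant):

    // Trimmed sketch (not verbatim): brokers whose heartbeat is older than 120s are removed
    public void scanNotActiveBroker() {
        Iterator<Entry<String, BrokerLiveInfo>> it = this.brokerLiveTable.entrySet().iterator();
        while (it.hasNext()) {
            Entry<String, BrokerLiveInfo> next = it.next();
            long last = next.getValue().getLastUpdateTimestamp();
            if ((last + BROKER_CHANNEL_EXPIRED_TIME) < System.currentTimeMillis()) {
                RemotingUtil.closeChannel(next.getValue().getChannel()); // close the Netty channel
                it.remove();                                             // drop it from brokerLiveTable
                // clean the other routing tables (topicQueueTable, brokerAddrTable, clusterAddrTable, ...)
                this.onChannelDestroy(next.getKey(), next.getValue().getChannel());
            }
        }
    }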

Five, The Questions Raised in the Foreword

Now let's review the three questions raised at the beginning of this article. The first one has been answered, so let's focus on the second and third.

5.1 How Can NameServer Resolve Data Inconsistency

The first step is to understand why NameServer has a data-inconsistency problem at all. Although NameServer is deployed as a cluster, the nodes are independent and do not communicate with each other, so it is inevitable that different nodes hold different data for a while during route registration and deletion.

How is it solved? In fact, RocketMQ does not treat this as a problem that needs solving: topic routing information does not require strong consistency across all nodes in the cluster; eventual consistency is enough.

To put it bluntly, a NameServer node does not care whether its data is consistent with the other nodes. The ones who care are producers and consumers. What the client really wants is routing information that is as correct and usable as possible: it picks a broker based on the routing information it obtained, and expects that broker to receive the message.

NameServer only considers a broker down and deletes its route when lastUpdateTimestamp is more than 120s older than the current time. That leaves a window of up to two minutes during which a producer may well send messages to a broker that has already gone down. So what happens in that case?

Let's set this question aside for now, because the answer lies not in NameServer but in the producer. The important thing is: now I have another good question!

5.2 Why Did RocketMQ Develop Its Own NameServer Instead of Using ZooKeeper

In fact, early versions of RocketMQ (the MetaQ 1.x and MetaQ 2.x phases) did rely on ZooKeeper. However, MetaQ 3.x (that is, RocketMQ) removed the ZooKeeper dependency and adopted its own NameServer.

Because RocketMQ is designed to be simple and efficient, its architecture only requires a lightweight, eventually consistent metadata server rather than a strongly consistent solution like ZooKeeper. The benefit is clear: there is no need to depend on yet another piece of middleware, which reduces overall maintenance cost.

By extension, the choice between NameServer and ZooKeeper reflects a design trade-off in distributed systems.

In CAP terms, RocketMQ chose an AP-style NameServer instead of a CP-style ZooKeeper for its registry.

The reason for not using ZooKeeper is that, for RocketMQ, giving up A (availability) has a relatively large impact on stability.

ZooKeeper (CP) is suitable for the following scenarios:

  • It plays an irreplaceable role in scenarios such as distributed leader election and active/standby high-availability switchover. These needs are concentrated in fields such as big data and offline tasks: that field focuses on partitioning data sets and processing them with many parallel processes and threads, and those tasks and processes always reach points where they must be coordinated, which is exactly where ZooKeeper comes in handy.
  • In transaction scenarios, however, the transaction link has natural weaknesses in areas such as master-service data access, large-scale service discovery, and large-scale health monitoring, so introducing ZooKeeper there should be avoided as much as possible. In production practice, the scenario, capacity, and SLA requirements must be evaluated before ZooKeeper is adopted.

NameServer applies to the following scenarios:

  • As a name service, NameServer needs to provide only basic functions such as service registration, service deletion, and service discovery. Its nodes do not communicate with each other, so data inconsistency between nodes is tolerated for a while, which is an AP choice. As a rough rule: big data leans toward CP while online services lean toward AP, and within distributed coordination, leader election needs CP while service discovery is fine with AP.

References:

  • RocketMQ 4.8.0 source
  • Github.com/apache/rock…
  • Github.com/DillonDong/…
  • Inside The RocketMQ Technology

Finally

  • If you found this article helpful, please like, bookmark, and comment;
  • If there are mistakes in the article, please point them out in the comments. Reposting is welcome; please credit the source;
  • Personal WeChat: Listener27, for discussing technology, interviews, and study materials, and for referrals to first-tier Internet companies, etc.