There is no best, only the right

preface

Message queues are so widely used in Internet backend systems that almost every backend interviewer will grill candidates from every angle on how to use message queues and handle their pitfalls.

Last time we talked about some of the problems that come with introducing message queues.

What? Raising problems without solutions is useless! Don’t panic, let’s work out the solutions now.

High availability

The major MQ products all offer high-availability modes to choose from!

RabbitMQ can set up a high-availability cluster in mirrored mode. You can configure data synchronization to all nodes or to a specified number of nodes.

RocketMQ’s multi-master multi-slave deployment architecture:

A Producer establishes a long-lived connection with one node in the NameServer cluster (chosen at random), periodically fetches Topic routing information from the NameServer, establishes long-lived connections with the Broker Masters that serve its Topics, and periodically sends heartbeats to those Masters. A Producer can only send messages to a Broker Master.


A Consumer, on the other hand, establishes long-lived connections with both the Master and the Slaves that serve its Topics. Depending on the Broker configuration, it can subscribe to messages from either the Master or a Slave.

Kafka’s architecture is very similar to RocketMQ’s.

Zookeeper, playing a role similar to NameServer, stores the cluster configuration and elects the leader.


A cluster contains many brokers for holding messages. Kafka scales horizontally: generally, the more brokers there are, the higher the cluster’s throughput.


Producers publish messages to brokers in push mode.


Consumers subscribe to and consume messages from brokers in pull mode.

Duplicate consumption

Most message queues now provide at-least-once delivery, meaning each message is delivered at least once. Why does duplicate consumption arise under this guarantee? Usually because of the network. Normally, after successfully consuming a message, the consumer sends an acknowledgement back to MQ, signaling that the message has been consumed and should not be delivered to other consumers. But if that acknowledgement is lost on the network and never reaches MQ, MQ assumes the message was not consumed successfully and redelivers it to another consumer, resulting in duplicate consumption.

The question then becomes: how do we ensure idempotency on the consumer side?

Idempotent means that executing an operation any number of times has the same effect as executing it once. In plain English: however many times you call my interface with the same arguments, you get the same result.
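As a toy illustration (the field names here are made up for the example), setting a field to a fixed value is idempotent, while incrementing it is not:

```python
order = {"status": "NEW", "retries": 0}

def mark_paid(o):
    # Idempotent: running this once or ten times leaves the same state.
    o["status"] = "PAID"

def bump_retries(o):
    # Not idempotent: every call moves the state further.
    o["retries"] += 1

mark_paid(order)
mark_paid(order)       # second call changes nothing
bump_retries(order)
bump_retries(order)    # second call changes the state again
print(order["status"], order["retries"])  # PAID 2
```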

How to achieve idempotence depends on the business, but the core principle is deduplication, which can be divided into strong checks and weak checks:

  • Strong check: operations involving money and finance generally require a strong check 😜 (nobody wants that blame falling on them out of nowhere). Say the consumer is a payment service: after each successful payment it inserts a ledger record, and the payment and the insert are wrapped in a single transaction. Before paying again, it first looks up the ledger for that record; if the record exists, the payment has already happened, so it returns directly. The ledger table also doubles as a reconciliation tool! In simple scenarios you can also rely on a database unique constraint.

  • Weak check: for less strict, less important cases, you can store the message ID in a Redis set with an expiration time as appropriate. If the message ID cannot be guaranteed unique, have the producer generate a token and store it in Redis; the consumer deletes it after consuming (Redis operations are atomic, and a failed delete returns 0, which signals a duplicate).
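A minimal sketch of the strong-check idea, using SQLite’s unique constraint as the deduplication ledger (the table name, message IDs, and `consume` helper are invented for this example; a real payment service would use its production database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ledger (msg_id TEXT PRIMARY KEY, amount REAL)")

def consume(msg_id, amount):
    """Process a payment message at most once per msg_id."""
    try:
        with conn:  # the ledger insert and the business write share one transaction
            conn.execute("INSERT INTO ledger VALUES (?, ?)", (msg_id, amount))
            # ... perform the actual transfer here ...
        return "processed"
    except sqlite3.IntegrityError:
        # Unique constraint hit: this message was already consumed.
        return "duplicate"

print(consume("m-1", 9.9))  # processed
print(consume("m-1", 9.9))  # duplicate -> the redelivery is ignored
```

Because the check and the insert are one atomic statement, two consumers racing on the same redelivered message cannot both succeed.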

Message loss

How can messages get lost along the way? 💔

In general, there are three ways messages can be lost:

  • Producers lose data. Mainstream MQs have acknowledgement or transaction mechanisms to ensure a producer’s message actually reaches MQ. RabbitMQ, for example, offers transaction and confirm modes.
  • The message queue loses data. This can usually be solved by enabling MQ’s disk persistence configuration; once a message is written to disk you can relax 👍.
  • Consumers lose data. Consumers typically lose data because of automatic acknowledgement: MQ deletes a message as soon as it receives the ack, so if the consumer then fails, the message is gone. Switching to manual acknowledgement solves this!
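A toy simulation of the consumer-side case (the broker class is invented for this sketch, not a real MQ client): with auto-ack the message is deleted before the handler runs, so a crash loses it; with manual ack it survives for redelivery.

```python
class ToyBroker:
    """Minimal broker: a message is removed only when acknowledged."""
    def __init__(self):
        self.pending = {}   # delivery_tag -> message
        self.next_tag = 0

    def publish(self, msg):
        self.next_tag += 1
        self.pending[self.next_tag] = msg

    def deliver(self, handler, auto_ack):
        for tag, msg in list(self.pending.items()):
            if auto_ack:
                del self.pending[tag]       # deleted before the handler runs
            try:
                handler(msg)
                if not auto_ack:
                    del self.pending[tag]   # manual ack: only on success
            except Exception:
                pass  # consumer crashed; with manual ack the message survives

def crashing_handler(msg):
    raise RuntimeError("consumer died mid-processing")

broker = ToyBroker()
broker.publish("order-1")
broker.deliver(crashing_handler, auto_ack=True)
print(len(broker.pending))  # 0 -> the message is lost

broker.publish("order-2")
broker.deliver(crashing_handler, auto_ack=False)
print(len(broker.pending))  # 1 -> still pending, will be redelivered
```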

Ordered messages

Ordered-message scenarios may be less common, but they do exist. Take an e-commerce order: after the order is placed, the inventory deduction and the order creation must happen in sequence. How do we guarantee the order?

  1. First, the producer must enqueue messages in order; if they are enqueued out of order, there is nothing MQ can do about it 💔
  2. MQ generally guarantees that an individual queue is FIFO (first in, first out), but only within that one queue, so messages belonging to the same operation can be hashed to the same queue to keep them ordered.
  3. Consumers also need care: if multiple consumers consume one queue at the same time, messages can still end up out of order. That is effectively multi-threaded consumption!

Combining the points above basically solves ordered message consumption!
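The hash-routing step from point 2 can be sketched like this (the queue count and key names are made up for the example):

```python
NUM_QUEUES = 4
queues = [[] for _ in range(NUM_QUEUES)]

def route(order_id, message):
    # Hash the business key so that every message for one order
    # lands in the same queue, preserving per-order FIFO.
    idx = hash(order_id) % NUM_QUEUES
    queues[idx].append(message)
    return idx

a = route("order-42", "create order")
b = route("order-42", "deduct inventory")
print(a == b)  # True: both steps share one queue, so they stay in order
```

Messages for different orders may land in different queues and be consumed in parallel; only messages sharing a key are serialized.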

Toodles!