A single RabbitMQ instance is definitely not highly available; to get high availability, you have to set up a cluster.

Today Songo will talk about RabbitMQ cluster setup.

1. Two modes

Speaking of clustering, the first question you might ask is: if I have a RabbitMQ cluster, does every instance in the cluster keep a full copy of all the messages?

This involves two modes of RabbitMQ clustering:

  • Common cluster
  • Mirrored cluster

1.1 Common Cluster

In common cluster mode, RabbitMQ is deployed on multiple servers, each server starts one RabbitMQ instance, and the RabbitMQ instances communicate with each other.

The metadata of a Queue (mainly its configuration) is synchronized across all RabbitMQ instances, but the messages exist on only one RabbitMQ instance and are not synchronized to the other instances.

When we consume a message, if we are connected to a different instance, that instance uses the metadata to locate the Queue, fetches the message from the instance that actually holds it, and delivers it to the consumer.

This kind of cluster can increase RabbitMQ throughput, but it does not guarantee high availability: once a RabbitMQ instance goes down, its messages cannot be accessed. If the message queue is persistent, the messages can be accessed again once the instance is restored; if it is not persistent, the messages are lost.

The general flow chart is as follows:

1.2 Mirrored Cluster

The main difference in mirrored cluster mode is that queue data and metadata are no longer stored on a single machine, but on multiple machines at the same time. That is, each RabbitMQ instance holds a mirrored copy of the queue. Every time a message is written, it is automatically synchronized to multiple instances, so that if one machine goes down, the other machines still have a copy of the data and can continue to serve, thus achieving high availability.

The general flow chart is as follows:

1.3 Node Types

RabbitMQ has two types of nodes:

  • RAM node: a RAM node keeps all the metadata definitions (queues, exchanges, bindings, users, permissions, and vhosts) in memory, with the benefit that operations such as exchange and queue declarations are faster.
  • Disk node: stores metadata on disk. A single-node system only allows a disk node; otherwise the configuration information would be lost every time RabbitMQ restarts.

RabbitMQ requires at least one disk node in the cluster; all other nodes can be RAM nodes. When a node joins or leaves the cluster, the change must be written to at least one disk node. If the only disk node in the cluster crashes, the cluster can keep running, but no metadata operations (such as creating or deleting queues and exchanges, or changing users and permissions) can be performed until the node is restored. To keep cluster information reliable, or if you are not sure whether to use disk nodes or RAM nodes, just use disk nodes.
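By default, rabbitmqctl joins a node to a cluster as a disk node. If you want a RAM node instead, or want to convert an existing node, the commands below sketch how (using the rabbit@rabbit01 node name that appears later in this article):

```shell
# Join the cluster as a RAM node instead of the default disk node
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@rabbit01
rabbitmqctl start_app

# Convert an already-joined node's type (the app must be stopped first)
rabbitmqctl stop_app
rabbitmqctl change_cluster_node_type disc   # or: ram
rabbitmqctl start_app
```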

2. Create a common cluster

2.1 Preliminary Knowledge

Now that we have a general understanding of the structure, let's build the cluster. We will start with a common cluster and use Docker to build it.

Before building, there are two preparations that you need to understand:

  1. The Erlang Cookie of every node in the cluster must be identical. By default it is stored in the file /var/lib/rabbitmq/.erlang.cookie. When we use Docker to create the RabbitMQ containers, we set the cookie value accordingly.
  2. RabbitMQ uses host names to connect to services, so make sure each host name can be pinged. You can manually add host-name-to-IP mappings by editing /etc/hosts; if a host name cannot be resolved, the RabbitMQ service will fail to start. (Note that here we build the RabbitMQ cluster on a single server and use Docker container links to connect the containers; building across different servers is slightly different.)
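As a sketch of point 2, the /etc/hosts entries might look like this when building across real servers (the IP addresses here are made-up examples; use your servers' real addresses):

```shell
# /etc/hosts on every node -- example IPs, adjust to your environment
192.168.1.101 rabbit01
192.168.1.102 rabbit02
192.168.1.103 rabbit03
```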

2.2 Starting Construction

Run the following command to create three RabbitMQ containers:

docker run -d --hostname rabbit01 --name mq01 -p 5671:5672 -p 15671:15672 -e RABBITMQ_ERLANG_COOKIE="javaboy_rabbitmq_cookie" rabbitmq:3-management
docker run -d --hostname rabbit02 --name mq02 --link mq01:mylink01 -p 5672:5672 -p 15672:15672 -e RABBITMQ_ERLANG_COOKIE="javaboy_rabbitmq_cookie" rabbitmq:3-management
docker run -d --hostname rabbit03 --name mq03 --link mq01:mylink02 --link mq02:mylink03 -p 5673:5672 -p 15673:15672 -e RABBITMQ_ERLANG_COOKIE="javaboy_rabbitmq_cookie" rabbitmq:3-management
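Before joining the nodes together, it is worth double-checking that the Erlang cookie really is identical in all three containers; assuming the container names above, a quick check might look like this:

```shell
# All three outputs should be identical
docker exec mq01 cat /var/lib/rabbitmq/.erlang.cookie
docker exec mq02 cat /var/lib/rabbitmq/.erlang.cookie
docker exec mq03 cat /var/lib/rabbitmq/.erlang.cookie
```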

The running results are as follows:

Three nodes are now started. Note that mq02 and mq03 use the --link parameter to connect to the other containers. If you are not familiar with this parameter, you can reply "docker" in the background of the public account Jiangnan Little Rain to get the Docker tutorial Songo wrote, which covers it, so I won't repeat it here. It is also important to note that the mq03 container must be able to connect to both mq01 and mq02.

Next, enter the mq02 container and check the hosts file first. You can see that the container links we configured have taken effect:

From inside the mq02 container, mq01 can now be reached via either mylink01 or rabbit01.

Next, let’s start configuring the cluster.

Execute the following commands one by one in the mq02 container to add it to the cluster:

rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbit01
rabbitmqctl start_app

To check the cluster status, enter the following command:

rabbitmqctl cluster_status

As you can see, there are already two nodes in the cluster.

Next add MQ03 to the cluster in the same way:

rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbit01
rabbitmqctl start_app

Next, we can view the cluster information:

As you can see, there are already three nodes in the cluster.

You can also view the cluster information on the Web page of each of the three RabbitMQ instances:

2.3 Code Testing

Let’s briefly test the cluster.

We create a parent project named MQ_cluster_demo, and then create two child projects in it.

The first child project, named Provider, is a message producer created with Web and RabbitMQ dependencies as follows:

Then configure application.properties as follows (note the cluster configuration):

spring.rabbitmq.addresses=localhost:5671,localhost:5672,localhost:5673
spring.rabbitmq.username=guest
spring.rabbitmq.password=guest

A simple queue is provided as follows:

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RabbitConfig {
    public static final String MY_QUEUE_NAME = "my_queue_name";
    public static final String MY_EXCHANGE_NAME = "my_exchange_name";
    public static final String MY_ROUTING_KEY = "my_queue_name";

    @Bean
    Queue queue() {
        return new Queue(MY_QUEUE_NAME, true, false, false);
    }

    @Bean
    DirectExchange directExchange() {
        return new DirectExchange(MY_EXCHANGE_NAME, true, false);
    }

    @Bean
    Binding binding() {
        return BindingBuilder.bind(queue())
                .to(directExchange())
                .with(MY_ROUTING_KEY);
    }
}

There is not much to say about this; it is all basic content. Next we send a test message in a unit test:

import org.junit.jupiter.api.Test;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class ProviderApplicationTests {

    @Autowired
    RabbitTemplate rabbitTemplate;

    @Test
    void contextLoads() {
        // A null exchange falls back to the default exchange; since the routing
        // key equals the queue name, the message is routed straight to the queue.
        rabbitTemplate.convertAndSend(null, RabbitConfig.MY_QUEUE_NAME,
                "Hello jiangnan a little rain");
    }
}

After this message is sent, you will see the message displayed on all three instances in the RabbitMQ web management console, but the message itself exists on only one RabbitMQ instance.

Next we create a message consumer whose dependencies and configuration are the same as the producer's, so I won't repeat them. Add a message receiver to the consumer:

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.stereotype.Component;

@Component
public class MsgReceiver {

    @RabbitListener(queues = RabbitConfig.MY_QUEUE_NAME)
    public void handleMsg(String msg) {
        System.out.println("msg = " + msg);
    }
}

When the message consumer is successfully started, only one message is received in this method, further confirming that our RabbitMQ cluster is fine.

2.4 Reverse Test

Songo then gives two counter-examples to show that messages are not synchronized to the other RabbitMQ instances.

With all three RabbitMQ instances running and the Consumer stopped, send a message through the Provider, then shut down the mq01 instance and start the Consumer. The Consumer will not consume the message; instead it reports an error saying it cannot connect to mq01. This shows that the message lives on mq01 and was not synchronized to the other two instances. Conversely, if after the Provider sends the message successfully we shut down mq02 instead of mq01, consumption of the message is unaffected.
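The reverse test above can be driven entirely from the command line; a sketch, assuming the container names from section 2.2:

```shell
# Simulate the failure of the node that holds the message
docker stop mq01    # the Consumer now fails to connect: the queue's home node is down
docker start mq01   # once mq01 is back, the persisted message can be consumed again

# The same test with a node that does NOT hold the message
docker stop mq02    # consumption is unaffected
```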

3. Create a mirrored cluster

A mirrored cluster does not need to be set up separately; on top of the common cluster, we just need to configure the queues as mirrored queues.

This configuration can be done on a web page or on the command line. Let’s look at it separately.

3.1 Configuring a Mirrored Queue on the Web Page

Let’s start by looking at how to configure a mirror queue on a web page.

Click the Admin tab, then click Policies on the right, then click Add / update a policy, as shown below:

Next add a policy as shown below:

The meanings of the parameters are as follows:

  • Name: the name of the policy.
  • Pattern: the queue matching pattern (a regular expression).
  • Definition: the mirror definition, which has three main parameters: ha-mode, ha-params, and ha-sync-mode.
    • ha-mode: the mode of the mirrored queue. The value can be all, exactly, or nodes. all means mirror on all nodes in the cluster (the default); exactly means mirror on an exact number of nodes, with the number specified by ha-params; nodes means mirror on the specified nodes, with the node names specified by ha-params.
    • ha-params: the parameters required by the chosen ha-mode.
    • ha-sync-mode: the synchronization mode for messages in the queue. The value can be automatic or manual.
  • Priority: optional; the priority of the policy.

After the configuration is complete, click the Add / update policy button below to add the policy, as follows:

Once the addition is complete, we can perform a simple test.

Make sure all three RabbitMQ instances are running, then send a message to the message queue using the Provider above.

Close the MQ01 instance after sending.

Next, start the Consumer; it can consume the message successfully (note the difference from the earlier reverse test), which shows that the mirrored queue has been set up successfully.

3.2 Configuring a Mirrored Queue on the CLI

The command line configuration format is as follows:

rabbitmqctl set_policy [-p vhost] [--priority priority] [--apply-to apply-to] {name} {pattern} {definition}

Take a simple configuration example:

rabbitmqctl set_policy -p / --apply-to queues my_queue_mirror "^" '{"ha-mode":"all","ha-sync-mode":"automatic"}'
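As a variation on the example above, a policy using exactly mode (mirror each queue to a fixed number of nodes) might look like the sketch below; the policy name two_node_mirror is made up. The list_policies and clear_policy subcommands let you inspect and remove policies:

```shell
# Mirror every queue to exactly 2 nodes, with automatic synchronization
rabbitmqctl set_policy -p / --apply-to queues two_node_mirror "^" \
  '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'

# Inspect and remove policies
rabbitmqctl list_policies -p /
rabbitmqctl clear_policy -p / two_node_mirror
```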

4. Summary

So that’s the RabbitMQ cluster setup from Zongo. If you’re interested, go ahead and try it