RabbitMQ persistence

Concept

We have just seen how to handle tasks without losing them, but how do we ensure that messages sent by producers are not lost when the RabbitMQ service itself goes down? By default, when RabbitMQ exits or crashes, it forgets all queues and messages unless it is told not to. Two things are required to make sure messages are not lost: we need to mark both the ==queue and the messages as persistent==.

How queues persist

A RabbitMQ queue is made durable by setting the durable flag to true when the queue is declared. Note, however, that if the queue was previously declared as non-durable, declaring it again as durable will produce an error; you must first delete the original queue, or create a new durable queue under a different name. The following is how the console UI displays durable and non-durable queues:

Message persistence

Making messages persistent requires a change in the producer code: add the ==MessageProperties.PERSISTENT_TEXT_PLAIN== property when publishing. ==Marking a message as persistent still does not guarantee that it will not be lost. It tells RabbitMQ to save the message to disk==, but there is a short window after RabbitMQ has accepted the message and before it has finished writing it to disk, during which the message lives only in a cache and is not actually on disk. The persistence guarantee is therefore not strong, but it is more than sufficient for our simple task queue. If you need a more robust persistence strategy, refer to the publisher-confirms section later in the course.
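As a sketch (the host `localhost` and queue name `task_queue` are illustrative, and a running broker plus the `com.rabbitmq:amqp-client` library are assumed), a producer that declares a durable queue and publishes a persistent message might look like:

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.MessageProperties;

public class PersistentProducer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a local broker
        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // durable = true: the queue definition survives a broker restart
            boolean durable = true;
            channel.queueDeclare("task_queue", durable, false, false, null);

            // PERSISTENT_TEXT_PLAIN sets deliveryMode = 2, asking RabbitMQ
            // to write the message body to disk as well
            String message = "hello";
            channel.basicPublish("", "task_queue",
                    MessageProperties.PERSISTENT_TEXT_PLAIN,
                    message.getBytes(StandardCharsets.UTF_8));
        }
    }
}
```

Both halves matter: a persistent message published to a non-durable queue is still lost on restart, because the queue itself disappears.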

Unfair distribution

In the beginning we learned that RabbitMQ distributes messages in round-robin fashion, but this strategy works poorly when one consumer is very fast and another is very slow. With round-robin dispatch, the fast consumer finishes its share quickly and then sits idle, while the slow consumer is constantly working. RabbitMQ knows nothing about this and keeps distributing messages evenly, so in this situation the distribution is not actually fair. To avoid this, we can set ==channel.basicQos(1);== which tells RabbitMQ: do not assign me a new task until I have processed and acknowledged the current one; I can handle only one task at a time. RabbitMQ will then dispatch the next task to a consumer that is not busy. Of course, if all consumers have work in hand and the queue keeps receiving new tasks, ==the queue may fill up; in that case the only options are to add new workers or to change the strategy for storing tasks.==
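The effect of `channel.basicQos(1)` can be illustrated with a small stand-alone simulation (a toy model, not the real client; the worker speeds and task count are made up). With a prefetch of 1, a new task only goes to a worker once it has acknowledged its previous one, so the fast worker ends up handling most of the tasks instead of exactly half:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class FairDispatchSim {
    // Toy model of basicQos(prefetch): a worker only receives a new task
    // while its unacknowledged count is below the prefetch limit.
    static int[] run(int tasks, int prefetch, int fastCost, int slowCost) {
        Deque<Integer> queue = new ArrayDeque<>();
        for (int i = 0; i < tasks; i++) queue.add(i);
        int[] done = new int[2];            // tasks completed per worker
        int[] cost = {fastCost, slowCost};  // time steps one task takes
        int[] remaining = new int[2];       // steps left on current task
        int[] unacked = new int[2];         // delivered but not yet acked
        while (!queue.isEmpty() || unacked[0] + unacked[1] > 0) {
            for (int w = 0; w < 2; w++) {   // dispatch up to the limit
                if (unacked[w] < prefetch && !queue.isEmpty()) {
                    queue.poll();
                    unacked[w]++;
                    if (remaining[w] == 0) remaining[w] = cost[w];
                }
            }
            for (int w = 0; w < 2; w++) {   // one time step of work
                if (remaining[w] > 0 && --remaining[w] == 0) {
                    unacked[w]--;           // ack on completion
                    done[w]++;
                    if (unacked[w] > 0) remaining[w] = cost[w];
                }
            }
        }
        return done;
    }

    public static void main(String[] args) {
        // 60 tasks, prefetch 1; fast worker needs 1 step per task, slow needs 5
        int[] done = run(60, 1, 1, 5);
        System.out.println("fast worker: " + done[0] + ", slow worker: " + done[1]);
        // prints: fast worker: 50, slow worker: 10
    }
}
```

Under round-robin each worker would receive 30 of the 60 tasks and the fast one would spend most of its time idle; with prefetch 1 the work is divided by capacity instead.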

Prefetch value

Messages are delivered asynchronously, so there can be more than one message in flight on a channel at any time, and manual acknowledgements from the consumer are asynchronous as well. There is therefore a buffer of unacknowledged messages, and developers will usually want to limit its size. ==This is done by setting the prefetch count with the basic.qos method.== The prefetch count defines the maximum number of unacknowledged messages allowed on a channel. ==Once that number is reached, RabbitMQ stops sending further messages on the channel until at least one outstanding message is acknowledged.== For example, suppose the messages with delivery tags 5, 6, 7 and 8 are unacknowledged on a channel whose prefetch count is 4: RabbitMQ will not deliver anything more on that channel until at least one of them is acked. If the message with tag 6 is then acked, RabbitMQ notices this and sends one more message.

Message acknowledgement and the QoS prefetch value have a significant impact on consumer throughput. In general, increasing the prefetch count increases the speed at which messages are delivered to consumers. ==Automatic acknowledgement gives the best delivery rate, but the number of messages delivered and not yet processed also grows, increasing the consumer's RAM consumption.== Care should therefore be taken with automatic acknowledgement, or with manual acknowledgement combined with an unlimited prefetch: a consumer holding a large number of unacknowledged messages drives up memory consumption on the node it is connected to. Finding the right value is a process of trial and error; values in the range 100 to 300 usually provide optimal throughput without putting consumers at much risk. A prefetch of 1 is the most conservative setting; it leads to very low throughput, especially in environments where consumer latency is high. For most applications, a somewhat higher value is optimal.
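The delivery-tag example above can be modeled in a few lines of stand-alone Java (a toy model of the channel's unacked window, not the real client API; tags here start at 1 rather than 5 for simplicity):

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class PrefetchWindow {
    private final int prefetch;               // basic.qos prefetch count
    private final Set<Long> unacked = new LinkedHashSet<>();
    private long nextTag = 1;

    PrefetchWindow(int prefetch) { this.prefetch = prefetch; }

    /** Broker side: deliver only while the unacked window has room. */
    boolean deliver() {
        if (unacked.size() >= prefetch) return false;
        unacked.add(nextTag++);
        return true;
    }

    /** Consumer side: acknowledging a tag frees one slot in the window. */
    void ack(long tag) { unacked.remove(tag); }

    public static void main(String[] args) {
        PrefetchWindow w = new PrefetchWindow(4);
        while (w.deliver()) { }          // fills the window: 4 unacked messages
        System.out.println(w.deliver()); // false: window full, nothing more is sent
        w.ack(2);                        // consumer acks one outstanding message
        System.out.println(w.deliver()); // true: RabbitMQ can send one more
    }
}
```

The throughput trade-off follows directly from this model: a larger window lets the broker keep more messages in flight per round trip, at the cost of more unprocessed messages sitting in consumer memory.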