
Message queue middleware is an important component of distributed systems. It mainly addresses application coupling, asynchronous messaging, traffic clipping, and similar problems, and helps achieve high-performance, highly available, scalable, eventually consistent architectures. The most widely used message queues are ActiveMQ, RabbitMQ, ZeroMQ, Kafka, MetaMQ, and RocketMQ.

Application scenarios of message queues

The following describes the main application scenarios of message queues: asynchronous processing, application decoupling, traffic clipping, log processing, and message communication.

1. Asynchronous processing

Scenario description: after a user registers, the system needs to send a registration email and a registration SMS. There are two traditional approaches: serial and parallel.

Serial mode: after the registration information is successfully written to the database, the registration email is sent, and then the registration SMS is sent. Only after all three tasks are complete is the response returned to the client.

Parallel mode: after the registration information is successfully written to the database, the registration email and the registration SMS are sent at the same time. Once all three tasks are complete, the response is returned to the client. The difference from the serial approach is that parallelism reduces the processing time.

Assuming each of the three steps takes 50 ms and ignoring other overhead such as the network, the serial approach takes 150 ms and the parallel approach takes about 100 ms.

Because the CPU can process only a certain number of requests per unit of time, assume its throughput is 100 requests per second. In serial mode the system can handle about 7 requests per second (1000/150); in parallel mode, about 10 (1000/100).

Summary: as the case above shows, the performance (concurrency, throughput, response time) of the traditional approaches becomes a bottleneck. How can this be solved?

Introduce a message queue and handle the business logic that is not strictly required on the request path (sending the email and the SMS) asynchronously. The restructured architecture is as follows:

With this arrangement, the user's response time is essentially the time it takes to write the registration information to the database, about 50 ms. The email and SMS tasks are written to the message queue and the call returns immediately; because writing to the message queue is very fast, its cost can be ignored. The user's response time is therefore about 50 ms, and after the architecture change the system throughput rises to about 20 QPS: three times the serial approach and twice the parallel approach.
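To make the asynchronous flow concrete, here is a minimal sketch using only Python's standard library, with queue.Queue standing in for the message queue and the database write, email, and SMS simulated by 50 ms sleeps (all names and timings are illustrative):

```python
import queue
import threading
import time

mq = queue.Queue()  # stand-in for the message queue

def notification_worker():
    """Consumer side: sends the email and SMS off the critical path."""
    while True:
        user = mq.get()
        time.sleep(0.05)  # simulate sending the registration email (~50 ms)
        time.sleep(0.05)  # simulate sending the registration SMS   (~50 ms)
        mq.task_done()

threading.Thread(target=notification_worker, daemon=True).start()

def register(user):
    """Producer side: only the database write stays on the request path."""
    time.sleep(0.05)     # simulate writing the registration info to the database
    mq.put(user)         # enqueueing is effectively instantaneous
    return "registered"  # respond to the client after ~50 ms, not ~150 ms

start = time.time()
register("alice")
print(f"response time: {(time.time() - start) * 1000:.0f} ms")
mq.join()  # wait for the background notifications before exiting
```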

2. Application decoupling

Scenario Description: After a user places an order, the order system notifies the inventory system. Traditionally, the order system calls the inventory system interface. The diagram below:

Disadvantages of the traditional model:

If the inventory system is unreachable, the stock deduction fails and therefore the order fails; the order system and the inventory system are tightly coupled.

How can this be solved? The scheme after introducing a message queue is shown below:

Order system: after the user places an order, the order system completes the persistence, writes the order message to the message queue, and immediately returns "order placed successfully" to the user.

Inventory system: subscribes to the order messages, obtains the order information in pull or push mode, and performs the stock operation according to the order information.

Even if the inventory system is not working properly when an order is placed, the order itself is not affected: once the order has been written to the message queue, the order system no longer cares about the subsequent operations. This decouples the order system from the inventory system.
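As a sketch of this decoupled design (assuming a RabbitMQ broker on localhost, the pika client, and an illustrative queue named "orders"), the order system publishes the order and returns, while the inventory system consumes it separately:

```python
import json
import pika  # assumes `pip install pika` and RabbitMQ running on localhost

# --- Order system: persist the order, publish it, return to the user ---
conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = conn.channel()
channel.queue_declare(queue="orders", durable=True)

order = {"order_id": "1001", "sku": "SKU-42", "qty": 2}
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=json.dumps(order),
    properties=pika.BasicProperties(delivery_mode=2),  # persistent message
)
print("order placed successfully")  # returned to the user immediately

# --- Inventory system (a separate process in practice): subscribe and deduct stock ---
def handle_order(ch, method, properties, body):
    o = json.loads(body)
    print(f"deducting stock: {o['sku']} x {o['qty']}")
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="orders", on_message_callback=handle_order)
channel.start_consuming()  # blocks; this part runs inside the inventory service
```

If the inventory service is down, the order message simply waits in the durable queue until the consumer comes back.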

3. Traffic clipping

Traffic clipping (peak shaving) is also a common scenario for message queues, typically used in flash-sale (seckill) or group-buying activities.

Application scenario: a sudden surge of traffic overwhelms the application. To solve this problem, a message queue is usually placed in front of the application.

This controls how many requests are admitted to the activity and shields the application from the short burst of heavy traffic.

The server receives the user's request and writes it to the message queue. If the queue length exceeds the configured maximum, the request is discarded or redirected to an error page.

The flash-sale service then performs the subsequent processing according to the requests in the message queue.
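A minimal sketch of the clipping idea with a bounded in-process queue (the capacity and names are illustrative): requests beyond the queue's capacity are rejected up front, and the flash-sale workers drain the queue at the rate the system can sustain.

```python
import queue

MAX_PENDING = 1000                       # illustrative capacity of the activity
requests = queue.Queue(maxsize=MAX_PENDING)

def accept(request):
    """Front layer: enqueue the request or reject it immediately."""
    try:
        requests.put_nowait(request)
        return "queued"
    except queue.Full:
        return "sold out / try again later"  # discard or redirect to an error page

def flash_sale_worker():
    """Back layer: processes queued requests at its own pace."""
    while True:
        req = requests.get()
        # ... check stock, create the order, etc. ...
        requests.task_done()
```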

4. Log processing

Log processing refers to using a message queue such as Kafka in the log pipeline to solve the problem of transferring large volumes of log data. The simplified architecture is as follows:

Log collection client: collects log data and periodically writes it to the Kafka queue; Kafka message queue: receives, stores, and forwards the log data; log processing application: subscribes to and consumes the log data in the Kafka queue.

Here is an example of Sina's Kafka-based log processing application:

Kafka: message queue that receives user logs

Logstash: parses the logs and outputs them to Elasticsearch in JSON format.

Elasticsearch: a schemaless, real-time data storage service that organizes data with indexes and provides powerful search and aggregation capabilities.

Kibana: a data visualization component of the ELK Stack, built on Elasticsearch.
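As a rough sketch of the log collection client's side of such a pipeline (assuming the kafka-python package, a broker at localhost:9092, and an illustrative topic named "app-logs"; Logstash and Elasticsearch sit downstream and are not shown):

```python
import json
import time
from kafka import KafkaProducer  # assumes `pip install kafka-python`

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def ship_log(line):
    """Send one log line to the Kafka topic consumed by the log pipeline."""
    producer.send("app-logs", {"ts": time.time(), "line": line})

ship_log("user 42 logged in")
producer.flush()  # make sure buffered records reach the brokers before exiting
```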

5. Message communication

Message communication means that, because message queues usually have efficient communication mechanisms built in, they can also be used for pure message passing, for example to implement point-to-point message queues or chat rooms.

Point-to-point communication:

Clients A and B use the same queue to communicate with each other.

Chat room communication:

Clients A, B, through N subscribe to the same topic, publishing and receiving messages on it, which achieves an effect similar to a chat room.

These are in fact the two messaging models of message queues: point-to-point and publish-subscribe. The diagrams above are schematic and for reference only.
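A minimal in-process sketch of the two models (the Topic class below is purely illustrative): in point-to-point, each message is taken by exactly one receiver; in publish-subscribe, every subscriber gets its own copy.

```python
import queue

# Point-to-point: clients A and B share one queue; a message is consumed once.
p2p = queue.Queue()
p2p.put("hello from A")
print("B received:", p2p.get())

# Publish-subscribe: each subscriber has its own queue bound to a topic.
class Topic:
    def __init__(self):
        self.subscribers = []

    def subscribe(self):
        q = queue.Queue()
        self.subscribers.append(q)
        return q

    def publish(self, message):
        for q in self.subscribers:  # fan out a copy to every subscriber
            q.put(message)

chat_room = Topic()
a, b, n = chat_room.subscribe(), chat_room.subscribe(), chat_room.subscribe()
chat_room.publish("hi everyone")
print(a.get(), b.get(), n.get())    # all three clients see the same message
```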

Examples of messaging middleware

1. E-commerce system

The message queue here uses highly available, persistent message middleware such as ActiveMQ, RabbitMQ, or RocketMQ.

After processing the core (main-path) logic, the application writes a message to the message queue. To know whether the message was sent successfully, enable the message confirmation mode: the application treats the operation as complete only after the message queue acknowledges that it received the message, which guarantees message integrity.

The extended processes (SMS, shipping) subscribe to the queue messages and obtain them in push or pull mode for processing.

While message queues decouple the applications, they also introduce data consistency problems, which can be solved with eventual consistency: the master data is written to the database, and the extended applications perform their subsequent processing based on the message queue, in combination with the database.
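One common way to get this eventual consistency is a local message (outbox) table: the master data and the outgoing message are written in the same database transaction, and a relay later publishes the pending messages to the queue. The sketch below uses sqlite3 and an in-process queue as stand-ins for the real database and broker; table and column names are illustrative.

```python
import json
import queue
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, sku TEXT)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, sent INTEGER DEFAULT 0)")
mq = queue.Queue()  # stand-in for the message queue

def place_order(order_id, sku):
    """Write the master data and the message in one transaction."""
    with db:  # commits both rows atomically, or neither
        db.execute("INSERT INTO orders VALUES (?, ?)", (order_id, sku))
        db.execute("INSERT INTO outbox (payload) VALUES (?)",
                   (json.dumps({"order_id": order_id, "sku": sku}),))

def relay_outbox():
    """Publish unsent messages; downstream consumers catch up eventually."""
    rows = db.execute("SELECT id, payload FROM outbox WHERE sent = 0").fetchall()
    for row_id, payload in rows:
        mq.put(payload)
        with db:
            db.execute("UPDATE outbox SET sent = 1 WHERE id = ?", (row_id,))

place_order("1001", "SKU-42")
relay_outbox()
print("message published to queue:", mq.get())
```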

2. Log collection system

It consists of four parts: the Zookeeper registry, the log collection client, the Kafka cluster, and the Storm cluster (OtherApp).

Zookeeper registry: provides load balancing and address lookup services;

Log collection client: collects application system logs and pushes the data to the Kafka queue;

Kafka cluster: receives, routes, stores, and forwards messages;

Storm cluster: sits at the same level as OtherApp and consumes the data in the queue in pull mode.
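A sketch of the pull-mode consumer side (assuming kafka-python, a broker at localhost:9092, and illustrative topic and group names; a Storm topology would do the equivalent with its Kafka spout):

```python
from kafka import KafkaConsumer  # assumes `pip install kafka-python`

consumer = KafkaConsumer(
    "app-logs",
    bootstrap_servers="localhost:9092",
    group_id="log-processors",     # consumers in one group share the topic's partitions
    auto_offset_reset="earliest",  # start from the beginning if no committed offset
)

for record in consumer:            # the client pulls batches from the brokers
    print(record.topic, record.partition, record.offset, record.value)
```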

Original: http://www.fx114.net/qa-36-149204.aspx


