1. Why message queues

  • Asynchronous processing

Processing a flash-sale ("seckill") request involves a number of steps, such as:

Risk control;

Inventory lock;

Order generation;

SMS notification;

Statistics update.

Without any optimization, the normal flow is that the app sends the request to the gateway, which invokes the five steps in turn and then returns the result to the app.

Risk control and inventory lock are the only two steps that determine whether the flash sale succeeds. Once a user passes risk control and the server completes the inventory lock, the result can be returned to the user. The subsequent steps, such as order generation, SMS notification and statistics update, do not have to be handled inside the flash-sale request.

Therefore, once the server has completed those two steps and determined the result, it can return the response to the user immediately, put the request data into the message queue, and let the remaining steps be carried out asynchronously by the queue's consumers.

As you can see, the message queue here is used to implement asynchronous processing. The benefits are that results are returned faster, waiting is reduced, and the steps naturally run concurrently, improving overall system performance.
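Below is a minimal sketch of this split in Java, using an in-process BlockingQueue as a stand-in for a real message queue; all class, queue, and method names are illustrative assumptions, not from the original text.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: answer the flash-sale request after the two decisive steps,
// and hand the slow follow-up steps to a queue consumer.
public class AsyncSeckillHandler {

    private final BlockingQueue<String> followUpQueue = new LinkedBlockingQueue<>();

    // Synchronous path: only risk control and inventory lock decide the result.
    public String handleRequest(String orderId) {
        if (!passesRiskControl(orderId) || !lockInventory(orderId)) {
            return "FAILURE";
        }
        followUpQueue.offer(orderId); // enqueue for asynchronous follow-up
        return "SUCCESS";             // respond to the app immediately
    }

    // Asynchronous consumer: order generation, SMS notification, statistics.
    public void startFollowUpConsumer() {
        Thread consumer = new Thread(() -> {
            try {
                while (true) {
                    String orderId = followUpQueue.take();
                    generateOrder(orderId);
                    sendSmsNotification(orderId);
                    updateStatistics(orderId);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt(); // shut down cleanly
            }
        });
        consumer.setDaemon(true);
        consumer.start();
    }

    // Stubs standing in for the real subsystems.
    private boolean passesRiskControl(String id) { return true; }
    private boolean lockInventory(String id)    { return true; }
    private void generateOrder(String id)       { }
    private void sendSmsNotification(String id) { }
    private void updateStatistics(String id)    { }
}
```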

  • Flow control

To avoid overwhelming the flash-sale system with too many requests, the design uses a message queue to isolate the gateway from the back-end services, controlling traffic and protecting the back end.

After the message queue is introduced, the entire flash-sale flow becomes:

  1. After receiving a request, the gateway puts it into the request message queue.
  2. The back-end service takes the request from the queue, completes the subsequent flash-sale steps, and returns the result.

When a large number of flash-sale requests arrive at the gateway within a short period, they do not hit the back-end service directly; instead they accumulate in the message queue, and the back-end service consumes requests from the queue at its maximum processing capacity.

Requests that time out can simply be discarded; the app treats a request that receives no response as a failed flash sale. Operations staff can also scale the flash-sale service horizontally at any time by adding instances, without changing any other part of the system.
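A minimal sketch of this queue-based flow control follows, again using an in-process BlockingQueue as a stand-in for a real message queue; the queue capacity and the 100 ms timeout are illustrative assumptions.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: the gateway enqueues requests; back-end workers consume at
// their own pace and silently drop requests that have already expired.
public class QueueFlowControl {

    record Request(String id, long enqueuedAtMillis) {}

    private static final long TIMEOUT_MS = 100; // assumed app-side timeout

    // Bounded queue: when it is full, offer() fails and the gateway rejects.
    private final BlockingQueue<Request> requestQueue = new LinkedBlockingQueue<>(10_000);

    // Gateway side: enqueue and return immediately.
    public boolean accept(String requestId) {
        return requestQueue.offer(new Request(requestId, System.currentTimeMillis()));
    }

    // Back-end side: each worker pulls only as fast as it can process.
    public void startWorker() {
        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    Request req = requestQueue.take();
                    if (System.currentTimeMillis() - req.enqueuedAtMillis() > TIMEOUT_MS) {
                        continue; // expired: the app already treated it as a failure
                    }
                    process(req);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.setDaemon(true);
        worker.start();
    }

    private void process(Request req) { /* risk control, inventory lock, ... */ }
}
```

Scaling out is then just a matter of calling startWorker() on more instances; nothing upstream changes.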

The advantage of this design is that it automatically adapts the flow to the downstream processing capacity, achieving the effect of "shaving peaks and filling valleys." But this comes at a cost: the call chain grows longer, increasing overall response time, and both upstream and downstream systems must switch from synchronous calls to asynchronous messages, increasing the complexity of the system.

Better still, if we can predict the processing capacity of the flash-sale service in advance, we can implement a token bucket with a message queue, which makes traffic control even simpler.

The principle of token-bucket traffic control is that only a fixed number of tokens is issued into the bucket per unit of time, and the service must take a token from the bucket before processing a request; if the bucket is empty, the request is rejected. This guarantees that the number of requests processed per unit of time never exceeds the number of tokens issued, which achieves traffic control.

The implementation is also very simple: there is no need to change the original call chain; the gateway just adds token-acquisition logic when handling an app request.

A token bucket can be implemented simply by adding a "token generator" to a fixed-capacity message queue: based on the estimated processing capacity, the token generator produces tokens at a uniform rate and puts them into the token queue (when the queue is full, the token is discarded). When a request arrives, the gateway takes a token from the token queue; only after obtaining one does it invoke the back-end flash-sale service.
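A minimal sketch of this token-bucket variant, with a bounded in-process queue standing in for the fixed-capacity message queue; the rate and capacity parameters are illustrative assumptions.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch: a generator drops tokens into a fixed-capacity queue at a steady
// rate; the gateway must take a token before calling the back end.
public class TokenBucket {

    private static final Object TOKEN = new Object();
    private final BlockingQueue<Object> tokens;

    public TokenBucket(int capacity, int tokensPerSecond) {
        tokens = new ArrayBlockingQueue<>(capacity);
        ScheduledExecutorService generator = Executors.newSingleThreadScheduledExecutor();
        // Produce tokens uniformly; offer() silently discards when the bucket is full.
        generator.scheduleAtFixedRate(() -> tokens.offer(TOKEN),
                0, 1_000_000 / tokensPerSecond, TimeUnit.MICROSECONDS);
    }

    // Gateway side: take a token if one is available, otherwise reject.
    public boolean tryAcquire() {
        return tokens.poll() != null;
    }
}
```

Under this scheme the gateway simply calls tryAcquire() for each incoming request and rejects the request immediately when it returns false; the original call chain stays intact.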

  • Decoupling services

Another function of message queues is to decouple the applications in a system. Another e-commerce example illustrates the role and necessity of decoupling.

We know that orders are the core data in an e-commerce system. When a new order is created:

  1. The payment system needs to initiate the payment process;
  2. The risk control system needs to check the legitimacy of the order;
  3. The customer service system needs to send an SMS message to notify the user;
  4. The business analysis system needs to update its statistics.

These downstream systems all need the order data in real time. As the business grows, the number of downstream systems keeps increasing, their requirements keep changing, and each system may need only a subset of the order data. The team responsible for the order service has to spend a great deal of effort coping with these constant changes, repeatedly modifying and debugging the interfaces between the order system and the downstream systems. Any downstream interface change forces the order module to be redeployed, which is almost unacceptable for the core service of an e-commerce company.

All major e-commerce companies have chosen message queues to solve this kind of tight coupling. With a message queue introduced, the order service publishes a message to a topic, Order, whenever an order changes, and all downstream systems subscribe to that topic, so each of them gets the complete order data in real time. No matter how downstream systems are added, removed, or change their requirements, the order service needs no modification at all, which decouples it from the downstream services.
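A minimal in-process sketch of this topic-based decoupling, using per-subscriber queues as a stand-in for a real broker's topic; the class and method names are illustrative assumptions.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: the order service publishes to the "Order" topic; each downstream
// system has its own queue and consumes the full order data independently.
public class OrderTopic {

    private final List<BlockingQueue<String>> subscribers = new CopyOnWriteArrayList<>();

    // Each downstream system (payment, risk control, customer service,
    // business analysis) subscribes once and receives every order message.
    public BlockingQueue<String> subscribe() {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        subscribers.add(queue);
        return queue;
    }

    // The order service only ever calls publish(); it knows nothing about
    // who consumes, so adding or removing subscribers needs no change here.
    public void publish(String orderJson) {
        for (BlockingQueue<String> queue : subscribers) {
            queue.offer(orderJson);
        }
    }
}
```

A new downstream system just calls subscribe() once and takes messages from its own queue; onboarding it touches only the new system, never the order service.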

2. Some problems and limitations of using message queues

  • The introduction of message queues brings latency issues;
  • System complexity increases;
  • Data inconsistencies may arise.