
The Spring Framework makes it easy to control a local transaction with the @Transactional annotation. In distributed applications, however, this traditional approach runs into problems. When a user places an order, we need to query the user who is placing the order and the goods in the shopping cart, and deduct the goods from inventory. These concerns are split across different modules, so a single order operation often requires remote calls to several of them. Suppose that during execution the inventory deduction succeeds, but an exception occurs while deducting the user's balance. The user module rolls back its deduction, but because the user module and the inventory module are independent of each other, the inventory module cannot perceive that the user module threw an exception and rolled back. The result is that the user gets the goods without spending a dime, which clearly cannot be allowed.

Distributed transaction solutions

Here are some solutions for distributed transactions:

  • 2PC mode
  • TCC transaction compensation
  • Best-effort notification
  • Reliable messages

2PC mode

2PC stands for two-phase commit, also known as XA transactions. XA is a two-phase commit protocol that divides a transaction into two phases:

  1. Phase 1: The transaction coordinator asks each database involved in the transaction to pre-commit the operation, and each database responds whether it can commit
  2. Phase 2: The transaction coordinator asks each database to commit its data; if any database rejected the commit, all databases are asked to roll back the changes they made during the transaction

2PC is easy to understand: a transaction coordinator is established over all the databases involved in the transaction, and the coordinator manages the transactions of those databases through the two phases described above. The advantages of 2PC are simplicity and low implementation cost; the disadvantages are equally obvious: performance is poor, and support in MySQL and some NoSQL databases is insufficient.
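The flow of the two phases can be sketched in a few lines of Java. This is only an illustrative sketch; the Participant interface and coordinator class are hypothetical and not part of any specific XA implementation.

interface Participant {
    boolean prepare();   // phase 1: can this participant commit?
    void commit();       // phase 2: commit the pre-committed work
    void rollback();     // phase 2: undo the pre-committed work
}

class TwoPhaseCommitCoordinator {

    // Returns true if the global transaction committed on every participant.
    boolean execute(java.util.List<Participant> participants) {
        // Phase 1: ask every database to pre-commit and report whether it can commit.
        boolean allPrepared = true;
        for (Participant p : participants) {
            if (!p.prepare()) {
                allPrepared = false;
                break;
            }
        }
        // Phase 2: commit everywhere, or roll back everywhere if any participant refused.
        for (Participant p : participants) {
            if (allPrepared) {
                p.commit();
            } else {
                p.rollback();
            }
        }
        return allPrepared;
    }
}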

TCC transaction compensation

The 2PC pattern follows the ACID principles: atomicity, consistency, isolation, and durability. It is a strongly consistent design, but in many cases strong consistency either cannot be achieved or is too expensive to achieve. Dan Pritchett, an architect at eBay, proposed the BASE theory in an article published by ACM, pointing out that even when strong consistency is out of reach, an application can reach eventual consistency in an appropriate way; eventual consistency does not require the system to keep data up to date in real time. The TCC transaction compensation scheme is one such flexible transaction design: it guarantees eventual consistency of data and is generally implemented in the business layer. Each service that needs to be controlled by the transaction must provide three methods: Try, Confirm, and Cancel. The business service first calls the Try method of each service; if nothing goes wrong, the transaction manager calls the Confirm methods to commit, and if any Try fails, the Cancel methods are called to roll back. The TCC scheme is also easy to understand, but it is intrusive and tightly coupled to the business.
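To make the three methods concrete, here is a minimal sketch of what a TCC-style interface for an inventory service might look like; the names are illustrative and not taken from any specific TCC framework.

public interface InventoryTccService {

    // Try: check and reserve (lock) the requested stock, without deducting it yet.
    boolean tryLockStock(String orderId, long skuId, int count);

    // Confirm: actually deduct the stock that was reserved in the Try phase.
    void confirmDeduct(String orderId);

    // Cancel: release the stock reserved in the Try phase when some Try failed.
    void cancelLock(String orderId);
}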

Best-effort notification

The best-effort notification scheme is also a flexible transaction design. It sends notifications according to a schedule and does not guarantee that every notification is delivered, but it provides a queryable interface so the other side can verify the result. If you have integrated the Alipay payment service in a project, you will know that Alipay uses best-effort notification after payment: at regular intervals, Alipay notifies the developer of the order's payment status, and it only stops notifying once the developer returns SUCCESS.
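A minimal sketch of the notifying side, assuming a hypothetical NotificationClient; the retry intervals are arbitrary, and the SUCCESS convention mirrors the Alipay-style behaviour described above.

import java.util.concurrent.TimeUnit;

// Hypothetical client that delivers one payment-status notification and
// returns the receiver's response body.
interface NotificationClient {
    String notifyPaymentStatus(String orderId, String status) throws Exception;
}

class BestEffortNotifier {

    // Notify again after increasing delays; give up after the last attempt.
    private static final long[] RETRY_DELAYS_SECONDS = {0, 30, 60, 300, 900, 3600};

    private final NotificationClient client;

    BestEffortNotifier(NotificationClient client) {
        this.client = client;
    }

    void notifyUntilAcknowledged(String orderId, String status) throws InterruptedException {
        for (long delay : RETRY_DELAYS_SECONDS) {
            TimeUnit.SECONDS.sleep(delay);
            try {
                // Stop as soon as the receiver acknowledges with SUCCESS.
                if ("SUCCESS".equals(client.notifyPaymentStatus(orderId, status))) {
                    return;
                }
            } catch (Exception ignored) {
                // Best effort: swallow the failure and retry at the next interval.
            }
        }
        // If every attempt failed, the receiver must fall back to the queryable interface.
    }
}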

Reliable messages

The reliable message scheme likewise only guarantees eventual consistency of data, and it requires message middleware to work. Before committing its local transaction, the service sends a message to the message middleware; then, depending on how the local transaction actually executed, it sends the middleware a Commit or a Rollback, and the consumer handles the message according to that state.
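A minimal sketch of the producer side under this scheme. The MessageBroker interface and its half-message API are hypothetical stand-ins for whatever middleware is used; RocketMQ, for example, offers transactional messages along these lines.

// Hypothetical middleware client: a "prepare" (half) message stays invisible
// to consumers until it is explicitly committed.
interface MessageBroker {
    String sendPrepareMessage(String topic, String payload);  // returns a message id
    void commit(String messageId);    // make the message visible to consumers
    void rollback(String messageId);  // discard the message
}

class ReliableMessageProducer {

    private final MessageBroker broker;

    ReliableMessageProducer(MessageBroker broker) {
        this.broker = broker;
    }

    void createOrder(String orderPayload) {
        // 1. Send the prepare message before committing the local transaction.
        String messageId = broker.sendPrepareMessage("order-created", orderPayload);
        try {
            // 2. Execute the local transaction (e.g. insert the order row).
            executeLocalTransaction(orderPayload);
            // 3. Local transaction succeeded: commit, so consumers can see the message.
            broker.commit(messageId);
        } catch (Exception e) {
            // 3'. Local transaction failed: roll the message back (discard it).
            broker.rollback(messageId);
            throw e;
        }
    }

    private void executeLocalTransaction(String orderPayload) {
        // placeholder for the real database work
    }
}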

Achieving eventual consistency of data with RabbitMQ

Imagine a scenario: after placing an order, the user has 30 minutes to pay. Before the user pays, the item's inventory is not actually deducted, only locked. If the user pays within the allotted time, the inventory must then be deducted; if the user does not pay in time, the locked inventory must be unlocked again. This involves a transaction spanning two modules. When the user places an order, the order module saves an inventory work order for the order, which records how many units of each commodity were locked, and remotely calls the inventory module to lock the inventory. It then waits for the user to pay; if the user never pays, the inventory is unlocked.

So how do we know which orders have timed out without payment and need to be unlocked? We could write a timed task that scans the orders at regular intervals, checks whether each one has timed out, and unlocks the inventory if so (a rough sketch follows). However, this approach is not ideal: it is inefficient, and the scan interval is hard to choose. We can use message middleware to solve the problem instead. After an order is generated, the order module sends a message to RabbitMQ, which holds the message for 30 minutes. When the message expires, RabbitMQ delivers it to the inventory module for consumption; on receiving it, the inventory module knows the order has expired and unlocks the corresponding inventory. The next problem, then, is how to keep a message in RabbitMQ for 30 minutes and only deliver it to the inventory module once it expires.
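For comparison, the polling approach might look roughly like this, assuming Spring's @Scheduled and hypothetical OrderService/StockService interfaces (and @EnableScheduling on some configuration class):

import java.util.List;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical services, used only for illustration.
interface OrderService {
    List<String> findUnpaidOrdersOlderThanMinutes(int minutes);
}

interface StockService {
    void unlockStockForOrder(String orderId);
}

// Scan for timed-out, unpaid orders every minute and unlock their stock.
// It works, but most scans find nothing and the interval is hard to tune.
@Component
public class OrderTimeoutScanner {

    private final OrderService orderService;
    private final StockService stockService;

    public OrderTimeoutScanner(OrderService orderService, StockService stockService) {
        this.orderService = orderService;
        this.stockService = stockService;
    }

    @Scheduled(fixedRate = 60_000)  // run once a minute
    public void unlockExpiredOrders() {
        for (String orderId : orderService.findUnpaidOrdersOlderThanMinutes(30)) {
            stockService.unlockStockForOrder(orderId);
        }
    }
}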

Delay queue

In RabbitMQ, we can implement a delay queue where messages are not consumed immediately but wait for a specified amount of time. To implement this, we need to know two things:

  • Dead letter
  • Dead-letter exchange

Dead letter

In RabbitMQ, you can set a TTL (time to live) for a message; once the TTL is exceeded, the message becomes a dead letter. A message is forwarded to a dead-letter exchange if it meets any of the following conditions:

  1. The message is rejected by the consumer with the requeue parameter set to false, so it is not put back into the queue after being rejected
  2. The message's TTL expires (a per-message TTL can also be set when publishing, as sketched after this list)
  3. The queue has reached its maximum length, so the earliest messages are discarded or dead-lettered
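Besides the queue-level x-message-ttl used later in this article, a TTL can also be set on an individual message when it is published. A minimal sketch using Spring AMQP; the exchange and routing-key names are illustrative.

import org.springframework.amqp.core.Message;
import org.springframework.amqp.core.MessageBuilder;
import org.springframework.amqp.core.MessageProperties;
import org.springframework.amqp.rabbit.core.RabbitTemplate;

public class PerMessageTtlExample {

    // Publish a message whose own TTL is 10 seconds; when it expires in a queue
    // that has a dead-letter exchange configured, it is forwarded there.
    public void send(RabbitTemplate rabbitTemplate) {
        MessageProperties properties = new MessageProperties();
        properties.setExpiration("10000");  // TTL in milliseconds, as a string
        Message message = MessageBuilder.withBody("hello".getBytes())
                .andProperties(properties)
                .build();
        rabbitTemplate.send("some.exchange", "some.routing.key", message);
    }
}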

Dead-letter exchange

In the discussion of dead letters we kept emphasizing one term: the dead-letter exchange. It is in fact just an ordinary exchange; the difference is that when a queue is configured with a dead-letter exchange, any message that expires in that queue is automatically forwarded to the dead-letter exchange. With dead letters and a dead-letter exchange we can implement a delay queue. When a producer publishes a message, an exchange puts it into a special queue in which every message has a lifetime of 30 minutes. When a message expires and becomes a dead letter, it is forwarded to the dead-letter exchange, which puts it into another queue. Since only dead letters can enter that second queue, we are guaranteed that 30 minutes have passed by the time any message arrives there. To achieve this, we need to set some property values on the special queue (a declaration sketch follows the list):

  • x-dead-letter-exchange: xxx (sets the dead-letter exchange)
  • x-dead-letter-routing-key: xxx (sets the dead-letter routing key)
  • x-message-ttl: 1800000 (sets the time to live of messages to 30 minutes)
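A minimal sketch of declaring such a queue with these arguments via Spring AMQP's Queue class; the full demo configuration later in this article is structured the same way but uses a 10-second TTL so the effect is quicker to observe.

import java.util.HashMap;
import java.util.Map;
import org.springframework.amqp.core.Queue;

public class DelayQueueDeclaration {

    // A queue whose messages live for 30 minutes and are then forwarded to the
    // configured dead-letter exchange with the configured routing key.
    public Queue thirtyMinuteDelayQueue() {
        Map<String, Object> arguments = new HashMap<>();
        arguments.put("x-dead-letter-exchange", "order.delay.exchange");
        arguments.put("x-dead-letter-routing-key", "order.release.order");
        arguments.put("x-message-ttl", 30 * 60 * 1000);  // 30 minutes in milliseconds
        // constructor arguments: name, durable, exclusive, autoDelete, arguments
        return new Queue("order.delay.queue", true, false, false, arguments);
    }
}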

Mapping this onto a concrete business scenario, such as automatically unlocking inventory when an order expires, the execution process is as follows:

  1. The Publisher sends a message to the exchange order.delay.exchange with the routing key order.delay
  2. The exchange puts the message into the queue order.delay.queue, which is bound to it with the key order.delay
  3. Messages in the queue order.delay.queue have a lifetime of 30 minutes; when a message expires, it is forwarded to the exchange order.exchange with the routing key order
  4. The exchange order.exchange puts the message into the queue order.queue, whose binding key is order
  5. The Consumer listens on the queue order.queue, so every message it consumes is guaranteed to have expired

But the process can be simplified. The only difference is that one exchange is removed: the Publisher still sends the message through order.delay.exchange into order.delay.queue, and when a message expires we let the queue deliver it back to that same exchange, order.delay.exchange, with a different routing key, which then routes it into the queue the consumer listens on. This saves the resources of one exchange.

Code implementation

Concepts alone are hard to grasp without actually implementing them, so let's use an example to get a feel for how the delay queue behaves.

Create a Spring Boot application and use code to create the exchange, the message queues, and their bindings:

import java.util.HashMap;
import java.util.Map;

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.Exchange;
import org.springframework.amqp.core.Queue;
import org.springframework.amqp.core.TopicExchange;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MyRabbitMQConfig {

    // The delay queue: messages live for 10 seconds (for the demo), then are
    // dead-lettered to order.delay.exchange with the routing key order.release.order.
    @Bean
    public Queue orderDelayQueue() {
        Map<String, Object> arguments = new HashMap<>();
        arguments.put("x-dead-letter-exchange", "order.delay.exchange");
        arguments.put("x-dead-letter-routing-key", "order.release.order");
        arguments.put("x-message-ttl", 1000 * 10);
        return new Queue("order.delay.queue", true, false, false, arguments);
    }

    // The queue the consumer listens on; only expired (dead-letter) messages reach it.
    @Bean
    public Queue orderReleaseOrderQueue() {
        return new Queue("order.release.order.queue", true, false, false);
    }

    // A single topic exchange acts both as the entry exchange and as the dead-letter exchange.
    @Bean
    public Exchange orderDelayExchange() {
        return new TopicExchange("order.delay.exchange", true, false);
    }

    // Routing key order.create.order -> order.delay.queue
    @Bean
    public Binding orderCreateOrderBinding() {
        return new Binding("order.delay.queue", Binding.DestinationType.QUEUE,
                "order.delay.exchange", "order.create.order", null);
    }

    // Routing key order.release.order -> order.release.order.queue
    @Bean
    public Binding orderReleaseOrderBinding() {
        return new Binding("order.release.order.queue", Binding.DestinationType.QUEUE,
                "order.delay.exchange", "order.release.order", null);
    }
}

Next, write a controller method that sends a message:

@RestController
public class TestController {

    @Autowired
    private RabbitTemplate rabbitTemplate;

    @GetMapping("/test")
    public String test() {
        // Routing key = queue name on the default exchange, so the message
        // lands directly in order.delay.queue.
        rabbitTemplate.convertAndSend("order.delay.queue", "message");
        return "test";
    }
}

When the first message is sent to the queue order.delay.queue, RabbitMQ creates the queues and the exchange declared above. Next we write a listener method that listens on the queue order.release.order.queue:

// Place this method in any Spring-managed bean; Message is org.springframework.amqp.core.Message.
@RabbitListener(queues = "order.release.order.queue")
public void listener(Message message) {
    System.out.println("Received message: " + message);
}

Now visit http://localhost:8080/test. A message is sent to the delay queue; after 10 seconds it expires and is forwarded to the queue order.release.order.queue, which is exactly the queue we are listening on. So every message we receive is one that expired 10 seconds earlier:

Received message: (Body:'message'......)
Received message: (Body:'message'......)
Received message: (Body:'message'......)

In this way, we achieve eventual consistency of the data.