
With the rapid development of Internet technology, data volumes keep growing and business logic keeps getting more complex; scenarios involving highly concurrent access and massive data processing are increasingly common. Achieving high availability, easy scaling, and extensibility at low cost has become ever more important.

To solve this series of problems, system architectures keep evolving. Traditional centralized systems can no longer meet these requirements, and distributed systems are used in more and more scenarios.

Distributed systems consist of independent servers loosely coupled through a network. Each server is an independent host, and they are connected through an internal network. Distributed systems have the following characteristics:

  • Scalability: system performance and throughput improve through horizontal scaling.
  • High reliability: high fault tolerance; the system can keep providing service even if one or several nodes fail.
  • High concurrency: each machine processes and computes independently, in parallel.
  • Cheap and efficient: multiple inexpensive machines replace a single high-performance machine.

However, in distributed systems, the complexity of the environment and the uncertainty of the network lead to clock inconsistencies, "Byzantine failures," and more. Problems such as machine downtime and message loss, which also exist in centralized systems, become more complex in a distributed environment.

Given these characteristics of distributed systems, two kinds of typical problems have gradually emerged that need particular attention in a distributed environment:

  • The mutual exclusion problem.
  • The idempotency problem.

Today we will analyze these two problems.

– Mutual exclusion problem –

Let’s start with two common examples:

Example 1: A service records a key piece of data X, whose current value is 100. Request A needs to add 200 to X; meanwhile, request B needs to subtract 100 from X.

Ideally, A reads X=100, adds 200, and writes back X=300. Request B then reads X=300, subtracts 100, and writes back X=200.

In reality, however, if nothing is done, both A and B may read X=100; B may read X before A writes; B may write before A does.
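To see how easily this happens, here is a minimal, deliberately unsynchronized Java sketch of Example 1 (class and variable names are ours):

public class RaceDemo {
   static int x = 100; // shared data X from Example 1

   public static void main(String[] args) throws InterruptedException {
       // Request A: read X, add 200, write back.
       Thread a = new Thread(() -> { int v = x; x = v + 200; });
       // Request B: read X, subtract 100, write back.
       Thread b = new Thread(() -> { int v = x; x = v - 100; });
       a.start(); b.start();
       a.join(); b.join();
       // Ideally 200; but if both threads read x == 100 before either
       // writes, the last write wins and x ends up 300 or 0.
       System.out.println("x = " + x);
   }
}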

Example 2: A service maintains a group of tasks. Request A randomly obtains a task from the group; request B also requests a random task from the group.

Ideally, A picks a task from the group and the group deletes it; B then picks another task from the remaining ones, which the group likewise deletes.

Similarly, in reality, if nothing is done, A and B may pick the same task.

In both examples, the operations are mutually exclusive. In plain terms, mutual exclusion is about preempting shared resources: if different requests read and modify the same resource (or set of resources), and neither ordered execution nor the atomicity of the operations can be guaranteed, unexpected results are likely. The mutual exclusion problem can therefore be understood as the problem of guaranteeing ordering and atomicity.

In traditional database-backed architectures, contention over data is usually handled by database transactions (ACID). In a distributed environment, where performance and consistency are both sensitive, distributed locks have become a common and efficient solution.

In fact, the mutual exclusion problem is not unique to distributed environments; it is well solved in the traditional multi-threaded and multi-process settings. Before studying distributed locks, we therefore first analyze the solutions in these two settings, to provide some ideas for the distributed case.

– Multi-threaded solutions and principles –

Thinking in Java writes:

Almost all concurrency patterns solve thread conflicts by serializing access to shared resources.

In a multithreaded environment, conflicts occur because threads share storage. The most common way to resolve them is to protect the resource, or the operations on it, with a mutex.

The Java JDK provides two kinds of mutex: Lock (e.g. ReentrantLock) and synchronized. The resource different threads contend for is usually an ordinary member variable of a class, so locking the shared variable and the operations on it with ReentrantLock or synchronized basically solves the resource preemption problem.
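As a minimal sketch (class and method names are ours), both styles serialize access to the shared variable; note that all accesses to a given resource must go through the same lock:

import java.util.concurrent.locks.ReentrantLock;

public class GuardedCounter {
   private final ReentrantLock lock = new ReentrantLock();
   private int x = 100;

   // Explicit locking: every access to x goes through the same lock.
   public void add(int delta) {
       lock.lock();
       try {
           x += delta; // the read-modify-write is now mutually exclusive
       } finally {
           lock.unlock(); // release in finally so an exception cannot leak the lock
       }
   }
}

// The implicit alternative: synchronized serializes on the object's monitor.
class SynchronizedCounter {
   private int x = 100;
   public synchronized void add(int delta) { x += delta; }
   public synchronized int get() { return x; }
}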

Let’s talk a little bit about how they work.

– The principle –

ReentrantLock

ReentrantLock is implemented mainly with CAS plus a CLH queue. It supports both fair and unfair locks, whose implementations are similar.

  • CAS: Compare and Swap. CAS takes three operands: the memory value V, the expected value A, and the new value B. V is set to B if and only if V equals the expected value A; otherwise nothing is done. The operation is atomic and is widely used in Java's low-level implementation. In Java, CAS is implemented mainly by the sun.misc.Unsafe class through JNI calls to low-level CPU instructions; the JDK's atomic classes wrap the same primitive, as sketched after this list.
  • CLH queue: a doubly linked, non-circular list with a dummy head node, on which waiting threads are queued.
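A minimal sketch of CAS semantics via AtomicInteger (which delegates to Unsafe underneath):

import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
   public static void main(String[] args) {
       AtomicInteger v = new AtomicInteger(100);   // memory value V = 100

       // compareAndSet(expected A, new B): writes B only if V still equals A.
       boolean first  = v.compareAndSet(100, 300); // true: V was 100, now 300
       boolean second = v.compareAndSet(100, 0);   // false: V is no longer 100

       System.out.println(first + " " + second + " " + v.get()); // true false 300
   }
}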

The basic flow of ReentrantLock can be summarized as follows: first try to acquire the lock via CAS. If a thread already holds the lock, the current thread is added to the CLH queue and suspended. When the lock is released, the thread at the head of the CLH queue is woken up and tries CAS again to acquire the lock. At this point, if:

  • Unfair lock: if another thread tries to acquire the lock at the same moment, that thread may get it first, jumping the queue.
  • Fair lock: if another thread tries to acquire the lock at the same moment and finds it is not at the head of the queue, it joins the tail, and the thread at the head of the queue acquires the lock.

Here are two fragments from the JDK source:

final boolean nonfairTryAcquire(int acquires) {
   final Thread current = Thread.currentThread();
   int c = getState();
   if (c == 0) {
       if (compareAndSetState(0, acquires)) {
           setExclusiveOwnerThread(current);
           return true;
       }
   }
   else if (current == getExclusiveOwnerThread()) {
       int nextc = c + acquires;
       if (nextc < 0) // overflow
           throw new Error("Maximum lock count exceeded");
       setState(nextc);
       return true;
   }
   return false;
}

The method above is called first when a thread tries to acquire the lock. If state is 0, no one holds the lock; the thread then attempts a CAS set, and if that succeeds it owns the lock. If state is not 0, it checks whether the current thread already holds the lock; if so, state is simply incremented. Since only the owning thread can reach that branch, no CAS is needed there. This is how lock reentrancy works.

final boolean acquireQueued(final Node node, int arg) {
   boolean failed = true;
   try {
       boolean interrupted = false;
       for (;;) {
           final Node p = node.predecessor();
           if (p == head && tryAcquire(arg)) {
               setHead(node);
               p.next = null; // help GC
               failed = false;
               return interrupted;
           }
           if (shouldParkAfterFailedAcquire(p, node) &&
               parkAndCheckInterrupt())
               interrupted = true;
       }
   } finally {
       if (failed)
           cancelAcquire(node);
   }
}
private final boolean parkAndCheckInterrupt() {
   LockSupport.park(this);
   return Thread.interrupted();
}

In this method, after a node has been enqueued in the CLH queue and the initial acquisition attempt has failed, it tries again: if its predecessor is the head node, it retries tryAcquire; otherwise it decides, based on its predecessor's status, whether it should block. If blocking is required, the thread parks itself via LockSupport.park.

– synchronized –

There are two built-in synchronized forms in the Java language: synchronized statements and synchronized methods. Both are illustrated in the sketch after this list.

  • Synchronized statement: when the source code is compiled to bytecode, monitorenter and monitorexit instructions are inserted at the entry and exit of the synchronized block, respectively;
  • Synchronized method: the ACC_SYNCHRONIZED flag in the method's access_flags field in the Class file's method table is set to 1; no explicit monitor instructions are emitted, and the JVM enters and exits the monitor implicitly as part of method invocation and return.
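A small sketch of both forms (our own example); compiling it and running javap -c shows monitorenter/monitorexit only for the block form:

public class SyncForms {
   private final Object mutex = new Object();
   private int x;

   // Synchronized statement: the compiler emits monitorenter before the
   // block body and monitorexit on both the normal and exceptional exits.
   public void addWithBlock(int delta) {
       synchronized (mutex) {
           x += delta;
       }
   }

   // Synchronized method: no monitor instructions in the bytecode; the
   // ACC_SYNCHRONIZED flag makes the JVM lock this object's monitor on
   // entry and release it on return.
   public synchronized void addWithMethod(int delta) {
       x += delta;
   }
}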

The monitorenter and monitorexit bytecode instructions are described in detail in the Java Virtual Machine Specification:

docs.oracle.com/javase/spec…

monitorenter

The objectref must be of type reference.

Each object is associated with a monitor. A monitor is locked if and only if it has an owner. The thread that executes monitorenter attempts to gain ownership of the monitor associated with objectref, as follows:

  • If the entry count of the monitor associated with objectref is zero, the thread enters the monitor and sets its entry count to one. The thread is then the owner of the monitor.
  • If the thread already owns the monitor associated with objectref, it reenters the monitor, incrementing its entry count.
  • If another thread already owns the monitor associated with objectref, the thread blocks until the monitor’s entry count is zero, then tries again to gain ownership.

monitorexit

The objectref must be of type reference.

The thread that executes monitorexit must be the owner of the monitor associated with the instance referenced by objectref.

The thread decrements the entry count of the monitor associated with objectref. If as a result the value of the entry count is zero, the thread exits the monitor and is no longer its owner. Other threads that are blocking to enter the monitor are allowed to attempt to do so.

In JDK 1.6 and earlier, monitorenter and monitorexit were implemented on top of the operating system's mutex lock. Because a mutex lock requires suspending the current thread and switching from user mode to kernel mode, the switch is very expensive. In most cases, however, synchronized methods run without lock contention (effectively single-threaded); invoking the OS mutex every time would severely hurt performance. As a result, JDK 1.6 introduced a series of optimizations to the lock implementation that largely reduce or avoid the use of mutex locks.

– Multi-process solution –

In a multiprogramming system, many processes share various resources, but many resources can be used by only one process at a time; these are critical resources. Critical resources in the multi-process case fall roughly into two categories: physical resources, such as printers, and shared data on disk or in memory, such as shared memory. The code within a process that accesses a critical resource in a mutually exclusive way is called a critical section.

Locks at the JVM level are ineffective for mutually exclusive access to critical resources across processes. In the multi-process case, preemption of critical resources is mainly solved with operating-system-level inter-process communication primitives. A common approach is to use semaphores.

Under the POSIX standard there are two kinds of semaphores: named and unnamed. Unnamed semaphores are usually placed in shared memory, while named semaphores are associated with a file name. A semaphore is an integer variable and comes in two flavors, counting semaphores and binary semaphores. The main operations on a semaphore are the P operation (wait) and the V operation (signal).

  • P operation: check the semaphore's value first. If it is greater than zero, decrement it by 1; the process thereby obtains access to the shared resource and continues. If it is less than or equal to zero, the process blocks and enters the waiting queue.
  • V operation: increment the semaphore's value by 1. If any process is blocked waiting on the semaphore, one of them is woken up.

For example, suppose the semaphore is initialized to 1. Process A performs P before entering the critical section; since the value is greater than zero, the semaphore drops to zero and A enters the critical section. If another process B now also wants to enter, its P operation finds the semaphore equal to 0, and B blocks. When A exits the critical section it performs V, incrementing the semaphore by one and waking the blocked process B, which can then enter the critical section.
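Java's java.util.concurrent.Semaphore offers the same P/V pair in-process, as acquire()/release(); a binary-semaphore sketch of the A/B scenario above (names are ours):

import java.util.concurrent.Semaphore;

public class BinarySemaphoreDemo {
   // One permit: at most one thread inside the critical section at a time.
   static final Semaphore mutex = new Semaphore(1);

   static void criticalSection(String name) throws InterruptedException {
       mutex.acquire(); // P: permit > 0 -> take it and proceed; else block
       try {
           System.out.println(name + " is in the critical section");
       } finally {
           mutex.release(); // V: return the permit; wakes one blocked waiter
       }
   }

   public static void main(String[] args) {
       Runnable task = () -> {
           try { criticalSection(Thread.currentThread().getName()); }
           catch (InterruptedException e) { Thread.currentThread().interrupt(); }
       };
       new Thread(task, "A").start();
       new Thread(task, "B").start();
   }
}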

This is in fact very similar to locking and unlocking in a multithreaded environment. Using semaphores to handle critical-resource preemption can therefore be understood simply as locking the critical section.

Based on the above, we can summarize the basic way to solve the mutual exclusion, i.e. resource preemption, problem:

Lock and unlock around the operations on shared resources (on entering and exiting the critical section), so that different threads or processes operate on the resources in a mutually exclusive, ordered way.

Locking can be explicit, as with ReentrantLock or a semaphore, or implicit, as with synchronized. So in a distributed environment, to prevent resource preemption across different JVMs and different hosts, we likewise just need to lock and unlock around the critical section.

In multi-threaded and multi-process settings, mature lock implementations already exist and can be used directly. In a distributed environment, however, we need to implement the distributed lock ourselves.

– Distributed lock –

First, let’s look at the basic conditions for distributed locks.

Conditions for a distributed lock

The basic conditions

Looking back at locks in multi-threaded and multi-process environments, we can see that their implementations have a lot in common; all of them satisfy some basic conditions:

  1. There must be a space to store the lock, and that space must be accessible.
  2. The lock must be uniquely identified.
  3. The lock must have at least two states.

Analyze these three conditions carefully:

The storage space

A lock is an abstract concept; realizing one depends on a space in which the lock can be stored. In the multithreaded case this is memory; in the multi-process case it is memory or disk. More importantly, the space must be accessible: in multithreading, different threads can all reach member variables on the heap; in multi-process, different processes can reach data in shared memory or files stored on disk. In a distributed environment, however, different hosts cannot in general access each other's memory or disks, so an accessible external space is needed as the storage space.

The most common external storage space is the database, and database-based distributed locks exist (row locks, version-number optimistic locks), such as those used in the Quartz cluster architecture. There are also caches and stores such as Redis, Tair, Memcached, and MongoDB, the dedicated distributed coordination service ZooKeeper, and even simply another host. As long as it can store data and the lock is accessible to multiple hosts, it can serve as the storage space for a distributed lock.

A unique identifier

Different shared resources must be protected by different locks, so each lock must be uniquely identified. In a multithreaded environment the lock can be an object, and the reference to that object is its unique identifier. In a multi-process environment, an unnamed semaphore is likewise identified by its reference in shared memory. But outside of memory, once the reference to the lock is lost, how is it uniquely identified? The named semaphore mentioned above is identified by a file name on disk. So in a distributed environment, a lock can be uniquely identified simply by giving it a globally unique name.

At least two states

To lock and unlock the critical section, the lock needs to store at least two different states. For example, in ReentrantLock a state of 0 means no thread holds the lock, while greater than 0 means some thread does; a semaphore greater than 0 means the critical section may be entered, less than or equal to 0 means the caller must block. So in a distributed environment, anything that can represent two or more lock states, locked vs. unlocked, exists vs. does not exist, and so on, will do.

With these three conditions, you can basically implement a simple distributed lock. Take the database as an example: create a lock table whose fields are the lock ID (the unique identifier) and the lock status (0 means unlocked, 1 means locked).

The pseudocode is:

lock = mysql.get(id);
while (lock.status == 1) {      // spin until the lock looks free
   sleep(100);
   lock = mysql.get(id);        // re-read the current state
}
mysql.update(lock.status = 1);  // "lock": mark the row as locked
doSomething();                  // critical section
mysql.update(lock.status = 0);  // "unlock": mark the row as free

The problem

The approach above implements a crude distributed lock, but what problems does such an implementation have?

**Problem 1: The atomicity of the lock-state check cannot be guaranteed**

Reading the lock's status and judging whether it is locked are two separate steps. If the atomicity of these two steps is not guaranteed, more than one request may acquire the lock, which is obviously unacceptable. We therefore need to make the lock-state check-and-set atomic, as sketched below.
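One standard fix is to collapse the read and the state change into a single conditional UPDATE and inspect the affected row count; the database then executes the test-and-set atomically. A JDBC sketch (table and column names follow the example above and are ours):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class DbLock {
   // Returns true iff this caller flipped the row from unlocked to locked.
   static boolean tryAcquire(Connection conn, String lockId) throws SQLException {
       String sql = "UPDATE lock_table SET status = 1 WHERE id = ? AND status = 0";
       try (PreparedStatement ps = conn.prepareStatement(sql)) {
           ps.setString(1, lockId);
           return ps.executeUpdate() == 1; // exactly one row changed -> acquired
       }
   }

   static void release(Connection conn, String lockId) throws SQLException {
       String sql = "UPDATE lock_table SET status = 0 WHERE id = ?";
       try (PreparedStatement ps = conn.prepareStatement(sql)) {
           ps.setString(1, lockId);
           ps.executeUpdate();
       }
   }
}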

**Problem 2: If the network disconnects or the host goes down, the lock state cannot be cleaned up**

Suppose the host holding the lock loses its network connection or goes down. If nothing is done, the lock stays locked, and every subsequent request will fail to acquire it. We therefore need the lock to be released promptly when the host holding it goes down or is disconnected.

**Problem 3: There is no guarantee that you release the lock you yourself locked**

After solving Problem 2, imagine that host A, which holds the lock, hits network jitter inside the critical section and gets disconnected, and the distributed lock is promptly released. Another host B then takes the lock. Later, A's network recovers, and A unlocks when it exits the critical section; since it is the same lock, A ends up unlocking B's lock. If a third host now tries to grab the lock, it will succeed. We therefore need to make sure each holder only releases the lock it itself acquired.

Advanced conditions

A distributed lock implementation that also solves the above three problems can be regarded as relatively complete. In real systems, however, there are often more advanced requirements for distributed locks:

  1. Reentrancy: a lock is reentrant within a thread if, after an outer function acquires it, inner calls can acquire it again. ReentrantLock and synchronized are both reentrant. Extended to the distributed setting, this still generally means reentrancy within a thread, and most distributed environments require distributed locks to be reentrant.
  2. Herd effect: in distributed locking, the herd effect means that when multiple requests are waiting for a lock, every waiter is woken at once as soon as the holder releases it, and all of them try to grab the lock. This causes considerable overhead, so distributed lock implementations should try to avoid it.
  3. Fair and unfair locks: different needs may call for different locks. An unfair lock generally costs less than a fair one, but if the business requires competitors to obtain the lock in the order they requested it, a fair lock must be implemented.
  4. Blocking locks and spin locks: their efficiency differs across usage scenarios. A blocking lock incurs context switches; with high concurrency and short critical sections, that overhead can be high. Conversely, if the critical section takes a long time and waiters keep spinning, the CPU load becomes heavy.

With all these issues and conditions in mind, let’s look at some typical implementations.

– Typical implementation –

The ZooKeeper implementation

ZooKeeper (ZK) has a node type called sequential nodes. If we create three nodes under /lock/, the ZK cluster creates them in the order the requests were initiated, e.g. /lock/0000000001, /lock/0000000002, and /lock/0000000003.

ZK also has a node type called ephemeral (temporary) nodes, which are created by a client and deleted automatically when the client's connection to the ZK cluster ends. EPHEMERAL_SEQUENTIAL nodes are both ephemeral and sequential.

Whether a node exists in ZK can serve as the lock state of a distributed lock. The basic logic of such a lock is:

  • The client calls create() to create an EPHEMERAL_SEQUENTIAL node named like “/dsm-locks/lockname/lock-”.
  • The client calls getChildren("lockname") to get all the child nodes that have been created.
  • If the node created in step 1 has the lowest sequence number among all children, the client is considered to hold the lock.
  • Otherwise, the client watches the node with the largest sequence number smaller than its own, and waits. When the watched node changes, it fetches the children again to determine whether it now holds the lock.

Releasing the lock is relatively simple: just delete the child node you created. Exceptions, such as a failed node deletion, still need to be considered, though.
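For reference, Apache Curator's InterProcessMutex recipe implements essentially this scheme; a minimal sketch (the connection string and lock path are illustrative):

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import org.apache.curator.retry.ExponentialBackoffRetry;

public class ZkLockDemo {
   public static void main(String[] args) throws Exception {
       CuratorFramework client = CuratorFrameworkFactory.newClient(
               "zk1:2181,zk2:2181,zk3:2181",          // illustrative ensemble
               new ExponentialBackoffRetry(1000, 3));
       client.start();

       InterProcessMutex lock = new InterProcessMutex(client, "/locks/order");
       lock.acquire();     // creates an EPHEMERAL_SEQUENTIAL child and waits its turn
       try {
           // critical section
       } finally {
           lock.release(); // deletes our node; the next waiter is notified
       }
       client.close();
   }
}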

The open-source ZK-based Menagerie is a good example of this approach:

github.com/sfines/mena…

Menagerie's Lock first implements reentrancy, using a ThreadLocal to store the hold count: each lock adds one and each unlock subtracts one. If the current thread is found to already hold the lock, it does not need to go through the acquisition process again.

The tryAcquireDistributed method tries to acquire the lock: in a loop it checks whether a node precedes its own. If one exists, it sets a watch on that node and returns failure; if its own node turns out to be the first, it returns success.

To block while another client holds the lock, the code uses a JUC Condition: if the acquisition attempt fails, the thread awaits on the condition, giving up localLock, until the watcher on the preceding node wakes it. localLock is a local fair lock; its fair wake-up, combined with the loop that checks the preceding node, yields a fair lock overall.

This implementation is very similar to ReentrantLock's CLH queue. ZK's ephemeral nodes directly avoid the problem of lock state that cannot be cleaned up after a network disconnection or host crash, and its sequential nodes avoid the herd effect. These features make ZK one of the most common solutions for distributed locks.

The Redis implementation

Redis's nature as a distributed cache makes it a natural basis for distributed locks: whether a lock ID exists in Redis can indicate whether the lock is held. Redis provides SETNX (SET if Not eXists) and GETSET (atomically set a new value and return the old one), which together can ensure that only one client acquires a given lock.

Redis, however, has no natural equivalent of ZK's ephemeral nodes to prevent deadlocks after a host crash or network disconnection.

The following is a common, but imperfect, Redis distributed lock implementation:

  • Thread A issues SETNX lock.orderid with a timestamp value to try to acquire the lock; if the key does not exist, it is set and A holds the lock.
  • If the lock already exists, A checks whether the lock's value (a timestamp) has expired relative to the current time; if it has not timed out, A waits and retries.
  • If the timestamp has expired, A issues GETSET lock.orderid with a new timestamp; if the old value returned is still the expired timestamp, A has obtained the lock.
  • If another thread C performed the same steps one step faster, the old value A gets back is a timestamp that has not timed out, so A has not acquired the lock as expected and must wait or retry.

Another issue this implementation must consider is the global clock: host clocks cannot be perfectly synchronized in a production environment, so the timestamp comparison may be wrong.

The above is a common Redis implementation; it can also be done with SETNX+EXPIRE, as sketched below. Redisson is an officially recommended Redis client that implements many distributed features; its distributed lock provides a more complete solution. Source:

github.com/mrniko/redi…
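A sketch of the SETNX+EXPIRE variant mentioned above, using Jedis (API per Jedis 3.x; key names are ours). SET ... NX PX sets the key and its expiry in one atomic command, avoiding deadlock after a crash (Problem 2), and the Lua script releases the lock only if the stored token is still ours (Problem 3):

import java.util.Collections;
import java.util.UUID;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class RedisLock {
   private static final String UNLOCK_SCRIPT =
       "if redis.call('get', KEYS[1]) == ARGV[1] " +
       "then return redis.call('del', KEYS[1]) else return 0 end";

   // Try to lock: existence check and expiry are set in one atomic command.
   static String tryLock(Jedis jedis, String key, long ttlMillis) {
       String token = UUID.randomUUID().toString(); // identifies this owner
       String ok = jedis.set(key, token, SetParams.setParams().nx().px(ttlMillis));
       return "OK".equals(ok) ? token : null;
   }

   // Unlock only if we still hold it: the Lua script makes GET+DEL atomic.
   static boolean unlock(Jedis jedis, String key, String token) {
       Object r = jedis.eval(UNLOCK_SCRIPT,
               Collections.singletonList(key), Collections.singletonList(token));
       return Long.valueOf(1L).equals(r);
   }
}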

The Tair implementation

Tair's implementation is similar to Redis's. The Tair client encapsulates an expireLock method: the lock state and an expiration timestamp together determine whether the lock is held, and only a lock state that exists and has not expired counts as locked. In the locked state the lock cannot be acquired again, but it can be unlocked with a timestamp greater than or equal to the expiration time.

This way, the timestamp is not stored in the value and the atomicity of the lock operations is preserved. More importantly, since the timeout is judged by Tair itself, inconsistent clocks across hosts are avoided.

The distributed lock implementations above are all common, and some have been used in production environments. As application environments grow more complex, however, these implementations can still run into challenges:

Strong dependence on external components: distributed lock implementations depend on external data stores such as ZK and Redis, so once those components fail, the distributed lock becomes unavailable.

Requirements that cannot be fully met: each distributed lock implementation has its own characteristics, and some requirements, such as fair locks or waiting for a lock with a timeout, are not well supported by every implementation.

With this in mind, and combining the implementations above, we developed Cerberus (named after the hound that guards the underworld in Greek mythology) to provide a flexible and reliable distributed lock.

Cerberus distributed lock

Cerberus has several characteristics.

Feature 1: one set of interfaces for multiple engines

The Cerberus distributed lock is implemented on multiple engines (currently Tair and ZK, with Redis to come) and lets users choose one or more engines as needed, so the engine's characteristics can be matched to the actual business requirements and system architecture.

Cerberus abstracts the different engines behind a single set of interfaces, hiding each engine's implementation details. Users can focus on business logic, and can choose and switch engines without changing any business code.

If the user chooses more than one engine, the primary and secondary engines are distinguished by configuration order.

Feature 2: flexible to use, low learning cost

Below are Cerberus's lock methods. They are kept consistent with JUC's ReentrantLock API, so they are flexible to use and require essentially no extra learning; a usage sketch follows the method list.

**void lock();**

Acquires the lock. If the lock is held by someone else, the current thread blocks until the lock is acquired.

**boolean tryLock();**

Acquires the lock only if it is free at the time of invocation. If the lock is available, it is acquired and true is returned immediately; otherwise false is returned immediately.

**boolean tryLock(long time, TimeUnit unit) throws InterruptedException;**

Acquires the lock if it becomes free within the given waiting time and the current thread is not interrupted, returning true immediately on success. If the lock remains unavailable for the whole waiting time, false is returned.

**void lockInterruptibly() throws InterruptedException;**

Acquires the lock. If the lock is occupied, waits until the lock is acquired or the thread is interrupted.

**void unlock();**

Releases the currently held lock.
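Because the interface mirrors java.util.concurrent.locks.Lock, the usual lock/try/finally idiom applies unchanged; a sketch against the standard Lock interface (Cerberus's own construction API is not shown here):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Lock;

public class LockUsage {
   // Works against any Lock implementation, including a distributed one.
   static void withLock(Lock lock) throws InterruptedException {
       // Wait at most 500 ms for the lock instead of blocking forever.
       if (!lock.tryLock(500, TimeUnit.MILLISECONDS)) {
           return; // lock not obtained: give up or retry later
       }
       try {
           // critical section: operate on the shared resource
       } finally {
           lock.unlock(); // always release, even if the critical section throws
       }
   }
}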

Feature 3: Supports one-click downgrade

Cerberus provides an interface for switching engines in real time:

  • String switchEngine(): switches the distributed lock engine following the configured engine order. Return value: the name of the engine now in use, such as "zk".
  • String switchEngine(String engineName): switches the distributed lock engine to the specified engine. Parameter: engineName, the engine name, the same as the configured bean name, e.g. "zk" or "tair". Return value: the name of the engine now in use, such as "zk".

When the user configures both engines, the distributed lock normally works on the primary engine. Once the primary engine fails, the switch interface can be invoked automatically or manually to smoothly move the distributed lock onto the other engine and minimize the risk. Automatic switching can be implemented with Hystrix. One recommended approach for manual switching is Meituan-Dianping's ZooKeeper-based configuration component MCC: by watching MCC configuration changes, all hosts in the distributed system switch engines together. Note that switching engines does not currently migrate locks already held on the original engine.

This is a deliberate trade-off among necessity, system complexity, and reliability. In practice, the time between an engine failure and the engine switch, especially a manual switch, is much longer than the lifetime of a distributed lock. For a lightweight component like Cerberus, migrating locks would bring unnecessary overhead and high system complexity. So if absolute reliability is required even after an engine failure, other mechanisms must be combined with it.

In addition, Cerberus provides built-in public clusters, removing the need to set up and configure a cluster of your own. Cerberus also has a complete application-authorization mechanism, preventing unvetted use by business teams from impacting the clusters.

So far, the Cerberus distributed lock has iterated through 8 versions and has been running stably in several Meituan-Dianping projects.

– Idempotence problem –

Idempotency, simply put, means that calling an interface many times produces the same result as calling it once. Broadly, this "interface" can be an externally published HTTP or Thrift interface, an internal interface that consumes messages, or even an internal method or operation.

So why do we need interfaces to be idempotent? Consider the following scenarios:

  1. When placing an order in the app, the user taps "confirm", gets no response, and taps several more times. If the idempotency of the interface is not guaranteed, duplicate orders result.
  2. A message is pushed repeatedly. If the interface handling the message is not idempotent, the impact of the message being consumed repeatedly can be significant.

In a distributed environment the network is more complex: front-end jitter, network faults, repeated messages, and slow responses make repeated interface invocations more likely than in a centralized environment. Repeated messages in particular are hard to avoid in a distributed environment; as Tyler Treat wrote in You Cannot Have Exactly-Once Delivery:

Within the context of a distributed system, you cannot have exactly-once message delivery.

In a distributed environment, some interfaces are naturally idempotent, such as queries, or writes that set data to a constant with no other records or side effects. In every other case, any operation that modifies data or changes state must guard against repeated execution. Indirectly achieving interface idempotency by preventing the effects of repeated operations has become an effective solution.

GTIS

GTIS is one such solution. It is a lightweight duplicate-prevention system that guarantees uniqueness in a distributed environment; we can use it to indirectly guarantee the idempotency of each operation. It has the following characteristics:

  • Efficient: low latency; the average response time of a single call is within 2 ms, with almost no impact on the business;
  • Reliable: provides a degradation strategy for external storage-engine failures, plus application authentication and customizable cluster configuration to reduce interference between services;
  • Simple: easy to integrate and low learning cost; a simple configuration and two method calls in the code complete the whole integration;
  • Flexible: provides multiple interface parameters and policies to meet different service requirements.

Implementation principle

Basic principle

The idea behind GTIS is to make every distinct business operation unique. This uniqueness is achieved by generating a unique global ID from the content features that characterize each operation. The rule is: the same operation generates the same global ID; different operations generate different global IDs.

The generated global ID needs to be kept in an external storage engine such as a database, Redis, or Tair. Given Tair's natural distribution and persistence advantages, GTIS currently stores it in Tair. The key and value are:

  • Key: APP_KEY plus the content features of the business operation form trans_contents, a unique identifier, which is then hashed to produce the global ID used as the key.
  • Value: current_timestamp + trans_contents, where current_timestamp identifies the currently operating thread.

Tair's SETNX-style operation is used to judge duplication: if no value exists yet, it is set and success is returned; if a value already exists, failure is returned.

Internal processes

The internal implementation process of GTIS is as follows (a code sketch follows the list):

  1. Before a business operation, the business side generates transContents, which uniquely identifies the operation, and passes it to GTIS.
  2. GTIS generates the global ID from the incoming transContents with MD5.
  3. GTIS performs a setnx into Tair with the global ID as the key and current_timestamp + transContents as the value, and returns the result to the business side.
  4. Based on the result, the business side decides whether to start the operation.
  5. If allowed, it performs the operation; otherwise it ends the current request.
  6. The business side passes the operation's result and the request result to GTIS, which validates them.
  7. If the operation succeeded, GTIS fetches the value by key and compares it with the value it wrote; if the two are equal, it extends the expiration time of the global ID.
  8. GTIS returns the final result.
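A rough sketch of steps 2-7 against a generic key-value store with a SETNX-style operation (the KvStore interface and all names here are hypothetical; GTIS itself stores into Tair, whose client API is not shown):

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class GtisSketch {
   // Hypothetical KV abstraction standing in for the Tair client.
   interface KvStore {
       boolean setIfAbsent(String key, String value, int ttlSeconds); // SETNX + TTL
       String get(String key);
       void expire(String key, int ttlSeconds);
       void delete(String key);
   }

   private final KvStore store;
   GtisSketch(KvStore store) { this.store = store; }

   // Pre-method: returns the value written on success, or null if the same
   // operation is already in flight or already done (a duplicate).
   String before(String transContents, int inFlightTtlSeconds) throws Exception {
       String globalId = md5(transContents);                       // step 2
       String value = System.currentTimeMillis() + transContents;  // marks this caller
       return store.setIfAbsent(globalId, value, inFlightTtlSeconds) // step 3
               ? value : null;
   }

   // Post-method: on success, extend the ID's lifetime (step 7); on failure,
   // delete it so an identical operation can be retried immediately.
   void after(String transContents, String myValue, boolean success,
              int successTtlSeconds) throws Exception {
       String globalId = md5(transContents);
       if (!myValue.equals(store.get(globalId))) return; // another caller owns it now
       if (success) store.expire(globalId, successTtlSeconds);
       else store.delete(globalId);
   }

   private static String md5(String s) throws Exception {
       byte[] d = MessageDigest.getInstance("MD5")
               .digest(s.getBytes(StandardCharsets.UTF_8));
       StringBuilder sb = new StringBuilder();
       for (byte b : d) sb.append(String.format("%02x", b & 0xff));
       return sb.toString();
   }
}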

Implementation difficulties

The difficulty in GTIS is ensuring that its duplicate detection is reliable. Because of the complexity of the distributed environment and the uncertainty of business operations, the problems considered for distributed locks in the previous chapter, such as network disconnection and host downtime, must be solved in GTIS as well. A few typical scenarios:

  1. If the operation fails, ideally another identical operation can be performed immediately. GTIS therefore checks the business side's result and, on failure, deletes the global ID right away.
  2. If the operation times out or the host goes down, the current operation cannot tell GTIS whether it succeeded, so a timeout mechanism must be introduced: once no feedback arrives from the business side for too long, the global ID expires as well.
  3. Combining the two scenarios: since the global ID may expire or be deleted, we must make sure we never delete a global ID that now belongs to another instance of the same operation. A special marker is written and checked for this purpose; the marker used here is the current timestamp.

As you can see, the approach to solving these problems is quite similar to the distributed lock implementations in the previous chapter. Beyond these, more scenarios have to be considered and handled along each branch of the flow.

Directions for use

In use, the business side only needs to call GTIS's pre-method before the operation and its post-method after it. If the pre-method returns that the operation can proceed, the operation is not a duplicate and can be executed; otherwise the request ends.

The user needs to consider the following two parameters:

  1. Global in space: the content features the business side supplies to mark the operation's uniqueness; these can be a unique string ID, a map, a POJO, and so on, e.g. an order ID.
  2. Global in time: how long repetition is disallowed: within an hour, within a month, or permanently.

In addition, GTIS provides different failure-handling policies and retry mechanisms to reduce the impact of external storage-engine exceptions on the system.

So far, GTIS has iterated through 7 versions in the nearly one year since its first release, and has been running stably in several Meituan-Dianping projects.

– Summary –

Mutual exclusion and idempotency problems are common in distributed environments. We analyzed the basic ideas and implementation principles for solving both, and presented concrete solutions.

A common approach to the mutual exclusion problem is to use distributed locks to handle preemption of shared resources. Distributed locks build on the principles of mutexes in multi-threaded and multi-process environments: as long as some basic storage conditions are met and exceptions such as network disconnection can be handled, a distributed lock can be implemented.

There are typical distributed lock implementations based on storage engines such as ZooKeeper and Redis. Because of the limitations of any single storage engine, we developed the multi-engine distributed lock Cerberus on top of ZooKeeper and Tair; it is flexible and convenient to use, and also provides a complete one-click downgrade scheme.

To solve the operation idempotency problem, we can indirectly make interfaces idempotent by preventing repeated operations. GTIS provides a reliable solution: relying on a storage engine, it prevents duplicate operations by generating a unique global ID from the content features that uniquely characterize each operation.

At present, the Cerberus distributed lock and GTIS are both applied in production environments and running stably. The solutions they provide can handle the mutual exclusion and idempotency problems of most distributed environments. It is worth noting that distributed locks and GTIS are not a cure-all: their strong dependence on external storage systems affects reliability in unstable environments, and under high concurrency, a distributed lock whose granularity is not well controlled is not appropriate.

In general, business scenarios in a distributed environment are complex. Solving mutual exclusion and idempotency requires weighing the current system architecture, business requirements, and future evolution together. The Cerberus distributed lock and GTIS will continue to iterate, providing more engine choices, more efficient and reliable implementations, and simpler integration to meet more complex usage scenarios and business needs.

Source | blog.csdn.net/zdy0_2004/article/details/52760404
