Distributed locking extends the familiar idea of multithreaded locking from a single machine to a distributed system, and it is a common means of coordinating access to shared resources. Redis distributed locks and ZK distributed locks are the two most common choices. But what is the difference between them, and how should we choose in everyday use?

1. Analysis

This is a demanding question: you need to understand not only how each lock is implemented, but also the principles behind it. So the answer has several layers.

As we all know, Redis is billed as lightweight, and intuitively its distributed lock looks easy to implement, for example with SETNX (though in practice the SET command with the NX option is recommended). But once you add the requirement of high availability, the difficulty of a Redis lock explodes.

Add in the other attributes a lock can have, such as optimistic vs. pessimistic locking and read/write locks, and things get even more complicated.

If you covered all of it thoroughly, you could talk for a whole day.

2. How to answer

Let's start with the superficial, introductory analysis:

  • A Redis distributed lock can be implemented with the SETNX command (though in practice the SET command with the NX option is recommended)
  • A ZK distributed lock is based on ephemeral sequential nodes and the node watch mechanism
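To make the Redis side concrete, here is a minimal sketch of the semantics behind `SET key token NX PX ttl` and the token-checked release, modeled with an in-memory map rather than a real Redis client. The class and method names (`InMemoryLockSketch`, `tryLock`, `unlock`) are purely illustrative, not any library's API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative in-memory stand-in for the Redis commands involved.
public class InMemoryLockSketch {

    private static final class Entry {
        final String token;          // which client holds the lock
        final long expireAtMillis;   // PX-style expiry
        Entry(String token, long ttlMillis) {
            this.token = token;
            this.expireAtMillis = System.currentTimeMillis() + ttlMillis;
        }
    }

    private final Map<String, Entry> store = new ConcurrentHashMap<>();

    // Models SET lockKey token NX PX ttl: succeeds only if the key is absent,
    // or if the previous holder's TTL has already expired.
    public boolean tryLock(String lockKey, String token, long ttlMillis) {
        long now = System.currentTimeMillis();
        Entry mine = new Entry(token, ttlMillis);
        Entry winner = store.merge(lockKey, mine,
                (old, fresh) -> old.expireAtMillis <= now ? fresh : old);
        return winner == mine; // we hold the lock only if our entry won
    }

    // Models the safe-release pattern (in real Redis, a Lua script): delete
    // the key only if it still carries our token, so a client whose lock
    // expired can never release a lock someone else has since acquired.
    public boolean unlock(String lockKey, String token) {
        boolean[] released = {false};
        store.computeIfPresent(lockKey, (k, e) -> {
            if (e.token.equals(token)) {
                released[0] = true;
                return null; // remove the mapping, i.e. DEL
            }
            return e;        // not ours: leave it alone
        });
        return released[0];
    }
}
```

The token check in `unlock` is exactly why a plain `DEL` is unsafe and why real implementations push the compare-and-delete into a server-side Lua script.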

Answering this way, you dig yourself into a hole, because there are too many details involved. The interviewer only asked about the difference; why drag yourself down to the source-code level?

The following analysis is recommended instead:

  • Redis: use the RedLock implementation encapsulated by Redisson
  • ZK: use the InterProcessMutex encapsulated by Curator

Contrast:

  • Ease of implementation: Zookeeper >= Redis
  • Server performance: Redis > Zookeeper
  • Client performance: Zookeeper > Redis
  • Reliability: Zookeeper > Redis

Now in more detail:

2.1 Difficulty of Implementation

When operating the underlying APIs directly, the difficulty is about the same: there are many boundary scenarios to consider in both cases. However, since ZK's ZNodes have natural lock-like properties (ephemeral, sequential, watchable), rolling your own lock on top of them is very easy.

Redis has to account for too many abnormal scenarios, such as lock timeout and the high availability of the lock itself, which makes it harder to implement correctly.
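The lock-timeout problem in particular is what Redisson's "watchdog" addresses: while the owner is still working, a background task keeps extending the lock's TTL so it cannot expire mid-business-logic. Below is a minimal sketch of that renewal idea only; it is not Redisson's actual code, and the `renew` Runnable stands in for what would be a real `PEXPIRE lockKey ttl` call against Redis:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

// Illustrative watchdog sketch for the lock-timeout problem.
public class WatchdogSketch {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Renew at one third of the TTL, a common safety margin: even if one
    // renewal is delayed, there are still chances left before expiry.
    public ScheduledFuture<?> start(Runnable renew, long ttlMillis) {
        long period = ttlMillis / 3;
        return scheduler.scheduleAtFixedRate(renew, period, period, TimeUnit.MILLISECONDS);
    }

    // Cancel renewals once the business logic has finished and the lock
    // has been released; otherwise the lock would live forever.
    public void stop(ScheduledFuture<?> task) {
        task.cancel(false);
        scheduler.shutdownNow();
    }
}
```

Note the flip side: if the owner process dies, the renewals stop and the TTL finally releases the lock, which is the whole point of having a timeout at all.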

2.2 Server Performance

ZK is based on the Zab protocol, and a write succeeds only after half the nodes have ACKed it, so throughput is low. If locks are acquired and released frequently, the server cluster comes under pressure.

Redis is memory-based and a write only needs to succeed on the master, so throughput is high and the pressure on the Redis server is small.

2.3 Client Performance

Because ZK has a notification mechanism, acquiring a lock just requires adding a watcher. Polling is avoided and the performance cost on the client is low.

Redis has no notification mechanism, so a client can only contend for the lock by polling, CAS-style. The more the client spins idly, the more pressure it puts on itself.
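The two client styles can be contrasted with a small sketch. Here the `AtomicBoolean` stands in for the remote lock state; in real life every failed polling attempt is a network round trip (a `SET key value NX` call), which is exactly the cost the notification style avoids. All names here are illustrative:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

// Illustrative contrast between polling (Redis style) and notification (ZK style).
public class ClientStyleSketch {

    // Redis style: spin with a sleep between attempts until the deadline.
    // Every failed attempt wastes a round trip and some client CPU.
    public static boolean acquireByPolling(AtomicBoolean lockFree,
                                           long timeoutMillis,
                                           long backoffMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (lockFree.compareAndSet(true, false)) {
                return true; // acquired
            }
            TimeUnit.MILLISECONDS.sleep(backoffMillis); // back off, then retry
        }
        return false; // gave up
    }

    // ZK style: park on a latch; a watcher counts it down when the previous
    // node is deleted, so no polling round trips are needed while waiting.
    public static boolean acquireByNotification(CountDownLatch released,
                                                long timeoutMillis) throws InterruptedException {
        return released.await(timeoutMillis, TimeUnit.MILLISECONDS);
    }
}
```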

2.4 Reliability

This one is fairly obvious. Zookeeper was born for coordination: it has the strict Zab protocol to ensure data consistency, and a robust lock model.

Redis is, frankly, somewhat less reliable. Even with RedLock there is no guarantee of 100% robustness, but ordinary applications rarely hit the extreme scenarios, so it is commonly used anyway.

3. Extension

Zk distributed lock code example:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.recipes.locks.InterProcessMutex;
import java.util.concurrent.TimeUnit;

public class ExampleClientThatLocks
{
    private final InterProcessMutex lock;
    private final FakeLimitedResource resource;
    private final String clientName;

    public ExampleClientThatLocks(CuratorFramework client, String lockPath, FakeLimitedResource resource, String clientName)
    {
        this.resource = resource;
        this.clientName = clientName;
        lock = new InterProcessMutex(client, lockPath);
    }

    public void doWork(long time, TimeUnit unit) throws Exception
    {
        if ( !lock.acquire(time, unit) )
        {
            throw new IllegalStateException(clientName + " could not acquire the lock");
        }
        try
        {
            System.out.println(clientName + " has the lock");
            resource.use();
        }
        finally
        {
            System.out.println(clientName + " releasing the lock");
            lock.release(); // always release the lock in a finally block
        }
    }
}

Redisson distributed lock example:

String resourceKey = "goodgirl";
RLock lock = redisson.getLock(resourceKey);
try {
    lock.lock(5, TimeUnit.SECONDS);
    // Real business
    Thread.sleep(100);
} catch (Exception ex) {
    ex.printStackTrace();
} finally {
    // unlock only if this thread still holds the lock, otherwise
    // unlock() would throw IllegalMonitorStateException
    if (lock.isHeldByCurrentThread()) {
        lock.unlock();
    }
}

To give you an idea of the complexity, here is Redisson's internal lock and unlock implementation (Lua scripts executed atomically on the Redis server).

@Override
    <T> RFuture<T> tryLockInnerAsync(long leaseTime, TimeUnit unit, long threadId, RedisStrictCommand<T> command) {
        internalLockLeaseTime = unit.toMillis(leaseTime);

        return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, command,
                                "local mode = redis.call('hget', KEYS[1], 'mode'); " +
                                "if (mode == false) then " +
                                  "redis.call('hset', KEYS[1], 'mode', 'read'); " +
                                  "redis.call('hset', KEYS[1], ARGV[2], 1); " +
                                  "redis.call('set', KEYS[2] .. ':1', 1); " +
                                  "redis.call('pexpire', KEYS[2] .. ':1', ARGV[1]); " +
                                  "redis.call('pexpire', KEYS[1], ARGV[1]); " +
                                  "return nil; " +
                                "end; " +
                                "if (mode == 'read') or (mode == 'write' and redis.call('hexists', KEYS[1], ARGV[3]) == 1) then " +
                                  "local ind = redis.call('hincrby', KEYS[1], ARGV[2], 1); " + 
                                  "local key = KEYS[2] .. ':'.. ind;" +
                                  "redis.call('set', key, 1); " +
                                  "redis.call('pexpire', key, ARGV[1]); " +
                                  "local remainTime = redis.call('pttl', KEYS[1]); " +
                                  "redis.call('pexpire', KEYS[1], math.max(remainTime, ARGV[1])); " +
                                  "return nil; " +
                                "end;" +
                                "return redis.call('pttl', KEYS[1]);",
                        Arrays.<Object>asList(getName(), getReadWriteTimeoutNamePrefix(threadId)), 
                        internalLockLeaseTime, getLockName(threadId), getWriteLockName(threadId));
    }

@Override
    protected RFuture<Boolean> unlockInnerAsync(long threadId) {
        String timeoutPrefix = getReadWriteTimeoutNamePrefix(threadId);
        String keyPrefix = getKeyPrefix(threadId, timeoutPrefix);

        return commandExecutor.evalWriteAsync(getName(), LongCodec.INSTANCE, RedisCommands.EVAL_BOOLEAN,
                "local mode = redis.call('hget', KEYS[1], 'mode'); " +
                "if (mode == false) then " +
                    "redis.call('publish', KEYS[2], ARGV[1]); " +
                    "return 1; " +
                "end; " +
                "local lockExists = redis.call('hexists', KEYS[1], ARGV[2]); " +
                "if (lockExists == 0) then " +
                    "return nil;" +
                "end; " +
                    
                "local counter = redis.call('hincrby', KEYS[1], ARGV[2], -1); " + 
                "if (counter == 0) then " +
                    "redis.call('hdel', KEYS[1], ARGV[2]); " + 
                "end;" +
                "redis.call('del', KEYS[3] .. ':'.. (counter+1)); " +
                
                "if (redis.call('hlen', KEYS[1]) > 1) then " +
                    "local maxRemainTime = -3; " + 
                    "local keys = redis.call('hkeys', KEYS[1]); " + 
                    "for n, key in ipairs(keys) do " + 
                        "counter = tonumber(redis.call('hget', KEYS[1], key)); " + 
                        "if type(counter) == 'number' then " + 
                            "for i=counter, 1, -1 do " + 
                                "local remainTime = redis.call('pttl', KEYS[4] .. ':'.. key .. ':rwlock_timeout:' .. i); " + 
                                "maxRemainTime = math.max(remainTime, maxRemainTime);" + 
                            "end; " + 
                        "end; " + 
                    "end; " +
                            
                    "if maxRemainTime > 0 then " +
                        "redis.call('pexpire', KEYS[1], maxRemainTime); " +
                        "return 0; " +
                    "end;" + 
                        
                    "if mode == 'write' then " + 
                        "return 0;" + 
                    "end; " +
                "end; " +
                    
                "redis.call('del', KEYS[1]); " +
                "redis.call('publish', KEYS[2], ARGV[1]); " +
                "return 1; ",
                Arrays.<Object>asList(getName(), getChannelName(), timeoutPrefix, keyPrefix), 
                LockPubSub.UNLOCK_MESSAGE, getLockName(threadId));
    }

Therefore, it is recommended to use the ready-made, well-packaged components. If you insist on hand-rolling this with SETNX or SET, xjjdog can only say: enjoy the pain. Understanding the basic principles is one thing; getting the details right takes real effort.

Having said all this, how should we actually choose? It depends on your infrastructure. If your application already uses ZK and the cluster performance is strong, prefer ZK. If you only have Redis and don't want to introduce the comparatively heavyweight ZK just for a distributed lock, then use Redis.

xjjdog is a public account that keeps programmers from getting sidetracked, focusing on infrastructure and Linux. Ten years of architecture, tens of billions of daily requests, discussing the world of high concurrency with you, to give you a different taste. My personal WeChat is xjjdog0; feel free to add me as a friend for further discussion.

Xjjdoc.cn has a detailed classification of 200+ original articles, making them easier to read. You are welcome to bookmark it.