Preparing the environment

I like to make this a complete, end-to-end Redis distributed lock example: I prepared several Redis environments, deployed two services with Spring Boot, used Tengine to load-balance the two services, and used JMeter for stress testing. The setup is small, but it has all the parts it needs.

I have written up how to set up the various Redis deployment modes in a separate article:

Redis deployment and how it works – Single node, master-slave replication, Redis-Sentinel, and Redis-Cluster

Criticisms and corrections are welcome.

This article exercises the Redis distributed lock against every environment: single node, master-slave, sentinel, and cluster. In practice the differences are mostly configuration; once the configuration is right, you just call the same interfaces.

I have prepared various Redis environments on which our distributed lock code implementation is based.

Single node

Host name          Role   IP address     Port
redis-standalone   -      192.168.2.11   6379

Master-slave (1 master, 3 slaves)

Host name         Role     IP address     Port
Redis-Master-01   Master   192.168.2.20   9736
Redis-Slave-02    Slave    192.168.2.21   9736
Redis-Slave-03    Slave    192.168.2.22   9736
Redis-Slave-04    Slave    192.168.2.23   9736

Sentinel (1 master, 3 slaves, 3 sentinels)

Host name           Role       IP address     Port
Redis-Master-01     Master     192.168.2.20   9736
Redis-Slave-02      Slave      192.168.2.21   9736
Redis-Slave-03      Slave      192.168.2.22   9736
Redis-Slave-04      Slave      192.168.2.23   9736
Redis-Sentinel-01   Sentinel   192.168.2.30   29736
Redis-Sentinel-02   Sentinel   192.168.2.31   29736
Redis-Sentinel-03   Sentinel   192.168.2.32   29736

Cluster (3 masters, 3 slaves)

Host name          Role     IP address     Port
Redis-Cluster-01   Master   192.168.2.50   6379
Redis-Cluster-01   Slave    192.168.2.50   6380
Redis-Cluster-02   Master   192.168.2.51   6379
Redis-Cluster-02   Slave    192.168.2.51   6380
Redis-Cluster-03   Master   192.168.2.52   6379
Redis-Cluster-03   Slave    192.168.2.52   6380

Distributed lock application example

I have previously written about distributed lock implementations based on etcd and ZooKeeper. The example there was a flash-sale, inventory-deduction scenario, which is also a classic business case for distributed locks.

Is there anything better than Redis for implementing distributed locks? Yes, etcd!

Use ZooKeeper to implement distributed locks

In this article's example, we use a Redis distributed lock to control the view count of an article, which is our shared resource.

# Store the view count
set pview 0

Use Tengine (nginx) for load balancing

Tengine Host Information:

Host name    Role             IP address     Port
nginx-node   Load balancing   192.168.2.10   80

Nginx load-balances the two services (ports 8080 and 8090) behind a single address.

...
upstream distributed-lock {
    server 192.168.2.1:8080 weight=1;
    server 192.168.2.1:8090 weight=1;
}

server {
    listen 80;
    server_name localhost;

    location / {
        root html;
        index index.html index.htm;
        proxy_pass http://distributed-lock;
    }
    ...
}
...

JMeter stress test configuration

Simulate 666 requests at the same time:

The wheel: Redisson

Redisson has good support for Redis distributed locks. Its implementation consists of:

(1) Locking mechanism: based on the hash of the key, a node is selected and the client executes a Lua script on it to acquire the lock.

(2) Lock mutual exclusion: if another client executes the same Lua script, it is told that the lock already exists and enters a loop where it keeps retrying.

(3) Reentrancy: the thread that holds the lock can acquire it again, incrementing a hold count.

(4) Watch dog automatic lease renewal: the lock's expiration is periodically extended while the holder is still running (see the sketch after this list).

(5) Lock release mechanism.
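
As a usage-level sketch of the watch dog behavior (a minimal example, not code from this article's project; it assumes a configured RedissonClient, and the key name "pview-lock" is arbitrary), locking with and without an explicit lease time behaves differently:

import java.util.concurrent.TimeUnit;
import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;

public class WatchDogSketch {

    // Minimal sketch; assumes redissonClient is configured elsewhere.
    public static void demo(RedissonClient redissonClient) {
        RLock lock = redissonClient.getLock("pview-lock");

        // lock() with no lease time: the watch dog keeps renewing the default
        // 30-second lease (lockWatchdogTimeout) while the holding thread is alive.
        lock.lock();
        try {
            // ... critical section ...
        } finally {
            lock.unlock();
        }

        // lock(leaseTime, unit): the watch dog is not used; the lock simply
        // expires after 10 seconds even if the holder is still running.
        lock.lock(10, TimeUnit.SECONDS);
        try {
            // ... critical section ...
        } finally {
            lock.unlock();
        }
    }
}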

Code implementation

Without a lock

@RequestMapping("/v1/pview")
public String incrPviewWithoutLock(a) {
    // Reading increases by 1
    long pview = redissonClient.getAtomicLong("pview").incrementAndGet();
    LOGGER.info("{} thread executes read + 1, current read: {}", Thread.currentThread().getName(), pview);
    return port + " increase pview end!";
}

666 concurrent requests at the same time, look at the result:

666 requests and the final result is 34!

Add a synchronized lock

Add a synchronized block to the increment logic; both services (8080 and 8090) run the same code:

@RequestMapping("/v2/pview")
public String incrPviewWithSync(a) {
    synchronized (this) {
        // Reading increases by 1
        int oldPview = Integer.valueOf((String) redissonClient.getBucket("pview".new StringCodec()).get());
        int newPview = oldPview + 1;
        redissonClient.getBucket("pview".new StringCodec()).set(String.valueOf(newPview));
        LOGGER.info("{} thread executes read + 1, current read: {}", Thread.currentThread().getName(), newPview);
    }
    return port + " increase pview end!";
}

Instead of 666 as expected, it turned out to be 391:

Within each process there are no lost updates, but the 8080 and 8090 processes still run the read-then-+1 sequence on the same pview concurrently, so updates are lost between them.

In other words, synchronized only solves concurrency within a single process; it cannot solve the shared-resource problem introduced by a distributed system.

Enter the protagonist: the distributed lock

To solve the problem of operating on shared resources in a distributed system, we use a distributed lock.

Full code: github.com/xblzer/dist…

Constructing the RedissonClient:

public PviewController(RedisConfiguration redisConfiguration) {
    RedissonManager redissonManager;
    switch (redisConfiguration.deployType) {
        case "single":
            redissonManager = new SingleRedissonManager();
            break;
        case "master-slave":
            redissonManager = new MasterSlaveRedissonManager();
            break;
        case "sentinel":
            redissonManager = new SentinelRedissonManager();
            break;
        case "cluster":
            redissonManager = new ClusterRedissonManager();
            break;
        default:
            throw new IllegalStateException("Unexpected value: " + redisConfiguration.deployType);
    }
    this.redissonClient = redissonManager.initRedissonClient(redisConfiguration);
}

A strategy pattern is used to initialize a different RedissonClient depending on how Redis is deployed.
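
As a rough sketch of what those strategy classes might look like (sketched here as an interface; the real classes are in the repository linked above, and the RedisConfiguration field names such as address, masterName and sentinelAddresses are hypothetical), each manager simply builds the matching Redisson Config:

import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

// Minimal stand-in for the article's RedisConfiguration; field names are made up here.
class RedisConfiguration {
    String deployType;
    String address;
    String password;
    String masterName;
    String[] sentinelAddresses;
}

// Hedged sketch only; the repository's RedissonManager hierarchy may differ.
interface RedissonManager {
    RedissonClient initRedissonClient(RedisConfiguration cfg);
}

class SingleRedissonManager implements RedissonManager {
    @Override
    public RedissonClient initRedissonClient(RedisConfiguration cfg) {
        Config config = new Config();
        // e.g. "redis://192.168.2.11:6379"
        config.useSingleServer().setAddress(cfg.address).setPassword(cfg.password);
        return Redisson.create(config);
    }
}

class SentinelRedissonManager implements RedissonManager {
    @Override
    public RedissonClient initRedissonClient(RedisConfiguration cfg) {
        Config config = new Config();
        config.useSentinelServers()
              .setMasterName(cfg.masterName)              // e.g. "mymaster"
              .addSentinelAddress(cfg.sentinelAddresses)  // e.g. "redis://192.168.2.30:29736", ...
              .setPassword(cfg.password);
        return Redisson.create(config);
    }
}

// MasterSlaveRedissonManager and ClusterRedissonManager follow the same pattern
// using config.useMasterSlaveServers() and config.useClusterServers().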

RedisLock:

Here, so that the ZooKeeper and etcd distributed locks can be integrated under the same abstraction, I extracted an AbstractLock template-method class, which implements java.util.concurrent.locks.Lock.

That way every lock can be used uniformly as Lock lock = new XXX().

This is reflected in the following article:

Is there anything better than Redis for implementing distributed locks? Yes, etcd!
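
A minimal sketch of such a template class (the real one lives in the repository and may differ in detail) could look like this:

import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.Lock;

// Hedged sketch of the AbstractLock template-method class; not the repo's exact code.
public abstract class AbstractLock implements Lock {

    // lock(), tryLock(), tryLock(long, TimeUnit) and unlock() remain abstract and
    // are filled in by the concrete locks (RedisLock, ZooKeeper lock, etcd lock, ...).

    @Override
    public void lockInterruptibly() throws InterruptedException {
        // Default: delegate to lock(); subclasses that support interruption override this.
        lock();
    }

    @Override
    public Condition newCondition() {
        throw new UnsupportedOperationException("Conditions are not supported by this lock");
    }
}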

public class RedisLock extends AbstractLock {

    private RedissonClient redissonClient;

    private String lockKey;

    public RedisLock(RedissonClient redissonClient, String lockKey) {
        this.redissonClient = redissonClient;
        this.lockKey = lockKey;
    }

    @Override
    public void lock() {
        redissonClient.getLock(lockKey).lock();
    }

    // ... omitted

    @Override
    public void unlock() {
        redissonClient.getLock(lockKey).unlock();
    }

    // ...
}

Request API:

@RequestMapping("/v3/pview")
public String incrPviewWithDistributedLock(a) {
    Lock lock = new RedisLock(redissonClient, lockKey);
    try {
        / / lock
        lock.lock();
        int oldPview = Integer.valueOf((String) redissonClient.getBucket("pview".new StringCodec()).get());
        // The reading amount of executing business increased by 1
        int newPview = oldPview + 1;
        redissonClient.getBucket("pview".new StringCodec()).set(String.valueOf(newPview));
        LOGGER.info("{} successfully acquired the lock, read + 1, current read: {}", Thread.currentThread().getName(), newPview);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        / / releases the lock
        lock.unlock();
    }
    return port + " increase pview end!";
}

Stress test results:

According to the results, there is no problem.

RedissonLock lock source code analysis

The core of RedissonLock's locking logic is in tryLockInnerAsync:

<T> RFuture<T> tryLockInnerAsync(long waitTime, long leaseTime, TimeUnit unit, long threadId, RedisStrictCommand<T> command) {
    this.internalLockLeaseTime = unit.toMillis(leaseTime);
    return this.evalWriteAsync(this.getName(), LongCodec.INSTANCE, command, 
            "if (redis.call('exists', KEYS[1]) == 0) then " +
            "redis.call('hincrby', KEYS[1], ARGV[2], 1); " +
            "redis.call('pexpire', KEYS[1], ARGV[1]); " +
            "return nil; " +
            "end; " +
            "if (redis.call('hexists', KEYS[1], ARGV[2]) == 1) then " +
            "redis.call('hincrby', KEYS[1], ARGV[2], 1); " +
            "redis.call('pexpire', KEYS[1], ARGV[1]); " +
            "return nil; " +
            "end; " +
            "return redis.call('pttl', KEYS[1]);",
            Collections.singletonList(this.getName()),
            this.internalLockLeaseTime,
            this.getLockName(threadId));
}

A Lua script is executed to acquire the lock. The reasons for using a Lua script are:

  • Atomic operation. Redis executes the entire script as a single, uninterrupted unit. This also makes scripts useful for batch updates and inserts.
  • Reduced network overhead. Multiple Redis operations are merged into one script, reducing round trips.
  • Code reuse. A script sent by one client can be cached in Redis and invoked by other clients via the script's id (a small Redisson example follows this list).
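
For illustration only (this is not code from this article's repository), here is how a Lua script can be run atomically through Redisson's RScript API:

import java.util.Collections;
import org.redisson.api.RScript;
import org.redisson.api.RedissonClient;
import org.redisson.client.codec.StringCodec;

public class LuaScriptExample {

    // Assumes a configured RedissonClient is passed in; increments "pview" atomically.
    static Long incrPviewAtomically(RedissonClient redissonClient) {
        return redissonClient.getScript(StringCodec.INSTANCE).eval(
                RScript.Mode.READ_WRITE,
                "return redis.call('incr', KEYS[1])",   // the whole script runs atomically in Redis
                RScript.ReturnType.INTEGER,
                Collections.singletonList("pview"));
    }
}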

There are several Redis commands used:

  • hincrby

    HINCRBY key field increment

    Adds increment to the value stored at field in the hash stored at key.

    The increment can also be negative, which amounts to subtracting from the given field.

    If the key does not exist, a new hash table is created and the HINCRBY command is executed.

    If the field does not exist, the value of the field is initialized to 0 before the command is executed.

    The return value:

    The value of the field in the hash table key after executing the HINCRBY command.

  • pexpire

    PEXPIRE key milliseconds

    The function of this command is similar to that of the EXPIRE command, but it sets the lifetime of the key in milliseconds, rather than seconds, as with the EXPIRE command.

    The return value:

    If the setting succeeds, return 1

    If the key does not exist or fails to be set, 0 is returned

  • hexists

    HEXISTS key field

    Check whether the given field exists in the hash table key.

    The return value:

    Returns 1 if the hash table contains the given field.

    Returns 0 if the hash table does not contain the given field, or if the key does not exist.

  • pttl

    PTTL key

    This command is similar to the TTL command, but it returns the remaining lifetime of the key in milliseconds, rather than seconds, as with the TTL command.

    The return value:

    If the key does not exist, -2 is returned.

    If the key exists but the remaining lifetime is not set, -1 is returned. Otherwise, the remaining lifetime of the key is returned, in milliseconds.

Now back to that Lua script. KEYS[1] is the lock key (getName()), ARGV[1] is the lease time in milliseconds (internalLockLeaseTime), and ARGV[2] is the lock holder's name (getLockName(threadId), the client UUID plus the thread id).

  • If KEYS[1] does not exist (nobody holds the lock):

hincrby KEYS[1] ARGV[2] 1 creates a hash at KEYS[1] with field ARGV[2] and value 1 (because hincrby initializes a missing field to 0 before executing);

then pexpire KEYS[1] ARGV[1] sets the expiration time, and nil is returned to signal success.

  • If KEYS[1] exists and the field ARGV[2] exists (the current thread already holds the lock):

hincrby KEYS[1] ARGV[2] 1 adds 1 to that field, i.e. the hold count, which is what makes the lock reentrant;

then the expiration time is refreshed and nil is returned.

  • Otherwise the lock is held by someone else, and the script returns the remaining lifetime of KEYS[1] via pttl so the caller can wait and retry.

RedisRedLock (red lock)

The previous solution appears to solve the problem of operating on shared resources in a distributed system, but only under the assumption that Redis itself never fails.

Consider what happens under Redis Sentinel mode when some nodes fail:

  1. A client acquires the lock on MasterA, with a lock timeout of 20 seconds;
  2. MasterA goes down before the lock expires (i.e. within 20 seconds of locking);
  3. Sentinel promotes one of the Slave nodes to become MasterB;
  4. The lock was never replicated to MasterB, so MasterB has no lock and another client acquires it there;
  5. MasterB also goes down before its lock expires, and Sentinel promotes a MasterC;
  6. The lock is acquired on MasterC as well...

In the end, three instances were locked at the same time! This is absolutely intolerable!

For this, Redis offers the RedLock (red lock) algorithm.

RedLock algorithm steps

In a distributed Redis environment, we assume that there are N Redis masters. These nodes are completely independent of each other and there is no master-slave replication or other cluster coordination mechanism.

Take 5 Redis nodes as an example, which is a reasonable setup. These instances should run on 5 separate machines or virtual machines so that they cannot all go down at once (as they could if, say, all 5 instances ran on a single machine).

To get the lock, the client should do the following:

  1. Get the current Unix time in milliseconds.
  2. Try to acquire the lock on all N instances in turn, using the same key and a random value on each.

In Step 2, when setting a lock to Redis, the client should set a network connection and response timeout that is less than the lock expiration time.

For example, if your lock expires automatically in 10 seconds, the timeout should be between 5 and 50 milliseconds. This prevents the client from waiting for a response when Redis has already failed on the server. If the server does not respond within the specified time, the client should try another Redis instance as soon as possible.

  3. The client computes how long acquiring the lock took by subtracting the time recorded in step 1 from the current time.

The lock is considered acquired if and only if the client obtained it on a majority of the Redis nodes (at least 3 of the 5 in this example) and the total time spent acquiring it is less than the lock's validity time.

  4. If the lock was acquired, the key's real validity time equals the original validity time minus the time spent acquiring the lock (as computed in step 3).
  5. If, for some reason, the lock could not be acquired (the client did not lock at least N/2+1 of the Redis instances, or the validity time has run out), the client should unlock all of the instances, even those it believes it never locked (see the sketch after this list).
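
The following is only a hedged sketch of that acquire-and-check logic, written for illustration; Redisson's real implementation lives in RedissonRedLock / RedissonMultiLock and differs in detail:

import java.util.List;
import java.util.concurrent.TimeUnit;
import org.redisson.api.RLock;

// Hedged sketch of the RedLock acquisition check; not Redisson's actual code.
public class RedLockSketch {

    /**
     * Tries to take the lock on a majority of N independent Redis masters.
     * @param locks one RLock per independent Redis master
     * @param lockValidityMillis the lock's validity (lease) time in milliseconds
     */
    public static boolean tryRedLock(List<RLock> locks, long lockValidityMillis) {
        long start = System.currentTimeMillis();                  // step 1
        int acquired = 0;
        for (RLock lock : locks) {                                // step 2
            try {
                // Per-node wait time (50 ms) is kept far below the validity time.
                if (lock.tryLock(50, lockValidityMillis, TimeUnit.MILLISECONDS)) {
                    acquired++;
                }
            } catch (Exception ignored) {
                // A slow or unreachable node simply counts as "not acquired".
            }
        }
        long elapsed = System.currentTimeMillis() - start;        // step 3
        long remainingValidity = lockValidityMillis - elapsed;    // step 4
        boolean success = acquired >= locks.size() / 2 + 1 && remainingValidity > 0;
        if (!success) {                                           // step 5
            for (RLock lock : locks) {
                try {
                    lock.unlock();                                // release everything, even locks we may not hold
                } catch (Exception ignored) {
                    // unlock() throws if we don't hold this lock; that is fine here.
                }
            }
        }
        return success;
    }
}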

Use RedLock for distributed locking

Here we start 5 Redis instances and use RedLock to implement the distributed lock.

List of Redis instances used by distributed locks:

# Redis instances used by the distributed lock
192.168.2.11:6479
192.168.2.11:6579
192.168.2.11:6679
192.168.2.11:6779
192.168.2.11:6889

For convenience, the business data itself is stored on a single-node Redis instance (master-slave, sentinel, or cluster would work just as well):

# Redis instance for storing data
192.168.2.11:6379

Red lock code implementation:

// ============== Red lock begin: written inline here for demo convenience; you could move this into a manager class ==============
public static RLock create(String redisUrl, String lockKey) {
    Config config = new Config();
    // Password is hard-coded here for testing only; don't do this in real code
    config.useSingleServer().setAddress(redisUrl).setPassword("redis123");
    RedissonClient client = Redisson.create(config);
    return client.getLock(lockKey);
}

RedissonRedLock redissonRedLock = new RedissonRedLock(
        create("redis://192.168.2.11:6479", "lock1"),
        create("redis://192.168.2.11:6579", "lock2"),
        create("redis://192.168.2.11:6679", "lock3"),
        create("redis://192.168.2.11:6779", "lock4"),
        create("redis://192.168.2.11:6889", "lock5"));

@RequestMapping("/v4/pview")
public String incrPview() {
    Lock lock = new RedisRedLock(redissonRedLock);
    try {
        // Acquire the lock
        lock.lock();
        // Business logic: view count increases by 1
        int oldPview = Integer.valueOf((String) redissonClient.getBucket("pview", new StringCodec()).get());
        int newPview = oldPview + 1;
        redissonClient.getBucket("pview", new StringCodec()).set(String.valueOf(newPview));
        LOGGER.info("{} successfully acquired the lock, view count +1, current view count: {}", Thread.currentThread().getName(), newPview);
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        // Release the lock
        lock.unlock();
    }
    return port + " increase pview end!";
}

Stress test results:

It worked out perfectly!

This is how we use the Redis RedLock (red lock) to implement a distributed lock.

The Redisson implementation of the Redis red lock is in the class org.redisson.RedissonMultiLock.

That's all.

Full code for this article: github.com/xblzer/dist…

First published on my WeChat official account "Line 100 Li Er"; you are welcome to follow it.