
Redisson: Concurrent Testing of Distributed Locks

Preface

On a single machine, the language's built-in locks are enough to synchronize threads. In a distributed deployment, however, the threads that need to be synchronized may live on different nodes, so an application cluster needs a distributed lock: a given method may be executed by only one thread on one machine at a time. The purpose of a distributed lock is to guarantee consistency when the same method is called concurrently by clients in a distributed environment. Redis itself may also be deployed as a cluster.

In a cluster environment, however, a Redis-based distributed lock faces a problem: the lock lives on a single Redis node, and if a failover promotes a slave to master, the lock can be lost. I will not go into the details here; just remember that the Redisson library itself implements the RedLock algorithm, so it can keep locks safe when Redis and the service nodes are distributed.

Redisson is widely used in distributed applications. This article runs a concurrency stress test against Redisson's distributed lock; the results can serve as a reference for later development.

I. Introducing Redisson into the Project

<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson-spring-boot-starter</artifactId>
    <version>3.15.5</version>
</dependency>

YAML configuration

spring:
  application:
    name: springboot-redisson
  redis:
    redisson:
      singleServerConfig:
        password: 123456
        address: "redis://127.0.0.1:6379"
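With the starter on the classpath and this configuration, Spring Boot auto-configures a RedissonClient that can simply be injected, alongside the usual Spring Data Redis template. A minimal sketch of the beans used in the tests below (class and field names are illustrative, not from the original project):

import org.redisson.api.RedissonClient;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class NumTestController {

    // Auto-configured by redisson-spring-boot-starter
    private final RedissonClient redissonClient;
    // Standard Spring Data Redis template used for the counter
    private final StringRedisTemplate redisTemplate;

    public NumTestController(RedissonClient redissonClient, StringRedisTemplate redisTemplate) {
        this.redissonClient = redissonClient;
        this.redisTemplate = redisTemplate;
    }
}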

II. Simulating Concurrent Requests (Stress Test)

2.1 Prepare Data in Advance

Set the key NUM to 0 in Redis in advance.

Each successful request increments NUM by 1.
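One way to do the reset before each run (assuming the same StringRedisTemplate as above; the helper actually used in the original is not shown):

// Reset the shared counter before each stress-test run
redisTemplate.opsForValue().set("NUM", "0");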

2.2 No-lock test

In this version we do not use a distributed lock; the value is simply incremented by 1 as follows:

redisTemplate.opsForValue().increment("NUM",1);
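Wrapped in a simple endpoint for JMeter to hit, the no-lock version might look like the sketch below, added to the controller sketched earlier (the URL and method name are illustrative, not from the original):

@PostMapping("/num/increment")
public String incrementWithoutLock() {
    // No distributed lock: every request goes straight to Redis INCRBY
    redisTemplate.opsForValue().increment("NUM", 1);
    return "ok";
}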

Use JMeter to test

1) 10 threads request concurrently

The value is correct, so the setup works; now we ramp up the load.

2) 1000 threads request concurrently

There is no problem with 1000 concurrent requests either.

3) 10000 threads request concurrently

At a pressure of 10,000 (1W) concurrent requests within one second, errors started to appear.

4) 5000 threads request concurrently

Here the error rate is 9.58%.

5) 3000 threads request concurrently

Normal.

6) 4000 threads request concurrently

Normal.

7) 4500 threads request concurrently

The error rate was 3.16%.

From these runs we can see that, without a lock, errors start to appear once the number of concurrent requests within 1 s reaches roughly 4000-4500.

2.3 Test with a Distributed Lock

Here we run the same test using Redisson's distributed lock.

@PostMapping("/environment/find") public ResponseData<List<EnviromentReportVO>> enviList(@RequestBody RequestData<ReportDTO, UserVO> requestData){ RLock lock = redissonClient.getLock("lei"); lock.lock(); System.out.println(thread.currentThread ().getName()+" lock "); try { Thread.sleep(10); redisSdk.incrementInt("NUM",1); } catch (InterruptedException e) { e.printStackTrace(); }finally {system.out.println (thread.currentThread ().getName()+" unlock "); lock.unlock(); } return null; }Copy the code

The idea is that while one thread holds the lock, every other thread calling lock() blocks until the lock is released (or its lease expires).
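If blocking indefinitely is not acceptable, Redisson's RLock also offers tryLock with a wait time and a lease time. A minimal sketch of that variant (the timings are illustrative):

import java.util.concurrent.TimeUnit;
import org.redisson.api.RLock;

public void incrementWithTryLock() throws InterruptedException {
    RLock lock = redissonClient.getLock("lei");
    // Wait up to 5 s to acquire the lock, then hold it for at most 10 s.
    // (Calling plain lock() instead lets Redisson's watchdog keep renewing
    // the lease while the owning thread is still running.)
    if (lock.tryLock(5, 10, TimeUnit.SECONDS)) {
        try {
            redisTemplate.opsForValue().increment("NUM", 1);
        } finally {
            lock.unlock();
        }
    }
}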

1) 5000 threads request concurrently

With 5000 concurrent requests there are no errors, but with the distributed lock throughput drops by tens of times: the run takes 1 minute 21 seconds. The NUM value in Redis also ends up at exactly 5000.

2) 7000 threads request concurrently

7000 concurrent requests also complete without errors, taking nearly 2 minutes.

At 10,000 (1W) concurrent requests, Redis could not handle that much I/O and would drop connections on its own, so that level was not tested. Even so, these runs show that Redisson's lock does its job: with the lock in place, the counter ends up exactly right. For high-concurrency scenarios in real production, Redisson is worth considering; it handles the problem of multiple threads competing for the same resource in a distributed environment well.