The cache

To improve system performance, part of the data is placed in a cache, which speeds up access and reduces pressure on the database

What data is suitable for writing to the cache?

Data with low requirements for immediacy and consistency, a high volume of reads, and a low update frequency is a good candidate for caching

The flow chart

The simplest approach is to put the data into a Map (a local cache). For a monolithic application this works fine, but in a distributed system it causes many problems: each service instance has its own cache, so the data can become inconsistent. To solve these problems, we use a single shared cache

Use Redis for caching

Integrate Redis

pom

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

configuration

spring:
  redis:
    host: #
    port: 6379
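With the starter on the classpath, Spring Boot auto-configures a StringRedisTemplate that can be injected. A minimal cache-aside sketch follows; the ProductCacheService class, key names, and TTL are illustrative, and loadFromDatabase stands in for the real query:

import java.util.concurrent.TimeUnit;

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Service;

@Service
public class ProductCacheService {

    private final StringRedisTemplate redisTemplate;

    public ProductCacheService(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Cache-aside read: try Redis first, fall back to the database on a miss.
    public String getProductJson(String id) {
        String key = "product:" + id;
        String cached = redisTemplate.opsForValue().get(key);
        if (cached != null) {
            return cached;                          // cache hit
        }
        String fromDb = loadFromDatabase(id);       // cache miss: query the database
        if (fromDb != null) {
            // write the result back with a TTL so the entry does not live forever
            redisTemplate.opsForValue().set(key, fromDb, 30, TimeUnit.MINUTES);
        }
        return fromDb;
    }

    private String loadFromDatabase(String id) {
        return null;                                // placeholder for the real query
    }
}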

Cache breakdown, penetration, avalanche

Cache penetration

A query asks for data that does not exist. The cache can never hit, so the query always goes to the database, which also finds nothing; because we do not write the empty result back to the cache, every such request hits the database. The cache loses its effect, the database comes under heavy pressure, and the system may eventually collapse

Solution: Write null results to the cache as well

Question: What if this data comes into existence later?

Solution: Add a short expiration time!
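Continuing the sketch above, a hedged example of caching the empty result with a short expiration time (the "NULL" sentinel and the 5-minute TTL are illustrative choices):

// Cache penetration: cache the "not found" result too, but only briefly,
// so if the data appears later it becomes visible once the entry expires.
public String getWithNullCaching(String id) {
    String key = "product:" + id;
    String cached = redisTemplate.opsForValue().get(key);
    if (cached != null) {
        return "NULL".equals(cached) ? null : cached;   // "NULL" marks a cached miss
    }
    String fromDb = loadFromDatabase(id);
    if (fromDb == null) {
        // short TTL for the empty result
        redisTemplate.opsForValue().set(key, "NULL", 5, TimeUnit.MINUTES);
    } else {
        redisTemplate.opsForValue().set(key, fromDb, 30, TimeUnit.MINUTES);
    }
    return fromDb;
}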

Cache avalanche

When many cache entries are given the same expiration time, they all expire at the same moment, a flood of requests goes directly to the database, and the system crashes

Solution: When setting the expiration time, add a random value to the base expiration time
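Continuing the same sketch, the jitter can be added when writing the entry (the base TTL and the 0-9 minute offset are illustrative; ThreadLocalRandom comes from java.util.concurrent):

// Cache avalanche: add a random offset to the TTL so entries written at the
// same time do not all expire at the same moment.
public void putWithJitter(String key, String value) {
    long baseTtlMinutes = 30;                                          // illustrative base TTL
    long jitterMinutes = ThreadLocalRandom.current().nextLong(0, 10);  // 0-9 extra minutes
    redisTemplate.opsForValue().set(key, value, baseTtlMinutes + jitterMinutes, TimeUnit.MINUTES);
}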

Cache breakdown

For example: a new product goes on sale at 20:00 this evening, but its cache entry expired at 19:00. At 20:00 a large number of requests pour directly into the database and the system crashes

For a key with an expiration time, a large number of requests may access it at the same moment. If the cache entry expires just before those requests arrive, all of them go straight to the database, causing the system to crash

Solution: Use a lock. When a large number of requests arrive, only one of them is allowed to query the database; once it has the result, it writes the data into the cache and releases the lock, and the remaining requests read from the cache

Using a lock to solve cache breakdown

In a monolithic application

use

synchronized(this){
}

A local lock only locks within the current process (JVM instance)

This solves the problem within a single instance
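A minimal sketch of that local-lock approach, reusing the illustrative redisTemplate and loadFromDatabase from the earlier example; the second cache check inside the synchronized block is what keeps all but the first thread away from the database:

// Single-JVM sketch: double-check the cache inside the lock so that only
// the first thread to miss actually queries the database.
public String getWithLocalLock(String id) {
    String key = "product:" + id;
    String cached = redisTemplate.opsForValue().get(key);
    if (cached != null) {
        return cached;
    }
    synchronized (this) {
        // check again: another thread may have filled the cache while we waited for the lock
        cached = redisTemplate.opsForValue().get(key);
        if (cached != null) {
            return cached;
        }
        String fromDb = loadFromDatabase(id);
        if (fromDb != null) {
            redisTemplate.opsForValue().set(key, fromDb, 30, TimeUnit.MINUTES);
        }
        return fromDb;
    }
}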

This is problematic for distributed systems

If you have 10 service instances, the database can still be hit 10 times, once per instance; a local lock cannot guarantee that only one request in the whole cluster gets through

In a distributed system, a distributed lock is required to allow only one process to operate on the database

A distributed lock

The principle

All processes go to one shared place to grab the lock, and whichever one gets it performs the operation

You can use Redis: each thread that comes in tries to set a key-value pair, first checking whether anyone else has already set it. If no one has, it has grabbed the lock, performs its operation, and then deletes the key. If the key is already there, the lock has been taken by someone else

Redis provides SET key value NX, which only succeeds if the key does not already exist

Problem: What if the program that holds the lock hits an exception during its work and exits without releasing the lock, causing a deadlock?

Solution: Release the lock in a finally block

Question: What if the machine loses power before the finally block runs?

Solution: Set an automatic expiration time for the lock

Question: What if the machine crashes before you can set the expiration time?

The root cause is that acquiring the lock and setting its expiration time are not a single atomic operation

Solution: SET key value EX 300 NX acquires the lock and sets its expiration time in a single atomic command

Problem: If the business operation takes longer than the lock's expiration time, the lock has already expired by the time the work finishes, and deleting it at that point deletes a lock that now belongs to another process

Solution: When setting the lock, do not use an arbitrary value; set a UUID, and only delete the lock if its value matches your own UUID, so you never delete someone else's lock
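A sketch of acquiring such a lock from Java, assuming the same StringRedisTemplate as above; setIfAbsent with a Duration corresponds to SET key value EX 300 NX, and the lock name lock:product is illustrative:

import java.time.Duration;
import java.util.UUID;

// Try to acquire the lock: set a UUID value and the expiration time in one
// atomic command (the Java equivalent of SET lock:product <uuid> EX 300 NX).
public String tryAcquireLock() {
    String token = UUID.randomUUID().toString();         // our own value, checked before deleting
    Boolean acquired = redisTemplate.opsForValue()
            .setIfAbsent("lock:product", token, Duration.ofSeconds(300));
    return Boolean.TRUE.equals(acquired) ? token : null;  // null means someone else holds the lock
}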

Question: Checking whether the lock's value matches our own takes time. Suppose we read the value of the lock we set and, while it is on its way back to us, the lock expires and another process grabs the lock and sets its own value. Our comparison against the value we just read still succeeds, so we delete the lock, and once again we end up deleting someone else's lock

The root cause is that comparing the value and deleting the lock are not an atomic operation

Solution: Delete the lock with a Lua script, so the compare and the delete happen atomically

String script = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call('del', KEYS[1]) else return 0 end";
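A sketch of running that script through the same StringRedisTemplate; KEYS[1] is the lock key and ARGV[1] is the UUID we stored when acquiring the lock:

import java.util.Collections;

import org.springframework.data.redis.core.script.DefaultRedisScript;

// Release the lock atomically: the compare and the delete run inside Redis
// as one Lua call, so we can never delete a lock another process now owns.
public void releaseLock(String token) {
    String script =
            "if redis.call('get', KEYS[1]) == ARGV[1] " +
            "then return redis.call('del', KEYS[1]) else return 0 end";
    Long released = redisTemplate.execute(
            new DefaultRedisScript<>(script, Long.class),
            Collections.singletonList("lock:product"),   // KEYS[1]
            token);                                      // ARGV[1], the UUID set when locking
    // released == 1 means we deleted our own lock; 0 means it was not ours or already gone
}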

Use Redisson for distributed locking

Redisson solves all of our problems and lets us lock and release the locks gracefully

Official documentation github.com/redisson/re…

Dependency

<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson</artifactId>
    <version>3.14.1</version>
</dependency>

Spring Boot starter

<dependency>
    <groupId>org.redisson</groupId>
    <artifactId>redisson-spring-boot-starter</artifactId>
    <version>3.14.1</version>
</dependency>

configuration

Github.com/redisson/re…
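A hedged sketch of guarding the database query with a Redisson lock; the injected RedissonClient, the lock name, and the reuse of the earlier redisTemplate/loadFromDatabase helpers are assumptions. Calling lock() with no lease time lets Redisson's watchdog keep extending the TTL while the thread holds the lock, which sidesteps the expiration problems above:

import java.util.concurrent.TimeUnit;

import org.redisson.api.RLock;
import org.redisson.api.RedissonClient;

// Across the whole cluster, only one process runs the database query at a time.
public String getWithDistributedLock(String id) {
    RLock lock = redissonClient.getLock("lock:product:" + id);   // illustrative lock name
    lock.lock();                 // blocks until acquired; the watchdog keeps renewing the TTL
    try {
        String key = "product:" + id;
        String cached = redisTemplate.opsForValue().get(key);
        if (cached != null) {
            return cached;       // another instance filled the cache while we waited
        }
        String fromDb = loadFromDatabase(id);
        if (fromDb != null) {
            redisTemplate.opsForValue().set(key, fromDb, 30, TimeUnit.MINUTES);
        }
        return fromDb;
    } finally {
        lock.unlock();           // always release, even if the query throws
    }
}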

Cache data consistency

Double write mode

Data Update -> Update database -> Update cache

Question:

Two threads modify the data one after the other:

Thread 1 -> write database ----------------------> write cache
Thread 2 -----------------> write database -> write cache

Thread 1 writes the cache after Thread 2 does, so the cache ends up holding the older value: temporary dirty data

Solution: lock

Failure mode

Data update -> Update database -> Delete cache

Question: a concurrent read can fetch the old value from the database before the update completes, and then write that stale value back into the cache after the delete

Solution: lock

Spring Cache

Reading data and writing it to the cache is repetitive boilerplate; Spring Cache encapsulates this logic behind a few annotations

IO /spring-fram…
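A minimal sketch of the same read-through and invalidation logic expressed with Spring Cache annotations; the cache name "product" and the service methods are illustrative, and @EnableCaching must also be present on a configuration class:

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductService {

    // Read: check the "product" cache first; the method body (the database
    // query) only runs on a miss, and its return value is written to the cache.
    @Cacheable(value = "product", key = "#id")
    public String getProduct(String id) {
        return loadFromDatabase(id);
    }

    // Write, failure mode: update the database, then evict the cache entry.
    @CacheEvict(value = "product", key = "#id")
    public void updateProduct(String id, String newValue) {
        updateDatabase(id, newValue);
    }

    private String loadFromDatabase(String id) { return null; }    // placeholder
    private void updateDatabase(String id, String newValue) { }    // placeholder
}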