If you use Redis, you will sooner or later run into cache penetration, cache breakdown, cache avalanche, and hot Key problems. Some developers only ever touch Redis through the wrapper utilities already in their project, but as developers we will eventually hit the problems above in real systems. If you are not sure what these terms mean, let's walk through them now.

The most common way to use Redis is the cache-aside pattern: read from the cache first; on a miss, query the database, write the result back into the cache, and return it to the client.
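A minimal sketch of that logic in Java with the Jedis client. The `loadFromDb` supplier and the 60-second expiration are assumptions for illustration, not part of the original text.

```java
import java.util.function.Supplier;
import redis.clients.jedis.Jedis;

public class CacheAside {
    private final Jedis jedis = new Jedis("localhost", 6379); // assumed local Redis instance

    // Read-through logic: check the cache first, fall back to the database on a miss,
    // then write the result back into the cache for later requests.
    public String get(String key, Supplier<String> loadFromDb) {
        String value = jedis.get(key);
        if (value == null) {
            value = loadFromDb.get();          // hypothetical database query
            if (value != null) {
                jedis.setex(key, 60, value);   // assumed 60-second expiration
            }
        }
        return value;
    }
}
```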

Cache penetration

Cache penetration refers to requests for data that exists in neither the cache nor the database. The scenario: a client issues a query; the cache misses, so the database is queried; the database has nothing either, so an error is returned to the client. That logic looks perfectly fine, but there is a loophole: we accept whatever Key a client sends us. An attacker can fire off a large number of requests using Keys that do not exist anywhere in our system, so none of them hit Redis and every one of them lands on the database. When the database suddenly receives that many connections, it may not hold up and will go down. This is a hidden vulnerability that a hacker or malicious attacker can exploit to bring your system down.

In actual development, you need to take this into account. You can add a filtering layer at the system level that intercepts Keys the system considers illegal and returns an error to the client directly, without ever touching Redis or the database. How to implement this filter, and which Keys count as illegal, depends on your business logic; this section only outlines the approach.
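Here is a minimal sketch of such a filter, assuming (purely for illustration) that legal cache Keys follow a "user:&lt;numeric id&gt;" convention. Anything that fails the check is rejected before it can reach Redis or the database.

```java
import java.util.regex.Pattern;

public class KeyFilter {
    // Assumed convention for this sketch: legal cache Keys look like "user:<numeric id>".
    private static final Pattern LEGAL_KEY = Pattern.compile("^user:\\d{1,12}$");

    // Reject obviously illegal Keys before they ever reach Redis or the database.
    public boolean isLegal(String key) {
        return key != null && LEGAL_KEY.matcher(key).matches();
    }
}
```

In the request handler, a Key that fails this check gets an error response immediately, so bogus Keys never generate cache or database traffic.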

Cache breakdown

Cache breakdown refers to data that is not in the cache but does exist in the database. The scenario: the moment a Key expires, a large number of requests for that same Key flood in. None of them hit Redis, so they all go to the database, putting it under so much pressure that it may not cope and goes down.

There are two common solutions to this problem:

  1. Detect hot Keys automatically, then lengthen their expiration time, set them to never expire, or make them logically never expire. Details about how to manage the expiration time this way are explained in the hot Key section later.
  2. Add a mutex. When a request misses Redis and has to query the database, acquire a lock before rebuilding the cache; only the thread that wins the lock does the update. After acquiring the lock, check the cache again before hitting the database (double check). A sketch of this pattern follows the list.
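A minimal sketch of the mutex-plus-double-check approach in Java with Jedis. The lock key name, the TTL values, and the `loadFromDb` supplier are assumptions for illustration; in production you would typically use a more robust distributed lock (unique lock value, Lua-based release, and so on).

```java
import java.util.function.Supplier;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class BreakdownGuard {
    private final Jedis jedis = new Jedis("localhost", 6379); // assumed local Redis instance

    public String get(String key, Supplier<String> loadFromDb) {
        String value = jedis.get(key);
        if (value != null) {
            return value;                      // cache hit
        }
        String lockKey = "lock:" + key;        // assumed lock-key naming convention
        // Try to take a short-lived lock so only one caller rebuilds the cache.
        String locked = jedis.set(lockKey, "1", SetParams.setParams().nx().ex(10));
        if ("OK".equals(locked)) {
            try {
                value = jedis.get(key);        // double check: another thread may have filled it
                if (value == null) {
                    value = loadFromDb.get();  // only the lock holder queries the database
                    jedis.setex(key, 300, value);
                }
            } finally {
                jedis.del(lockKey);            // release the lock
            }
        } else {
            // Lost the race: back off briefly, then read the cache again.
            try { Thread.sleep(50); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            value = get(key, loadFromDb);
        }
        return value;
    }
}
```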

Cache avalanche

Cache avalanche happens when a large number of Keys expire at the same time. Requests for all of those Keys go straight to the database, which again leaves it overloaded or even brings it down.

The solution is to spread out the Keys' expiration times. You can add a random offset to an otherwise uniform expiration time, or use a more sophisticated scheme to stagger expirations.
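A minimal sketch of the random-offset idea; the 30-minute base TTL and 5-minute jitter window are assumed values for illustration.

```java
import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class JitteredCache {
    private static final int BASE_TTL_SECONDS = 30 * 60;   // assumed base expiration: 30 minutes
    private static final int MAX_JITTER_SECONDS = 5 * 60;  // assumed jitter window: up to 5 minutes

    private final Jedis jedis = new Jedis("localhost", 6379); // assumed local Redis instance

    // Write a value with a randomized TTL so Keys written together do not all expire together.
    public void put(String key, String value) {
        int jitter = ThreadLocalRandom.current().nextInt(MAX_JITTER_SECONDS + 1);
        jedis.setex(key, BASE_TTL_SECONDS + jitter, value);
    }
}
```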

Hot Key Issues

For hot data, we can give the hot Key a long expiration time, or make it logically never expire. What does that mean?

Suppose we set the hot data's expiration time to 24 hours. We can monitor the Key and, when it is detected to be about to expire, asynchronously start a thread that refreshes the data. From the outside the Key appears to never expire, which is the "logically never expire" effect.
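A minimal sketch of that refresh loop, assuming a 24-hour TTL, a one-minute refresh threshold, a 30-second polling interval, and a hypothetical `loadFromDb` supplier; all of these values are illustrative.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;
import redis.clients.jedis.Jedis;

public class HotKeyRefresher {
    private static final int TTL_SECONDS = 24 * 60 * 60;      // assumed 24-hour expiration
    private static final int REFRESH_THRESHOLD_SECONDS = 60;  // refresh when under a minute remains

    private final Jedis jedis = new Jedis("localhost", 6379); // assumed local Redis instance
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();

    // Periodically check how long the hot Key has left and rebuild it before it expires,
    // so callers never observe a miss ("logically never expires").
    public void watch(String key, Supplier<String> loadFromDb) {
        scheduler.scheduleAtFixedRate(() -> {
            long ttl = jedis.ttl(key); // remaining lifetime in seconds; -2 means the key is gone
            if (ttl < REFRESH_THRESHOLD_SECONDS) {
                jedis.setex(key, TTL_SECONDS, loadFromDb.get());
            }
        }, 0, 30, TimeUnit.SECONDS);
    }
}
```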

Another point is that we should have an automatic detection mechanism for hot data. That is, a monitoring platform tracks each Key's request count, expiration time, and database lookup count over a time window and uses them to decide whether the Key is hot. Once a Key crosses a certain threshold, it is promoted to a hot Key and handled with the hot Key logic described above.
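As a closing sketch, here is one simple in-process way to count accesses per Key and flag a Key as hot once it crosses a threshold within a monitoring window. The 10,000-hit threshold and the per-window reset are assumptions; a real system would more likely do this on a central monitoring platform, as described above.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

public class HotKeyDetector {
    private static final long HOT_THRESHOLD = 10_000; // assumed: promote after 10k hits per window

    private final Map<String, LongAdder> counters = new ConcurrentHashMap<>();

    // Call this on every cache lookup; returns true once the Key crosses the threshold
    // and should be handed over to the hot Key logic (long TTL / logical never-expire).
    public boolean recordAccess(String key) {
        LongAdder counter = counters.computeIfAbsent(key, k -> new LongAdder());
        counter.increment();
        return counter.sum() >= HOT_THRESHOLD;
    }

    // Reset counters at the end of each monitoring window (e.g. every minute).
    public void resetWindow() {
        counters.clear();
    }
}
```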