
Common Redis cache problems include cache avalanche, cache breakdown, and cache penetration. Once any of these occurs, requests pile up at the database layer; under high concurrency the database can be overwhelmed or even go down.

Cache avalanche

A cache avalanche occurs when a large number of application requests cannot be served by the Redis cache and are sent to the database layer instead, causing a surge of pressure on the database.

An avalanche is usually triggered in one of two ways.

First, a large number of keys in the cache expire at the same time, so many requests can no longer be served from the cache;

Second, the Redis cache instance goes down and cannot handle requests, so the requests fall through to the database layer.

There are two ways to deal with the problem of keys expiring at the same time.

First, avoid setting the same expiration time for a large amount of data. If the business requires a batch of data to expire around the same time, add a small random offset to each key's expiration time, for example a random 1-3 minutes. The keys then no longer expire at the same moment, while their expiration times stay close enough to meet the business requirement (see the sketch below).
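
A minimal sketch of this idea using the redis-py client; the key names, the 30-minute base TTL, and the 1-3 minute jitter range are illustrative assumptions, not values from the original post.

```python
import json
import random

import redis

r = redis.Redis(host="localhost", port=6379, db=0)

BASE_TTL = 30 * 60          # assumed base expiration: 30 minutes
JITTER_RANGE = (60, 180)    # random 1-3 minutes added on top


def cache_with_jitter(key: str, value: dict) -> None:
    """Cache a value with a slightly randomized TTL so that a batch of
    keys written together does not all expire at the same moment."""
    ttl = BASE_TTL + random.randint(*JITTER_RANGE)
    r.set(key, json.dumps(value), ex=ttl)


# Keys written in the same batch get expirations spread over ~2 minutes.
for i in range(1000):
    cache_with_jitter(f"product:{i}", {"id": i, "stock": 10})
```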

Second, a cache avalanche can also be mitigated through service degradation, which handles different kinds of data differently (a sketch follows this list):

  • For non-core data, temporarily stop reading from the cache and return a predefined default value, a null value, or an error message.

  • For core data, still query the cache first; on a cache miss, fall back to reading from the database.
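
A minimal sketch of this degradation switch, assuming a hypothetical `degraded` flag and `load_from_db` helper; how the flag is toggled (manually or by monitoring) is outside the scope of the original post.

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379, db=0)

degraded = True  # assumed global switch, flipped on when an avalanche is detected


def load_from_db(key: str):
    """Hypothetical database lookup; replace with a real query."""
    return {"key": key, "value": "from-db"}


def get_data(key: str, is_core: bool):
    # Non-core data: skip the cache entirely while degraded and
    # return a predefined placeholder instead of hitting the database.
    if degraded and not is_core:
        return {"key": key, "value": None, "degraded": True}

    # Core data: cache first, then fall back to the database on a miss.
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    return load_from_db(key)
```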

If the cache instance itself is down, you need to rely on a highly available Redis setup (for example, Sentinel or Cluster) to detect the failed node and switch traffic to another node.
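
As one example of this failover path, here is a minimal redis-py Sentinel sketch; the Sentinel addresses and the master group name `mymaster` are assumptions for illustration.

```python
from redis.sentinel import Sentinel

# Assumed Sentinel endpoints monitoring a master group named "mymaster".
sentinel = Sentinel([("sentinel-1", 26379), ("sentinel-2", 26379)],
                    socket_timeout=0.5)

# Sentinel resolves the current master; after a failover it transparently
# returns a connection to the newly promoted node.
master = sentinel.master_for("mymaster", socket_timeout=0.5)
replica = sentinel.slave_for("mymaster", socket_timeout=0.5)

master.set("greeting", "hello")
print(replica.get("greeting"))
```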

Cache breakdown

Cache breakdown occurs when requests for a very frequently accessed piece of hot data cannot be served by the cache, so the requests for that data hit the database layer, causing a surge in database pressure that also affects the other requests the database is handling.

Cache breakdown usually occurs when hotspot data expires.

The solution is straightforward: do not set an expiration time on hot keys, so that requests for hot data are always served from the cache rather than from the database.
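
A minimal sketch, assuming the application pre-loads hot keys and refreshes them explicitly when the underlying data changes instead of relying on expiration; the key name is illustrative.

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379, db=0)


def warm_hot_key(product: dict) -> None:
    """Write a hot key without an expiration time (no `ex`/`px` argument),
    so it never expires and reads never fall through to the database.
    The application updates or deletes it explicitly when the data changes."""
    r.set(f"hot:product:{product['id']}", json.dumps(product))


warm_hot_key({"id": 1001, "name": "flash-sale item", "stock": 500})
```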

Cache penetration

Cache penetration occurs when requests access data that exists neither in the cache nor in the database. Such requests always pass straight through to the database layer; if they arrive in large numbers, the cache becomes useless and the load lands directly on the database, affecting access to other data.

Cache penetration typically occurs in the following scenarios:

  • Malicious attacks that deliberately request data not present in the database;
  • Data that was deleted by mistake from both the cache and the database;
  • A newly launched service whose data has not yet been written to the cache or the database.

To solve the cache penetration problem, you can use the following methods (a combined sketch of the first two follows the list):

  • Cache null or default values. When the queried data exists in neither the cache nor the database, cache a null or default value so that subsequent requests for the non-existent data never reach the database layer. When the data is added later, remove the cached null value.
  • Use a Bloom filter. A Bloom filter can cheaply check whether a piece of data may exist: every time data is written, it is also marked in the filter. On a cache miss, the Bloom filter is consulted first, and if it reports that the data does not exist, the database is not queried at all. The Bloom filter itself can be implemented on top of Redis.
  • Intercept malicious requests at the front end. When malicious requests target non-existent data, validate the request parameters at the entry layer and filter out requests with unreasonable parameters, invalid parameter values, or non-existent fields, so they never reach the database and the cache penetration problem does not arise.
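
A minimal read-path sketch combining the first two ideas, null-value caching plus a Bloom filter check. It assumes the RedisBloom module (BF.ADD/BF.EXISTS) is loaded on the server; the key names, the 60-second null TTL, and the `load_user_from_db` helper are illustrative assumptions.

```python
import json

import redis

r = redis.Redis(host="localhost", port=6379, db=0)

NULL_TTL = 60            # assumed short TTL for cached "not found" markers
BLOOM_KEY = "bf:users"   # assumed Bloom filter key (requires RedisBloom)


def load_user_from_db(user_id: str):
    """Hypothetical database lookup; returns None when the user does not exist."""
    return None


def register_user(user_id: str, user: dict) -> None:
    # Every time data is written, mark its key in the Bloom filter.
    r.execute_command("BF.ADD", BLOOM_KEY, user_id)
    r.set(f"user:{user_id}", json.dumps(user))


def get_user(user_id: str):
    cache_key = f"user:{user_id}"

    cached = r.get(cache_key)
    if cached is not None:
        value = json.loads(cached)
        return None if value == {} else value   # {} is the cached null marker

    # Cache miss: ask the Bloom filter before touching the database.
    if not r.execute_command("BF.EXISTS", BLOOM_KEY, user_id):
        return None

    user = load_user_from_db(user_id)
    if user is None:
        # Cache a null marker briefly so repeated misses do not hit the database.
        r.set(cache_key, json.dumps({}), ex=NULL_TTL)
        return None

    r.set(cache_key, json.dumps(user))
    return user
```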