Cache penetration

Cache penetration occurs when a large number of requests repeatedly query data that does not exist in the database. Because the data does not exist, it can never be cached in Redis, so every one of these requests falls through to the database and puts it under heavy pressure.

Solutions

1. Cache null values: even when the queried data does not exist, we can still cache an empty value in Redis so that subsequent requests for it are answered from Redis rather than the database, taking the pressure off the database. Be sure to set a short expiration time on these null entries, no more than five minutes (first sketch after this list).

2. Set an accessible whitelist: use the bitmaps type to define a whitelist of ids that may be accessed, with each whitelisted id used as an offset into the bitmap; requests for ids outside the whitelist are rejected before they reach the database (second sketch below).

3. Use a Bloom filter to screen out ids that cannot exist before they reach the cache or the database (third sketch below).

4. Real-time monitoring: when the Redis hit ratio drops sharply, inspect what is being accessed and add the offending sources to an access blacklist.
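A minimal sketch of null-value caching, assuming Python with the redis-py client and a local Redis instance; the key scheme, TTLs, and the query_db helper are illustrative, not from the original:

```python
import redis

r = redis.Redis()          # assumes a local Redis instance

NULL_TTL = 300             # five minutes at most, per the advice above
DATA_TTL = 3600            # normal TTL for real data (illustrative)

def query_db(user_id):
    # Hypothetical database lookup; returns None when the row is missing.
    fake_table = {1: b"alice", 2: b"bob"}
    return fake_table.get(user_id)

def get_user(user_id):
    key = f"user:{user_id}"
    cached = r.get(key)
    if cached is not None:
        # Hit: either real data or a previously cached null placeholder;
        # either way the database is not queried again.
        return None if cached == b"" else cached

    value = query_db(user_id)
    if value is None:
        # Miss in the DB too: cache an empty value with a short TTL so
        # repeated lookups for this nonexistent id stop reaching the DB.
        r.set(key, b"", ex=NULL_TTL)
        return None

    r.set(key, value, ex=DATA_TTL)
    return value
```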
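The bitmap whitelist in the same assumed setting; the key name and ids are illustrative. Each permitted id is a bit offset in one Redis bitmap, and any id whose bit is 0 can be rejected before the cache or database is touched:

```python
import redis

r = redis.Redis()                  # assumes a local Redis instance

WHITELIST_KEY = "user:whitelist"   # illustrative key name

# Build the whitelist once: set the bit at offset <id> for each allowed id.
for allowed_id in (1, 2, 42):      # illustrative ids
    r.setbit(WHITELIST_KEY, allowed_id, 1)

def is_allowed(user_id):
    # GETBIT is 1 only for ids whose bit was set above; every other id
    # is rejected without reaching the cache or the database.
    return r.getbit(WHITELIST_KEY, user_id) == 1
```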
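Point 3 is terse in the original; one way to realize it is a Bloom filter built on the same Redis bitmap primitives. In this sketch the key name, bitmap size, and hash count are all assumptions: every valid id is hashed several times into the bitmap at startup, and a lookup whose bits are not all set is definitely absent, while false positives remain possible:

```python
import hashlib
import redis

r = redis.Redis()

BLOOM_KEY = "user:bloom"   # illustrative key name
BLOOM_SIZE = 2 ** 20       # bitmap size in bits (illustrative)
HASH_COUNT = 3             # number of hash functions (illustrative)

def _offsets(item):
    # Derive HASH_COUNT bit offsets from salted MD5 digests of the item.
    for salt in range(HASH_COUNT):
        digest = hashlib.md5(f"{salt}:{item}".encode()).hexdigest()
        yield int(digest, 16) % BLOOM_SIZE

def bloom_add(item):
    for off in _offsets(item):
        r.setbit(BLOOM_KEY, off, 1)

def bloom_might_contain(item):
    # Any clear bit means the item was never added; all bits set means
    # the item may exist (with a small false-positive chance).
    return all(r.getbit(BLOOM_KEY, off) for off in _offsets(item))
```

Requests that fail bloom_might_contain can be turned away without touching the cache or the database, which is exactly the penetration traffic we want to block.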

Cache breakdown

Cache breakdown happens when a hotspot key that is under heavy access expires in an instant: the flood of requests for that key then goes straight to the database, putting tremendous pressure on it and potentially bringing it down.

Solutions

1. Pre-load hotspot data: before an access peak, store the expected hotspot data in Redis in advance and extend its expiration time.

2. Real-time adjustment: monitor Redis and adjust the expiration time of hot data on the fly.

3. Use locks: when a key's cache entry expires, do not query the database right away. First acquire an exclusive lock (for example with setnx); only the request that wins the lock queries the database, writes the result back to the cache, and then releases the lock. The other requests, which failed to get the lock, retry and read the data from the cache once it is repopulated. The cost is lower throughput (see the sketch after this list).
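A sketch of the lock pattern in the same assumed Python/redis-py setting. The original mentions setex, but the operation that actually gives exclusivity is an atomic "set if absent with an expiry" (setnx plus a TTL), which redis-py exposes as set(..., nx=True, ex=...); key names and timings here are illustrative:

```python
import time
import redis

r = redis.Redis()

def query_db(item_id):
    # Hypothetical slow database lookup; assume the hot row exists.
    return f"value-for-{item_id}"

def get_hot_item(item_id):
    key = f"item:{item_id}"
    lock_key = f"lock:{key}"
    while True:
        cached = r.get(key)
        if cached is not None:
            return cached
        # Atomic SET ... NX EX: only one request wins the lock and rebuilds
        # the entry; the TTL keeps a crashed worker from holding it forever.
        if r.set(lock_key, "1", nx=True, ex=10):
            try:
                value = query_db(item_id)
                r.set(key, value, ex=3600)
                return value
            finally:
                r.delete(lock_key)   # release so waiting requests see the cache
        time.sleep(0.05)             # brief back-off before re-checking the cache
```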

Cache avalanche

Cache avalanche refers to a large number of cached keys expiring within a short window, so that the requests for all of that data hit the database at once, putting it under great pressure and potentially bringing it down.

Solutions

1. Use a multi-level cache architecture: Nginx cache + Redis cache + other caches, etc.

2. Use locks or queues: serialize rebuilds with locks or queues so that a burst of concurrent requests cannot all hit the database at once.

3. Update the cache proactively: store an expiration cue with each entry, and when a request finds the cue has passed, trigger another thread to refresh the actual key (first sketch below).

4. Evenly distribute expiration times: when setting a key's expiration, add a random offset to the base time so that different keys expire at different moments, which reduces simultaneous pressure on the database (second sketch below).
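One reading of point 3 is the logical-expiration pattern: each entry stores its own soft deadline, and a reader that finds the deadline passed returns the stale value while a background thread refreshes the key. A sketch under that reading, where the field names, TTL, and rebuild callback are assumptions:

```python
import json
import threading
import time
import redis

r = redis.Redis()

SOFT_TTL = 600   # logical lifetime in seconds; the key has no hard TTL here

def set_with_cue(key, value):
    # Store the value together with its own refresh deadline (the "cue").
    r.set(key, json.dumps({"value": value, "refresh_at": time.time() + SOFT_TTL}))

def get_with_cue(key, rebuild):
    raw = r.get(key)
    if raw is None:
        return None
    payload = json.loads(raw)
    if time.time() >= payload["refresh_at"]:
        # Deadline passed: serve the stale value right away and let a
        # background thread rebuild the entry, so no request blocks on
        # the database.
        threading.Thread(
            target=lambda: set_with_cue(key, rebuild(key)), daemon=True
        ).start()
    return payload["value"]
```

Because the key never hard-expires, readers always get an answer; the trade-off is briefly serving stale data, and in a real system the refresh should also take a lock so only one thread rebuilds.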
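Point 4 in code form, with an illustrative base TTL and jitter range: adding a per-key random offset spreads the expiration moments out.

```python
import random
import redis

r = redis.Redis()

BASE_TTL = 3600   # one-hour base expiration (illustrative)
JITTER = 600      # up to ten extra minutes, drawn per key

def cache_with_jitter(key, value):
    # Keys written in the same batch expire at scattered moments instead
    # of all at once, so no single instant dumps the whole batch onto
    # the database.
    r.set(key, value, ex=BASE_TTL + random.randint(0, JITTER))
```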