Cache avalanche

A cache avalanche occurs when data is not loaded into the cache, or a large portion of cached entries expire or are invalidated at the same time, so every request goes straight to the database. The resulting CPU and memory overload can leave the database unable to respond to requests, and the whole system crashes.

How a cache avalanche unfolds

The Redis cluster suffers a large-scale failure while a large volume of requests keeps arriving at the cache. Because the cache has failed, those requests are diverted to the database, such as MySQL, which cannot respond quickly enough or stops working altogether. Since so many applications depend on MySQL and Redis, the server cluster fails, and eventually the application (the website) crashes as well.

The solution

Highly available caches. Redis Sentinel and Redis Cluster both provide high availability and can prevent massive cache failures: even if individual nodes, or an entire machine room, go down, the cache can still serve requests.
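As a concrete illustration, a minimal Sentinel configuration might look like the fragment below. The addresses, master name, and timeout values are all illustrative, not taken from this article:

```conf
# Illustrative sentinel.conf (hypothetical addresses and values)
port 26379
sentinel monitor mymaster 127.0.0.1 6379 2      # 2 sentinels must agree the master is down
sentinel down-after-milliseconds mymaster 5000  # mark master down after 5s of no reply
sentinel failover-timeout mymaster 60000
sentinel parallel-syncs mymaster 1              # replicas resyncing to the new master at once
```

With three or more such Sentinel processes running, failure of the master triggers an automatic failover instead of a cache-wide outage.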

In addition to providing high availability, Redis Sentinel also performs ancillary tasks such as monitoring and notification.

Cache degradation. The goal is to keep the core service available. When traffic surges and the distributed cache becomes unavailable, the service degrades; the main approach is to fall back to a local cache. The system can automatically degrade pre-set key data, or operators can degrade manually through configuration switches. Manual degradation requires cooperation between operations and development.
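The fallback chain described above can be sketched as follows. This is a hypothetical illustration: `get_from_redis` stands in for a real Redis client call, and the key names and pre-set data are invented for the example:

```python
# Sketch of cache degradation: try the distributed cache first, then fall back
# to a process-local cache of pre-set hot data, and finally to a default value.

local_cache = {"home:recommend": ["hot-item-1", "hot-item-2"]}  # pre-set hot data
DEFAULTS = {"home:recommend": []}                                # last-resort defaults

def get_from_redis(key):
    # Stand-in for a real Redis call; here it always fails, simulating
    # a distributed cache that has become unavailable.
    raise ConnectionError("distributed cache unavailable")

def get_with_degradation(key):
    try:
        return get_from_redis(key)
    except ConnectionError:
        # Degrade: serve from the local cache, then from pre-set defaults.
        if key in local_cache:
            return local_cache[key]
        return DEFAULTS.get(key)

print(get_with_degradation("home:recommend"))  # served from the local cache
```

In a real system the local cache would be refreshed periodically while Redis is healthy, so the degraded data is only slightly stale.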

For example, a recommendation system has many personalized requirements. If the personalization service cannot be provided, popular (hot) data can be served through degradation instead, so the front page is not left largely blank.

For the service degradation plan, sort out in advance which core businesses must keep providing normal service and which can temporarily stop and be replaced by static pages; once the plan is set, evaluate the server's core indicators and the overall plan.

Redis backup and fast cache warm-up are also part of the recovery plan.

Cache penetration

Cache penetration means that a request misses in Redis and is forwarded to MySQL for the query, but the data does not exist in MySQL either, so nothing is written back to the cache. As a result, every request for that non-existent data is queried against the database.

The solution

If the database query returns empty, store a default value in the cache so that subsequent requests fetch the value from the cache instead of going back to the database. Give this default value an expiration time, or replace it once the database query returns real data. You can also define write rules for keys, so that malformed keys are filtered out before the query reaches the database.
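The default-value (null caching) idea can be sketched like this, with plain dictionaries standing in for Redis and MySQL; the sentinel string, TTL, and keys are all assumptions for the example:

```python
# Sketch of null caching: cache a placeholder for keys absent from the DB,
# with a TTL so real data can replace it later. An in-memory dict stands in
# for Redis, and another dict stands in for MySQL.
import time

NULL_SENTINEL = "__NULL__"   # default value cached for keys absent from the DB
NULL_TTL = 60                # seconds before the placeholder expires

cache = {}                   # key -> (value, expires_at)
db = {"user:1": {"name": "alice"}}

def cache_get(key):
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        return entry[0]
    return None              # missing or expired

def cache_set(key, value, ttl):
    cache[key] = (value, time.time() + ttl)

def query(key):
    cached = cache_get(key)
    if cached is not None:
        return None if cached == NULL_SENTINEL else cached
    row = db.get(key)        # one trip to the database
    cache_set(key, row if row is not None else NULL_SENTINEL, NULL_TTL)
    return row

print(query("user:999"))     # first miss hits the DB and caches the placeholder
```

After the first miss, repeated requests for `user:999` are answered by the cached placeholder and never reach the database until the TTL expires.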

Cache concurrency

Problems caused by multiple Redis clients setting the same key at the same time.

The solution

Redis itself is single-threaded and uses epoll/kqueue for I/O multiplexing, so each command executes atomically and I/O concurrency scales well. Another solution is to serialize the Redis set operations: place them in a queue and execute them one by one.
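Serializing the set operations through a queue can be sketched with a single worker thread; the dictionary standing in for Redis and the key names are illustrative assumptions:

```python
# Sketch: funnel concurrent writes through a queue consumed by one worker,
# so operations on the shared key execute one at a time, in enqueue order.
import queue
import threading

store = {}                   # stands in for Redis
ops = queue.Queue()

def worker():
    # Single consumer: set operations are applied serially.
    while True:
        item = ops.get()
        if item is None:     # shutdown signal
            break
        key, value = item
        store[key] = value
        ops.task_done()

t = threading.Thread(target=worker)
t.start()

# Many clients could enqueue concurrently; the worker applies writes serially.
for v in range(5):
    ops.put(("counter", v))
ops.put(None)
t.join()
print(store["counter"])      # the last enqueued write wins
```

The queue turns a race between writers into a deterministic ordering: whichever operation was enqueued last is the final value.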

Cache preheating

Cache preheating means loading the relevant data into the cache right after the system goes online, so that client requests find the data in the cache. This avoids the problem of requests querying the database before the data has been written to the cache.

The solution

Provide a cache-refresh function on the front end and trigger it manually when going online; or, when the data volume is small, load the data into the cache automatically at system startup.
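The startup-loading variant can be sketched as below; the data, key prefixes, and `preheat` helper are hypothetical, and dictionaries stand in for Redis and the database:

```python
# Sketch of cache preheating at startup: copy hot rows from the database
# into the cache before serving traffic (small data sets only).
cache = {}
db = {"product:1": "laptop", "product:2": "phone", "order:9": "pending"}

def preheat(prefixes):
    """Load keys matching the given hot prefixes into the cache."""
    for key, value in db.items():
        if any(key.startswith(p) for p in prefixes):
            cache[key] = value

preheat(["product:"])        # warm only the hot key space at startup
print(sorted(cache))         # ['product:1', 'product:2']
```

Warming only a chosen key space keeps startup fast while still ensuring the hottest reads never fall through to the database.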

How do you load a large data set into Redis, say billions of keys? Pipe mode, available since Redis 2.6, is designed for this kind of mass insertion.
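Pipe mode works by feeding raw Redis protocol (RESP) into `redis-cli --pipe`. A minimal generator of that payload might look like the sketch below; the key/value pairs are invented, and for simplicity it assumes ASCII strings (RESP lengths are byte counts):

```python
# Sketch: generate RESP commands for bulk loading via
#   python gen.py > data.txt && cat data.txt | redis-cli --pipe
# Each SET command is emitted as a RESP array of bulk strings.

def resp_command(*args):
    # *<argc>\r\n followed by $<len>\r\n<arg>\r\n for each argument.
    out = f"*{len(args)}\r\n"
    for a in args:
        out += f"${len(a)}\r\n{a}\r\n"   # assumes ASCII: len == byte length
    return out

def bulk_load_payload(pairs):
    return "".join(resp_command("SET", k, v) for k, v in pairs)

payload = bulk_load_payload([("key:1", "a"), ("key:2", "b")])
print(payload.replace("\r\n", "\\r\\n"))  # show the protocol with visible CRLFs
```

Because the payload is plain protocol text, `redis-cli --pipe` can stream it to the server far faster than issuing one command per round trip.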
