This is day 27 of my participation in the August Gengwen Challenge.

【Redis series】 Redis Learning 12: Cache Penetration, Cache Breakdown, and Cache Avalanche

Using a Redis cache is very pleasant: it greatly improves the performance and efficiency of our applications, especially for data queries, since we no longer have to query the persistent database directly and can instead read data from memory

But everything has two sides. While the cache is convenient to use, we must also face the problems it brings, chiefly data consistency. It is a matter of weighing pros and cons, so let’s go on

When a Redis cache is used together with a persistent database, several high-availability issues can arise, such as:

  • Cache penetration
  • Cache breakdown
  • Cache avalanche

If we can solve the problems above, we can solve a large part of the server high-availability problem

What is cache penetration

Let’s take a look at the underlying principles first, and then walk through each problem together

Cache penetration means a user queries for data that cannot be found in Redis, i.e., the cache does not contain it, so the request goes directly to the persistent MySQL database, which also has no such record, and the query fails

When a large number of users issue such queries, none of them hit the cache, so every request falls through to the persistent MySQL database, and all the pressure lands on MySQL. This is cache penetration

There are generally two solutions:

  • Use a Bloom filter
  • Cache empty objects

Use a Bloom filter

A Bloom filter is a probabilistic data structure that hashes every key that could legitimately be queried into a bit array. Requests are checked at the control layer first: if the filter says a key definitely does not exist, the request is discarded, avoiding pressure on the underlying storage system

The Bloom filter is deployed in front of Redis: it intercepts requests for nonexistent keys, reducing the load on Redis and, in turn, on the persistence layer
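As an illustration, here is a minimal in-process Bloom filter sketch in Python. The `BloomFilter` class, the bit-array size, and the hash count are all illustrative assumptions; in production you would more likely use the RedisBloom module or an established library rather than this toy:

```python
import hashlib

class BloomFilter:
    """A toy Bloom filter: k salted hashes over an m-bit array."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = [False] * size_bits

    def _positions(self, key):
        # Derive k bit positions by salting the key before hashing
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = True

    def might_contain(self, key):
        # False -> key is definitely absent (reject the request);
        # True  -> key is *possibly* present (allow it through)
        return all(self.bits[pos] for pos in self._positions(key))
```

At startup you would `add` every valid key (e.g. all product IDs); at request time, a `might_contain` returning `False` lets you reject the lookup before it ever reaches Redis or MySQL. The trade-off is that `True` can be a false positive, so a small fraction of invalid requests may still pass through.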

Cache empty objects

With this approach, when we fail to find the desired data in the persistent database, we return an empty object, cache it, and set an expiration time on it

Subsequent requests for the same key then hit the cached empty object directly, which protects the persistence layer and reduces the pressure on it
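The pattern can be sketched in a few lines. This is an in-process illustration only: the `cache` dict stands in for Redis, and `db_query`, `get_with_null_caching`, and the TTL values are assumed names invented for this example:

```python
import time

cache = {}                  # key -> (value, expires_at); stands in for Redis
NULL_SENTINEL = "__NULL__"  # marker for "the database has no such row"
NULL_TTL = 60               # short TTL so real data can appear later

def db_query(key):
    # Stand-in for the persistent database
    fake_db = {"user:1": {"name": "Alice"}}
    return fake_db.get(key)

def get_with_null_caching(key):
    entry = cache.get(key)
    if entry is not None and entry[1] > time.time():
        value = entry[0]
        # A cached sentinel means "known to be absent": answer without the DB
        return None if value == NULL_SENTINEL else value
    value = db_query(key)
    if value is None:
        # Cache the miss so repeated lookups stop hitting the database
        cache[key] = (NULL_SENTINEL, time.time() + NULL_TTL)
        return None
    cache[key] = (value, time.time() + 3600)
    return value
```

Note the deliberately short TTL on the sentinel: it bounds how long a newly inserted row stays invisible, which is exactly the consistency trade-off discussed below.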

Caching empty objects appears to solve the problem, but over time the number of keys mapping to empty objects keeps growing, which causes the following two problems:

  • A large number of empty objects are cached, so a large number of keys occupy memory and resources, and memory pressure rises sharply
  • Once an empty object expires, requests for that key fall through to the persistent database again, and during the null-cache window the cache and database can disagree, which affects businesses that require data consistency

What is cache breakdown

Cache breakdown occurs when a single hot key receives a huge volume of concurrent requests at the moment it expires

When a hot key expires, the large number of requests for that key all miss the cache at once; they all go to the persistent database to query the data and write it back to the cache. At that instant the database is overloaded, as if punctured at a single point

The difference between breakdown and penetration can be understood like this:

Breakdown: one key is extremely hot and heavily accessed; at the moment that key expires, all requests hit the database at once, punching a hole through the cache, hence "breakdown"

Penetration: the data being accessed does not exist at all, so a large number of requests for nonexistent data pass straight through the cache to the database

Cache breakdown solution

  • Do not set an expiration time on hotspot data; if the hot key never expires, the expiry moment that triggers the problem never happens

  • Add a distributed lock (mutex) to ensure that only one service can rebuild each key at a time. Services that fail to obtain the lock cannot query the database for that key and must wait for the lock

    This shifts heavy pressure onto the lock: requests contend for the lock before reaching the database, so the lock effectively adds a protective layer in front of Redis and the database

What is cache avalanche

A cache avalanche happens when a large set of cached keys expires at the same moment, or when Redis itself goes down

Such as:

For example, during a hot promotional event, a batch of items is cached with the same fixed expiration time. In Redis, a large number of keys then expire at the same instant, and the cache becomes invalid all at once.

All the pending requests then land on the persistent database, which is very painful. Under this peak, all the pressure falls on the persistent database, which can cause it to go down

In the scenario above, the concentrated expiration of keys is actually not the most painful case; the persistent database may still withstand those periodic waves of pressure. What really hurts is Redis going down: when one node hangs, a whole slice of the cache disappears, and it is very likely that the persistent databases behind it will all go down with it. That is devastating

Cache avalanche solutions:

  • Make Redis highly available

Set up a Redis cluster with multi-site active-active deployment. Since we worry that Redis may fail, we prepare more Redis instances in a master/replica arrangement, or even deploy active-active across data centers
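As a rough sketch, a minimal master/replica setup with Sentinel-based failover might look like the fragment below (the IP address, the master name `mymaster`, and the timeout values are all illustrative assumptions to adapt to your environment):

```conf
# redis.conf on each replica: follow the master
replicaof 192.168.1.10 6379

# sentinel.conf on each sentinel node: monitor the master named "mymaster";
# the final "2" is the quorum of sentinels that must agree it is down
sentinel monitor mymaster 192.168.1.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```

With at least three sentinels deployed, a failed master is detected and a replica is automatically promoted, so losing one node no longer takes the whole cache layer down.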

  • Rate limiting and degradation

When the cache fails, we limit the rate of data access, for example by locking or queueing requests, or we shut down some unimportant services so that resources and performance can be fully devoted to our main services

  • Do data preheating

Data preheating means that before we officially go live, we access the needed data in advance, writing the data that will soon be heavily accessed into the cache ahead of time

This way, hot keys can be manually triggered and loaded before the burst of concurrent access, and different expiration times are set for them. The main point is to spread out cache expiry and avoid a large number of keys expiring at once
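One common way to spread out expiry is to add random jitter to each key's TTL during preheating. The sketch below is a toy in-process version (the `preheat` and `jittered_ttl` helpers, and the TTL values, are assumptions for illustration; against real Redis you would pass the jittered TTL to `SET key value EX ttl`):

```python
import random

BASE_TTL = 3600  # one-hour base expiry, an illustrative value

def jittered_ttl(base=BASE_TTL, spread=600):
    # Add up to `spread` seconds of random jitter so keys warmed
    # at the same time do not all expire at the same instant.
    return base + random.randint(0, spread)

def preheat(cache, hot_items):
    # `cache` stands in for Redis: key -> (value, ttl_seconds)
    for key, value in hot_items.items():
        cache[key] = (value, jittered_ttl())
```

Because each key gets a slightly different lifetime, expirations are smeared across a ten-minute window instead of landing on the database as one spike.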



All right, that’s it for this time

Technology is open, and so should our mindset be. Embrace change, live in the sun, and strive to move forward.

I am Nezha. See you next time ~