More and more testing scenarios involve interacting with Redis. I won't rehash Redis basics here, since plenty of material is a search away; I have simply organized a few notes of my own.

Today I’m just going to recap a couple of things I did with Redis during testing. The cases mentioned are abstracted and used as auxiliary illustrations for reference only.

1. Key update exceptions

Point to consider: should the original key be deleted first and then re-saved, or should it be overwritten directly?

For example, service A used to query the database every 8 hours and update the cache. The requirement was later changed: whenever the database changes, an MQ message is sent to service A, which then fetches the full data set from the database and writes it to the cache.

If the old keys are deleted first and a large number of requests arrive before the re-save completes, none of them will hit the cache. The suggested approach: after pulling data from the database, first compare it with the existing keys in Redis, delete only the keys that are now redundant, and overwrite the rest in place.
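The compare-then-overwrite refresh can be sketched as follows. This is a minimal illustration only: a plain dict stands in for the Redis client, and `refresh_cache`/`fresh_rows` are hypothetical names, not anything from the original services.

```python
# Sketch of the compare-then-update refresh (illustrative; a plain dict
# stands in for Redis, and the function name is hypothetical).
def refresh_cache(cache: dict, fresh_rows: dict) -> None:
    """Overwrite changed entries in place and delete only stale keys,
    so readers never see a window where the whole cache is empty."""
    stale_keys = set(cache) - set(fresh_rows)   # in cache but no longer in DB
    for key in stale_keys:
        del cache[key]                          # Redis equivalent: DEL key
    for key, value in fresh_rows.items():
        cache[key] = value                      # Redis equivalent: SET key value

cache = {"a": 1, "b": 2, "c": 3}
refresh_cache(cache, {"a": 1, "b": 20})
print(cache)  # {'a': 1, 'b': 20} -- 'c' removed, 'b' overwritten, no full flush
```

The point is that at no instant is the cache empty: each key is either overwritten atomically or removed because it genuinely no longer exists in the database.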

2. Key deletion and key loss

Point to consider: the impact on other services when a key is deleted or lost.

For example, service A synchronizes a class of data to Redis and sends a message to service B. When B receives the message, it reads the Redis data, finds the corresponding key's record in MongoDB, and updates it. If the key cannot be found in Redis, the corresponding MongoDB data is deleted.

The problem: if data is lost from Redis (the key is gone even though it was never intentionally deleted), the synchronization cannot tell this apart from an intentional deletion, and the corresponding data in MongoDB is wrongly removed.

The suggested solution: when deletion is intended, do not delete the Redis key outright; instead set its value to an empty list []. When MongoDB sees the empty value, it deletes the corresponding data. Conversely, if a Redis key is simply missing (lost), MongoDB should not delete anything; instead, a real-time interface should be called to query the data and update it.
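The empty-list marker scheme can be sketched like this. Everything here is illustrative: dicts stand in for Redis and MongoDB, and `sync_to_mongo`/`fetch_realtime` are hypothetical names for the synchronization step and the real-time query interface.

```python
# Sketch: distinguish "explicitly deleted" (empty-list marker) from
# "key lost" (missing key). Dicts stand in for Redis and MongoDB.
DELETED = []  # empty-list marker written in place of an outright DEL

def sync_to_mongo(redis_store: dict, key: str, mongo: dict, fetch_realtime):
    if key not in redis_store:
        # Key lost: do NOT delete; repair via a real-time query instead.
        mongo[key] = fetch_realtime(key)
    elif redis_store[key] == DELETED:
        mongo.pop(key, None)           # explicit marker -> safe to delete
    else:
        mongo[key] = redis_store[key]  # normal update

redis_store = {"k1": DELETED, "k2": "v2"}
mongo = {"k1": "old", "k3": "old"}
sync_to_mongo(redis_store, "k1", mongo, lambda k: "fresh")  # intentional delete
sync_to_mongo(redis_store, "k3", mongo, lambda k: "fresh")  # lost key: repaired
print(mongo)  # {'k3': 'fresh'}
```

With this split, a lost key can never silently destroy downstream data; at worst it triggers one extra real-time query.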

3. Improper key expiration policies cause memory leaks

Let’s review Redis’s TTL command:

  • If the key does not exist, it returns -2
  • If the key exists but has no expiration set, it returns -1
  • Otherwise, it returns the key’s remaining time to live in seconds
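The three-valued TTL convention can be modeled in a few lines. This is a toy simulation of the return convention only, not the redis client; the `store`/`expires` dicts and the `ttl` helper are illustrative stand-ins.

```python
import time

# Minimal simulation of the TTL return convention (not the real client).
def ttl(store: dict, expires: dict, key: str) -> int:
    if key not in store:
        return -2                                    # key does not exist
    if key not in expires:
        return -1                                    # exists, no expiration set
    return max(0, int(expires[key] - time.time()))   # remaining seconds

store = {"session": "abc", "config": "v1"}
expires = {"session": time.time() + 60}
print(ttl(store, expires, "missing"))  # -2
print(ttl(store, expires, "config"))   # -1
print(ttl(store, expires, "session"))  # ~60
```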

In general, most businesses set an expiration time on their Redis keys. Next, let’s look at how expired keys are cleaned up.

Periodic deletion

Redis periodically and proactively removes a batch of expired keys: it samples a random batch of keys with TTLs and deletes the expired ones.

Disadvantage: many keys may have expired without yet being cleaned up.

Lazy deletion

When a key is accessed, Redis checks whether it has expired; if so, it deletes the key and returns nothing.

Disadvantage: expired keys that are never queried can never be lazily deleted, so a large number of expired keys may accumulate in memory and eventually exhaust it.

In practice, lazy deletion is used together with periodic deletion.
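The two strategies combine as in the toy model below. This is a didactic sketch only, not how Redis is actually implemented internally; the class and method names are invented for illustration.

```python
import random
import time

# Toy model of Redis's two expiration-cleanup strategies (illustrative only).
class ExpiringStore:
    def __init__(self):
        self.data, self.expires = {}, {}

    def set(self, key, value, ttl=None):
        self.data[key] = value
        if ttl is not None:
            self.expires[key] = time.time() + ttl

    def get(self, key):
        # Lazy deletion: expiry is checked only when the key is touched.
        if key in self.expires and self.expires[key] <= time.time():
            del self.data[key]
            del self.expires[key]
            return None
        return self.data.get(key)

    def periodic_sweep(self, sample_size=20):
        # Periodic deletion: check a random sample of keys that have TTLs.
        now = time.time()
        sample = random.sample(list(self.expires),
                               min(sample_size, len(self.expires)))
        for key in sample:
            if self.expires[key] <= now:
                del self.data[key]
                del self.expires[key]

store = ExpiringStore()
store.set("hot", 1, ttl=0)    # already expired
store.set("warm", 2, ttl=60)
print(store.get("hot"))       # None -- removed lazily on access
store.periodic_sweep()        # sweeps a sample in the background
print(store.get("warm"))      # 2
```

Neither mechanism alone is sufficient, which is exactly why the eviction policies below exist as the last line of defense.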

Memory eviction policies

However, if periodic deletion misses many expired keys and those keys are never queried (so lazy deletion never triggers either), expired keys can still pile up in memory until it is exhausted.

This is where the memory eviction policies come in. There are several:

  • noeviction: when memory cannot accommodate new writes, write operations return an error. This is rarely used.
  • allkeys-lru: when memory cannot accommodate new writes, remove the least recently used key from the whole key space. This is the most commonly used policy.
  • allkeys-random: when memory cannot accommodate new writes, remove a random key from the whole key space.
  • volatile-lru: when memory cannot accommodate new writes, remove the least recently used key from among the keys with an expiration set.
  • volatile-random: when memory cannot accommodate new writes, remove a random key from among the keys with an expiration set.
  • volatile-ttl: when memory cannot accommodate new writes, remove the key with the nearest expiration time from among the keys with an expiration set.

The names make these policies fairly self-explanatory.
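The LRU idea behind allkeys-lru can be shown with a tiny cache. This is a textbook sketch using `OrderedDict`, not Redis's actual implementation (Redis uses an approximate sampling LRU); the class name is invented.

```python
from collections import OrderedDict

# Toy allkeys-lru: evict the least recently used key when over capacity.
class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)         # mark as recently used
        return self.items[key]

    def set(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")           # touch 'a' so it becomes the most recent
cache.set("c", 3)        # over capacity: evicts 'b', the least recent
print(list(cache.items)) # ['a', 'c']
```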

4. Handling Redis query failures

Much of the time, Redis is just a cache. If Redis fails or returns no data, is there a fallback that fetches the data in real time? This needs to be considered.

5. Cache penetration, breakdown, and avalanche

Penetration

Cache penetration: a user queries data that is not in Redis (a cache miss) and also not in the persistence-layer database, so the query fails. In a high-concurrency scenario where many users query such nonexistent data, every request misses the cache and the pressure lands directly on the database.

Solutions: use a Bloom filter, or cache an empty object for the key (with an expiration time).
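The empty-object fix can be sketched as follows. Dicts stand in for Redis and the database, and the function names are hypothetical; in real Redis the empty marker would be written with a short TTL (e.g. via SETEX) so it does not linger.

```python
# Sketch of the "cache the empty result" fix for penetration.
EMPTY = object()  # sentinel standing in for the cached "empty object"

def get_with_null_caching(cache: dict, db_get, key):
    if key in cache:
        value = cache[key]
        return None if value is EMPTY else value  # miss answered by cache
    value = db_get(key)
    # Cache even a miss (real Redis: write "" with a short expiration).
    cache[key] = EMPTY if value is None else value
    return value

db = {"user:1": "alice"}
calls = {"db": 0}
def db_get(key):
    calls["db"] += 1
    return db.get(key)

cache = {}
get_with_null_caching(cache, db_get, "user:999")  # miss: hits the DB once
get_with_null_caching(cache, db_get, "user:999")  # served from cached empty
print(calls["db"])  # 1
```

Repeated queries for the nonexistent key now stop at the cache, so the database is hit only once per expiration window.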

Breakdown

Cache breakdown happens when a very hot key, constantly carrying high concurrency, expires: at the moment it expires, the ongoing flood of requests pierces the cache and hits the persistence-layer database directly, like a hole cut in a defensive wall.

Solutions: mark hotspot data as never expiring, or add a mutex so that only one request rebuilds the cache.
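The mutex idea can be sketched with threads. This is a single-process illustration using a local `threading.Lock`; in a distributed setup the lock would instead be a Redis lock of the kind discussed below, and all names here are invented.

```python
import threading

# Sketch of the mutex fix: only one thread rebuilds an expired hot key;
# the rest wait, then read the refreshed value (illustrative names).
cache = {}
rebuild_lock = threading.Lock()
db_loads = {"count": 0}

def load_from_db(key):
    db_loads["count"] += 1            # count how often the DB is really hit
    return f"value-for-{key}"

def get_hot(key):
    value = cache.get(key)
    if value is not None:
        return value
    with rebuild_lock:                # distributed version: a Redis NX lock
        value = cache.get(key)        # double-check after acquiring the lock
        if value is None:
            value = load_from_db(key)
            cache[key] = value
    return value

threads = [threading.Thread(target=get_hot, args=("hot",)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(db_loads["count"])  # 1 -- one DB load despite 10 concurrent readers
```

The double-check inside the lock is essential: without it, every waiting thread would reload from the database in turn once the lock was released.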

Avalanche

At some point in time, a large set of cached keys expires simultaneously, or Redis itself goes down.

Solution:

  • Beforehand: keep Redis highly available (master-slave + Sentinel, Redis Cluster) to avoid a total crash.
  • During: use a local cache (EhCache) plus Hystrix rate limiting and degradation to keep MySQL from being overwhelmed.
  • Afterwards: rely on Redis persistence; after a restart, data is automatically loaded from disk and the cache recovers quickly.

There was an article about these three: www.cnblogs.com/pingguo-sof…

6. Redis deadlock

When using Redis locks, be careful: improper use can leave a lock that is never released, resulting in deadlock.

Two kinds of locks are commonly used at present:

SET Key UniqId EX Seconds NX

This is safe only in single-instance scenarios. If you instead issue setnx + expire + del as separate steps, a crash between steps still leaves a deadlock. And if you use SET Key UnixTimestamp Seconds NX with a timestamp as the value, the same timestamp may be generated by multiple clients under high concurrency, which is why a unique ID is preferable.
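The single-instance pattern can be sketched like this. A dict with expiry timestamps stands in for Redis, and the function names are invented; note that in real Redis the check-then-delete release must be a Lua script to be atomic.

```python
import time
import uuid

# Toy single-instance lock: SET key uniqId NX EX ttl, plus a
# check-before-delete release (a dict stands in for Redis).
store = {}  # key -> (uniqId, expire_at)

def acquire(key, ttl):
    now = time.time()
    if key in store and store[key][1] > now:
        return None                       # someone else holds a live lock
    token = str(uuid.uuid4())             # unique ID, not a timestamp
    store[key] = (token, now + ttl)       # Redis: SET key token NX EX ttl
    return token

def release(key, token):
    # Only delete the lock if we still own it; otherwise we could free a
    # lock that expired and was re-acquired by another client.
    entry = store.get(key)
    if entry and entry[0] == token:       # real Redis: atomic via Lua script
        del store[key]
        return True
    return False

t1 = acquire("job", ttl=10)
print(acquire("job", ttl=10))        # None -- the lock is held
print(release("job", "wrong-token")) # False -- not the owner
print(release("job", t1))            # True
```

The unique token is what prevents the deadlock-by-misuse scenario: a client can never delete a lock it no longer owns, and the TTL guarantees the lock is freed even if the owner crashes.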

Distributed Redis lock: Redlock

This method is more secure than the original single-node method.

  • Safety: no two clients can hold the lock at the same time.
  • Deadlock freedom: the lock is eventually released even if a client crashes or a network partition occurs (usually via a timeout mechanism).
  • Fault tolerance: the lock can be acquired and released correctly as long as a majority of Redis nodes are available.

7. Redis persistence

When Redis data needs to remain valid for a long time, consider whether to enable RDB and AOF persistence. The two are generally used together, but persistence can affect performance.

So far, I have rarely encountered businesses that enable persistence. For example, one recommendation system keeps long-lived data in Redis but, in order to respond quickly without a performance hit, does not persist it; instead it relies on fallback schemes such as HBase as the safety net.

8. Cache and database double-write consistency

In general, if your system does not strictly require cache/database consistency and can tolerate occasional inconsistency between the two, it is best not to implement a strict consistency scheme.

If such a scheme is implemented, read and write requests are serialized into an in-memory queue, which guarantees that no inconsistency can occur.

After serialization, however, system throughput drops significantly, requiring several times more machines than normal to handle the same load.

A more moderate approach is to update the database first and then delete the cache. Temporary inconsistency can occur, but it is rare. In practice, the database write and the cache operation are often wrapped in a transaction: the update counts as successful only if both steps succeed, and if either step fails the transaction is rolled back.
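The "update DB first, then delete cache" pattern looks like this in miniature. Dicts stand in for the database and Redis, and the function names are invented for illustration.

```python
# Sketch of "update the database first, then delete the cache".
db = {"item:1": "old"}
cache = {"item:1": "old"}

def update(key, value):
    db[key] = value           # 1) write the database first
    cache.pop(key, None)      # 2) then invalidate (delete) the cached copy

def read(key):
    if key in cache:
        return cache[key]
    value = db[key]           # cache miss: reload from the DB
    cache[key] = value        # repopulate for subsequent readers
    return value

update("item:1", "new")
print(read("item:1"))  # 'new' -- the next read repopulates the cache
```

Deleting rather than rewriting the cache on update is the key design choice: it narrows the inconsistency window to the gap between the DB write and the delete, and the next reader always repopulates from the freshly written database value.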

Double-write consistency really deserves an article of its own; if you are interested, there is plenty to find online, and I may organize a write-up later.