This is day 5 of my participation in the November Gwen Challenge. Event details: The Last Gwen Challenge 2021.

We know that Redis caches data in memory, but memory is limited. As the amount of cached data grows, the limited cache space will inevitably fill up, so an eviction policy is needed to decide which data to delete.

Redis cache eviction policies

Redis's eviction policies fall into two categories, depending on whether they evict data at all:

  • noeviction, the only policy that never evicts data.
  • Seven other policies that do evict data.

The seven eviction policies can be further divided into two categories by the scope of the candidate data set:

  • Evict only among key-value pairs with an expiration time set: volatile-random, volatile-ttl, volatile-lru, and volatile-lfu (added in Redis 4.0).

  • Evict among all key-value pairs: allkeys-lru, allkeys-random, and allkeys-lfu (added in Redis 4.0).

Prior to Redis 3.0, the default was volatile-lru; from Redis 3.0 onward (3.0 included), the default eviction policy is noeviction.
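For reference, here is a minimal redis.conf sketch showing where the policy is chosen; the memory limit and the policy picked here are illustrative values, not recommendations:

```conf
# Cap the memory Redis may use for data (illustrative value)
maxmemory 100mb

# Pick one of the eight eviction policies:
#   noeviction | volatile-ttl | volatile-random | volatile-lru | volatile-lfu
#   allkeys-random | allkeys-lru | allkeys-lfu
maxmemory-policy allkeys-lru
```

The same settings can also be changed at runtime with CONFIG SET maxmemory and CONFIG SET maxmemory-policy.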

The noeviction policy

noeviction means no data is evicted: when the cache is full and a new write request arrives, Redis no longer serves the write and returns an error instead.

Eviction policies based on expiration time

volatile-random, volatile-ttl, volatile-lru, and volatile-lfu all operate on key-value pairs that have an expiration time set. When Redis memory usage reaches the maxmemory threshold, Redis evicts key-value pairs from this range according to the chosen policy.

  • volatile-ttl evicts key-value pairs with an expiration time set; the sooner a key expires, the earlier it is evicted.
  • volatile-random, as its name suggests, randomly evicts key-value pairs with an expiration time set.
  • volatile-lru uses the LRU algorithm to select among key-value pairs with an expiration time set.
  • volatile-lfu uses the LFU algorithm to select among key-value pairs with an expiration time set.
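The selection rules for two of these policies can be sketched in Python. The key names, timestamps, and helper functions here are hypothetical simplifications for illustration, not Redis's actual implementation:

```python
import random

# Toy model: each key maps to its expiration timestamp (None = no TTL set).
keys = {
    "session:1": 100,   # expires at t=100
    "session:2": 50,    # expires at t=50 (earliest)
    "config:a": None,   # no expiration -> not a volatile-* candidate
}

def volatile_ttl_victim(keys):
    """volatile-ttl: pick the key with the earliest expiration time."""
    candidates = {k: exp for k, exp in keys.items() if exp is not None}
    return min(candidates, key=candidates.get)

def volatile_random_victim(keys):
    """volatile-random: pick a random key among those with an expiration."""
    candidates = [k for k, exp in keys.items() if exp is not None]
    return random.choice(candidates)

print(volatile_ttl_victim(keys))  # session:2
```

Note that "config:a" can never be chosen by either function, which is exactly the difference between the volatile-* and allkeys-* families.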

Eviction policies over all data

allkeys-lru, allkeys-random, and allkeys-lfu expand the eviction scope to all key-value pairs, regardless of whether an expiration time is set on them. Their rules for selecting data to evict are:

  • allkeys-random: randomly selects and deletes data from all key-value pairs;

  • allkeys-lru: uses the LRU algorithm to select among all data;

  • allkeys-lfu: uses the LFU algorithm to select among all data.

About the LRU algorithm

LRU stands for Least Recently Used. A classic LRU implementation maintains a linked list ordered by recency of use: every access moves an element to the head of the list, so the more data there is, the more time is spent moving elements, which would inevitably affect the Redis main thread. Redis therefore uses a simplified, approximate version of LRU.
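To make that bookkeeping concrete, here is a minimal sketch of the classic (non-Redis) LRU algorithm in Python, using OrderedDict as the recency-ordered list; the class name and capacity are illustrative:

```python
from collections import OrderedDict

class LRUCache:
    """Classic LRU: every access moves the entry to the most-recently-used
    end; eviction removes the entry at the least-recently-used end.
    This is the textbook algorithm, not Redis's approximation."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")           # "a" becomes most recently used
cache.put("c", 3)        # evicts "b", the least recently used
print(list(cache.data))  # ['a', 'c']
```

Each get and put has to reorder the structure, which is the per-access cost Redis avoids with its sampling-based approximation.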

The core idea of the LRU strategy is that if a piece of data has just been accessed, it is likely hot and will be accessed again.

Following this idea, the LRU policy in Redis adds an lru field to the redisObject structure that backs each piece of data, recording the timestamp of the data's most recent access. When evicting, the LRU policy removes the data with the smallest lru value in the candidate set, that is, the data that has gone the longest without being accessed.

Therefore, in business scenarios where hot data is accessed frequently, the LRU policy can effectively retain the most recently accessed data, and because that data is likely to be accessed again, the application gets faster responses.

When a key-value pair is accessed, Redis records the timestamp of that access. When Redis decides to evict data, it randomly selects N keys (N is set by maxmemory-samples) as a candidate set and evicts the one with the smallest timestamp. On the next eviction, only keys whose timestamp is smaller than the smallest timestamp in the previous candidate set may enter the new candidate set; once the candidate set again holds maxmemory-samples entries, the key with the smallest timestamp is evicted.
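The sampling step above can be sketched in Python. The lru_clock map and the function are hypothetical simplifications; the real Redis implementation also maintains a persistent eviction pool across calls, which this sketch omits:

```python
import random

def approximate_lru_evict(lru_clock, samples=5):
    """Redis-style approximate LRU: instead of keeping a full recency
    ordering, randomly sample `samples` keys (cf. maxmemory-samples)
    and evict the sampled key with the oldest access timestamp.
    `lru_clock` maps key -> last access time."""
    candidates = random.sample(list(lru_clock), min(samples, len(lru_clock)))
    victim = min(candidates, key=lru_clock.get)
    del lru_clock[victim]
    return victim

lru_clock = {"user:1": 10, "user:2": 3, "user:3": 7}  # key -> last access time
print(approximate_lru_evict(lru_clock, samples=3))    # user:2 (oldest access)
```

With a small sample size the chosen victim is only probably the globally least-recently-used key, which is the trade-off Redis accepts to avoid linked-list maintenance on the main thread.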

You can set the number of sampled candidates with the command: CONFIG SET maxmemory-samples N

Usage advice

Based on the characteristics of each policy, you can choose different policies to evict data in different scenarios.

  • If the cached data has no clear hot/cold distinction, that is, access frequencies differ little, use the allkeys-random policy to evict data randomly;
  • If the data has a clear hot/cold distinction, the allkeys-lru or volatile-lru policy is recommended, so that the most recently accessed data stays in the cache;
  • If the business has data that must stay cached, that is, data that should never be evicted, do not set an expiration time on it and use the volatile-lru policy. That data will then never be evicted, while the remaining data is evicted according to the LRU rules.