The Redis cache stores data in memory so that services do not have to read from the database, which improves system response speed. Memory access is much faster than disk access, but memory is also far more expensive, so it is impossible to keep all the data in memory. When the cache space fills up, Redis faces the problem of cache eviction.

What are the eviction policies of the Redis cache?

There are eight in total, as shown below

Let’s introduce each of them

  • noeviction: Returns an error on writes when memory is full, without evicting anything
  • volatile-ttl: Among key-value pairs with an expiration time set, evicts based on expiration time; the sooner a pair expires, the sooner it is deleted
  • volatile-random: As its name suggests, randomly evicts among key-value pairs with an expiration time set
  • volatile-lru: Uses the LRU algorithm to select victims among key-value pairs with an expiration time set
  • volatile-lfu: Uses the LFU algorithm to select victims among key-value pairs with an expiration time set
  • allkeys-random: Randomly evicts among all key-value pairs
  • allkeys-lru: Uses the LRU algorithm across all data
  • allkeys-lfu: Uses the LFU algorithm across all data
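For reference, the eviction policy is selected with the maxmemory-policy configuration parameter, whose valid values are the eight names above. A minimal redis.conf sketch (the 100mb limit is purely illustrative):

```conf
# Cap memory usage; eviction kicks in once this limit is reached
maxmemory 100mb
# Evict across all keys using the (approximate) LRU algorithm
maxmemory-policy allkeys-lru
```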

LRU

LRU stands for Least Recently Used: it evicts data according to the principle that the least recently used data is removed first, while recently used data is kept in the cache.

LRU maintains all data in a linked list. The head and tail of the list are the MRU end and the LRU end, representing the most recently used and least recently used data respectively, as shown in the figure below

The LRU algorithm is easy to understand. It assumes that data just accessed is likely to be accessed again, so that data is moved to the MRU end; data that has not been accessed for a long time drifts toward the LRU end and is deleted when the cache is full

However, because LRU needs a linked list to manage all the data, it brings extra memory overhead, and every access has to move a node within the list. This is time-consuming and would reduce Redis's caching performance.
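The classic linked-list LRU described above can be sketched in a few lines of Python. This is a generic illustration of the algorithm, not Redis internals; `OrderedDict` plays the role of the doubly linked list, with one end acting as MRU and the other as LRU:

```python
from collections import OrderedDict

class LRUCache:
    """Classic linked-list LRU: most recently used at one end, least at the other."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # accessed -> move to the MRU end
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # cache full -> evict from the LRU end

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # "a" becomes the most recently used
cache.put("c", 3)      # over capacity: evicts "b", the least recently used
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

Note that every `get` and `put` moves a node, which is exactly the per-access bookkeeping cost the paragraph above refers to.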

Therefore, Redis uses a simplified, approximate LRU. Redis records a timestamp of each key's last access in an lru field. When it needs to evict, it randomly selects N keys as a candidate set, compares their lru fields, and evicts the one with the smallest value.

Redis provides a configuration parameter, maxmemory-samples, which is the number N of keys Redis samples. For example, we can ask Redis to select 100 keys as the candidate set by executing the following command:

CONFIG SET maxmemory-samples 100

When it is time to evict data again, Redis reuses the candidate set created during the first eviction. The selection criterion is that a key can only enter the set if its lru field is smaller than the smallest lru value currently in the set. When the number of keys in the candidate set reaches maxmemory-samples, Redis evicts the one with the smallest lru field value.
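The sampling step described above can be sketched as follows. This is a hypothetical illustration (the names `store` and `evict_one` are mine, and real Redis keeps a persistent eviction pool with more bookkeeping): each key carries a last-access timestamp, and on eviction we sample N keys at random and remove the one with the oldest timestamp:

```python
import random

def evict_one(store, samples=5):
    """Approximate LRU: sample `samples` keys at random and evict the one
    whose last-access timestamp (the lru field) is smallest, i.e. oldest.
    `store` maps key -> (value, last_access_time)."""
    candidates = random.sample(list(store), min(samples, len(store)))
    victim = min(candidates, key=lambda k: store[k][1])
    del store[victim]
    return victim

# Ten keys with last-access timestamps 0..9; "key:0" is the oldest
store = {f"key:{i}": (i, i) for i in range(10)}
evicted = evict_one(store, samples=10)  # sampling every key degenerates to exact LRU
print(evicted)  # key:0
```

With a small sample size the result is only probably the globally least recently used key; raising maxmemory-samples makes the approximation closer to exact LRU at a higher CPU cost per eviction.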

The LFU algorithm will be introduced in a later article. Let's stop here for today.