
📣 Redis expiration policy

Let's first look at how Redis deletes expired keys.

🛒 Timed Deletion

💋 overview

When an expiration time is set on a key, a timer is created for that key at the same time. The moment the timer fires, the key is deleted immediately.
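
To make the idea concrete, here is a toy sketch in plain Python (an illustration of the strategy only, not how Redis is implemented): every key that gets an expiration time also gets its own timer, which is exactly where the timer overhead discussed below comes from.

```python
import threading

class TimedExpiryStore:
    """Toy key-value store that deletes keys with a per-key timer (timed deletion)."""

    def __init__(self):
        self.data = {}

    def set(self, key, value, ttl_seconds=None):
        self.data[key] = value
        if ttl_seconds is not None:
            # One timer object per expiring key: this is the overhead
            # the "shortcomings" below are talking about.
            timer = threading.Timer(ttl_seconds, self.data.pop, args=(key, None))
            timer.daemon = True
            timer.start()

    def get(self, key):
        return self.data.get(key)

store = TimedExpiryStore()
store.set("session:1", "alice", ttl_seconds=5)  # removed ~5s later, as soon as it expires
```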

🏠 advantages

Ensure that memory is freed as soon as possible

🚲 shortcomings

  1. If there are a large number of expired keys, deleting them consumes a lot of CPU time. When CPU time is tight, it ends up being spent on deletions instead of on more important work.
  2. Creating timers also takes time. If a timer is created for every key that has an expiration time, a huge number of timers are generated and performance suffers badly.

🗓 summary

Trading processor performance for storage (time for space)

⛏ Lazy Deletion

A key is not deleted the moment it expires. Instead, each time the key is read from the database, Redis checks whether it has expired; if it has, the key is deleted and nil is returned, as if the key never existed.
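
A toy sketch of the same idea in plain Python (again an illustration of the strategy, not Redis source code): the expiry check lives entirely in the read path.

```python
import time

class LazyExpiryStore:
    """Toy key-value store that only checks expiration when a key is read (lazy deletion)."""

    def __init__(self):
        self.data = {}       # key -> value
        self.expire_at = {}  # key -> absolute expiry timestamp

    def set(self, key, value, ttl_seconds=None):
        self.data[key] = value
        if ttl_seconds is not None:
            self.expire_at[key] = time.time() + ttl_seconds

    def get(self, key):
        deadline = self.expire_at.get(key)
        if deadline is not None and time.time() >= deadline:
            # Expired: delete it now and behave as if it never existed.
            self.data.pop(key, None)
            self.expire_at.pop(key, None)
            return None
        return self.data.get(key)
```

A key that is never read again after it expires stays in memory forever, which is exactly the shortcoming described below.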

🎰 advantages

Deletion happens only when a key is actually read from the database, and only that key is deleted, so CPU usage stays low; by the time the check runs, the deletion is unavoidable anyway.

🥼 shortcomings

If a large number of keys are never read again after they expire, they stay in memory indefinitely, which amounts to a memory leak.

πŸ† summary

Trading storage for processor performance (space for time)

🛠 Periodic Deletion

Every so often (by default roughly every 100 ms), Redis runs a deletion cycle over its databases. If no key in the current database has an expiration time set, it moves straight on to the next database. Otherwise it randomly samples keys that do have an expiration time (20 keys per database by default), checks whether they are expired, and deletes the ones that are. It then checks whether the cycle has used up its allotted time; if so, it exits immediately. Why random sampling? If Redis holds hundreds of thousands of keys with expiration times, scanning every one of them on each cycle would put a huge load on the CPU.
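
A rough Python sketch of one such deletion cycle (a simplification of Redis's actual C implementation; the sample size and time budget below are the knobs that bound CPU usage, and the 25 ms budget is a hypothetical stand-in):

```python
import random
import time

SAMPLE_SIZE = 20  # keys checked per database per run (Redis defaults to 20)

def periodic_expire_cycle(databases, time_budget_seconds=0.025):
    """One run of periodic deletion over a list of {key: expiry_timestamp} dicts."""
    start = time.time()
    for expire_at in databases:
        if not expire_at:
            continue  # no keys with an expiration time here, move on to the next database
        sample = random.sample(list(expire_at), min(SAMPLE_SIZE, len(expire_at)))
        now = time.time()
        for key in sample:
            if expire_at[key] <= now:
                del expire_at[key]  # expired: delete it
        if time.time() - start >= time_budget_seconds:
            return  # time budget exhausted; whatever is left waits for the next run
```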

🎲 advantages

  1. Limiting the duration and frequency of deletion runs keeps the CPU cost of deletion down, which addresses the downside of timed deletion.
  2. Expired keys are still removed on a regular schedule, which addresses the downside of lazy deletion.

📓 shortcomings

  1. In terms of memory friendliness, it is not as good as timed deletion, since expired keys can linger until they happen to be sampled.
  2. In terms of CPU time friendliness, it is not as good as lazy deletion.

🎃 Redis memory eviction mechanism

📠 briefly

The expiration policy alone is not enough. Suppose your Redis instance can only hold 1 GB of data, but you keep writing until 2 GB of keys have accumulated; if those keys are never read again, lazy deletion never gets a chance to run, and Redis uses more and more memory. This is what the memory eviction mechanism is for.

Redis lets you cap how much memory it uses:

    # maxmemory <bytes>
    # Cap Redis memory usage at 100 MB
    maxmemory 100mb
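
The same cap can also be set or checked on a running instance; for example, with the redis-py client (assuming a local Redis on the default port):

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Equivalent to `CONFIG SET maxmemory 100mb` in redis-cli
r.config_set("maxmemory", "100mb")
print(r.config_get("maxmemory"))  # e.g. {'maxmemory': '104857600'}
```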

Redis ships with a default setting here, and this is the default memory eviction policy:

    # maxmemory-policy noeviction

maxmemory-policy has 8 possible values. When memory runs out:

  1. noeviction: evicts nothing; write commands simply return an error. (This is the default.)
  2. allkeys-lru: evicts the least recently used keys across all keys. This is the commonly recommended option (see the example after this list).
  3. volatile-lru: evicts the least recently used keys among keys that have an expiration time set.
  4. allkeys-random: evicts random keys across all keys.
  5. volatile-random: evicts random keys among keys that have an expiration time set.
  6. volatile-ttl: evicts the keys closest to expiring among keys that have an expiration time set.
  7. allkeys-lfu: evicts the least frequently used keys across all keys.
  8. volatile-lfu: evicts the least frequently used keys among keys that have an expiration time set.
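
For example, switching a running instance to allkeys-lru and checking whether evictions are actually happening, again using redis-py against an assumed local instance:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# Switch to the commonly recommended policy for a pure cache workload.
r.config_set("maxmemory-policy", "allkeys-lru")

# `evicted_keys` in INFO stats counts keys removed by the eviction policy;
# if it keeps growing, maxmemory is being hit and eviction is doing work.
print(r.config_get("maxmemory-policy"))
print(r.info("stats").get("evicted_keys", 0))
```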

✨ The difference between LRU and LFU

🎁 LRU

LRU (Least Recently Used) is a page-replacement algorithm: the entries that have gone unused for the longest time are evicted first.

For example, suppose the access sequence is 1, 1, 1, 2, 2, 3 and the cache (capacity 2) already holds (1, 2). When 3 arrives, 1 is the key that was used least recently, so it is evicted and the cache becomes (2, 3).
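
The same walk-through expressed as a tiny LRU cache in Python (capacity 2, using an OrderedDict to track recency; this only illustrates the eviction order, it is not Redis's approximate LRU):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # oldest (least recently used) first

    def access(self, key):
        if key in self.entries:
            self.entries.move_to_end(key)          # mark as most recently used
        else:
            if len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)   # evict the least recently used
            self.entries[key] = True

cache = LRUCache(capacity=2)
for key in [1, 1, 1, 2, 2, 3]:
    cache.access(key)
print(list(cache.entries))  # [2, 3] -- key 1 was evicted when 3 arrived
```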

🎨 LFU

LFU (Least Frequently Used) is a page-replacement algorithm: the entries accessed the fewest times over a period are evicted first.

With the same access sequence 1, 1, 1, 2, 2, 3, the cache holds (1(3), 2(2)), where the number in parentheses is the access count. When 3 arrives, 2 has the lower count, so it is evicted and the cache becomes (1(3), 3(1)).
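
And the LFU version of the same sequence, evicting whichever key has the lowest access count (again a toy illustration that ignores tie-breaking and counter decay):

```python
class LFUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.counts = {}  # key -> access count

    def access(self, key):
        if key in self.counts:
            self.counts[key] += 1
            return
        if len(self.counts) >= self.capacity:
            victim = min(self.counts, key=self.counts.get)  # lowest access count
            del self.counts[victim]
        self.counts[key] = 1

cache = LFUCache(capacity=2)
for key in [1, 1, 1, 2, 2, 3]:
    cache.access(key)
print(cache.counts)  # {1: 3, 3: 1} -- key 2 (count 2) was evicted, unlike LRU
```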

🎪 How does Redis restore data after a restart?

On startup, Redis looks for an AOF file first and only loads the RDB file if no AOF file exists. The AOF is preferred because it has better data integrity: with the default everysec fsync setting, at most about one second of data is lost.
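
Whether an AOF file exists at all depends on whether AOF is enabled (it is off by default). A quick way to check from a client, using redis-py against an assumed local instance:

```python
import redis

r = redis.Redis(host="localhost", port=6379)

# If appendonly is "yes", Redis will prefer the AOF over the RDB file on startup.
print(r.config_get("appendonly"))   # e.g. {'appendonly': 'no'}
print(r.config_get("appendfsync"))  # e.g. {'appendfsync': 'everysec'}
```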

🧧 summary

  1. AOF files are larger and slower to recover from; RDB files are smaller and recover quickly.
  2. RDB is a point-in-time snapshot of the data, while AOF is an append-only log of the write commands that were executed.