
Preface

In the last article, we introduced Redis's five basic object types (string, hash, list, set, and ordered set) as well as several advanced data structures: HyperLogLog, Geo, and Bit. This richness of types is a big advantage of Redis over Memcached and similar stores. Beyond knowing how each object type is used and behaves, a further understanding of the Redis memory model is a great help in practice, for example when estimating and optimizing memory usage, or when analyzing problems such as Redis blocking and excessive memory consumption.

This article introduces the Redis memory model (using version 4.0 as the example): how Redis memory is used and how to inspect it, how different object types are encoded in memory, the memory allocator (jemalloc), the simple dynamic string (SDS), redisObject, and so on. Several applications of the memory model are then discussed.

Memory usage

After the client connects to the server through redis-cli (unless otherwise specified, redis-cli is the client used throughout), memory usage can be checked with the info memory command. The info command displays a lot of information about the Redis server, including basic server information, CPU, memory, persistence, and client connections; memory is an argument that restricts the output to memory-related information:

127.0.0.1:6379> info memory
# Memory
used_memory:673056
used_memory_human:657.28K
used_memory_rss:2416640
used_memory_rss_human:2.30M
used_memory_peak:692400
used_memory_peak_human:676.17K
used_memory_peak_perc:97.21%
used_memory_overhead:659810
used_memory_startup:609488
used_memory_dataset:13246
used_memory_dataset_perc:20.84%
total_system_memory:820248576
total_system_memory_human:782.25M
used_memory_lua:25600
used_memory_lua_human:25.00K
maxmemory:3221225472
maxmemory_human:3.00G
maxmemory_policy:noeviction
mem_fragmentation_ratio:3.59
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0

Key fields:

  • used_memory: the amount of memory (in bytes) allocated by the Redis allocator, including memory that has been swapped out (virtual memory); used_memory_human is the same value in a human-readable form;

  • used_memory_rss: the amount of physical memory (in bytes) occupied by the Redis process, consistent with the values reported by top and ps. It includes the memory allocated by the allocator plus the memory the process itself needs to run, but it does not include swapped-out (virtual) memory.

used_memory views memory from the Redis perspective, while used_memory_rss views it from the operating system perspective. They differ because, on one hand, memory fragmentation and the memory needed to run the Redis process can make the former smaller than the latter; on the other hand, swapped-out virtual memory can make the former larger than the latter.
  • mem_fragmentation_ratio: the memory fragmentation ratio, i.e. the ratio of used_memory_rss to used_memory. In practice the amount of data in Redis is usually large, so the memory needed to run the process itself is far smaller than the data; the ratio of used_memory_rss to used_memory therefore mostly reflects memory fragmentation.

mem_fragmentation_ratio > 1: the larger the value, the more serious the fragmentation and the more memory is wasted. mem_fragmentation_ratio < 1: this happens when the operating system has swapped part of Redis's memory to disk. Since the medium behind virtual memory is the disk, which is far slower than RAM, this situation should be investigated promptly; if memory is insufficient, act on it, for example by adding Redis nodes, adding memory to the Redis server, or optimizing the application. A value around 1.03 is healthy (for jemalloc). The mem_fragmentation_ratio in the example above is high only because little data has been written to Redis yet, so the memory used by the process itself makes used_memory_rss much larger than used_memory. (A small sketch of how to read the ratio follows this list.)

  • mem_allocator: the memory allocator used by Redis, chosen at compile time; it can be libc, jemalloc, or tcmalloc. The example above uses the default, jemalloc-4.0.3.
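As referenced above, here is a minimal C sketch of how the ratio can be read; the 1.5 cut-off for "high fragmentation" is an illustrative assumption, not a value taken from this article:

#include <stdio.h>

/* Classify mem_fragmentation_ratio = used_memory_rss / used_memory.
 * Around 1.03 is healthy for jemalloc; < 1 means swap is being used;
 * the 1.5 threshold for "high" is only an illustrative rule of thumb. */
const char *fragmentation_state(double used_memory_rss, double used_memory) {
    double ratio = used_memory_rss / used_memory;
    if (ratio < 1.0) return "swap in use: the OS has paged Redis memory to disk";
    if (ratio > 1.5) return "high fragmentation (or very little data, as in the example above)";
    return "healthy";
}

int main(void) {
    /* Values taken from the info memory output above. */
    printf("%s\n", fragmentation_state(2416640, 673056));
    return 0;
}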

Memory partitioning

As an in-memory database, Redis mainly stores its data (key-value pairs) in memory, but other parts of Redis take up memory as well. Redis memory can be divided into the following parts: data memory, the process's own memory, buffer memory, and memory fragmentation.

  • Data memory: as a database, the data is of course the largest part of the memory, and it is counted in used_memory. All data in Redis is stored as key-value pairs; every key-value pair creates two objects, a key object and a value object. Key objects are strings, so avoid overly long keys. Value objects come in five types (string, list, hash, set, zset), each occupying a different amount of memory, and inside Redis each type may have two or more internal encoding implementations.

When storing data, Redis wraps it in internal structures such as redisObject and SDS; the detailed internal encodings of each type are explained below.

  • Process memory: the Redis process itself needs memory to run (code, constant pool, and so on), but this is only a few megabytes and is negligible compared to the data in most production environments. This memory is not allocated by jemalloc, so it is not counted in used_memory.

In addition, child processes created by Redis also consume memory, for example the children forked for AOF rewrites or RDB saves. This memory does not belong to the Redis process itself and is counted in neither used_memory nor used_memory_rss.

  • Buffer memory: buffer memory includes the client buffers, the replication backlog buffer, and the AOF buffer. It is allocated by jemalloc and is therefore counted in used_memory.

The client buffers are the input and output buffers of all connections to the Redis server. The input buffer cannot be configured and is capped at 1 GB; the output buffer can be controlled with client-output-buffer-limit. Client buffers fall into three classes: normal clients, slave clients, and subscription (pub/sub) clients.

(1) Normal clients (client-output-buffer-limit normal 0 0 0): the output buffer is unlimited by default, but if many slow clients connect, this consumption cannot be ignored, because slow consumption leaves data piling up in the output buffer; maxclients can be set to limit the number of connections.

(2) Slave clients (client-output-buffer-limit slave 256mb 64mb 60): the master establishes a separate connection to each slave for command replication. When the network latency between master and slaves is high, or many slaves hang off one master, this memory consumption becomes significant; it is recommended to attach no more than two slaves to a master.

(3) Subscription clients (client-output-buffer-limit pubsub 32mb 8mb 60): when messages are produced faster than they are consumed, the output buffer easily backs up.

The replication backlog buffer is used for partial resynchronization, which we will cover in detail in the article on the persistence mechanism. It is controlled by repl-backlog-size, which defaults to 1 MB. The master keeps a single replication backlog buffer shared by all slaves, and sizing it appropriately can effectively avoid full resynchronization. The AOF buffer holds the most recent write commands while they wait to be flushed to disk during an AOF rewrite; commands are written to the buffer and then synced to disk according to the configured policy. Its memory consumption is usually small and depends on the write rate and the rewrite duration.

  • Memory fragmentation: the default Redis memory allocator is jemalloc; the allocator's job is to manage and reuse memory efficiently. Fragmentation arises as Redis allocates and reclaims physical memory: for example, when data is modified frequently and the sizes vary widely, the space Redis frees may not be returned to the operating system and cannot be reused effectively either, leaving fragmented memory. Memory fragmentation is not counted in used_memory.

How much fragmentation occurs depends on the access pattern, the characteristics of the data, and the memory allocator in use. If fragmentation on a Redis server is severe, it can be reduced by restarting the server at a safe time: after the restart, Redis re-reads the data from the backup file and lays it out in memory again, choosing a suitable memory unit for each piece of data.

Memory management

Redis limits its maximum usable memory with the maxmemory parameter (in redis.conf). The purposes of the limit are: in cache scenarios, to free space with an eviction (LRU) policy once maxmemory is exceeded, and to keep used memory from exceeding the server's physical memory. Set this value deliberately in production, because once physical memory runs out, heavy swapping makes operations such as writing RDB files very slow. If RDB snapshots are enabled, set maxmemory to about 45% of available system memory, because taking a snapshot can temporarily require roughly twice the dataset's memory; that is, 45% in use can become about 95% during the snapshot (45% + 45% + 5%, the last 5% being reserved for other overhead). If snapshots are not enabled, maxmemory can be set to about 95% of available system memory. The limit can be adjusted at runtime with the config set maxmemory command.
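The 45%/95% rule above is simple arithmetic; here is a minimal C sketch of it (the function name and interface are illustrative, not part of Redis):

#include <stdio.h>

/* Suggested maxmemory following the rule of thumb above: roughly 45% of
 * available system memory when RDB snapshots are enabled (a snapshot can
 * temporarily double the dataset's memory use), roughly 95% otherwise. */
long long suggested_maxmemory(long long system_memory_bytes, int snapshots_enabled) {
    double factor = snapshots_enabled ? 0.45 : 0.95;
    return (long long)(system_memory_bytes * factor);
}

int main(void) {
    long long sys_mem = 8LL * 1024 * 1024 * 1024; /* an 8 GB host, for illustration */
    printf("with snapshots:    %lld bytes\n", suggested_maxmemory(sys_mem, 1));
    printf("without snapshots: %lld bytes\n", suggested_maxmemory(sys_mem, 0));
    return 0;
}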

Redis deletes expired keys in two ways: periodic deletion and lazy deletion. Periodic deletion means that at regular intervals (every 100 ms by default) Redis randomly samples some keys and deletes the expired ones among them. Why not check every key? Because scanning all keys every 100 ms would be far too expensive. Since only a sample is checked, some expired keys inevitably survive a periodic pass. Lazy deletion means that when a key is accessed, Redis first checks whether it has expired and, if so, deletes it on the spot.
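A minimal C sketch of these two strategies (the data structures and helper names are hypothetical; this models the idea, not the Redis source):

#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>
#include <time.h>

/* Hypothetical key record: expire_at == 0 means the key never expires. */
typedef struct { const char *name; time_t expire_at; bool deleted; } Key;

static void delete_key(Key *k) { k->deleted = true; }

/* Periodic deletion: every interval, sample a few random keys and delete the
 * expired ones; checking every key on every cycle would be too expensive. */
void active_expire_cycle(Key *keys, size_t nkeys, size_t sample_size) {
    for (size_t i = 0; i < sample_size && nkeys > 0; i++) {
        Key *k = &keys[rand() % nkeys];
        if (!k->deleted && k->expire_at != 0 && k->expire_at <= time(NULL))
            delete_key(k);
    }
}

/* Lazy deletion: every access first checks whether the key has expired. */
Key *lookup_key(Key *k) {
    if (k->deleted) return NULL;
    if (k->expire_at != 0 && k->expire_at <= time(NULL)) { delete_key(k); return NULL; }
    return k;
}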

This raises a question: what happens to expired keys that periodic deletion misses and that are never accessed again?

This is where the memory eviction mechanism comes in: when Redis reaches the maxmemory limit, it triggers an overflow control policy. The default policy is noeviction (as the info memory output above shows); volatile-lru, for example, evicts keys that have an expire set, using an approximate LRU rather than the actual expiration time.

The eviction policies documented on the official site are the following:

  • noeviction: evicts nothing; when memory is full, most write commands (everything except DEL and a few others) simply return an error.
  • allkeys-lru: evicts the least recently used keys from the whole keyspace (server.db[i].dict) to make room for new data.
  • volatile-lru: evicts the least recently used keys from the set of keys with an expire (server.db[i].expires) to make room for new data.
  • allkeys-random: evicts random keys from the whole keyspace (server.db[i].dict) to make room for new data.
  • volatile-random: evicts random keys from the set of keys with an expire (server.db[i].expires) to make room for new data.
  • volatile-ttl: evicts keys from the set of keys with an expire (server.db[i].expires) that are closest to expiring, to make room for new data.
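The lru policies do not scan every key; the usual approach is to sample a handful of candidates and evict the one that has been idle the longest. A minimal C sketch of that sampling idea (the names and structures are hypothetical, not Redis source):

#include <stddef.h>
#include <stdlib.h>

/* Hypothetical entry: lru holds the last-access clock value (larger = more recent). */
typedef struct { const char *key; unsigned lru; } Entry;

/* Approximate LRU: sample `samples` random entries from the candidate pool
 * (all keys for allkeys-lru, keys with an expire for volatile-lru) and
 * return the least recently used one as the eviction victim. */
Entry *pick_eviction_victim(Entry *pool, size_t npool, size_t samples) {
    Entry *victim = NULL;
    for (size_t i = 0; i < samples && npool > 0; i++) {
        Entry *candidate = &pool[rand() % npool];
        if (victim == NULL || candidate->lru < victim->lru) victim = candidate;
    }
    return victim;
}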

Redis data store details

Having covered how Redis memory is used, partitioned and managed, we now look at how concrete data types are actually stored. This involves the memory allocator, the object types and their internal encodings, the simple dynamic string (SDS), and redisObject. Taking a simple key-value pair such as hello → world as an example, the structures involved are:

  • dictEntry: every key-value pair in Redis has a dictEntry, which stores pointers to the key and the value, i.e. the locations of the corresponding data; its next pointer points to another dictEntry (for chaining) and is unrelated to this key-value pair itself.
  • Key: the key hello is not stored directly as a bare string but inside an SDS structure.
  • redisObject: the value world is stored neither as a bare string nor directly in an SDS, but in a redisObject. Every value type in Redis is stored through a redisObject: its type field records the value object's type (here, string), and its ptr field points to the object's data, which for a string is again an SDS structure.
  • Memory allocator (jemalloc): whether it is a dictEntry, a redisObject, or an SDS, memory must be obtained from the allocator (jemalloc by default). A dictEntry, for instance, consists of three pointers, which add up to 24 bytes on a 64-bit machine; jemalloc allocates a 32-byte unit for it (see the sketch after this list).
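The 24-byte/32-byte figures are easy to check. A minimal C sketch follows; the struct below is a simplified stand-in for Redis's dictEntry, and the rounding helper mimics the jump to jemalloc's 32-byte size class rather than calling jemalloc itself:

#include <stdio.h>

/* Simplified stand-in for dictEntry: three pointer-sized fields. */
typedef struct dictEntryLike {
    void *key;
    void *val;                  /* the real dictEntry uses a union here */
    struct dictEntryLike *next; /* chains entries that hash to the same bucket */
} dictEntryLike;

/* Round a request up to the next multiple of 16, mimicking the jump from a
 * 24-byte struct to a 32-byte jemalloc unit (illustrative only). */
size_t round_up_16(size_t n) { return (n + 15) & ~(size_t)15; }

int main(void) {
    printf("sizeof(dictEntryLike) = %zu bytes\n", sizeof(dictEntryLike));              /* 24 on 64-bit */
    printf("allocated unit        = %zu bytes\n", round_up_16(sizeof(dictEntryLike))); /* 32 */
    return 0;
}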

Each of these parts is detailed below:

redisObject

The Redis source lives at github.com/antirez/red… and can be pulled locally. Many Redis object types are not stored bare; they are wrapped in a redisObject, which ties together the object type, the internal encoding, memory reclamation (reference counting), shared objects, and more. The redisObject structure is as follows:

typedef struct redisObject {
    unsigned type:4;              /* object type */
    unsigned encoding:4;          /* internal encoding */
    unsigned lru:REDIS_LRU_BITS;  /* lru time (relative to server.lruclock) */
    int refcount;                 /* reference counter */
    void *ptr;                    /* pointer to the actual data */
} robj;
  1. type, the object type: the type field indicates the object's data type and occupies 4 bits. The type key command shows the type of a given key.
	127.0.0.1:6379> set key value
	OK
	127.0.0.1:6379> type key
	string
	127.0.0.1:6379> hset std name xiaoming
	(integer) 1
	127.0.0.1:6379> type std
	hash
	127.0.0.1:6379> sadd lst member1 member2
	(integer) 2
	127.0.0.1:6379> type lst
	set
  2. encoding, the internal encoding: the encoding field records the object's internal encoding and occupies 4 bits. Every data type supported by Redis has at least two internal encodings; for example, strings have int, embstr and raw, and lists have ziplist and linkedlist. The benefit of multiple encodings per type is that Redis can switch between them automatically according to the usage scenario, which improves efficiency and flexibility, and decouples users from the underlying encoding optimizations. The object encoding key command shows the encoding in use.
	127.0.0.1:6379> object encoding key
	"embstr"
	127.0.0.1:6379> set key 100
	OK
	127.0.0.1:6379> object encoding key
	"int"

The specific encodings of each type are explained in the section on the internal encodings of Redis object types below, so read on.

  3. lru, the timing clock: lru records the last time the object was accessed by a command. The number of bits it occupies varies with the version (24 bits in 4.0, 22 bits in 2.6). The difference between the lru time and the current time gives the object's idle time; the object idletime key command displays this idle time (in seconds). The object idletime command is special in that it does not update the object's lru value.
	127.0.0.1:6379> set name xiaoming
	OK
	127.0.0.1:6379> object idletime name
	(integer) 18
	127.0.0.1:6379> object idletime name
	(integer) 23
	127.0.0.1:6379> object idletime name
	(integer) 26
	127.0.0.1:6379> get name
	"xiaoming"
	127.0.0.1:6379> object idletime name
	(integer) 1

The lru value also matters for memory reclamation: if Redis has the maxmemory option enabled and the eviction policy is volatile-lru or allkeys-lru, then when memory usage exceeds maxmemory, Redis preferentially frees the objects with the longest idle time.

  4. refcount, the reference counter: refcount records how many times the object is referenced. Reference counting is one way of deciding whether an object is still alive (similar to one of the approaches used by the JVM). When a new object is created its refcount is initialized to 1; it is incremented when another part of the program uses the object, decremented when the object is no longer used there, and the object's memory is freed when refcount drops to 0.
  • A word on shared objects: an object referenced more than once (refcount > 1) is called a shared object. To save memory, Redis does not create a new object when an identical one already exists; it reuses the existing one, and that reused object is the shared object. Redis currently shares only string objects holding integer values. This is a trade-off between memory and CPU time: sharing reduces memory consumption, but deciding whether two objects are equal costs extra time. For integer values the equality check is O(1); for plain strings it is O(n); for hashes, lists, sets and ordered sets it would be O(n^2). Although only integer-valued string objects are shared, they can be used by all five types (for example as elements of hashes or lists). In the current implementation, the Redis server initializes 10,000 string objects at startup, with values 0 to 9999; whenever Redis needs a string object in that range, the shared object is used directly. The count of 10,000 can be changed via the OBJ_SHARED_INTEGERS parameter. The object refcount key command shows how many times a shared object is referenced. (A minimal sketch of the idea follows this list.)
  5. ptr, the data pointer: ptr points to the object's actual data.
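To make the shared-object idea concrete, here is a minimal C sketch of an integer pool of the kind described above (OBJ_SHARED_INTEGERS is the real parameter name; everything else is a simplified stand-in, not Redis source):

#include <stdio.h>

#define OBJ_SHARED_INTEGERS 10000

/* Simplified object: just a reference counter and an integer payload. */
typedef struct { int refcount; long value; } IntObj;

static IntObj shared_integers[OBJ_SHARED_INTEGERS];

/* Initialize the pool once at startup, as Redis does for the values 0..9999. */
void init_shared_integers(void) {
    for (long i = 0; i < OBJ_SHARED_INTEGERS; i++) {
        shared_integers[i].refcount = 1;
        shared_integers[i].value = i;
    }
}

/* Reuse a shared object when the value is in range; otherwise the caller
 * would allocate a fresh object (omitted here). */
IntObj *get_integer_object(long value) {
    if (value >= 0 && value < OBJ_SHARED_INTEGERS) {
        shared_integers[value].refcount++;   /* one more reference to the shared object */
        return &shared_integers[value];
    }
    return NULL; /* not shareable: allocate a new object in real code */
}

int main(void) {
    init_shared_integers();
    IntObj *o = get_integer_object(100);
    printf("value=%ld refcount=%d\n", o->value, o->refcount);
    return 0;
}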

So the redisObject structure ties together the object type, internal encoding, memory reclamation, and shared objects. Its size is 16 bytes: 4 bits + 4 bits + 24 bits (together 4 bytes) + 4 bytes + 8 bytes = 16 bytes.

SDS

Redis uses SDS, short for Simple Dynamic String, as its default string representation. The SDS structure is as follows:

struct sdshdr {
    int len;     /* used length of buf */
    int free;    /* unused length of buf */
    char buf[];  /* byte array that stores the string */
};

As the structure shows, the length of the buf array is free + len + 1 (the extra byte is the terminating null character), so the space occupied by an SDS is sizeof(len) + sizeof(free) + length of buf = 4 + 4 + free + len + 1 = free + len + 9 bytes. This raises a question: since Redis is written in C, why not use plain C strings directly? Put another way, what are the structural advantages of SDS?

  • Binary safety: SDS can store binary data; C strings cannot. A C string is terminated by a null byte, which can legitimately appear inside binary content, so the string would be cut short when read. SDS uses len to mark the end of the data, so this problem does not arise.
  • Memory reallocation: modifying a C string requires reallocating its memory (freeing the old buffer and allocating a new one); forgetting to do so leads to buffer overflows when the string grows and memory leaks when it shrinks. Because SDS records both len and free, the string's length is decoupled from the underlying array's length, which allows two optimizations: space preallocation (allocating more memory than is immediately needed) greatly reduces the chance of reallocating when the string grows, and lazy space release greatly reduces the chance of reallocating when it shrinks (see the sketch after this list).
  • Buffer overflows: this follows from the previous point. With C strings it is easy to overflow a buffer by forgetting to reallocate before growing the string. SDS records the length, and its API reallocates memory automatically whenever an operation could overflow the buffer, eliminating this class of bug.
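The space-preallocation strategy can be sketched in a few lines of C. This is a simplified model of the idea, assuming the sdshdr layout shown above; SDS_MAX_PREALLOC mirrors the 1 MB cap used by Redis, and the function name is illustrative:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define SDS_MAX_PREALLOC (1024 * 1024)  /* 1 MB preallocation cap */

struct sdshdr { int len; int free; char buf[]; };

/* Grow an SDS so it can hold addlen more bytes. While the new length is small,
 * allocate twice what is needed; beyond SDS_MAX_PREALLOC, add a fixed 1 MB.
 * The spare bytes are recorded in free, so the next append may need no realloc. */
struct sdshdr *sds_make_room(struct sdshdr *sh, int addlen) {
    if (sh->free >= addlen) return sh;            /* enough spare space already */
    int newlen = sh->len + addlen;
    newlen = (newlen < SDS_MAX_PREALLOC) ? newlen * 2 : newlen + SDS_MAX_PREALLOC;
    struct sdshdr *nsh = realloc(sh, sizeof(*nsh) + newlen + 1); /* +1 for the null byte */
    if (nsh == NULL) return NULL;
    nsh->free = newlen - nsh->len;
    return nsh;
}

int main(void) {
    struct sdshdr *s = calloc(1, sizeof(*s) + 6);  /* room for "hello" plus the null byte */
    memcpy(s->buf, "hello", 6); s->len = 5; s->free = 0;
    s = sds_make_room(s, 3);                       /* needs 3 more bytes -> buf grows to (5+3)*2 = 16 */
    printf("len=%d free=%d\n", s->len, s->free);   /* len=5 free=11 */
    free(s);
    return 0;
}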

Memory allocator jemalloc

Redis lets you choose the memory allocator at compile time: libc, jemalloc or tcmalloc, with jemalloc as the default. As the default allocator, jemalloc does a comparatively good job of reducing memory fragmentation. On 64-bit systems, jemalloc divides memory into three ranges, small, large and huge, and each range is further divided into many size classes (memory block units). When Redis stores data, it picks the most suitable memory unit from these size classes: for example, a 200-byte object is placed in a 224-byte unit, and a 656-byte object in a 768-byte unit.
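A minimal C sketch of how a size-class allocator picks a unit for a request; the class list below is a partial, illustrative subset of jemalloc's small size classes, chosen so that it reproduces the 200 → 224 and 656 → 768 examples:

#include <stdio.h>
#include <stddef.h>

/* A partial, illustrative list of small size classes (in bytes). */
static const size_t size_classes[] = {
    8, 16, 32, 48, 64, 80, 96, 112, 128,
    160, 192, 224, 256, 320, 384, 448, 512,
    640, 768, 896, 1024
};

/* Return the smallest class that can hold `request` bytes, or 0 if it exceeds
 * every class listed here (a real allocator has more ranges above this). */
size_t pick_size_class(size_t request) {
    for (size_t i = 0; i < sizeof(size_classes) / sizeof(size_classes[0]); i++)
        if (request <= size_classes[i]) return size_classes[i];
    return 0;
}

int main(void) {
    printf("200 bytes -> %zu-byte unit\n", pick_size_class(200)); /* 224 */
    printf("656 bytes -> %zu-byte unit\n", pick_size_class(656)); /* 768 */
    return 0;
}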

The internal encoding of the Redis object type

Each of the five data structure types supported by Redis has at least two internal encodings. The benefit is that the interface is decoupled from the underlying encoding, so users are unaffected when Redis switches the internal encoding to suit the scenario. Encoding conversion follows these rules: the conversion happens when Redis writes data, and it is one-way; a small-memory encoding can only be converted to a larger-memory encoding, never back.

string

Strings are the most basic type, because all keys are strings, and elements of several other complex types are strings. The value cannot exceed 512MB.

Internal code:

  • Int: a long integer of 8 bytes. When the string value is an integer, the value is represented by the long integer in C.
  • Embstr: a string of <=44 bytes. (This may vary from version to version)
  • Raw: a string of more than 44 bytes

Both embstr and raw store the data with a redisObject plus an SDS. The difference is that embstr allocates memory once (the redisObject and the SDS sit in one contiguous block), while raw allocates twice (the redisObject and the SDS are separate).

Compared with raw, embstr therefore allocates one less block on creation, frees one less block on deletion, and keeps all the object's data in one contiguous region. Its drawback is that if the string grows and memory must be reallocated, the whole redisObject-plus-SDS block has to be reallocated, so in practice Redis treats embstr strings as read-only and converts them to raw on modification.
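A minimal C sketch of the selection logic described above; the 44-byte embstr cut-off matches this article's 4.0 examples, and the helper names are illustrative rather than Redis source (in particular, integer-range checking is omitted):

#include <stdlib.h>
#include <string.h>

#define EMBSTR_SIZE_LIMIT 44  /* threshold used in this article's 4.0 examples */

typedef enum { ENC_INT, ENC_EMBSTR, ENC_RAW } StringEncoding;

/* Choose an internal encoding for a string value:
 * - a value that parses fully as an integer is stored as int;
 * - short strings (<= 44 bytes) use embstr (one allocation, object + SDS together);
 * - longer strings fall back to raw (two allocations). */
StringEncoding choose_string_encoding(const char *value) {
    char *end;
    long parsed = strtol(value, &end, 10);
    (void)parsed;
    if (*value != '\0' && *end == '\0') return ENC_INT;   /* the whole value is an integer */
    if (strlen(value) <= EMBSTR_SIZE_LIMIT) return ENC_EMBSTR;
    return ENC_RAW;
}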

Encoding conversion test:

127.0.0.1:6379> set key1 66
OK
127.0.0.1:6379> object encoding key1
"int"
127.0.0.1:6379> set key3 qwertyuiolkjhgfdsazxcvbnmlkjhgfdsa
OK
127.0.0.1:6379> strlen key3
(integer) 34
127.0.0.1:6379> object encoding key3
"embstr"
127.0.0.1:6379> set key3 qwertyuioplkjhgfdsazxcvbnm,lkjhgfdsaqwertyuiolkjhgfdsazxcvbnmlkjhgfdsa
OK
127.0.0.1:6379> strlen key3
(integer) 70
127.0.0.1:6379> object encoding key3
"raw"

list

A list stores multiple ordered strings; each string is called an element, and a list can hold up to 2^32-1 elements. Lists in Redis support pushing and popping at both ends as well as fetching an element (or a range of elements) at a given position, so they can act as arrays, queues, stacks, and so on.

In version 3.0 the internal encoding of a Redis list can be ziplist or linkedlist, while in 4.0 the quicklist and linkedlist structures remain. A doubly linked list (linkedlist) consists of one list structure and multiple listNode structures. The list keeps head and tail pointers, and every node carries pointers to its previous and next nodes; the list also records len and the dup, free and match functions, which give node values type-specific behavior, so lists can hold values of various types. Each node in the list points to a redisObject of type string.

Encoding conversion test:

	127.0.0.1:6379> rpush list2 va1
	(integer) 1
	127.0.0.1:6379> object encoding list2
	"quicklist"	

hash

The hash is another commonly used data structure: a field-value mapping (take care to distinguish the hash data structure from Redis itself being a key-value database). The internal encoding of a hash can be ziplist (compressed list) or hashtable (hash table). As explained earlier, a compressed list is used when there are few, short elements; its advantage over a hash table is compact, contiguous storage, which saves space. A hashtable consists of one dict structure, two dictht structures, one array of dictEntry pointers (the bucket array), and multiple dictEntry structures; the hash table layout is not elaborated further here.

A compressed list is used only when both of the following conditions hold: the hash has fewer than 512 elements, and every key and value string in the hash is shorter than 64 bytes. If either condition fails, a hash table is used; and the encoding can only be converted from compressed list to hash table, never the other way around. Encoding conversion test:

127.0.0.1:6379> hset std name xiaoming
(integer) 1
127.0.0.1:6379> object encoding std
"ziplist"
127.0.0.1:6379> hset std year 111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111111
(integer) 1
127.0.0.1:6379> object encoding std
"hashtable"

set

Sets are similar to lists, except that a set cannot contain duplicate elements and is unordered. A set can hold up to 2^32-1 elements. Besides the usual add, remove, and lookup operations, Redis supports intersection, union and difference across multiple sets. The internal encoding of a set can be intset (integer set) or hashtable (hash table). The hash table is the same structure as for hashes, except that every value in the table is NULL. The intset structure is as follows:

	typedef struct intset{
	    uint32_t encoding;
	    uint32_t length;
	    int8_t contents[];
	} intset;

Here encoding indicates the type of the stored elements: although contents is declared as int8_t, the values actually stored are int16_t, int32_t or int64_t, depending on encoding; length is the number of elements. The integer set suits sets whose elements are all integers and whose element count is small. Compared with a hash table it stores data compactly and saves space; and although element operations go from O(1) to O(n), the sets involved are small, so the running time is not noticeably worse.
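A minimal C sketch of how the encoding field drives element access in an intset-style array (a simplified model of the structure above, not Redis source):

#include <stdint.h>
#include <string.h>

typedef struct intset_like {
    uint32_t encoding;   /* element width in bytes: 2, 4 or 8 in this sketch */
    uint32_t length;     /* number of elements */
    int8_t contents[];   /* raw element bytes, interpreted according to encoding */
} intset_like;

/* Read element `pos`, interpreting contents[] according to encoding.
 * For simplicity the encoding is stored directly as the element size in bytes;
 * real Redis uses INTSET_ENC_INT16/32/64 constants with the same meaning. */
int64_t intset_get(const intset_like *is, uint32_t pos) {
    int64_t v = 0;
    if (is->encoding == sizeof(int16_t)) {
        int16_t x; memcpy(&x, is->contents + pos * sizeof(x), sizeof(x)); v = x;
    } else if (is->encoding == sizeof(int32_t)) {
        int32_t x; memcpy(&x, is->contents + pos * sizeof(x), sizeof(x)); v = x;
    } else {
        int64_t x; memcpy(&x, is->contents + pos * sizeof(x), sizeof(x)); v = x;
    }
    return v;
}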

Encoding conversion test:

	127.0.0.1:6379> object encoding std
	"hashtable"
	127.0.0.1:6379> sadd set1 name year sex
	(integer) 3
	127.0.0.1:6379> object encoding set1
	"hashtable"
	127.0.0.1:6379> sadd set2 12 13 14
	(integer) 3
	127.0.0.1:6379> object encoding set2
	"intset"
	127.0.0.1:6379> sadd set3 13 15 year
	(integer) 3
	127.0.0.1:6379> object encoding set3
	"hashtable"

zset (ordered set)

An ordered set, like a set, cannot contain duplicate elements, but its elements are ordered. Unlike lists, which order elements by index, ordered sets assign each element a score and sort by it. The internal encoding of an ordered set can be ziplist or skiplist. A skip list is an ordered data structure, a special kind of ordered linked list in which each node maintains multiple pointers to other nodes so that nodes can be reached quickly (a minimal search sketch follows the list below):

  • A skip list is a stack of ordered linked lists: the bottom level holds all the data, and each higher level holds a subset of the data in the level below it.
  • Nodes holding the same element in two adjacent levels are linked; typically the node in the upper level holds a pointer to the corresponding node in the level below.
  • Skip lists trade some extra storage for much faster queries.
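Here is a minimal C sketch of searching a skip list by score (fixed-height nodes and the function name are simplifications; Redis's zskiplist additionally tracks member strings, spans and backward pointers):

#include <stddef.h>

#define MAX_LEVEL 32

/* Simplified skip-list node: a score and one forward pointer per level. */
typedef struct SkipNode {
    double score;
    struct SkipNode *forward[MAX_LEVEL];
} SkipNode;

/* Search from the highest level: move right while the next node's score is
 * smaller than the target, drop a level when it is not, and finally check
 * the candidate on the bottom (complete) level. `header` is a dummy head node. */
SkipNode *skiplist_find(SkipNode *header, int level, double score) {
    SkipNode *x = header;
    for (int i = level - 1; i >= 0; i--) {
        while (x->forward[i] && x->forward[i]->score < score)
            x = x->forward[i];
    }
    x = x->forward[0];
    return (x && x->score == score) ? x : NULL;
}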

Encoding conversion test:

A compressed list is used only when both of the following conditions hold: the ordered set has fewer than 128 elements, and every member of the ordered set is shorter than 64 bytes. If either condition fails, a skip list is used; and the encoding can only be converted from compressed list to skip list, never the other way around.

127.0.0.1:6379> zadd zset1 1 zheng 2 zhao 3 zhang
(integer) 3
127.0.0.1:6379> object encoding zset1
"ziplist"
127.0.0.1:6379> zadd zset1 4 zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz
(integer) 1
127.0.0.1:6379> object encoding zset1
"skiplist"

A brief summary of the five types of internal encoding:

Data structure    Internal encodings
string            int, embstr, raw
list              quicklist, linkedlist
hash              ziplist, hashtable
set               intset, hashtable
zset              ziplist, skiplist

Author's remarks

This article introduced the Redis memory model (using version 4.0 as the example): how Redis memory is used and how to inspect it, how different object types are encoded in memory, the memory allocator (jemalloc), the simple dynamic string (SDS), redisObject, and so on.

The more you know, the more you realize you don't know. Stay hungry, stay foolish!