| preface

Everyone knows Redis; most of us have used it, or at least heard of it.

As one of the most widely used in-memory KV databases, Redis in single-machine mode looks a little thin in today's era of heavy traffic, so expansion schemes are inevitable.

The sections below go through these schemes one by one: an overview of each, how to use it, and some common pitfalls for beginners.

| text

We start with the basic expansion schemes, so that the more advanced models are easier to understand.

Ps: This article is based on 5.0.8, the latest stable version on the official website when I started writing, although it had changed to 6.0.1 before I finished.

partition

overview

Partitioning is the simplest way to expand.

When we hit the storage bottleneck of a single machine, the first thing that comes to mind is data partitioning, as in traditional relational databases.

Or suppose we have N machines available as Redis servers, with 256G of memory in total across all of them, and the client needs more storage space than any single machine can offer.

Short of desoldering the memory chips and welding them all onto one machine, we can partition the data, which also scales out computing power.

Simply put, partitioning distributes all data across multiple Redis instances; the instances need no relationship to each other and can be completely independent.

use

Client-side handling

Similar to a partitioned table in a traditional database, the client can start from the key and compute which instance stores the data:

Range based: for example, orderId:1 to orderId:1000 go to instance 1, and orderId:1001 to orderId:2000 go to instance 2.

Hash based: like our HashMap, a hash function plus a bitwise operation or a modulo.

More advanced play includes consistent hashing to find the instance to operate on.
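As a rough sketch, the range and hash strategies above might look like this in Python (the instance addresses and helper names here are made up purely for illustration):

```python
import zlib

# Hypothetical instance addresses for illustration only.
INSTANCES = ["redis-0:6379", "redis-1:6379", "redis-2:6379"]

def route_by_hash(key: str) -> str:
    """Hash-based sharding: a stable hash plus modulo, like a HashMap bucket."""
    h = zlib.crc32(key.encode("utf-8"))  # any stable hash function works
    return INSTANCES[h % len(INSTANCES)]

def route_by_range(order_id: int) -> str:
    """Range-based sharding: orderId 1-1000 -> instance 0, 1001-2000 -> instance 1, ..."""
    return INSTANCES[(order_id - 1) // 1000 % len(INSTANCES)]

# The same key always lands on the same instance:
assert route_by_hash("orderId:1001") == route_by_hash("orderId:1001")
print(route_by_range(1), route_by_range(1500))
```

In practice the returned address would be used to pick a client connection; the point is only that the routing decision lives entirely in the client.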

With proxy middleware: we can develop standalone proxy middleware that hides the data-sharding logic and runs independently. Of course, this wheel has already been built: Redis has excellent proxy middleware such as Twemproxy and Codis, which can be chosen depending on the scenario.

disadvantages

Multi-key operations, and transactions involving multiple keys, are not supported.

Maintenance cost: each instance is a separate node both physically and logically, so there is no unified management.

Limited flexibility: range sharding copes, but with hash+MOD, dynamically adjusting the number of Redis instances means migrating a lot of data, which is very cumbersome.
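A quick back-of-the-envelope script illustrates why hash+MOD resizing hurts: growing from 3 to 4 instances remaps most keys, and each remapped key would have to migrate (the key names here are invented for the demonstration):

```python
import zlib

def slot(key: str, n: int) -> int:
    """Which of n instances a key maps to under hash + modulo."""
    return zlib.crc32(key.encode("utf-8")) % n

# Count how many of 10,000 sample keys change instances when n goes 3 -> 4.
keys = [f"orderId:{i}" for i in range(10_000)]
moved = sum(slot(k, 3) != slot(k, 4) for k in keys)
print(f"{moved / len(keys):.0%} of keys must migrate")  # roughly three quarters
```

This is exactly the pain that consistent hashing, and later Redis Cluster's fixed slot count, were designed to reduce.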

As developers, we can always find a workaround for features the current setup does not support, but it is always a bit more trouble.

master-slave

overview

This is what is often called master-slave, or replication; call it whichever you like.

Like the partitioning above, it is the foundation of Redis's high-availability architecture; newcomers may mistake it for "high availability" itself, which is not quite right.

Partitioning temporarily solves the problem of data that a single node cannot hold, but each key still lives on only one instance, which is not reliable enough in the era of heavy traffic.

Master-slave scales along a different dimension: the master node synchronizes data to its slave nodes, as if the instance had a "double", which improves reliability a lot.

The diagram is a bit exaggerated; it mainly shows how flexible the structure is: one master with one slave, one master with several slaves, slaves with their own slaves... as you please.

With multiple instances, read and write traffic can be separated, with reads spread evenly across the slave nodes.
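A minimal sketch of such read/write splitting, with plain strings standing in for real client connections (the class and command list are assumptions for illustration, not a Redis API):

```python
import itertools

class ReadWriteRouter:
    """Send writes to the master, round-robin reads across the slaves."""

    # A few read-only commands for the demo; a real router would know them all.
    READ_COMMANDS = {"GET", "MGET", "EXISTS", "TTL"}

    def __init__(self, master, replicas):
        self.master = master
        self._replicas = itertools.cycle(replicas)  # round-robin over slaves

    def for_command(self, command: str):
        if command.upper() in self.READ_COMMANDS:
            return next(self._replicas)
        return self.master  # writes (and anything unknown) go to the master

router = ReadWriteRouter(master="master:6379",
                         replicas=["slave-1:6379", "slave-2:6379"])
assert router.for_command("SET") == "master:6379"
print(router.for_command("GET"), router.for_command("GET"))  # alternates slaves
```

Client libraries and proxies implement variations of this same idea; the caveat is that replication is asynchronous, so a read from a slave may briefly lag the master.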

use

In an era when "big shots" gather, chat software keeps this kind of meme pack ready.

What does this meme have to do with usage? Look at the usage mode first: the Redis instance serving as master needs no configuration at all and just starts normally; only the instance serving as slave needs anything, namely REPLICAOF <master host> <master port>, which can be set either in the configuration file or by command.

Just like the meme: the "big shot" doesn't move, and I go "hold the thigh".

This completes a master/slave minimum configuration, and the master/slave instances can provide services externally.

The "master node" in the command is relative: a slave can also hold another slave's thigh, which is the flexible structure mentioned above.

disadvantages

All slave nodes are read-only, so under heavy write traffic they cannot share the load. Why not just turn off read-only on the slave? That won't do: replication only flows from master to slave, so data written only to a slave can never be synchronized back to the master, and the data becomes inconsistent.

Failover is unfriendly. When the master goes down, writes have nowhere to go, and a new master must be promoted manually, for example with REPLICAOF NO ONE, after which the other slaves must be reconfigured to point at the new master. It is rather troublesome.

sentinel

overview

Manual master/slave failover is hard to swallow, so Sentinel, a high-availability solution, naturally emerged. With the master/slave architecture unchanged, we can simply add Redis Sentinel to monitor the nodes and get automatic fault discovery and migration. Sentinel also acts as a configuration provider, telling clients the master's address, including the correct new one after a failover.

Sentinel itself is an instance of Redis, but is not used as a datastore, and the start command is different.

Although the diagram looks a little complicated, like summoning some emissary of light, it is actually quite easy to use.

use

The minimal Sentinel configuration is one line:

sentinel monitor <master alias> <master host> <master port> <quorum>

Configure the master, then run redis-sentinel <configuration file> to start it. The "minimal configuration" given on the Redis website is as follows; besides the line above there are a few other settings:

sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 60000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1

sentinel monitor resque 192.168.1.3 6380 4
sentinel down-after-milliseconds resque 10000
sentinel failover-timeout resque 180000
sentinel parallel-syncs resque 5

That is because the official site adds a qualifier: this is the "typical minimal configuration". The important parameters and a multi-master example are written out to take care of the copy-paste crowd, so nobody forgets the important parameters, even though they all have defaults.

As the example shows, giving each master an alias lets one Sentinel monitor multiple masters, with the alias tying together the extra configuration items and sentinel commands such as sentinel get-master-addr-by-name. It is recommended to run at least three sentinels, and an odd number of them. The Redis website also describes deployment arrangements for various situations, well worth a read.

more thoughts

Since Sentinel is a high-availability solution, it has no "disadvantages" in the strict sense; these are just points to weigh against your scenario.

The parallel-syncs parameter controls how many slaves resynchronize with the new master at the same time during a failover: the more in parallel, the faster recovery finishes, but the more slaves are briefly unusable while syncing. Even so, it beats manual failover.

Partitioning logic still has to be custom built: Sentinel solves high availability for master/slave mode, but it provides no partitioning scheme, so developers must work out how to combine the two.

And since the nodes are still master/slave, if abnormal write traffic crashes the master, will automatic "failover" become automatic "disaster relay": the slave becomes master and crashes, the next is promoted and crashes again?

That last point is only my speculation; I have never heard of such a case, so I cannot dig into it.

cluster

overview

Redis Cluster is an official distributed solution released after version 3.0.

For developers, "officially supported" is a lovely phrase, from issues to features; solving a problem with a custom build always costs more.

With an official clustering scheme, everything from request routing to failover to elastic scaling becomes easier.

Unlike Sentinel, Cluster supports partitioning. The idea that Cluster is "an upgraded Sentinel" is not rigorous: the two address different dimensions, and even though Cluster also provides failover, calling it an upgrade of Sentinel is a bit far-fetched.

Cluster manages partitions with the concept of hash slots. A cluster has 16384 slots, each instance is responsible for a portion of them, and a key's slot is computed as CRC16(key) & 16383, i.e. CRC16(key) mod 16384.

Introducing slots between nodes and keys may seem harder to understand, but the finer granularity makes scaling nodes in and out much easier, a real advantage over the traditional strategies. Of course a "slot" is a virtual concept maintained by the nodes themselves; you do not actually download and run a "slot service".
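The slot mapping is easy to reproduce: Redis Cluster uses the CRC16-CCITT (XModem) variant of CRC16. A small Python sketch:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem): polynomial 0x1021, initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """HASH_SLOT = CRC16(key) mod 16384, implemented as a bitwise AND."""
    return crc16(key.encode("utf-8")) & 16383

print(key_slot("foo"))  # 12182, the same answer as CLUSTER KEYSLOT foo
```

Knowing the formula, a cluster-aware client can compute the slot locally and send each command straight to the node that owns it, instead of bouncing off MOVED redirects.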

use

Redis is all about configuration files, and clustering is no exception.

cluster-enabled yes
cluster-config-file "redis-node.conf"

The key configuration is simple and direct, two steps: enable cluster mode, and specify the cluster configuration file.

The cluster-config-file is for internal use; you can leave it unspecified and Redis will create one for you. Then start each instance with redis-server redis.conf.

So I started N Redis instances for the cluster; of course, that is not the end of it.

The next step is what I call "matchmaking and slot assignment", which sounds nicer than it is.

The way to do it has also evolved: from raw commands, to the redis-trib.rb script, and now to official redis-cli support.

redis-cli --cluster create 127.0.0.1:7000 127.0.0.1:7001 \
  127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005 \
  --cluster-replicas 1

The command above is the redis-cli method given on the Redis website: one line completes a "three masters, three slaves" setup with automatic slot assignment. The cluster is then up, though it is still worth running the official check command.

redis-cli --cluster check 127.0.0.1:7001

more thoughts

Although partitioning is well supported, the old partitioning problems remain; for example, multi-key operations such as MSET cannot be used when the data is not in the same slot.
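The documented workaround is hash tags: when a key contains a non-empty {...} section, only that part is hashed, so related keys can be forced into the same slot and multi-key commands on them work again. A sketch of the extraction rule:

```python
def hashtag(key: str) -> str:
    """Return the substring Redis Cluster actually hashes for this key."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:  # the tag between the braces must be non-empty
            return key[start + 1:end]
    return key  # no usable tag: the whole key is hashed

# Both keys hash only "user:1000", so they land in the same slot
# and MSET/MGET across them is allowed:
assert hashtag("{user:1000}.following") == "user:1000"
assert hashtag("{user:1000}.followers") == "user:1000"
print(hashtag("plain-key"))  # no tag, hashes the whole key
```

The trade-off is that everything sharing one tag piles onto one slot, so tags should group only keys that genuinely need to be operated on together.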

As the SELECT command's documentation mentions, only database 0 can be used in cluster mode. That is usually the one used anyway, but it is important to know.

Operation and maintenance also call for caution. As the saying goes, "the simpler the usage, the more complex the underside": starting and building a cluster is very convenient, but keeping it healthy is not.

| end

An interesting thing happened while writing the master-slave section:

I initially remembered that SLAVEOF was the key command for master/slave. When I checked the official commands, I found it had been changed to REPLICAOF, although SLAVEOF still works.

Some of the descriptions on the website are Slave in some places and Replication in others.

Curious, I dug into Redis author antirez's blog and found that this had happened about two years earlier.

The variable name "Slave" had given some people an opening to attack the author, and he made a partial compromise.

If you are interested, search for the story yourself; I will not pass judgment on the non-technical side, just read it for fun. My main point is: do not be confused when the official documentation uses different words for the same thing.

This article has given a general description of these Redis expansion schemes.

For concrete use, you still need to weigh the detailed configuration, client support, and other circumstances before deciding.