Redis Sentinel Mode

Introduction to Redis Sentinel

Redis Sentinel provides high availability for Redis. Sentinel is a tool that manages multiple instances of Redis and provides monitoring, notification, and automatic failover.

Main functions of Redis Sentinel

The Redis Sentinel system is used to manage multiple Redis servers (instances). The system performs three tasks:

  • Monitoring: Sentinel constantly checks that your master and slave servers are up and running.

  • Notification: When a monitored Redis server has a problem, Sentinel can send a notification to administrators or other applications through the API.

  • Automatic failover: When a master server stops working properly, Sentinel starts an automatic failover: it promotes one of the failed master's slaves to be the new master and reconfigures the remaining slaves to replicate from the new master. When a client tries to connect to the failed master, the cluster returns the address of the new master, so the cluster keeps serving requests with the new master in place of the failed one.
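The failure-detection and failover flow above can be sketched as a quorum vote: each Sentinel independently reports whether it can reach the master, and a failover only starts once enough Sentinels agree. This is a minimal, illustrative Python sketch, not the real Sentinel implementation; the names `quorum_reached` and `elect_new_master` are made up, and real Sentinel also weighs slave priority, replication offset, and run ID when choosing the new master:

```python
def quorum_reached(votes, quorum):
    """True when at least `quorum` sentinels consider the master down."""
    return sum(votes) >= quorum

def elect_new_master(slaves):
    """Pick the replica with the smallest replication lag as the new master.
    (Real Sentinel also considers slave_priority, offset, and run id.)"""
    return min(slaves, key=lambda s: s["lag"])

# 2 of 3 sentinels report the master as down -> quorum of 2 is reached
votes = [True, True, False]
if quorum_reached(votes, quorum=2):
    slaves = [{"port": 6380, "lag": 1}, {"port": 6381, "lag": 3}]
    new_master = elect_new_master(slaves)
```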

Redis Sentinel deployment

Redis cluster configuration

Starting the Redis cluster

Copy redis.conf into three configuration files:

cp redis.conf /home/redis/redis6379.conf
cp redis.conf /home/redis/redis6380.conf
cp redis.conf /home/redis/redis6381.conf

Edit each configuration file; taking 6379 as an example:

vim redis6379.conf
# Run Redis as a background daemon
daemonize yes
# Pidfile location
pidfile "/home/redis/6379/redis6379.pid"
# Port
port 6379
# Log file location
logfile "/home/redis/6379/log6379.log"
# Name of the RDB backup file
dbfilename "dump6379.rdb"
# RDB backup file path
dir "/home/redis/rdb/"

Modify the configuration files of the slave nodes (6380 and 6381) and add the following line pointing at the master:

slaveof 192.168.126.200 6379

Start the master service first, then the slave services, and verify replication:

# Master node
./redis-cli -p 6379
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.126.200,port=6380,state=online,offset=975350,lag=1
slave1:ip=192.168.126.200,port=6381,state=online,offset=975350,lag=1
master_repl_offset:975495
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:975494

# Slave node
./redis-cli -p 6380
127.0.0.1:6380> info replication
# Replication
role:slave
master_host:192.168.126.200
master_port:6379
master_link_status:up
master_last_io_seconds_ago:0
master_sync_in_progress:0
slave_repl_offset:989499
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

Sentinel Cluster Configuration

Write the Sentinel configuration file, starting with the first node:

touch sentinel1.conf

vim sentinel1.conf
# Port of the Sentinel node
port 26379
dir "/home/redis/sentinel"
daemonize yes
logfile "sentinel-26379.log"

# The current Sentinel node monitors the master node 192.168.126.200:6379
# The trailing 2 means at least two Sentinel nodes must agree before the master is judged as failed
# mymaster is the alias of the master node
sentinel monitor mymaster 192.168.126.200 6379 2
# Each Sentinel node periodically pings the Redis data nodes and the other Sentinel nodes to determine whether they are reachable. If there is no reply within 30,000 milliseconds, the node is judged unreachable
sentinel down-after-milliseconds mymaster 30000

# When the Sentinel nodes agree that the master has failed, the Sentinel leader performs the failover and selects a new master. The original slaves then replicate from the new master; the number of slaves allowed to sync from the new master at the same time is limited to 1
sentinel parallel-syncs mymaster 1

# The failover timeout is 180,000 milliseconds
sentinel failover-timeout mymaster 180000

# Create sentinel2.conf and sentinel3.conf in the same way, changing the port and log file name
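Since the three Sentinel configurations differ only in the port and log file name, the other two files can be generated from a template. A small illustrative sketch (the template mirrors the configuration above; actually writing the files to disk is left as a comment):

```python
# Render one sentinel.conf per node from a shared template
template = """port {port}
dir "/home/redis/sentinel"
daemonize yes
logfile "sentinel-{port}.log"
sentinel monitor mymaster 192.168.126.200 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel parallel-syncs mymaster 1
sentinel failover-timeout mymaster 180000
"""

def render_sentinel_conf(port):
    """Fill the template for one Sentinel node."""
    return template.format(port=port)

for port in (26379, 26380, 26381):
    conf = render_sentinel_conf(port)
    # e.g. write `conf` to sentinel1.conf / sentinel2.conf / sentinel3.conf
```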

Start three Sentinel services

./redis-sentinel /home/redis/sentinel1.conf
./redis-sentinel /home/redis/sentinel2.conf
./redis-sentinel /home/redis/sentinel3.conf

Using Redis Sentinel mode with Spring Boot

Create a Spring Boot project and add the dependencies:

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-test</artifactId>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
</dependency>
<!-- Connection pool -->
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-pool2</artifactId>
</dependency>

The core configuration file application.yml:

server:
  port: 80
spring:
  redis:
    pool:
      # Maximum number of connections in the pool (a negative value means no limit); default 8
      max-active: 8
      min-idle: 0
    sentinel:
      # Alias of the master node, as configured in Sentinel
      master: mymaster
      # Sentinel service IPs and ports
      nodes: 192.168.126.200:26379,192.168.126.200:26380,192.168.126.200:26381
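Under the hood, a Sentinel-aware client splits the `nodes` value into host/port pairs and contacts each Sentinel in turn to discover the current master. Roughly, in an illustrative sketch (the helper name `parse_sentinel_nodes` is made up for this example):

```python
def parse_sentinel_nodes(nodes):
    """Split a comma-separated 'host:port,host:port' string into (host, port) tuples."""
    pairs = []
    for node in nodes.split(","):
        # rpartition tolerates stray whitespace and splits on the last colon
        host, _, port = node.strip().rpartition(":")
        pairs.append((host, int(port)))
    return pairs

parse_sentinel_nodes(
    "192.168.126.200:26379,192.168.126.200:26380,192.168.126.200:26381"
)
```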

Calling Redis from the program

@RestController
@RequestMapping("/redis")
public class RedisController {

    // Use the RedisTemplate object auto-configured by Spring Boot
    @Autowired
    RedisTemplate<String, String> redisTemplate;

    @RequestMapping("/get")
    public String get(String key) {
        String value = redisTemplate.opsForValue().get(key);
        return value;
    }

    @RequestMapping("/set")
    public String set(String key, String value) {
        redisTemplate.opsForValue().set(key, value);
        return "success";
    }
}

Simulating a failure

After the Redis master service is manually shut down, the application keeps trying to reconnect in the background. Once the maximum wait time is exceeded and the connection fails, Sentinel elects a new master node, and the application obtains the new master's address and resumes read and write services.
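The client-side behavior described above can be sketched as a retry loop: on a connection error the client re-queries Sentinel for the current master address and retries the operation. This is an illustrative stdlib-only sketch; `execute` and `get_master_addr` are stand-ins for real network calls, not a real client API:

```python
def call_with_failover(execute, get_master_addr, retries=3):
    """Run `execute(addr)` against the current master; on ConnectionError,
    re-query Sentinel for the (possibly new) master address and retry."""
    addr = get_master_addr()
    for _ in range(retries):
        try:
            return execute(addr)
        except ConnectionError:
            # Sentinel may have promoted a new master in the meantime
            addr = get_master_addr()
    raise ConnectionError("no reachable master after %d retries" % retries)
```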