This is the 21st day of my participation in the Gwen Challenge.

Master/slave replication

  • Introduction
  • Master/slave replication workflow
    • The three stages
    • The three core phases
    • Heartbeat mechanism
  • Common Problems with master/slave replication

I. Introduction to master/slave replication

  • The architecture is much like Hadoop's master/slave model; replication gives Redis:

  • High concurrency

  • High performance

  • High availability

Master/slave replication workflow

  • There are roughly three stages
    • Establish connection phase
    • Data synchronization phase
    • Command propagation phase

Pseudo-distributed setup 01: Establishing the connection

  1. Set up two Redis services
    • Start two instances on different ports via separate Redis configuration files, so they behave like two separate machines (a combined example follows this list)
  2. Establish the master-slave connection
    • Method 1: the slave's client sends the command
    slaveof <masterip> <masterport>
    • Method 2: pass the master's address as a parameter when starting the slave server
    redis-server --slaveof <masterip> <masterport>
    • Method 3: modify the slave server's configuration file
    Add the following line to the configuration file
    slaveof <masterip> <masterport>
  3. Disconnect the master-slave connection
    • The slave's client sends the command
    slaveof no one
  4. Access authorization
    • Set a password on the master by adding the following to its configuration file
    requirepass <password>
    • For the slave to connect to a password-protected master, add the following to the slave's configuration file
    masterauth <password>
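
A minimal end-to-end sketch of method 2, assuming the master is already running on 127.0.0.1:6379 with requirepass set to mypass (the ports and the password are illustrative, not from the original):

# start the slave on port 6380 and point it at the master
redis-server --port 6380 --slaveof 127.0.0.1 6379 --masterauth mypass
# verify the link from the slave side (expect role:slave and master_link_status:up)
redis-cli -p 6380 info replication
# detach the slave again if needed
redis-cli -p 6380 slaveof no one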

Pseudo-distributed setup 02: Data synchronization

  1. Data Synchronization Process
    • Synchronization starts automatically as soon as the slave connects to the master
  2. Master Precautions
    • If the master holds a very large data set, the full synchronization can block it
    • The replication backlog buffer must be sized reasonably; if it overflows, the slave has to fall back to another full synchronization, which can turn into an endless loop (see the config sketch after this list)
    # directive that sets the backlog buffer size
    repl-backlog-size
    • The master has to reserve some memory for the bgsave command and for creating the buffers
  3. Slave Precautions
    • Whether the slave keeps serving (possibly stale) data to clients while it is synchronizing is controlled by
    slave-serve-stale-data yes|no
    • Synchronizing many slaves at the same time causes a bandwidth spike on the master, so spread the synchronization over off-peak times
    • If the replication topology is nested too deeply (slaves of slaves), the nodes at the bottom lag behind
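
A minimal config sketch for the two directives mentioned above; the values are illustrative, not recommendations:

# master side: size the backlog so a brief disconnect can be resumed with a partial resync
repl-backlog-size 64mb
# slave side: refuse to serve stale data to clients while synchronization is in progress
slave-serve-stale-data no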

Pseudo-distributed setup 03: Command propagation

  1. Principle of command propagation

  2. Data synchronization and command propagation flow

3. Heartbeat mechanism
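
During command propagation the master and the slaves also exchange heartbeats: the master pings its slaves periodically, and each slave keeps reporting its replication offset back to the master. One way to watch this, assuming the master listens on 6379:

# per-slave offsets and lag show up in the replication section
redis-cli -p 6379 info replication
# the interval at which the master pings its slaves (default 10 seconds)
redis-cli -p 6379 config get repl-ping-slave-period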

Common Problems with master/slave replication

  1. Frequent full resynchronization
  2. Frequent network interruptions
  3. Data inconsistency (typical mitigations are sketched below)
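
A hedged sketch of the configuration knobs usually involved in these problems; the values are illustrative and need tuning per deployment:

# frequent full resynchronization: enlarge the backlog so partial resync survives longer disconnects
repl-backlog-size 128mb
# frequent network interruptions: give slow links more time before the replication link is declared dead
repl-timeout 120
# data inconsistency: stop accepting writes when too few slaves are connected or they lag too far behind
min-slaves-to-write 1
min-slaves-max-lag 10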

Sentinel mode

  • Introduction to Sentinel
  • Enabling Sentinel mode
  • How Sentinel works

I. Introduction to Sentinel

  • Function: monitors the master and, if the master fails, the sentinels vote to promote a new master
  • The role is similar to ZooKeeper
  • A sentinel is itself a Redis service; an odd number of sentinels is usually deployed so that a vote (for example 2 out of 3) cannot end in a tie

Enable Sentinel mode

The configuration process

  • Configure a master-slave structure with one master and two slaves
  • Configure three sentinels (same configuration, different ports)
  • Start the sentinels
redis-sentinel sentinel-<port>.conf
  • The sentinel configuration file is sentinel.conf

Configure the Redis service

  1. Master (primary node)
port 6379
daemonize no
# logfile 6379.log
databases 16
save 10 2
dbfilename dump-6379.rdb
dir /usr/local/redis/data
rdbcompression yes
rdbchecksum yes
appendonly yes
appendfilename "appendonly-6379.aof"
appendfsync everysec
bind 127.0.0.1
  2. Slave1 (slave node)
port 6380
daemonize no
# logfile 6380.log
# databases 16
# save 10 2
# dbfilename dump-6380.rdb
dir "/usr/local/redis/data"
# rdbcompression yes
# rdbchecksum yes
# appendonly yes
# appendfilename "appendonly-6380.aof"
# appendfsync everysec
# bind 127.0.0.1
# replicate from the master node
slaveof 127.0.0.1 6379
  3. Slave2 (slave node)
port 6381
daemonize no
# logfile 6381.log
# databases 16
# save 10 2
# dbfilename dump-6381.rdb
dir "/usr/local/redis/data"
# rdbcompression yes
# rdbchecksum yes
# appendonly yes
# appendfilename "appendonly-6381.aof"
# appendfsync everysec
# bind 127.0.0.1
# replicate from the master node
slaveof 127.0.0.1 6379
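
With the three configuration files in place, each node can be started from its own file (each in its own terminal, since daemonize is off); the file names below are assumptions for illustration, not from the original article:

redis-server ./conf/redis-6379.conf
redis-server ./conf/redis-6380.conf
redis-server ./conf/redis-6381.conf
# confirm the topology: the master should report two connected slaves
redis-cli -p 6379 info replication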

Configure the sentinels

  1. Sentinel 1, Sentinel 2, Sentinel 3
The three sentinel configuration files are identical except for three consecutive ports
port 26379
# working directory for sentinel data, change to your own directory
dir /tmp
# monitor the master named mymaster at 127.0.0.1:6379; 2 sentinels must agree before it is considered down
sentinel monitor mymaster 127.0.0.1 6379 2
# mark the master as down after 30 seconds without a valid reply
sentinel down-after-milliseconds mymaster 30000
# after a failover, re-synchronize the slaves to the new master one at a time
sentinel parallel-syncs mymaster 1
# give up on a failover attempt after 180 seconds
sentinel failover-timeout mymaster 180000

Startup

  1. Start the three Redis service nodes
  2. Start the three sentinels
./bin/redis-sentinel ./conf/sentinel-26379.conf 
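
One way to confirm that the sentinels are monitoring the master (26379 is the sentinel port configured above):

# the sentinel section lists the monitored master together with the number of known slaves and sentinels
redis-cli -p 26379 info sentinel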

Test

Shut down the master node and watch the sentinel logs: the sentinels detect the failure, vote, and fail over by promoting one of the slaves to be the new master.
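
A quick way to observe the result, assuming the master on 6379 has been stopped and the sentinels are still running:

# ask a sentinel which address it now considers the master; after the failover it should point at 6380 or 6381
redis-cli -p 26379 sentinel get-master-addr-by-name mymaster
# check the role on whichever node was promoted (here assumed to be 6380); it switches from slave to master
redis-cli -p 6380 info replication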