The Basics, continued (part two):

Make your Redis last longer than "install, then uninstall" (Linux environment)

Recently I entered the 2020 Annual Creators List (nickname: BWH_Steven); your support is appreciated.

Redis common configuration file details

Being able to read the configuration file, and knowing how to sensibly modify it, helps us use Redis better. The notes below follow the order of the sections in redis.conf.

  • Units: in redis.conf, 1k means 1000 bytes while 1kb means 1024 bytes (likewise 1m/1mb and 1g/1gb), and unit names are case-insensitive
  • include works like an import: it pulls in other files, so a configuration can be assembled from several files
  • Network settings (to allow remote connections, comment out bind 127.0.0.1 and set protected-mode to no):
    • bind 127.0.0.1 — the IP addresses to listen on
    • protected-mode yes — protected-mode switch
    • port 6379 — the listening port
  • daemonize — run as a daemon; the default is no, set it to yes to run in the background
  • pidfile — when running in the background, a pid file must be specified
  • Logging:
    • loglevel — one of debug, verbose, notice, warning; notice is the production default
    • logfile — the log file name
  • databases — number of databases; the default is 16
  • always-show-logo — whether to always show the ASCII logo on startup
  • Persistence: since Redis is an in-memory database, persistence writes the data to files on disk
    • There are two mechanisms, RDB and AOF; both are covered in detail later, so only the related options are listed here
    • save 900 1 — if at least 1 key changed within 900 seconds, take a snapshot
    • stop-writes-on-bgsave-error — whether to keep accepting writes when a background save fails
    • rdbcompression — whether to compress RDB files (costs some CPU)
    • rdbchecksum — add a checksum when saving the RDB file
    • dir — the directory where the RDB file is saved
  • Master-slave replication (REPLICATION) is covered in a later section, so it is skipped here

  • Next, in the SECURITY section, the comments describe the password setting (requirepass). Here is a bit more on setting the password from the client:

127.0.0.1:6379> ping
PONG
127.0.0.1:6379> config get requirepass      # query the current password
1) "requirepass"
2) ""
127.0.0.1:6379> config set requirepass "123456"   # set the Redis password
OK
127.0.0.1:6379> config get requirepass      # unauthenticated commands now fail
(error) NOAUTH Authentication required.
127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth 123456                 # authenticate with the password
OK
127.0.0.1:6379> config get requirepass
1) "requirepass"
2) "123456"
  • The CLIENTS section covers client connections; it has too many comments to screenshot, so a brief summary:

    • maxclients — maximum number of client connections

    • maxmemory — maximum memory limit

    • maxmemory-policy noeviction — the strategy applied once memory is maxed out (noeviction is the default)

    • To change the eviction policy, for example to volatile-lru:

      • config set maxmemory-policy volatile-lru

maxmemory-policy has six modes:

  • volatile-lru: evict by LRU, but only among keys that have an expire set

  • allkeys-lru: evict any key by LRU

  • volatile-random: randomly evict keys that have an expire set

  • allkeys-random: randomly evict any key

  • volatile-ttl: evict the key whose expire time is nearest (smallest TTL)

  • noeviction: never evict; return an error on writes (the default)
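To make the difference between the volatile-* and allkeys-* families concrete, here is a small Python simulation (an illustration, not Redis internals; real Redis uses an approximated LRU, and the `evict_lru` helper is invented for this sketch):

```python
from collections import OrderedDict

def evict_lru(keys_lru, expires, policy):
    """Pick the victim key under a maxmemory policy.

    keys_lru: OrderedDict of key -> value, ordered from least to most
              recently used (like an LRU list).
    expires:  set of keys that have a TTL (expire) set.
    policy:   'volatile-lru' or 'allkeys-lru'.
    Returns the key to evict, or None if nothing is evictable.
    """
    for key in keys_lru:                      # least recently used first
        if policy == 'allkeys-lru':
            return key                        # any key is a candidate
        if policy == 'volatile-lru' and key in expires:
            return key                        # only keys with a TTL
    return None                               # nothing evictable, like noeviction

cache = OrderedDict([('a', 1), ('b', 2), ('c', 3)])  # 'a' is the coldest key
with_ttl = {'b'}                                     # only 'b' has an expire set

print(evict_lru(cache, with_ttl, 'allkeys-lru'))    # 'a': coldest key overall
print(evict_lru(cache, with_ttl, 'volatile-lru'))   # 'b': coldest key with a TTL
```

Note how volatile-lru can return None when no key has a TTL; in that situation real Redis behaves like noeviction and rejects writes.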

  • APPEND ONLY MODE configures AOF, one of the two persistence mechanisms; both are detailed later, so only the options are listed here
    • appendonly no — AOF is off by default; RDB persistence is the default and is usually sufficient
    • appendfilename "appendonly.aof" — the name of the persistence file
    • appendfsync always — sync on every write (costs performance)
    • appendfsync everysec — sync once per second; up to 1s of data may be lost
    • appendfsync no — no explicit sync; the operating system flushes the data itself, which is fastest

Redis persistence

As mentioned before, Redis is an in-memory database: all of our data lives in memory, while relational databases such as MySQL and Oracle store data on disk. Each approach has its trade-offs. An in-memory database reads and writes far faster than a disk-based one, but it has one very troublesome problem: after the Redis server restarts or crashes, all the data in memory is lost. To solve this, Redis provides persistence technology that stores the in-memory data on disk, so that those files can later be used to restore the data in the database.

Redis offers two persistence mechanisms: RDB and AOF.

(1) RDB mode

(1) concept

At specified intervals, a snapshot of the in-memory data set is written to disk; during recovery, the snapshot file is read back directly to restore the data.

Simple understanding: for a certain period of time, detect key changes and persist data

By default, Redis saves database snapshots in a binary file named dump.rdb.

The filename can be customized in the configuration file, for example, dbfilename dump.rdb

(2) Working principle

The main Redis process performs no disk I/O during an RDB save; it forks a child process to do the work:

  1. Redis calls fork(), so a parent process and a child process now exist.
  2. The child process writes the data set to a temporary RDB file.
  3. When the child has finished writing the new RDB file, Redis replaces the old RDB file with the new one and deletes the old file.

This way of working allows Redis to benefit from copy-on-write (since the child process is used to write, while the parent process can still receive requests from the client).

We have seen how a process can start executing quickly through demand paging, by paging in only the page that contains its first instruction. However, process creation via the fork() system call can initially bypass demand paging by using a technique similar to page sharing. This technique provides fast process creation and minimizes the number of new pages that must be allocated to the newly created process.

Recall that the system call fork() creates a copy of the parent process as a child process. Traditionally, fork() creates a copy of the parent’s address space for the child process, copying pages belonging to the parent process. However, given that many child processes call the system call exec() immediately after creation, replication of the parent’s address space may not be necessary.

Therefore, you can employ a technique called copy-on-write, which works by allowing parent and child processes to initially share the same page. These shared pages are marked as copy-on-write, which means that if any process writes to the shared page, a copy of the shared page is created.
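Under the stated assumptions, here is a toy Python model of copy-on-write (pure simulation: real copy-on-write happens at the page-table level in the kernel, and the `CowSnapshot` class is invented for this sketch):

```python
class CowSnapshot:
    """Toy copy-on-write snapshot (illustration only, not Redis internals).

    The snapshot (the "child") initially shares every page with the parent;
    a page is copied only when the parent writes to it, so the snapshot
    keeps seeing the data exactly as it was at fork time.
    """
    def __init__(self, pages):
        self._parent = pages          # live data, keeps changing
        self._copies = {}             # pages copied on first write

    def write(self, page, value):
        if page not in self._copies:
            self._copies[page] = self._parent[page]  # copy-on-write
        self._parent[page] = value                   # parent sees the new value

    def read_snapshot(self, page):
        # Snapshot reads its private copy if one exists, else the shared page
        return self._copies.get(page, self._parent[page])

pages = {'p0': 'hello', 'p1': 'world'}
snap = CowSnapshot(pages)          # "fork": nothing is copied yet
snap.write('p0', 'HELLO')          # first write to p0 copies it
print(pages['p0'])                 # HELLO -> the parent sees the new data
print(snap.read_snapshot('p0'))    # hello -> the snapshot still sees fork-time data
print(snap.read_snapshot('p1'))    # world -> p1 was never copied, still shared
```

This is why the child process can safely write a consistent RDB file while the parent keeps serving writes.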

(3) Persistent trigger conditions

  1. Meeting a save rule triggers an RDB snapshot automatically

    • For example, save 900 1: if at least one key is modified within 900 seconds, a snapshot is taken
  2. The save, bgsave, and flushall commands also trigger an RDB snapshot

    • save: persists the in-memory data immediately, but blocks; it is a synchronous command that occupies the main Redis process, so with a large data set the block can be long
    • bgsave: asynchronous; Redis keeps responding to client requests while persisting. Blocking only happens during the fork phase, which is usually fast but consumes extra memory
    • flushall: this command also triggers persistence
  3. Shutting down Redis also generates an RDB file automatically (by default in the Redis startup directory)
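The save-rule check in item 1 can be sketched in a few lines of Python (a simplified model, not Redis source; real Redis evaluates its save points in the server cron, and `should_bgsave` is a name invented here):

```python
def should_bgsave(save_rules, seconds_since_last_save, changes_since_last_save):
    """Return True if any 'save <seconds> <changes>' rule is satisfied.

    Mirrors how redis.conf save points work: a background snapshot is
    triggered once at least <changes> writes happened within <seconds>.
    """
    return any(seconds_since_last_save >= secs and changes_since_last_save >= chg
               for secs, chg in save_rules)

# The default redis.conf save points:
rules = [(900, 1), (300, 10), (60, 10000)]

print(should_bgsave(rules, 901, 1))    # True: 1 change in 900s satisfies "save 900 1"
print(should_bgsave(rules, 120, 5))    # False: not enough changes for any rule
```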

(4) Restore the RDB file

If the RDB file is placed in the Redis startup directory, Redis automatically checks for dump.rdb in that directory on startup and restores the data in it.

Command to query the location in a configuration file

127.0.0.1:6379> config get dir
1) "dir"
2) "/usr/local/bin"

(5) the advantages and disadvantages

Advantages:

  1. Well suited to large-scale data recovery
  2. Acceptable when requirements on data integrity are not strict

Disadvantages:

  1. It is easy to lose the last operation because it requires a certain interval of time to operate. If Redis unexpectedly goes down, the last modified data will be lost
  2. When the process is forked, it takes up some memory space

(2) AOF method

(1) concept

AOF records every write operation as a log: it records every command that modifies data (reads are not recorded), only ever appending to the file, never rewriting it in place. To rebuild the data, Redis reads this file back; in other words, when Redis restarts it re-executes the write commands in the log file from front to back to complete data recovery.

If you don’t dig too deeply into the operation behind it, it can be simply interpreted as: After each operation is executed, persist it

AOF must be enabled explicitly, because RDB is used by default.

In the configuration file, we find the two lines that set the startup of the AOF and the name of its persistence file

  • appendonly no: no disables AOF; change it to yes to enable AOF

  • appendfilename "appendonly.aof" — the name of the persistence file

Here’s one way to modify its persistence

  • appendfsync always — sync on every write (costs performance)

  • appendfsync everysec — sync once per second; up to 1s of data may be lost

  • appendfsync no — no explicit sync; the operating system flushes the data itself, which is fastest

The default is unlimited appending; if the AOF file grows beyond 64MB, Redis forks a new process to rewrite (compact) the file. The related options:

no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
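The recovery idea behind AOF (replay every logged write in order) can be sketched in Python. This is a simplified model: real AOF stores commands in the RESP protocol, and the tiny SET/DEL dialect below is an assumption made for the demo:

```python
def replay_aof(log_lines):
    """Rebuild the dataset by replaying write commands from an AOF-style log.

    Only writes were ever logged; replaying them front to back restores
    the final state, exactly as Redis does on restart with AOF enabled.
    """
    data = {}
    for line in log_lines:
        parts = line.split()
        if parts[0] == 'SET':
            data[parts[1]] = parts[2]
        elif parts[0] == 'DEL':
            data.pop(parts[1], None)
    return data

# An append-only log: every write was appended, reads were never recorded
aof = ['SET name ideal', 'SET k1 v1', 'DEL k1', 'SET name ideal-20']
print(replay_aof(aof))   # {'name': 'ideal-20'}: replaying restores the final state
```

The rewrite step mentioned above is then just producing a shorter log with the same final state (here, the single line `SET name ideal-20`).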

(2) Repairing a corrupted AOF file

If the AOF file is corrupted, Redis will refuse to start.

Redis provides a tool for this: redis-check-aof --fix

#Example command
redis-check-aof --fix appendonly.aof

(3) the advantages and disadvantages

advantages

  1. Better file integrity, because every change can be synced
  2. appendfsync no gives the highest speed and efficiency (at the cost of durability)

disadvantages

  1. An AOF file is much larger than the corresponding RDB file, and repairing it is slower than with RDB
  2. AOF also runs slower than RDB, which is why the default Redis configuration uses RDB persistence
  3. With once-per-second syncing, up to one second of data may be lost

(3) Extension points (collected from the web; will be removed on request)

  1. If you only need the data to live as long as the server process is running, you can skip persistence entirely and use Redis purely as a cache

  2. Enable both persistence methods

    In this case, when Redis restarts it loads the AOF file first to recover the original data, because an AOF file normally holds a more complete data set than an RDB file.

    RDB data is not real time, and when both files exist the server only loads the AOF file, so should you use AOF alone? The author advises against it: RDB is better suited to backing up the database (the AOF file changes constantly) and to fast restarts, and keeping RDB guards against any potential bugs hiding in AOF.

  3. Performance suggestions

    Since RDB files are only used for backup, it is recommended to persist RDB only on the slaves, and one snapshot every 15 minutes is enough, i.e. save 900 1.

    If you enable AOF, the benefit is that in the worst case only two seconds of data are lost, and the startup script is simpler: it just loads the AOF file. The costs are:

    • continuous I/O;
    • at the end of each AOF rewrite, the blocking caused by writing the data collected during the rewrite into the new file is almost unavoidable.

    The default AOF rewrite base size (64MB) is too small and can be raised to 5GB or more; the rewrite percentage can stay at the default 100% of the original size.

    If AOF is not enabled, high availability can also be achieved with master-slave replication alone, which saves a large amount of I/O and avoids the system jitter that rewriting brings.

    • The price is that if the master and slave both go down at the same time, more than ten minutes of data may be lost; the startup script then has to compare the RDB files on the two machines and load the newer one.

Redis publishing and subscribing

(1) Concept

This part is not used very much and is included as a supplement. The definition below is adapted from one I found on Runoob.

Definition: Redis publish subscription (PUB/SUB) is a message communication mode: the sender (PUB) sends the message and the subscriber (sub) receives the message.

The Redis client can subscribe to any number of channels.

For example, take a channel channel1 and the three clients subscribed to it: client2, client5, and client1.

When a new message is sent to channel1 via PUBLISH, the message is delivered to each of the three subscribers.

(2) Commands

  • PSUBSCRIBE pattern [pattern ...] — subscribe to one or more channels matching the given patterns
  • PUNSUBSCRIBE pattern [pattern ...] — unsubscribe from one or more channels matching the given patterns
  • PUBSUB subcommand [argument [argument ...]] — inspect the state of the pub/sub system
  • PUBLISH channel message — publish a message to the specified channel
  • SUBSCRIBE channel [channel ...] — subscribe to the given channel(s)
  • UNSUBSCRIBE [channel [channel ...]] — unsubscribe from one or more channels

demo

------------ subscriber ------------
127.0.0.1:6379> SUBSCRIBE ideal-20        # subscribe to channel ideal-20
Reading messages... (press Ctrl-C to quit)   # waiting for messages
1) "subscribe"                            # subscription confirmed
2) "ideal-20"
3) (integer) 1
1) "message"                              # message received from ideal-20
2) "ideal-20"
3) "hello ideal"
1) "message"
2) "ideal-20"
3) "Hi, I am BWH_Steven"
------------ publisher ------------
127.0.0.1:6379> PUBLISH ideal-20 "hello ideal"        # publish a message
(integer) 1
127.0.0.1:6379> PUBLISH ideal-20 "Hi, I am BWH_Steven"
(integer) 1
------------ list active channels ------------
127.0.0.1:6379> PUBSUB channels
1) "ideal-20"

(3) A brief look at the principle

Each Redis server process maintains a redis.h/redisServer structure that represents the state of the server. The pubsub_channels attribute of the structure is a dictionary used to store information about subscription channels

  • The key of the dictionary is the channel that is being subscribed to, and the value of the dictionary is a linked list of all clients that subscribe to the channel

Example: in a pubsub_channels snapshot, client2, client5, and client1 are subscribed to channel1; other channels keep their own subscriber lists in the same way

With this structural concept in mind, the actions of subscribing and publishing are easy to understand:

  • SUBSCRIBE: When a client calls the SUBSCRIBE command to SUBSCRIBE to a channel, the program associates each client with the channel to SUBSCRIBE in pubsub_channels

  • Publish: The program first locates the dictionary key based on channel (for example, finding channel1), and then sends the information to all clients in the dictionary value list (for example, client2, client5, client1).
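The pubsub_channels structure described above can be sketched in a few lines of Python (an illustration, not Redis's actual C code; modeling each client as an "inbox" list is an assumption made for the demo):

```python
def subscribe(pubsub_channels, client, channel):
    """Append the client to the channel's subscriber list, as in pubsub_channels."""
    pubsub_channels.setdefault(channel, []).append(client)

def publish(pubsub_channels, channel, message):
    """Deliver the message to every subscriber of the channel; return the count."""
    receivers = pubsub_channels.get(channel, [])
    for client in receivers:
        client.append((channel, message))   # each client is modeled as an inbox list
    return len(receivers)

pubsub_channels = {}                 # key: channel name, value: list of subscribers
client2, client5, client1 = [], [], []
for c in (client2, client5, client1):
    subscribe(pubsub_channels, c, 'channel1')

print(publish(pubsub_channels, 'channel1', 'hello'))  # 3: delivered to all three
print(client5)                       # [('channel1', 'hello')]
print(publish(pubsub_channels, 'nobody', 'lost'))     # 0: no subscribers, message dropped
```

The last call also illustrates the reliability caveat below: a message published to a channel with no current subscribers is simply gone.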

(4) Disadvantages

  1. Delivery is not reliable: if a subscriber disconnects, every message published during the disconnection is lost to it
  2. If the client does not read the messages sent by the subscribed channel quickly enough, the backlog of messages will cause the Redis output cache to grow larger and larger, slowing down the Redis speed or crashing

(5) Application

  1. Multiplayer online chat room
  2. Message subscription, in the form of public accounts, but most of it is actually done in MQ (more on that later)

Redis master-slave replication

(1) Reasons for use

First, using only a single Redis server in a project is bound to cause problems:

  • A single server handling all requests is too stressful and prone to failures, which can cause problems for the entire associated service

  • A server has a limited amount of memory and it is impossible to use all of it for Redis storage (no more than 20GB is recommended)

  • In most scenarios, most of the operations are read, and there are relatively few write operations, so the read requirements will be higher

Master-slave replication, combined with read/write splitting, addresses these points.

(2) Concept

Primary/secondary replication refers to the replication of data from one Redis server to other Redis servers

  • The former is called the Master node (Master/Leader) and the latter is called the Slave node (Slave/Follower)

  • Data replication is one-way! Replication can only be performed from the primary node to the secondary node (write is the primary node and read is the secondary node).

One server acts as the master machine and the other servers act as slave machines. They are linked by command or by configuration so that the slaves fetch data from the master; the slaves can then take a large share of the read requests off the master.

(3) Function

  1. Data redundancy: master-slave replication provides a hot backup of the data, a layer of redundancy on top of persistence.
  2. Fault recovery: When the primary node fails, the secondary node can temporarily replace the primary node to provide services
  3. Load balancing: On the basis of master/slave replication, with read/write separation, the master node performs write operations and the slave node performs read operations to share server load. In particular, multiple slave nodes are used to share the load to improve the concurrency.
  4. High availability cornerstone: Master-slave replication is the foundation upon which sentry and clustering can be implemented.

(IV) Cluster environment construction (simulation)

In the normal case, there should be multiple different servers. For the sake of demonstration, here we use several different ports to simulate different Redis servers

First of all, to use different ports, we naturally need multiple different configuration files. We will first make three copies of the original configuration files (respectively representing one host and two slave machines).

#Myconfig file is located in the myconfig directory under the redis startup directory
[root@centos7 ~]# cd /usr/local/bin 
[root@centos7 bin]# ls
appendonly.aof  dump.rdb  myconfig  redis-benchmark  redis-check-aof  redis-check-rdb  redis-cli  redis-sentinel  redis-server  temp-2415.rdb
[root@centos7 bin]# cd myconfig/
[root@centos7 myconfig]# ls
redis.conf

#Make three copies and name them according to the port number
[root@centos7 myconfig]# cp redis.conf redis6379.conf
[root@centos7 myconfig]# cp redis.conf redis6380.conf
[root@centos7 myconfig]# cp redis.conf redis6381.conf

#Now there are three copies, one per port
[root@centos7 myconfig]# ls
redis6379.conf  redis6380.conf  redis6381.conf  redis.conf

After copying, use vim to change each configuration file's port, daemonize, pidfile, logfile, and dbfilename

Such as:

port 6380
daemonize yes
pidfile /var/run/redis_6380.pid
logfile "6380.log"
dbfilename dump6380.rdb

Open two more windows in Xshell and run Redis with the different port numbers

Run the Redis service in the first window, using the 6379 configuration file

[root@centos7 bin]# redis-server myconfig/redis6379.conf

Same thing for the other two, 6380 and 6381

Check that Redis is enabled for all three ports

(5) One master, two slaves

"One master, two slaves" means one master machine and two slave machines. Every Redis server is a master by default, so the servers we simulated above are all still masters, with no relationship to each other yet.

You can run the info replication command on the client to view the current information

127.0.0.1:6379> info replication
# Replication
role:master        # currently a master
connected_slaves:0
master_replid:bfee90411a4ee99e80ace78ee587fdb7b564b4b4
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

Note: In the following demonstration, the host port number is 6379, and the two slave machines are 6380 and 6381 respectively

(1) By command (temporary)

To make a server the slave of a master, run: SLAVEOF 127.0.0.1 6379

Run it in the 6380 and 6381 windows respectively

Then query the information of the slave machine itself, such as query 6380

127.0.0.1:6380> info replication
# Replication
role:slave         # now a slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:364
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:bd7b9c5f3bb1287211b23a3f62e41f24e009b77e
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:364
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:85
repl_backlog_histlen:280

Likewise, on the master you can see that two slave servers are connected

127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6381,state=online,offset=84,lag=0   # first slave
slave1:ip=127.0.0.1,port=6380,state=online,offset=84,lag=0   # second slave
master_replid:bd7b9c5f3bb1287211b23a3f62e41f24e009b77e
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:84
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:84

(2) How to modify the configuration file (permanent)

The command approach above must be repeated after every restart; writing it into the configuration file makes it load automatically. In each slave's configuration file, set replicaof followed by the master's IP address and port.

If the master has a password set, don't forget to also put the master's password in masterauth in the slave's configuration.

(3) Explanation of rules

  1. A slave can only read, not write; the master can both read and write but is mostly used for writing.

    • Writing to a slave returns: (error) READONLY You can't write against a read only replica.
  2. If the master goes down or loses power, the roles do not change by default: the slaves stay slaves, the cluster merely loses write capability, and everything returns to normal once the master recovers.

    • If the master fails and a slave must take over, there are two options:
      • ① Manually run slaveof no one on a slave to make it a master
      • ② Use sentinel mode for automatic election
  3. If a slave that was configured by command goes down and restarts, it comes back as a master with no relationship to the old master. Re-run slaveof (or use the configuration file) and it will fetch all of the master's data again.

(4) Replication principle

After a slave starts and connects to the master successfully, it sends a sync command.

On receiving the command, the master starts a background save process and buffers every command that modifies the data set; when the background save finishes, the master sends the whole data file to the slave, completing one full synchronization.

Full replication: after receiving the database file, the slave saves it and loads it into memory.

Incremental replication: the master then sends each newly collected write command to the slaves in turn to keep them in sync.

Whenever a slave reconnects to the master, a full synchronization (full replication) is performed automatically, after which all the data is visible on the slave.
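The full-then-incremental flow above can be sketched in Python (a conceptual model, not the real replication protocol; the `Slave` class and its command tuples are invented for this illustration):

```python
class Slave:
    """Toy model of a replica: full sync first, then per-command updates."""
    def __init__(self):
        self.data = {}

    def full_sync(self, master_data):
        # Full replication: load the master's whole dataset (like receiving an RDB file)
        self.data = dict(master_data)

    def apply(self, cmd):
        # Incremental replication: re-execute one propagated write command
        op, key, *rest = cmd
        if op == 'SET':
            self.data[key] = rest[0]
        elif op == 'DEL':
            self.data.pop(key, None)

master = {'k1': 'v1', 'k2': 'v2'}
slave = Slave()
slave.full_sync(master)              # first connection -> full copy
slave.apply(('SET', 'k3', 'v3'))     # later writes stream in one by one
slave.apply(('DEL', 'k1'))
print(slave.data)                    # {'k2': 'v2', 'k3': 'v3'}
```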

Sentinel mode

(1) Concept

In master-slave replication as described above, if the master goes down, a slave has to be switched to master manually, which is cumbersome. The alternative, and the recommended one, is sentinel mode.

Definition: sentinel mode is a special mode. Redis provides the sentinel command, and the sentinel is an independent process that runs on its own. The idea is that the sentinel monitors multiple running Redis instances by sending them commands and waiting for each Redis server to respond.

Its functions are as follows:

  • By sending commands and reading the responses, it monitors the running state of both the master and the slave servers.
  • When the sentinel detects that the master is down, it automatically promotes a slave to master and, via publish/subscribe, notifies the other slaves to update their configuration and switch to the new master.

Single-sentinel vs. multi-sentinel mode:

Single sentinel: one independent sentinel process monitors the three Redis servers.

Multiple sentinels: besides monitoring the Redis servers, the sentinels also monitor each other.

(2) Configuration and startup

The redis-sentinel binary in the Redis startup directory is the sentinel we want to start, but it needs a configuration file so that it knows whom to monitor

I created a configuration file called sentinel.conf in myconfig under my Redis startup directory /usr/local/bin/

[root@centos7 bin]# vim myconfig/sentinel.conf

The core configuration goes inside: monitor the master on our local port 6379. The trailing number 1 is the quorum: once that many sentinels agree the master is down, failover starts and a voting mechanism elects one slave as the new master

#sentinel monitor <name> <host> <port> <quorum>
sentinel monitor myredis 127.0.0.1 6379 1

Then we go back to the Redis startup directory and start Sentinel with the same configuration file

[root@centos7 bin]# redis-sentinel myconfig/sentinel.conf

The successful startup is shown as follows:

Once the master goes offline, the sentinel detects it and a vote is held (with only one sentinel, it passes trivially); a slave is switched over to become the new master. If the old master comes back online, it can only rejoin as a slave of the new master.

The log the sentinel prints shows exactly this: after master 6379 dropped, the sentinel marked it as down, the failover below it elected 6380 as the new master, and when 6379 came back online it could only serve as a slave of 6380.

Checking the info of 6380 confirms that it has indeed become the master

127.0.0.1:6380> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6381,state=online,offset=147896,lag=0
slave1:ip=127.0.0.1,port=6379,state=online,offset=147764,lag=0
master_replid:d32e400babb8bfdabfd8ea1d3fc559f714ef0d5a
master_replid2:bd7b9c5f3bb1287211b23a3f62e41f24e009b77e
master_repl_offset:147896
second_repl_offset:7221
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:85
repl_backlog_histlen:147812

(3) The complete configuration file

The only line strictly required is: sentinel monitor mymaster 127.0.0.1 6379 1

The port commonly needs changing too; everything else can be set as the situation requires.

The full configuration file is complex:

# Example sentinel.conf
 
#The sentinel instance runs on port 26379 by default
port 26379
 
#Sentinel's working directory
dir /tmp
 
#The IP and port of the Redis master node monitored by sentinel
#master-name names the master node; pick it yourself, using only letters a-z, digits 0-9 and the characters ".-_"
#quorum: when this many sentinels consider the master node unreachable, it is objectively considered down
# sentinel monitor <master-name> <ip> <redis-port> <quorum>
sentinel monitor mymaster 127.0.0.1 6379 1
#When RequirePass Foobared authorization password is enabled in a Redis instance all clients connected to the Redis instance must provide the password
#Set the sentinel connection password for the primary and secondary nodes. Notice The primary and secondary nodes must have the same authentication password
# sentinel auth-pass <master-name> <password>
sentinel auth-pass mymaster MySUPER--secret-0123passw0rd
 
 
#Number of milliseconds after which, if the master node has not answered the sentinel, the sentinel subjectively considers it offline; the default is 30 seconds
# sentinel down-after-milliseconds <master-name> <milliseconds>
sentinel down-after-milliseconds mymaster 30000
 
#The maximum number of slaves that may synchronize against the new master at the same time during a failover.
#The smaller the number, the longer the failover takes; the larger the number, the more slaves are briefly unavailable because of replication. Setting it to 1 ensures only one slave at a time is unable to serve command requests.
# sentinel parallel-syncs <master-name> <numslaves>
sentinel parallel-syncs mymaster 1
sentinel parallel-syncs mymaster 1
 
 
 
#Failover -timeout can be used for the following purposes: 
#1. Interval between two failover operations performed by a sentinel for a master.
#2. Time is calculated when a slave synchronizes data from an incorrect master. Until the slave is corrected to synchronize data to the correct master.
#3. Time required to cancel an ongoing failover.  
#4. When performing failover, configure the maximum time that all Slaves point to the new master. However, even after this timeout, slaves will still be configured correctly to point to the master, but not according to the rules configured for parallel Syncs
#Default three minutes
# sentinel failover-timeout <master-name> <milliseconds>
sentinel failover-timeout mymaster 180000
 
# SCRIPTS EXECUTION
 
#This section describes how to configure scripts to be executed when an event occurs. You can use scripts to notify the administrator. For example, if the system is not running properly, you can send an email to inform related personnel.
#The following rules apply to the result of running a script:
#If 1 is returned after the script is executed, the script will be executed again later. The default number of times is 10
#If the script returns 2 after execution, or a value higher than 2, the script will not be executed again.
#If the script is terminated during execution due to a system interrupt signal, it behaves the same as if the value is 1.
#The maximum execution time of a script is 60 seconds. If this time is exceeded, the script will be terminated by a SIGKILL signal and executed again.
 
#Notification script: This script will be called when sentinel has any warning level events (such as subjective and objective failures of redis instances, etc.)
#The script should then notify the system administrator of system malfunctions via email, SMS, etc. When the script is called, it is passed two parameters,
#One is the type of event,
#One is the description of events.
#If the script path is configured in the sentinel.conf configuration file, the script must exist in the path and be executable, otherwise sentinel will not start successfully.
#Notify the script
# sentinel notification-script <master-name> <script-path>
  sentinel notification-script mymaster /var/redis/notify.sh
 
#The client reconfigures the master node parameter script
#This script is called when a master changes due to a failover, notifying the client that the master address has changed.
#The following parameters will be passed to the script when it is called:
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
#Currently <state> is always "failover",
#<role> is one of "leader" or "observer".
#The from-ip, from-port, to-ip, to-port parameters give the addresses of the old master and of the new master (i.e. the promoted old slave)
#The script should be general-purpose and safe to call multiple times, not tied to a specific failover.
# sentinel client-reconfig-script <master-name> <script-path>
sentinel client-reconfig-script mymaster /var/redis/reconfig.sh

Redis cache penetration, breakdown, and avalanche

This part is supplementary knowledge. The focus of this article is still a basic introduction to Redis, while the following points concern problems that arise in specific scenarios. Each of them is complicated to expand on fully, so only the basic concepts are introduced here without detailed explanation.

(1) Cache penetration

When a user queries data, the Redis cache is checked first. If the data is not there (a cache miss), the query falls through to the persistence-layer database, such as MySQL.

Cache penetration: when a large number of requests miss the cache, they all reach the persistence-layer database, which comes under heavy pressure and may fail.

There are two common solutions:

  • Bloom filter
  • Caching empty objects

① Bloom filter:

All possibly valid query keys are stored in hashed form, so a lookup can quickly determine whether a value could exist. Requests are intercepted and checked at the control layer first; requests that fail the check are rejected immediately rather than forwarded, which relieves pressure on the storage system.
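
A minimal sketch of the Bloom-filter idea described above (the class name, sizes, and use of SHA-1 to derive bit positions are all illustrative assumptions, not part of any Redis API):

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter sketch: k hash positions over an m-bit array."""

    def __init__(self, m=1024, k=3):
        self.m = m                 # number of bits
        self.k = k                 # number of hash functions
        self.bits = [False] * m

    def _positions(self, key):
        # Derive k bit positions from salted SHA-1 digests of the key.
        for i in range(self.k):
            digest = hashlib.sha1(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos] = True

    def might_contain(self, key):
        # False means "definitely absent"; True means "possibly present"
        # (Bloom filters can report false positives, never false negatives).
        return all(self.bits[pos] for pos in self._positions(key))
```

In practice, all legitimate keys are preloaded into the filter at startup; a query whose key the filter reports as definitely absent is rejected at the control layer and never reaches Redis or MySQL.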

② Cache empty object:

When a request finds the data in neither the cache nor the database, an empty object is written to the cache, so subsequent requests for the same key are answered from the cache instead of hitting the database.

However, there are two problems with this approach:

  • Caching null values requires extra space to store them
  • Even if an expiration time is set on null values, the cache layer and storage layer will be inconsistent for a period of time, which affects services that need to maintain consistency
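
The empty-object strategy above can be sketched as follows. Plain dicts stand in for Redis (cache) and MySQL (db); the key names, TTLs, and helper are illustrative assumptions:

```python
import time

NULL = object()            # sentinel marking "key known to be absent"
cache = {}                 # stand-in for Redis: key -> (value, expires_at)
db = {"user:1": "Alice"}   # stand-in for the persistence-layer database

def get(key, ttl=60, null_ttl=5):
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        value = entry[0]
        return None if value is NULL else value   # cache hit (maybe a cached null)
    value = db.get(key)                           # cache miss: query the database
    if value is None:
        # Cache the miss with a short TTL so repeated lookups for a
        # nonexistent key stop reaching the database.
        cache[key] = (NULL, time.time() + null_ttl)
        return None
    cache[key] = (value, time.time() + ttl)
    return value
```

The short `null_ttl` limits both drawbacks listed above: cached nulls take space only briefly, and the window of inconsistency is bounded.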

(2) Cache breakdown

Definition: cache breakdown occurs when a single key is extremely hot and carries heavy, sustained concurrency all focused on that one point. The moment the key expires, the ongoing concurrency pierces the cache and hits the database directly, like punching a hole in a barrier.

Solution:

  1. Set the hotspot data to never expire

    In this way, hot data never expires. However, Redis will still evict data when memory fills up, and this scheme consumes space: once there is too much hot data, it will occupy a significant amount of memory.

  2. Add mutex (distributed lock)

    Before accessing the key, SETNX (SET if Not eXists) is used to create a separate short-lived lock key; the lock key is deleted after the access completes. This ensures that only one thread rebuilds the cache at a time, so the demands on the lock are high.
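
A sketch of the SETNX mutex pattern under assumptions: a dict stands in for Redis, and `setnx_ex` / `get_hot_key` are illustrative helpers, not redis-py APIs:

```python
import time

store = {}  # stand-in for Redis: key -> (value, expires_at)

def setnx_ex(key, value, ttl):
    """Emulate SET key value NX EX ttl: succeed only if the key is absent or expired."""
    entry = store.get(key)
    if entry and entry[1] > time.time():
        return False
    store[key] = (value, time.time() + ttl)
    return True

def get_hot_key(key, load_from_db):
    entry = store.get(key)
    if entry and entry[1] > time.time():
        return entry[0]                      # cache hit
    lock = "lock:" + key
    if setnx_ex(lock, "1", ttl=10):          # short-lived lock key
        try:
            value = load_from_db(key)        # only one caller rebuilds the cache
            store[key] = (value, time.time() + 60)
            return value
        finally:
            store.pop(lock, None)            # release the lock when done
    time.sleep(0.05)                         # everyone else backs off and retries
    return get_hot_key(key, load_from_db)
```

The lock key's own TTL matters: if the holder crashes before deleting it, the lock must expire on its own, otherwise the key can never be rebuilt.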

(3) Cache avalanche

A large number of keys are set with the same expiration time, so they all expire at the same moment. The resulting flood of database requests causes a sudden spike in pressure, i.e. an avalanche.

Solution:

① Redis is highly available

  • The idea is that since any single Redis instance may fail, we add more Redis instances so the others can keep working after one goes down; in practice this means building a cluster

② Current limiting degradation

  • The idea behind this solution is to control, by locking or queuing, the number of threads that may read the database and write the cache after a cache entry expires. For example, allow only one thread per key to query the data and write it back to the cache while the other threads wait

③ Data preheating

  • The idea of data preheating is to pre-access likely data before deployment, so that heavily accessed data is already loaded into the cache. Before a burst of concurrent access arrives, manually trigger the loading of the various cache keys and set different expiration times, so that cache expirations are spread as evenly as possible
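
The "different expiration times" part of preheating is usually done by adding random jitter to a base TTL, so keys loaded in the same batch do not expire together. A small sketch (the base TTL, jitter range, and function name are illustrative assumptions):

```python
import random

def jittered_ttl(base_seconds, jitter_seconds=300):
    """Return the base TTL plus a random offset in [0, jitter_seconds]."""
    return base_seconds + random.randint(0, jitter_seconds)

# During preheating, each key would get a slightly different expiry, e.g.
# with a real redis-py client:
#   r.setex(key, jittered_ttl(3600), value)
```

With a one-hour base TTL and five minutes of jitter, expirations spread over a 300-second window instead of all landing on the same second.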