Redis Advanced applications

1. Basic applications

1.1. Transaction Management

Redis supports transactions, allowing multiple commands to be batched and executed together; one transaction is treated as one atomic operation.

Commands in a transaction are executed serially, in the order they were queued, and are not interrupted by commands from other clients.

Either all of the queued commands are executed (on exec) or none are (when the transaction is discarded); note, however, that Redis does not roll back when an individual command fails.

The execution of a transaction goes through three phases: starting the transaction, queueing the commands, and executing the transaction. After a transaction is started with multi, subsequent commands are queued and then executed together with exec.

127.0.0.1:7379> multi
OK
127.0.0.1:7379> set book java
QUEUED
127.0.0.1:7379> set book python
QUEUED
127.0.0.1:7379> exec
1) OK
2) OK
127.0.0.1:7379> get book
"python"

Redis supports cancelling a transaction with discard, in which case none of the queued commands are executed.

127.0.0.1:7379> multi
OK
127.0.0.1:7379> set book c++
QUEUED
127.0.0.1:7379> discard
OK

Redis does not support transaction rollback when a command fails at execution time. When the transaction runs, the commands before the failing command execute normally and their changes are kept, and the commands after it also execute successfully; only the failing command itself has no effect.

127.0.0.1:7379> multi
OK
127.0.0.1:7379> set book c
QUEUED
127.0.0.1:7379> incr book
QUEUED
127.0.0.1:7379> set book c++
QUEUED
127.0.0.1:7379> exec
1) OK
2) (error) ERR value is not an integer or out of range
3) OK
127.0.0.1:7379> get book
"c++"
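
The semantics above (every queued command runs in order, and a runtime error in one command does not undo the others) can be sketched as a small simulation. This is an illustration only, with hypothetical class and method names; it is not Jedis or the Redis implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

// Sketch of Redis transaction semantics: MULTI queues commands, EXEC runs
// them in order, and a runtime error in one command does NOT roll back the
// others. All names here are hypothetical; this is not the Redis source.
public class TxSemantics {
    private final Map<String, String> store = new HashMap<>();
    private final List<Function<Map<String, String>, String>> queue = new ArrayList<>();

    public void queueSet(String key, String value) {
        queue.add(db -> { db.put(key, value); return "OK"; });
    }

    public void queueIncr(String key) {
        queue.add(db -> {
            try {
                long n = Long.parseLong(db.getOrDefault(key, "0"));
                db.put(key, Long.toString(n + 1));
                return "OK";
            } catch (NumberFormatException e) {
                // this command fails, but execution of the queue continues
                return "(error) ERR value is not an integer or out of range";
            }
        });
    }

    public List<String> exec() { // run every queued command, collect replies
        List<String> replies = new ArrayList<>();
        queue.forEach(cmd -> replies.add(cmd.apply(store)));
        queue.clear();
        return replies;
    }

    public String get(String key) { return store.get(key); }

    public static void main(String[] args) {
        TxSemantics tx = new TxSemantics();
        tx.queueSet("book", "c");
        tx.queueIncr("book");        // will fail: "c" is not an integer
        tx.queueSet("book", "c++");
        System.out.println(tx.exec());      // second reply is an error
        System.out.println(tx.get("book")); // the third command still applied
    }
}
```

The second reply is an error, yet the third command is still applied, matching the exec output shown above.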

1.2. Publish and Subscribe

A Redis client can subscribe to any number of channels to receive data. This resembles the observer pattern: each observer receives a message whenever the subject changes and handles it accordingly.

When a message is published to a channel, all clients subscribed to the channel receive the message.

  • Create and subscribe channels
127.0.0.1:7379> subscribe msg
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "msg"
3) (integer) 1
  • Push a message to a channel
127.0.0.1:7379> publish msg "hello"
(integer) 1
  • The subscribing client receives the message
127.0.0.1:7379> subscribe msg
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "msg"
3) (integer) 1
1) "message"
2) "msg"
3) "hello"
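
The observer-style fan-out above can be sketched in plain Java: when a message is published to a channel, every subscriber of that channel receives it. This is a hypothetical illustration only, not how Redis implements pub/sub over client connections.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal sketch of the observer mechanism behind publish/subscribe:
// every client subscribed to a channel receives each published message.
public class PubSubSketch {
    private final Map<String, List<Consumer<String>>> channels = new HashMap<>();

    public void subscribe(String channel, Consumer<String> client) {
        channels.computeIfAbsent(channel, c -> new ArrayList<>()).add(client);
    }

    public int publish(String channel, String message) {
        List<Consumer<String>> subs = channels.getOrDefault(channel, List.of());
        subs.forEach(s -> s.accept(message)); // fan out to every subscriber
        return subs.size(); // number of clients that received the message
    }

    public static void main(String[] args) {
        PubSubSketch bus = new PubSubSketch();
        List<String> received = new ArrayList<>();
        bus.subscribe("msg", received::add);
        System.out.println(bus.publish("msg", "hello")); // 1
        System.out.println(received);                    // [hello]
    }
}
```

Here publish returns the number of clients that received the message, mirroring the (integer) 1 reply shown above.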

1.3. Connect to Redis

Connections are made through Jedis; import the related dependency first.

<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.7.2</version>
</dependency>

After the dependency is imported, a connection can be established. Jedis provides a simple single-node connection that only needs the address and port; a password can also be supplied when access control is enabled.

Jedis jedis = new Jedis("ip", port);

If a connection cannot be obtained at runtime, check whether the remote server's firewall port is open, or change the bind address. By default Redis binds 127.0.0.1, which only allows local connections; to accept remote connections, change bind to the server's internal IP address.

In real framework development, creating and releasing single-node connections by hand is inconvenient, so a connection pool is used to create, allocate, and release connections.

@Configuration
@PropertySource("classpath:redisConfig.properties")
public class RedisConfig {
    /** Host name */
    @Value("${redis.host}")
    private String host;

    /** Port */
    @Value("${redis.port}")
    private Integer port;

    /** Password */
    @Value("${redis.password}")
    private String password;

    /** Redis connection pool */
    @Bean(name = "jedisPool")
    public JedisPool jedisPool() {
        return new JedisPool(init(), host, port, 3000, password);
    }

    /**
     * Initial pool configuration
     * @return the pool configuration
     */
    public JedisPoolConfig init() {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(10);
        config.setMaxIdle(10);
        config.setMaxWaitMillis(5 * 1000);
        config.setTestOnBorrow(false);
        config.setTestOnReturn(true);
        return config;
    }
}

2. Data persistence

Although Redis stores data in memory, it also supports disk persistence, so that in-memory data is not lost in an emergency; in other words, the in-memory data can be backed up persistently.

2.1. RDB

RDB persistence writes the in-memory data at a point in time to a temporary binary file, which then replaces the previous persistence file. It is enabled by default, backing up data at intervals that can be set in the configuration file. It suits scenarios where durability requirements are not strict.

  • advantages

    A separate child process is used for persistence, avoiding IO operations by the main process and ensuring the high performance of Redis.

  • disadvantages

    If an abnormal failure occurs during the interval, data will be lost during that time.

save 900 1                      # snapshot if at least 1 change in 900 seconds
save 300 10                     # snapshot if at least 10 changes in 300 seconds
save 60 10000                   # snapshot if at least 10000 changes in 60 seconds
stop-writes-on-bgsave-error yes # stop accepting writes if the background save fails
dbfilename dump.rdb             # RDB file name
dir ./                          # save path
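
The save rules can be read as a simple predicate: take a snapshot as soon as any rule's change count is reached within its time window. A sketch of that logic with hypothetical names, not the Redis source:

```java
// Hypothetical sketch of how the `save <seconds> <changes>` rules are
// evaluated: snapshot when, for any rule, at least <changes> writes have
// happened and at least <seconds> have elapsed since the last snapshot.
public class SaveRules {
    static final int[][] RULES = { {900, 1}, {300, 10}, {60, 10000} };

    static boolean shouldSnapshot(long secondsSinceLastSave, long changesSinceLastSave) {
        for (int[] rule : RULES) {
            if (secondsSinceLastSave >= rule[0] && changesSinceLastSave >= rule[1]) {
                return true; // one satisfied rule is enough
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(shouldSnapshot(901, 1));  // true: 1 change in 900s
        System.out.println(shouldSnapshot(61, 500)); // false: no rule satisfied yet
    }
}
```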

2.2. AOF

AOF (Append-Only File) appends each operation, formatted as a command plus its data, to the end of the operation file; the data update takes effect after the log entry is written, so the file is effectively a history file. During recovery, the server reads the file and replays all the operations to restore the data.

  • advantages

Relatively reliable and convenient to inspect; it maintains higher data integrity, and a short append interval keeps the probability of data loss low. When the file grows too large, the system automatically merges commands during a rewrite, dropping commands that are no longer needed.

  • disadvantages

Files are larger than RDB files and recovery is slower; if the tail of the file is written incompletely, it cannot be replayed as-is for recovery.

AOF can be thought of simply as a log file. A long run of changes would record a huge number of operations, producing an unusually large file and a slow recovery. In practice, however, only the current state of each piece of data matters, and historical operations can be discarded; this is what AOF rewrite does.
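
The idea behind the rewrite can be sketched as follows: replay the log to the final state, then emit one command per live key. Illustrative only; it handles only SET commands, whereas the real rewrite covers every data type.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the idea behind AOF rewrite: many historical SET commands for
// the same key collapse into one command that reproduces the current state.
public class AofRewriteSketch {
    static List<String> rewrite(List<String[]> log) {
        Map<String, String> state = new LinkedHashMap<>();
        for (String[] cmd : log) {        // replay the log to the final state
            state.put(cmd[1], cmd[2]);    // cmd = {"SET", key, value}
        }
        List<String> compact = new ArrayList<>();
        state.forEach((k, v) -> compact.add("SET " + k + " " + v));
        return compact;                   // one command per live key
    }

    public static void main(String[] args) {
        List<String[]> log = List.of(
            new String[]{"SET", "book", "java"},
            new String[]{"SET", "book", "python"},
            new String[]{"SET", "name", "zhangsan"},
            new String[]{"SET", "book", "c++"});
        System.out.println(rewrite(log)); // [SET book c++, SET name zhangsan]
    }
}
```

Four logged commands compact to two: the two intermediate values of book are discarded.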

The nature of AOF makes it relatively safe; it is a good choice when less data loss is expected in exceptional cases. However, the integrity of the AOF file must be checked when the server restarts after a failure; if incomplete records are found, they can be repaired manually or with a tool so that recovery can proceed.

appendonly no                     # enable AOF (set to yes)
appendfilename "appendonly.aof"   # AOF file name
appendfsync everysec              # sync strategy: always / everysec / no
no-appendfsync-on-rewrite no      # defer file synchronization during rewrite
auto-aof-rewrite-percentage 100   # growth percentage that triggers the next rewrite
auto-aof-rewrite-min-size 64mb    # minimum file size before rewriting

AOF means file operations, so intensive writes inevitably increase disk I/O load. Moreover, Linux optimizes file writes with delayed writing (a write first enters a queue and is only flushed to disk once a threshold is reached), so a failure during this window can lose multiple records.

Strategy    Behavior                                  Drawback
always      sync immediately after each operation     heavy I/O load
everysec    sync once per second                      up to one second of records may be lost
no          the operating system handles syncing      data loss depends on OS configuration

2.3. Comparison

            RDB                                                  AOF
Mechanism   write data to a temporary file at intervals, then    append operations and data, as commands,
            replace the previous data file with it               to the end of the file
Advantage   runs in a separate child process, so overall         higher data integrity
            performance is unaffected
Drawback    data changed since the last snapshot is lost on      files are relatively large and recovery
            failure                                              is relatively slow

3. Master/slave replication

3.1. Question extraction

Persistence ensures that data will not be lost even if the server is restarted, but data can be lost if the hard disk storing data files or log files is damaged. This single point of failure can be avoided by using a master-slave replication mechanism.

  • Data is synchronized between the master machine and slave machine in real time. When the master machine writes data, the data is automatically copied to the slave machine.
  • The master/slave replication does not block the host and can continue processing client requests.

3.2. Construction steps

In practice, one master with one or two slaves is generally used.

  1. Copy a slave machine as user root

    [root@localhost home]# cp redis/ redis1 -r
    [root@localhost home]# ll
  2. Bind the slave machine to the master in its configuration file

    replicaof 127.0.0.1 6379
  3. Add host password authentication

    masterauth 123456
  4. Modify the slave machine's listening port

    port 6380
  5. Clear persistence files

    [root@localhost bin]# rm -rf dump.rdb
  6. Start the slave and connect to it

    [root@localhost redis1]# ./bin/redis-cli -h 127.0.0.1 -p 6380
    127.0.0.1:6380> auth beordie
    OK
    127.0.0.1:6380> keys *
    1) "lisi"
    2) "fruit"
    3) "book"
    4) "name"

The output shows that the slave has synchronized the master's data. The slave's startup log also records the synchronization with the master. Whenever the master's data changes, the change is propagated to the slave; the slave, however, can only read data, not write it.

127.0.0.1:6380> get username
"zhangsan"
127.0.0.1:6380> set username lisi
(error) READONLY You can't write against a read only slave.

3.3. Principle of Replication

  • When the master/slave relationship is established, the slave sends a SYNC command to the master;
  • On receiving SYNC, the master saves a snapshot in the background and buffers the commands received while the snapshot runs;
  • Once the snapshot is complete, the master sends the snapshot file and all the buffered commands to the slave;
  • The slave loads the snapshot file and then executes the buffered commands;
  • From then on, every write command the master receives is forwarded to the slave, keeping the data consistent;
  • Because this whole process is internal, the slave does not accept manual writes from clients.
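
The steps above can be walked through with a toy simulation (hypothetical names; a sketch of the flow, not the Redis implementation): snapshot the master, buffer writes that arrive meanwhile, load the snapshot on the slave, then replay the buffer.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative walk-through of the SYNC flow: the master snapshots its data,
// writes arriving during the snapshot are applied to the master and buffered,
// the slave loads the snapshot and then replays the buffered writes.
public class SyncFlowSketch {
    static Map<String, String> fullSync(Map<String, String> master,
                                        Map<String, String> writesDuringSnapshot) {
        Map<String, String> snapshot = new HashMap<>(master);  // background save
        master.putAll(writesDuringSnapshot);                   // master applies the
                                                               // buffered writes too
        Map<String, String> slave = new HashMap<>(snapshot);   // slave loads snapshot
        slave.putAll(writesDuringSnapshot);                    // slave replays buffer
        return slave;
    }

    public static void main(String[] args) {
        Map<String, String> master = new HashMap<>(Map.of("book", "java"));
        Map<String, String> slave = fullSync(master, Map.of("name", "zhangsan"));
        System.out.println(slave.equals(master)); // true: slave ends up consistent
    }
}
```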

4. Sentinel mode

4.1. Question extraction

In the replication architecture, nodes can go down. If a slave goes down, simply restart it. If the master goes down, run the slaveof no one command on a slave to break the master/slave relationship and promote it to master; that slave then becomes the new master node and can accept writes. Once the old master is fully repaired, run slaveof on it to make it a slave; in effect, the two nodes swap roles.

Such manual switching is tedious and error-prone, so a mechanism is needed that automatically selects and replaces the master node when an error occurs.

4.2. Sentinel mode

The sentinel is an independent process (its executable can be found in the bin directory) that monitors the running Redis system; in other words, a node that stands guard.

  • Responsible for monitoring the normal operation of the primary and secondary databases
  • Responsible for promoting the secondary database to the primary database if the primary database fails

4.3. Configure sentinels

  1. The configuration file

    Generally, sentry monitors the running status of the primary node. Therefore, you need to specify the port of the primary node in the configuration file.

    #Go to the bin directory to create a configuration file
    [root@localhost bin]# vim sentinel.conf
    #Adding Configuration Information
    port 26379
    #The listening port (default)
    sentinel monitor master-name ip port count
    #master-name is an arbitrary name; ip and port identify the master node; count is the number of sentinels that must agree the master is down (the quorum)
    sentinel auth-pass master-name password
    #password must match the password of the master and slave nodes
    sentinel down-after-milliseconds master-name milliseconds
    #milliseconds of unreachability after which the master is considered down
    sentinel parallel-syncs master-name count
    #count is the number of slaves that resynchronize with the new master simultaneously during a failover
  2. Start the sentry

    Sentinel is a separate process that needs to ensure that the primary and secondary databases are up and running before starting.

    [root@localhost bin]# ./redis-sentinel ./sentinel.conf >sent.log &

    Inspecting the sentinel.conf file afterwards shows that the sentinel has rewritten it and is monitoring the service.

    sentinel myid 34302b166051b9cfeaddf69fc90920ced7d30b8b
    sentinel deny-scripts-reconfig yes
    # Generated by CONFIG REWRITE
    port 26379
    dir "/home/redis/bin"
    protected-mode no
    sentinel monitor redisMaster 172.19.231.182 6380 1
    sentinel auth-pass redisMaster 123456
    sentinel config-epoch redisMaster 5
    sentinel leader-epoch redisMaster 5
    sentinel known-replica redisMaster 172.19.231.182 6379
    sentinel current-epoch 5

    If you want to restart the sentinel process, remove the sentinel myid line from the configuration file; it is best to start each sentinel from a fresh configuration file.

    Although the sentinel ships as its own executable, redis-sentinel, it is really just a Redis server running in a special mode; a normal Redis server can be started as a sentinel by passing the --sentinel parameter.

  3. Kill the main library

    kill -9 pid    # pid is the process id of the master, found with ps -ef | grep redis

    Checking the replication info on a slave shows that it has been promoted to master. If the master/slave switchover does not complete automatically, check whether the passwords are consistent and whether protected mode is disabled on the slave.

    127.0.0.1:6380> info replication
    # Replication
    role:slave
    master_host:172.19.231.182
    master_port:6379
    127.0.0.1:6380 > info replication# Replication
    role:master

    When you start the host again, it is automatically configured as the secondary database.

5. Cluster solution

5.1. Redis-Cluster architecture

  • All nodes are interconnected through the PING PONG mechanism, and the binary protocol is used internally to optimize the transmission speed and bandwidth.
  • The failure judgment of a node is determined by the communication failure of more than half of the nodes in the cluster.
  • The client connects directly to the node and only needs to connect to any available node.
  • Redis-Cluster maps all nodes onto the 16384 slots numbered [0-16383]; as a whole, the cluster maintains the node-slot-value relationship.

There are 16384 hash slots in a Redis cluster. When a key-value pair needs to be stored, the cluster first runs the CRC16 algorithm on the key and takes the result modulo the number of slots to find the node that slot maps to, then writes the data to that node's database.
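
That slot computation can be sketched directly. A hedged illustration: the CRC16 variant below (CRC-16/XMODEM, polynomial 0x1021, initial value 0) is the one described for Redis Cluster key hashing, and the sketch ignores hash tags ({...} in key names), which the real cluster also honours.

```java
public class SlotDemo {
    // CRC-16/XMODEM: polynomial 0x1021, initial value 0, MSB first
    static int crc16(byte[] data) {
        int crc = 0;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? (crc << 1) ^ 0x1021 : crc << 1;
                crc &= 0xFFFF; // keep it a 16-bit value
            }
        }
        return crc;
    }

    // slot = CRC16(key) mod 16384 (hash tags omitted in this sketch)
    static int slotFor(String key) {
        return crc16(key.getBytes()) % 16384;
    }

    public static void main(String[] args) {
        System.out.println(slotFor("name")); // some slot in [0, 16383]
    }
}
```

The standard check value of this CRC variant is crc16("123456789") == 0x31C3, which is a quick way to validate an implementation.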

5.2. Redis-cluster voting

  • All the master nodes in the cluster participate in the voting; if more than half of them fail to communicate with a given node within the timeout, that node is considered down;
  • The timeout is specified by cluster-node-timeout;
  • If any master in the cluster fails and no slave can take over, the whole cluster enters the fail state, meaning the slots mapped to that node have no serving connection;
  • If more than half of the masters fail, the cluster enters the fail state whether or not they have slaves.
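
These rules can be condensed into two small predicates (hypothetical names; a sketch of the decision logic, not the cluster implementation):

```java
// Sketch of the fail rules above: a node is marked down once a strict
// majority of masters report a communication timeout with it, and the
// cluster fails when a dead master has no slave to take over its slots.
public class ClusterVoteSketch {
    static boolean nodeConsideredDown(int masters, int timeoutsReported) {
        return timeoutsReported > masters / 2; // more than half must agree
    }

    static boolean clusterFails(boolean someMasterDown, boolean hasReplacementSlave) {
        return someMasterDown && !hasReplacementSlave; // slot range unservable
    }

    public static void main(String[] args) {
        System.out.println(nodeConsideredDown(3, 2));  // true: 2 of 3 masters agree
        System.out.println(nodeConsideredDown(3, 1));  // false: no majority
        System.out.println(clusterFails(true, false)); // true: no slave to promote
    }
}
```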

5.3. Redis-cluster construction

  1. Create a directory for centralized cluster management

    [root@localhost redis]# mkdir redis-cluster
  2. Create six instances: three masters, each paired with one slave

    [root@localhost redis]# cp redis/ redis-cluster/redis-7001 -r
    [root@localhost redis]# cd redis-cluster/redis-7001

    Delete the persistence file to avoid startup failures, enable cluster support in the configuration file, and change the listening port.

    cluster-enabled yes
    port 7001

    Several additional machines are copied for cluster creation.

    [root@localhost redis-cluster]# cd redis-7002
    [root@localhost redis-7002]# vim redis.conf
    [root@localhost redis-7002]# cd ../redis-7003
    [root@localhost redis-7003]# vim redis.conf
    [root@localhost redis-7003]# cd ../redis-7004
    [root@localhost redis-7004]# vim redis.conf
    [root@localhost redis-7004]# cd ../redis-7005
    [root@localhost redis-7005]# vim redis.conf
    [root@localhost redis-7005]# cd ../redis-7006
    [root@localhost redis-7006]# vim redis.conf
  3. Write scripts to facilitate startup management

    cd redis-7001
    ./bin/redis-server ./redis.conf
    cd ..
    cd redis-7002
    ./bin/redis-server ./redis.conf
    cd ..
    cd redis-7003
    ./bin/redis-server ./redis.conf
    cd ..
    cd redis-7004
    ./bin/redis-server ./redis.conf
    cd ..
    cd redis-7005
    ./bin/redis-server ./redis.conf
    cd ..
    cd redis-7006
    ./bin/redis-server ./redis.conf

    Give the script execute permission and run it to start all the instances. To shut down the cluster, simply stop each instance and delete the persistence files and the automatically generated nodes.conf files.

    [root@beordie redis-cluster]# chmod u+x startall.sh
    [root@beordie redis-cluster]# ./startall.sh
    [root@beordie redis-cluster]# ps -ef | grep redis
    root 10008     1  0 18:45 ?  00:00:00 ./bin/redis-server 192.168.120.130:7001 [cluster]
    root 10010     1  0 18:45 ?  00:00:00 ./bin/redis-server 192.168.120.130:7002 [cluster]
    root 10018     1  0 18:45 ?  00:00:00 ./bin/redis-server 192.168.120.130:7003 [cluster]
    root 10023     1  0 18:45 ?  00:00:00 ./bin/redis-server 192.168.120.130:7004 [cluster]
    root 10028     1  0 18:45 ?  00:00:00 ./bin/redis-server 192.168.120.130:7005 [cluster]
    root 10033     1  0 18:45 ?  00:00:00 ./bin/redis-server 192.168.120.130:7006 [cluster]
  4. Create the cluster; this only needs to be run on one machine

    [root@beordie bin]# ./redis-cli --cluster create 192.168.120.130:7001 192.168.120.130:7002 192.168.120.130:7003 192.168.120.130:7004 192.168.120.130:7005 192.168.120.130:7006 --cluster-replicas 1 -a 123456
    >>> Performing hash slots allocation on 6 nodes...
    Master[0] -> Slots 0 - 5460
    Master[1] -> Slots 5461 - 10922
    Master[2] -> Slots 10923 - 16383
    Adding replica 192.168.120.130:7005 to 192.168.120.130:7001
    Adding replica 192.168.120.130:7006 to 192.168.120.130:7002
    Adding replica 192.168.120.130:7004 to 192.168.120.130:7003
  5. Connect the cluster

    [root@beordie bin]# ./redis-cli -h 192.168.120.130 -p 7001 -c	# -c connects in cluster mode
    192.168.120.130:7001> auth 123456
    192.168.120.130:7001> set name zhangsan
    -> Redirected to slot [5798] located at 192.168.120.130:7002

5.4. Redis-cluster information

  • Viewing Cluster Information

    192.168.120.130:7002> cluster info
    cluster_state:ok
    cluster_slots_assigned:16384
    cluster_slots_ok:16384
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:6
    cluster_size:3
    cluster_current_epoch:6
    cluster_my_epoch:2
    cluster_stats_messages_ping_sent:263
    cluster_stats_messages_pong_sent:256
    cluster_stats_messages_meet_sent:2
    cluster_stats_messages_sent:521
    cluster_stats_messages_ping_received:253
    cluster_stats_messages_pong_received:265
    cluster_stats_messages_meet_received:3
    cluster_stats_messages_received:521
  • Viewing Node Information

    192.168.120.130:7002> cluster nodes
    78e1fa7a1be93c54b7c95748e386999a84398044 192.168.120.130:7005@17005 slave bf36a0e07d21247c623e3e5ee3218c437019ebca 0 1642072159000 5 connected
    768202ebf2783e9ae4576950547e90a14cbc0268 192.168.120.130:7006@17006 slave 968c11987d525cdf7631192e432fd5950d04630c 0 1642072160000 6 connected
    968c11987d525cdf7631192e432fd5950d04630c 192.168.120.130:7001@17001 master - 0 1642072161333 1 connected 0-5460
    6bcf00a7f61a840197e7e3fad5c594cd0f44528f 192.168.120.130:7002@17002 myself,master - 0 1642072159000 2 connected 5461-10922
    bf36a0e07d21247c623e3e5ee3218c437019ebca 192.168.120.130:7003@17003 master - 0 1642072160315 3 connected 10923-16383
    2258831c38d3109b90201bedf665cb508df39a81 192.168.120.130:7004@17004 slave 6bcf00a7f61a840197e7e3fad5c594cd0f44528f 0 1642072162352 4 connected

5.5 Redis-cluster connection

// JedisCluster is typically held as a singleton
Set<HostAndPort> nodes = new HashSet<HostAndPort>();
nodes.add(new HostAndPort("192.168.120.130", 7001));
nodes.add(new HostAndPort("192.168.120.130", 7002));
nodes.add(new HostAndPort("192.168.120.130", 7003));
nodes.add(new HostAndPort("192.168.120.130", 7004));
nodes.add(new HostAndPort("192.168.120.130", 7005));
nodes.add(new HostAndPort("192.168.120.130", 7006));
JedisCluster cluster = new JedisCluster(nodes);
// Its methods mirror the single-node Redis commands
cluster.close();