Notes based on the Bilibili course by the UP "Crazy God" (狂神说): www.bilibili.com/video/BV1S5…

Redis

1. Introduction to NoSQL

NoSQL (Not Only SQL) refers to non-relational databases. NoSQL is a technology category that was proposed early on and has been growing in popularity since 2009.

2. Why NoSQL?

With the rise of Internet websites, traditional relational databases have struggled to cope with dynamic sites, especially very large-scale, highly concurrent ones, exposing many problems that are hard to overcome. For example, frequent queries of product data on an e-commerce site, ranking statistics for hot-selling products, order-timeout handling, and storage of WeChat Moments content (audio, video) are all awkward to implement with a traditional relational database: the features can be built, but the performance is not encouraging. NoSQL emerged as a category of technologies that solves these problems better; it tells the world "not only SQL".

3. Four categories of NoSQL

3.1 Key-value databases

# 1. Description
- This type of database mainly uses a hash table, which contains a specific key and a pointer to specific data.

# 2. Characteristics
- The advantage of the key/value model for IT systems is its simplicity and ease of deployment.
- However, if the DBA only queries or updates part of a value, key/value becomes inefficient.

# 3. Related products
- Tokyo Cabinet/Tyrant
- Redis
- SSDB
- Voldemort
- Oracle BDB

3.2 Column storage database

# 1. Description
- These databases are usually used for distributed storage of large amounts of data.

# 2. Characteristics
- Keys still exist, but they point to multiple columns, and the columns are organized by column family.

# 3. Related products
- Cassandra, HBase, Riak

3.3 Document Database

# 1. Description
- The document database was inspired by Lotus Notes and is similar to the first type, the key-value store. Its data model is versioned documents: semi-structured documents stored in a specific format such as JSON. A document database can be seen as an upgraded key-value database that allows nested keys, and its query efficiency is higher than that of a key-value database.

# 2. Characteristics
- Data is stored as documents.

# 3. Related products
- MongoDB (4.x), CouchDB. There is also an open-source document database from China, SequoiaDB.

3.4 Graph database

# 1. Description
- Unlike rigidly structured column and SQL databases, graph databases use a flexible graph model and can scale across multiple servers.

# 2. Characteristics
- NoSQL databases do not have a standard query language (SQL), so queries are performed against the data model. Many NoSQL databases offer REST-style data interfaces or query APIs.

# 3. Related products
- Neo4J, InfoGrid, Infinite Graph

4. NoSQL application scenarios

  • The data model is simple
  • More flexible IT systems are needed
  • High database performance requirements
  • A high degree of data consistency is not required

5. What is Redis

Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker.

In other words: Redis is open source under a BSD license and is an in-memory data store used as a database, cache, and message middleware.

  • Summary: Redis is an in-memory database

6. Redis features

  • Redis is a high performance key/value in-memory database
  • Redis supports rich data types
  • Redis supports persistence
  • Redis is single-threaded and runs as a single process

7. Redis installation

# 0. Prepare the environment
- vmware15.x+
- centos7.x+

# 1. Download the Redis source package
- https://redis.io/

# 2. The downloaded source package
- redis-4.0.10.tar.gz

# 3. Upload the downloaded Redis package to Linux

# 4. Unzip the file
[root@localhost ~]# tar -zxvf redis-4.0.10.tar.gz
[root@localhost ~]# ll

# 5. Install GCC
- yum install -y gcc

# 6. Go to the decompressed directory and run the following command
- make MALLOC=libc

# 7. After compiling, run the following command
- make install PREFIX=/usr/redis

# 8. Go to the /usr/redis/bin directory and start the Redis service
- ./redis-server

# 9. The default Redis service port is 6379

# 10. Go to the bin directory and connect with the client
- ./redis-cli -p 6379

If the 127.0.0.1:6379> prompt appears, the client has connected successfully.
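A quick way to confirm the connection works is to send a PING; the server answers PONG:

127.0.0.1:6379> ping
PONG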

8. Redis database related instructions

8.1 Database Operation Instructions

# 1. Libraries in Redis
- When the Redis service is started with the default configuration file, there are 16 libraries by default, numbered 0 to 15.
- 127.0.0.1:6379> select 1
  OK
  127.0.0.1:6379[1]>

# 2. Commands for operating on libraries
- FLUSHDB   flush the current library
- FLUSHALL  flush all libraries

# 3. Make the Redis client display Chinese correctly
- ./redis-cli -p 7000 --raw

# 4. Exit the client
- exit

# 5. Check the process
- ps -ef | grep redis
$ ps -ef | grep redis
root      38504  32280  0 10:27 pts/1    00:00:00 ./redis-server *:6379
root      38714  38520  0 10:32 pts/2    00:00:00 grep --color=auto redis

8.2 Operation Key related commands

# 1. DEL
- Syntax: DEL key [key ...]
- Function: deletes one or more given keys. Non-existent keys are ignored.
- Available since: >= 1.0.0
- Return value: the number of keys deleted.

# 2. EXISTS
- Syntax: EXISTS key
- Function: checks whether the given key exists.
- Available since: >= 1.0.0
- Return value: 1 if the key exists, 0 otherwise.

# 3. EXPIRE
- Syntax: EXPIRE key seconds
- Function: sets a time to live for the given key. When the key expires (time to live reaches 0), it is deleted automatically.
- Available since: >= 1.0.0
- Time complexity: O(1)
- Return value: 1 on success.

# 4. KEYS
- Syntax: KEYS pattern
- Function: finds all keys that match the given pattern.
  KEYS *        matches all keys in the database.
  KEYS h?llo    matches hello, hallo, hxllo, etc.
  KEYS h*llo    matches hllo, heeeeello, etc.
  KEYS h[ae]llo matches hello and hallo, but not hillo.
  Escape special characters with "\".
- Available since: >= 1.0.0
- Return value: the list of keys matching the given pattern.

# 5. MOVE
- Syntax: MOVE key db
- Function: moves a key of the current database to the given database db.
- Available since: >= 1.0.0
- Return value: 1 on success, 0 on failure.

# 6. PEXPIRE
- Syntax: PEXPIRE key milliseconds
- Function: similar to EXPIRE, but the key's time to live is set in milliseconds instead of seconds.
- Available since: >= 2.6.0
- Time complexity: O(1)
- Return value: 1 if the timeout was set, 0 otherwise.

# 7. PEXPIREAT
- Syntax: PEXPIREAT key milliseconds-timestamp
- Function: similar to EXPIREAT, but the expiry Unix timestamp is given in milliseconds instead of seconds.
- Available since: >= 2.6.0
- Return value: 1 if the time to live was set; 0 if the key does not exist or the time to live could not be set (see EXPIRE for details).

# 8. TTL
- Syntax: TTL key
- Function: returns the remaining time to live of the given key, in seconds.
- Available since: >= 1.0.0
- Return value: -2 if the key does not exist; -1 if the key exists but has no time to live set; otherwise the remaining time to live in seconds.
- Note: before Redis 2.8, the command returned -1 both when the key did not exist and when the key had no time to live set.

# 9. PTTL
- Syntax: PTTL key
- Function: similar to TTL, but returns the remaining time to live in milliseconds instead of seconds.
- Available since: >= 2.6.0
- Return value: -2 if the key does not exist; -1 if the key exists but has no time to live set; otherwise the remaining time to live in milliseconds.
- Note: before Redis 2.8, the command returned -1 both when the key did not exist and when the key had no time to live set.

# 10. RANDOMKEY
- Syntax: RANDOMKEY
- Function: returns (without deleting) a random key from the current database.
- Available since: >= 1.0.0
- Return value: a key if the database is not empty; nil if the database is empty.

# 11. RENAME
- Syntax: RENAME key newkey
- Function: renames key to newkey. Returns an error if key and newkey are the same, or if key does not exist. If newkey already exists, RENAME overwrites its old value.
- Available since: >= 1.0.0
- Return value: OK on success, an error on failure.

# 12. TYPE
- Syntax: TYPE key
- Function: returns the type of the value stored at key.
- Available since: >= 1.0.0
- Return value: none (key does not exist), string, list, set, zset, or hash.
127.0.0.1:6379> set name kuangshen    # set a key
OK
127.0.0.1:6379> keys *
1) "name"
127.0.0.1:6379> set age 1
OK
127.0.0.1:6379> keys *
1) "age"
2) "name"
127.0.0.1:6379> EXISTS name           # check whether the key exists
(integer) 1
127.0.0.1:6379> EXISTS name1
(integer) 0
127.0.0.1:6379> move name 1           # move the key to database 1
(integer) 1
127.0.0.1:6379> keys *
1) "age"
127.0.0.1:6379> set name qinjiang
OK
127.0.0.1:6379> keys *
1) "age"
2) "name"
127.0.0.1:6379> clear
127.0.0.1:6379> keys *
1) "age"
2) "name"
127.0.0.1:6379> get name
"qinjiang"
127.0.0.1:6379> EXPIRE name 10        # set the key's expiration time, in seconds
(integer) 1
127.0.0.1:6379> ttl name              # check the remaining time to live of the key
(integer) 4
127.0.0.1:6379> ttl name
(integer) 3
127.0.0.1:6379> ttl name
(integer) 2
127.0.0.1:6379> ttl name
(integer) 1
127.0.0.1:6379> ttl name
(integer) -2
127.0.0.1:6379> get name
(nil)
127.0.0.1:6379> type name
string
127.0.0.1:6379> type age
string

8.3 String type

1. Memory storage model

2. Common operation commands

Command       Description
set           Set a key/value pair
get           Get the value for a key
mset          Set multiple key/value pairs at once
mget          Get multiple values at once
getset        Get the key's original value and set a new value
strlen        Get the length of the value stored at the key
append        Append to the key's value
getrange      Get a substring of the value (index starts at 0)
setex         Set a key with a time to live, in seconds
psetex        Set a key with a time to live, in milliseconds
setnx         Set only if the key does not exist; if it exists, do nothing
msetnx        Set multiple keys atomically; if any key already exists, nothing is set
decr          Perform a -1 operation on a numeric value
decrby        Subtract the given amount
incr          Perform a +1 operation on a numeric value
incrby        Add the given amount
incrbyfloat   Add the given floating-point amount
###################################################################
127.0.0.1:6379> set key1 v1            # set a value
OK
127.0.0.1:6379> get key1               # get the value
"v1"
127.0.0.1:6379> keys *                 # get all keys
1) "key1"
127.0.0.1:6379> EXISTS key1            # check whether the key exists
(integer) 1
127.0.0.1:6379> APPEND key1 "hello"    # append to the string; if the key does not exist, this is equivalent to set
(integer) 7
127.0.0.1:6379> get key1
"v1hello"
127.0.0.1:6379> STRLEN key1
(integer) 7
127.0.0.1:6379> APPEND key1 ",kaungshen"
(integer) 17
127.0.0.1:6379> STRLEN key1
(integer) 17
127.0.0.1:6379> get key1
"v1hello,kaungshen"

###################################################################
# i++ and i += step
127.0.0.1:6379> set views 0            # initial view count 0
OK
127.0.0.1:6379> get views
"0"
127.0.0.1:6379> incr views             # +1
(integer) 1
127.0.0.1:6379> incr views
(integer) 2
127.0.0.1:6379> get views
"2"
127.0.0.1:6379> decr views             # -1
(integer) 1
127.0.0.1:6379> decr views
(integer) 0
127.0.0.1:6379> decr views
(integer) -1
127.0.0.1:6379> get views
"-1"
127.0.0.1:6379> INCRBY views 10        # increase by a given step
(integer) 9
127.0.0.1:6379> INCRBY views 10
(integer) 19
127.0.0.1:6379> DECRBY views 5
(integer) 14

###################################################################
# String range
127.0.0.1:6379> set key1 "hello,kuangshen"   # set key1
OK
127.0.0.1:6379> get key1
"hello,kuangshen"
127.0.0.1:6379> GETRANGE key1 0 3      # get the substring [0,3]
"hell"
127.0.0.1:6379> GETRANGE key1 0 -1     # get the whole string, same as get key1
"hello,kuangshen"

# Replace!
127.0.0.1:6379> set key2 abcdefg
OK
127.0.0.1:6379> get key2
"abcdefg"
127.0.0.1:6379> SETRANGE key2 1 xx     # replace the string starting at the given position
(integer) 7
127.0.0.1:6379> get key2
"axxdefg"

###################################################################
# setex (set with expire)   # set with an expiration time
# setnx (set if not exist)  # set only if the key does not exist (often used for distributed locks!)
127.0.0.1:6379> setex key3 30 "hello"  # set key3 to hello, expiring after 30 seconds
OK
127.0.0.1:6379> ttl key3
(integer) 26
127.0.0.1:6379> get key3
"hello"
127.0.0.1:6379> setnx mykey "redis"    # create mykey because it does not exist yet
(integer) 1
127.0.0.1:6379> keys *
1) "key2"
2) "mykey"
3) "key1"
127.0.0.1:6379> ttl key3
(integer) -2
127.0.0.1:6379> setnx mykey "MongoDB"  # mykey already exists, so nothing is set
(integer) 0
127.0.0.1:6379> get mykey
"redis"

###################################################################
# mset mget
127.0.0.1:6379> mset k1 v1 k2 v2 k3 v3   # set multiple values
OK
127.0.0.1:6379> keys *
1) "k1"
2) "k2"
3) "k3"
127.0.0.1:6379> mget k1 k2 k3            # get multiple values
1) "v1"
2) "v2"
3) "v3"
127.0.0.1:6379> msetnx k1 v1 k4 v4       # msetnx is atomic: it either all succeeds or all fails!
(integer) 0
127.0.0.1:6379> get k4
(nil)

# object
127.0.0.1:6379> set user:1 {name:zhangsan,age:3}   # save an object by storing its JSON string as the value of user:1
OK
# A clever key design: user:{id}:{field}; this works perfectly well in Redis
127.0.0.1:6379> mset user:1:name zhangsan user:1:age 2
OK
127.0.0.1:6379> mget user:1:name user:1:age
1) "zhangsan"
2) "2"

###################################################################
# getset: get first, then set
127.0.0.1:6379> getset db redis          # returns nil because no value exists yet
(nil)
127.0.0.1:6379> get db
"redis"
127.0.0.1:6379> getset db mongodb        # get the existing value and set the new one
"redis"
127.0.0.1:6379> get db
"mongodb"

The underlying data structure is the same!

Typical String use cases, where the value can be a number as well as a string:

  • counter
  • Count the number of multiple units
  • Number of fans
  • Object cache storage

8.4 List type

A Redis list is comparable to a Java List collection: elements are ordered and can repeat.

1. Memory storage model

2. Common operation instructions

Command       Description
lpush         Add values to the head (left) of a list
lpushx        Same as lpush, but only if the key already exists
rpush         Add values to the tail (right) of a list
rpushx        Same as rpush, but only if the key already exists
lpop          Return and remove the first element on the left of the list
rpop          Return and remove the first element on the right of the list
lrange        Get the elements within an index range
llen          Get the number of elements in the list
lset          Set the value at a given index (the index must exist)
lindex        Get the element at a given index
lrem          Remove a given number of occurrences of a value
ltrim         Keep only the elements within a given range of the list
linsert       Insert a new element before or after a given element
###################################################################
127.0.0.1:6379> LPUSH list one        # insert at the head (left) of the list
(integer) 1
127.0.0.1:6379> LPUSH list two
(integer) 2
127.0.0.1:6379> LPUSH list three
(integer) 3
127.0.0.1:6379> LRANGE list 0 -1      # get all elements
1) "three"
2) "two"
3) "one"
127.0.0.1:6379> LRANGE list 0 1
1) "three"
2) "two"
127.0.0.1:6379> Rpush list righr      # insert at the tail (right) of the list
(integer) 4
127.0.0.1:6379> LRANGE list 0 -1
1) "three"
2) "two"
3) "one"
4) "righr"

###################################################################
# LPOP / RPOP
127.0.0.1:6379> LRANGE list 0 -1
1) "three"
2) "two"
3) "one"
4) "righr"
127.0.0.1:6379> Lpop list             # remove the first element of the list
"three"
127.0.0.1:6379> Rpop list             # remove the last element of the list
"righr"
127.0.0.1:6379> LRANGE list 0 -1
1) "two"
2) "one"

###################################################################
# Lindex
127.0.0.1:6379> LRANGE list 0 -1
1) "two"
2) "one"
127.0.0.1:6379> lindex list 1         # get the value at the given index
"one"
127.0.0.1:6379> lindex list 0
"two"

###################################################################
# Llen
127.0.0.1:6379> Lpush list one
(integer) 1
127.0.0.1:6379> Lpush list two
(integer) 2
127.0.0.1:6379> Lpush list three
(integer) 3
127.0.0.1:6379> Llen list             # return the length of the list
(integer) 3

###################################################################
# Lrem: remove a specified value (e.g. unfollow a uid)
127.0.0.1:6379> LRANGE list 0 -1
1) "three"
2) "three"
3) "two"
4) "one"
127.0.0.1:6379> lrem list 1 one       # remove the given number of matching values (exact match)
(integer) 1
127.0.0.1:6379> LRANGE list 0 -1
1) "three"
2) "three"
3) "two"
127.0.0.1:6379> lrem list 1 three
(integer) 1
127.0.0.1:6379> LRANGE list 0 -1
1) "three"
2) "two"
127.0.0.1:6379> Lpush list three
(integer) 3
127.0.0.1:6379> lrem list 2 three
(integer) 2
127.0.0.1:6379> LRANGE list 0 -1
1) "two"

###################################################################
# Ltrim
127.0.0.1:6379> keys *
(empty list or set)
127.0.0.1:6379> Rpush mylist "hello"
(integer) 1
127.0.0.1:6379> Rpush mylist "hello1"
(integer) 2
127.0.0.1:6379> Rpush mylist "hello2"
(integer) 3
127.0.0.1:6379> Rpush mylist "hello3"
(integer) 4
127.0.0.1:6379> ltrim mylist 1 2      # trim the list by index, keeping only the given range
OK
127.0.0.1:6379> LRANGE mylist 0 -1
1) "hello1"
2) "hello2"

###################################################################
# rpoplpush: remove the last element of a list and push it onto a new list
127.0.0.1:6379> rpush mylist "hello"
(integer) 1
127.0.0.1:6379> rpush mylist "hello1"
(integer) 2
127.0.0.1:6379> rpush mylist "hello2"
(integer) 3
127.0.0.1:6379> rpoplpush mylist myotherlist   # remove the last element and move it to the new list
"hello2"
127.0.0.1:6379> lrange mylist 0 -1             # check the original list
1) "hello"
2) "hello1"
127.0.0.1:6379> lrange myotherlist 0 -1        # check the target list; the value is there
1) "hello2"

###################################################################
# lset: replace the value at a given index with another value, an "update" operation
127.0.0.1:6379> EXISTS list           # check whether the list exists
(integer) 0
127.0.0.1:6379> lset list 0 item      # if the list does not exist, an error is reported
(error) ERR no such key
127.0.0.1:6379> lpush list value1
(integer) 1
127.0.0.1:6379> LRANGE list 0 0
1) "value1"
127.0.0.1:6379> lset list 0 item      # if the index exists, its value is replaced
OK
127.0.0.1:6379> LRANGE list 0 0
1) "item"
127.0.0.1:6379> lset list 1 other     # if the index does not exist, an error is reported
(error) ERR index out of range

###################################################################
# linsert: insert a value before or after a given element of the list
127.0.0.1:6379> Rpush mylist "hello"
(integer) 1
127.0.0.1:6379> Rpush mylist "world"
(integer) 2
127.0.0.1:6379> LINSERT mylist before "world" "other"
(integer) 3
127.0.0.1:6379> LRANGE mylist 0 -1
1) "hello"
2) "other"
3) "world"
127.0.0.1:6379> LINSERT mylist after world new
(integer) 4
127.0.0.1:6379> LRANGE mylist 0 -1
1) "hello"
2) "other"
3) "world"
4) "new"

8.5 Set type

Set elements in a Set are unordered and cannot be repeated

1. Memory storage model

2. Common commands

Command       Description
sadd          Add elements to the set
smembers      Show all elements of the set (unordered)
scard         Return the number of elements in the set
spop          Return a random element and remove it from the set
smove         Move an element from one set to another (both must be sets)
srem          Remove an element from the set
sismember     Check whether the set contains an element
srandmember   Return a random element
sdiff         Difference: elements of the first set that are not in the other sets
sinter        Intersection
sunion        Union
###################################################################
127.0.0.1:6379> sadd myset "hello"           # add elements to the set
(integer) 1
127.0.0.1:6379> sadd myset "kuangshen"
(integer) 1
127.0.0.1:6379> sadd myset "lovekuangshen"
(integer) 1
127.0.0.1:6379> sadd myset "lovekuangshen2"
(integer) 1
127.0.0.1:6379> SMEMBERS myset               # view all elements of the set
1) "hello"
2) "lovekuangshen2"
3) "lovekuangshen"
4) "kuangshen"
127.0.0.1:6379> SISMEMBER myset hello        # check whether a value is in the set
(integer) 1
127.0.0.1:6379> SISMEMBER myset world
(integer) 0

###################################################################
127.0.0.1:6379> scard myset                  # get the number of elements in the set
(integer) 4

###################################################################
# srem
127.0.0.1:6379> srem myset hello             # remove the given element from the set
(integer) 1
127.0.0.1:6379> scard myset
(integer) 3
127.0.0.1:6379> SMEMBERS myset
1) "lovekuangshen2"
2) "lovekuangshen"
3) "kuangshen"

###################################################################
# A set is an unordered, non-repeating collection; pick elements at random!
127.0.0.1:6379> SMEMBERS myset
1) "lovekuangshen2"
2) "lovekuangshen"
3) "kuangshen"
127.0.0.1:6379> SRANDMEMBER myset            # pick a random element
"kuangshen"
127.0.0.1:6379> SRANDMEMBER myset
"kuangshen"
127.0.0.1:6379> SRANDMEMBER myset
"kuangshen"
127.0.0.1:6379> SRANDMEMBER myset 2          # pick a given number of random elements
1) "lovekuangshen"
2) "lovekuangshen2"
127.0.0.1:6379> SRANDMEMBER myset 2
1) "lovekuangshen"
2) "lovekuangshen2"
127.0.0.1:6379> SRANDMEMBER myset            # pick a random element
"lovekuangshen2"

###################################################################
# Remove keys at random (spop)
127.0.0.1:6379> SMEMBERS myset
1) "lovekuangshen2"
2) "lovekuangshen"
3) "kuangshen"
127.0.0.1:6379> spop myset                   # remove a random element from the set
"lovekuangshen2"
127.0.0.1:6379> spop myset
"lovekuangshen"
127.0.0.1:6379> SMEMBERS myset
1) "kuangshen"

###################################################################
# Move a given value to another set
127.0.0.1:6379> sadd myset "hello"
(integer) 1
127.0.0.1:6379> sadd myset "world"
(integer) 1
127.0.0.1:6379> sadd myset "kuangshen"
(integer) 1
127.0.0.1:6379> sadd myset2 "set2"
(integer) 1
127.0.0.1:6379> smove myset myset2 "kuangshen"   # move the given value to another set
(integer) 1
127.0.0.1:6379> SMEMBERS myset
1) "world"
2) "hello"
127.0.0.1:6379> SMEMBERS myset2
1) "kuangshen"
2) "set2"

###################################################################
# Common concern / mutual friends (Weibo, Bilibili): set operations
# - difference   SDIFF
# - intersection SINTER
# - union        SUNION
127.0.0.1:6379> SDIFF key1 key2              # difference
1) "b"
2) "a"
127.0.0.1:6379> SINTER key1 key2             # intersection; mutual friends can be implemented this way
1) "c"
127.0.0.1:6379> SUNION key1 key2             # union
1) "b"
2) "c"
3) "e"
4) "a"
5) "d"

8.6 ZSet type

Features: a sortable set; elements cannot repeat, and each element carries a score used for ordering.

ZSet is officially called a sorted set (sortedSet).

1. Memory model

2. Common commands

Command                                       Description
zadd                                          Add an element to the sorted set
zcard                                         Return the number of elements in the set
zrange (ascending) / zrevrange (descending)   Return a range of elements
zrangebyscore                                 Find elements within a score range
zrank                                         Return an element's rank (ascending order)
zrevrank                                      Return an element's rank (descending order)
zscore                                        Show an element's score
zrem                                          Remove an element
zincrby                                       Increase an element's score by a given amount
###################################################################
127.0.0.1:6379> zadd myset 1 one              # add a value
(integer) 1
127.0.0.1:6379> zadd myset 2 two 3 three      # add multiple values
(integer) 2
127.0.0.1:6379> ZRANGE myset 0 -1
1) "one"
2) "two"
3) "three"

###################################################################
# How to implement sorting
127.0.0.1:6379> zadd salary 5000 zhangsan     # add three users
(integer) 1
127.0.0.1:6379> zadd salary 2500 xiaohong
(integer) 1
127.0.0.1:6379> zadd salary 500 kaungshen
(integer) 1
# ZRANGEBYSCORE key min max
127.0.0.1:6379> ZRANGEBYSCORE salary -inf +inf            # show all users, from low to high
1) "kaungshen"
2) "xiaohong"
3) "zhangsan"
127.0.0.1:6379> ZREVRANGE salary 0 -1                     # from high to low
1) "zhangsan"
2) "kaungshen"
127.0.0.1:6379> ZRANGEBYSCORE salary -inf +inf withscores # show all users with their scores
1) "kaungshen"
2) "500"
3) "xiaohong"
4) "2500"
5) "zhangsan"
6) "5000"
127.0.0.1:6379> ZRANGEBYSCORE salary -inf 2500 withscores # employees with salary <= 2500, ascending
1) "kaungshen"
2) "500"
3) "xiaohong"
4) "2500"

###################################################################
# zrem: remove an element
127.0.0.1:6379> zrange salary 0 -1
1) "kaungshen"
2) "xiaohong"
3) "zhangsan"
127.0.0.1:6379> zrem salary xiaohong          # remove the given element from the sorted set
(integer) 1
127.0.0.1:6379> zrange salary 0 -1
1) "kaungshen"
2) "zhangsan"
127.0.0.1:6379> zcard salary                  # get the number of elements in the sorted set
(integer) 2

###################################################################
127.0.0.1:6379> zadd myset 1 hello
(integer) 1
127.0.0.1:6379> zadd myset 2 world 3 kuangshen
(integer) 2
127.0.0.1:6379> zcount myset 1 3              # count the members with scores in the given range
(integer) 3
127.0.0.1:6379> zcount myset 1 2
(integer) 2

8.7 Hash type

Features: Value is a map structure with unordered keys

1. Memory model

2. Common commands

Command        Description
hset           Set a field/value pair
hget           Get the value of a field
hgetall        Get all field/value pairs
hdel           Delete a field/value pair
hexists        Check whether a field exists
hkeys          Get all fields
hvals          Get all values
hmset          Set multiple fields and values
hmget          Get the values of multiple fields
hsetnx         Set a field only if it does not exist
hincrby        Add an integer to a field's value
hincrbyfloat   Add a floating-point number to a field's value

###################################################################
127.0.0.1:6379> hset myhash field1 kuangshen        # set a field/value pair
(integer) 1
127.0.0.1:6379> hget myhash field1                  # get a field's value
"kuangshen"
127.0.0.1:6379> hmset myhash field1 hello field2 world   # set multiple field/value pairs
OK
127.0.0.1:6379> hmget myhash field1 field2          # get multiple fields' values
1) "hello"
2) "world"
127.0.0.1:6379> hgetall myhash                      # get all field/value pairs
1) "field1"
2) "hello"
3) "field2"
4) "world"
127.0.0.1:6379> hdel myhash field1                  # delete the given field; its value disappears too
(integer) 1
127.0.0.1:6379> hgetall myhash
1) "field2"
2) "world"

###################################################################
# hlen
127.0.0.1:6379> hmset myhash field1 hello field2 world
OK
127.0.0.1:6379> HGETALL myhash
1) "field2"
2) "world"
3) "field1"
4) "hello"
127.0.0.1:6379> hlen myhash                         # get the number of fields in the hash
(integer) 2

###################################################################
127.0.0.1:6379> HEXISTS myhash field1               # check whether a field exists in the hash
(integer) 1
127.0.0.1:6379> HEXISTS myhash field3
(integer) 0

###################################################################
# Get only all fields
# Get only all values
127.0.0.1:6379> hkeys myhash                        # get only the fields
1) "field2"
2) "field1"
127.0.0.1:6379> hvals myhash                        # get only the values
1) "world"
2) "hello"

###################################################################
# incr / decr
127.0.0.1:6379> hset myhash field3 5                # set an initial value
(integer) 1
127.0.0.1:6379> HINCRBY myhash field3 1
(integer) 6
127.0.0.1:6379> HINCRBY myhash field3 -1
(integer) 5
127.0.0.1:6379> hsetnx myhash field4 hello          # set only if the field does not exist
(integer) 1
127.0.0.1:6379> hsetnx myhash field4 world          # the field already exists, so nothing is set
(integer) 0

Hashes are suited to frequently changing data, especially user information (name, age, and so on). Hash is better for storing objects, while String is better for plain strings.

9. Persistence mechanism

Client -> Redis (in-memory data) -> data persistence -> disk

Redis provides two different persistence methods to store data to disk:

  • Snapshot (RDB)
  • AOF (Append Only File): append-only log file

9.1 Snapshot (RDB)

1. Characteristics

This approach writes all of the data at a certain point in time to disk. It is the persistence method Redis enables by default, and the save file ends with .rdb, so it is also called RDB persistence.

2. Snapshot generation mode

  • Client mode: BGSAVE and SAVE directives
  • The server configuration is automatically triggered
# 1. Client-side BGSAVE
- a. A client can create a snapshot with the BGSAVE command. When Redis receives BGSAVE, it calls fork¹ to create a child process; the child writes the snapshot to disk while the parent continues to handle command requests.

  ¹ fork: when a process creates a child process, the operating system creates a copy of that process underneath. On Unix-like systems fork is optimized with copy-on-write: at first the parent and child share the same memory, and only when the parent or child writes to a region of memory is that shared memory actually copied.

# 2. Client-side SAVE
- b. A client can also create a snapshot with the SAVE command. A Redis server that receives SAVE will not respond to any other command until the snapshot has been created.

  • Note: the SAVE command is rarely used, because Redis blocks until the snapshot is created.
# 3. Server side: automatically triggered when the configured save conditions are met
- If the user has set save configuration options in redis.conf, Redis automatically runs a BGSAVE as soon as any of the save conditions is met.
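As a rough sketch, such save rules in redis.conf look like this (the thresholds shown are the classic defaults; your configuration may differ):

save 900 1       # snapshot after 900 s if at least 1 key changed
save 300 10      # snapshot after 300 s if at least 10 keys changed
save 60 10000    # snapshot after 60 s if at least 10000 keys changed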

# 4. Server side: the server receives a SHUTDOWN command from a client
- When Redis receives a SHUTDOWN request, it runs a SAVE command, blocking all clients and no longer executing the commands they send, and shuts down the server once the SAVE has finished.
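For reference, triggering a snapshot manually from the client looks roughly like this (BGSAVE returns immediately while the child process saves; SAVE blocks until the save completes):

127.0.0.1:6379> BGSAVE
Background saving started
127.0.0.1:6379> SAVE
OK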

3. Set the name and location of the snapshot to be generated

#1. Change the name of the generated snapshot
- dbfilename dump.rdb

# 2. Change the build location
- dir ./


9.2 AOF (append-only log file)

1. Characteristics

This approach logs every write command executed by clients to a log file: AOF persistence appends each executed write command to the end of the AOF file in order to record data changes. Because the AOF file therefore contains every write command Redis has executed from the beginning, replaying the commands recorded in the AOF file restores the data set.

2. Enable AOF persistence

AOF persistence is not enabled in the default redis configuration, so it needs to be enabled in the configuration

# 1. Enable AOF persistence
- a. Set appendonly yes to enable AOF persistence
- b. Set appendfilename "appendonly.aof" to specify the name of the generated file

3. Log append frequency

# 1. always
- Description: every Redis write command is synchronously written to disk, which severely slows Redis down.
- Explanation: with the always option, every write command is written to disk, minimizing data loss if the system crashes. Unfortunately, because this strategy performs a very large number of disk writes, the speed at which Redis can process commands is limited by disk performance.
- Note: a spinning hard disk handles roughly 200 commands/s at this frequency; a solid-state drive (SSD) can handle millions of commands/s.
- Caution: SSD users should be careful with the always option. Constantly writing small amounts of data can cause severe write amplification, reducing the SSD's lifetime from years to months.

# 2. everysec [recommended, default]
- Description: synchronize once per second, explicitly flushing the accumulated write commands to disk.
- Explanation: to balance data safety and write performance, consider the everysec option, which lets Redis sync the AOF file once per second. Performance is then close to running without any persistence at all, and by syncing once per second Redis guarantees that at most one second of data is lost if the system crashes.

# 3. no
- Description: the operating system decides when to synchronize.
- Explanation: with the no option, the operating system decides when to sync the AOF file. This does not affect Redis performance, but a variable amount of data is lost when the system crashes. Also, if the disk cannot keep up with the writes, Redis blocks once the buffer of data waiting to be written fills up, and command processing slows down.

4. Change the synchronization frequency

# 1. Change the log synchronization frequency
- Set appendfsync to everysec | always | no
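Put together, a minimal AOF section of redis.conf might look like this sketch (the directive names are standard; the values are illustrative):

appendonly yes                   # enable AOF persistence
appendfilename "appendonly.aof"  # name of the AOF file
appendfsync everysec             # sync to disk once per second (recommended default)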


9.3 AOF file rewriting

1. Problems caused by AOF

The AOF approach also introduces a problem: the persistence file keeps growing. For example, if we call incr test 100 times, all 100 commands must be saved in the file, but 99 of them are redundant; to restore the state of the database it is enough to save a single set test 100. Redis provides an AOF rewrite mechanism to compact the AOF persistence file.

2. AOF rewrite

Used to reduce the size of AOF files to some extent

3. Trigger the override mode

# 1. Triggered from the client
- Run the BGREWRITEAOF command; it does not block the Redis service.

# 2. Triggered automatically by server configuration
- Configure the auto-aof-rewrite-percentage option in redis.conf (see the sketch below).
- With auto-aof-rewrite-percentage 100 and auto-aof-rewrite-min-size 64mb, and AOF persistence enabled, a rewrite is triggered automatically when the AOF file is larger than 64 MB and has grown by at least 100% since the last rewrite. If rewrites happen too frequently, set auto-aof-rewrite-percentage to a larger value.
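A sketch of the corresponding redis.conf entries (the two directive names are standard; the thresholds shown are the common defaults):

auto-aof-rewrite-percentage 100   # rewrite when the AOF has grown 100% since the last rewrite
auto-aof-rewrite-min-size 64mb    # ...and is at least 64 MB in size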

4. Rewriting principle

Note that the rewrite does not read the old AOF file; instead, the entire contents of the in-memory database are written out as commands to replace the old AOF file, which is similar to taking a snapshot.

# Rewrite process
- 1. Redis calls fork. The child process, working from a snapshot of the in-memory database, writes the commands needed to rebuild the database state into a temporary file.
- 2. The parent process continues handling client requests. Besides writing write commands to the original AOF file, it also buffers the received write commands; this guarantees nothing is lost if the child's rewrite fails.
- 3. When the child has finished writing the snapshot contents to the temporary file as commands, it signals the parent. The parent then appends the buffered write commands to the temporary file as well.
- 4. The parent replaces the old AOF file with the temporary file and renames it; subsequent write commands are appended to the new AOF file.


9.4 Persistence summary

The two persistence schemes can be used together, used separately, or in some situations not used at all; which to choose depends on the user's data and the application.

Whether AOF or snapshot persistence is used, it is necessary to persist data to hard disk. In addition to persistence, users should also back up persistent files (preferably in several different locations).


10. Operating Redis from Java

10.1 Environment Preparations

1. Introduce dependencies

<!-- Jedis connection dependency -->
<dependency>
  <groupId>redis.clients</groupId>
  <artifactId>jedis</artifactId>
  <version>2.9.0</version>
</dependency>

2. Create a Jedis object

 public static void main(String[] args) {
   //1. Create a Jedis object
   Jedis jedis = new Jedis("192.168.40.4", 6379); // Note: 1. the firewall on the Redis server must allow the port; 2. the Redis service must allow remote connections
   jedis.select(0);                               // the library to operate on; library 0 is the default
   //2. Perform the relevant operations
   //....
   //3. Release the resource
   jedis.close();
 }

10.2 Key-related APIs

private Jedis jedis;
    @Before
    public void before(){
        this.jedis = new Jedis("192.168.202.205", 7000);
    }
    @After
    public void after(){
        jedis.close();
    }

    // Test key related
    @Test
    public void testKeys(){
        // Delete a key
        jedis.del("name");
        // Delete multiple keys
        jedis.del("name", "age");

        // Determine whether a key exists
        Boolean name = jedis.exists("name");
        System.out.println(name);

        // expire / pexpire: set a timeout on a key
        Long age = jedis.expire("age", 100);
        System.out.println(age);

        // Get a key timeout TTL
        Long age1 = jedis.ttl("newage");
        System.out.println(age1);

        // Get a random key
        String s = jedis.randomKey();

        // Change the key name
        jedis.rename("age", "newage");

        // Check the type of a value
        String name1 = jedis.type("name");
        System.out.println(name1);
        String maps = jedis.type("maps");
        System.out.println(maps);
    }

10.3 String-related APIs

// Test String correlation
    @Test
    public void testString(){
        //set
        jedis.set("name", "Chen");
        //get
        String s = jedis.get("name");
        System.out.println(s);
        //mset
        jedis.mset("content", "Good man", "address", "Haidian District");
        //mget
        List<String> mget = jedis.mget("name", "content", "address");
        mget.forEach(v-> System.out.println("v = " + v));
        //getset
        String set = jedis.getSet("name", "Xiao Ming");
        System.out.println(set);

        //...
    }

10.4 List-related APIs

// Test List correlation
    @Test
    public void testList(){

        //lpush
        jedis.lpush("names1", "Zhang", "Fifty", "Zhao Liu", "win7");

        //rpush
        jedis.rpush("names1", "xiaomingming");

        //lrange

        List<String> names1 = jedis.lrange("names1", 0, -1);
        names1.forEach(name-> System.out.println("name = " + name));

        //lpop rpop
        String names11 = jedis.lpop("names1");
        System.out.println(names11);

        //llen
        Long len = jedis.llen("names1");
        System.out.println(len);

        //linsert
        jedis.linsert("lists", BinaryClient.LIST_POSITION.BEFORE, "xiaohei", "xiaobai");

      	//...

    }

10.5 Set-related APIs

// Test SET correlation
@Test
public void testSet(){

  //sadd
  jedis.sadd("names", "zhangsan", "lisi");

  //smembers
  jedis.smembers("names");

  //sismember
  jedis.sismember("names", "xiaochen");

  //...
}

10.6 ZSet-related APIs

// Test ZSET correlation
@Test
public void testZset(){

  //zadd
  jedis.zadd("names", 10, "Zhang");

  //zrange
  jedis.zrange("names", 0, -1);

  //zcard
  jedis.zcard("names");

  //zrangeByScore
  jedis.zrangeByScore("names", "0", "100", 0, 5);

  //...

}

10.7 Hash-related APIs

// Test HASH correlation
@Test
public void testHash(){
  //hset
  jedis.hset("maps", "name", "zhangsan");
  //hget
  jedis.hget("maps", "name");
  //hgetall
  jedis.hgetAll("maps");
  //hkeys
  jedis.hkeys("maps");
  //hvals
  jedis.hvals("maps");
  //....
}


11. Integrating Redis with Spring Boot

Spring Boot Data Redis provides RedisTemplate and StringRedisTemplate, where StringRedisTemplate is a subclass of RedisTemplate. Their methods are basically the same; the main difference is the data types they operate on. Both generic parameters of RedisTemplate are Object, meaning the stored key and value can be any object, while both generic parameters of StringRedisTemplate are String, meaning its keys and values must be strings.

Note: RedisTemplate serializes objects into Redis by default, so stored objects must implement the Serializable interface.
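As a minimal sketch, an entity stored through RedisTemplate with the default JDK serializer would need to look roughly like this (a hypothetical User class matching the constructor used in the test code further below):

import java.io.Serializable;
import java.util.Date;

// Must implement Serializable so RedisTemplate's default JDK serializer can store it
public class User implements Serializable {
    private String id;
    private String name;
    private Integer age;
    private Date birthday;

    public User() {}

    public User(String id, String name, Integer age, Date birthday) {
        this.id = id;
        this.name = name;
        this.age = age;
        this.birthday = birthday;
    }
    // getters and setters omitted
}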

11.1 Environment Preparations

1. Introduce dependencies

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
Copy the code

2. Configure application.properties

spring.redis.host=localhost
spring.redis.port=6379
spring.redis.database=0

11.2 Use StringRedisTemplate and RedisTemplate

	@Autowired
    private StringRedisTemplate stringRedisTemplate;  // String support is friendly and cannot store objects
    @Autowired
    private RedisTemplate redisTemplate;  // Store objects

    @Test
    public void testRedisTemplate(){
        System.out.println(redisTemplate);
        // Set the redistemplate value to use the object serialization strategy
        redisTemplate.setValueSerializer(new JdkSerializationRedisSerializer()); // values use JDK object serialization
        // redisTemplate.opsForValue().set("user", new User("21", "black", 23, new Date()));
        User user = (User) redisTemplate.opsForValue().get("user");
        System.out.println(user);
// Set keys = redisTemplate.keys("*");
// keys.forEach(key -> System.out.println(key));
        /*Object name = redisTemplate.opsForValue().get("name");
        System.out.println(name);*/

        //Object xiaohei = redisTemplate.opsForValue().get("xiaohei");
        //System.out.println(xiaohei);
        /*redisTemplate.opsForValue().set("name","xxxx");
        Object name = redisTemplate.opsForValue().get("name");
        System.out.println(name);*/
        /*redisTemplate.opsForList().leftPushAll("lists","xxxx","1111");
        List lists = redisTemplate.opsForList().range("lists", 0, -1);
        lists.forEach(list-> System.out.println(list));*/
    }


    // If a key will be operated on very frequently, the key can be bound to the corresponding operations object; later operations on that key go through the bound object
    //boundValueOps  binds a key with a String value
    //boundListOps   binds a key with a List value
    //boundSetOps    binds a key with a Set value
    //boundZSetOps   binds a key with a ZSet value
    //boundHashOps   binds a key with a Hash value

    @Test
    public void testBoundKey(){
        BoundValueOperations<String, String> nameValueOperations = stringRedisTemplate.boundValueOps("name");
        nameValueOperations.set("1");
        nameValueOperations.set("2");
        String s = nameValueOperations.get();
        System.out.println(s);

    }


    // Hash operations opsForHash
    @Test
    public void testHash(){
        stringRedisTemplate.opsForHash().put("maps", "name", "Black");
        Object o = stringRedisTemplate.opsForHash().get("maps", "name");
        System.out.println(o);
    }

    //zset related operation opsForZSet
    @Test
    public void testZSet(){
        stringRedisTemplate.opsForZSet().add("zsets", "Black", 10);
        Set<String> zsets = stringRedisTemplate.opsForZSet().range("zsets", 0, -1);
        zsets.forEach(value-> System.out.println(value));
    }

    //set related operation opsForSet
    @Test
    public void testSet(){
        stringRedisTemplate.opsForSet().add("sets", "xiaosan", "xiaosi", "xiaowu");
        Set<String> sets = stringRedisTemplate.opsForSet().members("sets");
        sets.forEach(value-> System.out.println(value));
    }

    // List related operations opsForList
    @Test
    public void testList(){
        // stringRedisTemplate.opsForList().leftPushAll("lists", "zhang", "bill", "detective");
        List<String> lists = stringRedisTemplate.opsForList().range("lists", 0, -1);
        lists.forEach(key -> System.out.println(key));
    }


    //String related operation opsForValue
    @Test
    public void testString(){
        // stringRedisTemplate.opsForValue().set("166", "good students");
        String s = stringRedisTemplate.opsForValue().get("166");
        System.out.println(s);
        Long size = stringRedisTemplate.opsForValue().size("166");
        System.out.println(size);
    }


    // Key-related operations
    @Test
    public void test(){
        Set<String> keys = stringRedisTemplate.keys("*");       // view all keys
        Boolean name = stringRedisTemplate.hasKey("name");      // check whether a key exists
        stringRedisTemplate.delete("age");                      // delete the given key
        stringRedisTemplate.rename("", "");                     // rename a key
        stringRedisTemplate.expire("key", 10, TimeUnit.HOURS);  // set a key timeout: param 1 is the key name, params 2-3 are the time
        stringRedisTemplate.move("", 1);                        // move a key to another library
    }

12. Redis master/slave replication

12.1 Master/slave replication

The master/slave replication architecture is used only for redundant backup of data; slave nodes are only used to synchronize data.

Problem it does not solve: 1. automatic failover when the master node fails.

12.2 Master/Slave Replication Architecture Diagram

12.3 Setting up primary/Secondary Replication

# 1. Prepare 3 machines and modify the configuration
- master
  port 6379
  bind 0.0.0.0
- slave1
  port 6380
  bind 0.0.0.0
  slaveof masterip masterport
- slave2
  port 6381
  bind 0.0.0.0
  slaveof masterip masterport

# 2. Start 3 machines for testing
- cd /usr/redis/bin
- ./redis-server /root/master/redis.conf
- ./redis-server /root/slave1/redis.conf
- ./redis-server /root/slave2/redis.conf
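To verify the roles after starting the three instances, INFO replication can be run against each node; a rough example of what the master should report (the exact output, IPs, and counts will vary with your setup):

[root@localhost bin]# ./redis-cli -p 6379 info replication
# Replication
role:master
connected_slaves:2
...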

13. Redis sentinel mechanism

13.1 Sentinel mechanism

Sentinel is the high-availability solution for Redis: a Sentinel system made up of one or more Sentinel instances can monitor any number of master servers and all of their slaves. When a monitored master goes offline, one of its slaves is automatically promoted to become the new master. In short, Sentinel is a master/slave architecture with automatic failover.

Problems it still does not solve: 1. the concurrency pressure on a single node; 2. the physical memory and disk limits of a single node.

13.2 Sentry Architecture Principles

13.3 Setting up the Sentinel Architecture

- Create a sentinel.conf file in the same directory as the master's redis.conf.

# 2. Configure the sentinel.conf file with the following content:
- sentinel monitor <name of the monitored database> <ip> <port> 1

# 3. Start sentinel mode for testing
- redis-sentinel /root/sentinel/sentinel.conf
Note: the trailing number (e.g. 2) means a master/slave switchover is performed only when at least that many sentinel services detect that the master is down.
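A minimal sentinel.conf sketch, assuming the master runs at 192.168.202.205:6379 and is named mymaster (adjust the IP, port, and quorum to your setup):

sentinel monitor mymaster 192.168.202.205 6379 1
# optional: how long (ms) the master must be unreachable before it is considered down
sentinel down-after-milliseconds mymaster 5000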

13.4 Performing SpringBoot Operations on sentinels

# Redis Sentinel configuration
# The master name that the sentinels monitor
spring.redis.sentinel.master=mymaster
# Instead of connecting to a specific Redis host, write multiple sentinel nodes
spring.redis.sentinel.nodes=192.168.202.206:26379
  • Note: if the connection in the following error: RedisConnectionException: DENIED Redis is running in protected mode because protected mode is enabled, no bind address was specified, no authentication password is requested to clients. In this mode connections are only accepted from the loopback interface. If you want to connect from external computers to Redis you may adopt one of the following solutions: 1) Just disable protected mode sending the command ‘CONFIG SET protected-mode no’ from the loopback interface by connecting to Redis from the same host the server is running, however MAKE SURE Redis is not publicly accessible from internet if you do so. Use CONFIG REWRITE to make this change permanent. 2)
  • Solution: add bind 0.0.0.0 to the sentinel's configuration file to allow remote connections

14. Redis cluster

14.1 Cluster

Redis has supported Cluster mode since version 3.0. Redis Cluster currently supports automatic node discovery, slave-to-master election and fault tolerance, and online sharding (reshard), among other features.

14.2 Cluster Architecture Diagram

14.3 Cluster Details

- All Redis nodes ping-pong each other and use a binary protocol internally to optimize transfer speed and bandwidth.
- A node is considered failed only when more than half of the nodes in the cluster detect the failure.
- Clients connect directly to Redis nodes, without an intermediate proxy layer. A client does not need to connect to all nodes in the cluster, only to any available node.
- redis-cluster maps all physical nodes to hash slots [0-16383]; the cluster maintains the node <-> slot <-> value mapping.

14.4 Cluster Construction

To judge whether a node in the cluster is usable, the cluster's master nodes hold an election: if more than half of the nodes think a node is down, it is considered down. It is therefore recommended to use an odd number of nodes when building a Redis cluster. Building a cluster requires at least three master nodes and three slave nodes, i.e. at least six nodes in total.

# 1. Prepare the environment to install Ruby and Redis cluster dependencies
- yum install -y ruby rubygems
- gem install redis-xxx.gem


# 2. Create 7 directories on one machine

# 3. Make a copy of the configuration file per directory
[root@localhost ~]# cp redis-4.0.10/redis.conf 7000/
[root@localhost ~]# cp redis-4.0.10/redis.conf 7001/
[root@localhost ~]# cp redis-4.0.10/redis.conf 7002/
[root@localhost ~]# cp redis-4.0.10/redis.conf 7003/
[root@localhost ~]# cp redis-4.0.10/redis.conf 7004/
[root@localhost ~]# cp redis-4.0.10/redis.conf 7005/
[root@localhost ~]# cp redis-4.0.10/redis.conf 7006/

# 4. Modify different directory configuration files
- port 6379 .....                         // modify the port for each directory
- bind 0.0.0.0                            // enable remote connections
- cluster-enabled yes                     // enable cluster mode
- cluster-config-file nodes-<port>.conf   // cluster node configuration file
- cluster-node-timeout 5000               // cluster node timeout
- appendonly yes                          // enable AOF persistence

# 5. Start the seven nodes, each with its own configuration file
- [root@localhost bin]# ./redis-server  /root/7000/redis.conf
- [root@localhost bin]# ./redis-server  /root/7001/redis.conf
- [root@localhost bin]# ./redis-server  /root/7002/redis.conf
- [root@localhost bin]# ./redis-server  /root/7003/redis.conf
- [root@localhost bin]# ./redis-server  /root/7004/redis.conf
- [root@localhost bin]# ./redis-server  /root/7005/redis.conf
- [root@localhost bin]# ./redis-server  /root/7006/redis.conf

# 6. View the process
- [root@localhost bin]# ps aux|grep redis

1. Create a cluster

# 1. Copy the cluster operation script to the bin directory
- [root@localhost bin]# cp /root/redis-4.0.10/src/redis-trib.rb .

# 2. Create the cluster
- ./redis-trib.rb create --replicas 1 192.168.202.205:7000 192.168.202.205:7001 192.168.202.205:7002 192.168.202.205:7003 192.168.202.205:7004 192.168.202.205:7005

# 3. The following message is displayed indicating that the cluster is successfully created
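Once the cluster is up, connect with the -c (cluster) option so the client follows slot redirections automatically; a rough example, with ports as in the setup above (the exact slot number and target node will vary):

[root@localhost bin]# ./redis-cli -c -p 7000
127.0.0.1:7000> set name kuangshen
-> Redirected to slot [5798] located at 192.168.202.205:7001
OK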

2. Check the cluster status

# 1. Check the cluster status: check [any node in the original cluster]
- ./redis-trib.rb check 192.168.202.205:7000

# 2. Cluster node status
- Master nodes
  Master nodes hold hash slots, and their hash slots do not overlap. Master nodes cannot be deleted. One master can have multiple slaves.
- Slave nodes
  Slave nodes hold no hash slots and can be deleted. Slave nodes do not handle writes; they only synchronize data.

3. Add the primary node

# 1. Add a master node: add-node [new node] [any node in the original cluster]
- ./redis-trib.rb add-node 192.168.1.158:7006 192.168.1.158:7005
- Note:
  1. The new node must be started in cluster mode.
  2. By default, the node is added as a master node.

4. Add a secondary node

# 1. Add a slave node: add-node --slave [new node] [any node in the cluster]
- ./redis-trib.rb add-node --slave 192.168.1.158:7006 192.168.1.158:7000
- Note: when no master node is specified, Redis attaches the new slave, at random, to one of the masters that has fewer slaves.

# 2. Add a slave to a specified master: add-node --slave --master-id <master node id> [new node] [any node in the cluster]
- ./redis-trib.rb add-node --slave --master-id 3c3a0c74aae0b56170ccb03a76b60cfe7dc1912e 127.0.0.1:7006 127.0.0.1:7000

5. Delete the replica node

# 1. Delete a node: del-node [node in the cluster] [id of the node to delete]
- ./redis-trib.rb del-node 127.0.0.1:7002 0ca3f102ecf0c888fc7a7ce43a13e9be9f6d3dd1
- Note:
  1. The node being deleted must be a slave node or a master node with no hash slots assigned.

6. Online cluster fragmentation

# 1. Online resharding: reshard [any node in the cluster]
- ./redis-trib.rb reshard 192.168.1.158:7000

15.Redis implements distributed Session management

15.1 Management Mechanism

Redis session management uses the session management solution provided by Spring Session to store an application's sessions in Redis; every session request in the application then goes to Redis to fetch the corresponding session data.

15.2 Developing Session Management

1. Introduce dependencies

<dependency>
  <groupId>org.springframework.session</groupId>
  <artifactId>spring-session-data-redis</artifactId>
</dependency>

2. Develop the Session management configuration class

@Configuration
@EnableRedisHttpSession
public class RedisSessionManager {
}
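As a quick illustration (a hypothetical controller with made-up URLs and attribute names, not part of the original project), any code that uses the standard HttpSession now reads and writes its attributes through Redis once @EnableRedisHttpSession is active:

import javax.servlet.http.HttpSession;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class SessionTestController {

    // The session obtained here is backed by Redis, so it is shared across application instances
    @RequestMapping("/session/set")
    public String setSession(HttpSession session) {
        session.setAttribute("username", "zhangsan");
        return "ok";
    }

    @RequestMapping("/session/get")
    public String getSession(HttpSession session) {
        return (String) session.getAttribute("username");
    }
}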

3. Pack and test


If the article is helpful to you, please give a like, add a follow…