1. Implementation of AOP cache

1.1 Custom annotations

Create the Annotation package in JT_common


CacheFind

package com.jt.annotation;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

//@Target({ElementType.METHOD,ElementType.FIELD,ElementType.TYPE})
// The scope of an annotation identifies the annotation used in a method, property, or class

@Target({ElementType.METHOD}) // This annotation can be used on methods
@Retention(RetentionPolicy.RUNTIME) // Retention period: kept at runtime so reflection can read it
public @interface CacheFind {
    // key: the cache key for the method's return value
    String key();               // The user must specify a key
    int seconds() default -1;   // Timeout in seconds; -1 means never expire
}


1.2 Using cached annotations

Annotate the target query method with @CacheFind and give it a key.
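A self-contained sketch of how the annotation could be applied and then read back via reflection (the annotation is restated locally so the example compiles on its own; `ItemCatServiceImpl`, the method signature, and the key name follow this tutorial's JT project but are illustrative):

```java
import java.lang.annotation.*;
import java.lang.reflect.Method;

@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface CacheFind {
    String key();               // the user must specify a key
    int seconds() default -1;   // timeout; -1 means never expire
}

class ItemCatServiceImpl {
    // Mark the query method so the AOP aspect can cache its result
    @CacheFind(key = "ITEM_CAT_PARENTID", seconds = 100)
    public String findItemCatName(Long itemCatId) {
        return "cat-" + itemCatId; // stands in for the real database query
    }
}

public class CacheFindDemo {
    public static void main(String[] args) throws Exception {
        // Read the annotation back the way the aspect will: via the Method object
        Method m = ItemCatServiceImpl.class.getMethod("findItemCatName", Long.class);
        CacheFind cf = m.getAnnotation(CacheFind.class);
        System.out.println(cf.key() + " / " + cf.seconds());
    }
}
```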

1.3 edit RedisAOP

Mark the query method in ItemCatServiceImpl with the annotation, then edit RedisAop:

package com.jt.aop;

import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.springframework.stereotype.Component;

import java.lang.reflect.Method;

@Component // Give our objects to the Spring container to manage
@Aspect // represents the AOP aspect
public class RedisAop {

    // Advice choice: around advice, because it can control whether the target method executes
    // Pointcut expression: @annotation(...) to match the custom annotation

    /**
     * Requirement: dynamically obtain the attribute values in the annotation.
     * Principle: reflection — get the target object, then the method object,
     * then the annotation, using the raw reflection API.
     * @param joinPoint
     * @return
     * @throws Throwable
     */

    @Around("@annotation(com.jt.annotation.CacheFind)") // intercepts the annotation
    public Object around(ProceedingJoinPoint joinPoint) throws Throwable {
        //1. Get the target object type and reflection mechanism
        Class targetClass = joinPoint.getTarget().getClass();
        //2
        String name= joinPoint.getSignature().getName();// Get name/ method name
        Object[] objArgs = joinPoint.getArgs();// Get parameters
        // Object conversion class
        Class[] classArgs = new Class[objArgs.length];
        for (int i = 0; i < objArgs.length; i++) {
            Object obj = objArgs[i];
            classArgs[i] = obj.getClass();
        }
        Method method = targetClass.getMethod(name, classArgs); // Get the method object by name and parameter types
        // Get the annotation inside
        CacheFind cacheFind = method.getAnnotation(CacheFind.class);
        String key= cacheFind.key();
        // Output test:
        System.out.println(key);
        return joinPoint.proceed();
    }
}

Testing:


This works, but the raw reflection code is verbose; let's optimize it.

Method 2:

    /**
     * Requirement: dynamically obtain the attribute values in the annotation.
     * Principle: reflection — get the target object, then the method object, then the annotation.
     * @param joinPoint
     * @return
     * @throws Throwable
     *
     * Upcast:   Parent p = subclass;            // implicit, no cast needed
     * Downcast: Subclass s = (Subclass) parent; // explicit cast required
     */
    @Around("@annotation(com.jt.annotation.CacheFind)") // intercepts the annotation
    public Object around(ProceedingJoinPoint joinPoint) throws Throwable {
        // Downcast: a superclass reference is cast to the subclass type
        // Casting is risky; if it goes wrong you must handle it yourself

        // Upcast: a superclass reference to a subclass object needs no cast
        // The subclass has >= the members of the parent: big converts to small, dropping what is unused
        MethodSignature methodSignature = (MethodSignature) joinPoint.getSignature();
        Method method = methodSignature.getMethod();                 // Get the method object
        CacheFind cacheFind = method.getAnnotation(CacheFind.class); // Get the annotation
        String key = cacheFind.key();
        System.out.println("Get key:"+key);
        return joinPoint.proceed();
    }

Testing:


Based on the methods provided by Spring, we continue to optimize

Method 3:

    /* AOP syntax rule:
     * If you want to receive the annotation object dynamically, write the annotation's
     * parameter name directly in the pointcut expression. Although it looks like a plain
     * name, it is parsed as "package name.Type". Getting this wrong produces:
     * "Error at ::0 Formal unbound in pointcut"
     */
    //@Around("@annotation(com.jt.annotation.CacheFind)") // intercepts the annotation
      @Around("@annotation(cacheFind)") // intercepts the annotation
      public Object around(ProceedingJoinPoint joinPoint,CacheFind cacheFind) throws Throwable {
          String key = cacheFind.key();
          System.out.println("Get key:"+key);
          return joinPoint.proceed();
      }


The full implementation:

package com.jt.aop;

import com.jt.annotation.CacheFind;
import com.jt.util.ObjectMapperUtil;
import org.aspectj.lang.JoinPoint;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import redis.clients.jedis.Jedis;

import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.List;

@Component // Give our objects to the Spring container to manage
@Aspect // represents the AOP aspect
public class RedisAop {
    // Inject objects
    @Autowired
    private Jedis jedis;

    // Advice choice: around advice, because it can control whether the target method executes
    // Pointcut expression: @annotation(...) to match the custom annotation

    /**
     * Requirement: dynamically obtain the attribute values in the annotation.
     * Principle: reflection — get the target object, then the method object, then the annotation.
     * @param joinPoint
     * @return
     * @throws Throwable
     *
     * AOP syntax rule 1: if the advice method takes extra parameters, joinPoint must come
     * first, otherwise: "Error at ::0 Formal unbound in pointcut".
     * AOP syntax rule 2: to receive the annotation object dynamically, write the annotation
     * parameter name directly in the pointcut expression; although it looks like a plain
     * name, it is parsed as "package name.Type".
     */

    @Around("@annotation(cacheFind)") // intercepts the annotation
    public Object around(ProceedingJoinPoint joinPoint, CacheFind cacheFind) throws Throwable {
        Object result = null;
        //1. Get the key, e.g. "ITEM_CAT_PARENTID"
        String key = cacheFind.key();
        //2. Dynamic splicing key to obtain parameter information
        // Array is converted to a string
        String args = Arrays.toString(joinPoint.getArgs());
        key += "::" + args; // splice the arguments into the key

        //3. Redis cache implementation
        if (jedis.exists(key)) {// Cache exists
            // Get the cache
            String json = jedis.get(key);
            // Convert to a concrete object
            MethodSignature methodSignature = (MethodSignature) joinPoint.getSignature();
            // Get the object
            Class returnType = methodSignature.getReturnType();

            result = ObjectMapperUtil.toObject(json, returnType); //target: indicates the type of the returned value
            System.out.println("AOP cache queries!");

        } else { // Cache miss
            // Query database execution target method
            result = joinPoint.proceed();
            / / the JSON conversion
            String json = ObjectMapperUtil.toJSON(result);
            // Set the timeout
            if (cacheFind.seconds() > 0) {
                jedis.setex(key, cacheFind.seconds(), json);
            } else {
                jedis.set(key, json);
            }
            System.out.println("AOP query database");
        }
        return result;
    }


//    @Around("@annotation(com.jt.annotation.CacheFind)") // intercepts the annotation
//    public Object around(ProceedingJoinPoint joinPoint) throws Throwable {
//        // Downcast: a superclass reference is cast to the subclass type
//        // Casting is risky; if it goes wrong you must handle it yourself
//
//        // Upcast: a superclass reference to a subclass object needs no cast
//        // The subclass has >= the members of the parent: big converts to small, dropping what is unused
//        MethodSignature methodSignature = (MethodSignature) joinPoint.getSignature();
//        Method method = methodSignature.getMethod();                 // get the method object
//        CacheFind cacheFind = method.getAnnotation(CacheFind.class); // get the annotation
//        String key = cacheFind.key();
//        System.out.println("Get key:" + key);
//        return joinPoint.proceed();
//    }


//    @Around("@annotation(com.jt.annotation.CacheFind)") // intercepts the annotation
//    public Object around(ProceedingJoinPoint joinPoint) throws Throwable {
//        //1. Get the target object's type for the reflection mechanism
//        Class targetClass = joinPoint.getTarget().getClass();
//        //2. Obtain information
//        String name = joinPoint.getSignature().getName(); // method name
//        Object[] objArgs = joinPoint.getArgs();           // arguments
//        // Convert the argument objects to their classes
//        Class[] classArgs = new Class[objArgs.length];
//        for (int i = 0; i < objArgs.length; i++) {
//            Object obj = objArgs[i];
//            classArgs[i] = obj.getClass();
//        }
//        Method method = targetClass.getMethod(name, classArgs); // get the method object
//        // Get the annotation from it
//        CacheFind cacheFind = method.getAnnotation(CacheFind.class);
//        String key = cacheFind.key();
//        // Output test:
//        System.out.println(key);
//        return joinPoint.proceed();
//    }


    /**
     * 1. Pointcut expression types. Objects managed by the Spring container are called beans.
     * 1.1 bean(bean id)                  — matches one bean, e.g. bean(itemCatServiceImpl)
     * 1.2 within(package name.ClassName) — matches a bunch, e.g. within(com.jt.service.*)
     * 1.3 execution(returnType package.Class.method(args))
     *     e.g. execution(* com.jt.service..*.*(..))           — a universal expression
     *     e.g. execution(Integer com.jt.service..*.add*(int)) — the argument here can only be a primitive type
     */

    /*
    //@Pointcut("execution(* com.jt.service..*.*(..))")
    @Pointcut("bean(itemCatServiceImpl)")
    public void pointCut() {}

    // JoinPoint is available in any advice; ProceedingJoinPoint is only supported in
    // around advice, because only around advice can control the target method.
    @Before("pointCut()")
    public void before(JoinPoint joinPoint) {
        String methodName = joinPoint.getSignature().getName();             // target method name
        String className = joinPoint.getSignature().getDeclaringTypeName(); // class name
        Object[] args = joinPoint.getArgs();                                // arguments
        Object target = joinPoint.getTarget();                              // target object
        System.out.println(methodName);
        System.out.println(className);
        System.out.println(args);
        System.out.println(target);
        System.out.println("I am before advice");
    }

    @Around("pointCut()")
    public Object around(ProceedingJoinPoint joinPoint) throws Throwable {
        System.out.println("around - start");
        Object result = joinPoint.proceed(); // run the next advice / the target method
        System.out.println("around - end");
        return result;
    }
    */
}

2 about Redis persistence mechanism

2.1 Service Requirements

Redis runs in memory, but memory data is lost when the power is cut off.

Idea: Can I save memory data in Redis without losing it?

Persistence: Memory data is periodically saved to disks.


2.2 RDB schema

2.2.1 Description of RDB Mode

1. Redis periodically saves memory data to an RDB file.

2. RDB is Redis's default persistence mode.

3. RDB records a snapshot of the memory data, so persistence is more efficient (only the latest data is retained).

4. Because RDB persists periodically, data may be lost between snapshots.

2.2.2 RDB command

1. Foreground persistence: save

A synchronous operation; it may block other requests.

2. Background persistence: bgsave

An asynchronous operation; it does not block the user, but it is not clear when it completes.

2.2.3 RDB Mode Configuration

save 900 1 — persist once if there is at least 1 update within 900 seconds. save 300 10 — persist once if there are at least 10 updates within 300 seconds. save 60 10000 — persist once if there are at least 10000 updates within 60 seconds.


Interview question: what does `save 1 1` mean?

Every single update is persisted within one second, which guarantees data safety; however, the efficiency is very low and it easily blocks.


If you want better persistence performance, the save rules need to be tuned flexibly:

the more frequently users write, the shorter the persistence period should be.


2.3 AOF mode

2.3.1 Enabling AOF Mode

AOF mode is disabled by default.

Enter the command: `vim redis.conf`. Show line numbers: `:set nu`. Search: `/appendonly`, then set `appendonly yes` to enable AOF.

2.3.2 Features of AOF mode

Description:

After AOF is enabled, Redis persistence is dominated by AOF.

Features:

  1. AOF is disabled by default and must be enabled manually.
  2. AOF records the user's write operations, giving near-real-time persistence (almost no data loss).
  3. AOF appends operations to the file, so the persistence file keeps growing.
  4. AOF persistence runs asynchronously.
  5. AOF files need to be rewritten/cleaned periodically.

2.3.3 AOF Persistence Principle

appendfsync always — persist after every user operation. appendfsync everysec — persist once per second. appendfsync no — do not actively persist.

2.3.4 How to choose AOF and RDB

Business:

1. If the user is after speed and a small amount of data loss is acceptable, prefer RDB mode, because it is fast.

2. If the user pursues data safety, prefer AOF mode: data will (almost) not be lost.

Interview question: 1. What are the redis persistence strategies?

  • RDB mode is the default of Redis, which records snapshots of in-memory data,
  • AOF records the user’s operation process, which can realize almost real-time persistent operation

2. If you run the flushAll command in Redis, how do you save it?

flushAll is also recorded in the AOF file: edit the AOF file, delete the flushAll command, and restart Redis.

Note: Redis normally enables both AOF and RDB modes. Run `vim appendonly.aof`, delete the flushAll line, save with `:wq`, then restart Redis: stop with `redis-cli shutdown` and start with `redis-server`.


3 About Redis memory optimization strategy

3.1 Instructions on memory optimization

Redis runs in memory, but memory is a limited resource that cannot simply be expanded forever, so the in-memory data needs to be optimized. Set the Redis memory size with: `vim redis.conf`

Maximum memory setting:

3.2 LRU algorithm

LRU stands for Least Recently Used. It is a common page-replacement algorithm that evicts the data that has gone unused for the longest time. The algorithm gives each page an access field recording the time T elapsed since the page was last accessed; when a page must be evicted, the one with the largest T — the least recently used — is chosen.

Dimension: time T. Underlying structure: linked list.
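The eviction rule above can be sketched with Java's `LinkedHashMap` in access order (a toy illustration only — Redis itself implements approximate LRU by sampling keys, not a full per-key linked list):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Toy LRU cache: the entry whose last access is oldest is evicted first.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true); // accessOrder = true: get() moves an entry to the tail
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least-recently-used head entry
    }
}

public class LruDemo {
    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // "a" becomes the most recently used entry
        cache.put("c", "3"); // capacity exceeded: "b" (least recently used) is evicted
        System.out.println(cache.keySet()); // [a, c]
    }
}
```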

3.3 LFU algorithm

LFU is the Least Frequently Used page-replacement algorithm: on replacement, the page with the smallest reference count is evicted, since frequently used pages should have larger counts. However, pages used heavily at first but never again would otherwise linger in memory, so the reference-count register can be shifted right one bit periodically, producing an exponentially decaying average use count.

Dimension: reference count. Underlying structure: linked list.

3.4 Random Algorithm

Note: data to delete is selected at random.

3.5 TTL algorithm

Note: among the keys with a timeout set, the data closest to expiring is deleted first.

3.6 Algorithm Optimization

  • volatile-lru — use the LRU algorithm on keys that have a timeout set.
  • allkeys-lru — use the LRU algorithm on all keys.
  • volatile-lfu — use the LFU algorithm on keys that have a timeout set.
  • allkeys-lfu — use the LFU algorithm on all keys.
  • volatile-random — randomly delete keys among those with a timeout set.
  • allkeys-random — randomly delete keys among all keys.
  • volatile-ttl — among keys with a timeout set, delete by TTL (closest to expiring first).

noeviction — do not delete data; if memory is full, return an error. This is the default policy.

4. Redis cache

4.1 What is cache penetration

Note: under high concurrency, users frequently request data that does not exist, so a large number of requests go straight to the database and the server goes down. Solutions:

  1. Guarantee that the data users can request always exists in the database — e.g. with a Bloom filter.
  2. IP rate limiting and blacklists; with microservices this can be controlled at the API gateway.

4.2 What is cache breakdown

Note: under high concurrency, users frequently request one piece of hotspot data, but that hotspot data has just expired in the cache, so the requests hit the database directly.

In short: the one key everyone wants is exactly the one that failed.

4.3 What is cache avalanche

Note: under high concurrency, a large amount of cached data expires at once, so the hit ratio of the Redis cache server drops sharply and requests go straight to the database.

Main problem: hotspot data expiring together.

Solution:

1. When setting timeouts for hotspot data, add a random component (e.g. 10 seconds + a random number) so keys do not all expire at the same time. 2. Set up multi-level caching.
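As a sketch, the randomized expiry could look like this (a hypothetical helper; the base and jitter values are illustrative, and the result would be passed to something like `jedis.setex(key, t, json)`):

```java
import java.util.concurrent.ThreadLocalRandom;

public class ExpiryJitter {
    // Base timeout plus random jitter, so hot keys do not all expire together.
    static int expireSeconds(int baseSeconds, int jitterSeconds) {
        return baseSeconds + ThreadLocalRandom.current().nextInt(jitterSeconds + 1);
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            // e.g. 10s base with up to 5s of jitter
            System.out.println(expireSeconds(10, 5));
        }
    }
}
```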

4.4 Bloom Filter

4.4.1 introduction

The Bloom filter was proposed by Bloom in 1970. It is actually a long binary vector plus a series of random mapping (hash) functions, and can be used to test whether an element is in a set. Its advantage is that space efficiency and query time are far better than general algorithms; its drawbacks are a certain false-positive rate and difficulty deleting elements.

4.4.2 Computer base conversion

  • 1 byte = 8 bits
  • 1kb = 1024 bytes
  • 1mb = 1024kb

4.4.3 Service Scenarios

Suppose the database has 10 million records of about 2KB each. If all of it is hot data, how much memory is needed to cache it? About 19GB:

10000000 * 2KB / 1024 / 1024 ≈ 19GB

Question: Can you optimize the storage of in-memory data? Use as little memory as possible!

Idea: if 1 bit (0/1) is used to represent each record, how much space is needed? About 1.19MB (10000000 bits / 8 / 1024 / 1024).
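The two estimates can be checked with a few lines of Java:

```java
public class SizeCheck {
    public static void main(String[] args) {
        long records = 10_000_000L;
        // 2KB per record, expressed in GB
        double asGb = records * 2.0 / 1024 / 1024;
        // 1 bit per record (records / 8 bytes), expressed in MB
        double asMb = records / 8.0 / 1024 / 1024;
        System.out.printf("full data: %.2f GB, bitmap: %.2f MB%n", asGb, asMb);
    }
}
```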

4.4.4 Principle of Bloom Filter

Core 1: a long binary vector. Core 2: multiple hash functions. Problem solved: checking whether data exists.

1). Data loading process 2). Data verification 3). A well-tuned Bloom filter can keep the false-positive rate under 0.03%
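A minimal, self-contained sketch of those two cores (toy code with three fixed hash seeds; a real project would use something like Guava's `BloomFilter` or a Redis Bloom module rather than this):

```java
import java.util.BitSet;

// Toy Bloom filter: k hash functions set/check k bits in one bit vector.
// It can report false positives, never false negatives, and cannot delete.
class SimpleBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int[] seeds = {7, 31, 131}; // one seed per hash function

    SimpleBloomFilter(int size) {
        this.size = size;
        this.bits = new BitSet(size);
    }

    private int hash(String value, int seed) {
        int h = 0;
        for (char c : value.toCharArray()) h = seed * h + c;
        return Math.floorMod(h, size); // map into the bit vector
    }

    // Data loading: set one bit per hash function
    void add(String value) {
        for (int seed : seeds) bits.set(hash(value, seed));
    }

    // Data verification: any unset bit means "definitely absent"
    boolean mightContain(String value) {
        for (int seed : seeds) {
            if (!bits.get(hash(value, seed))) return false;
        }
        return true; // probably present
    }
}

public class BloomDemo {
    public static void main(String[] args) {
        SimpleBloomFilter filter = new SimpleBloomFilter(1 << 20);
        filter.add("ITEM_1000");
        System.out.println(filter.mightContain("ITEM_1000")); // true
        System.out.println(filter.mightContain("ITEM_9999")); // very likely false
    }
}
```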

5 Redis sharding mechanism

5.1 Why Sharding is Needed

Note: if a large amount of data must be cached but it is all stored in a single Redis instance, queries are inefficient, and if that Redis server goes down the whole cache becomes unusable.

The solution: adopt the Redis sharding mechanism.

5.2 Redis fragment construction

5.2.1 Preparing the Configuration File

5.2.2 Changing the Port Number

Note: Configure port numbers 6379/6380/6381 respectively

5.2.3 Start three Redis

[root@localhost shards]# redis-server 6379.conf & redis-server  6380.conf & redis-server 6381.conf &


Check whether the startup is successful

5.2.4 Redis Sharding Example

public class TestRedis2 {

    /**
     * How are keys distributed across the three Redis instances (6379/6380/6381)?
     */
    @Test
    public void testShards(){
        // Create a collection object
        List<JedisShardInfo> shards = new ArrayList<>();
        // Add the shard nodes
        shards.add(new JedisShardInfo("192.168.126.129", 6379));
        shards.add(new JedisShardInfo("192.168.126.129", 6380));
        shards.add(new JedisShardInfo("192.168.126.129", 6381));
        // Pass the shards to ShardedJedis
        ShardedJedis shardedJedis = new ShardedJedis(shards);
        // Operate as the user would
        shardedJedis.set("shards", "Redis sharding");
        // Output test
        System.out.println(shardedJedis.get("shards"));
    }
}

5.3 Consistent Hash Algorithm

5.3.1 introduction

Consistent hashing algorithm was proposed by MIT in 1997. It is a special hashing algorithm to solve the problem of distributed cache. [1] When removing or adding a server, the mapping between existing service requests and the server that processes them can be changed as little as possible. Consistent Hash solves the dynamic scaling problems of simple Hash algorithms in Distributed Hash tables (DHT) [2].

Core Knowledge:

1. Consistent hashing solves the mapping between data and nodes (which node manages which data). 2. When nodes are added or removed, the data can be rebalanced flexibly.

5.3.2 Principles of the Consistent Hash Algorithm

Clockwise lookup: a key is mapped to the first node found clockwise from its hash position on the ring.

5.3.3 Features – Balance

Note: Balance means that the results of the hash should be evenly distributed among nodes, which solves the problem of load balancing algorithmically

Solution: Introduce virtual nodes

5.3.4 Features – Monotonicity

Monotonicity means the system keeps operating normally when nodes are added or removed: regardless of node changes, every piece of data can still find a node to be mounted on.

5.3.5 Features – Dispersion

Dispersion means the data should be distributed among the nodes of the distributed cluster (nodes can keep their own backups); no single node needs to store all the data. Don't put all your eggs in one basket.
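These properties can be seen in a small consistent-hash ring sketch (Java `TreeMap` with virtual nodes for balance; illustrative only — ShardedJedis applies the same idea internally with its own hash function and node weighting):

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Toy consistent-hash ring: virtual nodes improve balance; a key is served
// by the first node clockwise (ceiling) from its hash position.
class HashRing {
    private final TreeMap<Integer, String> ring = new TreeMap<>();
    private static final int VIRTUAL_NODES = 100;

    private int hash(String s) {
        // FNV-1a style hash, for illustration only
        int h = 0x811c9dc5;
        for (char c : s.toCharArray()) { h ^= c; h *= 16777619; }
        return h & 0x7fffffff;
    }

    void addNode(String node) {
        for (int i = 0; i < VIRTUAL_NODES; i++) ring.put(hash(node + "#" + i), node);
    }

    void removeNode(String node) {
        for (int i = 0; i < VIRTUAL_NODES; i++) ring.remove(hash(node + "#" + i));
    }

    String nodeFor(String key) {
        // First virtual node clockwise from the key's position, wrapping around
        SortedMap<Integer, String> tail = ring.tailMap(hash(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }
}

public class RingDemo {
    public static void main(String[] args) {
        HashRing ring = new HashRing();
        ring.addNode("192.168.126.129:6379");
        ring.addNode("192.168.126.129:6380");
        ring.addNode("192.168.126.129:6381");
        System.out.println(ring.nodeFor("shards")); // one of the three nodes
        ring.removeNode("192.168.126.129:6380");    // monotonicity: most keys keep their node
        System.out.println(ring.nodeFor("shards"));
    }
}
```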

Baidu Schematic Diagram 1:

Baidu Schematic Diagram 2:

5.4 SpringBoot integrates Redis sharding

5.4.1 Editing a Configuration File

#redis.host=192.168.126.129
#redis.port=6379
#Redis sharding nodes
redis.nodes=192.168.126.129:6379,192.168.126.129:6380,192.168.126.129:6381

5.4.2 Editing a Configuration Class

package com.jt.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.PropertySource;
import redis.clients.jedis.*;

import java.util.ArrayList;
import java.util.List;

@Configuration  // Identifies as a configuration class that generally integrates third parties
@PropertySource("classpath:/properties/redis.properties")
public class RedisConfig {

    // Inject node information
    @Value("${redis.nodes}")
    private String nodes;       //node,node,node


    @Bean
    public ShardedJedis shardedJedis(){
        // New an object
        List<JedisShardInfo> shards = new ArrayList<>();
        // Assign a value to the data, get the URL and IP
        // Split nodes into an array of nodes by a number
        String[] nodeArray = nodes.split(",");
        //node=host:port
        for (String node : nodeArray){
            // Get the element
            // Get the host part
            String host = node.split(":")[0];
            // Get the port part; use Integer.parseInt to convert the string to int
            int port = Integer.parseInt(node.split(":")[1]);
            // Create an object that encapsulates the data
            JedisShardInfo info = new JedisShardInfo(host, port);
            // Encapsulate to shards
            shards.add(info);
        }
        return new ShardedJedis(shards);
        
        //2. Optionally edit the Redis pool configuration to adjust the number of connections:
        /*
        JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
        // Adjust the connection pool size
        jedisPoolConfig.setMinIdle(10);    // minimum idle / minimum connections
        jedisPoolConfig.setMaxIdle(40);    // maximum idle connections
        jedisPoolConfig.setMaxTotal(1000);
        ShardedJedisPool shardedJedisPool = new ShardedJedisPool(jedisPoolConfig, shards);
        return shardedJedisPool.getResource();
        */
    }



    /*
    @Value("${redis.host}")
    private String host;
    @Value("${redis.port}")
    private Integer port;

    @Bean
    public Jedis jedis(){
        return new Jedis(host, port);
    }
    */
}

5.4.3 revision RedisAOP

    // Inject objects
    @Autowired
    //private Jedis jedis;       // a single Redis
    private ShardedJedis jedis;  // sharded Redis: memory capacity expansion

Run tests:


6.Redis sentinel mechanism

6.1 Characteristics of Redis sharding

Redis sharding expands the available memory, but if one node goes down the whole application is directly affected!

Key point: if a node goes down, we need high availability.

6.2 Redis Data Synchronization Configuration

Close all sharding:

6.2.1 Copying a Directory

Note: Copy the shards directory and rename it sentinel

cp -r shards sentinel


6.2.2 Starting three Redis

1. Delete the original persistent file

2. Start three Redis


6.2.3 Primary/secondary structure construction

  • 6379 host
  • 6380/6381 slave machine

2. Master/slave mount command `slaveof`: it only takes effect for the current process. If the slave goes down or is restarted, it must be mounted again.

slaveof 192.168.126.129 6379
  • Set 6380 as slave

  • Set 6381 to slave

3. Write data and check whether the data is successfully mounted


6.3 Redis sentry implementation

6.3.1 Sentry Principle

Description:

1. When the sentinel starts, it connects to the Redis master node and obtains information about all nodes.

2. It sends PING and expects PONG to check that the servers are running normally.

3. If the sentinel gets no valid response three times in a row, it considers the master down.

4. The sentinel then picks one of the slaves (by its selection algorithm) and promotes it to master; the other nodes become slaves of the new master.


6.3.2 Configuring the Sentinel Service

1). Copy files

cp sentinel.conf sentinel/

2). Turn off the protected mode

3). Start background operation

4). Sentinel monitoring

The 1 here is the quorum: how many sentinels must agree for the vote to be valid. With 3 sentinels you would write 2, i.e. more than half must agree.

5). Modify the downtime threshold

And then save


6.3.3 Sentinel Command

1. Start: `redis-sentinel sentinel.conf`

2. Stop the sentinel: `ps -ef | grep redis`, then `kill -9 <pid>`

6.3.4 Sentinel High availability Test

1). Shut down host 6379 and check whether the slave machine is selected as the host

2). Check whether the configuration file of the slave machine is associated with the host. The master/slave relationship always exists

tail -10 6380.conf

If the relationship is wrong, delete the rewritten lines at the end of the file, restart the server, and set it up again.

6.4 SpringBoot integration sentinel

6.4.1 Getting Started

    /**
     * Test the sentinel API
     */
    @Test
    public void testSentinel(){
        //2. Create the set of sentinel addresses
        Set<String> sets = new HashSet<>();
        //3. Add the sentinel host:port
        sets.add("192.168.126.129:1685");
        //4. Edit the pool configuration to adjust the number of connections
        JedisPoolConfig poolConfig = new JedisPoolConfig();
        //5. Adjust the connection pool size
        poolConfig.setMinIdle(10);    // minimum idle / minimum connections
        poolConfig.setMaxIdle(40);    // maximum idle connections
        poolConfig.setMaxTotal(1000); // maximum connections
        //1. Create the pool: masterName (as configured in sentinel.conf; the actual master is resolved dynamically) plus the sentinel set
        JedisSentinelPool pool = new JedisSentinelPool("mymaster", sets, poolConfig);
        //6. Take a connection from the pool (operates like a single Redis)
        Jedis jedis = pool.getResource();
        //7. Store data
        jedis.set("AAA", "Hello Redis");
        //8. Output test
        System.out.println(jedis.get("AAA"));
        //9. Return the connection to the pool
        jedis.close();
    }

6.5 Summary of Redis Sharding/Sentry

1. Sharding: expands memory to store massive data. Disadvantage: no high availability.

2. Sentinel: provides node high availability.

Disadvantages:

  • 1. The sentry cannot be expanded
  • 2. Sentinel itself does not have high availability

Idea: can one mechanism provide both memory expansion and high availability, without a third-party component?

7.Redis cluster rules

Job required knowledge:

The cluster configuration reference: blog.csdn.net/QQ104305101…

7.1 Error Description about Cluster Establishment

Cluster Setup Plan

Master and slave division:

  • 3 host
  • 3 slave machines 6 in total
  • Ports 7000-7005

1). Close the Redis cluster

2). Delete unnecessary configuration files

3). Restart Redis

 sh start.sh
Copy the code

4). Set up redis cluster

# Since Redis 5.0, redis-cli manages the cluster natively (--cluster)
redis-cli --cluster create --cluster-replicas 1 192.168.35.130:7000 192.168.35.130:7001 192.168.35.130:7002 192.168.35.130:7003 192.168.35.130:7004 192.168.35.130:7005