1. What are the uses of RDB and AOF in Redis?

Generally, you can use RDB for unimportant data (such as cached data) and AOF for important data, but the two can also be used together.

RDB trigger mechanism:

The first type: save (synchronous). 1. The client enters the save command and the redis server synchronously creates the RDB binary file. 2. It causes Redis to block (noticeably when the data volume is very large). 3. File policy: if an old RDB file exists, it replaces the old one. 4. Complexity: O(n).

The second type: bgsave (asynchronous, logged as "Background saving started"). 1. The client enters the bgsave command and the redis server creates the RDB binary file asynchronously. 2. fork() produces a child process to write the file (the fork call itself briefly blocks Redis, but the snapshot is then written without blocking). 3. File policy: same as save; if an old RDB file exists, it replaces the old one. 4. Complexity: O(n).

The third type (the common mode): automatic, through the configuration file. The rules have the form "save <seconds> <changes>", for example: save 900 1, save 300 10, save 60 10000. If 10,000 keys change within 60 seconds, an RDB file is generated automatically; if 10 keys change within 300 seconds, an RDB file is generated automatically; if 1 key changes within 900 seconds, an RDB file is generated automatically.

If any of these three criteria is met, the RDB file is generated automatically, and bgsave is used internally.
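
These rules live in redis.conf; the directives below are exactly the three quoted above, with comments added:

    # redis.conf - automatic RDB snapshot rules ("save <seconds> <changes>")
    save 900 1      # at least 1 key changed within 900 seconds
    save 300 10     # at least 10 keys changed within 300 seconds
    save 60 10000   # at least 10000 keys changed within 60 seconds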

RDB is usually triggered with this third approach, but it has a drawback: if the number of changes does not fall within the configured thresholds, no snapshot is triggered, which leaves many cases where data is not persisted. Therefore, we generally also adopt the following method: AOF.

AOF

The problems RDB leaves open: snapshotting is time-consuming and performance-consuming, and because you cannot fully control when snapshots happen, data written between them may be lost.

AOF flush strategies:

The log is not written directly to the hard disk; each write command is first placed in a buffer, which is flushed (fsync) to the hard disk according to one of three policies. The first, always: every write command placed in the buffer is immediately fsync'd to the AOF file. The second, everysec: the buffer is fsync'd to the hard disk once per second, so at most about one second of writes can be lost. The third, no: the operating system decides when the buffer is fsync'd to the AOF file.
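
In redis.conf these policies correspond to the appendfsync directive (appendonly turns AOF on):

    # redis.conf - AOF settings
    appendonly yes        # enable AOF persistence
    appendfsync everysec  # one of: always | everysec | no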

RDB and AOF selection

RDB best policy

Turn RDB off on the master during master/slave operation; centralized management: back up data by day and by hour;

In a master/slave configuration, enable RDB on the slave node.

AOF best strategy

Turn AOF on: for both caching and storage it is enabled in most cases; manage AOF rewrites centrally; and use everysec, the flush-once-per-second policy.

Overall best strategy

Use small shards: cap each Redis instance at 4 GB of memory. For cache versus storage use, choose the expiration/eviction policy according to the data's characteristics. Monitor hard disk, memory, load, network, and so on, and keep enough spare memory.
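
A minimal redis.conf sketch of that memory cap; the eviction policy shown is just one common choice for cache workloads, not something the text mandates:

    # redis.conf - keep each instance small
    maxmemory 4gb
    maxmemory-policy allkeys-lru  # example eviction policy for cache use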

2. Implementing the Fibonacci sequence.

Method 1: recursive implementation

public static int getFib(int n) {
    if (n < 0) {
        return -1;
    } else if (n == 0) {
        return 0;
    } else if (n == 1 || n == 2) {
        return 1;
    } else {
        return getFib(n - 1) + getFib(n - 2);
    }
}

Recursion is the simplest way to implement it, but it has problems: when n is large, the call tree recomputes the same subproblems over and over and the nested calls occupy a lot of stack space. Since the sequence is defined by F(n) = F(n-1) + F(n-2) (n ≥ 2, n ∈ N*), we can instead calculate from beginning to end, computing the first values and then working step by step up to the nth value.

Method 2: iterative implementation with variables

public static int getFib2(int n) {
    if (n < 0) {
        return -1;
    } else if (n == 0) {
        return 0;
    } else if (n == 1 || n == 2) {
        return 1;
    } else {
        int c = 0, a = 1, b = 1;
        for (int i = 3; i <= n; i++) {
            c = a + b;
            a = b;
            b = c;
        }
        return c;
    }
}

In the implementation above we define three variables a, b, and c, where c = a + b, and compute step by step until we reach the value with index n. Since we can define variables for storage, we can also define an array in which each element is one value of the Fibonacci sequence; that way we get not only the nth value but the entire sequence.

Method 3: array-based implementation

public static int getFib3(int n) {
    if (n < 0) {
        return -1;
    } else if (n == 0) {
        return 0;
    } else if (n == 1 || n == 2) {
        return 1;
    } else {
        int[] fibAry = new int[n + 1];
        fibAry[0] = 0;
        fibAry[1] = fibAry[2] = 1;
        for (int i = 3; i <= n; i++) {
            fibAry[i] = fibAry[i - 1] + fibAry[i - 2];
        }
        return fibAry[n];
    }
}
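
A quick sanity check, assuming the three methods live in one class; all three variants should print the same values (0, 1, 1, 2, 3, 5, ...):

    public static void main(String[] args) {
        for (int i = 0; i <= 10; i++) {
            // getFib, getFib2 and getFib3 must agree on every index
            System.out.println(getFib(i) + " " + getFib2(i) + " " + getFib3(i));
        }
    }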

3. Optimizing the insertion of large volumes of data into a database.

3.1. Ordered data insertion;

Since the index must be maintained when rows are inserted into a database, out-of-order records increase the cost of index maintenance. Consider InnoDB's B+tree index: if every insert lands at the end of the index, index positioning is very efficient and adjustments are minor. If the inserted record falls in the middle of the index, the B+tree has to split and merge pages around it, consuming significant computing resources; the index-positioning efficiency for inserted records drops, and frequent disk operations occur once the data volume is large.
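
A minimal sketch of the idea (the Row type and its values are hypothetical): sort the batch by primary key before inserting, so each row appends to the right edge of the B+tree instead of forcing a split in the middle.

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    public class OrderedInsertDemo {
        // Hypothetical row type keyed by the primary key "id".
        record Row(long id, String name) {}

        public static void main(String[] args) {
            List<Row> batch = new ArrayList<>(List.of(
                    new Row(42, "c"), new Row(7, "a"), new Row(19, "b")));
            // Sorting by primary key turns each insert into an append at the
            // end of the index rather than a mid-index page split.
            batch.sort(Comparator.comparingLong(Row::id));
            batch.forEach(r -> System.out.println(r.id() + " -> " + r.name()));
        }
    }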

3.2. Insert within a transaction;

Using a transaction can improve the efficiency of data insertion, because MySQL implicitly creates a transaction for every standalone INSERT and performs the actual insert inside it. By wrapping many inserts in one explicit transaction, you avoid the cost of creating and committing a transaction per statement: all the inserts are committed together after they have executed.
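
A hedged JDBC sketch of this: the table t(id, name), the connection details, and the batch size of 1,000 are all hypothetical; rewriteBatchedStatements=true is a MySQL Connector/J option that merges a batch into multi-row INSERT statements.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class BatchInsertDemo {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection details.
            String url = "jdbc:mysql://localhost:3306/test?rewriteBatchedStatements=true";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                conn.setAutoCommit(false); // one explicit transaction for the whole batch
                String sql = "INSERT INTO t (id, name) VALUES (?, ?)";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (int id = 1; id <= 10_000; id++) { // ordered primary key
                        ps.setInt(1, id);
                        ps.setString(2, "name-" + id);
                        ps.addBatch();
                        if (id % 1_000 == 0) {
                            ps.executeBatch(); // send every 1000 rows
                        }
                    }
                    ps.executeBatch();         // flush the remainder
                    conn.commit();             // commit once, not per row
                } catch (SQLException e) {
                    conn.rollback();
                    throw e;
                }
            }
        }
    }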

Conclusion:

The merged-data + transaction method improves performance significantly when the data volume is small; when it grows large (over 10 million rows), performance deteriorates sharply because the data volume exceeds the capacity of the InnoDB buffer pool. The merged-data + transaction + ordered-data method, however, still performs well at tens of millions of rows: with ordered data, index positioning stays cheap and frequent disk reads and writes are avoided, so high performance is maintained.

Things to note:

The length of a SQL statement is limited, so when merging rows into one SQL statement, the statement must not exceed that limit; the limit can be raised via the max_allowed_packet configuration item.
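
For reference, "merging" means sending one statement such as INSERT INTO t (id, name) VALUES (1, 'a'), (2, 'b'), (3, 'c'); instead of one statement per row (the table is hypothetical). The packet limit that bounds such a statement can be raised in my.cnf; the 64M below is only an example value:

    [mysqld]
    max_allowed_packet = 64M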

Transactions need to be kept to a controlled size; a transaction that is too large may hurt execution efficiency. MySQL has an innodb_log_buffer_size configuration item; once a transaction's changes exceed this value, InnoDB flushes log data to disk, which is inefficient. It is therefore better to commit the transaction before the data reaches this value.
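
The corresponding my.cnf setting; again, the size shown is an example, not a recommendation from the text:

    [mysqld]
    innodb_log_buffer_size = 32M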

3.3. Stored Procedures;

3.4. Add cache and use Redis to preload data;

3.5. Temporary table;

3.6. Queue;

3.7. Splitting into multiple databases and tables (for even larger data volumes), and so on.

Today’s other interview questions:

Hashing principle;
Which is faster, a hash structure or a B+tree?
How to split databases and tables;
Distributed transaction solutions;
How does Kafka send messages larger than 10K?
How to choose between Nacos and Eureka;
Is your project online yet? How many daily active users, and how many users in total?
Talk about multi-table subqueries;
What is the disk-flush strategy?
Distributed center (DC);
Questions about the project;
What is the process of a project from development to completion?
How many people were on the project team?
How the project is deployed and published;
Application scenarios of Redis: list one scenario and explain how to implement it;
SQL optimization;
Why did the project choose Spring Cloud instead of Dubbo?
Describe which components Spring Cloud uses;
MongoDB application scenarios: why choose MongoDB rather than MySQL?
Introduce what you know about MQ;
Common Spring Boot annotations and their functions;
How to use Vue, and its common tags;
If you were to design a message queue, how would you design it?
Talk about development code standards;
The difference between MyBatis and MyBatis-Plus, and how to choose;
Talk about the bean lifecycle;
How can cache avalanche be resolved? How is cache penetration resolved?
How to implement an order-list scenario using Redis?
MySQL has 20 million rows but Redis holds only 200,000 keys; how do you ensure the Redis data is the hot data?
What advantages does Redis have over Memcached?
If there are 100 million keys in Redis and 100,000 of them start with a fixed, known prefix, how do you find them all?
Login verification: what encryption is used? Did you participate in the deployment?
Why use MQ and not Kafka?
What is your understanding of high concurrency?
What about MySQL tuning?
What third-party payment interface is used in the project?
What search engine was used in the project?
Incremental updates to Redis;
How to ensure a lazily-initialized singleton is unique under multi-threading without hurting efficiency;
Why can the Spring Boot startup class boot the application once its annotation is added?
How do MQ producers ensure the messages they produce are consumed?
After bubble sorting, output the value corresponding to a given index;
Dictionary and set storage: how to get the words in the dictionary;
What are your future career plans?