Preface: why does MySQL recommend using an auto-increment id (auto_increment primary key) rather than a UUID or a random id? In this blog we will analyze this question and explore the internal causes.

1. MySQL and program example

1.1 To illustrate this problem, let’s first create three tables

user_key_auto, user_uuid, and user_random_key. Each table uses a different primary key strategy, while the other fields are identical. We then test each table's insert speed and query speed:
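The DDL of the three tables was shown as screenshots in the original post and did not survive; based on the INSERT statements in the test code below, they likely looked roughly like this (the column list is taken from those statements, the types are assumptions):

```sql
-- Hypothetical reconstruction of the three test tables
CREATE TABLE user_key_auto (
    id        INT UNSIGNED NOT NULL AUTO_INCREMENT,  -- auto-increment primary key
    user_id   INT,
    user_name VARCHAR(64),
    sex       TINYINT,
    address   VARCHAR(255),
    city      VARCHAR(64),
    email     VARCHAR(64),
    state     TINYINT,
    PRIMARY KEY (id)
) ENGINE=InnoDB;

CREATE TABLE user_uuid (
    id        VARCHAR(36) NOT NULL,                  -- UUID string primary key
    user_id   INT,
    user_name VARCHAR(64),
    sex       TINYINT,
    address   VARCHAR(255),
    city      VARCHAR(64),
    email     VARCHAR(64),
    state     TINYINT,
    PRIMARY KEY (id)
) ENGINE=InnoDB;

CREATE TABLE user_random_key (
    id        BIGINT NOT NULL,                       -- 18-digit snowflake id
    user_id   INT,
    user_name VARCHAR(64),
    sex       TINYINT,
    address   VARCHAR(255),
    city      VARCHAR(64),
    email     VARCHAR(64),
    state     TINYINT,
    PRIMARY KEY (id)
) ENGINE=InnoDB;
```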

Note: the random key here actually refers to a random id calculated by the snowflake algorithm: an 18-digit long value
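For reference, here is a minimal sketch of a snowflake-style generator (a simplified illustration, not the exact algorithm the test used via hutool): a 41-bit timestamp, a 10-bit worker id, and a 12-bit per-millisecond sequence packed into one long, which is why the result prints as an 18- or 19-digit number.

```java
// Simplified snowflake-style id generator (illustrative sketch; clock
// rollback handling and id range checks are omitted for brevity).
public class SnowflakeSketch {
    private static final long EPOCH = 1288834974657L; // custom epoch in ms
    private final long workerId;                      // expected range 0..1023
    private long lastTimestamp = -1L;
    private long sequence = 0L;

    public SnowflakeSketch(long workerId) {
        this.workerId = workerId;
    }

    public synchronized long nextId() {
        long ts = System.currentTimeMillis();
        if (ts == lastTimestamp) {
            sequence = (sequence + 1) & 0xFFF;  // 12-bit sequence within one ms
            if (sequence == 0) {                // sequence exhausted: wait for next ms
                while (ts <= lastTimestamp) {
                    ts = System.currentTimeMillis();
                }
            }
        } else {
            sequence = 0;
        }
        lastTimestamp = ts;
        // 41-bit timestamp | 10-bit worker id | 12-bit sequence
        return ((ts - EPOCH) << 22) | (workerId << 12) | sequence;
    }

    public static void main(String[] args) {
        SnowflakeSketch gen = new SnowflakeSketch(1);
        long id = gen.nextId();
        System.out.println(id);
        System.out.println("digits: " + String.valueOf(id).length());
    }
}
```

Because the timestamp occupies the high bits, ids generated later are always numerically larger, yet the values are still scattered far apart, which matters for the index analysis below.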

Auto-increment id table, user UUID table, and random primary key table: the original DDL was shown as screenshots.

1.2 Theory alone is not convincing, so let's go straight to the program and use Spring's JdbcTemplate to run the insert test:

Technical Framework:

springboot+jdbcTemplate+junit+hutool

The program connects to a local test database and writes the same amount of data into each table under the same environment, then compares the insert times to judge efficiency. To make the test as realistic as possible, all data (name, email, address, and so on) is randomly generated.
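The random data generation is not shown in the post (it used hutool's random utilities); a hypothetical stand-in looks like this — the class and method names here are made up for illustration:

```java
import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;

// Hypothetical sketch of the random test-data generation described above.
public class RandomUserData {
    private static final String LETTERS = "abcdefghijklmnopqrstuvwxyz";

    // Random lowercase string of the given length, e.g. for names or cities.
    static String randomString(int len) {
        Random rnd = ThreadLocalRandom.current();
        StringBuilder sb = new StringBuilder(len);
        for (int i = 0; i < len; i++) {
            sb.append(LETTERS.charAt(rnd.nextInt(LETTERS.length())));
        }
        return sb.toString();
    }

    // Random email address assembled from random parts.
    static String randomEmail() {
        return randomString(8) + "@" + randomString(5) + ".com";
    }

    public static void main(String[] args) {
        System.out.println(randomString(6));
        System.out.println(randomEmail());
    }
}
```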

package com.wyq.mysqldemo;
import cn.hutool.core.collection.CollectionUtil;
import com.wyq.mysqldemo.databaseobject.UserKeyAuto;
import com.wyq.mysqldemo.databaseobject.UserKeyRandom;
import com.wyq.mysqldemo.databaseobject.UserKeyUUID;
import com.wyq.mysqldemo.diffkeytest.AutoKeyTableService;
import com.wyq.mysqldemo.diffkeytest.RandomKeyTableService;
import com.wyq.mysqldemo.diffkeytest.UUIDKeyTableService;
import com.wyq.mysqldemo.util.JdbcTemplateService;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.util.StopWatch;
import java.util.List;
@SpringBootTest
class MysqlDemoApplicationTests {


    @Autowired
    private JdbcTemplateService jdbcTemplateService;


    @Autowired
    private AutoKeyTableService autoKeyTableService;


    @Autowired
    private UUIDKeyTableService uuidKeyTableService;


    @Autowired
    private RandomKeyTableService randomKeyTableService;




    @Test
    void testDBTime() {


        StopWatch stopwatch = new StopWatch("SQL execution time consumption");




        /* auto_increment key task */
        final String insertSql = "INSERT INTO user_key_auto(user_id,user_name,sex,address,city,email,state) VALUES(?,?,?,?,?,?,?)";


        List<UserKeyAuto> insertData = autoKeyTableService.getInsertData();
        stopwatch.start("auto key table task started");
        long start1 = System.currentTimeMillis();
        if (CollectionUtil.isNotEmpty(insertData)) {
            boolean insertResult = jdbcTemplateService.insert(insertSql, insertData, false);
            System.out.println(insertResult);
        }
        long end1 = System.currentTimeMillis();
        System.out.println("Time consumed by auto key: " + (end1 - start1));


        stopwatch.stop();




        /* UUID key task */
        final String insertSql2 = "INSERT INTO user_uuid(id,user_id,user_name,sex,address,city,email,state) VALUES(?,?,?,?,?,?,?,?)";


        List<UserKeyUUID> insertData2 = uuidKeyTableService.getInsertData();
        stopwatch.start("Key table task for UUID started");
        long begin = System.currentTimeMillis();
        if (CollectionUtil.isNotEmpty(insertData2)) {
            boolean insertResult = jdbcTemplateService.insert(insertSql2, insertData2, true);
            System.out.println(insertResult);
        }
        long over = System.currentTimeMillis();
        System.out.println("UUID key consumption time :" + (over - begin));


        stopwatch.stop();




        /* random long value key task */
        final String insertSql3 = "INSERT INTO user_random_key(id,user_id,user_name,sex,address,city,email,state) VALUES(?,?,?,?,?,?,?,?)";
        List<UserKeyRandom> insertData3 = randomKeyTableService.getInsertData();
        stopwatch.start("Random long key table task started");
        long start = System.currentTimeMillis();
        if (CollectionUtil.isNotEmpty(insertData3)) {
            boolean insertResult = jdbcTemplateService.insert(insertSql3, insertData3, true);
            System.out.println(insertResult);
        }
        long end = System.currentTimeMillis();
        System.out.println("Random key task cost :" + (end - start));
        stopwatch.stop();




        String result = stopwatch.prettyPrint();
        System.out.println(result);
    }
}

1.3 Program writing results

user_key_auto write result:

user_random_key write result:

user_uuid write result:

1.4 Efficiency test results

With 1.3 million rows already in each table, let's insert another 100,000 rows and see the result:

As you can see, at a data volume of around 1 million rows, UUID insertion efficiency is already the lowest, and after another 1.3 million rows are added, UUID's insert performance drops sharply.

The overall efficiency ranking is auto_key > random_key > UUID: UUID is the least efficient, and with a large amount of data its performance plummets.

So why does this happen? With that in mind, let’s explore this question:

2. Comparing the index structures of an auto-increment id and a UUID

2.1 Internal index structure when using an auto-increment id

Auto-increment primary key values are sequential, so InnoDB stores each new record immediately after the previous one. When a page reaches its maximum fill factor (by default InnoDB fills a page to 15/16 of its size, leaving 1/16 of the space free for future modifications):

①. The next record is simply written to a new page. When data is loaded in this sequential way, index pages are filled with nearly sequential records, which maximizes the page fill rate and avoids wasting page space.

②. The newly inserted row always goes right after the current maximum row, so MySQL can locate and address it quickly, with no extra cost to compute the new row's position.

③. Page splits and fragmentation are reduced.

2.2 Internal index structure when using a UUID

Because a UUID is random relative to a sequential auto-increment id, the primary key value of a new row is not necessarily greater than that of the previous one, so InnoDB cannot simply append the new row to the end of the index; it has to find an appropriate position for the row and allocate new space for it.

This requires a lot of extra work, and the out-of-order values leave the data distribution disorganized, which leads to the following problems:

①. The target page may already have been flushed to disk and evicted from the buffer pool, or may not have been loaded into it yet; InnoDB then has to find the target page and read it from disk into memory before inserting, which causes a lot of random I/O.

②. Because writes are out of order, InnoDB has to split pages frequently to make room for new rows; page splits move large amounts of data, and at least three pages must be modified per split.

③. Frequent page splits leave pages sparse and irregularly filled, which eventually leads to data fragmentation.
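The cost difference can be felt with a rough in-memory analogy (this is an ArrayList, not InnoDB; the names and sizes are illustrative): appending monotonically increasing keys always writes at the tail, while keeping random keys in sorted order forces mid-list inserts that shift existing elements, much as out-of-order primary keys force InnoDB to move rows and split pages.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

// Rough analogy for sequential vs random key insertion cost.
public class OrderedVsRandomInsert {

    // Sequential keys: every insert lands at the tail, like an
    // auto-increment primary key appending to the last index page.
    static long timeSortedAppend(int n) {
        List<Long> list = new ArrayList<>();
        long t0 = System.nanoTime();
        for (long i = 0; i < n; i++) {
            list.add(i);
        }
        return System.nanoTime() - t0;
    }

    // Random keys kept in sorted order: each insert lands mid-list and
    // shifts existing elements, like out-of-order keys forcing InnoDB
    // to move rows around.
    static long timeRandomSortedInsert(int n) {
        List<Long> list = new ArrayList<>();
        Random rnd = new Random(42);
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            long key = rnd.nextLong();
            int pos = Collections.binarySearch(list, key);
            if (pos < 0) pos = -pos - 1;  // insertion point for a missing key
            list.add(pos, key);
        }
        return System.nanoTime() - t0;
    }

    public static void main(String[] args) {
        int n = 50_000;
        System.out.println("sequential append:    " + timeSortedAppend(n) / 1_000_000 + " ms");
        System.out.println("random sorted insert: " + timeRandomSortedInsert(n) / 1_000_000 + " ms");
    }
}
```

The gap grows with the data volume, which mirrors why the UUID table's insert time deteriorates as the table gets larger.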

After loading random values (UUIDs or snowflake ids) into the clustered index (InnoDB's default index organization), you sometimes need to run OPTIMIZE TABLE to rebuild the table and optimize the page fill, which again takes some time.

Conclusion: with InnoDB, insert rows in increasing primary key order as much as possible, ideally with a monotonically increasing clustered key value.

2.3 Disadvantages of using an auto-increment id

So is an auto-increment id completely free of drawbacks? No, it comes with several problems of its own:

①. If someone crawls your site, they can read your business growth straight off the auto-increment ids in your database, which makes it easy to analyze your business situation.

②. Under a high-concurrency insert load, inserting by primary key causes obvious lock contention in InnoDB: the upper end of the primary key becomes a hot spot because every insert happens there, and concurrent inserts lead to gap-lock contention.

③. The Auto_Increment lock mechanism causes contention for the auto-increment lock, which incurs a certain performance cost.

Note: the auto-increment lock contention mentioned in ③ can be mitigated by tuning the innodb_autoinc_lock_mode setting.
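For reference, you can inspect the current lock mode like this (0 = traditional, 1 = consecutive, 2 = interleaved); the variable is not dynamic, so changing it requires editing the server configuration and restarting:

```sql
-- Check the current auto-increment lock mode
SHOW VARIABLES LIKE 'innodb_autoinc_lock_mode';

-- To change it, set it in my.cnf and restart the server:
-- [mysqld]
-- innodb_autoinc_lock_mode = 2
```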

3. Summary

This blog started from a question, built test tables, and used JdbcTemplate to measure the insert performance of different id generation strategies at large data volumes. It then analyzed the index structure and the pros and cons of each id mechanism inside MySQL, explaining in depth why UUIDs and random non-repeating ids lose insert performance.

In actual development, it is best to follow MySQL's official recommendation and use an auto-increment id. MySQL is vast and deep, and its internals hold many optimization points worth learning.

Source: cnblogs.com/wyq178/p/12548864.html