The first task was to export the data in the orders table to a file according to given conditions.

Here is the statement:

select * from orders where name = 'lilei' and create_time > '2020-01-01 00:00:00' limit start, count


Unexpectedly, after going online for a period of time, the production warning began, indicating that the SQL was slow and took more than 50 seconds to execute, which seriously affected services.
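For context, an export job like this typically loops over pages of that query. A minimal sketch of such a loop, using Python's built-in sqlite3 so it is self-contained (the real job would use a MySQL driver; the table and column names follow the article, and `export_orders` is a hypothetical helper name):

```python
import sqlite3

PAGE_SIZE = 20

def export_orders(conn, name, since, out_path):
    """Export matching orders page by page, using the article's
    LIMIT start, count pattern."""
    with open(out_path, "w") as out:
        start = 0
        while True:
            rows = conn.execute(
                "SELECT id, name, create_time FROM orders "
                "WHERE name = ? AND create_time > ? "
                "LIMIT ?, ?",  # the slow pattern: the offset grows every page
                (name, since, start, PAGE_SIZE),
            ).fetchall()
            if not rows:
                break
            for row in rows:
                out.write(",".join(map(str, row)) + "\n")
            start += PAGE_SIZE
```

Each iteration restarts the scan from row zero and skips `start` rows, which is exactly what makes late pages slow.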

The author quickly consulted a more experienced colleague for help, who soon found the cause, and together they ran the following experiment:

1. Test experiment

MySQL paging uses the LIMIT clause:

select * from product limit start, count


When the starting offset is small, the query has no performance problem. Let’s look at the execution time with starting offsets of 10, 100, 1000 and 10000 (20 rows per page):

select * from product limit 10, 20      -- 0.016 seconds
select * from product limit 100, 20     -- 0.016 seconds
select * from product limit 1000, 20    -- 0.047 seconds
select * from product limit 10000, 20   -- 0.094 seconds

We can see that the time grows as the starting offset increases, which shows that the performance of a paging LIMIT is strongly tied to the starting position. Let’s raise the starting offset to 400,000 (about half of the table’s records):

select * from product limit 400000, 20   -- 3.229 seconds

Now look at the time to fetch the last page:

select * from product limit 866613, 20   -- 37.44 seconds

Query times like this for the highest page numbers are obviously unacceptable.
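The trend above can be reproduced locally. This sketch uses Python's sqlite3 rather than MySQL (an assumption for portability; the exact timings will differ, but `LIMIT offset, count` degrades with the offset in both engines because all skipped rows are still scanned):

```python
import sqlite3
import time

# Throwaway in-memory table, big enough to make deep offsets measurable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO product (name) VALUES (?)",
                 (("item",) for _ in range(200_000)))

def page(start, count=20):
    """Fetch one page and measure how long the LIMIT query took."""
    t0 = time.perf_counter()
    rows = conn.execute("SELECT * FROM product LIMIT ?, ?",
                        (start, count)).fetchall()
    return rows, time.perf_counter() - t0

for start in (10, 1000, 100_000, 199_000):
    rows, dt = page(start)
    print(f"LIMIT {start},20 -> first id {rows[0][0]}, {dt:.4f}s")
```

Running this shows the elapsed time climbing with the offset even on a small local table.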

We can draw two conclusions from this:

The query time of the LIMIT statement is proportional to the position of the starting record.

MySQL’s LIMIT statement is convenient, but not suitable for direct use on tables with many records.

2. Performance optimization methods for LIMIT paging

2.1 Use a covering index to speed up paging queries

As we all know, if a query uses an index and selects only the indexed columns (a covering index), it will be fast.

This is because index lookups are highly optimized, and the data needed is already in the index itself, so there is no need to look up the row’s address and fetch the full row, which saves a lot of time.

In addition, MySQL also caches indexes, so a frequently used index benefits even more under high concurrency.

In our example, the id field is the primary key, so it carries the default primary-key index. Let’s query the last page of data again using a covering index, selecting only the id column:

select id from product limit 866613, 20


The query takes 0.2 seconds, roughly 100 times faster than the 37.44 seconds when selecting all columns.
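You can confirm that a query is served entirely from the index by inspecting the plan. In MySQL this shows up as "Using index" in the Extra column of EXPLAIN; the sketch below uses SQLite's EXPLAIN QUERY PLAN instead (an assumption for a self-contained demo). Note that in SQLite an INTEGER PRIMARY KEY is the table's rowid, so a secondary index on `name` is used here to show the covering effect:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE INDEX idx_name ON product (name)")

# Selecting only indexed columns lets the plan skip the table entirely:
# the plan reports "USING COVERING INDEX".
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT name FROM product ORDER BY name LIMIT 866613, 20"
).fetchall()
for row in plan:
    print(row[-1])
```

If the selected columns are not all in the index, the plan falls back to fetching each row from the table, which is the expensive part.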

So if we also want to query all the columns, there are two ways:

2.2 Using an id >= subquery

SELECT * FROM product
WHERE id >= (select id from product limit 866613, 1) limit 20


The query time is 0.2 seconds, which is a great leap forward.

2.3 Using a JOIN

SELECT * FROM product a
JOIN (select id from product limit 866613, 20) b ON a.id = b.id

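Both rewrites should return the same page as the plain LIMIT. The sketch below checks that with sqlite3 (an assumption standing in for MySQL). Note the article's queries omit ORDER BY; the result is deterministic here because SQLite scans a rowid table in id order, but in production you would add ORDER BY id for a well-defined page:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE product (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO product (name) VALUES (?)",
                 (("item",) for _ in range(1000)))

# The naive page: scans and discards the first 500 rows.
plain = conn.execute("SELECT * FROM product LIMIT 500, 20").fetchall()

# Rewrite 1: locate the page's first id via the index, then range-scan.
subquery = conn.execute(
    "SELECT * FROM product "
    "WHERE id >= (SELECT id FROM product LIMIT 500, 1) LIMIT 20").fetchall()

# Rewrite 2: collect the 20 page ids from the index, then join for full rows.
joined = conn.execute(
    "SELECT a.* FROM product a "
    "JOIN (SELECT id FROM product LIMIT 500, 20) b ON a.id = b.id").fetchall()

print(plain == subquery, sorted(joined) == sorted(plain))
```

The `id >=` form additionally assumes that ids in the result are contiguous; the JOIN form does not, which makes it the safer of the two when rows are ever deleted.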

Conclusion: a plain `limit 866613, 20` has to scan past all 866,613 preceding rows before returning the page, which is why it is so slow. With the covering-index rewrites, the index quickly locates the ids that satisfy the condition, and only the 20 needed rows are then fetched back from the table, so the query is much faster.

For more exciting content, please follow my official account “Programmer Cow”

My personal blog site: www.kuya123.com