Views

A view translates the query results into a new table (a virtual table) that can be queried on later.

The database stores only the view definitions, not the data corresponding to the view, which is still stored in the original base table. So once the data in the base table changes, the data queried from the view also changes;

1. Create or modify views

The syntax for creating a view is:

CREATE [OR REPLACE]
VIEW view_name [column_list...]
AS select_statement

Example:

# With OR REPLACE, if the view already exists it is replaced directly
# column_list gives aliases to the fields returned by the SELECT query
create OR REPLACE 
VIEW test_view(t_sellerid,t_name,t_nickname) 
as
SELECT sellerid,name,nickname FROM tb_seller;

The syntax for modifying a view is:

ALTER VIEW view_name [column_list...] AS select_statement

Example:

alter VIEW test_view(t_sellerid,t_name) 
as
SELECT sellerid,name FROM tb_seller;

Views can also be modified with the CREATE syntax, as long as the OR REPLACE option is added!

2. View the view

To query the definition of a VIEW, run the SHOW CREATE VIEW command. Since the view itself is also a table, the command used to query the view data is the same as the command used to query the table.

# View the definition of the view
show create view test_view;

# View the data of the view
SELECT * from test_view;

3. Delete the view

DROP VIEW [IF EXISTS] view_name [view_name...] 

Example: drop the view test_view:

drop VIEW if EXISTS test_view;

Differences between views and tables

Differences between views and tables:

  1. Views are created and dropped without affecting the table
  2. A view is essentially a saved SQL statement, a virtual table built from that statement's result set, whereas a table stores actual data
  3. View users can query or modify only the fields and data they can see
  4. The view is virtual and does not occupy disk space. Tables, on the other hand, exist and are stored in disk space

Why use a view instead of operating on the table directly?

  • Security: a view can hide the underlying table from users, who then do not even need to know the table structure; they can query or modify only the data the view exposes (see the sketch below).
  • Simplicity: for a very complex table, or a complex query that is needed frequently, a view can filter out most fields and pre-build the result set; users can then work directly against the view, which greatly improves their efficiency.
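As a rough illustration of the security point (the view, database, and account names here are hypothetical), a view can expose only selected columns of a base table, and an account can be granted access to the view alone:

-- Expose only the non-sensitive columns of an employee table emp(id, name, age, salary)
create or replace view v_emp_public
as
select id, name, age from emp;

-- Grant a (pre-existing) reporting account access to the view, but not to the base table
grant select on test.v_emp_public to 'report'@'%';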

Stored procedures and functions

Stored procedures and functions are a set of SQL statements that have been compiled in advance and stored in the database. Calling stored procedures and functions can simplify a lot of work of application developers, reduce data transfer between the database and the application server, and improve the efficiency of data processing.

The difference between stored procedures and functions is that functions must have a return value, while stored procedures do not.

Stored procedures

1. Create a stored procedure

CREATE PROCEDURE procedure_name ([parameter ...])
begin
	-- SQL statement
end ;

Example:

create procedure pro_test1()
begin
	select 'Hello Mysql' ;
end

2. Call the stored procedure

call procedure_name() ;	

3. View the stored procedure

-- Query the status of a stored procedure
show procedure status;

-- Query the definition of a stored procedure; a database can be specified. If omitted, the current session's database is used by default
show create procedure test.pro_test1 ;

4. Delete the stored procedure

DROP PROCEDURE [IF EXISTS] procedure_name;

5. Variables

Stored procedures are programmable, which means that you can use variables, expressions, and control structures to accomplish complex functions.

5.1. DECLARE

DECLARE defines a local variable whose scope is limited to the BEGIN ... END block.

DECLARE var_name type [DEFAULT value],...

Example:

 create procedure pro_test2() 
 begin 
 	declare num int default 5;
 	select num+ 10; 
 end

5.2. SET

Direct assignment uses SET, which can be either a constant or an expression. The syntax is as follows:

SET var_name = expr

Example:

CREATE  PROCEDURE pro_test3()
  BEGIN
  	DECLARE NAME VARCHAR(20);
  	SET NAME = 'MYSQL';
  	SELECT NAME ;
  END

You can also assign with SELECT ... INTO:

CREATE  PROCEDURE pro_test5()
BEGIN
	declare  countnum int;
	select count(*) into countnum from city;
	select countnum;
END

5.3. If condition judgment

The grammatical structure is as follows:

IF condition THEN
  statements
[ELSEIF condition THEN
  statements] ...
[ELSE
  statements]
END IF;

Requirement: based on a defined height variable, determine the body type for that height (below 170: average build; 170 to 180: standard build; above 180: tall)

create procedure pro_test6()
begin
  declare  height  int  default  175; 
  declare  description  varchar(50);
  
  if height >= 180 then
    set description = 'Tall and tall';
  elseif height >= 170 and height < 180 then
    set description = 'Standard body';
  else
    set description = 'Average body';
  end if;
  select description ;
end

5.4. Pass parameters

create procedure procedure_name([IN/OUT/INOUT] parameter_name parameter_type)

IN:    the parameter is an input, i.e. the value is passed in by the caller (this is the default)
OUT:   the parameter is an output, i.e. it can be used as a return value
INOUT: the parameter can be used for both input and output

IN (input) example. Requirement: determine the body type for the height passed in as a parameter.

create procedure pro_test5(in height int)
begin
    declare description varchar(50) default ' ';
  if height >= 180 then
    set description='Tall and tall';
  elseif height >= 170 and height < 180 then
    set description='Standard body';
  else
    set description='Average body';
  end if;
  select concat('height ', height , ', the corresponding body type is: ', description);
end

OUT (output) example. Requirement: based on the height passed in, return the corresponding body type through the output parameter.

create procedure pro_test5(in height int , out description varchar(100))
begin
  if height >= 180 then
    set description='Tall and tall';
  elseif height >= 170 and height < 180 then
    set description='Standard body';
  else
    set description='Average body';
  end if;
end

Call:

call pro_test5(168, @description);
select @description;

Tip:

  • @description: a variable whose name is prefixed with "@" is a user session variable; it stays valid for the whole session, which makes it similar to a global variable
  • @@sort_buffer_size: a variable whose name is prefixed with "@@" is a system variable

5.5. Case Structure

The grammatical structure is as follows:

CASE
  WHEN condition THEN statements
  [WHEN condition THEN statements] ...
  [ELSE statements]
END CASE;

Requirement: given a month, determine the quarter it belongs to

create procedure pro_test9(month int)
begin
  declare result varchar(20);
  case 
    when month >= 1 and month <= 3 then 
      set result = 'First quarter';
    when month >= 4 and month <= 6 then 
      set result = 'Second quarter';
    when month >= 7 and month <= 9 then 
      set result = 'Third quarter';
    when month >= 10 and month <= 12 then 
      set result = 'Fourth quarter';
  end case;
  select concat('The month you entered is: ', month , ', the quarter it belongs to is: ' , result) as content ;
end

5.6. While loop

Grammatical structure:

WHILE condition DO
  statements
END WHILE;

Requirement: compute the sum of the values from 1 to n

create procedure pro_test8(n int)
begin
  declare total int default 0;
  declare num int default 1;
  while num <= n do
    set total = total + num;
    set num = num + 1;
  end while;
  select total;
end

5.7. Repeat structure

REPEAT is a loop-control statement that exits when its condition is met. The difference from WHILE: WHILE keeps looping while its condition is true, whereas REPEAT keeps looping until its condition becomes true and then exits.

The grammatical structure is as follows:

REPEAT
  statements
UNTIL condition
END REPEAT;

Requirement: compute the sum of the values from 1 to n

create procedure pro_test10(n int)
begin
  declare total int default 0;
  
  repeat 
    set total = total + n;
    set n = n - 1;
    until n=0  
  end repeat;
  select total ;
end

5.8. Loop statement

LOOP implements a simple loop. The exit condition has to be expressed with other statements, usually a LEAVE statement. The syntax is as follows:

[begin_label:] LOOP
  statements
END LOOP [end_label]

If no statement inside the loop body exits it, the LOOP statement becomes a simple infinite loop.

5.9. Leave statement

LEAVE is used to exit a labeled flow-control construct, usually a BEGIN ... END block or a loop. Here is a simple example that uses LOOP and LEAVE to exit a loop:

CREATE PROCEDURE pro_test11(n int)
BEGIN
  declare total int default 0;
  
  ins: LOOP
    
    IF n <= 0 then
      leave ins;
    END IF;
    
    set total = total + n;
    set n = n - 1;
  	
  END LOOP ins;
  
  select total;
END

5.10. Cursors

A cursor is a data type used to store a query result set, which can be used in stored procedures and functions to iterate over the result set. The use of the cursor includes cursor declaration, OPEN, FETCH, and CLOSE. The syntax is as follows:

# Declare the cursor
DECLARE cursor_name CURSOR FOR select_statement ;

# OPEN the cursor
OPEN cursor_name ;

# FETCH from the cursor
FETCH cursor_name INTO var_name [, var_name] ... ;

# CLOSE the cursor
CLOSE cursor_name ;

Example:

Initialize script:

create table emp(
  id int(11) not null auto_increment ,
  name varchar(50) not null comment 'name',
  age int(11) comment 'age',
  salary int(11) comment 'salary',
  primary key(`id`)
)engine=innodb default charset=utf8 ;

insert into emp(id,name,age,salary) values
(null,'The Golden Lion',55,3800),
(null,'The Eagle Of the White Eyebrow',60,4000),
(null,'King Green Winged Bat',38,2800),
(null,'Dragon King in Purple',42,1800);
-- Query the data in the EMP table and obtain it row by row for display
create procedure pro_test11()
begin
  declare e_id int(11);
  declare e_name varchar(50);
  declare e_age int(11);
  declare e_salary int(11);
  declare emp_result cursor for select * from emp;
  
  open emp_result;
  
  fetch emp_result into e_id,e_name,e_age,e_salary;
  select concat('id=',e_id , ', name=',e_name, ', age=', e_age, ', the salary is: ',e_salary);
  
  fetch emp_result into e_id,e_name,e_age,e_salary;
  select concat('id=',e_id , ', name=',e_name, ', age=', e_age, ', the salary is: ',e_salary);
  
  fetch emp_result into e_id,e_name,e_age,e_salary;
  select concat('id=',e_id , ', name=',e_name, ', age=', e_age, ', the salary is: ',e_salary);
  
  fetch emp_result into e_id,e_name,e_age,e_salary;
  select concat('id=',e_id , ', name=',e_name, ', age=', e_age, ', the salary is: ',e_salary);
  
  # The next fetch raises "No data - zero rows fetched, selected, or processed",
  # because the cursor has already moved past the last row
  fetch emp_result into e_id,e_name,e_age,e_salary;
  select concat('id=',e_id , ', name=',e_name, ', age=', e_age, ', the salary is: ',e_salary);
  
  close emp_result;
end

Retrieve the data in the cursor via a loop structure:

create procedure pro_test12()
begin
  DECLARE id int(11);
  DECLARE name varchar(50);
  DECLARE age int(11);
  DECLARE salary int(11);
  DECLARE has_data int default 1;
  
  DECLARE emp_result CURSOR FOR select * from emp;
  # The following handler solves the "no data" error: when no more rows are found, set has_data to 0
  DECLARE EXIT HANDLER FOR NOT FOUND set has_data = 0;
  
  open emp_result;
  
  repeat
    fetch emp_result into id , name , age , salary;
    select concat('id ', id, ', the name is ', name, ', the age is ', age, ', and the salary is: ', salary);
    until has_data = 0
  end repeat;
  
  close emp_result;
end

Functions

The grammatical structure is as follows:

CREATE FUNCTION function_name([param type ...])
RETURNS type 
BEGIN
  ...
END;

Example: define a function that returns the total number of records satisfying a given condition.

create function count_city(countryId int)
returns int
begin
  declare cnum int ;
  select count(*) into cnum from city where country_id = countryId;
  return cnum;
end

Call:

select count_city(1);

If executing the function produces the following error:

This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

This happens because, with binary logging enabled, MySQL by default does not allow such functions to be created.

Solution 1: run SET GLOBAL log_bin_trust_function_creators = 1; (this setting is lost after a restart).

Solution 2: add log_bin_trust_function_creators=1 to my.cnf; this persists, but MySQL has to be restarted for it to take effect.

Triggers

A trigger is a database object associated with a table that triggers and executes a collection of SQL statements defined in the trigger before or after insert/ Update/DELETE. This feature of the trigger can be used on the database side to ensure data integrity, logging, data verification, and other operations.

The aliases OLD and NEW are used to refer to the changed record content in the trigger, similar to other databases. Trigger firing is currently only supported at row level, not statement level

Trigger type and the use of NEW and OLD:

  • INSERT trigger: NEW refers to the data that will be or has just been inserted
  • UPDATE trigger: OLD refers to the data before the change; NEW refers to the data that will be or has just been changed
  • DELETE trigger: OLD refers to the data that will be or has just been deleted

1. Create trigger

Grammatical structure:

create trigger trigger_name 

before/after insert/update/delete

on tbl_name 

[ for each row ]  -- Row level trigger

begin

	trigger_stmt ;

end;

Requirement: use triggers to log every change to the emp table, covering inserts, updates, and deletes;

First create a log table:

create table emp_logs(
  id int(11) not null auto_increment,
  operation varchar(20) not null comment 'Operation type, INSERT/UPDATE/DELETE',
  operate_time datetime not null comment 'Operation time',
  operate_id int(11) not null comment 'ID of operation table',
  operate_params varchar(500) comment 'Operation parameters',
  primary key(`id`)
)engine=innodb default charset=utf8;

Create an INSERT trigger to complete logging when inserting data:

create trigger emp_logs_insert_trigger
after insert 
on emp 
for each row 
begin
  insert into emp_logs (id,operation,operate_time,operate_id,operate_params) values(null,'insert',now(),new.id,concat('Insert (id:',new.id,', name:',new.name,', age:',new.age,', salary:',new.salary,')'));
end

Create an update trigger to complete logging when data is updated:

create trigger emp_logs_update_trigger
after update 
on emp 
for each row 
begin
  insert into emp_logs (id,operation,operate_time,operate_id,operate_params) values(null,'update',now(),new.id,concat('Before change (id:',old.id,', name:',old.name,', age:',old.age,', salary:',old.salary,'), after change (id:',new.id,', name:',new.name,', age:',new.age,', salary:',new.salary,')'));
end

Create a trigger for the DELETE row to complete the log record when deleting data:

create trigger emp_logs_delete_trigger
after delete 
on emp 
for each row 
begin
  insert into emp_logs (id,operation,operate_time,operate_id,operate_params) values(null,'delete',now(),old.id,concat('Before delete (id:',old.id,', name:',old.name,', age:',old.age,', salary:',old.salary,')'));
end

Testing:

insert into emp(id,name,age,salary) values(null,'Bright left make',30,3500);

update emp set age = 39 where id = 3;

delete from emp where id = 5;

2. Delete the trigger

Grammatical structure:

drop trigger [schema_name.]trigger_name

If schema_name is not specified, the current database defaults.

3. View triggers

You can run the SHOW TRIGGERS command to view the status and syntax of the TRIGGERS.

Grammatical structure:

show triggers;

Storage engines

1. Overview

A storage engine is the mechanism used to store data, build indexes, and update and query data. The storage engine is chosen per table, not per database, which is why a storage engine is also called a table type;

Databases such as Oracle and SQL Server have only one storage engine, whereas MySQL provides a pluggable storage engine architecture. MySQL therefore offers a variety of storage engines: you can pick the appropriate engine as needed, or even write your own;

To query the storage engines supported by the current database, run the show engines command:

If you do not specify a storage engine when creating a new table, the default storage engine is used. Before MySQL 5.5 the default was MyISAM; from 5.5 on it is InnoDB.

To view the current default storage engine: show variables like '%storage_engine%';
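For example (a sketch; the table name is made up), the engine can be chosen explicitly when creating a table, and show table status reports which engine a table uses:

-- Create a table that explicitly uses the MyISAM engine
create table engine_demo(
  id int primary key,
  name varchar(20)
) engine=myisam default charset=utf8;

-- Check which engine the table ended up with
show table status like 'engine_demo';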

Differences between InnoDB and MyISAM (focus on the following three points):

  • Transactions: InnoDB supports them; MyISAM does not
  • Locking mechanism: InnoDB uses row-level locks (better for high concurrency); MyISAM uses table-level locks
  • Foreign keys: InnoDB supports them; MyISAM does not

2. InnoDB

InnoDB storage engine is the default storage engine for Mysql, but compared to MyISAM storage engine, InnoDB writes less efficiently and takes up more disk space to retain data and indexes.

InnoDB storage engine is different from other storage engines:

Transaction control

InnoDB storage engine provides transaction security with commit, rollback, crash recovery capabilities:

start transaction;

# After this insert, querying from another session shows the row has not been added yet
insert into emp(id,name,age,salary) values(null,'Bright left make',30,3500);

commit;

This test shows that InnoDB supports transactions;

Foreign key constraints

InnoDB is the only built-in MySQL storage engine that supports foreign keys. When a foreign key is created, the parent table must have an index on the referenced column, and an index is automatically created on the child table's foreign key column if one does not already exist.
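A minimal sketch (table and constraint names are made up): the parent table's primary key serves as the referenced index, and the child table gets an index on its foreign key column automatically:

create table dept(
  id int primary key,
  name varchar(20)
) engine=innodb;

create table staff(
  id int primary key,
  name varchar(20),
  dept_id int,
  constraint fk_staff_dept foreign key (dept_id) references dept(id)
) engine=innodb;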

Storage

InnoDB stores tables and indexes in the following two ways:

  1. The table structure is stored in an .frm file; the data and indexes are stored in the tablespace defined by innodb_data_home_dir and innodb_data_file_path.

  2. Using multi-tablespace storage, the table structure is still stored in an .frm file, but the data and indexes of each table are kept in a separate .ibd file (see the sketch below).
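A sketch of checking and switching that behavior; innodb_file_per_table controls whether newly created tables get their own .ibd file (it is on by default in recent versions):

-- Check the current setting
show variables like 'innodb_file_per_table';

-- Have newly created tables use their own .ibd tablespace (requires sufficient privileges)
set global innodb_file_per_table = 1;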

MySQL left join, right join, inner join, full join

There are two tables, tcount_tbl and runoob_tbl, in the local database (the sample data shown in the original figures is omitted here).

Left join

LEFT JOIN reads all data in the LEFT table, even if there is no corresponding data in the right table.

Right join

RIGHT JOIN will read all data from the RIGHT table, even if the left table has no corresponding data.

Inner join

INNER JOIN returns only the rows where the join fields in the two tables match.

Full join

MySQL does not support FULL JOIN directly, but it can be emulated by combining a left outer join and a right outer join with UNION;
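A sketch of the four forms, with column names assumed from the classic runoob sample tables (both tables share a runoob_author column):

-- Left join: every row of runoob_tbl, plus the match from tcount_tbl (or NULL)
select a.runoob_id, a.runoob_author, b.runoob_count
from runoob_tbl a left join tcount_tbl b on a.runoob_author = b.runoob_author;

-- Right join: every row of tcount_tbl
select a.runoob_id, a.runoob_author, b.runoob_count
from runoob_tbl a right join tcount_tbl b on a.runoob_author = b.runoob_author;

-- Inner join: only the rows whose runoob_author matches in both tables
select a.runoob_id, a.runoob_author, b.runoob_count
from runoob_tbl a inner join tcount_tbl b on a.runoob_author = b.runoob_author;

-- Full join emulated with union (union also removes duplicate rows)
select a.runoob_id, a.runoob_author, b.runoob_count
from runoob_tbl a left join tcount_tbl b on a.runoob_author = b.runoob_author
union
select a.runoob_id, a.runoob_author, b.runoob_count
from runoob_tbl a right join tcount_tbl b on a.runoob_author = b.runoob_author;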

Transaction (ACID)

A transaction generally has the following four properties:

  1. Atomicity
  2. Consistency
  3. Isolation
  4. Durability

1. Atomicity

The atom is the smallest unit in the world and is indivisible.

A transaction is the logical unit of work of a database. Each operation contained in a transaction is either done or not done.

2. Consistency

A, I, and D in ACID are properties of the database itself and depend on the database implementation. C, however, also depends on the application layer, that is, on the developer. (Consistency here means the system goes from one correct state to another correct state. What is a correct state? A state that satisfies the predefined constraints. Saying that transactions have the C property in ACID means that consistency is achieved through the transaction's A, I, and D.)

Here is the banking example everyone likes to use:

A has to pay B 100 dollars, but A only has 90 dollars in the account, and there is a constraint on the balance column saying it cannot be less than 0. This transaction obviously has to fail, because 90 minus 100 is -10, which violates the constraint.

In this example, the data in our database satisfies the constraint before the payment, but if the transaction were allowed to succeed the data would break the constraint, so the transaction cannot succeed. Here we say the transaction provides the consistency guarantee. Now another example:

A has to pay 100 yuan to B, and only 90 yuan is in A’s account. There is no constraint in our account balance column. However, our business does not allow the account balance to be less than 0, so after the payment is completed, we will check the account balance of A and find that the balance is less than 0, so we carry out the rollback of the transaction.

In this example, if the transaction succeeds, it does not break the database constraint, but it does break our application layer constraint. The transaction rollback guarantees our constraint, so we can say that the transaction provides a consistency guarantee (ps: in fact, it is our application layer that uses the transaction rollback to ensure that our constraint is not broken). Finally, let’s do one more example

A has to pay 100 yuan to B, and only 90 yuan is in A’s account. There is no constraint in our account balance column. And the payment was made.

If you follow many people's understanding, you might ask: don't transactions guarantee consistency? Intuitively, why is the account balance allowed to go negative? But here, no constraint defined in our system is broken before or after the transaction executes, so the state remains correct the whole time.

So, to sum up. You can understand consistency as going from one correct state to another. ACID means that the transaction can guarantee the C process through the AID. C is the end and AID is the means.

3. Isolation

The execution of one transaction cannot be interfered with by other transactions. That is, the operations and data used within a transaction are isolated from other concurrent transactions, and the concurrent transactions cannot interfere with each other.

4. Durability

Also known as permanent. once a transaction is committed, its changes to the data in the database should be permanent. Subsequent actions or failures should not have any impact on the outcome of its execution.

Transaction isolation levels

CREATE TABLE `demo` (
  `id` int(11) NOT NULL,
  `name` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

INSERT INTO `demo_01`.`demo`(`id`, `name`) VALUES (1, 'a');
INSERT INTO `demo_01`.`demo`(`id`, `name`) VALUES (5, 'b');
INSERT INTO `demo_01`.`demo`(`id`, `name`) VALUES (7, 'c');
INSERT INTO `demo_01`.`demo`(`id`, `name`) VALUES (9, 'd');

Read uncommitted

If one transaction has already started writing data, the other transaction is not allowed to write at the same time, but other transactions are allowed to read the data on the row. This avoids update loss, but may cause dirty reads. That is, transaction B reads uncommitted data from transaction A.

But dirty reading is still possible:

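A sketch of a dirty read with the demo table created above: under read uncommitted, session B sees session A's uncommitted change:

-- Session A
set session transaction isolation level read uncommitted;
start transaction;
update demo set name = 'x' where id = 1;   -- modified but not committed

-- Session B
set session transaction isolation level read uncommitted;
select name from demo where id = 1;        -- returns 'x': a dirty read
-- If session A now rolls back, the value B read never really existed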

Read committed

If it is a read transaction, other transactions are allowed to read and write. If it is a write transaction, other transactions are forbidden to access the data in the row. This isolation level avoids dirty reads, but non-repeatable reads may occur. Transaction A reads the data, transaction B updates the data and commits the transaction, and by the time transaction A reads the data again, the data has changed.

However, non-repeatable reads are still possible (the same data read repeatedly within the transaction can change):

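A sketch of a non-repeatable read with the demo table under read committed:

-- Session A
set session transaction isolation level read committed;
start transaction;
select name from demo where id = 1;   -- returns 'a'

-- Session B
update demo set name = 'aa' where id = 1;
commit;

-- Session A, same transaction
select name from demo where id = 1;   -- now returns 'aa': the same read gives a different result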

Repeatable read

Repeatable read refers to multiple reads of the same data within a transaction, and the same data is read. Therefore, it is called repeatable read isolation level.

Phantom reads can still occur when using a current read (strictly speaking non-repeatable reads can occur there too, but MySQL officially treats this as intended behavior):

MySQL's official position: "This is not a bug but an intended and documented behavior."
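A sketch of a phantom read with the demo table under repeatable read: the snapshot read does not see B's new row, but a current read does:

-- Session A
set session transaction isolation level repeatable read;
start transaction;
select * from demo where id > 5;              -- snapshot read: returns rows 7 and 9

-- Session B
insert into demo(id, name) values(8, 'e');
commit;

-- Session A, same transaction
select * from demo where id > 5;              -- still only rows 7 and 9 (snapshot read)
select * from demo where id > 5 for update;   -- current read: row 8 appears, a phantom row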

Serializable

Provides strict transaction isolation, which requires serialized execution of transactions that can only be executed one after another, but not concurrently. Serialization is the highest transaction isolation level, the highest cost, the lowest performance, and is rarely used!

MySQL supports all four isolation levels above, and the default is Repeatable Read. Oracle supports only two levels, Serializable and Read Committed, and its default is Read Committed.

To view or change the isolation level of the current transaction in the MYSQL database:

# Set the transaction isolation level to repeatable read
set session transaction isolation level REPEATABLE READ;

# View the current transaction isolation level
show VARIABLES like 'transaction_isolation';

Why can one transaction read uncommitted data that has been modified by another transaction? Because the data is already updated in the MySQL table before the commit; the commit only confirms the change.

When the transaction isolation level is set to Read Uncommitted, one transaction (A) can read uncommitted data from another transaction (B).

Non-repeatable reads, phantom reads, and dirty reads

Dirty read:

A dirty read is when one transaction accesses data and makes changes to the data that have not yet been committed to the database, and then another transaction accesses the data and uses the data.

Unrepeatable read:

Read the same data multiple times within a transaction. Before the transaction ends, another transaction accesses the same data and modifies it. Then, between the two reads in the first transaction, the data read by the first transaction may be different due to the second transaction’s modification.

Phantom reads:

When a transaction queries the same scope twice, the last query sees rows not seen by the previous query

Note:

  1. At the repeatable read isolation level, ordinary queries are snapshot reads and do not see rows inserted by other transactions. Phantom reads therefore occur only under current reads.
  2. A phantom read refers specifically to newly inserted rows being read. Rows that were updated by another transaction so that they now match the condition, and are then read, do not count as phantom reads.

Snapshot read and current read

# Current read:
select ... lock in share mode;
select ... for update;
update, insert, delete

# Snapshot read:
select ...

Snapshot read

By default, InnoDB performs a snapshot read for a plain SELECT: it takes a snapshot (a "photo") of the data, and later SELECTs in the transaction return that same photo. Whatever other transactions commit in the meantime does not affect it, which is why reads are repeatable.

So when is this photo taken? Not when the transaction starts, but at the first SELECT. That is, if A starts a transaction and does nothing, then B inserts a row and commits, and only then does A run its first SELECT, that SELECT will return the row B added. Any transaction committed after that no longer matters, because the photo has already been taken and will not be regenerated; it is the one referenced from then on!
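A sketch of that timing with the demo table, assuming the default repeatable read level (the inserted values are made up):

-- Session A
start transaction;                              -- no select yet, so no snapshot yet

-- Session B
insert into demo(id, name) values(3, 'snap');
commit;

-- Session A: the first select creates the snapshot, so B's already-committed row IS visible
select * from demo;

-- Session B
insert into demo(id, name) values(4, 'after');
commit;

-- Session A: the snapshot already exists, so this later insert is NOT visible
select * from demo;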

Current read

The current read will read the latest record, which means you can see other transactions that haven’t committed yet.

Indexes

1. What are indexes

An index, also called a "key" in MySQL (primary key, unique key, and index key), is a data structure the storage engine uses to find records quickly. Indexes are critical for good performance; as the amount of data in a table grows they matter more and more, reducing I/O and speeding up queries. The three kinds of keys differ in their constraints: a primary key must be unique and not null, a unique key must be unique but may contain null, and an ordinary index key has no uniqueness constraint.

An index is like the phonetic table at the front of a dictionary: without it, looking up a word means flipping through hundreds of pages one by one.

Once an index has been created on a table, queries should first look it up and then use the located entries to find the actual data rows.

Example:

SELECT * FROM Employee WHERE Employee_Name = 'Jesus'

Once we run this query, what exactly happens in the process of finding an employee named Jesus? The database has to go through every row in the Employee table and determine if the Employee’s name (Employee_Name) is’ Jesus’. Since we want to get the information of every employee named Jesus, we cannot stop the query after we find the first qualifying row, because there may be other qualifying rows. So, you have to search line by line until the last row, which means the database has to check thousands of rows to find the employee with the name Jesus. This is called a full table scan.
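A sketch of the fix (the index name is made up): once Employee_Name is indexed, MySQL can locate the matching rows through the index instead of scanning the whole table:

CREATE INDEX idx_employee_name ON Employee(Employee_Name);

-- With the index in place, the EXPLAIN type typically changes from ALL (full table scan) to ref
EXPLAIN SELECT * FROM Employee WHERE Employee_Name = 'Jesus';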

2. Classification of indexes

1) Single-column index: an index that contains only one column; a table may have multiple single-column indexes

2) Unique index: The value of the index column must be unique, but empty values are allowed

3) Composite index: a single index contains multiple columns

3. Clustered index and common index

InnoDB has two types of indexes:

  1. Clustered Index: Data is stored in the same physical order as indexes. Leaf nodes store rows. By default, data is Clustered using a primary key.
  2. Non-clustered index: every index on the table other than the clustered index; these are also called secondary indexes.

So, for clustered indexes, when you create a primary key, you automatically create a clustered index for the primary key, and the clustered index will exist no matter what!

  • Normal: indicates a common index.
  • Unique: indicates that the index is unique and cannot be repeated. If the information in this field is guaranteed not to be repeated, for example, if the ID number is used as the index, you can set it to UNIQUE.
  • Full Text: an index for full-text search. FULLTEXT works best when searching long articles; for short text of one or two lines, a normal INDEX is enough.

In summary, the type of index is determined by the characteristics of the field content of the index, normally normal is the most common.

4. Covering indexes

If an index contains (covers) the values of all the fields a query needs, it is called a "covering index": the query only has to scan the index and never needs to go back to the table.

Back-to-table lookup: a secondary index stores the primary key value in its leaf nodes, so a query first locates the primary key in the secondary index and then uses the clustered index to locate the row. This is called a back-to-table query (locate the key, then locate the row), and it performs worse than scanning a single index tree.
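A sketch using the tb_seller table that appears later in this article (the index name is made up): when the composite index covers every queried column, Extra shows Using index and no back-to-table lookup is needed:

create index idx_seller_name_status on tb_seller(name, status);

-- Covered by the index: only the index tree is scanned (Extra: Using index)
explain select name, status from tb_seller where name = 'Xiaomi Technology';

-- Not covered: address must be fetched from the row, so a back-to-table lookup happens
explain select name, status, address from tb_seller where name = 'Xiaomi Technology';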

5. Use the index

5.1 Creating an Index

CREATE [UNIQUE|FULLTEXT|SPATIAL] INDEX index_name [USING index_type] ON tbl_name(index_col_name,...);

-- Example:
CREATE INDEX index_city_name ON city(city_name);

5.2 Viewing indexes

show index from table_name;

-- Example:
show INDEX from city;

5.3 Deleting an Index

DROP INDEX index_name ON tbl_name;

-- Example:
DROP INDEX index_city_name ON city;

5.4 The ALTER command

1). alter table tb_name add primary key(column_list);
    Adds a primary key: the index values must be unique and cannot be NULL.

2). alter table tb_name add unique index_name(column_list);
    Creates a unique index: the values must be unique (except NULL, which may appear more than once).

3). alter table tb_name add index index_name(column_list);
    Adds a normal index: values may appear more than once.

4). alter table tb_name add fulltext index_name(column_list);
    Creates a FULLTEXT index, used for full-text search.

6. Avoid index failure

1). Full value match, specifying a value for all columns in the index.

In this case, the index takes effect and the execution efficiency is high.

explain select * from tb_seller where name='Xiaomi Technology' and status='1' and address='Beijing';

2). Leftmost prefix rule

If you index multiple columns (a composite index), follow the leftmost prefix rule: the query must start from the leftmost column of the index and must not skip columns in the index.

Queries that satisfy the leftmost prefix rule use the index:

explain select * from tb_seller where name='Xiaomi Technology';
explain select * from tb_seller where name='Xiaomi Technology' and status='1';
explain select * from tb_seller where name='Xiaomi Technology' and status='1' and address='Beijing';

Queries that violate the leftmost prefix rule do not use the index:

explain select * from tb_seller where status='1';
explain select * from tb_seller where status='1' and address='Beijing';

If the leftmost column is present but a middle column is skipped, only the index on the leftmost column takes effect:

explain select * from tb_seller where name='Xiaomi Technology' and address='Beijing';

The leftmost prefix rule is like climbing stairs: you cannot reach the second or third floor without climbing the first, and once on the first floor you still cannot reach the third without passing the second!

3). Columns to the right of a range condition cannot use the index.

# The address condition cannot use the index
explain select * from tb_seller where name='Xiaomi Technology' and status>'1' and address='Beijing';

The name and status conditions use the index, but because status is a range condition, the address condition cannot use the index.

4). Do not apply operations or functions to an indexed column, or the index will not be used.

explain select * from tb_seller where SUBSTR(name,3,2)='technology';

5). A string value written without single quotes causes the index to fail.

# Both name and status use the index
explain select * from tb_seller where name='Xiaomi Technology' and status='1';

# Only name uses the index, because status=1 is written without quotes
explain select * from tb_seller where name='Xiaomi Technology' and status=1;

The MySQL query optimizer automatically casts a string without a single quotation mark, causing the index to fail.

6). Use covering indexes and avoid select *

Queries that access only the index (the index columns contain all the query columns).

explain select * from tb_seller where name='Xiaomi Technology' and status='1' and address='Beijing';
explain select name from tb_seller where name='Xiaomi Technology' and status='1' and address='Beijing';
explain select name,status from tb_seller where name='Xiaomi Technology' and status='1' and address='Beijing';
explain select name,status,address from tb_seller where name='Xiaomi Technology' and status='1' and address='Beijing';

If the column is queried and exceeds the index column, performance will also be degraded.

# The password column is not in the index, so the query must also go back to the table
explain select name,status,address,password from tb_seller where name='Xiaomi Technology' and status='1' and address='Beijing';

TIP:

    using index: appears when a covering index is used
    using where: an index is used for the lookup, but the needed data must be fetched back from the table
    using index condition: the lookup uses an index, but the data must be queried back from the table
    using index ; using where: the lookup uses an index, and the needed data is all found in the index columns, so no back-to-table query is required

7). If the column before OR has an index but the column after it does not, none of the indexes involved will be used.

Example: name is an indexed column while createtime is not, and they are connected with OR, so the index is not used:

explain select * from tb_seller where name='Dark Horse Programmer' or createtime = '2088-01-01 12:00:00';

8). A LIKE pattern that begins with '%' makes the index unusable.

If only the tail of the pattern is fuzzy, the index is still used; if the head of the pattern is fuzzy, the index is not used.

# Index not used
explain select * from tb_seller where name like '%Xiaomi Technology';

# Index used
explain select * from tb_seller where name like 'Xiaomi Technology%';

# Index not used
explain select * from tb_seller where name like '%Xiaomi Technology%';

Workaround: use a covering index

explain select name from tb_seller where name like '%Xiaomi Technology';
explain select name from tb_seller where name like '%Xiaomi Technology%';

9). If MySQL estimates that using the index would be slower than a full table scan, it does not use the index.

# Create an index on address
CREATE INDEX index_address ON tb_seller(address);

# The address index is not used
explain select * from tb_seller where address = 'Beijing';

# The address index is used
explain select * from tb_seller where address = "Xi'an city";

The reason (illustrated by a figure in the original article, omitted here) is selectivity: almost every row matches 'Beijing', so a full scan is cheaper, while only a few rows match "Xi'an city", so the index helps.

10). IS NULL and IS NOT NULL sometimes cannot use the index (for the same reason as point 9).

11). IN uses the index; NOT IN does not.

# Index used
explain select * from tb_seller where name IN ('Xiaomi Technology');

# Index not used
explain select * from tb_seller where name NOT IN ('Xiaomi Technology');

12). Single-column indexes and composite indexes.

Use composite indexes rather than single-column indexes.

create index idx_name_sta_address on tb_seller(name, status, address);

-- Creating this composite index is equivalent to creating three indexes:
--   name
--   name + status
--   name + status + address

Create a single-column index

create index idx_seller_name on tb_seller(name);
create index idx_seller_status on tb_seller(status);
create index idx_seller_address on tb_seller(address);

With separate single-column indexes, the database chooses only the best index (the most selective one) to use, not all of them.

7. View index usage

show status like 'Handler_read%';	
show global status like 'Handler_read%';

The meanings of each value are as follows:

Handler_read_first: the number of times the first entry in an index was read. A high value suggests the server is doing many full index scans (lower is better).
Handler_read_key: the number of rows read through an index value. A low value indicates indexes are rarely used and bring little benefit (higher is better).
Handler_read_next: the number of requests to read the next row in key order. This value increases when indexed columns are queried with a range constraint or an index scan.
Handler_read_prev: the number of requests to read the previous row in key order. This read method is mainly used to optimize ORDER BY ... DESC.
Handler_read_rnd: the number of requests to read a row at a fixed position. This value is high if many queries need to sort their results, require MySQL to scan entire tables, or have joins that do not use keys properly. A high value indicates poor performance and that indexes should be added.
Handler_read_rnd_next: the number of requests to read the next row in the data file. A high value means many table scans are being performed, which usually indicates that tables are not properly indexed or queries do not make use of the indexes.

Optimizing SQL steps

When facing SQL performance problems, we should analyze the system step by step so that the problem SQL can be located quickly and the problem solved as soon as possible.

1. Check the SQL execution frequency

After the MySQL client connects successfully, the show [session|global] status command provides server status information.

  • Session: Statistics on current connections (default)
  • Global: statistics since the database was last started

For example, the following command displays the values of all statistical parameters in the current session:

show status;
show status like 'Com_______';
show status like 'Innodb_rows_%';

Com_xxx indicates the number of times each XXX statement is executed. We usually care about the following statistical parameters:

  • Com_select: number of SELECT operations executed
  • Com_insert: number of INSERT operations executed; a batch insert counts only once
  • Com_update: number of UPDATE operations executed
  • Com_delete: number of DELETE operations executed
  • Innodb_rows_read: number of rows returned by SELECT queries
  • Innodb_rows_inserted: number of rows inserted by INSERT operations
  • Innodb_rows_updated: number of rows updated by UPDATE operations
  • Innodb_rows_deleted: number of rows deleted by DELETE operations
  • Connections: number of attempts to connect to the MySQL server
  • Uptime: server uptime
  • Slow_queries: number of slow queries

Note:

  • Com_*** : These parameters are accumulated for all storage engine table operations
  • Innodb_*** : These parameters are only for InnoDB storage engine, and the algorithm of accumulation is slightly different

2. Locate inefficient SQL execution

You can locate inefficient SQL statements in either of the following ways:

  1. Slow query log: locate inefficient SQL statements with the slow query log. When mysqld is started with the --log-slow-queries[=file_name] option, it writes a log file containing all SQL statements whose execution time exceeds long_query_time seconds (see the sketch after this list).
  2. show processlist: view the threads currently running inside MySQL, including each thread's state and whether tables are locked; it shows SQL execution in real time and helps optimize lock-related problems.
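For point 1, a minimal sketch of turning on the slow query log at runtime (the 2-second threshold is just an example; --log-slow-queries is the legacy option name, and newer versions use the slow_query_log variable shown here):

-- Enable the slow query log for the whole server
set global slow_query_log = 1;

-- Log every statement that runs longer than 2 seconds
set global long_query_time = 2;

-- Check where the log file is written
show variables like 'slow_query_log_file';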

The meanings of the show processlist output fields are as follows:

1) id column: the connection_id assigned by the system when the user connects to MySQL; it can be viewed with connection_id().
3) host column: the IP address and port the statement was sent from; useful for tracing the user who issued a problem statement.
4) db column: the database the process is currently connected to.
6) time column: how long the current state has lasted, in seconds.
7) state column: the state of the SQL statement on this connection, an important column. state describes one step within the execution of the statement; a query, for example, may need to go through states such as copying to tmp table, sorting result, and sending data before it completes.
8) info column: shows the SQL statement itself, an important basis for identifying problem statements.

3. Explain the execution plan

After inefficient SQL statements are queried in the preceding steps, you can use the EXPLAIN or DESC command to obtain information about how MySQL executes SELECT statements, including how tables are joined and the join order during SELECT statement execution.

Query execution plan of SQL statement:

explain  select * from tb_item where id = 1;

The meanings of the corresponding fields are as follows:

1) id column: the sequence number of the SELECT within the query.
2) select_type column: the type of SELECT. Common values include SIMPLE (a simple query with no table joins or subqueries), PRIMARY (the main, i.e. outermost, query), UNION (the second or later statement in a UNION), and SUBQUERY (the first SELECT in a subquery).
3) table column: the table this row of output refers to.
4) type column: the join (access) type of the table. From best to worst: system -> const -> eq_ref -> ref -> ref_or_null -> index_merge -> index_subquery -> range -> index -> all.
5) possible_keys column: the index or indexes that could be used for this query.
6) key column: the index actually used; NULL means no index was used.
7) key_len column: the length of the index used (the maximum possible length of the index fields, not the actual length used; shorter is better, provided accuracy is not lost).
8) rows column: the estimated number of rows that must be scanned.
9) extra column: additional notes and descriptions about the execution.

Here are some of the more difficult fields to understand:

3.1 Id of explain

There are three types of ID cases:

  1. Same ID: Indicates that the table is loaded from top to bottom
  2. Different IDS: The larger the ID is, the higher the priority is
  3. The ids may be partly the same and partly different: those with the same id form one group and execute from top to bottom; across all groups, the larger the id value, the higher the priority and the earlier it executes

3.2. Explain select_type

Common select_type values are listed below:

  • SIMPLE: a simple SELECT query that contains no subqueries and no UNION
  • PRIMARY: if the query contains any complex subquery, the outermost query is marked PRIMARY
  • SUBQUERY: a subquery contained in the SELECT list or the WHERE clause
  • DERIVED: a subquery contained in the FROM clause is marked DERIVED; MySQL executes these subqueries recursively and puts the results in a temporary table
  • UNION: the second or later SELECT after a UNION is marked UNION; if the UNION appears in a subquery of the FROM clause, the outer SELECT is marked DERIVED
  • UNION RESULT: the SELECT that retrieves the result from the UNION temporary table

3.3. Explain type

Type indicates the access type and is an important indicator. The value can be:

  • NULL: MySQL does not access any table or index and returns the result directly
  • system: the table has only one row (a system table); a special case of const that rarely appears
  • const: the row is found by comparing a primary key or unique index with a constant; at most one row matches, so it is very fast. Placing the primary key in the WHERE list lets MySQL convert the query into a constant
  • eq_ref: similar to ref, but the join uses a unique index or the primary key, so the lookup returns at most one record; common for primary key or unique index scans in joins
  • ref: a non-unique index scan that returns all rows matching a single value; essentially an index access that may return multiple rows
  • range: retrieves only rows in a given range, using an index to select them; appears with BETWEEN, <, >, IN, and similar conditions in the WHERE clause
  • index: differs from ALL in that only the index tree is traversed, which is usually faster than ALL, which traverses the data file
  • all: the whole table is scanned to find matching rows

NULL > system > const > eq_ref > ref > range > index > ALL

3.4 explain extra

Additional execution plan information is displayed in this column

  • using filesort: MySQL sorts the data with an external sort instead of reading rows in index order; known as "file sort", it is inefficient
  • using temporary: a temporary table is used to hold intermediate results, e.g. when MySQL sorts query results; common with order by and group by; inefficient
  • using index: the SELECT uses a covering index and avoids accessing the table's data rows; efficient

4. Analyzing SQL with show profile

Show Profiles can help us understand where time is going when doing SQL optimizations.

# Check whether the current MySQL supports profiling
SELECT @@have_profiling;

# Check whether profiling is currently enabled (off by default)
SELECT @@profiling;

# Enable the profiling switch
set profiling=1; 

Profiles allow us to better understand the process of SQL execution. For example, we can perform the following series of operations

show databases;

use demo_01;

show tables;

select * from tb_item where id < 5;

select count(*) from tb_item;

Then run the show profiles command to see how long the SQL statement takes to execute:

To see where the time went inside a particular statement, run show profile for query query_id (the query_id comes from the show profiles output):

Note: the Sending data state means the MySQL thread has started accessing data rows and returning results to the client, not merely sending the final result set back. Because the thread in this state often has to do a lot of disk reads, it is frequently the longest-running state of the whole query.

After identifying the thread state that consumes the most time, you can drill down by choosing a detail type such as all, cpu, block io, context switches, or page faults to see which resources MySQL spends the time on. For example, you can view the CPU time of each step:

Meaning of each field:

  • Status: the execution state of the SQL statement
  • Duration: the time spent in each step of the execution
  • CPU_user: user CPU time consumed
  • CPU_system: system CPU time consumed

5. Analyzing the optimizer execution plan with trace

You can learn more about why the optimizer chose Plan A over Plan B through the trace file.

# Turn on the trace and set the output format to JSON
SET optimizer_trace="enabled=on",end_markers_in_json=on;

# Set the maximum memory the trace may use, so the default is not too small to show the complete trace
set optimizer_trace_max_mem_size=1000000;

# Execute the statement to analyze
select * from tb_item WHERE id<5;

# Finally, check information_schema.OPTIMIZER_TRACE to see how MySQL executed the SQL
select * from information_schema.OPTIMIZER_TRACE; 

The query statement trace is as follows:

{
    "steps": [{"join_preparation": {
                "select#": 1."steps": [{"expanded_query": "/* select#1 */ select `tb_item`.`id` AS `id`,`tb_item`.`title` AS `title`,`tb_item`.`price` AS `price`,`tb_item`.`num` AS `num`,`tb_item`.`categoryid` AS `categoryid`,`tb_item`.`status` AS `status`,`tb_item`.`sellerid` AS `sellerid`,`tb_item`.`createtime` AS `createtime`,`tb_item`.`updatetime` AS `updatetime` from `tb_item` where (`tb_item`.`id` < 5)"}}, {"join_optimization": {
                "select#": 1."steps": [{"condition_processing": {
                            "condition": "WHERE"."original_condition": "(`tb_item`.`id` < 5)"."steps": [{"transformation": "equality_propagation"."resulting_condition": "(`tb_item`.`id` < 5)"
                                },
                                {
                                    "transformation": "constant_propagation"."resulting_condition": "(`tb_item`.`id` < 5)"
                                },
                                {
                                    "transformation": "trivial_condition_removal"."resulting_condition": "(`tb_item`.`id` < 5)"}}, {"substitute_generated_columns": {}}, {"table_dependencies": [{"table": "`tb_item`"."row_may_be_null": false."map_bit": 0."depends_on_map_bits"[]}]}, {"ref_optimizer_key_uses": []}, {"rows_estimation": [{"table": "`tb_item`"."range_analysis": {
                                    "table_scan": {
                                        "rows": 656482."cost": 68907
                                    },
                                    "potential_range_indexes": [{"index": "PRIMARY"."usable": true."key_parts": [
                                                "id"]}],"setup_range_conditions": []."group_index_range": {
                                        "chosen": false."cause": "not_group_by_or_distinct"
                                    },
                                    "skip_scan_range": {
                                        "potential_skip_scan_indexes": [{"index": "PRIMARY"."usable": false."cause": "query_references_nonkey_column"}},"analyzing_range_alternatives": {
                                        "range_scan_alternatives": [{"index": "PRIMARY"."ranges": [
                                                    "id < 5"]."index_dives_for_eq_ranges": true."rowid_ordered": true."using_mrr": false."index_only": false."rows": 4."cost": 1.3666."chosen": true}]."analyzing_roworder_intersect": {
                                            "usable": false."cause": "too_few_roworder_scans"}},"chosen_range_access_summary": {
                                        "range_access_plan": {
                                            "type": "range_scan"."index": "PRIMARY"."rows": 4."ranges": [
                                                "id < 5"]},"rows_for_plan": 4."cost_for_plan": 1.3666."chosen": true}}}]}, {"considered_execution_plans": [{"plan_prefix": []."table": "`tb_item`"."best_access_path": {
                                    "considered_access_paths": [{"rows_to_scan": 4."access_type": "range"."range_details": {
                                                "used_index": "PRIMARY"
                                            },
                                            "resulting_rows": 4."cost": 1.7666."chosen": true}},"condition_filtering_pct": 100."rows_for_plan": 4."cost_for_plan": 1.7666."chosen": true}]}, {"attaching_conditions_to_tables": {
                            "original_condition": "(`tb_item`.`id` < 5)"."attached_conditions_computation": []."attached_conditions_summary": [{"table": "`tb_item`"."attached": "(`tb_item`.`id` < 5)"}}, {"finalizing_table_conditions": [{"table": "`tb_item`"."original_table_condition": "(`tb_item`.`id` < 5)"."final_table_condition ": "(`tb_item`.`id` < 5)"}]}, {"refine_plan": [{"table": "`tb_item`"}}}, {"join_execution": {
                "select#": 1."steps": []}}]}Copy the code

SQL optimization

1. Insert data in large quantities

When using the load command to import data, appropriate settings can improve the import efficiency. For InnoDB tables, there are several ways to improve it:

1) Primary key insertion in sequence

Since InnoDB tables are stored in primary key order, arranging the imported data in primary key order noticeably improves import efficiency. If an InnoDB table has no primary key, the system automatically creates an internal column as the primary key, so whenever you can define a primary key for the table, take advantage of this to speed up the import.

2) Disable the uniqueness check

Before importing data, run SET UNIQUE_CHECKS=0 to disable the uniqueness check. After importing data, run SET UNIQUE_CHECKS=1 to restore the uniqueness check. This improves the import efficiency.

3) Commit the transaction manually

If the application uses automatic commit, you are advised to run SET AUTOCOMMIT=0 to disable automatic commit before the import and run SET AUTOCOMMIT=1 to enable automatic commit after the import to improve the import efficiency.
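Putting the three points together, a sketch of a bulk import (the CSV path and its contents are made up; local_infile must be enabled for LOAD DATA LOCAL):

-- Disable checks before the import
SET UNIQUE_CHECKS=0;
SET AUTOCOMMIT=0;

-- Import a file whose rows are already sorted by primary key
load data local infile '/tmp/emp_sorted.csv'
into table emp
fields terminated by ',' lines terminated by '\n';

-- Commit and restore the settings afterwards
COMMIT;
SET UNIQUE_CHECKS=1;
SET AUTOCOMMIT=1;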

2. Optimize insert statements

There are several optimizations to consider when performing insert operations on data.

If you need to insert many rows into a table, try to use a single INSERT statement with multiple VALUES lists. This greatly reduces the cost of connecting to and disconnecting from the database and is much faster than executing separate single-row INSERT statements;

Example:

# Original approach:
insert into tb_test values(1,'Tom');
insert into tb_test values(2,'Cat');
insert into tb_test values(3,'Jerry');

# Optimized approach:
insert into tb_test values(1,'Tom'),(2,'Cat'),(3,'Jerry');

Insert the data inside a transaction:

start transaction;
insert into tb_test values(1,'Tom');
insert into tb_test values(2,'Cat');
insert into tb_test values(3,'Jerry');
commit;

Insert the data in primary key order:

# Original approach:
insert into tb_test values(4,'Tim');
insert into tb_test values(1,'Tom');
insert into tb_test values(3,'Jerry');
insert into tb_test values(5,'Rose');
insert into tb_test values(2,'Cat');

# Optimized approach:
insert into tb_test values(1,'Tom');
insert into tb_test values(2,'Cat');
insert into tb_test values(3,'Jerry');
insert into tb_test values(4,'Tim');
insert into tb_test values(5,'Rose');

Optimize the Order BY statement

There are two types of order BY:

  1. Sorting the returned data, commonly known as filesort: any sort that does not return ordered results directly from an index is a filesort
  2. Returning ordered data directly by scanning an ordered index, shown as using index: no additional sorting is needed, so it is efficient

The optimization of Filesort

By creating appropriate indexes, you can reduce the number of Filesort cases, but in some cases, conditions cannot make Filesort cases disappear, so you need to speed up the sorting of Filesort cases. MySQL has two sorting algorithms for Filesort:

  1. Two-scan algorithm: Before MySQL4.1, use this sort (I won’t go into depth here).
  2. One scan algorithm: all the fields that meet the condition are retrieved at one time, and the result set is directly output after sorting in the sort buffer. Sorting costs a lot of memory, but sorting efficiency is higher than two-scan algorithm.

MySQL decides between the two algorithms by comparing the system variable max_length_for_sort_data with the total size of the fields fetched by the query: if max_length_for_sort_data is larger, the second, optimized algorithm is used; otherwise the first one is used.

You can raise the sort_buffer_size and max_length_for_sort_data system variables appropriately to enlarge the sort buffer and improve sorting efficiency.

show VARIABLES LIKE 'max_length_for_sort_data';
show VARIABLES like 'sort_buffer_size';
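A sketch of raising the two variables for the current session (the sizes are arbitrary examples):

set session sort_buffer_size = 1048576;           -- 1 MB sort buffer
set session max_length_for_sort_data = 4096;      -- allow wider rows to use the one-scan algorithm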

Optimize the group by statement

Because GROUP BY actually does the same sort operation, and compared to ORDER BY, GROUP BY is mostly just a GROUP operation after sorting. Of course, if other aggregate functions are used when grouping, then some aggregate function calculations are required. So, in the implementation of GROUP BY, just like ORDER BY, indexes can be used.

If the query contains group BY but the user wants to avoid the cost of sorting the result, order by NULL can be executed to disallow sorting. As follows:

explain select age,count(*) from emp group by age;

After optimization:

explain select age,count(*) from emp group by age order by null;

As you can see from the example above, the first SQL statement requires “filesort”, while the second SQL statement does not require “filesort” due to order by NULL. As mentioned above, filesort is very time-consuming.

Optimize nested queries

Using subqueries allows you to perform many logically multi-step SQL operations at once, avoid transaction or table locking, and write easily. However, in some cases, subqueries can be replaced by more efficient joins.

For example, find all user information with roles:

 explain select * from t_user where id in (select user_id from user_role );

After the optimization:

explain select * from t_user u , user_role ur where u.id = ur.user_id;

Join queries are more efficient because MySQL does not need to create temporary tables in memory to perform a logically two-step query.

Optimizing OR conditions

For query clauses that contain OR, if indexes are to be used, each condition column between OR must use an index, and compound indexes cannot be used. If there is no index, you should consider adding an index.

Example:

explain select * from emp where id = 1 or age = 30;

It is recommended to use union instead of or;
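A sketch of the suggested rewrite, assuming an index exists on age: each branch of the union can use its own index on its column.

explain
select * from emp where id = 1
union
select * from emp where age = 30;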

Optimizing paging queries

The performance of paging queries can be improved by creating a covering index. A common and very painful case is limit 200000,10: MySQL has to sort the first 200010 records just to return rows 200000 to 200010, discarding everything else, so the cost of the query and the sort is very high.

Optimization idea 1: Complete sorting and paging on the index, and then associate the primary key back to the original table to query the required content of other columns.

EXPLAIN SELECT * from tb_item t,(SELECT id from tb_item ORDER BY id LIMIT 200000,10) a WHERE a.id = t.id;

Optimization idea 2: This scheme is applicable to the primary key increment table, can transform the Limit query into a query of a certain location.

EXPLAIN SELECT * from tb_item t WHERE id > 200000 LIMIT 10;

Using SQL hints

SQL hints are an important means of optimizing the database: simply put, you add explicit hints to a SQL statement to influence how it is executed.

  • USE INDEX: Add USE INDEX to the end of the table name in the query statement to provide a list of indexes that you want MySQL to refer to.
  • IGNORE INDEX: if you simply want MySQL to IGNORE one or more indexes, use IGNORE INDEX.
  • FORCE INDEX: To FORCE MySQL to use a specific INDEX, you can use FORCE INDEX in the query.

Example:

EXPLAIN SELECT * from tb_seller use index(index_name) WHERE name = 'Xiaomi Technology';
explain select * from tb_seller ignore index(index_name) where name = 'Xiaomi Technology';
explain select * from tb_seller FORCE index(index_name) where name = 'Xiaomi Technology';