A weathered-looking interviewer came over and said to me: Young man, I saw on your resume that you are proficient in MySQL, so let me ask you about your knowledge of MySQL.

Me: Ok (don’t ask, don’t ask ~ ~)

Interviewer: Do you know much about MySQL locks?

Me: It’s ok (actually it’s fine).

Interviewer: Ok, then I'm going to ask about locks.

Me: Sure, sure (bring it on, I can be polite for once).

Interviewer: What levels of locks does MySQL support?

Me: Global (database-level) locks, table locks, and row locks.

Interviewer: Let's start with global locks. How many ways are there to lock the whole database?

Me: FLUSH TABLES WITH READ LOCK (FTWRL) and SET GLOBAL read_only = true.

Interviewer: What’s the difference?

Me: FTWRL is less risky: if the client disconnects abnormally, the global read lock is released automatically. SET GLOBAL read_only = true does not release automatically; the setting stays in effect until it is changed back.

Interviewer: What's the difference between MyISAM and InnoDB locks?

Me: MyISAM does not support row locking, only table-level locking; InnoDB supports more fine-grained row locking.

Interviewer: Have you ever used table locks?

Me: I haven't used them explicitly; table locks are too coarse-grained and perform poorly.

Interviewer: Do you know the MDL lock?

Me: A metadata lock (MDL) is a table-level lock at the server layer. It is implicit; you never need to take it explicitly. Whenever MySQL reads or writes data (SELECT, INSERT, UPDATE, DELETE), it must first acquire the MDL read lock. The MDL write lock is acquired when a table structure change is made. The MDL write lock is mutually exclusive with any other MDL lock, whether that is an MDL read lock or another MDL write lock.

Interviewer: What does the MDL lock do?

Me: MDL locks are meant to resolve conflicts between DDL and DML.

  1. Say transaction A queries some data, and then transaction B commits a change to a field. When transaction A queries again, the data no longer matches what it first saw.

  2. If transaction A updates data before committing while transaction B commits a field change first, a slave replaying the log will apply the field change before the data update, which causes problems.

With the MDL lock, a DDL operation must first acquire the MDL write lock. We know that write locks conflict with both write locks and read locks, so if any query or update started before the DDL is still running, the DDL must block and wait instead of executing; this resolves the conflict.
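The read/write conflict rules above can be sketched as a tiny compatibility check (my own model for illustration, not server code):

```python
# MDL compatibility: read locks share with each other, while a write
# lock (taken by DDL) conflicts with every other MDL lock.
def mdl_compatible(held, requested):
    """Can `requested` be granted while `held` is in place?"""
    return held == "read" and requested == "read"

# DML (SELECT/INSERT/UPDATE/DELETE) takes an MDL read lock;
# DDL (e.g. ALTER TABLE) takes an MDL write lock.
print(mdl_compatible("read", "read"))    # two queries coexist: True
print(mdl_compatible("read", "write"))   # DDL must wait for running DML: False
print(mdl_compatible("write", "read"))   # DML must wait for running DDL: False
```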

Interviewer: With the MDL lock, is online DDL always safe?

Me: Not necessarily.

Assume session1 runs a query first but does not commit. Session2 then runs a DDL to add a field, and finally session3 runs another query. Session2 and session3 both block, and any later session that touches this table blocks as well.

Interviewer: Can you explain why it is blocked in this case?

Me: Session2 (the DDL) needs to acquire the MDL write lock, but session1 still holds its MDL read lock, so session2 waits for session1 to release it. Session3 comes after session2 and only needs a read lock, but since MDL requests are granted in order they queue up, and the pending write lock takes priority over later read requests; that is why session3 is stuck.

Interviewer: So after session1 commits, session2 runs first and then session3? I just tried your example, and session2 and session3 seem to finish almost at the same time. Can you tell me specifically why? (Checking whether I'd fall into the trap.)

Me: By default session3's SELECT autocommits, so its MDL read lock is released the moment the statement finishes. That's why, after session1 commits, session2 and session3 appear to run almost simultaneously. If you make session3 explicitly start a transaction, you can see what actually happens.

With session3 in an explicit transaction, session2 is still stuck after session1 commits, and it only proceeds after session3 commits. So what really happens is that session3 runs first, then session2.

Interviewer: Doesn't that contradict what you said above about MDL requests queuing?

Me: MySQL supports online DDL, which avoids blocking user reads and writes. When a DDL executes, its flow looks like this:

  1. Acquire the MDL write lock
  2. Downgrade to an MDL read lock
  3. Do the real DDL work
  4. Upgrade back to the MDL write lock
  5. Release the MDL lock

Session3's SELECT acquires an MDL read lock during step 2 (read locks are compatible with session2's downgraded read lock), and because session3 does not commit and release it, session2 blocks at step 4 when it tries to upgrade back to the MDL write lock.

Interviewer: (not bad, this kid) Do you know why the DDL downgrades between steps 1 and 2 and upgrades between steps 3 and 4?

Me :(I knew I was going to ask, I’m glad I was prepared)

While holding the MDL write lock at the start, the DDL creates temporary .frm and .ibd files. This step must be exclusive to be safe, but it is also fast. Once the temporary files are created, exclusivity is no longer needed, so the lock is downgraded to a read lock and normal inserts, updates, and deletes can proceed; this is one reason the DDL can be online. After the new data file is fully written, it must replace the old data file, and that swap must be safe too, so after step 3 the DDL tries to upgrade back to the MDL write lock. The swap is also fast, which is the other reason online DDL is supported.
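The five-step flow can be replayed with a toy lock model (my own simplification, not MySQL internals) to show exactly where session2 gets stuck:

```python
# Toy MDL lock: a set of readers plus at most one writer.
class MdlLock:
    def __init__(self):
        self.readers = set()   # sessions holding an MDL read lock
        self.writer = None     # session holding the MDL write lock

    def try_read(self, who):
        if self.writer is None:
            self.readers.add(who)
            return True
        return False           # blocked by a write lock

    def try_write(self, who):
        # A write lock needs no other readers and no other writer.
        if self.writer is None and self.readers <= {who}:
            self.readers.discard(who)
            self.writer = who
            return True
        return False           # blocked: must wait

mdl = MdlLock()
assert mdl.try_write("session2")      # step 1: DDL takes the write lock
mdl.writer = None                     # step 2: downgrade ...
mdl.readers.add("session2")           # ... to a read lock
assert mdl.try_read("session3")       # session3's SELECT slips in here
# step 3: the real DDL work happens now
assert not mdl.try_write("session2")  # step 4: upgrade blocks -- session3
                                      # still holds its read lock
mdl.readers.discard("session3")       # session3 commits
assert mdl.try_write("session2")      # now the upgrade succeeds
```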

Interviewer: We know that InnoDB supports row-level locking. Do you know that there are two types of row-level locking?

Me: Yes, S (shared) locks and X (exclusive) locks. S locks are compatible with other S locks; an X lock is mutually exclusive with any other lock.

Interviewer: Since an X lock is mutually exclusive with any other lock, if transaction A updates a row without committing, will transaction B's query of that row block?

Me: No, because InnoDB supports MVCC (multi-version concurrency control). A plain query reads a snapshot reconstructed from the undo log, so it takes no lock at all.
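A minimal sketch of such a snapshot read, assuming a toy version chain (illustration only, not InnoDB's undo log format):

```python
# Each row keeps a chain of versions tagged with the transaction id
# that wrote it, newest first -- a stand-in for the undo log chain.
versions = [
    (30, "updated-but-uncommitted"),  # written by txn 30, not yet committed
    (10, "committed-value"),          # written by txn 10, committed earlier
]

def snapshot_read(versions, visible_txns):
    """Return the newest version written by a transaction in our snapshot."""
    for txn_id, value in versions:
        if txn_id in visible_txns:
            return value
    return None

# Transaction B's snapshot was taken before txn 30 committed, so it
# reads the old value without taking any lock:
print(snapshot_read(versions, visible_txns={10}))  # committed-value
```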

Interviewer: Do you know about IS (intention shared) and IX (intention exclusive) locks?

Me: First of all, both are table-level locks. Because InnoDB supports row locks, before a table-level lock can be granted, MySQL must confirm that no row in the table holds a conflicting row lock (such as an X lock). Without intention locks it would have to check row by row, which is very inefficient; with intention locks the check is a single table-level test. Here's an example:

select * from user where id=1 for update;

When a transaction takes an X lock on the row id=1, an IX lock is first added on the user table.

LOCK TABLES user READ;

This tries to take a table-level read lock, finds the IX lock already on the table, and blocks. Similarly, if a transaction takes a shared lock on a row of data:

select * from user where id=1 lock in share mode;

then an IS lock is added on the corresponding table. In this case, if you execute:

LOCK TABLES user WRITE;

it also blocks, because the IS lock on the table conflicts with the requested table-level write lock.
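Putting the four table-level modes together, the standard compatibility matrix (as documented for InnoDB's IS/IX/S/X locks) can be written as a lookup table; the lock names are real, the code itself is just an illustration:

```python
# COMPAT[held][requested] -> can both locks be held on the table at once?
COMPAT = {
    "IS": {"IS": True,  "IX": True,  "S": True,  "X": False},
    "IX": {"IS": True,  "IX": True,  "S": False, "X": False},
    "S":  {"IS": True,  "IX": False, "S": True,  "X": False},
    "X":  {"IS": False, "IX": False, "S": False, "X": False},
}

# A row X lock implies an IX lock on the table, so LOCK TABLES ... READ
# (a table S lock) must wait:
print(COMPAT["IX"]["S"])   # False
# A row S lock implies a table IS lock, so LOCK TABLES ... WRITE waits:
print(COMPAT["IS"]["X"])   # False
# Intention locks never conflict with each other -- row locks coexist:
print(COMPAT["IX"]["IX"])  # True
```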

Interviewer: Let’s talk about row locks. What row locks does InnoDB support?

Me: Record locks, gap locks, and next-key locks.

Interviewer: Suppose you have a table with 10 records and a field user_id, where user_id has a normal (non-unique) index.

+----+---------+
| id | user_id |
+----+---------+
|  1 |      10 |
|  2 |      20 |
|  3 |      30 |
|  4 |      40 |
|  5 |      50 |
|  6 |      60 |
|  7 |      70 |
|  8 |      80 |
|  9 |      90 |
| 10 |     100 |
+----+---------+

If transaction A executes:

SELECT * FROM user WHERE user_id=50 FOR UPDATE;

What happens when transaction B then executes the following SQL?

INSERT INTO user set user_id=45;

Me: Blocked.

Interviewer: Can you tell me why?

Me: Because InnoDB's next-key locking not only locks the record user_id=50 but also the gaps around it: the locks cover the intervals (40,50] and (50,60]. Since the 45 to be inserted falls between 40 and 50, the insert blocks.

Interviewer: By your interval locking, 60 is included, so inserting 60 should also block?

INSERT INTO user set user_id=60;

Me: Actually, no. This involves a next-key lock optimization: in an equality query, when the scan moves right and the last value does not satisfy the equality condition, the next-key lock degenerates into a gap lock. So (50,60] degenerates to the open gap (50,60); 60 itself is not locked, and the insert succeeds.

Interviewer: And 40 is not inside the locked range, so you can insert 40?

INSERT INTO user set user_id=40;

Me: Not necessarily. The leaf nodes of the secondary index user_id are sorted by (user_id, id) pairs, like (40,4), (50,5). With the current 10 rows, the next auto-increment id is at least 11, so the new entry (40,11) sorts after the existing (40,4); that places it inside the locked gap (40,50), so the insert blocks. By the same reasoning, the insert of 60 above succeeds: its entry (60,11) sorts after the existing (60,6), outside the locked gap.
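The three answers above can be checked against a small model of the locked intervals (my own simplification of next-key locking, treating index entries as (user_id, id) pairs):

```python
# Locks taken by `SELECT ... WHERE user_id=50 FOR UPDATE` on the
# non-unique index above: a next-key lock over ((40,4), (50,5)] and,
# after the optimization, a pure gap lock over ((50,5), (60,6)).
locks = [
    # (low, high, lock_high): interval over (user_id, id) entries
    ((40, 4), (50, 5), True),   # next-key lock: gap + the record itself
    ((50, 5), (60, 6), False),  # degenerated gap lock: record 60 excluded
]

def insert_blocks(entry, locks):
    """Would inserting this (user_id, id) index entry block?"""
    return any(
        low < entry < high or (lock_high and entry == high)
        for low, high, lock_high in locks
    )

print(insert_blocks((45, 11), locks))  # True: 45 falls inside the gap
print(insert_blocks((60, 11), locks))  # False: sorts after (60,6), succeeds
print(insert_blocks((40, 11), locks))  # True: sorts after (40,4), blocks
```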

Interviewer: What happens to all of these inserts if transaction A uses SELECT ... LOCK IN SHARE MODE instead of SELECT ... FOR UPDATE?

Me: No change; the results are the same, because an insert needs an X lock, which is mutually exclusive with any lock, shared or exclusive.

Interviewer: What happens if user_id is not a normal index but a unique index?

Me: When the index is unique, degeneration happens: the next-key lock degenerates into a record lock, so in the end only the record 50 is locked.

Interviewer: What if user_id has no index?

Me: Then all records will be locked and any insertions will be blocked.

Interviewer: Do you know why there is a gap lock?

Me: To prevent phantom reads. For example, transaction A executes the following query:

SELECT * FROM user WHERE id >= 9 FOR UPDATE;

This should return two records (id=9 and id=10). If another transaction B then executes:

INSERT INTO user set user_id=110;

Without gap locks, transaction A's next query would find one more record: a phantom read. With gap locks, the interval [9, +∞) is locked, so transaction B's insert blocks. Note that gap locks only take effect when the transaction isolation level is set to REPEATABLE READ.

Interviewer: If a transaction holding locks does not commit and subsequent transactions that need those locks all block, what problems can that cause?

Me: As more and more transactions block, more and more threads block; in serious cases the connection pool fills up and MySQL can no longer serve requests. InnoDB, however, will automatically give up a blocked lock wait after a timeout. The default is 50 seconds:

+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| innodb_lock_wait_timeout | 50    |
+--------------------------+-------+

Interviewer (sizing me up): Do you know about the AUTO-INC lock?

Me: Yes, the auto-increment lock. In InnoDB, each table maintains a table-level auto-increment counter. When inserting into a table, the current counter value is obtained with something like:

SELECT MAX(auto_inc_col)  FROM user FOR UPDATE;

The insert then adds one to this base to get the auto-increment id for the new row.

Interviewer: We know that locks taken in a transaction are released when it commits. So after one transaction takes the next auto-increment id but has not yet committed, do other transactions have to wait for it to get their own ids? Isn't that inefficient?

Me: To improve insert performance, the auto-increment lock is not held until the transaction commits; it is released as soon as the insert statement itself finishes. This is also why a transaction rollback can leave non-contiguous ids:

select * from user;
+----+---------+
| id | user_id |
+----+---------+
| .. |      .. |
|  9 |      90 |
| 10 |     100 |
| 12 |     120 |
+----+---------+
# The insert for id=11 rolled back, but the id had already been consumed; auto-increment ids are not rolled back.
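A toy counter (illustration only, not InnoDB code) shows why the ids end up non-contiguous:

```python
# The id is allocated when the insert statement runs and is never
# given back, so a rollback leaves a hole in the sequence.
class AutoIncCounter:
    def __init__(self, start):
        self.value = start

    def next_id(self):
        # Allocated under the AUTO-INC lock, which is released right
        # after the insert statement -- not at transaction commit.
        self.value += 1
        return self.value

counter = AutoIncCounter(start=10)
hole = counter.next_id()   # one transaction takes id 11, then rolls back
kept = counter.next_id()   # the next insert gets id 12 -- 11 is lost
assert hole == 11
assert kept == 12          # the table jumps from id 10 to id 12
```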

Interviewer: Even though the AUTO-INC lock is released at statement end instead of commit, under high concurrency it still serializes allocation of auto-increment ids, which affects efficiency. How is that solved?

Me: InnoDB can now allocate auto-increment values under a lightweight in-memory mutex: the counter is simply incremented while the mutex is held, which is faster than the table-level AUTO-INC lock.

Interviewer: Do you know about deadlocks?

Me: Yes.

Interviewer: Under what circumstances can deadlock occur?

Me: A classic condition for deadlock is hold-and-wait: each party holds resources the other needs while requesting the resources the other holds. For example:

Transaction 1 first locks the row id=1; transaction 2 then locks the row id=2. Next, transaction 1 tries to lock id=2, finds it held by transaction 2, and blocks. Finally, transaction 2 tries to lock id=1, finds it held by transaction 1, and blocks too. Both sides now wait on each other forever: this is a deadlock.

Interviewer: How do you solve the deadlock problem?

Me:

  1. InnoDB provides a lock wait timeout: when a transaction's wait for a lock times out, the wait is abandoned automatically so the other transaction can proceed.
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| innodb_lock_wait_timeout | 50    |
+--------------------------+-------+
  2. Deadlocks can also be avoided by making every transaction acquire locks in the same agreed-upon order.

  3. Detect deadlocks ahead of time with deadlock detection, which InnoDB enables by default:
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| innodb_deadlock_detect | ON    |
+------------------------+-------+

InnoDB performs deadlock detection with a wait-for graph: it keeps a list of lock holders and a list of waiting transactions, builds a graph from these lists, and each time a lock wait begins it checks whether the new edge creates a cycle in the graph, which would mean a deadlock.
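The wait-for graph check can be sketched as simple cycle detection (my own sketch, not InnoDB's implementation):

```python
# Nodes are transactions; an edge t1 -> t2 means t1 is waiting for a
# lock held by t2. A cycle in this graph means a deadlock.
def has_cycle(edges):
    graph = {}
    for waiter, holder in edges:
        graph.setdefault(waiter, []).append(holder)

    def dfs(node, path):
        if node in path:
            return True  # we came back to a transaction already on the path
        return any(dfs(nxt, path | {node}) for nxt in graph.get(node, []))

    return any(dfs(start, frozenset()) for start in graph)

# txn1 holds id=1 and waits for id=2; txn2 holds id=2 and waits for id=1:
print(has_cycle([("txn1", "txn2"), ("txn2", "txn1")]))  # True -> deadlock
print(has_cycle([("txn1", "txn2")]))  # False -> plain blocking, no deadlock
```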

The interviewer looks at his Rolex: (looks like I can't "lock" down this young man today, and time is almost up) Wait here while I go get HR.

Me: Ok ok (finally over).
