
Hello, I’m Yes.

A while ago I wrote an article about MySQL locks. After reading it, some friends raised questions that are quite representative, so I ran a set of experiments to explore them with facts.

That article mentioned Record Locks, which, as the name implies, lock records — and those locks are placed on indexes.

It may not be easy to grasp that locks are placed on indexes, nor whether blocking differs depending on whether a secondary index is hit, under the repeatable read and read committed isolation levels.

This may be a bit of a mouthful, but let me show you a few experiments and give you a conceptual summary at the end, which should make it all very clear.

The experimental MySQL version is 5.7.26.

Experiment 1: Read committed isolation level, locking a non-indexed column

Create a very simple table with only primary key indexes and no secondary indexes.

CREATE TABLE `yes` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(45) DEFAULT NULL,
  `address` varchar(45) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8mb4

The isolation levels are as follows:
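For reference, a minimal sketch of how the level can be checked and set on MySQL 5.7 (session scope assumed):

SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
SELECT @@tx_isolation; -- should show READ-COMMITTED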

To turn off autocommit transactions:
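Roughly, assuming it is done per session:

SET autocommit = 0; -- alternatively, start each transaction explicitly with BEGIN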

Prepared data:
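The exact rows come from the screenshot; a sketch of the kind of data assumed in the rest of this experiment (the row with id 1 has name 'yes' and the row with id 2 has name 'xx'; the third row and the address values are placeholders):

INSERT INTO `yes` (`name`, `address`) VALUES ('yes', 'addr-1'), ('xx', 'addr-2'), ('aaa', 'addr-3');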

At this point, transaction A is initiated, the following statement is executed, and the transaction is not committed:
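Reconstructed from the discussion below, transaction A's statement is a locking read on the non-indexed name column, roughly:

SELECT * FROM `yes` WHERE `name` = 'yes' FOR UPDATE;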

Next, initiate transaction B and execute the following statement:
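Transaction B locks a different name value; roughly:

SELECT * FROM `yes` WHERE `name` = 'xx' FOR UPDATE;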

You might think that transaction B will not block, because transaction B locks name = xx while transaction A locks name = yes, so there should be no conflict between them. Yet transaction B does block. Let's see who is waiting for whom.
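On MySQL 5.7 the wait relationship can be inspected through information_schema; a sketch:

SELECT * FROM information_schema.innodb_lock_waits;
SELECT trx_id, trx_state, trx_query FROM information_schema.innodb_trx;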

As you can see, transaction 6517(B) is waiting for transaction 6516(A).

Run SELECT * FROM information_schema.innodb_locks; to view information about the related locks:

This result indicates that transaction B (6517) wants a record lock on the record with id 1, but that record is held by transaction A (6516).

Yes, the 1 here doesn’t actually mean the first record, it means the record with ID 1.

One might wonder why lock_data is 1.

(I have not read the source code; the following is my inference.) Under read committed, select ... for update without a usable index does a full table scan: each scanned record is locked first, then checked against the condition; if it does not match, the lock is released and the scan continues. Transaction A scans the table, locks the record with id 1 (name = 'yes', which matches) and keeps that lock; the later records do not match, so they are not kept locked. Transaction B goes through the same process: it starts scanning from the first record and tries to lock it, but that record is already held by transaction A, so it conflicts. The first record's id is 1, hence lock_data is 1.

Now, if I commit transaction A, transaction B gets its result immediately.

From the above experiment: if a locking query cannot hit any secondary index, the locks are placed on the primary key (clustered) index.

The non-leaf nodes of the clustered index contain only primary key values, not name values, so the query can only perform a sequential full table scan. The records that meet the condition end up locked, but during the scan the query can be blocked by records that are already locked by others, even if those records are not its target.

Look at this experiment, and you’ll see.

This experiment swaps the execution order of the two statements used by transactions A and B.

First, transaction C executes the statement that matches the record with id 2 (the statement transaction B used earlier) and does not commit:
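Roughly:

SELECT * FROM `yes` WHERE `name` = 'xx' FOR UPDATE;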

Then, another transaction D executes:
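Presumably transaction A's earlier statement:

SELECT * FROM `yes` WHERE `name` = 'yes' FOR UPDATE;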

This is also blocked, but look at the lock information and you will see:

Transaction C scans from the first record: it locks the record with id 1, finds it does not match, releases it, continues scanning, and then locks the record with id 2, which matches.

Transaction D also starts scanning from the record with id 1, which meets its condition, and locks it. It then continues scanning to the record with id 2, which is held by transaction C, so it blocks.

This is also consistent with my inferences above.

Let’s continue the experiment.

This time let's try update. A new transaction E executes:
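A sketch of the kind of statement, assuming it updates the non-indexed address column for rows with name = 'yes' (the new value is a placeholder):

UPDATE `yes` SET `address` = 'new-addr' WHERE `name` = 'yes';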

Then another transaction F executes:
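Assuming it targets a different name value, again with a placeholder value:

UPDATE `yes` SET `address` = 'other-addr' WHERE `name` = 'xx';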

There was no blocking, which is actually what we expected. At the read committed level, even without an index, a full-table-scan update behaves differently from select ... for update: it first checks whether a record matches the condition and only then locks it, rather than locking every scanned record and then checking.

Let’s continue the experiment.

After all transactions are committed, a new transaction G executes the following statement and does not commit the transaction:
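Presumably the same locking read as before:

SELECT * FROM `yes` WHERE `name` = 'yes' FOR UPDATE;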

Next, another transaction H executes the following statement:
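An insert which, per the note below, even reuses the locked name value (the address is a placeholder):

INSERT INTO `yes` (`name`, `address`) VALUES ('yes', 'whatever');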

As you can see, transaction H is not blocked; it goes through smoothly.

This shows that at the read committed level, only existing records are locked; inserts are not guarded. Even inserting a row whose name is yes is not blocked.

Experiment 2: Repeatable read isolation level, locking a non-indexed column

The isolation level is repeatable read:
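A sketch of the setting (session scope assumed):

SET SESSION TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT @@tx_isolation; -- should show REPEATABLE-READ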

The data is the same as before:

At this point, transaction A is initiated, the following statement is executed, and the transaction is not committed:

Next, initiate transaction B and execute the following statement:

As expected, transaction B is blocked. The lock information is as follows; the conflict is on the record with id 1.

Commit transactions A and B, then start a transaction C that runs transaction B's earlier statement (the one matching the record with id 2), without committing:

Then create another transaction D that runs transaction A's earlier statement (the one matching the record with id 1):

In other words, transactions C and D repeat transactions A and B with the execution order swapped. Transaction D is again blocked on the record with id 1.

This is because transaction C scans the entire table from the first record, and even for records that do not match its condition, it does not release their locks before the transaction commits.

This is different from read committed.

Commit transactions A, B, C, and D, then create transaction E:

Next, another transaction F executes the following statement:

Transaction F is blocked. Look at the lock information:

The conflicting lock_data is the supremum pseudo-record, which was mentioned in the previous article. Every InnoDB page contains two pseudo-records, infimum (minimum) and supremum (maximum); they store no data and act like dummy nodes in a linked list.

It turns out the supremum record is also locked by transaction E. Since the table's id is auto-increment, the inserted record lands right at the end of the table, so the conflict occurs.

This was a bit of a surprise to me, as I thought the insertion of transaction F was blocked by the gap lock imposed by transaction E.

I create a record with id 6, and the data in the table is as follows:

Another transaction then executes the locking query again and does not commit:

Next, in another transaction I run an insert that explicitly specifies id 4, so the new record would be placed before the record with id 6.
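Roughly (only the explicit id 4 matters here; name and address are placeholders):

INSERT INTO `yes` (`id`, `name`, `address`) VALUES (4, 'abc', 'addr-4');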

Check the lock information:

The inserting transaction requests a record lock plus gap lock, but it is blocked by the record lock on id 6 held by the previous transaction.

This hits a blind spot of mine: the earlier insert seemed to need only a record lock, so why does this insert also request a gap lock? And is it really the gap that causes the blocking? I'll dig into it another time; if anyone knows, please enlighten me in the comments.

Let’s continue the experiment, clean up the data and restore to the initial state:

Start a transaction G and execute:

Then start another transaction H and execute:

Transaction H is blocked. Look at the lock information:

As you can see, update locking at the repeatable read level is not the same as at read committed. The lock_data of the waiting lock is 1, indicating that transaction G did not release the lock after scanning the record with id 1.

If the startup order of G and H is reversed, that is, H's statement is executed first and then G's, the result is the same and the lock_data is still 1. This indicates that under repeatable read, update does not check the condition before locking; it locks each scanned record first and keeps the lock regardless of whether the condition is met.

Updates are blocked (and, as shown earlier, so are inserts). The final conclusion is:

At the repeatable read level, locking on a non-indexed column ends up locking every record of the table via the full table scan, blocking all inserts and modifications.

To summarize:

At this point, here is the reader's question:

The question was asked in the context of the repeatable read level; let me now summarize the answer:

At the read committed level:

When no secondary index can be used (so the clustered index is scanned), select ... for update locks records from front to back in full-table-scan order: each traversed record is locked first, and if it turns out not to meet the condition, the lock is released and the scan continues until the full table scan ends.

Inserts will not be blocked.

If a record it needs to lock is already locked by another transaction, it blocks; if not, it does not block.

At the repeatable read level:

When no secondary index can be used (so the clustered index is scanned), select ... for update locks records from front to back in full-table-scan order: each traversed record is locked first, and even if it does not meet the condition, the lock is not released.

So as long as such a full-table-scan lock exists, inserts will be blocked.

The locking behavior of update is consistent with that of select ... for update.

Related to that, there’s another question:

All lock_type values are RECORD, i.e., row-level locks.

Experiment 3: Read committed isolation level, locking an indexed column

At this point, the name column is indexed.

CREATE TABLE `yes` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(45) DEFAULT NULL,
  `address` varchar(45) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_name` (`name`)
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8mb4

Also prepare the following data:

Initiate transaction A, execute the following statement, and the transaction is not committed:
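Presumably the same locking read on name = 'yes' as in experiment 1, now able to use idx_name:

SELECT * FROM `yes` WHERE `name` = 'yes' FOR UPDATE;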

Then initiate transaction B by executing the following statement:
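Presumably a locking read on a different name value:

SELECT * FROM `yes` WHERE `name` = 'xx' FOR UPDATE;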

As you can see, it is not blocked; it goes through smoothly.

At the read committed level, only the matching entries on the name index are locked, rather than every row of the table, so transactions A and B do not block each other.

At this point, another transaction C executes the following statement:

You can see that blocking has occurred. View the lock information at this point:

As you can see, the lock's index has indeed changed to idx_name, and lock_data shows that the locked entry is the yes record with id 1.

When a secondary index can be hit, the lock is placed on the corresponding secondary index.

Let's continue the experiment.

After all the above transactions are committed, transaction C executes the following statement and does not commit:

Next, transaction D executes the following statement:
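Per the note below, an insert that reuses the locked name value (the address is a placeholder):

INSERT INTO `yes` (`name`, `address`) VALUES ('yes', 'whatever');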

There is no blocking; the data is inserted smoothly.

Inserting a row with the same name does not block.

So at the read commit level, inserts are not blocked.

I won't repeat the update experiments here. The only difference from experiment 1 is that the locks are on the name index instead of the clustered index; otherwise the behavior is the same.

Experiment 4: Repeatable read isolation level, locking an indexed column

Also prepare the following data:

At the repeatable read level, transaction A executes:
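Presumably the same locking read on name = 'yes', now under repeatable read with idx_name in place:

SELECT * FROM `yes` WHERE `name` = 'yes' FOR UPDATE;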

Next, transaction B executes:

Check the lock information:

Blocking is expected here, because the yes record is the last entry (alphabetically) in the name index. To prevent phantom reads, the repeatable read isolation level places gap locks before and after the matched record, and transaction B's new record would be inserted after the yes record.

However, as shown in the screenshot, lock_mode is a record lock and lock_data is supremum. So is it blocked not by a gap lock, but by the row lock on the supremum pseudo-record?

Commit transactions A and B, then execute transaction C:

Then execute transaction D:

The insert is not blocked: transaction C locks the yes record and the gaps around it in the name index, and since transaction B has committed, its record now bounds that gap; the record transaction D inserts does not fall within it, so it is not locked.

Next, transaction D executes another insert:

This time it is blocked. Let's look at the lock information:

As you can see, the lock being waited for is a record lock plus gap lock (a next-key lock), which is consistent with our understanding and with the figure above, because the data to be inserted falls between the yes record and the official-account record ("yes training guide") in the name index.

Summary

If the indexed column is hit, only the corresponding index entries are locked.

At the read committed level:

select ... for update and update lock only the matching records themselves, so inserts at that position are not blocked, and inserts at other positions are unaffected as well.

At the repeatable read level:

select ... for update and update lock the gaps before and after the matched index position as well as the records themselves, so inserts into those positions are blocked; inserts at other positions are unaffected.

Finally

That makes four groups of experiments, thirteen experiments in total.

It was quite rewarding; the surprise was discovering a blind spot in the details, which I'll research and write up in another article.

Here’s another conceptual summary from the experiment:

  • Locks are applied to indexes: if a secondary index can be hit, the lock is placed on that secondary index; otherwise it falls back to locking on the clustered index.
  • Locking on the clustered index means the locks are taken during a full table scan.
  • Under repeatable read, whichever index is hit and whether the statement is select ... for update or update, all scanned records are locked regardless of whether they meet the condition, and the locks are released only when the transaction commits.
  • Under read committed, select ... for update locks each scanned record first and then checks the condition; records that do not match are released immediately, without waiting for the transaction to commit. update, in contrast, first checks whether a record matches the condition and only then locks it.

Note: the experiments above are based on MySQL 5.7.26 with the InnoDB storage engine.

I spent three weeknights on these experiments. Because the time was scattered, mistakes crept in along the way: at one point the statement I used to set the transaction isolation level was wrong, which led me to run some experiments under the wrong premise.

I then spent a Saturday reorganizing everything. There are a lot of experiment screenshots, and by the time you reach the end you may have forgotten the beginning, so I suggest reading back and forth together with the conclusions, which will give you a deeper understanding. Some of the conclusions are inferred from the experimental phenomena; I have not checked them against the source code or official documentation, so if there are any errors please correct me, and if you have doubts please run the experiments yourself and discuss them in the comments.

More interview questions can be found in my summary repository; each question comes with an answer, and the Java fundamentals section has been updated ~

Interview question repository (with answers)

There is also a collection of my original articles.

That's all for today's share; see you next time. If you found the article helpful,

come give it a triple (like, bookmark, share)!