Understanding Locks in MySQL

1 What is a lock

1.1 Overview of locks

There are many examples of locks in everyday life, from simple mechanical door locks to combination locks, and now fingerprint and face-recognition locks. These everyday examples make the concept of a lock easy to grasp.

For MySQL, locks are an important feature. Database locks exist to support concurrent access to shared resources and to provide data integrity and consistency, so that data remains correct even under highly concurrent access.

1.2 Two concepts of locking

In databases, both lock and latch can be called locks, but they have different meanings.

A latch is commonly referred to as a lightweight lock because it is held only for a very short time; if it were held for long, application performance would suffer badly. In the InnoDB engine, latches are divided into mutexes and RW-locks (read/write locks). Their purpose is to ensure that concurrent threads operate on critical resources correctly, and there is usually no deadlock-detection mechanism for them.

The object of a lock is the transaction. Locks are used to lock objects in the database, such as tables, pages, and rows, and locked objects are generally released only when the transaction commits or rolls back (the exact release time depends on the transaction isolation level).
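The difference is easy to see in practice. As a minimal sketch, assuming MySQL 8.0 (MySQL 5.7 exposes information_schema.innodb_locks instead of performance_schema.data_locks), latches show up as internal mutex/RW-lock statistics, while locks show up as transaction-level locks:

-- Latches: internal mutex / rw-lock statistics, invisible to transactions
SHOW ENGINE INNODB MUTEX;

-- Locks: transaction-level locks currently held or waited for
SELECT engine_transaction_id, object_name, lock_type, lock_mode, lock_status
FROM performance_schema.data_locks;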

2 Locks in InnoDB storage engine

2.1 Granularity of locks

In a database, lock granularity can be divided into row locks, page locks, and table locks. Locks can also be escalated between granularities: lock escalation means converting many fine-grained locks into a coarser one, for example turning 1,000 row locks on a table into a single page lock, or turning page locks into a table lock. The three lock granularities are introduced below (reference: https://blog.csdn.net/baolingye/article/details/102506072).

Table locks

Table-level locking is the coarsest-grained locking mechanism among MySQL’s storage engines. Its biggest advantages are that the implementation logic is very simple and it places minimal overhead on the system, so locks are acquired and released quickly. And because a table-level lock locks the whole table at once, it also neatly avoids the deadlocks that plague us.

Of course, the biggest downside of such coarse granularity is that contention for locked resources is the most likely, which greatly reduces concurrency.

Table level locking is mainly used by MyISAM, MEMORY, CSV and other non-transactional storage engines.

Features: low overhead and fast locking; no deadlocks; large lock granularity, so the probability of lock conflicts is high and concurrency is low.
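As a quick illustration, table locks can also be taken explicitly. A sketch, assuming a MyISAM table named articles (the table name is only for illustration):

LOCK TABLES articles READ;    -- other sessions can read the table but not write to it
-- ... run queries ...
UNLOCK TABLES;

LOCK TABLES articles WRITE;   -- other sessions can neither read nor write the table
-- ... run updates ...
UNLOCK TABLES;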

Page locks

Page-level locking is a locking level fairly specific to MySQL and is not common in other database management systems. Its characteristic is that the locking granularity sits between row-level and table-level locking, so both the cost of acquiring a lock and the concurrency it allows sit between the two as well. In addition, page-level locking, like row-level locking, can deadlock. As the granularity of locked resources decreases, the memory needed to lock the same amount of data grows and the implementation algorithm becomes more complex; on the other hand, the likelihood that an access request has to wait on a lock decreases, and the overall concurrency of the system increases. Page-level locking is primarily used by the BerkeleyDB storage engine.

Features: overhead and locking time between those of table and row locks; deadlocks can occur; locking granularity between table locks and row locks, with average concurrency.

Row locks

The biggest characteristic of row-level locking is that the granularity of the locked object is very small; it is also the finest locking granularity implemented by the major database management systems. Because the granularity is so small, the probability of contention for locked resources is the lowest, which gives the application as much concurrent processing capacity as possible and improves the overall performance of applications that require high concurrency.

While it has the advantage of higher concurrency, row-level locking also has drawbacks. Because the granularity of the locked resource is small, more work has to be done each time a lock is acquired and released, so the cost is higher. In addition, row-level locking is the most prone to deadlocks.

Features: high overhead and slow locking; deadlocks can occur; the lock granularity is the smallest, the probability of lock conflicts is the lowest, and concurrency is the highest.

Comparing table locks with row locks, we can see that their characteristics are basically opposite. From the locking point of view, table-level locks are better suited to applications that are mostly queries with only a small amount of data updated by index conditions, such as Web applications; row-level locks are better suited to applications with large numbers of concurrent updates to small amounts of different data by index conditions, together with concurrent queries, such as online transaction processing (OLTP) systems.
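A minimal two-session sketch of why row-level locking suits OLTP-style workloads, assuming an InnoDB table t(id INT PRIMARY KEY, v INT) that already contains rows with id 1 and 2 (the table and column names are assumptions for illustration):

-- Session A
START TRANSACTION;
UPDATE t SET v = v + 1 WHERE id = 1;   -- X lock on the row with id = 1 only

-- Session B, running at the same time, does not block:
START TRANSACTION;
UPDATE t SET v = v + 1 WHERE id = 2;   -- X lock on the row with id = 2

-- With a table-level lock (the same workload on a MyISAM table),
-- session B would have to wait until session A finished.
COMMIT;   -- in each session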

Granularity of locks supported by different MySQL engines: MyISAM, MEMORY, and CSV support table-level locking; BerkeleyDB supports page-level locking; InnoDB supports row-level locking (plus table-level intention locks, described below).

2.2 Types of locks

There are different types of locks in the InnoDB storage engine, which are described below.

S or X (shared lock, exclusive lock)

In fact, there are only two kinds of data operations, read and write, and the database uses different locks for these two operations. InnoDB implements standard row-level locks: shared locks and exclusive locks.

  • A shared lock (read lock, S lock) allows a transaction to read a row of data.
  • An exclusive lock (write lock, X lock) allows a transaction to delete or update a row of data.
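A small sketch of how S and X locks interact, assuming an InnoDB table t that contains a row with id = 1 (table and column names are assumptions):

-- Session A
START TRANSACTION;
SELECT * FROM t WHERE id = 1 LOCK IN SHARE MODE;   -- S lock on the row

-- Session B
START TRANSACTION;
SELECT * FROM t WHERE id = 1 LOCK IN SHARE MODE;   -- S + S: compatible, returns immediately
SELECT * FROM t WHERE id = 1 FOR UPDATE;           -- S + X: incompatible, blocks until session A commits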

IS or IX (intention shared lock, intention exclusive lock)

The InnoDB storage engine also supports an additional type of lock called the intention lock. Intention locks are table-level locks in InnoDB and come in two kinds:

  • Intention shared lock (IS): indicates that a transaction intends to acquire shared locks on rows of a table.
  • Intention exclusive lock (IX): indicates that a transaction intends to acquire exclusive locks on rows of a table.

These locks are not all compatible with one another; some are incompatible. Compatibility means that after transaction A obtains a certain lock on a row, transaction B also tries to obtain a certain lock on that row; if B can obtain its lock immediately, the two locks are compatible, and otherwise they are not.

Let’s take a look at the compatibility of these locks.

  • S and X (shared and exclusive) locks: S is compatible with S, while X is incompatible with both S and X. In other words, several transactions can hold S locks on the same row at the same time, but once one transaction holds an X lock on a row, no other transaction can obtain an S or X lock on it.

  • IS and IX (intention) locks: IS and IX are compatible with each other. At the table level, IS is compatible with S but not with X, while IX is incompatible with both S and X. Intention locks do not conflict with row-level S and X locks; they only conflict with table-level lock requests.
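Intention locks can be observed directly. A sketch, assuming MySQL 8.0 and an InnoDB table t with a row id = 1 (MySQL 5.7 exposes similar data in information_schema.innodb_locks; the table name is an assumption):

-- Session A
START TRANSACTION;
SELECT * FROM t WHERE id = 1 FOR UPDATE;   -- takes an IX lock on the table, then an X lock on the row

-- Session B
SELECT object_name, lock_type, lock_mode
FROM performance_schema.data_locks;
-- typically shows one TABLE lock in mode IX plus one RECORD lock in mode X
-- (displayed as X,REC_NOT_GAP for a point query on a unique index)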

3 Summary of the above

Here is a mind map to summarize the previous concepts.

4 Consistent nonlocking reads and locking reads

Locking Reads

When data is queried inside a transaction, an ordinary SELECT statement does not lock the queried rows, and other transactions can still update or delete them. For additional safety, InnoDB therefore provides two types of locking reads:

  • SELECT ... LOCK IN SHARE MODE
  • SELECT ... FOR UPDATE

SELECT … LOCK IN SHARE MODE: places S locks on the rows that are read. Other transactions can still acquire S locks on those rows, but a request for an X lock will block.

SELECT … FOR UPDATE: places X locks on the queried rows and their associated index records, blocking other transactions’ requests for S or X locks on them. The locks added by these two statements are released when the transaction commits or rolls back. Note: SELECT … FOR UPDATE can keep rows locked only if autocommit is disabled (or a transaction is explicitly opened); if autocommit is enabled, the matching rows are not kept locked.
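A sketch of the autocommit caveat above, assuming a table t with a row id = 1 (names are assumptions for illustration):

-- With autocommit enabled, this statement commits immediately,
-- so its X lock is released right away and nothing stays locked:
SELECT * FROM t WHERE id = 1 FOR UPDATE;

-- To actually hold the lock, disable autocommit or open a transaction:
SET autocommit = 0;                         -- or: START TRANSACTION;
SELECT * FROM t WHERE id = 1 FOR UPDATE;    -- X lock held until COMMIT / ROLLBACK
COMMIT;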

Consistent nonlocking reads

Consistent nonlocking read refers to the way the InnoDB storage engine reads rows in the current database through multi-version concurrency control (MVCC). If the row being read is undergoing a DELETE or UPDATE, the read does not wait for the row lock to be released; instead, InnoDB reads a snapshot of the row. This nonlocking mechanism therefore greatly improves database concurrency.

Consistent nonlocking reads are InnoDB’s default read mode: reads do not take or wait for locks on rows. InnoDB uses consistent nonlocking reads at the READ COMMITTED and REPEATABLE READ transaction isolation levels.

However, the snapshot that is read is defined differently. At the READ COMMITTED isolation level, a consistent nonlocking read always reads the latest committed snapshot of the row. At the REPEATABLE READ isolation level, it reads the version of the row data as of the start of the transaction.

Let’s use a simple example to illustrate the difference between the two approaches.

First, create a table:
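A minimal definition consistent with the insert below (the single-column layout and the column name id are assumptions):

create table lock_test (id int) engine = InnoDB;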

Insert a row of data:

insert into lock_test values(1);

View the isolation level:

select @@tx_isolation;

The following looks at the two transaction isolation levels separately.

At the REPEATABLE READ transaction isolation level:

At the REPEATABLE READ isolation level, the row data as of the start of the transaction is read, so after session B modifies the data, session A’s repeated query still returns the original data.
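A sketch of that flow, assuming the lock_test table above and that session B changes the value from 1 to 2 (the column name id and the exact modification are assumptions):

-- session A (REPEATABLE READ)
set session transaction isolation level repeatable read;
start transaction;
select * from lock_test;                   -- returns 1

-- session B
update lock_test set id = 2 where id = 1;
commit;

-- session A, same transaction
select * from lock_test;                   -- still returns 1: the snapshot from the start of the transaction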

At the READ COMMITTED transaction isolation level:

At the READ COMMITTED isolation level, the latest committed snapshot is read, so once session B has modified the data and committed its transaction, session A can no longer read the original data; its query reflects session B’s change.
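The same flow under READ COMMITTED, with the same assumptions as the previous sketch:

-- session A (READ COMMITTED)
set session transaction isolation level read committed;
start transaction;
select * from lock_test;                   -- returns 1

-- session B
update lock_test set id = 2 where id = 1;
commit;

-- session A, same transaction
select * from lock_test;                   -- returns 2: each read sees the latest committed snapshot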

5 Row lock algorithms

The InnoDB storage engine has three row-locking algorithms:

  • Record Lock: a lock on a single row record.
  • Gap Lock: a lock on a range that does not include the record itself.
  • Next-Key Lock: Gap Lock + Record Lock, i.e. a lock on a range that also locks the record itself.

Record Lock: it is always the index record that is locked. If the table was created without any explicit index, the InnoDB storage engine uses the implicit primary key (the hidden clustered index) for locking.

Next-Key Lock: a locking algorithm that combines Gap Lock and Record Lock. Under the Next-Key Lock algorithm, InnoDB uses this locking for row queries. For example, if an index contains the values 10, 20, and 30, the ranges that may be Next-Key locked are:

(-∞, 10], (10, 20], (20, 30], (30, +∞)

In addition to Next-Key Locking there is Previous-Key Locking, the opposite technique: it locks the range together with the previous value rather than the next one. For the same index values as above, Previous-Key Locking would lock the ranges:

(-∞, 10), [10, 20), [20, 30), [30, +∞)

However, not every query uses a Next-Key Lock. There is a special case: if the query is on a unique index (including the primary key) and looks up a single value, the Next-Key Lock is downgraded to a Record Lock.

Let’s explain this with an example.

CREATE TABLE test (
    x INT,
    y INT,
    PRIMARY KEY(x),   -- x is the primary key
    KEY(y)            -- y is an ordinary (secondary) index
);
INSERT INTO test SELECT 3, 2;
INSERT INTO test SELECT 5, 3;
INSERT INTO test SELECT 7, 6;
INSERT INTO test SELECT 10, 8;

We now execute the following statement in session A:

SELECT * FROM test WHERE y = 3 FOR UPDATE;

Let’s analyze the locking situation at this point.

  • For the primary key x (the clustered index): only the record with x = 5 is locked, with a Record Lock, because that is the primary key value of the row matched through the secondary index.

  • For the secondary index y: a Next-Key Lock is placed on the range (2, 3], and InnoDB additionally places a Gap Lock on the next range (3, 6).
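Based on this analysis, here is a sketch (assuming REPEATABLE READ and that session A’s transaction is still open) of how statements issued from a second session would behave:

-- Session B: these statements block until session A commits
SELECT * FROM test WHERE x = 5 LOCK IN SHARE MODE;   -- x = 5 is record-locked through the clustered index
INSERT INTO test SELECT 4, 2;                        -- the new index entry (y = 2, x = 4) falls inside the locked range (2, 3]
INSERT INTO test SELECT 6, 5;                        -- y = 5 falls inside the locked gap (3, 6)

-- Session B: these statements do not block
SELECT * FROM test WHERE x = 3 FOR UPDATE;           -- x = 3 is not locked
INSERT INTO test SELECT 12, 9;                       -- y = 9 lies outside the locked ranges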

The user can explicitly disable Gap Locks in the following two ways (see the example below):

  • Set the transaction isolation level to READ COMMITTED.
  • Set the parameter innodb_locks_unsafe_for_binlog to 1.
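For example (note that innodb_locks_unsafe_for_binlog was deprecated and has been removed in MySQL 8.0, so the isolation-level approach is the one that still applies):

-- Disable gap locking for the current session by dropping to READ COMMITTED
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- The second option is a read-only server setting, configured at startup, e.g. in my.cnf:
-- innodb_locks_unsafe_for_binlog = 1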

Gap Locks exist to prevent multiple transactions from inserting records into the same range; they are designed to solve the Phantom Problem. REPEATABLE READ is MySQL’s default isolation level, and at this level InnoDB uses Next-Key Locking to solve the phantom read problem.

Phantom read: within the same transaction, executing the same SQL statement twice in a row may return different results; the second execution may return rows that did not exist before, i.e. rows inserted by another transaction between the first and second execution.
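A sketch of how Next-Key Locking prevents the phantom under REPEATABLE READ, reusing the test table from above:

-- Session A (REPEATABLE READ)
START TRANSACTION;
SELECT * FROM test WHERE x > 7 FOR UPDATE;   -- next-key locks the range (7, +∞)

-- Session B
INSERT INTO test SELECT 8, 9;                -- blocks: x = 8 falls inside the locked range

-- Session A
SELECT * FROM test WHERE x > 7 FOR UPDATE;   -- same result as the first read: no phantom row
COMMIT;                                      -- only now can session B's insert proceed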

6 Problems caused by locking

6.1 Dirty reads

Dirty read: one transaction can read data that another transaction has modified but not yet committed. REPEATABLE READ is MySQL’s default isolation level, and dirty reads do not occur there. The precondition for a dirty read is that the transaction isolation level is READ UNCOMMITTED, so if dirty reads occur, that isolation level is likely the cause.

Let’s look at it through an example.

As the example above shows, when the transaction isolation level is READ UNCOMMITTED, session B can query data that session A has not yet committed.
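A sketch of that scenario, assuming the lock_test table from earlier currently holds the value 1 (column name and values are assumptions):

-- Session B (the reader)
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- Session A (the writer)
START TRANSACTION;
UPDATE lock_test SET id = 2 WHERE id = 1;    -- modified but NOT committed

-- Session B
SELECT * FROM lock_test;                     -- returns 2: a dirty read of uncommitted data

-- Session A
ROLLBACK;                                    -- the value session B just read never officially existed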

6.2 Non-repeatable reads

Non-repeatable read: within one transaction, the same data is read multiple times but the results differ. This violates the consistency requirement of database transactions. It differs from a dirty read in that a dirty read reads uncommitted data, whereas a non-repeatable read reads data that another transaction has already committed.

Let’s look at the following example to see how this happens.

As the example above shows, session B inserts data while session A’s transaction is still open, so session A’s two identical queries return different results; this is the non-repeatable read problem.

Note that what a non-repeatable read reads is committed data, so in some scenarios this is considered acceptable.

If we need to avoid the non-repeatable read problem, we can rely on the Next-Key Lock algorithm (by setting the transaction isolation level to REPEATABLE READ). In MySQL’s documentation, the non-repeatable read problem is also referred to as the Phantom Problem.
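A sketch of the non-repeatable read under READ COMMITTED, and how REPEATABLE READ avoids it, assuming the lock_test table currently contains a single row:

-- Session A
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT * FROM lock_test;          -- first read: one row

-- Session B
INSERT INTO lock_test VALUES (2);
COMMIT;

-- Session A, same transaction
SELECT * FROM lock_test;          -- second read: two rows, so the two reads differ

-- Under REPEATABLE READ, the second SELECT would match the first,
-- because both reads use the snapshot taken at the first read.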

6.3 Lost updates

Lost update: the update made by one transaction is overwritten by the update of another transaction, leaving the data inconsistent. A lost update in the database sense does not occur at any isolation level of current databases (writes are serialized by locks), but a logical lost update can still occur at the application level in a multi-user environment.

To avoid the lost-update problem, we simply need to serialize the concurrent read-modify-write operations instead of letting them run in parallel.

The usual way to do this is to use a SELECT … FOR UPDATE statement, which places an exclusive X lock on the rows being operated on.
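A sketch of the usual pattern, assuming an accounts table with columns id and balance (the table and columns are assumptions for illustration):

START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;      -- X lock: a concurrent session running the same
                                                           -- statement waits here instead of reading a stale balance
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
COMMIT;                                                    -- the waiting session then reads the updated balance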

6.4 Summary

Let’s finish with a summary, mainly a comparison of the problems that can occur at the different transaction isolation levels, to make things clearer:

  • READ UNCOMMITTED: dirty reads, non-repeatable reads, and phantom reads can all occur.
  • READ COMMITTED: dirty reads do not occur; non-repeatable reads and phantom reads can occur.
  • REPEATABLE READ: dirty reads and non-repeatable reads do not occur; phantom reads are possible in the SQL standard, but InnoDB prevents them with Next-Key Locking.
  • SERIALIZABLE: none of these problems occur.

If anything in this article is incorrect, please point it out. If you prefer reading on WeChat, you can also follow my WeChat official account, Good good study Java, for quality learning resources.
