What is a global lock? What is it used for, and how do you use it?

A global lock makes the entire database instance read-only. MySQL provides one via FLUSH TABLES WITH READ LOCK (FTWRL). Its main use is the logical backup process; for libraries that use only the InnoDB engine, mysqldump's --single-transaction option is preferable, since it reads from a consistent snapshot instead of blocking all writes.
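A minimal sketch of both approaches (the statements and the --single-transaction option are as documented in the MySQL reference manual; the database name db is an assumption):

```sql
-- Global read lock: the whole instance becomes read-only.
FLUSH TABLES WITH READ LOCK;
-- ... run the logical backup here ...
UNLOCK TABLES;   -- release the global read lock

-- For an all-InnoDB library, run instead (from the shell):
--   mysqldump --single-transaction db > backup.sql
-- which backs up from a consistent snapshot without blocking writes.
```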

What is a table lock? What is it used for, and how do you use it?

Table locks are typically used when the storage engine does not support row locks. The syntax is lock tables … read/write. A read lock does not prevent other threads from reading the table, but it does prevent them from writing it; with a write lock, other threads can neither read nor write the table.
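A short sketch, assuming tables t1 and t2 exist:

```sql
LOCK TABLES t1 READ, t2 WRITE;
-- t1: this session and other sessions may read it; no session may
--     write it (the locking session itself cannot write t1 either).
-- t2: only this session may read or write it; other sessions block.
UNLOCK TABLES;
```

Note that while LOCK TABLES is held, the locking session may only access the tables it locked, which is another way table locks restrict concurrency compared with row locks.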

What is a row lock? What is it used for, and how do you use it?

A row lock is a lock on a row in a table.

For example, if transaction A updates a row and transaction B then tries to update the same row, B must wait until transaction A completes.

In InnoDB, a row lock is acquired at the moment it is first needed, but it is not released as soon as it is no longer needed; it is held until the transaction ends. This is the two-phase locking protocol.



Suppose transaction A starts and updates two rows, and transaction B then issues an UPDATE on one of those rows: B's UPDATE statement is blocked until transaction A commits, because the row locks A holds on the two records are released only at commit time. If row locks were released as soon as each statement finished executing, rather than at the end of the transaction, transaction B would not be blocked.
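That scenario can be sketched as two sessions, assuming a table t with rows id = 1 and id = 2:

```sql
-- Session A
BEGIN;
UPDATE t SET v = v + 1 WHERE id = 1;   -- acquires the row lock on id = 1
UPDATE t SET v = v + 1 WHERE id = 2;   -- acquires the row lock on id = 2

-- Session B (meanwhile)
UPDATE t SET v = v + 1 WHERE id = 1;   -- blocks: A still holds this lock

-- Session A
COMMIT;   -- both row locks are released here; B's UPDATE now proceeds
```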

Therefore, if your transaction needs to lock several rows, place the statements whose locks are most likely to cause conflicts and hurt concurrency as late in the transaction as possible, so those locks are held for the shortest time.

However, reordering statements does not completely avoid deadlocks.

Deadlocks and deadlock detection

In a concurrent system, threads can form a circular resource dependency: every thread involved is waiting for another to release a resource, so all of them wait forever. This state is called a deadlock.

There are two ways to deal with it. One is simply to wait until the lock wait times out (controlled by innodb_lock_wait_timeout). The other is deadlock detection: when a deadlock is found, one transaction in the cycle is actively rolled back so the other transactions can proceed. Setting innodb_deadlock_detect to ON (the default) enables this logic.

Deadlock detection is expensive. Each newly blocked thread must check whether its wait creates a cycle, which costs on the order of O(N) in the number of threads already waiting, so N concurrent threads can cost O(N^2) overall, roughly O(M*N^2) across M contended resources. With 1,000 threads updating the same row concurrently, detection operations are on the order of one million. The result may well be that there is no deadlock at all, yet a great deal of CPU is consumed along the way.

When does deadlock detection occur?

Is every transaction checked before it executes? No: a transaction triggers detection only when the row it wants to access already has a lock on it. A consistent (snapshot) read takes no locks, so it never needs deadlock detection.

Nor does every detection scan all transactions. Suppose the wait state at some moment is: B is waiting for A, and D is waiting for C. Now E arrives and finds that it must wait for D, so E only checks whether a deadlock would form among E, D, and C; the check does not touch B or A.

Deadlock detection is really a cycle-detection algorithm on the wait-for graph: when a new waiting transaction is added, it only has to determine whether that new edge closes a ring. Conceptually this is the same as detecting a cycle in a linked list, as in the classic LeetCode problem.
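The ring check can be sketched as a walk along the wait-for chain; this is an illustrative model of the idea, not InnoDB's actual implementation, and all names are made up:

```python
def would_deadlock(waits_for, waiter, holder):
    """Return True if adding the edge waiter -> holder closes a cycle.

    waits_for maps each blocked transaction to the one it waits on.
    Only the chain starting from `holder` is walked, so unrelated
    waiters (like B -> A in the article's example) are never visited.
    """
    node = holder
    while node is not None:
        if node == waiter:          # walked back to the new waiter: a ring
            return True
        node = waits_for.get(node)  # follow the chain, e.g. D -> C -> ...
    return False

# The article's example: B is waiting for A, D is waiting for C.
graph = {"B": "A", "D": "C"}
# E arrives and must wait for D: only D and C are inspected, not A or B.
print(would_deadlock(graph, "E", "D"))   # False: no ring, E simply waits
graph["E"] = "D"
# If C were now to wait for E, that would close the ring C -> E -> D -> C.
print(would_deadlock(graph, "C", "E"))   # True: a deadlock would form
```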

How to avoid the high cost of deadlock detection

To avoid this problem, there are generally three ways:

1. If you are sure the workload cannot deadlock, temporarily turn off deadlock detection; if a deadlock does occur anyway, the lock wait simply times out (50 s by default, via innodb_lock_wait_timeout).
2. Control concurrency: queue updates to the same row (for example, in middleware) before they enter the engine, so only a limited number of threads contend for the row at once.
3. Reduce lock conflicts on a hot row by splitting the single row into multiple logical rows.
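The third idea can be sketched as follows; this is an in-memory model of the technique, not anything InnoDB does internally, and the slot count of 10 is an arbitrary assumption:

```python
import random

NUM_SLOTS = 10
counter_rows = [0] * NUM_SLOTS   # stands in for 10 rows of one logical counter

def add(amount):
    # Each update picks a random slot, so concurrent updates usually
    # hit different rows and their row locks rarely collide.
    slot = random.randrange(NUM_SLOTS)
    counter_rows[slot] += amount

def total():
    # Reading the counter now has to sum all the logical rows.
    return sum(counter_rows)

for _ in range(1000):
    add(1)
print(total())   # 1000
```

The trade-off is that reads become more expensive (a sum over all slots) and some operations, such as decrementing without going negative, need extra care.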

Practice

If you want to delete the first 10,000 rows of a table, there are three ways to do it:

1. delete from T limit 10000; in a single statement.
2. In one connection, loop delete from T limit 500; 20 times.
3. On 20 connections, execute delete from T limit 500; simultaneously.

With scheme 1, the transaction is relatively long, so its locks are held for a long time and other clients may wait a long time for the resources. Scheme 2 executes serially: the one long transaction is split into twenty short ones, each holding its locks only briefly, so other clients wait for the corresponding resources for a much shorter time; each execution also touches a different slice of the rows, which improves overall concurrency. Scheme 3 makes 20 connections compete for the same rows at the same time, creating artificial lock conflicts and lock waits.
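Scheme 2 can be sketched with the standard library's sqlite3 module as a stand-in for MySQL (sqlite lacks DELETE ... LIMIT by default, so a primary-key subquery emulates each 500-row batch; table name and sizes follow the article):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T (id INTEGER PRIMARY KEY, v INTEGER)")
conn.executemany("INSERT INTO T (v) VALUES (?)", [(i,) for i in range(10000)])
conn.commit()

for _ in range(20):
    # Each iteration is a short transaction, so (on a real server)
    # row locks are held only briefly and other clients wait less.
    conn.execute("DELETE FROM T WHERE id IN (SELECT id FROM T LIMIT 500)")
    conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM T").fetchone()[0]
print(remaining)   # 0
```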