I haven't had much time to write lately; it's hard to keep sharing when I don't know what to share. But people keep asking me about this topic, so I'm setting aside some time to write it up. And this article comes with a promise: if you read it and still don't understand, that counts as my loss, haha!

As we all know, reads in MySQL can be divided into locking reads and non-locking reads, depending on whether traditional locks are involved. Locking reads rely on the familiar pile of lock algorithms: record (row) locks, gap locks, and next-key locks. Together they guarantee that when a transaction reads one or more rows, it cannot see uncommitted changes from other transactions (no dirty reads), cannot get different contents for the same row when reading it twice within the same transaction (repeatable reads), and cannot get a different number of rows between two identical reads (no phantom reads). At the common RR isolation level, at least one of these lock types (row lock, gap lock, next-key lock) has to be used in a locking read to avoid dirty reads, non-repeatable reads, and phantom reads. The four isolation levels, from loosest to strictest: RU (read uncommitted), RC (read committed), RR (repeatable read), S (serializable).

RU (read uncommitted; the full English name is left as an exercise for the reader, haha): obviously the level with no integrity at all. It's as if I'm doing something private in my room: I haven't finished and haven't said anyone may come in, but the door decides on its own that each little step I complete counts as "done", opens up, and lets you in for a look. It's not, as some people imagine, that the door is simply never locked so you can barge in while I'm halfway through a single action; it's not that unprincipled. Even at the lowest level, RU, MySQL will not let you read a half-written row. There is still a lock, just not one we control: it is released as soon as each statement finishes executing. So even though I haven't committed my transaction yet, at this level other transactions will still see things they shouldn't. That is where dirty reads come from.
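To make the dirty read concrete, here is a minimal two-session sketch; the table account(id, balance) and the amounts are my own assumptions, purely for illustration:

-- session 1: read uncommitted
set session transaction isolation level read uncommitted;
begin;
select balance from account where id = 1;   -- suppose it returns 100

-- session 2: changes the row but has NOT committed yet
begin;
update account set balance = 50 where id = 1;

-- session 1 again: the uncommitted 50 is already visible, a dirty read
select balance from account where id = 1;   -- returns 50, even though session 2 may still roll back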

RC (read committed): obviously, you can't come in until I finish a piece of work and tell you it's done. That way you never see anything half-finished (no dirty reads). But: you come in and see a brand-new roll of toilet paper; then I go in and use a bit; the next time you come in, half the roll is gone, and you wonder why the same roll looks different on two visits. In daily life this is completely normal, and honestly I don't think there's anything wrong with it. In the database world, though, it becomes a problem. Say you want to buy a roll of paper for 100 yuan (fine, I admit that's a little expensive): the code first checks your card and finds 100 yuan, and right then your girlfriend pays 50 yuan by WeChat (which is obviously bound to your card). You then take the paper and try to deduct the money: 50 - 100 = -50, deduction failed. What kind of fairy-tale logic is that? The fix is obvious: while I'm in the middle of checking the balance and deducting, nobody else gets to touch it; treat check-then-deduct as one complete unit. If every operation were instantaneous, her 50 would land either entirely before mine or entirely after it, never in between, and the embarrassment couldn't happen. It's exactly because our systems can't be that fast that we have to name and deal with this situation: the non-repeatable read.
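The card story above, sketched as two sessions at READ COMMITTED (same assumed account table, made-up amounts):

-- session 1: you, checking before buying the 100-yuan roll
set session transaction isolation level read committed;
begin;
select balance from account where id = 1;                  -- sees 100, decides to pay

-- session 2: the WeChat payment bound to the same card
begin;
update account set balance = balance - 50 where id = 1;
commit;                                                    -- balance is now 50

-- session 1 again: the committed change shows up mid-transaction
select balance from account where id = 1;                  -- now 50: a non-repeatable read
update account set balance = balance - 100 where id = 1;   -- 50 - 100 = -50, the deduction should fail
rollback;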

RR (repeatable read): to solve the non-repeatable read above (who told us we couldn't be fast enough?), the idea is obvious: the moment I start dealing with the money, I lock the card away and let nobody else touch it until I'm done.
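The same two sessions at REPEATABLE READ, plus the "lock the card away" version as a locking read (still the assumed account table):

-- session 1
set session transaction isolation level repeatable read;
begin;
select balance from account where id = 1;              -- 100
select balance from account where id = 1 for update;   -- locking read: holds the row until commit

-- session 2: now has to wait for session 1's lock
update account set balance = balance - 50 where id = 1;

-- session 1: plain reads keep returning the value from the first snapshot
select balance from account where id = 1;              -- still 100
commit;                                                -- only now can session 2 proceed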

S (serializable): obviously, lock the door, make everyone queue up, and let nobody else in until my business is completely finished.
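And SERIALIZABLE, where MySQL quietly turns even plain SELECTs into shared-lock reads, so the queueing is enforced for you (same assumed table):

-- session 1
set session transaction isolation level serializable;
begin;
select balance from account where id = 1;   -- implicitly a shared-lock read at this level

-- session 2: blocks until session 1 commits or rolls back
update account set balance = balance - 50 where id = 1;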

Having reviewed locking reads and the four isolation levels, let's move on to MVCC and the undo log. MVCC, multi-version concurrency control, uses the undo log to achieve the same isolation guarantees without traditional locks. Of course, at the RU (read uncommitted) level, every change made by other transactions is directly visible, so there is no need to keep multiple versions; only the latest version is ever used. Likewise at the S (serializable) level everything reads and writes in a queue, one at a time, so again there is no need to keep old versions; the latest data is always what you get.

Here's the real point: there are plenty of good MySQL resources out there, so why am I still writing about the undo log and MVCC? Isn't there enough knowledge, enough chaos, already? Because while learning this material I ran into a lot of misleading explanations. I have yet to see a blog post or online course that describes these two things accurately and clearly, and I suspect most readers, unless they stop and think hard for a while, will keep living in their own (wrong) mental model, far from the truth. Below is a list of the common mistakes and inaccuracies floating around the web; I'll come back to each of them later.

1) Three hidden columns are said to be added to each row: DB_ROW_ID, DATA_TRX_ID and DATA_ROLL_PTR. DATA_TRX_ID is described as the transaction version of the current row, which is fine; but DATA_ROLL_PTR is described as the transaction "delete version number". Nani?! PTR is obviously short for pointer. I can't deny that this kind of explanation makes you feel like you understand, but try answering an interview question with it, unless the interviewer doesn't understand it either (bless you).

2) MVCC uses a read view, the ReadView, to help determine which transaction versions are visible, and it is usually said to consist of: the set of currently active transaction IDs, mIds; the minimum transaction ID in mIds; and the maximum transaction ID in mIds. 99% of the material online describes it this way. I can only assume those authors either haven't understood it or are copying one another. The correct explanation comes later.

3) The undo log of an INSERT is treated differently from the undo log of DELETE and UPDATE. A record created by INSERT is visible only to the inserting transaction and to no one else (transaction isolation requires this), so its undo log can be deleted as soon as the transaction commits; the undo logs for DELETE and UPDATE have to wait for the purge thread to remove them when conditions allow. Most readers just memorize that sentence by rote. Can you actually tell, from the sentence alone, why the difference exists? Unfortunately I've never seen a book explain it; probably I just haven't read enough books.

4) Everyone says redo is a physical log (mostly true, let's not quibble) and undo is a logical log: for your insert, the undo log records a delete; for your update, it records the reverse update. Two questions here: how exactly should "physical log" and "logical log" be understood, and what does "records the opposite operation" really mean? Insert versus delete is easy enough to accept, and many people picture the update case as "x = x - 10 reversed into x = x + 10: you subtracted 10, so I add 10 back". I'm sure plenty of people understand it exactly that way, and that understanding is a big trap.

5) Can MVCC solve phantom reads? If it can, why does MySQL get to brag that its RR level solves phantom reads; don't other databases have MVCC too? If it can't, how does MySQL dare to brag at all? This one really sorts people out. More on it later.

We all know that MySQL organizes a whole table as a B+Tree keyed on the primary key ID, stored in the .ibd file, with the row data kept in the leaf nodes and each leaf linked to the next by a pointer. And yes, before anyone argues, this is specific to the InnoDB engine. The basics of indexing, and the various lock algorithms mentioned above, you can read up on yourself, in someone else's blog, or maybe in another article I'll have to write.

Back to the point, what the hell is MVCC?

1) The official-style explanation: when the database is accessed concurrently (reads and writes), keep multiple versions of the data being modified inside a transaction, so that read operations are not blocked by write operations and that class of concurrency problem is avoided.

2) The no-integrity explanation, sticking with the toilet paper example: to make sure each person who comes in sees the same roll every time, whenever someone uses roll A, the used roll keeps its version label A (visible to that person) and goes into the cupboard, and an identical roll is placed in the holder labelled as the current version. Then we make a rule: everyone who comes in only ever gets shown the same roll they saw before. Details to follow.

What is the undo log used for?

1) Rolling back transactions

2) Implementing MVCC

What types of undo log do we care about?

1) Insert undo log

2) Update undo log

InnoDB MVCC implementation principle

Two hidden columns, DATA_TRX_ID and DATA_ROLL_PTR, are added to the data table to implement MVCC


After transaction A updates a row's value x, the row has a new version and an old version. Assume the transaction that originally inserted the row had ID 100 and transaction A's ID is 200. The process is as follows:

1) Transaction A locks the row it is about to modify (say, the row matching select * from user where id = 1) with an exclusive row lock

2) Copy the original value of the row into the undo log

3) Modify the row's value, update DATA_TRX_ID to transaction A's ID, and point DATA_ROLL_PTR at the old version just copied into the undo log. If several transactions modify the row one after another, more undo log records are generated and chained together through DATA_ROLL_PTR

The undo log above forms a linked list: if multiple transactions modify the same row, the old versions are stored as a chain in transaction order. Strictly speaking, the order of old versions in the list doesn't matter as long as lookups are easy, but I prefer putting the latest old version at the head after every change. That way, following the pointer recursively, you encounter newer data first and older data later, checking each version one by one to decide whether it is the one your transaction is allowed to see.

So now the core question: when the current transaction reads a row, how does it decide which version it should see? MySQL introduces the concept of a ReadView (a "read view") for this, which contains the following attributes:

1) mIds: the set of all transaction IDs that are active at the moment the ReadView is generated. "Active" means the transaction has started but not yet committed. Worth mentioning: transaction IDs are handed out from a counter that increments every time a transaction starts (you can peek at the currently running transactions with the small query shown after this list)

2) min_trx_id: the smallest transaction ID in mIds

3) max_trx_id: the largest transaction ID that had been allocated system-wide when the ReadView was generated (note: a global high-water mark, not the maximum of mIds; more on this below)

4) creator_trx_id: the ID of the transaction that created this ReadView.
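As a side note, if you want a feel for what "currently active transactions" means, you can peek at InnoDB's own list of open transactions. This is only an observation aid, not how the ReadView is actually built:

-- every row is an open, not-yet-committed transaction; roughly the population
-- from which a freshly generated ReadView would take its mIds
select trx_id, trx_state, trx_started
from information_schema.innodb_trx;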

With these four attributes, what rules should be used to determine which version of data can be read by the current transaction?

1) If the DATA_TRX_ID of the version being accessed is less than min_trx_id (the minimum of mIds), the transaction that produced that version had already committed before the ReadView was generated, so the version is visible to the current transaction (a version written by the current transaction itself, DATA_TRX_ID equal to creator_trx_id, is naturally visible too).

2) If the version's DATA_TRX_ID is greater than max_trx_id, the transaction that produced that version started after the ReadView was generated, so the version cannot be seen by the current transaction. Note that this maximum is not the maximum of mIds. Transaction IDs do increase globally, but a transaction with a larger ID is not necessarily committed later than one with a smaller ID: the transaction that starts first may well finish last, and vice versa. Suppose the version was written by transaction 200, and 200 is not in mIds because it has already committed; the maximum of mIds could then be smaller than 200. If rule 2 used that maximum, data that ought to be visible would be judged invisible, simply because we failed to capture the true largest transaction ID at the moment the ReadView was generated, and we could no longer conclude that the version was produced after the ReadView.

3) If the DATA_TRX_ID of the version being accessed lies between the minimum and maximum (inclusive), check whether it appears in the mIds list. If it does, the transaction that produced this version was still active when the ReadView was created, so the version cannot be accessed. If it does not, that transaction had already committed by the time the ReadView was created, and the version can be accessed.

To summarize: the ReadView uses max_trx_id, min_trx_id and the active-transaction list mIds to split the transaction ID found on a version into three cases. If it is smaller than min_trx_id, it was clearly started before every currently active transaction and is not in the active list, so it must belong to a committed transaction and the version is visible. If it is larger than max_trx_id, it clearly started after all the transactions active at ReadView time and cannot appear in the active list either, so the version is not visible. If it falls between the two, there are two possibilities, because being in that range does not mean it is still active (after all, the transaction that starts first does not necessarily finish first; transactions differ in size and duration): if the ID is still in mIds, the version was written by an uncommitted transaction and cannot be read; if it is not, that transaction has already committed and the version can be read. When a transaction wants to read a row, it first applies these rules to the newest version, the row record itself. If that version is visible, it is returned directly. If not, it follows the DATA_ROLL_PTR pointer into the undo log and checks each older version in turn until it finds one it is allowed to see; if none qualifies, it returns nothing.

How does MVCC differ between the RC and RR isolation levels?


Suppose transaction B changes x from 10 to 20 and commits in between transaction A's two reads. At the RC level, A reads 10 the first time and 20 the second; at the RR level, A reads 10 both times. Take RC first: the two reads see two different versions, so they must be using two different ReadViews, and B's commit clearly changes the mIds list of active transactions. Once B commits it is no longer active, so it no longer appears in mIds, and its version becomes visible. That part is easy to understand. At RR, the second read simply reuses the ReadView generated for the first read, so the answer cannot change; it works much like a cache. In other words, the RC level generates a fresh ReadView for every query, and that is the whole difference. A rather general and clever design.
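The difference is easy to reproduce yourself. A minimal sketch, assuming a table user(id, x) where x starts at 10:

-- transaction A
begin;
select x from user where id = 1;   -- 10, under both RC and RR

-- transaction B
begin;
update user set x = 20 where id = 1;
commit;

-- transaction A again
select x from user where id = 1;   -- RC: 20 (fresh ReadView per query); RR: still 10 (first ReadView reused)
commit;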

By now you should have a basic picture of MVCC and the undo log. Next, as promised at the start, let's go back through the errors, inaccuracies and fuzzy statements found in all kinds of blogs, online courses and even books. For convenience (fine, to make things clear), I'll copy the questions from above and then answer them one by one.

1) (Question restated) Three hidden columns, DB_ROW_ID, DATA_TRX_ID and DATA_ROLL_PTR, are added to each row; DATA_TRX_ID represents the transaction version of the current row, which is fine, but DATA_ROLL_PTR is claimed to represent the transaction "delete version".

2) (Question restated) The ReadView used by MVCC is said to consist of: the set of currently active transaction IDs mIds, the minimum transaction ID in mIds, and the maximum transaction ID in mIds.

3) (Question restated) The insert undo log can be deleted as soon as the transaction commits, while the delete/update undo logs must wait for the purge thread; why the difference?

4) (Question restated) Redo is a physical log, undo is a logical log that records the "opposite" operation: an insert becomes a delete, x = x - 10 becomes x = x + 10.

5) (Question restated) Can MVCC solve phantom reads, and if so, why does MySQL get to brag that its RR level solves them?

For question 1): PTR is obviously short for pointer. DATA_ROLL_PTR is the rollback pointer; it points to the previous version of the row in the undo log, which is exactly how the version chain described earlier is strung together. It is not any kind of "transaction delete version number"; the easy-to-memorize explanation is simply wrong.

For question 2): the usual description says the ReadView's maximum is the largest active transaction ID, so that if the version I want to read carries a larger ID, it must have been created after the ReadView was generated. That reasoning silently assumes the transaction that starts first always commits first. Is the largest currently active transaction ID necessarily the largest transaction ID allocated so far? What if several transactions with even larger IDs had already committed by the time the ReadView was generated? They would not be in mIds, so a maximum taken from mIds would be too small, and versions written by them, which should be visible, would wrongly be treated as "created after the ReadView" and hidden. The maximum recorded in the ReadView must therefore be the system-wide largest transaction ID at the moment the ReadView is generated, not the maximum of mIds.

For question 3): why are the insert undo log and the update undo log separated, and why does the update undo log have to wait for the purge thread? Think about what an insert means. If a transaction with ID = 100 inserts a new record, then before that transaction the record simply did not exist; there is no older version to preserve. While transaction 100 is uncommitted, other transactions reading that row just get nothing. MVCC still applies, but the row itself is the only version: for any other transaction it either does not exist yet and cannot be read, or it exists and can be. Whether it "exists", at the RC and RR levels, depends only on whether the inserting transaction has committed (at RU and S there is no need for MVCC at all). You don't need to care where the data is physically read from, cache or disk, or whether it has really reached disk after the commit; what matters is that committed data can be read and uncommitted data cannot, which is exactly what the books mean by transaction isolation. So there is no need to keep redundant old versions for an insert, and its undo log, kept only in case the transaction has to roll back, can be deleted the moment the transaction commits. The update undo log is different: the same row may have been modified by transactions A, B and C in turn, and it is not the case that the old versions become useless as soon as A, B or C commits, because other transactions' ReadViews may still need them. So the purge thread has to decide later when each undo record is truly safe to delete.

For question 4): redo is, for the most part, a physical log. "Physical" means the log records what value sits at which physical position in the data files; replaying it is a matter of writing those bytes back to those pages. The undo log is a different animal: it lives in special rollback segments in the tablespace, not scattered next to each row; each row only carries a pointer (DATA_ROLL_PTR) into its own undo chain. And what it records is not a compensating statement. If the operation was an insert, the undo record is just a delete marker for that row; it doesn't even need to copy any data, and a reader who walks the chain and lands on that marker simply gets "no row". If the operation was an update, the undo record is a copy of the values before the update, and a reader overlays those old values onto the current row as needed; nothing about "a value at some physical address" is stored. If the operation was a delete, the row is copied into the undo log and the current record is merely marked deleted. That is why undo deserves to be called a logical log: it describes the reverse of the current operation at the level of rows and values, not bytes at addresses, and it stores the old values themselves rather than a reverse formula like "x = x + 10", which is exactly where the naive understanding falls into the trap.

For question 5): at first glance the question looks intimidating and it's easy to get fooled. First, what a phantom read is: say my first query finds no row with age = 12, another transaction slips one in, and my second, identical query suddenly finds it; am I drunk and seeing things? For locking reads the answer is straightforward: when I read the rows with age > 10, the gap/next-key locks lock the range around them, no other transaction can insert a row with age > 10, and the phantom is strangled at the source because the insert simply cannot happen. Now for MVCC: at the RR isolation level, when transaction A runs its first query it generates a ReadView that records the largest transaction ID allocated so far. At that moment, a transaction B that wants to insert a row can only be in one of two states. Either B is already running and has not committed (it obviously has to commit for its row to matter), in which case B's ID is in mIds and its row is invisible to A; or B has not even started yet, in which case the row it later inserts carries a transaction ID greater than the ReadView's maximum and is again invisible. Either way, by the ReadView's rules transaction A can never see the inserted row, so its repeated snapshot reads return the same set of rows. I honestly don't understand why so many people online insist that MVCC alone cannot solve phantom reads; in this information age some sources say it can, some say it can't, almost nobody gives a reason, and readers end up split into two camps. Agony! In my view MVCC solves phantom reads for snapshot reads quite naturally, and since almost every modern relational database implements MVCC, I have every reason to believe their snapshot reads avoid phantoms too (a personal guess; I haven't studied other databases in depth). When people say MySQL's RR solves phantom reads while other databases' RR does not, I think that claim only makes sense for locking reads: MySQL's RR has gap locks and other databases generally don't have that algorithm, hence the saying. If you don't believe the reasoning, open two connections, prepare the two transactions below, try both orderings of begin for A and B to cover the two cases above, and test it yourself; nothing convinces like your own experiment.
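If you want to run the two transactions below yourself, you'll need a table; the experiment doesn't spell one out, so here is an assumed definition that fits the statements used:

create table test (
    id  int primary key auto_increment,
    age int
) engine = InnoDB;

insert into test(age) values (11), (12);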

-- transaction A
begin;
select * from test where age > 10;
select * from test where age > 10;
rollback;
-- transaction B
begin;
insert into test(age) values(13);
commit;

I have also seen tests online where people add an update statement in the middle of transaction A, updating the very row the other transaction just inserted (the age = 13 row), after which the data that could not be found before suddenly shows up in the second read. Come on: that is my own transaction doing the modifying; of course I can see my own changes. If changes made inside my own transaction were invisible to me, I'd be the one suspecting the database was broken. Calling that a phantom read misses the point. If you want to see the locking side of the story instead, put for update on transaction A's select statements and run transaction B's insert in between, as sketched below.
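Here is what that looks like in practice at the RR level, using the same assumed test table: the snapshot reads agree with each other, while a locking read is a "current read" and does surface transaction B's row. If you instead put for update on the first select as well, B's insert is blocked by the gap/next-key lock and never gets in at all.

-- transaction A (repeatable read)
begin;
select * from test where age > 10;              -- snapshot read
-- (transaction B inserts age = 13 and commits here)
select * from test where age > 10;              -- same snapshot, still no age = 13
select * from test where age > 10 for update;   -- current read: the age = 13 row appears
rollback;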

If you thought the theory was the end of it, you thought too much, haha. As a professional coder, of course I have to show some code. Below I use Java, written the way I understand it, to sketch a simple model of MVCC, ReadView and the undo log. The code uses the simplest possible account/record model; its only purpose is to help fellow programmers digest everything said above. If you see the same code elsewhere, it was definitely copied from me, haha!

package com.mvcc;

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Transaction class, simplified on purpose so the principle stays visible.
 *
 * @author rongdi
 * @date 2020-07-25 20:17
 */
public class Transaction {

    /** Global transaction id counter */
    private static AtomicInteger globalTrxId = new AtomicInteger();

    /** Currently running (active, i.e. not yet committed) transactions */
    private static Map<Integer, Transaction> currRunningTrxMap = new ConcurrentHashMap<>();

    /** Current transaction id */
    private Integer currTrxId = 0;

    /** Transaction isolation level: ru, rc, rr or s */
    private String trxMode = "rr";

    /** The ReadView used by MVCC to decide visibility */
    private ReadView readView;

    /** Start the transaction */
    public void begin() {
        // take the next id from the global transaction counter
        currTrxId = globalTrxId.incrementAndGet();
        // register this transaction as active
        currRunningTrxMap.put(currTrxId, this);
        // build the ReadView used for visibility checks
        updateReadView();
    }

    /** (Re)generate the ReadView; at RC this happens per query, at RR only once */
    public void updateReadView() {
        readView = new ReadView(currTrxId);
        // largest transaction id allocated so far (NOT the max of mIds!)
        readView.setMaxTrxId(globalTrxId.get());
        // snapshot of the currently active transaction ids
        List<Integer> mIds = new ArrayList<>(currRunningTrxMap.keySet());
        Collections.sort(mIds);
        readView.setmIds(mIds);
        readView.setMinTrxId(mIds.isEmpty() ? 0 : mIds.get(0));
        readView.setCurrTrxId(currTrxId);
    }

    /** Commit: simply drop the transaction from the active map */
    public void commit() {
        currRunningTrxMap.remove(currTrxId);
    }

    public static AtomicInteger getGlobalTrxId() { return globalTrxId; }
    public static void setGlobalTrxId(AtomicInteger globalTrxId) { Transaction.globalTrxId = globalTrxId; }
    public static Map<Integer, Transaction> getCurrRunningTrxMap() { return currRunningTrxMap; }
    public static void setCurrRunningTrxMap(Map<Integer, Transaction> currRunningTrxMap) { Transaction.currRunningTrxMap = currRunningTrxMap; }
    public Integer getCurrTrxId() { return currTrxId; }
    public void setCurrTrxId(Integer currTrxId) { this.currTrxId = currTrxId; }
    public String getTrxMode() { return trxMode; }
    public void setTrxMode(String trxMode) { this.trxMode = trxMode; }
    public ReadView getReadView() { return readView; }
}
package com.mvcc;

import java.util.ArrayList;
import java.util.List;

/**
 * ReadView: the structure MVCC uses to decide which version a transaction may see.
 *
 * @author rongdi
 * @date 2020-07-25 20:31
 */
public class ReadView {

    /** Active transaction ids at the moment this ReadView was generated */
    private List<Integer> mIds = new ArrayList<>();

    /** Smallest id in mIds */
    private Integer minTrxId;

    /** Largest transaction id allocated system-wide when this ReadView was generated */
    private Integer maxTrxId;

    /** The transaction this ReadView belongs to */
    private Integer currTrxId;

    public ReadView(Integer currTrxId) {
        this.currTrxId = currTrxId;
    }

    /** Read the visible version of a row: latest version first, then walk the undo chain */
    public Data read(Data data) {
        // if the latest version is visible, return it directly
        if (canRead(data.getDataTrxId())) {
            return data;
        }
        // otherwise follow the undo log chain from newest to oldest
        UndoLog undoLog = data.getNextUndoLog();
        while (undoLog != null) {
            if (canRead(undoLog.getTrxId())) {
                return merge(data, undoLog);
            }
            undoLog = undoLog.getNext();
        }
        // no visible version at all
        return null;
    }

    /** Merge the latest row with the undo record of the visible version */
    private Data merge(Data data, UndoLog undoLog) {
        if (undoLog == null) {
            return data;
        }
        if ("update".equalsIgnoreCase(undoLog.getOperType())) {
            // roll the value back to what it was before the update
            data.setValue(undoLog.getValue());
            return data;
        } else if ("add".equalsIgnoreCase(undoLog.getOperType())) {
            // the row was deleted later, but in this version it still existed
            data.setId(undoLog.getRecordId());
            data.setValue(undoLog.getValue());
            return data;
        } else if ("del".equalsIgnoreCase(undoLog.getOperType())) {
            // this version says the row did not exist yet
            return null;
        } else {
            return data;
        }
    }

    private boolean canRead(Integer dataTrxId) {
        // 1. written by the current transaction itself, or committed before this ReadView: visible
        if (dataTrxId == null || dataTrxId.equals(currTrxId) || dataTrxId < minTrxId) {
            return true;
        }
        // 2. written by a transaction started after this ReadView: not visible
        if (dataTrxId > maxTrxId) {
            return false;
        }
        // 3. between min and max: visible only if the writer is no longer active
        if (dataTrxId >= minTrxId && dataTrxId <= maxTrxId) {
            return !mIds.contains(dataTrxId);
        }
        return false;
    }

    public List<Integer> getmIds() { return mIds; }
    public void setmIds(List<Integer> mIds) { this.mIds = mIds; }
    public Integer getMinTrxId() { return minTrxId; }
    public void setMinTrxId(Integer minTrxId) { this.minTrxId = minTrxId; }
    public Integer getMaxTrxId() { return maxTrxId; }
    public void setMaxTrxId(Integer maxTrxId) { this.maxTrxId = maxTrxId; }
    public Integer getCurrTrxId() { return currTrxId; }
    public void setCurrTrxId(Integer currTrxId) { this.currTrxId = currTrxId; }
}
package com.mvcc;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/**
 * Data: one row of the "table", plus the head of its undo log chain.
 *
 * @author rongdi
 * @date 2020-07-25 19:24
 */
public class Data {

    /** All current rows, keyed by record id (our "table") */
    private static Map<Integer, Data> dataMap = new ConcurrentHashMap<>();

    /** Record id */
    private Integer id;

    /** Record value */
    private String value;

    /** Transaction id that produced this version (DATA_TRX_ID) */
    private Integer dataTrxId;

    /** Head of the undo log chain (DATA_ROLL_PTR) */
    private UndoLog nextUndoLog;

    /** Delete marker */
    private boolean isDelete;

    public Data(Integer dataTrxId) {
        this.dataTrxId = dataTrxId;
    }

    /** Update a row: keep the old value in an undo record, then overwrite the current version */
    public Integer update(Integer id, String value) {
        Data oldData = dataMap.get(id);
        this.id = id;
        this.value = value;
        // for rollback/consistency: the undo record holds the value before the update
        UndoLog undoLog = new UndoLog(id, oldData.getValue(), oldData.getDataTrxId(), "update");
        // link the old chain behind the new undo record, newest first
        undoLog.setNext(oldData.getNextUndoLog());
        this.nextUndoLog = undoLog;
        dataMap.put(id, this);
        return id;
    }

    /** Delete a row: the undo record must be able to "add" it back */
    public void delete(Integer id) {
        Data oldData = dataMap.get(id);
        this.id = id;
        // mark the current version as deleted
        this.setDelete(true);
        // the row existed before, so the reverse operation is an "add" of the old value
        UndoLog undoLog = new UndoLog(id, oldData.getValue(), oldData.getDataTrxId(), "add");
        undoLog.setNext(oldData.getNextUndoLog());
        this.nextUndoLog = undoLog;
        dataMap.put(id, this);
    }

    /** Insert a row: before this the row did not exist, so the undo record is just a delete marker */
    public void insert(Integer id, String value) {
        this.id = id;
        this.value = value;
        // no old data to copy; the reverse of an insert is "this row does not exist"
        UndoLog undoLog = new UndoLog(id, null, null, "del");
        this.nextUndoLog = undoLog;
        dataMap.put(id, this);
    }

    /** Read a row through the current transaction's ReadView */
    public Data select(Integer id) {
        // in this toy model, dataTrxId of "this" is the reading transaction's id
        Transaction currTrx = Transaction.getCurrRunningTrxMap().get(this.getDataTrxId());
        String trxMode = currTrx.getTrxMode();
        // RC regenerates the ReadView on every query; RR keeps the one made at begin()
        if ("rc".equalsIgnoreCase(trxMode)) {
            currTrx.updateReadView();
        }
        ReadView readView = currTrx.getReadView();
        Data data = Data.getDataMap().get(id);
        // let the ReadView pick the visible version
        return readView.read(data);
    }

    public static Map<Integer, Data> getDataMap() { return dataMap; }
    public static void setDataMap(Map<Integer, Data> dataMap) { Data.dataMap = dataMap; }
    public Integer getId() { return id; }
    public void setId(Integer id) { this.id = id; }
    public String getValue() { return value; }
    public void setValue(String value) { this.value = value; }
    public Integer getDataTrxId() { return dataTrxId; }
    public void setDataTrxId(Integer dataTrxId) { this.dataTrxId = dataTrxId; }
    public UndoLog getNextUndoLog() { return nextUndoLog; }
    public void setNextUndoLog(UndoLog nextUndoLog) { this.nextUndoLog = nextUndoLog; }
    public boolean isDelete() { return isDelete; }
    public void setDelete(boolean delete) { isDelete = delete; }

    @Override
    public String toString() {
        return "Data{" + "id=" + id + ", value='" + value + '\'' + '}';
    }
}
package com.mvcc;

/**
 * UndoLog: one node of a row's version chain.
 *
 * @author rongdi
 * @date 2020-07-25 19:52
 */
public class UndoLog {

    /** Previous node in the chain (unused in this demo, kept for completeness) */
    private UndoLog pre;

    /** Next (older) undo record */
    private UndoLog next;

    /** Id of the record this undo entry belongs to */
    private Integer recordId;

    /** Value before the change */
    private String value;

    /** Transaction id that produced the version kept here */
    private Integer trxId;

    /** Reverse operation type: update, add or del */
    private String operType;

    public UndoLog(Integer recordId, String value, Integer trxId, String operType) {
        this.recordId = recordId;
        this.value = value;
        this.trxId = trxId;
        this.operType = operType;
    }

    public UndoLog getPre() { return pre; }
    public void setPre(UndoLog pre) { this.pre = pre; }
    public UndoLog getNext() { return next; }
    public void setNext(UndoLog next) { this.next = next; }
    public Integer getRecordId() { return recordId; }
    public void setRecordId(Integer recordId) { this.recordId = recordId; }
    public String getValue() { return value; }
    public void setValue(String value) { this.value = value; }
    public Integer getTrxId() { return trxId; }
    public void setTrxId(Integer trxId) { this.trxId = trxId; }
    public String getOperType() { return operType; }
    public void setOperType(String operType) { this.operType = operType; }
}

There are four classes in total. Interested readers can write a test class, start a few threads (one thread simulating one transaction), sprinkle in some sleeps, and watch it run. Again, this code exists only to give you a feel for what was described above; it is not how MySQL really implements things, just a simple model that I think captures the idea. Looking back over the article, I admit I still don't know much about layout and my artistic ability is limited; I only hope the problem itself is now clear, and I ask for your forgiveness on the rest. Well, see you next time!