
Over a recent stretch of development work I collected a few Mybatis techniques and pitfalls. Here is a summary.

Use an Interceptor to monitor interface usage

Recently, as part of a project optimization, we needed to take several tables offline. To remove the dependency on these tables safely, the offline process was split into two steps: first remove the query dependency on the tables, then stop the write operations on them.

These tables are accessed through Mybatis in many methods, some of which sit on the core path of the business system. A slight mistake could block the core path and cause a P-level incident.

After finishing the two steps above, in order to monitor the remaining usage of these tables, we added log points for every Mapper method access on each table, then built monitoring charts from the log data to visualize the access to each method. The Mybatis Interceptor is used here.

import java.util.Arrays;
import java.util.Properties;

import org.apache.ibatis.cache.CacheKey;
import org.apache.ibatis.executor.Executor;
import org.apache.ibatis.mapping.BoundSql;
import org.apache.ibatis.mapping.MappedStatement;
import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.plugin.Intercepts;
import org.apache.ibatis.plugin.Invocation;
import org.apache.ibatis.plugin.Plugin;
import org.apache.ibatis.plugin.Signature;
import org.apache.ibatis.session.ResultHandler;
import org.apache.ibatis.session.RowBounds;
import org.springframework.stereotype.Component;

@Component
@Intercepts(value = {
        @Signature(type = Executor.class, method = "update",
                args = {MappedStatement.class, Object.class}),
        @Signature(type = Executor.class, method = "query",
                args = {MappedStatement.class, Object.class, RowBounds.class,
                        ResultHandler.class, CacheKey.class, BoundSql.class}),
        @Signature(type = Executor.class, method = "query",
                args = {MappedStatement.class, Object.class, RowBounds.class,
                        ResultHandler.class})
})
public class MapperMethodInterceptor implements Interceptor {

    @Override
    public Object intercept(Invocation invocation) throws Throwable {
        Object target = invocation.getTarget();
        if (target instanceof Executor) {

            // Get the currently executing MappedStatement
            MappedStatement mappedStatement = (MappedStatement) invocation.getArgs()[0];

            // The full statement id, e.g. "com.example.mapper.OrderMapper.selectById"
            String id = mappedStatement.getId();

            // The Mapper class name
            String mapperName = id.substring(0, id.lastIndexOf("."));

            // The method name
            String methodName = id.substring(id.lastIndexOf(".") + 1);

            // Log point for monitoring (MetricUtils is a project-internal utility)
            MetricUtils.log("MapperMethodMetric", Arrays.asList(mapperName, methodName));
        }
        return invocation.proceed();
    }

    @Override
    public Object plugin(Object o) {
        return Plugin.wrap(o, this);
    }

    @Override
    public void setProperties(Properties properties) {
    }
}
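As a side note, splitting the statement id into a mapper name and a method name is plain string manipulation; a standalone sketch (the class name MapperIdParser is illustrative, not part of Mybatis):

```java
// Standalone sketch of the id-splitting logic used in the interceptor.
// A MappedStatement id looks like "com.example.mapper.OrderMapper.selectById".
public class MapperIdParser {

    // The fully qualified Mapper interface name, e.g. "com.example.mapper.OrderMapper"
    public static String mapperName(String id) {
        return id.substring(0, id.lastIndexOf('.'));
    }

    // The Mapper method name, e.g. "selectById"
    public static String methodName(String id) {
        return id.substring(id.lastIndexOf('.') + 1);
    }

    public static void main(String[] args) {
        String id = "com.example.mapper.OrderMapper.selectById";
        System.out.println(mapperName(id) + " / " + methodName(id));
    }
}
```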

Insert ignore prevents unique key conflicts

Previously, every online release was followed by a wave of system jitter, with DingTalk alerts flooding in:

  • MQ consumption success rate dropped
  • DB write success rate dropped
  • Keyword alert on DuplicateKeyException

The reasons are as follows:

  • When the application restarts, MQ fails to shut down gracefully, so message processing is interrupted and consumption fails. However, some data may already have been written to the DB before the interruption, and some operations are not wrapped in a transaction, so the written data cannot be rolled back.
  • When MQ redelivers the message, the DB write code runs again and hits a database unique constraint, so the write fails.
  • The DB write failures and MQ consumption failures then trigger the alerts.

Here we focus on the unique-constraint exception raised when writing data that already exists. In DB design, to guarantee business-level uniqueness, we usually add a unique key so the constraint is enforced at the DB level. When a duplicate row is inserted, the DB write fails, which surfaces in the code as a thrown exception. For example:

# tb_test has a unique index: unique(name)
insert into tb_test(name,phone,address) values ('abcd','13900000001','aaa');

# name='abcd' already exists, so executing the following statement reports an error
insert into tb_test(name,phone,address) values ('abcd','13900000001','aaa');

According to idempotent design principles, a repeated request should return success idempotently; otherwise the upstream system may keep retrying while our system keeps reporting errors.

One straightforward fix: before each write, query by the unique index and only perform the write when no row exists. However, under concurrent requests this can still fail, and the extra query before every write costs one more DB round trip, degrading performance.

A more elegant solution is insert ignore. Here is the official explanation:

If you use the IGNORE modifier, ignorable errors that occur while executing the INSERT statement are ignored.

For example, without IGNORE, a row that duplicates an existing UNIQUE index or PRIMARY KEY value in the table causes a duplicate-key error and the statement is aborted.

With IGNORE, the row is discarded and no error occurs. Ignored errors generate warnings instead.

So the above example can be modified to:

# tb_test has a unique index: unique(name)
insert ignore into tb_test(name,phone,address) values ('abcd','13900000001','aaa');

# with ignore, the duplicate insert does not return an error
insert ignore into tb_test(name,phone,address) values ('abcd','13900000001','aaa');

There is an alternative: replace into. The official explanation is as follows:

REPLACE works exactly like INSERT, except that if an old row in the table has the same value as a new row for a PRIMARY KEY or a UNIQUE index, the old row is deleted before the new row is inserted.
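As a sketch, the earlier tb_test example rewritten with replace into; note that because the old row is deleted first, any columns not listed in the statement fall back to their defaults:

```sql
# tb_test has a unique index: unique(name)
replace into tb_test(name,phone,address) values ('abcd','13900000001','aaa');

# the old row with name='abcd' is deleted, then the new row is inserted
replace into tb_test(name,phone,address) values ('abcd','13900000002','bbb');
```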

Update DB records safely

In our order business, we frequently need to update the order status based on upstream messages. Since the order state machine is not controlled by our system, we must apply the status changes in order and guarantee eventual consistency. The state flow is shown in the figure below.

Goals:

  • For eventual consistency, any later state may overwrite an earlier state.
  • To prevent state disorder, a record in a later state must never be updated back to an earlier state.

Considerations:

  • Messages for different statuses may be processed concurrently: for example, "order received" and "transaction closed" arrive and are handled at the same time.
  • The later state may arrive first: the transaction is closed first, and the "order received" message arrives afterwards.

We adopted the following approach:

  • Add a version field as an optimistic lock; each update increments version by 1.
  • Query first, then update with checks: use the primary key id as the update condition, which is more efficient.

That is, before updating a record, the code checks whether it satisfies the update conditions, then performs the update using the record's primary key id and version number as the WHERE conditions. The SQL looks like this:

update tb_order 
set order_status = #{status}, version = #{newVersion}
where id = #{id} and version = #{oldVersion}
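A minimal in-memory sketch of the optimistic-lock behavior above; the class and field names are illustrative, not taken from the real order system. compareAndUpdate mirrors the UPDATE ... WHERE id = ? AND version = ? statement: it succeeds only when the caller still holds the latest version, just as the SQL affects one row only when the version matches.

```java
// In-memory sketch of an optimistic-lock update (illustrative names).
public class OptimisticOrder {
    private String orderStatus;
    private int version;

    public OptimisticOrder(String orderStatus, int version) {
        this.orderStatus = orderStatus;
        this.version = version;
    }

    // Returns true when exactly one "row" is affected (version matched),
    // false when a concurrent update got there first (affected rows = 0).
    public synchronized boolean compareAndUpdate(String newStatus, int expectedVersion) {
        if (this.version != expectedVersion) {
            return false;
        }
        this.orderStatus = newStatus;
        this.version = expectedVersion + 1;
        return true;
    }

    public synchronized String getOrderStatus() { return orderStatus; }
    public synchronized int getVersion() { return version; }

    public static void main(String[] args) {
        OptimisticOrder order = new OptimisticOrder("CREATED", 0);
        System.out.println(order.compareAndUpdate("PAID", 0));   // succeeds
        System.out.println(order.compareAndUpdate("CLOSED", 0)); // stale version, fails
    }
}
```

A caller that sees false can re-query the record, re-check the state machine conditions, and retry with the fresh version.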

Data updates are something we developers face every day. For state-driven records, updating safely matters a great deal. From practical experience I can summarize the following:

  • Use an index, regardless of data volume: this is the first rule, and one most programmers know.
  • The number of records an update touches must be predictable. With the combination of WHERE conditions, we need to know whether the update affects one record or many, and the conditions must match that expectation.
  • For stateful data, state machine constraints and idempotence must be considered. State machine constraints prevent state disorder: for example, an order already in the closed state must not be updated to paid; the preceding state can be used as an update condition. Idempotence means an update can be executed repeatedly without changing the result, which matters for operations such as crediting a balance in a transfer service.
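The forward-only state constraint above can be sketched as a rank check: each state gets a rank, and an update is allowed only if the incoming state ranks strictly higher than the current one. The state names here are illustrative, not the real order states.

```java
import java.util.Map;

// Sketch of a forward-only state transition check (illustrative state names).
public class OrderStateMachine {
    private static final Map<String, Integer> RANK = Map.of(
            "CREATED", 1,
            "PAID", 2,
            "RECEIVED", 3,
            "CLOSED", 4
    );

    // An update is allowed only when the incoming state is strictly later
    // than the current state, so a closed order can never go back to paid.
    public static boolean canTransition(String current, String incoming) {
        Integer from = RANK.get(current);
        Integer to = RANK.get(incoming);
        return from != null && to != null && to > from;
    }

    public static void main(String[] args) {
        System.out.println(canTransition("CREATED", "PAID"));
        System.out.println(canTransition("CLOSED", "PAID"));
    }
}
```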

Use universal SQL with caution

Mybatis uses OGNL to implement its dynamic SQL mechanism. Thanks to this powerful syntax, many code generators have appeared, which also means our projects end up filled with all kinds of universal SQL.

Admittedly, universal SQL has greatly improved our development efficiency; for ordinary small projects, we barely need to write any DAO-layer code after generation.

However, universal SQL is a double-edged sword: used carelessly, it can cause serious damage. We once caused a P2 "disaster" with it.

Here's an example (taken from the official documentation and modified):

<select id="findActiveBlogLike" resultType="Blog">
  SELECT * FROM BLOG
  <where>
    <if test="state != null">
        state = #{state}
    </if>
    <if test="category != null">
        AND category = #{category}
    </if>
    <if test="title != null">
        AND title like #{title}
    </if>
  </where>
</select>

Suppose the BLOG table has a composite index index(state,category,title) and contains 5 million rows.

If every query includes the state condition, there is no problem.

However, queries may use any combination of state, category, and title, from zero to all three conditions, and this SQL accepts them all.

Suppose that in later work, a colleague needs a query with only title as the condition, notices that this SQL works by passing only the title parameter and leaving the others empty, and reuses it directly.

The change goes live without adding an index. Guess what happens?

Slow SQL, DB failures, application failures, a system avalanche…
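The root cause is the leftmost-prefix rule for composite indexes: index(state,category,title) can only serve queries whose conditions start with state. A sketch, assuming the table above (the optimizer may behave differently in edge cases):

```sql
-- These condition combinations can use index(state, category, title):
SELECT * FROM BLOG WHERE state = 1;
SELECT * FROM BLOG WHERE state = 1 AND category = 'tech';
SELECT * FROM BLOG WHERE state = 1 AND category = 'tech' AND title LIKE 'My%';

-- This skips the leftmost column, so it typically cannot use the index
-- and scans all 5 million rows:
SELECT * FROM BLOG WHERE title LIKE 'My%';
```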

Looking back, what should we have done to avoid similar problems? Here are a few lessons:

  • Avoid universal, fully dynamic SQL where possible.
  • If it must be used, the index-driving conditions must not be dynamic, like the state field in the example above.
  • If it must be used, validate the SQL input parameters: when a mandatory condition is empty, refuse to execute.
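The third point, failing fast when the index-driving parameter is missing, can be sketched as a simple guard in the service layer (the class and message are illustrative):

```java
// Sketch of a mandatory-parameter guard in front of dynamic SQL:
// if the index-driving condition is empty, fail fast instead of
// letting the query degenerate into a full table scan.
public class QueryGuard {

    public static void requireState(String state) {
        if (state == null || state.isEmpty()) {
            throw new IllegalArgumentException("state is mandatory for findActiveBlogLike");
        }
    }

    public static void main(String[] args) {
        requireState("ACTIVE"); // passes silently
        try {
            requireState(null);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```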

The version below is safer: state becomes a mandatory parameter, and even if it is accidentally omitted, the result is at most a SQL error rather than a full table scan.

<select id="findActiveBlogLike" resultType="Blog">
  SELECT * FROM BLOG WHERE state = #{state}
  <if test="category != null">
      AND category = #{category}
  </if>
  <if test="title != null">
      AND title like #{title}
  </if>
</select>

Conclusion

The points above are all problems encountered in recent work. They look trivial, but each is full of details. I am writing them down as a summary and a reference, and I hope they are helpful to you too.