Moment For Technology

Based on SpringBoot, to achieve MySQL read-write separation technology

Posted on June 26, 2022, 7:24 p.m. by Vanessa Copeland
Category: The back-end Tag: The back-end Spring Boot

Preface

Let's start with a question: how can a database be optimized in a high-concurrency scenario? Common approaches include read/write separation, caching, master-slave clustering, sharding (splitting databases and tables), and so on. Most Internet applications read far more than they write, so a natural setup is two libraries: a master (write) library and a read library.

The master database handles writes, while the slave databases handle reads; the read side can even be a cluster of replicas. Separating reads and writes onto different data sources reduces read/write contention, relieves database load, and protects the master. In practice, writes are routed to the master library and reads to the read library: this is the classic read/write separation technique, and the rest of this post explores how to implement it.

directory

  • One: Configure primary and secondary data sources
  • Two: Configure data source routes
  • Three: Data source context
  • Four: Switch annotations and AOP configuration
  • Five: Usage and testing
  • Six: Summary

A word on master-slave synchronization: the master and slave databases keep the same schema. The master is responsible for writes and automatically replicates every write to the slaves; the slaves are responsible for reads, so when a read request arrives the data is served directly from a replica. This post will not cover the replication setup itself, since that is mostly an operational concern.

One problem this raises is master/slave replication lag. If a read request arrives while a write to the master has not yet been fully replicated, the read may miss the new data or return a stale value. The simplest workaround is to temporarily route such reads to the master library, though that sacrifices part of the benefit of the separation. In other words, read/write separation is not a perfect fit for scenarios requiring strict data consistency; delayed visibility of updates is an inherent drawback of the approach.

Ok, this is just an overview, let's look at how to implement read/write separation using Java code:

The project needs the following dependencies: Spring Boot, Spring AOP, Spring JDBC, AspectJ Weaver, and so on.

One: Configure primary and secondary data sources

We need to configure the master and slave data sources. Their settings usually live in the configuration file and are injected into the code via @ConfigurationProperties. Following the principle of convention over configuration, the master library is named master and the slave library slave.
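
For illustration, the configuration file might look roughly like this. This is a sketch only: beyond the `spring.datasource.master` and `spring.datasource.slave` prefixes used by the code below, all URLs, credentials, and property names are assumptions, not from the original post.

```yaml
spring:
  datasource:
    master:
      url: jdbc:mysql://localhost:3306/demo?useSSL=false
      username: root
      password: root
      driver-class-name: com.mysql.cj.jdbc.Driver
    slave:
      url: jdbc:mysql://localhost:3307/demo?useSSL=false
      username: root
      password: root
      driver-class-name: com.mysql.cj.jdbc.Driver
```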

This project uses the Alibaba Druid connection pool and creates each DataSource with its builder; the DataSource is the code-level abstraction of a database. On top of that we configure the sessionFactory, the sqlTemplate, the transaction manager, and so on.

/**
 * Master/slave data source configuration
 *
 * @author wyq
 */
@Configuration
@MapperScan(basePackages = "com.wyq.mysqlreadwriteseparate.mapper", sqlSessionTemplateRef = "sqlTemplate")
public class DataSourceConfig {

    /**
     * Master (write) data source
     */
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.master")
    public DataSource master() {
        return DruidDataSourceBuilder.create().build();
    }

    /**
     * Slave (read) data source
     */
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.slave")
    public DataSource slaver() {
        return DruidDataSourceBuilder.create().build();
    }

    /**
     * Instantiate the data source router
     */
    @Bean
    public DataSourceRouter dynamicDB(@Qualifier("master") DataSource masterDataSource,
                                      @Autowired(required = false) @Qualifier("slaver") DataSource slaveDataSource) {
        DataSourceRouter dynamicDataSource = new DataSourceRouter();
        Map<Object, Object> targetDataSources = new HashMap<>();
        targetDataSources.put(DataSourceEnum.MASTER.getDataSourceName(), masterDataSource);
        if (slaveDataSource != null) {
            targetDataSources.put(DataSourceEnum.SLAVE.getDataSourceName(), slaveDataSource);
        }
        dynamicDataSource.setTargetDataSources(targetDataSources);
        dynamicDataSource.setDefaultTargetDataSource(masterDataSource);
        return dynamicDataSource;
    }

    /**
     * Configure the sessionFactory
     *
     * @param dynamicDataSource the routing data source
     * @return the SqlSessionFactory
     * @throws Exception on initialization failure
     */
    @Bean
    public SqlSessionFactory sessionFactory(@Qualifier("dynamicDB") DataSource dynamicDataSource) throws Exception {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setMapperLocations(
                new PathMatchingResourcePatternResolver().getResources("classpath*:mapper/*Mapper.xml"));
        bean.setDataSource(dynamicDataSource);
        return bean.getObject();
    }

    /**
     * Create the sqlTemplate
     *
     * @param sqlSessionFactory the session factory
     * @return the SqlSessionTemplate
     */
    @Bean
    public SqlSessionTemplate sqlTemplate(@Qualifier("sessionFactory") SqlSessionFactory sqlSessionFactory) {
        return new SqlSessionTemplate(sqlSessionFactory);
    }

    /**
     * Transaction manager configuration
     *
     * @param dynamicDataSource the routing data source
     * @return the transaction manager
     */
    @Bean(name = "dataSourceTx")
    public DataSourceTransactionManager dataSourceTransactionManager(@Qualifier("dynamicDB") DataSource dynamicDataSource) {
        DataSourceTransactionManager dataSourceTransactionManager = new DataSourceTransactionManager();
        dataSourceTransactionManager.setDataSource(dynamicDataSource);
        return dataSourceTransactionManager;
    }
}
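The configuration above references a DataSourceEnum that the post never shows. Here is a minimal sketch of what it presumably looks like: the constant names follow the code above, but the string values "master" and "slave" are assumptions.

```java
// Hypothetical DataSourceEnum: its values are the keys stored in the router's
// targetDataSources map and later returned by determineCurrentLookupKey().
public enum DataSourceEnum {
    MASTER("master"),
    SLAVE("slave");

    private final String dataSourceName;

    DataSourceEnum(String dataSourceName) {
        this.dataSourceName = dataSourceName;
    }

    public String getDataSourceName() {
        return dataSourceName;
    }
}
```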

Two: Configure data source routes

Routing is central to master/slave separation; it is essentially the core of read/write switching. Spring provides AbstractRoutingDataSource, which selects the current data source according to user-defined rules. Before each query executes, its abstract method determineCurrentLookupKey() is called to decide which data source to use, which gives us dynamic data source routing.

We also need a global data source manager: the DataSourceContextHolder database context manager (covered in more detail below). It can be understood as a global variable, available anywhere at any time; its main job is to hold the name of the current data source.

public class DataSourceRouter extends AbstractRoutingDataSource {

    /**
     * determineCurrentLookupKey() ultimately returns the value held by
     * DataSourceContextHolder, so the switching annotation must set a value
     * on DataSourceContextHolder before the query runs.
     *
     * @return the lookup key of the current data source
     */
    @Override
    protected Object determineCurrentLookupKey() {
        return DataSourceContextHolder.get();
    }
}

Three: Data source context

The data source context holder lets the application retrieve the current data source at any point. It is built on ThreadLocal, which is thread-isolated and therefore naturally thread-safe. The class exposes set, get, and clear methods: set assigns the name of the current data source, get returns it, and clear removes it from the ThreadLocal. Because a ThreadLocal entry's key is a weak reference, it carries a risk of memory leaks, so calling remove (via clear) after use prevents them.

/**
 * Context holder that keeps the current data source name in a ThreadLocal
 */
public class DataSourceContextHolder {

    private static final ThreadLocal<String> context = new ThreadLocal<>();

    /**
     * Assign the current data source name
     *
     * @param datasourceType the data source name
     */
    public static void set(String datasourceType) {
        context.set(datasourceType);
    }

    /**
     * Get the current data source name
     *
     * @return the data source name
     */
    public static String get() {
        return context.get();
    }

    /**
     * Clear the ThreadLocal to prevent memory leaks
     */
    public static void clear() {
        context.remove();
    }
}
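To see why ThreadLocal gives thread isolation for free, here is a small framework-free demo in the same shape as DataSourceContextHolder (the class name and printed strings are illustrative only, not part of the original project):

```java
// Each thread gets its own slot in the ThreadLocal, so a data source name
// set on one thread is invisible to every other thread.
public class ContextHolderDemo {
    private static final ThreadLocal<String> context = new ThreadLocal<>();

    public static void set(String name) { context.set(name); }
    public static String get() { return context.get(); }
    public static void clear() { context.remove(); }  // remove() avoids leaks in pooled threads

    public static void main(String[] args) throws InterruptedException {
        set("master");                                   // bound to the main thread only
        Thread worker = new Thread(() -> System.out.println("worker sees: " + get()));
        worker.start();
        worker.join();                                   // worker prints "worker sees: null"
        System.out.println("main sees: " + get());       // prints "main sees: master"
        clear();
    }
}
```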

Four: Switch annotations and AOP configuration

First let's define a @DataSourceSwitcher annotation with two properties: ① the data source to use and ② whether to clear the current data source afterwards. It can only be placed on methods (not on classes, since data sources are switched per method). Its purpose is to mark, on each DAO-layer method that operates on the database, which data source that method should use.

Definition of the @DataSourceSwitcher annotation:

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Documented
public @interface DataSourceSwitcher {

    /**
     * The data source to use; defaults to the master
     */
    DataSourceEnum value() default DataSourceEnum.MASTER;

    /**
     * Whether to clear the data source context after the method returns
     */
    boolean clear() default true;
}

DataSourceAop configuration

To give the @DataSourceSwitcher annotation the power to actually switch data sources, we use AOP: an @Around advice matches any method carrying @DataSourceSwitcher, reads the data source value configured on the annotation, and stores it in the DataSourceContextHolder. This injects the data source configured on the current method into the global context.

@Slf4j
@Aspect
@Order(value = 1)
@Component
public class DataSourceContextAop {

    @Around("@annotation(com.wyq.mysqlreadwriteseparate.annotation.DataSourceSwitcher)")
    public Object setDynamicDataSource(ProceedingJoinPoint pjp) throws Throwable {
        boolean clear = false;
        try {
            Method method = this.getMethod(pjp);
            DataSourceSwitcher dataSourceSwitcher = method.getAnnotation(DataSourceSwitcher.class);
            clear = dataSourceSwitcher.clear();
            DataSourceContextHolder.set(dataSourceSwitcher.value().getDataSourceName());
            log.info("Data source switched to: {}", dataSourceSwitcher.value().getDataSourceName());
            return pjp.proceed();
        } finally {
            if (clear) {
                DataSourceContextHolder.clear();
            }
        }
    }

    private Method getMethod(JoinPoint pjp) {
        MethodSignature signature = (MethodSignature) pjp.getSignature();
        return signature.getMethod();
    }
}

Five: Usage and testing

Once read/write separation is configured, it can be used in code, typically in the service or DAO layer. Add @DataSourceSwitcher(DataSourceEnum.SLAVE) to a query method to make all operations in that method go to the read library; add @DataSourceSwitcher(DataSourceEnum.MASTER) to an update or insert method to make it go to the write library.

In fact, there is a more automatic approach: configure AOP to switch data sources based on the method-name prefix. Methods whose names start with update, insert, fresh, and so on are routed to the write library, while those starting with select, get, query, and the like are routed to the read library. This is a more automatic, convention-based configuration; the downside is that method names must strictly follow the naming rules of the AOP configuration, or the routing silently fails.
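The prefix convention boils down to a tiny routing decision. Here is a framework-free sketch of the choice such an advice would make; the prefix lists are assumptions, since the post only names a few examples:

```java
// Prefix-based routing sketch: write-style method names go to the master,
// everything else goes to the slave.
public class PrefixRouter {
    private static final String[] WRITE_PREFIXES = {"insert", "update", "delete", "save", "fresh"};

    public static String route(String methodName) {
        for (String prefix : WRITE_PREFIXES) {
            if (methodName.startsWith(prefix)) {
                return "master";  // write prefix -> write library
            }
        }
        return "slave";           // select/get/query/... -> read library
    }
}
```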

@Service
public class OrderService {

    @Resource
    private OrderMapper orderMapper;

    /**
     * Read operation, routed to the slave
     *
     * @param orderId the order id
     * @return the matching orders
     */
    @DataSourceSwitcher(DataSourceEnum.SLAVE)
    public List<Order> getOrder(String orderId) {
        return orderMapper.listOrders(orderId);
    }

    /**
     * Write operation, routed to the master
     *
     * @param orderId the order id
     * @return the saved orders
     */
    @DataSourceSwitcher(DataSourceEnum.MASTER)
    public List<Order> insertOrder(Long orderId) {
        Order order = new Order();
        order.setOrderId(orderId);
        return orderMapper.saveOrder(order);
    }
}

Six: Summary

The figure above shows the basic flow of the whole setup. This blog introduced how to implement database read/write separation. The key point is data routing: we extend AbstractRoutingDataSource and override its determineCurrentLookupKey() method. Note also the global context manager DataSourceContextHolder, the primary class holding the data source context; it is where the routing method looks up the data source value, acting as the hub between the two. Combined with the JDBC template layer to create and manage data sources, transactions, and so on, we get a complete separation of database reads and writes.

Author: Yrion cnblogs.com/wyq178/p/13352707.html
