Dynamic-datasource source code analysis

My new company uses multiple data sources in many places. At my previous company we handled this by configuring multiple DataSource beans by hand; since the team here likes this framework, I decided to read its source code. The source address

This is a multi-data-source solution produced by the organization behind MP (MyBatis-Plus), and it is quite widely used.

Analysis steps

Automatic configuration

  1. First, this is a Spring Boot starter, so we start with spring.factories.

    • It turns out it auto-configures DynamicDataSourceAutoConfiguration for us
  2. Check the DynamicDataSourceAutoConfiguration configuration class.

    • Let’s look at the more important annotations first
    @EnableConfigurationProperties(DynamicDataSourceProperties.class)
    @AutoConfigureBefore(DataSourceAutoConfiguration.class)
    @Import(value = {DruidDynamicDataSourceConfiguration.class, DynamicDataSourceCreatorAutoConfiguration.class})
    @ConditionalOnProperty(prefix = DynamicDataSourceProperties.PREFIX, name = "enabled", havingValue = "true", matchIfMissing = true)
    • Let’s take them in turn

      • DynamicDataSourceProperties holds the attribute values we can configure in yml; those configuration entries are mapped onto this class.
      • AutoConfigureBefore prevents a conflict with Spring Boot’s default DataSourceAutoConfiguration by making this configuration run before that auto-configuration.
      • Import injects two configuration BeanDefinitions into the container:
        • DruidDynamicDataSourceConfiguration: reuses Druid’s auto-configuration
        • DynamicDataSourceCreatorAutoConfiguration: a configuration class that mainly injects the DataSource creator beans into the container. There are 4 creators (default, JNDI, Druid, Hikari)
      • ConditionalOnProperty indicates that you can set spring.datasource.dynamic.enabled=false to turn off dynamic data source configuration
    • This configuration class then injects the following beans into the container:

      • DynamicDataSourceProvider: the provider receives the configuration information for the multiple data sources in the configuration file, and exposes a loadDataSources method used to load them.

        @AllArgsConstructor
        public class YmlDynamicDataSourceProvider extends AbstractDataSourceProvider {
            /** Data source configuration information from the configuration file */
            private final Map<String, DataSourceProperty> dataSourcePropertiesMap;

            // Called to load the DataSource objects; the concrete creation is done
            // by a Creator injected by DynamicDataSourceCreatorAutoConfiguration
            @Override
            public Map<String, DataSource> loadDataSources() {
                return createDataSourceMap(dataSourcePropertiesMap);
            }
        }
      • DynamicRoutingDataSource: this is the dynamic DataSource implementation. Globally there is only one DataSource, and it is this custom one: DynamicRoutingDataSource.

        • The custom DataSource is essentially an internal Map that stores all the data sources, with primary determining which one is used by default.
        • Creating the DataSources depends on the provider created above: after all the properties of DynamicRoutingDataSource are set, the provider’s loadDataSources method is called to obtain the data sources.
      • DynamicDataSourceAnnotationAdvisor: injects an AOP aspect into the container, mainly an Interceptor plus a Pointcut; the core data source switching logic lives in it.

      • Advisor: dynamicTransactionAdvisor provides transaction support across multiple data sources. Earlier versions of the dynamic data source did not support transactions: if you wanted one, you were expected to use Seata for distributed transactions. More on that later.

      • DsProcessor: the data source processor, which parses the configured annotation value to determine which data source to use.

    • The automatic configuration is complete.

  3. Let’s boil this down to the few important participants that get auto-configured

    1. Creator: the creator that actually builds the DataSource object; it encapsulates the logic that parses a DataSourceProperty and creates a DataSource from it.
    2. Provider: the data source provider, which internally holds a Creator to create the objects and exposes a loadDataSources method that returns all data sources.
    3. AOP aspects: one to intercept data source switches and one to handle transactions.
    4. Processor: the processor that extracts, from annotations, the data source information to switch to.
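To make these participants concrete, here is a minimal yml configuration they would consume. The data source names `master`/`slave_1` and the connection details are illustrative assumptions; the property names follow the library’s `spring.datasource.dynamic` prefix:

```yaml
spring:
  datasource:
    dynamic:
      enabled: true        # matches the @ConditionalOnProperty above
      primary: master      # the default entry in the internal Map
      datasource:
        master:
          url: jdbc:mysql://localhost:3306/order
          username: root
          password: root
          driver-class-name: com.mysql.cj.jdbc.Driver
        slave_1:
          url: jdbc:mysql://localhost:3307/order
          username: root
          password: root
          driver-class-name: com.mysql.cj.jdbc.Driver
```

The Provider reads the `datasource` map, the Creators turn each entry into a DataSource, and DynamicRoutingDataSource keeps them keyed by name with `primary` as the fallback.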

Data source switching principle

Data source switching in dynamic-datasource is done mainly through the @DS annotation, and this part of the logic lives in the AOP aspect. Our main entry is the first aspect mentioned in the auto-configuration above, which mainly involves two classes: DynamicDataSourceAnnotationAdvisor and DynamicDataSourceAnnotationInterceptor.

DynamicDataSourceAnnotationAdvisor

This class mainly defines pointcuts:

    private Pointcut buildPointcut() {
        Pointcut cpc = new AnnotationMatchingPointcut(DS.class, true);
        Pointcut mpc = new AnnotationMethodPoint(DS.class);
        return new ComposablePointcut(cpc).union(mpc);
    }

You can see that this intercepts the @DS annotation, whether it is placed on the class or on the method. (dynamic-datasource’s AOP aspect is implemented not via annotation-driven aspects but via an Advisor.)

The concrete interception logic is in DynamicDataSourceAnnotationInterceptor, below.

DynamicDataSourceAnnotationInterceptor

Contains two important attributes:

// An extension point that gives outsiders a chance to modify the AOP conditions
// (for example, to configure whether only public methods are handled)
private final DataSourceClassResolver dataSourceClassResolver;
// Mentioned above: mainly parses the @DS content (since it may be an expression)
private final DsProcessor dsProcessor;

This class implements MethodInterceptor, so our core entry logic is on the Invoke method.

@Override
public Object invoke(MethodInvocation invocation) throws Throwable {
    // Get the key of the data source to switch to
    String dsKey = determineDatasourceKey(invocation);
    // Set the current thread's DataSource to the one identified by the key
    DynamicDataSourceContextHolder.push(dsKey);
    try {
        // Execute the original logic
        return invocation.proceed();
    } finally {
        // Done: pop the key. Since @DS can be nested, this restores
        // the previous data source.
        DynamicDataSourceContextHolder.poll();
    }
}

This is the heart of switching data sources!!
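The push/proceed/poll pattern above can be sketched in isolation. This is a minimal stand-in (not the library’s actual class) showing how a per-thread deque lets nested @DS switches restore the previous data source:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

class DsContextSketch {
    private static final ThreadLocal<Deque<String>> HOLDER =
            ThreadLocal.withInitial(ArrayDeque::new);

    // Empty stack means no switch was requested: callers fall back to the primary
    public static String peek() {
        return HOLDER.get().peek();
    }

    public static void push(String key) {
        HOLDER.get().push(key);
    }

    public static void poll() {
        HOLDER.get().poll();
    }

    // Mimics the interceptor: switch for the duration of the call, then restore
    public static String runWith(String key, Supplier<String> body) {
        push(key);
        try {
            return body.get();
        } finally {
            poll();
        }
    }

    public static void main(String[] args) {
        String seen = runWith("master", () -> {
            String inner = runWith("slave", DsContextSketch::peek); // nested switch
            return inner + "->" + peek(); // outer key is restored after the nested call
        });
        System.out.println(seen); // slave->master
    }
}
```

The try/finally guarantees the stack is popped even when the business logic throws, which is exactly what keeps nesting safe.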

The key logic
How is the data source determined
private String determineDatasourceKey(MethodInvocation invocation) {
    String key = dataSourceClassResolver.findDSKey(invocation.getMethod(), invocation.getThis());
    return (!key.isEmpty() && key.startsWith(DYNAMIC_PREFIX))
            ? dsProcessor.determineDatasource(invocation, key) : key;
}

As you can see, the dataSourceClassResolver is used to retrieve the key of the DataSource (the key is the name of the data source you configured in yml).

Key parsing logic:

public String findDSKey(Method method, Object targetObject) {
    if (method.getDeclaringClass() == Object.class) {
        return "";
    }
    Object cacheKey = new MethodClassKey(method, targetObject.getClass());
    String ds = this.dsCache.get(cacheKey);
    if (ds == null) {
        ds = computeDatasource(method, targetObject);
        if (ds == null) {
            ds = "";
        }
        this.dsCache.put(cacheKey, ds);
    }
    return ds;
}

The core is ds = computeDatasource(method, targetObject);. In addition, for key-parsing efficiency, the result is cached per method.

The computeDatasource method is simple: it looks for the @DS annotation (from the current method all the way up to Object) and reads the annotation’s value via reflection.

Finally, if the key is a special expression, the corresponding processor is called to parse it into the key of the actual DataSource. That’s the key-parsing logic.
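The method-then-class lookup described here can be sketched with plain reflection. The @DS annotation, OrderDao class, and method names below are hypothetical stand-ins, not the library’s own types:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

class DsLookupSketch {
    // Hypothetical stand-in for the library's @DS annotation
    @Retention(RetentionPolicy.RUNTIME)
    @Target({ElementType.METHOD, ElementType.TYPE})
    public @interface DS {
        String value();
    }

    // Prefer @DS on the method; otherwise walk up the class hierarchy to Object
    public static String findDsKey(Method method, Class<?> targetClass) {
        DS onMethod = method.getAnnotation(DS.class);
        if (onMethod != null) {
            return onMethod.value();
        }
        for (Class<?> c = targetClass; c != null && c != Object.class; c = c.getSuperclass()) {
            DS onClass = c.getAnnotation(DS.class);
            if (onClass != null) {
                return onClass.value();
            }
        }
        return ""; // empty key: the primary data source is used
    }

    @DS("slave")
    public static class OrderDao {
        @DS("master")
        public void save() {}

        public void query() {}
    }

    public static void main(String[] args) throws Exception {
        System.out.println(findDsKey(OrderDao.class.getMethod("save"), OrderDao.class));  // master
        System.out.println(findDsKey(OrderDao.class.getMethod("query"), OrderDao.class)); // slave
    }
}
```

The real resolver also consults interfaces and supports the extension point mentioned earlier; this sketch shows only the method-over-class precedence.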

How is the data source switched

The core of the switching is in the DynamicDataSourceContextHolder class! This class internally holds a ThreadLocal:

private static final ThreadLocal<Deque<String>> LOOKUP_KEY_HOLDER =
        new NamedThreadLocal<Deque<String>>("dynamic-datasource") {
    @Override
    protected Deque<String> initialValue() {
        return new ArrayDeque<>();
    }
};

This ThreadLocal allocates an ArrayDeque to each thread. It is a deque, but it is used as a stack; the reason is that ArrayDeque is more efficient than the legacy Stack class.

Why does it have to be a stack

Because our calls tend to be nested: A->B->C. When C finishes, the data source should switch back to B’s, so a stack structure is the natural fit.

Transaction processing

In previous versions, the dynamic data source only supported single-database transactions: no data source switching was allowed anywhere in the call chain once a transaction was open, or an error was reported. Once a transaction starts, the Spring transaction manager ensures the whole thread receives the same connection for the rest of the transaction. If you wanted transactions across data sources, you had to integrate Seata for distributed transactions, but integrating Seata is a bit heavyweight.

The new version adds the @DSTransactional annotation to address local transactions. The disadvantage is that it sits outside the Spring transaction mechanism and cannot be mixed with it. It is a separate transaction mechanism with nothing to do with Spring, so let’s take a look at how it works.

A distinction: a local transaction means a single service with multiple databases under it, where a series of database operations must preserve the transaction’s ACID properties. A distributed transaction spans multiple services, each of whose interfaces may touch one or more databases; the guarantees must hold across those services, which is why it is harder to implement than local transactions and why Seata is heavyweight.

@Role(value = BeanDefinition.ROLE_INFRASTRUCTURE)
@ConditionalOnProperty(prefix = DynamicDataSourceProperties.PREFIX, name = "seata", havingValue = "false", matchIfMissing = true)
@Bean
public Advisor dynamicTransactionAdvisor() {
    AspectJExpressionPointcut pointcut = new AspectJExpressionPointcut();
    pointcut.setExpression("@annotation(com.baomidou.dynamic.datasource.annotation.DSTransactional)");
    return new DefaultPointcutAdvisor(pointcut, new DynamicTransactionAdvisor());
}

First, the data source’s getConnection is modified:

public Connection getConnection() throws SQLException {
    String xid = TransactionContext.getXID();
    // The current thread's LOCAL_XID is null, indicating we are not in a transaction
    if (StringUtils.isEmpty(xid)) {
        // Not in a transaction: return the original connection
        return determineDataSource().getConnection();
    } else {
        // In a transaction: get a proxy connection for the data source
        String ds = DynamicDataSourceContextHolder.peek();
        ConnectionProxy connection = ConnectionFactory.getConnection(ds);
        // If the thread has not created one yet, create it now
        return connection == null
                ? getConnectionProxy(ds, determineDataSource().getConnection()) : connection;
    }
}

For each getConnection, the TransactionContext class determines whether the SQL is being executed inside a transaction. If not, the original connection is used; if so, a proxy connection is returned.

Then in the entry point:

public Object invoke(MethodInvocation methodInvocation) throws Throwable {
    // An XID already exists: we are inside a @DSTransactional call, so just proceed
    if (!StringUtils.isEmpty(TransactionContext.getXID())) {
        return methodInvocation.proceed();
    }
    // Outermost @DSTransactional: generate and bind an XID
    boolean state = true;
    Object o;
    String xid = UUID.randomUUID().toString();
    TransactionContext.bind(xid);
    try {
        o = methodInvocation.proceed();
    } catch (Exception e) {
        state = false;
        throw e;
    } finally {
        // If execution failed, notify all connections to roll back
        ConnectionFactory.notify(state);
        TransactionContext.remove();
    }
    return o;
}

When a method annotated with @DSTransactional runs and the TransactionContext senses it is not yet in a transaction, an XID is generated and bound to the TransactionContext, marking the current thread as being in a transaction. From then on, all subsequent logic is transactional and every connection obtained is a proxy connection.

If an exception occurs during method execution, all of the thread’s current proxy connections are rolled back via ConnectionFactory.notify(state);.
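The whole XID flow can be modeled in a few lines. Everything below (TransactionContext, FakeConnection, ConnectionFactory, runInTransaction) is a toy stand-in for the library’s classes, kept only to show how deferred commit/rollback across several held connections works:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

class LocalTxSketch {
    // Toy stand-in: holds the per-thread transaction id (XID)
    static class TransactionContext {
        private static final ThreadLocal<String> XID = new ThreadLocal<>();
        static String getXID() { return XID.get(); }
        static void bind(String xid) { XID.set(xid); }
        static void remove() { XID.remove(); }
    }

    // Toy stand-in for a proxied connection that records its final outcome
    static class FakeConnection {
        final List<String> log = new ArrayList<>();
        void commit()   { log.add("commit"); }
        void rollback() { log.add("rollback"); }
    }

    // Toy stand-in: one held connection per data source key, per thread
    static class ConnectionFactory {
        private static final ThreadLocal<Map<String, FakeConnection>> HELD =
                ThreadLocal.withInitial(LinkedHashMap::new);

        static FakeConnection getConnection(String ds) {
            return HELD.get().computeIfAbsent(ds, k -> new FakeConnection());
        }

        // On success commit every held connection; on failure roll them all back
        static void notify(boolean state) {
            for (FakeConnection c : HELD.get().values()) {
                if (state) c.commit(); else c.rollback();
            }
            HELD.get().clear();
        }
    }

    // Mimics the interceptor: bind an XID, run the body, then commit or roll back
    static void runInTransaction(Runnable body) {
        boolean state = true;
        TransactionContext.bind(UUID.randomUUID().toString());
        try {
            body.run();
        } catch (RuntimeException e) {
            state = false;
            throw e;
        } finally {
            ConnectionFactory.notify(state);
            TransactionContext.remove();
        }
    }

    public static void main(String[] args) {
        FakeConnection[] held = new FakeConnection[2];
        try {
            runInTransaction(() -> {
                held[0] = ConnectionFactory.getConnection("master");
                held[1] = ConnectionFactory.getConnection("slave");
                throw new RuntimeException("boom"); // simulate a failure
            });
        } catch (RuntimeException ignored) {
        }
        System.out.println(held[0].log + " " + held[1].log); // [rollback] [rollback]
    }
}
```

The key point the model makes: because every data source touched inside the XID hands out a held connection, a single notify(state) at the end can commit or roll back all of them together.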

The website says this is currently a provisional version and recommends testing it locally before using it online. I actually have reservations about this implementation; the logic seems over-complicated. It could simply use a ThreadLocal storing a Map of all the connections it needs, and the proxy connection doesn’t really seem necessary either.

Why not Spring transactions

Spring transaction AOP forcibly binds the transaction manager to a Connection: when a new transaction starts, it obtains a connection instance from the connection pool and binds the transaction and the connection to each other.

That connection is used only in the subsequent steps of that transaction, and only in that one transaction. So no matter how many times the DB is operated on within the transaction, there is really only one connection instance until the transaction commits or rolls back, at which point the transaction is unbound from the connection and the connection is returned to the pool.
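This binding behavior can be illustrated with a toy model; none of the names below are Spring’s real classes. Once a transaction binds a connection to the thread, every data-access call inside it reuses that connection, which is exactly why switching data sources mid-transaction has no effect:

```java
import java.util.concurrent.atomic.AtomicInteger;

class TxBindingSketch {
    private static final AtomicInteger POOL = new AtomicInteger(); // hands out connection ids
    private static final ThreadLocal<Integer> BOUND = new ThreadLocal<>();

    // Inside a transaction the bound connection is reused; outside, a fresh one is taken
    static int getConnection() {
        Integer bound = BOUND.get();
        return bound != null ? bound : POOL.incrementAndGet();
    }

    static void begin() {
        BOUND.set(POOL.incrementAndGet()); // bind one connection to the thread
    }

    static void commit() {
        BOUND.remove(); // unbind: the connection goes back to the pool
    }

    public static void main(String[] args) {
        begin();
        int a = getConnection();
        int b = getConnection();
        commit();
        int c = getConnection();
        System.out.println(a == b); // true: one connection for the whole transaction
        System.out.println(a != c); // true: a different connection after commit
    }
}
```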

Conclusion

The implementation of the dynamic data source boils down to: AOP + annotations + a ThreadLocal stack.

In general, the overall idea is simple. What it lacks are just a few small feature points we want; it doesn’t provide some of the implementation hooks we’d like. (Of course, as designers, generality certainly takes priority, and they are unlikely to add support for these niche scenarios later.) In a follow-up article I will write about customized solutions.