Tags: springbatch


1. Introduction

The previous article, "Spring Batch in Action (4): Database to Database," used Spring Batch's built-in reader and writer components to synchronize data between databases. That approach reads and writes through plain JDBC, so we must handle the object mapping ourselves (for example, with UserRowMapper). In modern development we generally use a higher-level ORM framework such as Hibernate, MyBatis, or BeetlSQL. For Hibernate, Spring Batch provides HibernateCursorItemReader and HibernateItemWriter out of the box, and readers and writers can likewise be implemented with MyBatis or BeetlSQL. Personally, I find BeetlSQL the easiest to get started with in terms of usability and learning curve, and its community is quite active. This article therefore shows how to use Spring Batch together with BeetlSQL for database-to-database data synchronization.

2. Development environment

  • JDK: 1.8
  • Spring Boot: 2.1.4.RELEASE
  • Spring Batch: 4.1.2.RELEASE
  • BeetlSQL starter: 1.1.77.RELEASE
  • IDE: IntelliJ IDEA
  • Build tool: Maven 3.3.9
  • Logging: Logback 1.2.3
  • Lombok: 1.18.6

3. A brief introduction to BeetlSQL

According to the official documentation, BeetlSQL is a full-featured DAO tool that combines the strengths of Hibernate and MyBatis. It suits development that treats SQL as the center, and its tooling can automatically generate a large amount of commonly used SQL. Please refer to the official documentation for details. From my recent experience with it, it is excellent in terms of development efficiency, maintainability, and ease of use.

4. Using BeetlSQL to read and write the database

As in the previous article, this example reads data from the test_user table in the source database, processes it, and writes it to the test_user table in the target database. Instead of Spring Batch's built-in JdbcCursorItemReader and JdbcBatchItemWriter, BeetlSQL is used for reading and writing. See the code for a complete example.

4.1 Introducing BeetlSQL dependency

BeetlSQL provides a starter for Spring Boot that enables auto-configuration. Add the following dependency to pom.xml:

<!-- ORM framework: beetlsql -->
<dependency>
    <groupId>com.ibeetl</groupId>
    <artifactId>beetl-framework-starter</artifactId>
    <version>1.1.77.RELEASE</version>
</dependency>

This also brings in beetl-2.9.9 and beetlsql-2.11.2 as transitive dependencies.

4.2 Writing a DAO for multiple data sources

4.2.1 Adding a Configuration File

Since reading and writing use separate data sources, and the configuration of multiple data sources was described in the previous article, it will not be repeated here. BeetlSQL supports multiple data sources well and needs only simple configuration; please refer to the official documentation for details. Briefly, add the following to the application.properties file:

# beetlsql configuration
# root path of sql files, defaults to /sql; optional
#beetlsql.sqlPath=/sql
# suffix of DAO files
beetlsql.daoSuffix=Repository
# base package to scan automatically for DAO files
beetlsql.basePackage=me.mason.springbatch
# defaults to org.beetl.sql.core.db.MySqlStyle; no need to set it for MySQL
#beetlsql.dbStyle=org.beetl.sql.core.db.MySqlStyle
# DAO file locations per data source, separating the read and write data sources
beetlsql.ds.datasource.basePackage=me.mason.springbatch.dao.local
beetlsql.ds.originDatasource.basePackage=me.mason.springbatch.dao.origin
beetlsql.ds.targetDatasource.basePackage=me.mason.springbatch.dao.target
beetlsql.mutiple.datasource=datasource,originDatasource,targetDatasource

Description:

  • beetlsql.daoSuffix is the suffix of DAO files; BeetlSQL loads DAOs according to this suffix.
  • beetlsql.mutiple.datasource lists the data source names, which must match the data source configuration.
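The data source beans named in beetlsql.mutiple.datasource must already exist in the Spring context; they are defined in the previous article. As a rough, hypothetical sketch only (the previous article's actual property prefixes, URLs, and credentials may differ), the three sources could be configured in application.properties along these lines:

```properties
# Hypothetical example only -- the prefixes and values must match however the
# previous article binds its DataSource beans (datasource, originDatasource,
# targetDatasource).
spring.datasource.jdbc-url=jdbc:mysql://localhost:3306/batch
spring.datasource.username=root
spring.datasource.password=root

origin.datasource.jdbc-url=jdbc:mysql://localhost:3306/origin_db
origin.datasource.username=root
origin.datasource.password=root

target.datasource.jdbc-url=jdbc:mysql://localhost:3306/target_db
target.datasource.username=root
target.datasource.password=root
```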

4.2.2 Adding a DAO File

After adding the configuration above, create the packages dao.local, dao.origin, and dao.target under me.mason.springbatch in the project: the package names distinguish the DAOs of the different data sources, and each package holds the DAOs for its corresponding data source. This example only reads from the origin database and writes to the target database, so it is enough to add OriginUserRepository under dao.origin for source reads and TargetUserRepository under dao.target for write operations. (Note that since the configuration specifies the suffix Repository, class names here must end with it.) As follows:

OriginUserRepository.java

@Repository
public interface OriginUserRepository extends BaseMapper<User> {
    List<User> getOriginUser(Map<String, Object> params);
}

TargetUserRepository.java

@Repository
public interface TargetUserRepository extends BaseMapper<User> {
}

Description:

  • The @Repository annotation marks the interface as a data read/write DAO.
  • Extending BaseMapper provides BeetlSQL's built-in create, read, update, and delete operations.
  • OriginUserRepository.getOriginUser is a custom data read operation, implemented by an SQL statement written in sql/user.md (the SQL statement is described later).
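For reference, the User entity behind BaseMapper<User> is not shown in this article. A minimal hand-written sketch, with field names inferred from the test_user columns used in this example (the real project may use Lombok @Data or generated code instead), might look like:

```java
import java.util.Date;

// Hypothetical sketch of the User entity mapped by BaseMapper<User>.
// Field names are inferred from the test_user columns in this example;
// the real class may be generated or use Lombok @Data.
public class User {
    private Long id;
    private String name;
    private String phone;
    private String title;
    private String email;
    private String gender;
    private Date dateOfBirth;
    private Date sysCreateTime;
    private String sysCreateUser;
    private Date sysUpdateTime;
    private String sysUpdateUser;

    public Long getId() { return id; }
    public void setId(Long id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    // ...the remaining getters and setters follow the same pattern
}
```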

4.3 Writing SQL Files

According to the BeetlSQL documentation, developers can write custom SQL statements for database operations. The statements are saved in Markdown files and support Beetl syntax, parameterized statements, logical judgment, and other features, somewhat like the XML mappings in MyBatis but friendlier to read and modify. For more detailed usage of SQL files, refer to the official documentation.

In this example, OriginUserRepository declares the custom function getOriginUser, and the SQL it executes to read data is written in a Markdown file. As follows:

getOriginUser
===
* query user data from the source table

select * from test_user

Similarly, insertUser is the name of the statement used to insert data:

insertUser
===
* insert user data

insert into test_user (id,name,phone,title,email,gender,date_of_birth
    ,sys_create_time,sys_create_user,sys_update_time,sys_update_user)
values (#id#,#name#,#phone#,#title#,#email#,#gender#,#dateOfBirth#
    ,#sysCreateTime#,#sysCreateUser#,#sysUpdateTime#,#sysUpdateUser#)

As you can see from the SQL above, this is no different from SQL as we usually write it, except that ## wraps each parameter, i.e. a field of the User entity.
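To illustrate the idea behind the ## markers (a conceptual sketch only, not BeetlSQL's actual parser), each #field# placeholder effectively becomes a JDBC ? parameter bound from the corresponding entity field:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Conceptual illustration only; BeetlSQL's real parser is far more capable.
// Turns "#field#" placeholders into JDBC "?" markers and records the field
// names, in order, that would be bound from the entity.
public class PlaceholderDemo {
    public static String toJdbcSql(String sql, List<String> fieldNames) {
        Matcher m = Pattern.compile("#(\\w+)#").matcher(sql);
        StringBuffer out = new StringBuffer();
        while (m.find()) {
            fieldNames.add(m.group(1));   // e.g. "dateOfBirth"
            m.appendReplacement(out, "?");
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        List<String> names = new ArrayList<>();
        String jdbc = toJdbcSql("insert into test_user (id,name) values (#id#,#name#)", names);
        System.out.println(jdbc);   // insert into test_user (id,name) values (?,?)
        System.out.println(names);  // [id, name]
    }
}
```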

4.4 Writing the Read component ItemReader

With the configuration and DAO classes above, we can now read and write data. Using OriginUserRepository, we can implement Spring Batch's ItemReader: the data is read into memory once and then returned item by item from read(). As follows:

@Slf4j
public class UserItemReader implements ItemReader<User> {
    protected List<User> items;

    protected Map<String, Object> params;

    @Autowired
    private OriginUserRepository originUserRepository;

    @Override
    public User read() throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException {
        if (Objects.isNull(items)) {
            // execute the SQL defined in user.md via BeetlSQL
            items = originUserRepository.getOriginUser(params);
            if (items.size() > 0) {
                return items.remove(0);
            }
        } else {
            if (!items.isEmpty()) {
                return items.remove(0);
            }
        }
        return null;
    }

    public Map<String, Object> getParams() {
        return params;
    }

    public void setParams(Map<String, Object> params) {
        this.params = params;
    }
}

Description:

  • originUserRepository.getOriginUser executes the getOriginUser query statement in user.md.
  • After the query, the data is held in the List<User>, and read() returns its items one at a time; when all have been returned, null signals completion.
  • BeetlSQL supports passing query parameters via a Map, though this example does not currently use any.
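The read() contract above (return one item per call, then null when exhausted) can be isolated from the framework for illustration. A framework-free stand-in for UserItemReader's drain logic (hypothetical names, sketch only):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.function.Supplier;

// Framework-free stand-in for UserItemReader's logic: run the query once on
// the first read(), then hand back one item per call and null when done.
public class ListDrainReader<T> {
    private List<T> items;
    private final Supplier<List<T>> query;

    public ListDrainReader(Supplier<List<T>> query) {
        this.query = query;
    }

    public T read() {
        if (Objects.isNull(items)) {
            items = new ArrayList<>(query.get()); // first call: load everything
        }
        return items.isEmpty() ? null : items.remove(0);
    }
}
```

Spring Batch keeps calling read() to fill each chunk and ends the step when it sees null. Note that loading the whole table into memory like this is fine for small tables, but a cursor- or page-based reader scales better for large ones.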

4.5 Writing the write component ItemWriter

After the data is read, use TargetUserRepository and the insertUser statement written above to insert it into the target table. As shown below.

public class UserItemWriter implements ItemWriter<User> {
    @Autowired
    private TargetUserRepository targetUserRepository;

    @Override
    public void write(List<? extends User> items) throws Exception {
        targetUserRepository.getSQLManager().updateBatch("user.insertUser", items);
    }
}

Description:

  • SQLManager's updateBatch writes the data in batches.
  • In user.insertUser, user is the name of the Markdown file (which is also the entity name) and insertUser is the SQL statement written above.

4.6 Assembling a Complete Task

Create BeetlsqlBatchConfig.java as the Spring Batch task configuration.

4.6.1 Injecting Read and write Components

Using the reader and writer already written above, register them with the @Bean annotation as follows:

@Bean
public ItemReader beetlsqlItemReader() {
    UserItemReader userItemReader = new UserItemReader();
    Map<String, Object> params = CollUtil.newHashMap();
    userItemReader.setParams(params);
    return userItemReader;
}

@Bean
public ItemWriter beetlsqlWriter() {
    return new UserItemWriter();
}

4.6.2 Assembling the Job

Step and Job are used to complete the task configuration as follows:

@Bean
public Job beetlsqlJob(Step beetlsqlStep,JobExecutionListener beetlsqlListener){
    String funcName = Thread.currentThread().getStackTrace()[1].getMethodName();
    return jobBuilderFactory.get(funcName)
            .listener(beetlsqlListener)
            .flow(beetlsqlStep)
            .end().build();
}
@Bean
public Step beetlsqlStep(ItemReader beetlsqlItemReader ,ItemProcessor beetlsqlProcessor
        ,ItemWriter beetlsqlWriter){
    String funcName = Thread.currentThread().getStackTrace()[1].getMethodName();
    return stepBuilderFactory.get(funcName)
            .<User,User>chunk(10)
            .reader(beetlsqlItemReader)
            .processor(beetlsqlProcessor)
            .writer(beetlsqlWriter)
            .build();
}
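The beetlsqlProcessor and beetlsqlListener beans wired in above come from the previous article's example and are not repeated here. For illustration, Spring Batch's ItemProcessor<I, O> is a single-method interface; a framework-free stand-in showing the shape of such a processor (the trimming transformation here is made up for the example) could be:

```java
// Framework-free illustration of the processor's role in the chunk flow.
// Spring Batch's org.springframework.batch.item.ItemProcessor<I, O> has the
// same single-method shape; the trimming transformation here is hypothetical.
interface SimpleProcessor<I, O> {
    O process(I item) throws Exception;
}

public class TrimNameProcessor implements SimpleProcessor<String, String> {
    @Override
    public String process(String item) {
        // in Spring Batch, returning null filters the item out of the chunk
        return item == null ? null : item.trim();
    }
}
```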

4.7 Testing

Referring to Db2DbJobTest from the previous article, write a BeetlsqlJobTest that launches beetlsqlJob and verifies that the data arrives in the target table; the executed SQL statements can be seen in the test log.

From the use of BeetlSQL above, several benefits are apparent:

  • There is no need to write a RowMapper yourself; mapping the data is easier.
  • SQL statements are written in Markdown files, so modifying them is more flexible.
  • SQL statement execution output is clearer in the log.

5. Summary

This article reworks the database-to-database example to use Spring Batch with BeetlSQL for multi-data-source reads and writes, making database access simpler, more flexible, and clearer. Hopefully it is helpful for readers who want to use Spring Batch and are also learning about BeetlSQL.