Hmily: high-concurrency transaction processing

Let me answer some of your questions!

1. What about Hmily's performance?

A: Hmily hooks into your RPC methods through an AOP aspect: when an RPC call is made, it simply saves a log (asynchronously, via the Disruptor) and passes your parameters along. Confirm and cancel are now also invoked asynchronously, so the overall performance is essentially that of your RPC framework itself. Remember that Hmily does not produce transactions; Hmily is merely a porter for distributed transactions. An earlier version of Hmily did take a lock in the AOP aspect, which degraded performance, as noted in the Spring Cloud China article. That has since been fixed and everything is now asynchronous. In fairness, that benchmark was not very reasonable either: it was a load-test demo running entirely on default configuration. Below I will show you how to squeeze more performance out of Hmily.
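For context, the aspect only intercepts methods that are explicitly marked for TCC. The following is a minimal sketch of a participant service, assuming the @Tcc annotation with confirmMethod/cancelMethod attributes from the com.hmily.tcc package line used in this article (attribute names may differ between versions); the business logic is a placeholder.

    import java.math.BigDecimal;
    // Assumed import for the Hmily TCC annotation in the com.hmily.tcc line; verify against your version.
    import com.hmily.tcc.annotation.Tcc;

    public class AccountService {

        // Try phase: the Hmily aspect around this method publishes a log asynchronously
        // through the Disruptor and forwards the call parameters, then the business code runs.
        @Tcc(confirmMethod = "confirmPayment", cancelMethod = "cancelPayment")
        public boolean tryPayment(String userId, BigDecimal amount) {
            // reserve the resource, e.g. freeze the amount on the account
            return true;
        }

        // Confirm phase: invoked asynchronously by Hmily once every try call has succeeded.
        public boolean confirmPayment(String userId, BigDecimal amount) {
            // actually deduct the frozen amount
            return true;
        }

        // Cancel phase: invoked asynchronously on failure, or later by the recovery task.
        public boolean cancelPayment(String userId, BigDecimal amount) {
            // release the frozen amount
            return true;
        }
    }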

2. How does Hmily handle RPC call timeouts?

A: Hmily supports the case where an RPC call times out in a distributed environment. For example, if the Dubbo timeout is set to 100 ms and your method actually takes 140 ms, the method itself still executes successfully, but from the caller's point of view it has failed, so a rollback is required. What Hmily does is this: the caller considers the participant failed and does not include it in its rollback call chain, so the timed-out RPC participant has to roll itself back. A scheduled task notices that the participant's log is still in the try phase and invokes the cancel method, rolling it back to eventual consistency.
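To make that concrete, here is an illustrative sketch of the recovery rule, assuming a delay window and retry limit matching the recoverDelayTime and retryMax settings described later; the class and method names are hypothetical, not Hmily internals.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.List;

    // Illustrative only: a simplified view of what the scheduled recovery task does for
    // a participant whose caller timed out. These names are hypothetical, not Hmily's code.
    public class RecoverySketch {

        static final Duration RECOVER_DELAY = Duration.ofSeconds(120); // recoverDelayTime
        static final int RETRY_MAX = 3;                                // retryMax

        interface LogStore {
            List<TccLog> findLogsStillInTryPhase();  // logs whose status never left the try phase
            void increaseRetry(TccLog log);
        }

        static class TccLog {
            String transId;
            int retriedCount;
            Instant created;
        }

        void recover(LogStore store) {
            for (TccLog log : store.findLogsStillInTryPhase()) {
                boolean oldEnough = log.created.plus(RECOVER_DELAY).isBefore(Instant.now());
                // Still in the try phase after the delay window: the caller gave up on us
                // and excluded us from its rollback chain, so we roll back ourselves.
                if (oldEnough && log.retriedCount < RETRY_MAX) {
                    invokeCancel(log);               // dispatch to the participant's cancel method
                    store.increaseRetry(log);
                }
            }
        }

        void invokeCancel(TccLog log) { /* left out: call the configured cancel handler */ }
    }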

3. Does Hmily support cluster deployment? And how does scheduled-task log recovery behave in a clustered environment?

A: Hmily is embedded in your application as an AOP aspect, so it naturally supports clustering. Conflicting scheduled recovery in a clustered environment is virtually a non-issue, unless every node of your cluster crashes at the same time. Even then, the log carries a version field, and during recovery a node only proceeds if its version-guarded update succeeds.
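As an illustration of that version guard (assuming a MySQL log store; the table and column names below are made up for the example), the update acts as an optimistic lock, so at most one node wins the right to recover a given log.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    // Illustrative only: how a version column turns log recovery into an optimistic lock,
    // so two nodes recovering the same log cannot both "win". Table/column names are assumed.
    public class VersionGuardSketch {

        /** Returns true only for the node whose compare-and-set on the version succeeds. */
        static boolean tryAcquire(Connection conn, String transId, int expectedVersion) throws SQLException {
            String sql = "UPDATE tcc_transaction SET version = version + 1 "
                       + "WHERE trans_id = ? AND version = ?";
            try (PreparedStatement ps = conn.prepareStatement(sql)) {
                ps.setString(1, transId);
                ps.setInt(2, expectedVersion);
                // 1 row updated => this node owns the recovery; 0 => another node got there first.
                return ps.executeUpdate() == 1;
            }
        }
    }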

4. Hmily saves logs asynchronously, so in extreme cases (the code has just reached that line and then the JVM exits, the power goes out, etc.) the log is not saved.

Answer: anyone raising this has clearly either not read the source code or misread it. In the AOP aspect the log is saved asynchronously first, with the state PRE_TRY; only after the try execution completes is it updated to the try state. Yes, the scenario you describe is possible if you cut the power, kill the service while debugging, and so on. (Believe it or not, under those conditions I can break MySQL transactions too.) All I can say is: don't spend enormous effort on such freak accidents; the best solution is not to solve them. Hmily is tuned for high concurrency; a small sketch of the log-state ordering follows below. The rest of this article is mainly for readers who are already familiar with Hmily, but it doesn't matter if you are not: just go to GitHub and read the documentation. Hmily supports Spring bean XML configuration as well as Spring Boot starter YML configuration; the XML configuration we used is shown after the sketch.
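Here is that ordering in sketch form; the names are hypothetical and only illustrate the sequence, not Hmily's actual internals.

    public class AspectOrderingSketch {

        enum Status { PRE_TRY, TRYING }

        interface Invocation { Object proceed() throws Throwable; }

        Object around(Invocation invocation) throws Throwable {
            // 1. Publish a PRE_TRY log to the Disruptor (asynchronous, off the hot path).
            publishLog(Status.PRE_TRY);

            // 2. Run the real try method.
            Object result = invocation.proceed();

            // 3. Only after the try succeeds is the log promoted to the try state.
            //    A crash before this point leaves either no log or a PRE_TRY log,
            //    and neither one triggers confirm/cancel recovery.
            publishLog(Status.TRYING);
            return result;
        }

        void publishLog(Status status) { /* hand off to the asynchronous Disruptor consumers */ }
    }

Now, the XML configuration used in our load test: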

 <bean id="hmilyTransactionBootstrap" class="com.hmily.tcc.core.bootstrap.HmilyTransactionBootstrap">
        <property name="serializer" value="kryo"/>
        <property name="recoverDelayTime" value="120"/>
        <property name="retryMax" value="3"/>
        <property name="loadFactor" value="2"/>
        <property name="scheduledDelay" value="120"/>
        <property name="scheduledThreadMax" value="4"/>
        <property name="bufferSize" value="4096"/>
        <property name="consumerThreads" value="32"/>
        <property name="started" value="false"/>
        <property name="asyncThreads" value="32"/>
        <property name="repositorySupport" value="db"/>
        <property name="tccDbConfig">
            <bean class="com.hmily.tcc.common.config.TccDbConfig">
                <property name="url"
                          value="JDBC: mysql: / / 192.168.1.98:3306 / TCC? useUnicode=true&amp;characterEncoding=utf8"/>
                <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
                <property name="username" value="root"/>
                <property name="password" value="123456"/>
            </bean>
        </property>
    </bean>
  • Serializer: I recommend Kryo here; Hessian, Protostuff, and JDK serialization are also supported in Hmily. In our tests: Kryo > Hessian > Protostuff > JDK.

  • RecoverDelayTime: the delay of the scheduled recovery task, in seconds. The default is 120. This value should be greater than the timeout configured for your RPC calls.

  • RetryMax: the maximum number of retries. The default is 3. When your service goes down, the scheduled task will attempt your cancel or confirm at most retryMax times.

  • BufferSize: the ring buffer size of the Disruptor, which can be increased for high concurrency. Note that it must be a power of 2.

  • ConsumerThreads: the number of Disruptor consumer threads, which can be increased for high concurrency.

  • Started: set this property to true on the transaction initiator; on the participants it is false.

  • AsyncThreads: the size of the thread pool that executes confirm and cancel asynchronously. Increase it if concurrency is high.

  • Next comes the most important part of our load test: transaction log storage. MongoDB is recommended. In our tests: MongoDB > Redis cluster > MySQL > ZooKeeper.

  • If you use MongoDB to store logs, set the URL to that of your MongoDB cluster:

       <property name="repositorySupport" value="mongodb"/>
        <property name="tccMongoConfig">
            <bean class="com.hmily.tcc.common.config.TccMongoConfig">
                <property name="mongoDbUrl"  value="192.168.1.68:27017"/>
                <property name="mongoDbName" value="happylife"/>
                <property name="mongoUserName" value="xiaoyu"/>
                <property name="mongoUserPwd" value="123456"/>
            </bean>
        </property>
  • If you use Redis to store logs, the configuration is as follows:
  • Redis single node:
    <property name="repositorySupport" value="redis" />
    <property name="tccRedisConfig">
        <bean class="com.hmily.tcc.common.config.TccRedisConfig">
            <property name="hostName"
                      value="192.168.1.68"/>
            <property name="port" value="6379"/>
            <property name="password" value=""/>
        </bean>
    </property>
  • Redis Sentinel mode cluster:

<property name="repositorySupport" value="redis"/>
 <property name="tccRedisConfig">
     <bean class="com.hmily.tcc.common.config.TccRedisConfig">
         <property name="masterName" value="aaa"/>
         <property name="sentinel" value="true"/>
         <property name="sentinelUrl" value="192.168.1.91:26379; 192.168.1.92:26379; 192.168.1.93:26379"/>
         <property name="password" value="123456"/>
     </bean>
 </property>
  • Redis cluster:
<property name="repositorySupport" value="redis"/>
 <property name="tccRedisConfig">
     <bean class="com.hmily.tcc.common.config.TccRedisConfig">
         <property name="cluster" value="true"/>
         <property name="clusterUrl" value="192.168.1.91:26379; 192.168.1.92:26379; 192.168.1.93:26379"/>
         <property name="password" value="123456"/>
     </bean>
 </property>
  • If you use ZooKeeper to store logs, the configuration is as follows:
 <property name="repositorySupport" value="zookeeper"/>
 <property name="tccZookeeperConfig">
     <bean class="om.hmily.tcc.common.config.TccZookeeperConfig">
         <property name="host"  value="192.168.1.73:2181"/>
         <property name="sessionTimeOut" value="100000"/>
         <property name="rootPath" value="/tcc"/>
     </bean>
 </property>
  • The database configuration has already been shown above, and I will not cover file storage here.
  • That's it for today. One final note: with just a few lines of configuration, you can comfortably handle highly concurrent distributed transactions!