
Source: crossoverJie

Preface

The design of a seckill (flash-sale) architecture was covered in java-interview before; this time we implement it based on that theory.

This article improves performance step by step until it can handle a concurrent seckill. It is a long read, so grab a seat and settle in ^_^

All the code covered in this article:

  • Github.com/crossoverJi…

  • Github.com/crossoverJi…

First, take a look at the final architecture diagram:

Let’s briefly walk through the request flow based on this diagram, because it stays the same no matter how much we optimize later.

  • The front-end request hits the Web layer; the corresponding code is the Controller.

  • The real inventory checks, order placement, and other requests are then sent to the Service layer (RPC calls still use Dubbo, updated to the latest version; we won't go into Dubbo's details this time — if you are interested, see the earlier posts on distributed architecture based on Dubbo).

  • The Service layer then persists the data and places the order.

No restrictions

In fact, setting the seckill scenario aside, a normal ordering flow can be roughly divided into the following steps:

  • Check the inventory

  • Deduct the inventory

  • Create the order

  • Pay

Based on the architecture above, we have the following implementation:

Take a look at the actual project structure:

As before:

  • An API module provides the interface that the Service layer implements and the Web layer consumes.

  • The Web layer is simply a SpringMVC application.

  • The Service layer does the actual data persistence.

  • ssm-seconds-kill-order-consumer is the Kafka consumer mentioned later.

The database also has only two simple tables to simulate ordering:

```sql
CREATE TABLE `stock` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(50) NOT NULL DEFAULT '' COMMENT 'name',
  `count` int(11) NOT NULL COMMENT 'stock',
  `sale` int(11) NOT NULL COMMENT 'sold',
  `version` int(11) NOT NULL COMMENT 'optimistic lock, version number',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;

CREATE TABLE `stock_order` (
  `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
  `sid` int(11) NOT NULL COMMENT 'stock id',
  `name` varchar(30) NOT NULL DEFAULT '' COMMENT 'product name',
  `create_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'create time',
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=55 DEFAULT CHARSET=utf8;
```

Web layer controller implementation:

```java
@Autowired
private StockService stockService;

@Autowired
private OrderService orderService;

@RequestMapping("/createWrongOrder/{sid}")
@ResponseBody
public String createWrongOrder(@PathVariable int sid) {
    logger.info("sid=[{}]", sid);
    int id = 0;
    try {
        id = orderService.createWrongOrder(sid);
    } catch (Exception e) {
        logger.error("Exception", e);
    }
    return String.valueOf(id);
}
```

Here the Web layer, acting as a consumer, calls the Dubbo service provided by OrderService.

Service layer, OrderService implementation:

First, the implementation of the API (the API module exposes the outward-facing interface):

```java
@Service
public class OrderServiceImpl implements OrderService {

    @Resource(name = "DBOrderService")
    private com.crossoverJie.seconds.kill.service.OrderService orderService;

    @Override
    public int createWrongOrder(int sid) throws Exception {
        return orderService.createWrongOrder(sid);
    }
}
```

This simply delegates to the implementation in DBOrderService, which does the actual persistence, i.e. writes to the database.

DBOrderService implementation:

```java
@Transactional(rollbackFor = Exception.class)
@Service(value = "DBOrderService")
public class OrderServiceImpl implements OrderService {

    @Resource(name = "DBStockService")
    private com.crossoverJie.seconds.kill.service.StockService stockService;

    @Autowired
    private StockOrderMapper orderMapper;

    @Override
    public int createWrongOrder(int sid) throws Exception {
        // Check stock
        Stock stock = checkStock(sid);
        // Deduct stock
        saleStock(stock);
        // Create the order
        int id = createOrder(stock);
        return id;
    }

    private Stock checkStock(int sid) {
        Stock stock = stockService.getStockById(sid);
        if (stock.getSale().equals(stock.getCount())) {
            throw new RuntimeException("Out of stock");
        }
        return stock;
    }

    private int saleStock(Stock stock) {
        stock.setSale(stock.getSale() + 1);
        return stockService.updateStockById(stock);
    }

    private int createOrder(Stock stock) {
        StockOrder order = new StockOrder();
        order.setSid(stock.getId());
        order.setName(stock.getName());
        int id = orderMapper.insertSelective(order);
        return id;
    }
}
```

The stock was pre-initialized to 10 units.

Manually invoking /createWrongOrder/1 shows the interface works:

Inventory list:

The order list:

Everything seems to be fine and the data is fine. But when testing concurrently with JMeter:

The test configuration: 300 concurrent threads, run for two rounds. Looking at the results in the database:

The requests all responded successfully and the stock was indeed deducted, but 124 order records were generated. This is the classic oversell phenomenon.

Calling the interface manually now returns an out-of-stock error, but it is too late.
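The oversell above is a classic lost update: two requests both pass the stock check before either one writes, so both deduct. A minimal standalone sketch (mine, not from the project) makes the interleaving deterministic:

```java
// Minimal sketch of why the naive check-then-deduct logic oversells.
// Two "concurrent" requests are interleaved by hand: both read the same
// stock state, both pass the check, and both deduct.
public class OversellDemo {
    static final int COUNT = 10; // total stock

    // Simulates two concurrent requests racing on the naive logic
    static int run() {
        int sale = 9;                    // 9 of 10 already sold
        boolean aPassed = sale < COUNT;  // request A checks: 9 < 10, passes
        boolean bPassed = sale < COUNT;  // request B checks before A writes: also passes
        if (aPassed) sale = sale + 1;    // A deducts -> 10
        if (bPassed) sale = sale + 1;    // B deducts -> 11: oversold
        return sale;
    }

    public static void main(String[] args) {
        System.out.println("sale=" + run() + " count=" + COUNT); // prints sale=11 count=10
    }
}
```

With 300 real threads the same race simply happens many more times, which is how 10 units of stock turned into 124 orders.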

Optimistic lock update

How do we avoid the phenomenon above? The simplest approach is naturally an optimistic lock. Let's look at the concrete implementation:

In fact, nothing else has changed much, mainly the Service layer.

```java
@Override
public int createOptimisticOrder(int sid) throws Exception {
    // Check stock
    Stock stock = checkStock(sid);
    // Deduct stock with an optimistic lock
    saleStockOptimistic(stock);
    // Create the order
    int id = createOrder(stock);
    return id;
}

private void saleStockOptimistic(Stock stock) {
    int count = stockService.updateStockByOptimistic(stock);
    if (count == 0) {
        throw new RuntimeException("Concurrent stock update failed");
    }
}
```

Corresponding XML:

```xml
<update id="updateByOptimistic" parameterType="com.crossoverJie.seconds.kill.pojo.Stock">
    update stock
    <set>
        sale = sale + 1,
        version = version + 1,
    </set>
    WHERE id = #{id,jdbcType=INTEGER}
    AND version = #{version,jdbcType=INTEGER}
</update>
```

Stress-testing /createOptimisticOrder/1 again:

This time both the stock and the orders check out.

Viewing logs:

The effect is that many of the concurrent requests get an error response.

Improved throughput

To further improve the throughput and response time of the seckill, both the Web and Service layers were scaled horizontally.

  • The Web leverages Nginx for load.

  • Service is also multiple applications.

When using JMeter to test, you can see the effect intuitively.

Since I ran the test on a low-bandwidth, low-spec Aliyun server with all the applications on the same machine, the performance advantage didn't fully materialize (Nginx load forwarding also adds network overhead).

A simple CI with shell scripts

Since the application is deployed in multiple copies, anyone who has manually packaged, uploaded, and restarted builds for testing knows the pain.

I didn't have the energy to build a complete CI/CD pipeline this time, so I just wrote a simple script for automatic deployment, hoping it gives some inspiration to students with no experience in this area:

Build web

```shell
#!/bin/bash
# build web consumer app
appname="consumer"
echo "input=$appname"
PID=$(ps -ef | grep $appname | grep -v grep | awk '{print $2}')
# kill pid
for var in ${PID[@]};
do
    echo "loop pid=$var"
    kill -9 $var
done
echo "kill $appname success"

cd ..
git pull
cd SSM-SECONDS-KILL
mvn -Dmaven.test.skip=true clean package
echo "build war success"

cp /home/crossoverJie/SSM/SSM-SECONDS-KILL/SSM-SECONDS-KILL-WEB/target/SSM-SECONDS-KILL-WEB-2.2.0-SNAPSHOT.war /home/crossoverJie/tomcat/tomcat-dubbo-consumer-8083/webapps
echo "cp tomcat-dubbo-consumer-8083/webapps ok!"
cp /home/crossoverJie/SSM/SSM-SECONDS-KILL/SSM-SECONDS-KILL-WEB/target/SSM-SECONDS-KILL-WEB-2.2.0-SNAPSHOT.war /home/crossoverJie/tomcat/tomcat-dubbo-consumer-7083-slave/webapps
echo "cp tomcat-dubbo-consumer-7083-slave/webapps ok!"

sh /home/crossoverJie/tomcat/tomcat-dubbo-consumer-8083/bin/startup.sh
echo "tomcat-dubbo-consumer-8083/bin/startup.sh success"
sh /home/crossoverJie/tomcat/tomcat-dubbo-consumer-7083-slave/bin/startup.sh
echo "tomcat-dubbo-consumer-7083-slave/bin/startup.sh success"
echo "start $appname success"
```

Build the Service

```shell
#!/bin/bash
# build the service provider
appname="provider"
echo "input=$appname"
PID=$(ps -ef | grep $appname | grep -v grep | awk '{print $2}')
#if [ $? -eq 0 ]; then
#    echo "process id:$PID"
#else
#    echo "process $appname not exist"
#    exit
#fi
# kill pid
for var in ${PID[@]};
do
    echo "loop pid=$var"
    kill -9 $var
done
echo "kill $appname success"

cd ..
git pull
cd SSM-SECONDS-KILL
mvn -Dmaven.test.skip=true clean package
echo "build war success"

cp /home/crossoverJie/SSM/SSM-SECONDS-KILL/SSM-SECONDS-KILL-SERVICE/target/SSM-SECONDS-KILL-SERVICE-2.2.0-SNAPSHOT.war /home/crossoverJie/tomcat/tomcat-dubbo-provider-8080/webapps
echo "cp tomcat-dubbo-provider-8080/webapps ok!"
cp /home/crossoverJie/SSM/SSM-SECONDS-KILL/SSM-SECONDS-KILL-SERVICE/target/SSM-SECONDS-KILL-SERVICE-2.2.0-SNAPSHOT.war /home/crossoverJie/tomcat/tomcat-dubbo-provider-7080-slave/webapps
echo "cp tomcat-dubbo-provider-7080-slave/webapps ok!"

sh /home/crossoverJie/tomcat/tomcat-dubbo-provider-8080/bin/startup.sh
echo "tomcat-dubbo-provider-8080/bin/startup.sh success"
sh /home/crossoverJie/tomcat/tomcat-dubbo-provider-7080-slave/bin/startup.sh
echo "tomcat-dubbo-provider-7080-slave/bin/startup.sh success"
echo "start $appname success"
```

Then, whenever there's an update, I only need to execute these two scripts to build and deploy automatically. It's all basic Linux commands, which I believe everyone can follow.

Optimistic lock update + distributed limiting

The results above look fine, but they're actually far from it. Simulating only 300 concurrent requests is no problem — but what about 3,000, 30,000, or 3,000,000?

Scaling horizontally can support more requests, but can we solve the problem with the fewest resources? A careful analysis reveals:

If there are only 10 items in stock, then no matter how many people come to buy, at most 10 of them can actually place an order.

So 99 percent of those requests are invalid.

It’s no secret that for most applications the database is the straw that breaks the camel’s back. Druid’s monitoring shows what happens to the database connections during the stress test:

Because the Service runs as two applications.

The database also ends up with more than 20 connections.

How do we optimize? Distributed rate limiting comes to mind naturally: keep the concurrency within a manageable range and fail fast beyond it, protecting the system as much as possible.
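The component discussed below does this counting atomically in Redis with a Lua script (one counter per second, shared by all application instances). Purely to illustrate the semantics, here is an in-process fixed-window sketch — the class and its injected clock parameter are my own, not part of the library:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative fixed-window rate limiter: at most `limit` requests per
// one-second window; everything beyond that fails fast. The real tool keeps
// this counter in Redis (via an atomic Lua script) so that all instances
// share one limit; this in-process version only sketches the semantics.
public class FixedWindowLimiter {
    private final int limit;
    private final Map<Long, Integer> windows = new HashMap<>();

    public FixedWindowLimiter(int limit) {
        this.limit = limit;
    }

    /** @param nowMillis current time; the window key is the current second */
    public synchronized boolean limit(long nowMillis) {
        long window = nowMillis / 1000;
        int used = windows.getOrDefault(window, 0);
        if (used >= limit) {
            return false; // over the limit: fail fast
        }
        windows.put(window, used + 1);
        return true;
    }

    public static void main(String[] args) {
        FixedWindowLimiter limiter = new FixedWindowLimiter(3);
        for (int i = 1; i <= 5; i++) {
            System.out.println("request " + i + " allowed=" + limiter.limit(1000));
        }
        // A new second opens a fresh window
        System.out.println("next window allowed=" + limiter.limit(2000));
    }
}
```

The trade-off of a fixed window is a possible burst at the window boundary, but for protecting a seckill endpoint the fail-fast behavior is what matters.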

distributed-redis-tool ⬆️ v1.0.3

For this purpose, github.com/crossoverJi… received a small upgrade.

Since every request goes through Redis once this component is added, Redis resources have to be used sparingly.

API update

The modified API is as follows:

```java
@Configuration
public class RedisLimitConfig {

    private Logger logger = LoggerFactory.getLogger(RedisLimitConfig.class);

    @Value("${redis.limit}")
    private int limit;

    @Autowired
    private JedisConnectionFactory jedisConnectionFactory;

    @Bean
    public RedisLimit build() {
        RedisLimit redisLimit = new RedisLimit.Builder(jedisConnectionFactory, RedisToolsConstant.SINGLE)
                .limit(limit)
                .build();
        return redisLimit;
    }
}
```

The builder now takes a JedisConnectionFactory, so it has to be used together with Spring.

You also declare at initialization whether the Redis you pass in is a cluster or a single instance (a cluster is strongly recommended, since even with rate limiting there is still some pressure on Redis).

Rate-limiting implementation

Now that the API has been updated, the implementation will have to change:

```java
/**
 * limit traffic
 * @return true if the request is allowed
 */
public boolean limit() {
    //get connection
    Object connection = getConnection();
    Object result = limitRequest(connection);
    if (FAIL_CODE != (Long) result) {
        return true;
    } else {
        return false;
    }
}

private Object limitRequest(Object connection) {
    Object result = null;
    String key = String.valueOf(System.currentTimeMillis() / 1000);
    if (connection instanceof Jedis) {
        result = ((Jedis) connection).eval(script, Collections.singletonList(key), Collections.singletonList(String.valueOf(limit)));
        ((Jedis) connection).close();
    } else {
        result = ((JedisCluster) connection).eval(script, Collections.singletonList(key), Collections.singletonList(String.valueOf(limit)));
        try {
            ((JedisCluster) connection).close();
        } catch (IOException e) {
            logger.error("IOException", e);
        }
    }
    return result;
}

private Object getConnection() {
    Object connection;
    if (type == RedisToolsConstant.SINGLE) {
        RedisConnection redisConnection = jedisConnectionFactory.getConnection();
        connection = redisConnection.getNativeConnection();
    } else {
        RedisClusterConnection clusterConnection = jedisConnectionFactory.getClusterConnection();
        connection = clusterConnection.getNativeConnection();
    }
    return connection;
}
```

For a plain SpringMVC application, just annotate the endpoint with @SpringControllerLimit(errorCode = 200).

The actual usage is as follows:
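As a hypothetical sketch (the handler body here is mine, pieced together from the snippets above, not copied from the project), the annotation sits directly on the Web-layer endpoint:

```java
// Hypothetical usage sketch: the annotation mentioned above applied to a
// Web-layer endpoint. The handler body is illustrative, not the project's code.
@SpringControllerLimit(errorCode = 200)
@RequestMapping("/createOptimisticLimitOrderByRedis/{sid}")
@ResponseBody
public String createOptimisticLimitOrderByRedis(@PathVariable int sid) {
    int id = 0;
    try {
        id = orderService.createOptimisticOrder(sid);
    } catch (Exception e) {
        logger.error("Exception", e);
    }
    return String.valueOf(id);
}
```

Requests beyond the per-second limit should then be rejected early with the configured error code instead of ever reaching the Service layer.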

There are no updates on the Service side, and the database is still updated with optimistic locks.

Stress-testing /createOptimisticLimitOrderByRedis/1 again to see the effect:

First of all, the results were correct; second, the number of database connections and concurrent requests dropped noticeably.

Optimistic lock update + distributed traffic limiting + Redis cache

If you look closely at Druid’s monitoring data, you can see that this SQL has been queried several times:

In fact, this is the real-time stock query SQL, run mainly to determine whether there is still stock before each order.

This is also an optimization point.

This kind of data can live entirely in memory, where access is much faster than the database.

Since our application is distributed, in-heap caching is obviously not a good fit, and Redis is a good fit.

This time the main transformation is the Service layer:

  • Every stock query goes to Redis.

  • Redis is updated whenever stock is deducted.

  • The stock information has to be written into Redis in advance (manually or automatically).

The main code is as follows:

```java
@Override
public int createOptimisticOrderUseRedis(int sid) throws Exception {
    // Check stock, fetching it from Redis
    Stock stock = checkStockByRedis(sid);
    // Deduct stock with an optimistic lock, then update Redis
    saleStockOptimisticByRedis(stock);
    // Create the order
    int id = createOrder(stock);
    return id;
}

private Stock checkStockByRedis(int sid) throws Exception {
    Integer count = Integer.parseInt(redisTemplate.opsForValue().get(RedisKeysConstant.STOCK_COUNT + sid));
    Integer sale = Integer.parseInt(redisTemplate.opsForValue().get(RedisKeysConstant.STOCK_SALE + sid));
    if (count.equals(sale)) {
        throw new RuntimeException("Out of stock, currentCount=" + sale);
    }
    Integer version = Integer.parseInt(redisTemplate.opsForValue().get(RedisKeysConstant.STOCK_VERSION + sid));
    Stock stock = new Stock();
    stock.setId(sid);
    stock.setCount(count);
    stock.setSale(sale);
    stock.setVersion(version);
    return stock;
}

/**
 * Update the database with an optimistic lock, then update Redis
 * @param stock stock record
 */
private void saleStockOptimisticByRedis(Stock stock) {
    int count = stockService.updateStockByOptimistic(stock);
    if (count == 0) {
        throw new RuntimeException("Concurrent stock update failed");
    }
    // Increment the counters in Redis
    redisTemplate.opsForValue().increment(RedisKeysConstant.STOCK_SALE + stock.getId(), 1);
    redisTemplate.opsForValue().increment(RedisKeysConstant.STOCK_VERSION + stock.getId(), 1);
}
```

Stress-testing /createOptimisticLimitOrderByRedis/1 to see the actual effect:

The data again turned out fine, and both the database requests and the concurrency came down.
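One step the code above doesn't show is the pre-warm mentioned earlier: the stock row has to be in Redis before the sale starts. A self-contained sketch of that step (a plain Map stands in for Redis here, and the key prefixes are illustrative, modeled on the RedisKeysConstant usage above; in the project this would be redisTemplate.opsForValue().set(...) calls):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the pre-warm step: copy the stock row into the three keys the
// Service reads. A plain Map stands in for Redis so the sketch is runnable;
// the key prefixes are illustrative, not necessarily the project's constants.
public class StockPreWarm {
    static final String STOCK_COUNT = "STOCK_COUNT_";
    static final String STOCK_SALE = "STOCK_SALE_";
    static final String STOCK_VERSION = "STOCK_VERSION_";

    static Map<String, String> preWarm(int sid, int count, int sale, int version) {
        Map<String, String> redis = new HashMap<>();
        redis.put(STOCK_COUNT + sid, String.valueOf(count));
        redis.put(STOCK_SALE + sid, String.valueOf(sale));
        redis.put(STOCK_VERSION + sid, String.valueOf(version));
        return redis;
    }

    public static void main(String[] args) {
        // Load stock id 1: 10 total, 0 sold, version 0
        Map<String, String> redis = preWarm(1, 10, 0, 0);
        System.out.println(redis.get(STOCK_COUNT + 1)); // prints 10
    }
}
```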

Optimistic lock update + distributed traffic limiting + Redis cache + Kafka asynchronous

The final optimization: how to raise throughput and performance once more. All the examples above are synchronous requests, so we can use asynchrony to squeeze out more performance.

Here we asynchronize order writing and inventory updating, using Kafka for decoupling and queuing.

Every time a request passes the rate limiter, reaches the Service layer, and passes the stock check, the order information is sent to Kafka, and the request can return immediately.

The consumer program then persists the data to the database. Because the flow is asynchronous, the user eventually needs a callback or some other notification that the purchase has completed.

I won't paste more code here — the consumer program is essentially the previous Service layer logic rewritten, but using SpringBoot.

Interested friends can see:

Github.com/crossoverJi…
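To illustrate the hand-off without a running broker, here is a stand-in sketch (mine, not the project's Kafka code): a BlockingQueue plays the role of the Kafka topic; with real Kafka the enqueue would be a producer send and the drain a consumer poll loop in ssm-seconds-kill-order-consumer:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of the asynchronous hand-off: the request thread only validates and
// enqueues the order message, then returns immediately; a consumer thread
// drains the queue and persists the order in the background.
public class AsyncOrderDemo {
    static final BlockingQueue<Integer> topic = new LinkedBlockingQueue<>();

    // Web/Service side: after the stock check passes, just publish the sid
    static String placeOrder(int sid) {
        topic.offer(sid);  // "send to Kafka"; never blocks on an unbounded queue
        return "queued";   // respond to the user right away
    }

    public static void main(String[] args) throws Exception {
        // Consumer side: persists orders in the background
        Thread consumer = new Thread(() -> {
            try {
                int sid = topic.take();  // "poll from Kafka"
                System.out.println("persisted order for sid=" + sid);
                // ...then notify the user via callback/push that the buy succeeded
            } catch (InterruptedException ignored) {
            }
        });
        consumer.start();
        System.out.println(placeOrder(1)); // prints queued
        consumer.join();
    }
}
```

The request latency now only covers the enqueue, not the database write, which is where the throughput gain comes from.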

Conclusion

In fact, the optimizations above boil down to the following points:

  • Try to intercept requests upstream.

  • You can also limit traffic by UID.

  • Minimize requests falling to DB.

  • Take advantage of caching.

  • Synchronous operations are asynchronized.

  • Fail fast and early to protect the application.

Writing all these words wasn't easy — this is probably the longest thing I have ever written. In high school I couldn't even stretch an essay to 800 words, so you can imagine how hard it was 😂.

Welcome to discuss the above.

END
