This is the 9th day of my participation in the November Gwen Challenge. Check out the event details: The last Gwen Challenge 2021

In a seckill scenario, pay attention to the following questions:

  • What data does Redis need to cache in a seckill scenario?
  • How to ensure atomicity when updating cached data?

Analysis of the different stages in a seckill scenario

Seckill scenarios involve high concurrency, characterized by many reads and few writes. A database layer alone cannot withstand the instantaneous read traffic: reading data from disk is slow, and the connection count fills up quickly, which can crash the database.

A seckill event can be divided into three stages: before the seckill, during the seckill, and after the seckill. At each stage, Redis plays a different role.

Before the seckill activity

At this stage, users constantly refresh the product detail page, which produces an instantaneous spike in requests for that page. The solution at this stage is to make the page elements of the product detail page as static as possible, and then cache those static elements with a CDN or in the browser. In this way, the large number of requests before the seckill are served directly from the CDN or browser cache and never reach the server side, relieving the pressure on the server.

At this stage, the CDN and browser cache are sufficient to serve the requests; Redis is not yet required.
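As a minimal sketch of the browser/CDN caching idea above, assuming the static detail-page assets are served over HTTP, the origin can attach a Cache-Control header so that repeated refreshes are answered from the cache instead of the origin (the function name and the max-age value are illustrative assumptions, not from the article):

```python
# Sketch: response headers that let browsers and CDNs cache the static
# elements of the product detail page. The five-minute max-age is an
# assumption for illustration.

def cache_headers(max_age_seconds: int = 300) -> dict:
    """Build headers marking a static asset as cacheable by shared caches."""
    return {
        # "public" allows CDNs (shared caches) to store the response;
        # "max-age" caps how long it may be reused without revalidating.
        "Cache-Control": f"public, max-age={max_age_seconds}",
    }

print(cache_headers(300)["Cache-Control"])  # public, max-age=300
```

With headers like these on the static elements, the refresh storm before the seckill terminates at the cache layer, which is exactly the offloading described above.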

The seckill campaign begins

When the seckill starts, a large number of users click the seckill button, generating a flood of concurrent requests to query the inventory. Once a request finds stock remaining, the system deducts the inventory, then generates the actual order and moves on to subsequent processing such as payment and logistics.

The work at this stage is divided into three parts: inventory check, inventory deduction, and order processing.

Inventory deduction and order processing happen only when the inventory check finds stock remaining, so the greatest concurrent pressure falls on the inventory check.

To support the large number of highly concurrent inventory-check requests, Redis is used to hold the inventory for this link. Requests can then read the inventory directly from Redis and verify it there.

Inventory deduction also needs to be handled in the cache. If this operation were performed in the database instead, two problems would arise:

  • Additional overhead: after the database is updated, the new value must also be synchronized to Redis, which adds extra operational logic;
  • Overselling: the order quantity exceeds the actual stock. Because the database processes updates slowly, the remaining inventory is not refreshed in time; before the Redis cache catches up with the database, other concurrent requests read the stale inventory value and place more orders than there is stock.
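The overselling race described in the second bullet can be sketched in plain Python, with a deliberate delay standing in for the slow database update (the delay, thread count, and names are assumptions for illustration; no Redis is involved):

```python
import threading
import time

stock = 5   # actual inventory
sold = 0    # units actually ordered

def buy_without_atomicity(k: int) -> None:
    """Inventory check and deduction as two separate, non-atomic steps."""
    global stock, sold
    if stock >= k:           # check reads a value that is about to go stale
        time.sleep(0.05)     # stand-in for the slow database update
        stock -= k           # deduction lands after others already checked
        sold += k

# Ten concurrent buyers, but only five units of stock.
threads = [threading.Thread(target=buy_without_atomicity, args=(1,))
           for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sold)  # exceeds the initial stock of 5: oversell
```

Making the check and the deduction a single atomic step, as the article goes on to require, removes the window between reading the stock and writing it back.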

Therefore, inventory deduction needs to be done in Redis: as soon as the inventory check finds stock remaining, the deduction is made in Redis. And to prevent requests from reading stale inventory values, the inventory check and inventory deduction must be performed as a single atomic operation.

After the seckill activity

Requests drop off sharply after the seckill activity. A few users may try to return orders, but not many; these requests can be handled without Redis.

How Redis supports seckill scenarios

On Redis, the inventory check and inventory deduction are performed with either atomic operations or distributed locks.

The inventory check and inventory deduction cannot be covered by a single built-in Redis atomic command, so the two operations are made atomic with a Lua script.

The Lua script

```lua
-- KEYS[1]: the inventory hash for the item, with fields "total" and "ordered"
-- ARGV[1]: k, the quantity requested in this order
local counts = redis.call("HMGET", KEYS[1], "total", "ordered")
local total = tonumber(counts[1])
local ordered = tonumber(counts[2])
local k = tonumber(ARGV[1])
-- If the requested quantity plus the quantity already sold does not
-- exceed the total inventory, deduct and return k; otherwise return 0.
if ordered + k <= total then
    redis.call("HINCRBY", KEYS[1], "ordered", k)
    return k
end
return 0
```
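For illustration only, the Lua script's semantics can be mirrored in plain Python, with a dict standing in for the Redis hash and a lock standing in for the atomicity Redis gives a script while it runs (all names here are assumptions):

```python
import threading

# Stand-in for the Redis hash at KEYS[1]: total stock and units ordered.
item = {"total": 10, "ordered": 0}

# Redis executes a Lua script as one atomic unit; a lock mimics that here.
script_lock = threading.Lock()

def try_deduct(k: int) -> int:
    """Mirror of the script: check and deduct as one atomic step."""
    with script_lock:
        if item["ordered"] + k <= item["total"]:   # inventory check
            item["ordered"] += k                   # HINCRBY equivalent
            return k                               # quantity deducted
        return 0                                   # not enough stock left

print(try_deduct(3))   # 3
print(try_deduct(8))   # 0  (3 + 8 exceeds the total of 10)
```

Because the check and the increment happen inside one critical section, no caller can observe the stock between the two steps, which is the property the Lua script provides on Redis.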

When distributed locks are used to support the seckill scenario, a client must first acquire a distributed lock from Redis; only the client holding the lock may perform the inventory check and inventory deduction. As a result, most requests are blocked while contending for the lock, and the check and deduction done by the lock holder no longer need to be atomic operations. Since only one of the many clients can hold the lock at a time, mutual exclusion is guaranteed.

Pseudo-code for distributed locks

```
// key: the item ID; val: a unique ID for this client
key = itemID
val = clientUniqueID

// Apply for the distributed lock, with a timeout
lock = acquireLock(key, val, timeout)
if (lock == true) {
    // Holding the lock: deduct k units of stock
    availStock = DECR(key, k)
    if (availStock < 0) {
        // Not enough stock: release the lock and report failure
        releaseLock(key, val)
        return error
    } else {
        releaseLock(key, val)
        // order processing
    }
}
```
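As an illustration of the acquire/release semantics in the pseudocode above, here is a plain-Python mock: a dict stands in for Redis, acquisition mimics `SET key val NX PX timeout` (succeed only if the key is absent or expired), and release deletes the key only if the caller still owns it. All names are assumptions:

```python
import time

locks = {}  # key -> (owner_value, expiry_timestamp); stand-in for Redis

def acquire_lock(key: str, val: str, timeout_s: float) -> bool:
    """Mimic SET key val NX PX: take the lock only if free or expired."""
    now = time.monotonic()
    holder = locks.get(key)
    if holder is None or holder[1] <= now:      # free, or holder expired
        locks[key] = (val, now + timeout_s)
        return True
    return False

def release_lock(key: str, val: str) -> bool:
    """Delete the lock only if this client still owns it."""
    holder = locks.get(key)
    if holder is not None and holder[0] == val:
        del locks[key]
        return True
    return False

print(acquire_lock("item:1001", "client-A", 5.0))  # True
print(acquire_lock("item:1001", "client-B", 5.0))  # False: A holds it
release_lock("item:1001", "client-A")
print(acquire_lock("item:1001", "client-B", 5.0))  # True
```

On real Redis, acquisition is typically a single `SET key val NX PX timeout`, and release is done with a small Lua script that compares the stored value before deleting, so one client cannot release another client's lock.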

In the seckill scenario, Redis only provides the back-end support for the inventory check and inventory deduction; the other links need other kinds of support. The overall goal is to reduce the number of requests that reach the back-end system and relieve the pressure on its services. Common means include page staticization (CDN), request interception and flow control, cache-based inventory check and deduction, and message-queue-based order processing.
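The "message-queue-based order processing" mentioned above can be sketched with Python's standard-library queue: the seckill request path merely enqueues an order message and returns, while a worker consumes at the back end's own pace (`queue.Queue` stands in for a real broker, and the message fields are assumptions):

```python
import queue
import threading

order_queue = queue.Queue()   # stand-in for a real message broker
processed = []                # orders the back end has handled

def order_worker() -> None:
    """Consume order messages at the back end's own pace."""
    while True:
        order = order_queue.get()
        if order is None:          # sentinel: shut the worker down
            break
        processed.append(order)    # real code: payment, logistics, ...
        order_queue.task_done()

worker = threading.Thread(target=order_worker)
worker.start()

# The seckill request path: enqueue and return immediately.
for user_id in range(3):
    order_queue.put({"user": user_id, "item": "item:1001", "qty": 1})

order_queue.put(None)   # stop the worker
worker.join()
print(len(processed))   # 3
```

Decoupling order generation from order processing this way smooths the traffic spike: the seckill path stays fast while payment and logistics run asynchronously.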

In addition, to avoid affecting other business systems, it is advisable to isolate the seckill system from the regular business systems, including application isolation, deployment isolation, and data-storage isolation.