I. Project introduction

This article introduces a high-concurrency flash-sale (seckill) system built with SpringBoot. Besides basic functions such as login, browsing the product list, seckilling and placing orders, the project also implements caching, service degradation and rate limiting to cope with high concurrency.

II. Development tools

IntelliJ IDEA + Navicat + Git + Chrome

III. Development technology

Debugging tool: Postman

Backend technology: SpringBoot + MyBatis + MySQL

Middleware technology: Druid + Redis + RabbitMQ + Guava

IV. Seckill optimization ideas

  1. Intercept requests as far upstream as possible: a traditional seckill system collapses because requests pour straight into the back-end data layer, read-write lock contention becomes severe, and almost all requests time out. Although the incoming traffic is huge, the effective traffic of successfully placed orders is tiny. Rate limiting, degradation and similar measures can therefore cut database access to a minimum and protect the system.

  2. Make full use of the cache: seckill items are read far more often than they are written, so making full use of the cache greatly increases concurrency.

V. Core technology

1. Two rounds of MD5 password encryption

The password entered by the user is first combined with a fixed salt and hashed with MD5; that first hash is then combined with a randomly generated salt and hashed with MD5 a second time. The second hash, together with the random salt, is saved in the database.
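A minimal sketch of this two-pass hashing, assuming Spring's DigestUtils is available; the fixed-salt value, the salt-mixing order and the helper names are illustrative, not the project's actual constants:

```java
import org.springframework.util.DigestUtils;

import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class PasswordUtil {

    // Fixed salt shared with the front-end JS; the value here is only a placeholder.
    private static final String FIXED_SALT = "1a2b3c4d";

    private static String md5(String src) {
        return DigestUtils.md5DigestAsHex(src.getBytes(StandardCharsets.UTF_8));
    }

    // First pass: performed on the client so the plaintext password never crosses the network.
    public static String firstPass(String plainPassword) {
        return md5(FIXED_SALT + plainPassword);
    }

    // Second pass: performed on the server with a per-user random salt before the result is stored.
    public static String secondPass(String firstPassHash, String randomSalt) {
        return md5(randomSalt + firstPassHash);
    }

    public static void main(String[] args) {
        String randomSalt = UUID.randomUUID().toString().substring(0, 8);
        String dbPassword = secondPass(firstPass("123456"), randomSalt);
        // Both the second hash and the random salt end up in the user table.
        System.out.println(dbPassword + " / " + randomSalt);
    }
}
```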

Benefits:

  1. Prevents the plaintext password from being transmitted over the network
  2. Protects against database theft: even if the table is leaked, the original password cannot be reversed from the MD5 hash, giving double insurance

2. Distributed session sharing

If the account and password are correct, a UUID is generated as a unique token; the token is used as the key and the user information as the value to store a simulated session in Redis. At the same time, the token is written into a cookie to keep the login state.

Benefits: In a distributed cluster, each server's session information would otherwise have to be synchronized periodically, and the delay can leave sessions inconsistent. Storing session data centrally in Redis solves this problem.
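A simplified sketch of this token flow, assuming Spring Boot 2.x (javax.servlet) and StringRedisTemplate; the key prefix, expiry, cookie name, and the fact that the user object is already serialized to JSON are assumptions:

```java
import org.springframework.data.redis.core.StringRedisTemplate;

import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;
import java.time.Duration;
import java.util.UUID;

public class SessionService {

    private static final String TOKEN_PREFIX = "user:token:";          // assumed key prefix
    private static final Duration TOKEN_TTL = Duration.ofMinutes(30);  // assumed expiry

    private final StringRedisTemplate redisTemplate;

    public SessionService(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    // Called after the account and password check succeeds; userJson is the user serialized to JSON.
    public String login(String userJson, HttpServletResponse response) {
        // Token as key, user information as value: the "session" lives centrally in Redis.
        String token = UUID.randomUUID().toString();
        redisTemplate.opsForValue().set(TOKEN_PREFIX + token, userJson, TOKEN_TTL);

        // Keep the token in a cookie so later requests can look the session up again.
        Cookie cookie = new Cookie("token", token);
        cookie.setPath("/");
        cookie.setMaxAge((int) TOKEN_TTL.getSeconds());
        response.addCookie(cookie);
        return token;
    }
}
```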

3. JSR303 custom parameter validation

A JSR303 custom validator is used to verify the user account and password, separating the validation logic from the business code.
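As an illustration, such a custom constraint might look like the sketch below; the annotation name @IsMobile, the message text and the regex are assumptions modelled on a mobile-number login, not the project's exact code:

```java
import javax.validation.Constraint;
import javax.validation.ConstraintValidator;
import javax.validation.ConstraintValidatorContext;
import javax.validation.Payload;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.regex.Pattern;

// Custom JSR303 constraint: the annotated field must be a well-formed mobile number.
@Target(ElementType.FIELD)
@Retention(RetentionPolicy.RUNTIME)
@Constraint(validatedBy = IsMobileValidator.class)
@interface IsMobile {
    String message() default "mobile number format is invalid";
    Class<?>[] groups() default {};
    Class<? extends Payload>[] payload() default {};
}

// The validation logic lives here, outside the service layer.
class IsMobileValidator implements ConstraintValidator<IsMobile, String> {

    private static final Pattern MOBILE = Pattern.compile("^1\\d{10}$");

    @Override
    public boolean isValid(String value, ConstraintValidatorContext context) {
        return value != null && MOBILE.matcher(value).matches();
    }
}
```

The login DTO field then carries @IsMobile and the controller parameter carries @Valid, so the format check runs before any service method is entered.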

4. Unified global exception handling

All exceptions are intercepted and handled according to their type. When an exception occurs it is thrown layer by layer until it reaches a single dedicated handler, which makes exception handling much easier to maintain.
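A minimal sketch using Spring's @RestControllerAdvice; the response shape and the numeric error codes are placeholders rather than the project's real result wrapper:

```java
import org.springframework.validation.BindException;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.RestControllerAdvice;

import java.util.Map;

// Intercepts exceptions thrown anywhere in the controllers and turns them into a uniform response.
@RestControllerAdvice
public class GlobalExceptionHandler {

    // JSR303 validation failures surface as BindException; report the first field error.
    @ExceptionHandler(BindException.class)
    public Map<String, Object> handleBindException(BindException e) {
        String msg = e.getBindingResult().getFieldErrors().isEmpty()
                ? "invalid parameter"
                : e.getBindingResult().getFieldErrors().get(0).getDefaultMessage();
        return Map.of("code", 500101, "msg", msg);   // placeholder error code
    }

    // Everything else falls back to a generic server error.
    @ExceptionHandler(Exception.class)
    public Map<String, Object> handleException(Exception e) {
        return Map.of("code", 500100, "msg", "server error");   // placeholder error code
    }
}
```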

5. Page cache + object cache

  1. Page cache: HTML pages are rendered manually and the resulting HTML is cached in Redis (see the sketch below)
  2. Object cache: user information, product information, order information, tokens and other data are cached to reduce database access and greatly speed up queries.
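A sketch of the page-cache branch for the goods list, assuming Spring Boot 2.x with Thymeleaf 3.0's WebContext, an injected ThymeleafViewResolver and a StringRedisTemplate; the Redis key, the 60-second expiry and the template name are assumptions:

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.thymeleaf.context.WebContext;
import org.thymeleaf.spring5.view.ThymeleafViewResolver;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

@Controller
public class GoodsListController {

    @Autowired
    private StringRedisTemplate redisTemplate;
    @Autowired
    private ThymeleafViewResolver thymeleafViewResolver;

    @RequestMapping(value = "/goods/to_list", produces = "text/html")
    @ResponseBody
    public String toList(HttpServletRequest request, HttpServletResponse response) {
        // 1. Page cache: return the cached HTML if it is still in Redis.
        String html = redisTemplate.opsForValue().get("page:goodsList"); // assumed key
        if (html != null && !html.isEmpty()) {
            return html;
        }
        // 2. Cache miss: load data and render the template manually instead of returning a view name.
        Map<String, Object> model = new HashMap<>();
        model.put("goodsList", loadGoodsFromDatabase());
        WebContext ctx = new WebContext(request, response, request.getServletContext(),
                request.getLocale(), model);
        html = thymeleafViewResolver.getTemplateEngine().process("goods_list", ctx);
        // 3. Cache the rendered page with a short expiry so traffic spikes bypass the database.
        if (html != null && !html.isEmpty()) {
            redisTemplate.opsForValue().set("page:goodsList", html, Duration.ofSeconds(60));
        }
        return html;
    }

    // Placeholder for the real MyBatis query that loads the goods list.
    private List<Object> loadGoodsFromDatabase() {
        return Collections.emptyList();
    }
}
```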

6. Static pages

The product detail and order detail pages are made static: the page itself is plain HTML, and the dynamic data is fetched from the server through an API, achieving front-end/back-end separation. Because static pages need no database connection, they open noticeably faster than dynamic pages.

7. Local mark + Redis pre-processing + RabbitMQ asynchronous ordering + client polling

Description: the request path is protected by three levels of buffering:

1. Local mark

2. Redis pre-processing

3. RabbitMQ asynchronous ordering: the database is touched only at the final step, so database access is kept to a minimum.

Implementation:

  1. During the seckill phase, a local mark records the products a user has already seckilled. If the mark is present, a repeated-seckill response is returned directly; Redis is queried only when the mark is absent, so the local mark reduces the number of Redis hits (see the sketch after this list)
  2. Before the sale starts, product and stock data are synchronized to Redis, and all purchase operations are handled in Redis; pre-decrementing the stock there reduces database access
  3. To keep heavy traffic from crashing the system, RabbitMQ queues the orders asynchronously, adding an extra layer of buffer protection in front of the database
  4. The client uses JS to poll an interface for the processing status
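A condensed sketch of the seckill entry point under these three levels of buffering; the Redis key, the queue name, the message format and the return strings are assumptions:

```java
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.data.redis.core.StringRedisTemplate;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SeckillEntry {

    // Local mark: "userId:goodsId" entries this JVM has already accepted.
    private final Map<String, Boolean> localMark = new ConcurrentHashMap<>();
    private final StringRedisTemplate redisTemplate;
    private final RabbitTemplate rabbitTemplate;

    public SeckillEntry(StringRedisTemplate redisTemplate, RabbitTemplate rabbitTemplate) {
        this.redisTemplate = redisTemplate;
        this.rabbitTemplate = rabbitTemplate;
    }

    public String doSeckill(long userId, long goodsId) {
        String mark = userId + ":" + goodsId;
        // 1. Local mark: a repeated attempt is rejected without touching Redis at all.
        if (localMark.containsKey(mark)) {
            return "REPEATED_SECKILL";
        }
        // 2. Redis pre-decrement: the stock was synchronized into Redis before the sale started.
        Long stock = redisTemplate.opsForValue().decrement("seckill:stock:" + goodsId);
        if (stock == null || stock < 0) {
            return "SECKILL_OVER";
        }
        localMark.put(mark, Boolean.TRUE);
        // 3. RabbitMQ: enqueue the order; a consumer writes it to MySQL at its own pace.
        rabbitTemplate.convertAndSend("seckill.queue", userId + "," + goodsId);
        // 4. The client now polls another endpoint until the order shows up (or the seckill fails).
        return "QUEUED";
    }
}
```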

8. Preventing overselling

Description: suppose the stock of a product is 1, and user 1 and user 2 buy it at the same time. After user 1 submits an order the stock becomes 0; user 2, unaware of this, also submits an order and the stock drops to -1. This is the oversold phenomenon.

Implementation:

  1. When updating the stock, check it first: the UPDATE is applied only while the stock is greater than 0 (see the mapper sketch below)
  2. Create a unique index on the user ID and product ID so that the same user cannot create two orders for the same product, even with two simultaneous requests
  3. Implement optimistic locking by adding a version field to the product table. Each update increments the version and carries the version number that was read beforehand; if the version in the database still matches, no other thread has modified the row and the update succeeds, otherwise the update is rejected. When an optimistic-lock conflict occurs while stock is still available, the operation is retried a limited number of times.
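A sketch of the two stock-deduction statements as MyBatis annotations; the table and column names are assumptions:

```java
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Update;

@Mapper
public interface SeckillGoodsMapper {

    // The WHERE clause lets the update succeed only while stock is still positive,
    // so two concurrent buyers can never push the stock below zero.
    @Update("UPDATE seckill_goods SET stock_count = stock_count - 1 " +
            "WHERE goods_id = #{goodsId} AND stock_count > 0")
    int reduceStock(@Param("goodsId") long goodsId);

    // Optimistic-lock variant: the version read beforehand must still match, otherwise 0 rows update.
    @Update("UPDATE seckill_goods SET stock_count = stock_count - 1, version = version + 1 " +
            "WHERE goods_id = #{goodsId} AND version = #{version} AND stock_count > 0")
    int reduceStockWithVersion(@Param("goodsId") long goodsId, @Param("version") int version);
}
```

The unique index on user ID + product ID is declared on the order table itself, so a duplicate insert from the same user simply fails at the database level.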

9. Use a mathematical formula captcha

Description: before clicking the seckill button, the user must first solve a mathematical-formula captcha; only after the answer is verified does the seckill proceed.

Benefits:

  1. Protects against malicious bots and crawlers
  2. Spreads out user requests, since users answer the formula at different speeds, which flattens the traffic peak

Implementation:

  1. The front end requests a captcha image from the server, passing the product ID as a parameter
  2. The server generates a verification formula from the product ID and user ID passed by the front end, stores the expected answer in Redis with product ID + user ID as the key, and writes the formula into an image via ImageIO for the front end to display
  3. The answer entered by the user is compared with the value stored in Redis under product ID + user ID. If they match, verification succeeds and the seckill proceeds; if they differ or the stored value is null, a verification failure is returned and the user must refresh the captcha and try again (see the sketch after this list)
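A sketch of the captcha-generation step, assuming StringRedisTemplate and plain java.awt drawing; the key prefix, image size and five-minute expiry are assumptions:

```java
import org.springframework.data.redis.core.StringRedisTemplate;

import javax.imageio.ImageIO;
import javax.servlet.http.HttpServletResponse;
import java.awt.Color;
import java.awt.Graphics;
import java.awt.image.BufferedImage;
import java.io.IOException;
import java.time.Duration;
import java.util.Random;
import java.util.concurrent.ThreadLocalRandom;

public class CaptchaService {

    private final StringRedisTemplate redisTemplate;

    public CaptchaService(StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    public void writeCaptcha(long userId, long goodsId, HttpServletResponse response) throws IOException {
        // Build a small expression such as "7 + 3 - 2" and compute the expected answer in Java.
        Random rnd = ThreadLocalRandom.current();
        int a = rnd.nextInt(10), b = rnd.nextInt(10), c = rnd.nextInt(10);
        String formula = a + " + " + b + " - " + c + " = ?";
        int answer = a + b - c;

        // Cache the answer under userId + goodsId so the seckill request can be checked against it.
        redisTemplate.opsForValue().set("seckill:captcha:" + userId + ":" + goodsId,
                String.valueOf(answer), Duration.ofMinutes(5));

        // Draw the formula onto an image and stream it back for the front end to display.
        BufferedImage image = new BufferedImage(120, 40, BufferedImage.TYPE_INT_RGB);
        Graphics g = image.getGraphics();
        g.setColor(Color.WHITE);
        g.fillRect(0, 0, 120, 40);
        g.setColor(Color.BLACK);
        g.drawString(formula, 10, 25);
        g.dispose();
        ImageIO.write(image, "JPEG", response.getOutputStream());
    }
}
```

Verification is then a single GET on the same key followed by a string comparison with the answer the user typed.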

10. Use RateLimiter to limit traffic

Description: when a seckill starts, the sudden surge of traffic can crash the system, so rate limiting is used to cap the traffic. Once the threshold is reached, subsequent requests are degraded; a degraded request can, for example, be sent to a queuing page ("access is too frequent during peak hours, please try again later") or to an error page.

Implementation: the project uses RateLimiter, a token-bucket based rate limiter provided by Guava. By adjusting the rate at which tokens are generated, users are kept from flooding the seckill page, which prevents excessive traffic from overwhelming the system. (The token bucket algorithm puts tokens into a bucket at a constant rate; a request must take a token from the bucket before it is processed, and if no token is available the request is denied.)
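A minimal sketch of Guava's RateLimiter guarding the seckill entry; the rate of 10 permits per second and the return strings are assumed values:

```java
import com.google.common.util.concurrent.RateLimiter;

public class SeckillRateLimit {

    // Token bucket refilled at a constant rate; every request must take one token before proceeding.
    private static final RateLimiter LIMITER = RateLimiter.create(10.0); // 10 permits per second (assumed)

    public static String enter() {
        // tryAcquire() returns immediately, so a request that gets no token is degraded right away
        // (e.g. redirected to the "too busy, try again later" page) instead of waiting in line.
        if (!LIMITER.tryAcquire()) {
            return "BUSY_TRY_AGAIN_LATER";
        }
        return "PROCEED_TO_SECKILL";
    }
}
```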

VI. Code repository

.