
1. Why it is hard

Why seckill (flash-sale) systems are hard: there is only a small, fixed amount of stock, and everyone reads and writes that same data at the same instant. Xiaomi's weekly Tuesday flash sale may have only 10,000 phones, yet instantaneous traffic can reach hundreds of millions of requests. 12306 ticket grabbing is similar, with even higher instantaneous traffic.

There are two main issues that need to be addressed:

  1. The strain high concurrency puts on the database
  2. How to decrement inventory correctly under contention (the oversell problem)

The first problem is easy: use a cache such as Redis to absorb the buying traffic and avoid hitting the database directly. The focus is the second problem. Written the conventional way:

Query the item's inventory, check that it is greater than 0, and then generate the order. But under high concurrency the check and the decrement are not atomic, so inventory goes negative.
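The check-then-decrement race is easy to reproduce (Python here for illustration; the article's own examples use PHP). Ten concurrent buyers all pass the `stock > 0` check before any of them writes, so a single remaining unit is "sold" many times:

```python
import threading
import time

# Naive "check then act": each buyer reads the stock and, if it is positive,
# decrements it. The gap between the check and the write is the race.
stock = 1  # only one unit left

def naive_buy():
    global stock
    if stock > 0:           # check: every thread still sees stock == 1
        time.sleep(0.1)     # simulate order-creation work; widens the race window
        stock -= 1          # act: by now all the other threads passed the check too

threads = [threading.Thread(target=naive_buy) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(stock)  # negative: far more units "sold" than we had
```

The `time.sleep` only makes the window reliably visible in a demo; in production the same gap exists between the SELECT and the UPDATE.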

2. Common architecture

With traffic at the hundred-million level, a typical site architecture (original figure omitted) has four layers:

  1. The browser layer, at the top, which runs some JS code
  2. The site layer, which accesses back-end data and returns HTML pages to the browser
  3. The service layer, which shields upstream callers from the underlying data details
  4. The data layer, where the authoritative inventory lives; MySQL is the typical example

3. Optimization directions

1. Intercept requests as far upstream as possible: traditional seckill systems fall over because requests crush the back-end data layer. Read-write lock contention is severe, responses slow down under concurrency, and nearly all requests time out; the traffic is enormous but the effective order-placing traffic is tiny [a train really has only 2,000 tickets; 2,000,000 people try to buy; almost nobody succeeds; request efficiency approaches 0].

2. Make full use of caching: this is a textbook read-heavy, write-light scenario [again, 2,000 tickets and 2,000,000 buyers means at most 2,000 successful orders; everything else is inventory reads, so the write ratio is only 0.1% and the read ratio 99.9%], which is ideal for a cache.

4. Optimization details

4.1. Browser-layer request interception

After clicking “Query”, the system hangs and the progress bar crawls, so as a user I instinctively click “Query” again, and again, and again… Does it help? No; it just adds load for nothing (if each user clicks 5 times, 80% of the requests are redundant).

  • At the product level, disable the “Query” / “Buy ticket” button once the user clicks it, so repeat submissions are impossible
  • At the JS level, limit each user to one request every x seconds

With these restrictions, 80% of the traffic is already stopped.
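The JS-level limit is just a timestamp check. A minimal sketch of the logic (shown in Python for illustration; a browser would implement the same thing in JS, and the class and parameter names here are hypothetical):

```python
import time

class ClickThrottle:
    """Allow at most one request every `interval` seconds."""
    def __init__(self, interval=5.0, clock=time.monotonic):
        self.interval = interval
        self.clock = clock      # injectable clock makes the logic testable
        self.last = None

    def allow(self):
        now = self.clock()
        if self.last is not None and now - self.last < self.interval:
            return False        # swallow the repeat click locally
        self.last = now
        return True

# A user hammering the button: only the first click in each window goes out.
t = [0.0]
throttle = ClickThrottle(interval=5.0, clock=lambda: t[0])
sent = []
for offset in [0, 1, 2, 6, 7]:  # click times in seconds
    t[0] = offset
    sent.append(throttle.allow())
print(sent)  # [True, False, False, True, False]
```

Only the clicks at t=0 and t=6 leave the browser; the other three never cost the back end anything.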

4.2. Site-level request interception and page caching

Browser-layer interception only stops ordinary users (99% of them). A programmer doesn't need the page at all: just write a for loop that fires HTTP requests directly at the back end.

  • For the same UID, limit the access frequency and cache the page: all requests from that UID reaching the site layer within x seconds get the same page back
  • For queries on the same item (a phone model, a train number), cache the page: all such requests reaching the site layer within x seconds get the same page back

In this way, 99% of the traffic is intercepted at the site layer.
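Both bullets above boil down to "one real render per time window, everyone else gets the cached copy". A minimal in-memory sketch (Python for illustration; `SiteLayerCache` and `render_page` are hypothetical names, and a real deployment would use Redis or a CDN rather than a dict):

```python
import time

class SiteLayerCache:
    """Per-UID rate limit plus a shared page cache with an x-second TTL."""
    def __init__(self, ttl=3.0, clock=time.monotonic):
        self.ttl = ttl
        self.clock = clock
        self.last_seen = {}   # uid -> timestamp of last accepted request
        self.page = None      # (rendered_at, html)

    def handle(self, uid, render_page):
        now = self.clock()
        # Same UID within the window: serve whatever page we already have.
        if uid in self.last_seen and now - self.last_seen[uid] < self.ttl:
            return self.page[1] if self.page else "try again later"
        self.last_seen[uid] = now
        # Same query within the window: everyone gets the same cached page.
        if self.page is None or now - self.page[0] >= self.ttl:
            self.page = (now, render_page())
        return self.page[1]

renders = []
def render_page():
    renders.append(1)        # count how often we really do back-end work
    return f"page@{len(renders)}"

t = [0.0]
site = SiteLayerCache(ttl=3.0, clock=lambda: t[0])
out = [site.handle(uid, render_page) for uid in ["u1", "u2", "u3"]]
t[0] = 1.0
out.append(site.handle("u1", render_page))  # repeat within window: cached
print(len(renders))  # 1 -- four requests, one real render
```

Three different users plus one repeat all received the same cached page from a single render.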

4.3. Service layer request interception and data caching

Site-layer interception only stops ordinary programmers. Suppose an advanced attacker controls 100,000 zombie machines (and suppose ticket buying requires no real-name authentication), so the per-UID limit is useless. Now what?

  • As the service layer, I know perfectly well that Xiaomi has only 10,000 phones and that a train has only 2,000 tickets, so what is the point of passing 100,000 requests through to the database? For write requests, build a request queue and release only a limited batch to the data layer at a time; if they all succeed, release the next batch; once inventory runs out, answer every write request still in the queue with “sold out”.
  • Read requests? Cache, cache, cache.

With this throttling, only a handful of write requests and a handful of read-cache misses penetrate to the data layer; 99.9% of requests are stopped.
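The write queue described above can be sketched as follows (Python for illustration; `WriteGate`, `submit`, and `drain` are hypothetical names, not from the original article):

```python
from collections import deque

class WriteGate:
    """Service-layer write throttle: queue all order requests, release at most
    `batch` of them to the data layer at a time, and fail the rest outright
    once inventory is gone."""
    def __init__(self, stock, batch=100):
        self.stock = stock
        self.batch = batch
        self.queue = deque()

    def submit(self, request_id):
        self.queue.append(request_id)

    def drain(self):
        """Process the queue batch by batch; returns {request_id: result}."""
        results = {}
        while self.queue:
            if self.stock == 0:
                # Inventory exhausted: answer everyone still waiting at once,
                # without touching the database at all.
                while self.queue:
                    results[self.queue.popleft()] = "sold out"
                break
            for _ in range(min(self.batch, len(self.queue))):
                rid = self.queue.popleft()
                if self.stock > 0:
                    self.stock -= 1      # the only writes the DB ever sees
                    results[rid] = "ok"
                else:
                    results[rid] = "sold out"
        return results

gate = WriteGate(stock=2000, batch=100)
for i in range(100_000):                 # 100k zombie-machine requests
    gate.submit(i)
results = gate.drain()
print(sum(1 for r in results.values() if r == "ok"))  # 2000
```

100,000 requests come in, exactly 2,000 writes go through, and the other 98,000 never reach the data layer.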

4.4. The data layer takes a leisurely walk

By the time requests reach the data layer, there are almost none left, and a single machine can carry them. To repeat: inventory is limited and Xiaomi's production capacity is limited, so showing the database that many requests is pointless anyway.

4.5. MySQL batch loading improves INSERT efficiency
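Batch loading just means one multi-row statement, `INSERT INTO orders (...) VALUES (...), (...), (...)`, instead of N single-row INSERTs. A sketch with SQLite (Python stdlib) standing in for MySQL, since the multi-row idea is the same; table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (user_id INTEGER, item_id INTEGER)")

rows = [(uid, 5) for uid in range(1000)]  # 1000 queued seckill orders
with conn:                                # one transaction for the whole batch
    # executemany sends all rows in one call instead of 1000 round trips
    conn.executemany("INSERT INTO orders (user_id, item_id) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 1000
```

One transaction and one batched statement replace a thousand separate INSERT round trips, which is exactly what the background sync process in section 5 should do.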

5. Redis

Use a Redis list as a queue: LPUSH and LPOP are atomic, so even when many users arrive at the same moment, execution is serialized. (MySQL transaction performance degrades badly under this kind of concurrency.)

First, load the item's inventory into the queue:

```php
<?php
$store = 1000;
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// Push one list element per unit of stock.
for ($i = 0; $i < $store; $i++) {
    $redis->lpush('goods_store', 1);
}
echo $redis->llen('goods_store'); // 1000
```

The customer places an order:

```php
<?php
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);

// lpop is atomic: each unit of stock is handed out exactly once.
$count = $redis->lpop('goods_store');
if (!$count) {
    echo 'Purchase failed: sold out!';
    return;
}
// ...create the order here
```

The cache can also absorb the write requests themselves: move the inventory from the database into Redis, perform every decrement in Redis, and have a background process sync the successful seckill orders from Redis back to the database.
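The "decrement in Redis, sync to MySQL later" flow can be modeled in memory (Python for illustration; no live Redis is assumed, so a lock plays the role of Redis's atomic DECR/LPOP, and `CacheInventory` is a hypothetical name):

```python
import threading

class CacheInventory:
    """Models 'all decrements happen in Redis': an atomic decrement plus a
    log of successful orders that a background process would later flush
    to MySQL (ideally with the batch INSERT from section 4.5)."""
    def __init__(self, stock):
        self._stock = stock
        self._lock = threading.Lock()  # stands in for Redis's atomicity
        self.pending_orders = []       # to be synced to the database later

    def try_buy(self, user_id):
        with self._lock:
            if self._stock <= 0:
                return False           # sold out, DB never touched
            self._stock -= 1
            self.pending_orders.append(user_id)
            return True

inv = CacheInventory(stock=1000)

def buyer_batch():
    for uid in range(100):
        inv.try_buy(uid)

# 50 threads x 100 attempts = 5000 concurrent buys against 1000 units.
threads = [threading.Thread(target=buyer_batch) for _ in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(inv.pending_orders))  # 1000 -- never oversold, unlike the naive version
```

5,000 concurrent attempts yield exactly 1,000 successful orders: the atomic decrement removes the oversell race, and the database only ever sees the pending-order batch.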

6. Summary

No long summary is needed; the above should already be clear. For seckill systems, the core optimization ideas bear repeating:

  1. Intercept requests as far upstream in the system as possible
  2. Reads vastly outnumber writes, so lean heavily on the cache
  3. Redis queue as the buffer, plus MySQL batch inserts behind it
