In the past two years, because of the pandemic, I haven't traveled much during the holidays. Before that, every holiday meant scrambling for train tickets a month or half a month in advance, and when too many people piled in at once, 12306 would simply collapse. Back then, plenty of technical folks would hold forth with inspired ideas about how they would rebuild 12306 for the railway authority.

A few years ago, Xiaomi phones were wildly popular. Every time a new model was released, buyers flooded in, yet many Mi fans walked away disappointed after queuing up.

But all that is in the past! For the truly heroic scenes of panic buying, look at today!

Jokes aside, those stories all point to one thing: panic buying. Now to the main text.


How to understand panic buying?

Panic buying is something most people have experienced. Haven't you scrambled for train tickets, or for masks? For salt, rice, flour, grain, or cooking oil? For deals on 618 or Double 11? There's always one that got you.

"Buying", as we all know, means spending money to get goods. But when the goods are limited and the people who want them are many, distribution becomes the problem.

You may have seen a scene like this: a crowd presses around a store entrance, and before the doors are even fully open people roar and squeeze inside, grabbing goods off the shelves. Whoever runs fastest and pushes hardest gets the goods and heads to the checkout. That is the truest portrait of panic buying. If you haven't seen it, picture squeezing onto a rush-hour bus: either you shove your way on and swipe your card, or you get bumped off and wait for the next one. The experience is terrible, but it does solve the distribution problem.

The "grab" here means seeing who reaches the goods first and locks them first. Does locking the goods guarantee a purchase? Not really; you might suddenly find yourself short of cash. So in panic buying, grabbing wins you the opportunity to buy: whoever arrives first gets that opportunity. Whether the goods actually make it home is another matter.

Problems with the buying process

Now that we understand panic buying, how should a program implement it?

Following the understanding above, we can split panic buying into two stages: "grab" maps to placing an order, and "buy" maps to paying for it. Placing an order means locking the goods, and it again breaks into two steps: locking the stock and creating the order. Locking the stock is like getting the goods into your hands in the store; creating the order is like agreeing on the price with the merchant.
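A minimal, in-memory sketch of this two-stage order placement might look like the following (the dictionary-based inventory, the `TryPlaceOrder` method, and the user ids are all hypothetical illustrations, not any real system):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical in-memory inventory: goodsId -> remaining stock.
var inventory = new Dictionary<string, int> { ["goods-1"] = 2 };
var orders = new List<string>();

// "Grab": lock one unit of stock, then create the order. Payment comes later.
bool TryPlaceOrder(string goodsId, string userId)
{
    // Step 1: lock the stock (take the goods off the shelf).
    if (!inventory.TryGetValue(goodsId, out var stock) || stock < 1)
        return false;
    inventory[goodsId] = stock - 1;

    // Step 2: create the order (agree on the deal with the merchant).
    orders.Add($"{userId}:{goodsId}");
    return true;
}

Console.WriteLine(TryPlaceOrder("goods-1", "alice")); // True
Console.WriteLine(TryPlaceOrder("goods-1", "bob"));   // True
Console.WriteLine(TryPlaceOrder("goods-1", "carol")); // False: sold out
```

With only two units of stock, the third caller fails at the stock-locking step and never reaches order creation.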

Inventory operations have a well-known "overselling" problem: an error during stock deduction drives the inventory below zero. How does this happen?

Back-end services typically handle requests with multiple threads, and are often deployed across multiple nodes for high availability. The overselling problem comes from the parallelism inherent in such multi-threaded, distributed environments. Deducting inventory usually takes two steps: first check whether the stock is sufficient, then subtract the requested quantity. If these two steps are not atomic, several threads can query the inventory at the same time; each sees sufficient stock in its own context, but the total stock cannot satisfy all the deductions, and the result goes negative.
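The bad interleaving can be shown deterministically with a toy sketch: two requests both pass the "is stock sufficient" check before either one deducts, and the stock goes negative.

```csharp
using System;

int stock = 1; // only one item left

// Both requests run the "check" step first (this is the bad interleaving):
bool request1SeesStock = stock >= 1; // true
bool request2SeesStock = stock >= 1; // also true: nothing has been deducted yet

// Then both run the "deduct" step:
if (request1SeesStock) stock -= 1;
if (request2SeesStock) stock -= 1;

Console.WriteLine(stock); // -1: oversold
```

In a real system the two "requests" are separate threads or separate service instances, but the failure mode is exactly this interleaving.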

So what's the solution? Put the query and the update in the same database transaction, or use a lock (a distributed lock in a distributed deployment) so that only one query-and-deduct operation runs at a time for a given item's inventory.
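As a sketch of the lock approach, an in-process lock can make the check and the deduction one atomic step. (Against a database you would instead combine both in one conditional statement, for example `UPDATE goods SET stock = stock - 1 WHERE id = @id AND stock >= 1`, and check the affected row count; the code below is an in-memory illustration only.)

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

int stock = 50;
int succeeded = 0;
var gate = new object();

bool TryDeduct()
{
    lock (gate) // check + deduct are now one atomic step
    {
        if (stock < 1) return false;
        stock -= 1;
        return true;
    }
}

// 100 concurrent buyers, only 50 items: exactly 50 should succeed,
// and stock can never go negative.
Parallel.For(0, 100, _ =>
{
    if (TryDeduct()) Interlocked.Increment(ref succeeded);
});

Console.WriteLine($"stock={stock}, succeeded={succeeded}"); // stock=0, succeeded=50
```

The lock serializes access, which is exactly the throughput cost discussed next.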

Is that the end of it? Not quite. Remember that this is panic buying, which means a huge number of people will try at once. In the physical world, a store that cannot hold that many people risks a safety accident. In software, exhausted resources crash programs, and databases in particular are easily overwhelmed by a sudden burst of IO requests. The database transactions above only add to that burden. Locks also reduce throughput, so requests pile up faster than they can be processed, which loads the servers further and causes frequent timeouts or even total unavailability.

What else can we do? In the old days of programming there was a school of thought that said: don't fret over code efficiency, just throw hardware at it, such as more memory, faster CPUs, and solid-state drives. That doesn't work here: the concurrency may be enormous, the cost of the computing resources has to be considered, and the speed at which resources can be dynamically provisioned is itself a problem.

Real life also offers a civilized approach: queue up, first come first served, until everything is sold out. A program can do the same: save incoming requests to a queue, process them one by one in order, and reply one by one. The advantage is that there is no need to coordinate resource conflicts across threads and processes, so the demand on computing resources drops sharply. The downside is that users wait a little longer for the result, so the experience is slightly worse; but then again, it is a panic buy.
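A minimal sketch of the queue approach: requests are first enqueued, then a single consumer processes them in arrival order until the stock runs out, so the inventory itself needs no locking at all.

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;

var queue = new ConcurrentQueue<string>(); // incoming buy requests (user ids)
int stock = 3;
var winners = new List<string>();

// Producers: 10 users all try to buy; arrival order decides everything.
for (int i = 1; i <= 10; i++) queue.Enqueue($"user-{i}");

// Single consumer: one request at a time, so check + deduct is inherently safe.
while (queue.TryDequeue(out var user))
{
    if (stock > 0) { stock--; winners.Add(user); }
    // else: reply "sold out" to this user
}

Console.WriteLine(string.Join(", ", winners)); // user-1, user-2, user-3
```

In a production system the queue would be a message broker and the consumer a worker service, but the first-come-first-served logic is the same.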

Rate limiting for panic buying

We have analyzed the problems panic buying can run into; what can rate limiting do about them?

In a software system, the inventory is a number, and a rate limit also has a number: its threshold. So we can simply use the inventory count as the rate limiting threshold. When requests come in, the rate limiter processes them first, and only the ones it admits go on to the next step. The flow is: request, then rate limiting check, then deduct stock and place the order.
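The idea can be sketched as a simple counter: the rate limiting threshold equals the stock, and only requests that increment the counter to a value within the threshold go on to the real order logic (this is a bare-bones fixed-window counter, not the FireflySoft implementation used later).

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

int threshold = 1000; // rate limiting threshold == the inventory count
int counter = 0;      // requests admitted in the current window
int admitted = 0, rejected = 0;

bool TryAdmit()
{
    // Atomically take a ticket; anything over the threshold is turned away
    // before it can touch the database at all.
    return Interlocked.Increment(ref counter) <= threshold;
}

// Simulate a burst of 1200 concurrent requests against 1000 units of stock.
Parallel.For(0, 1200, _ =>
{
    if (TryAdmit()) Interlocked.Increment(ref admitted);
    else Interlocked.Increment(ref rejected);
});

Console.WriteLine($"admitted={admitted}, rejected={rejected}"); // admitted=1000, rejected=200
```

One atomic increment per request is far cheaper than a database transaction or a distributed lock, which is exactly why the rate limiter sits in front.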

What does this buy us? It reduces the pressure on everything downstream. Once the inventory is exhausted, there is no need to query the database at all, so no precious database resources are wasted; even distributed locks and explicit database transactions become unnecessary, because the rate limiting check has already bounded the deductions and a single conditional UPDATE suffices. The pressure of deducting stock and placing orders still exists, though: if 30,000 requests arrive at once and the rate limiter intercepts 27,000, the remaining 3,000 still go through, and their impact needs careful thought. Combined with the queue approach, rate limiting means the queue only receives admitted requests, so the queue's load is small and the database operations behind it shrink considerably.

Some problems remain. For example, a request may pass the rate limiting check but fail later for some other reason, and that unit of "inventory" cannot be reclaimed, so an opportunity is wasted. But this is not the main contradiction in panic buying, and the problem is generally ignored.

Of course, panic buying has plenty of other problems that rate limiting cannot solve, such as highly concurrent front-end queries of the remaining stock; those are beyond the scope of this article.

The technical implementation

This section uses an ASP.NET Core Web API as the example. The rate limiting component is FireflySoft.RateLimit, and the algorithm is a simple fixed window, that is, a counter, kept in process. Redis is not used here because in-process counting is simpler, has no external dependency, and is convenient for a demo. Even in a distributed environment it can often suffice: if the overall rate limiting threshold is 1000/hour and five service instances are deployed, each instance can just use a threshold of 200/hour. If load balancing is uneven, however, this may not hold, and in that case Redis-based globally consistent rate limiting is the better choice.

Install the NuGet package

Using the Package Manager console:

Install-Package FireflySoft.RateLimit.AspNetCore

Or use the .NET CLI:

dotnet add package FireflySoft.RateLimit.AspNetCore

Or add it directly to the project file:

<ItemGroup>
<PackageReference Include="FireflySoft.RateLimit.AspNetCore" Version="2.*" />
</ItemGroup>

Write traffic limiting rules

Register the rate limiting service in Startup.cs and add the rate limiting middleware. Comments are included, so read them carefully.

public void ConfigureServices(IServiceCollection services)
{
    ...
    services.AddRateLimit(new InProcessFixedWindowAlgorithm(
        new[] {
            new FixedWindowRule()
            {
                Id = "1",
                ExtractTarget = context =>
                {
                    // The rate limiting target: the item Id, assumed here to be
                    // passed as a URL parameter. Real code would need many more
                    // security checks.
                    return (context as HttpContext).Request.Query["GoodsId"].ToString();
                },
                CheckRuleMatching = context =>
                {
                    // Only the panic-buying endpoint is rate limited
                    var path = (context as HttpContext).Request.Path.Value;
                    if (path == "/Order/PanicBuying")
                    {
                        return true;
                    }
                    return false;
                },
                Name = "PanicBuyingRule",
                LimitNumber = 1000,                      // the threshold, i.e. the inventory count
                StatWindow = TimeSpan.FromSeconds(3600), // the counting window, here 3600 seconds
                StartTimeType = StartTimeType.FromNaturalPeriodBeign
            }
        }),
        new HttpErrorResponse()
        {
            BuildHttpContent = (context, ruleCheckResult) =>
            {
                return "You're too late, it's sold out!";
            }
        });
    ...
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    ...
    app.UseRateLimit();
    ...
}

That's all the code you need; you can test it with Postman.

If you don't want to use it in a web application, or you need more customized settings, you can also install the FireflySoft.RateLimit.Core package directly; it can be used in all kinds of .NET programs. See the sample: bosima/FireflySoft.RateLimit (github.com)

If you want to use Redis, just replace InProcessFixedWindowAlgorithm with RedisFixedWindowAlgorithm and pass in a Redis connection object; most of the other code stays the same.


Well, that's the main content of this article. What are your thoughts on using rate limiting to ease panic-buying pressure?

FireflySoft.RateLimit is an open source .NET Standard rate limiting library that is flexible and easy to use; the latest code is available on GitHub. Its main features include:

  • Multiple rate limiting algorithms: fixed window, sliding window, leaky bucket, and token bucket are built in, and custom algorithms can be added.
  • Multiple count storage options: memory and Redis storage are currently supported.
  • Distributed friendly: supports unified counting across distributed applications via Redis storage.
  • Flexible rate limiting targets: the target can be extracted from any data in the request.
  • Rate limiting penalties: after the limit is triggered, the client can be locked out for a period of time.
  • Dynamic rule changes: rate limiting rules can be changed while the program is running.
  • Custom errors: the error code and message returned when the limit is triggered can be customized.
  • Universality: in principle, it suits any scenario that requires rate limiting.