Today let's talk about an interesting topic: in a scenario with thousands of orders per second, how do you optimize a distributed lock for high concurrency?

Background introduction

First, let's look at the background of this question.

Some time ago, a friend of mine interviewed at a well-known e-commerce company in China and later told me about it. The interviewer asked him a scenario question:

If a distributed lock is used to prevent inventory from being oversold when placing orders, but the scenario is a high-concurrency one with thousands of orders per second, how would you optimize the distributed lock to handle it?

He said he couldn't answer because he had never done it. Honestly, I found the question a little interesting, because if I were interviewing candidates I would have given them more scope.

For example, I would let the candidate talk through the solutions to inventory overselling in a high-concurrency e-commerce scenario, the pros and cons of each approach in practice, and only then move on to the topic of distributed locks.

There are many technical solutions to the overselling problem: pessimistic locks, distributed locks, optimistic locks, queue serialization, Redis atomic operations, and so on.

However, since the interviewer restricted the question to solving overselling with a distributed lock, I suspect he only wanted to probe one point: how to optimize the concurrency performance of a distributed lock in a high-concurrency scenario.

In my opinion, the angle of the question is fair, because in real production a distributed lock guarantees data correctness, but its concurrency is naturally rather weak.

It just so happens that in another scenario of my own project I implemented a high-concurrency optimization for distributed locks, so I'll borrow my friend's interview question to walk you through the idea.

How does inventory overselling happen?

Let's first look at what so-called e-commerce inventory overselling means when no distributed lock is used. Take a look at the picture below:

The diagram is actually quite clear. Suppose the order system is deployed on two machines, and two different users each want to buy 10 iPhones at the same time, each sending a request to the order system. Each order system instance then checks the database and finds that the current inventory is 12 iPhones.

Both instances are delighted: 12 in stock is more than the 10 requested! So each instance sends SQL to the database to create the order and deduct 10 units of inventory: one deduction takes it from 12 down to 2, and the other takes it from 2 down to -8.

Now it's a mess: the inventory has gone negative! There were never 20 iPhones available for the two users to buy, so something is clearly wrong.

How does a distributed lock solve inventory overselling?

So how do we solve the overselling problem with a distributed lock? It's actually simple; recall the distributed lock principle we covered last time:

For a given lock key, only one client can hold the lock at a time; other clients keep waiting and retrying until they can acquire it. Only the client that holds the lock gets to execute the business logic that follows.

The code looks something like the sketch below. Then we'll analyze why this prevents inventory overselling.
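
The original article shows this code as a screenshot; here is a minimal sketch of the same flow, assuming Redisson is used for the distributed lock (the StockService interface and the lock key format are just illustrative names, not from the original):

    import org.redisson.api.RLock;
    import org.redisson.api.RedissonClient;

    public class OrderService {

        // hypothetical inventory accessor, only here to make the sketch compile
        interface StockService {
            int getStock(String productId);
            void createOrder(String productId, int quantity);
            void deductStock(String productId, int quantity);
        }

        private final RedissonClient redisson;
        private final StockService stockService;

        public OrderService(RedissonClient redisson, StockService stockService) {
            this.redisson = redisson;
            this.stockService = stockService;
        }

        public boolean placeOrder(String productId, int quantity) {
            RLock lock = redisson.getLock(productId + "_stock"); // 1. acquire the product's lock key
            lock.lock();                                         //    other clients block here
            try {
                int stock = stockService.getStock(productId);    // 2. check the inventory
                if (stock < quantity) {                          // 3. is it sufficient?
                    return false;                                //    no: the order fails
                }
                stockService.createOrder(productId, quantity);   // 4. create the order
                stockService.deductStock(productId, quantity);   //    and deduct the inventory
                return true;
            } finally {
                lock.unlock();                                   // 5. release the lock
            }
        }
    }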

Follow the numbered steps above and it becomes clear right away: only one order system instance can successfully acquire the distributed lock at a time, so only that instance gets to check the inventory, decide whether it is sufficient, place the order and deduct the inventory, and then release the lock.

Only after the lock is released can the other order system instance acquire it. That instance then checks the inventory, finds only 2 left, decides the stock is insufficient, and the order fails. The inventory never gets deducted down to -8.

Are there any other solutions to the oversold inventory problem?

Of course! Pessimistic locks, distributed locks, optimistic locks, queue serialization, asynchronous queue dispersion, Redis atomic operations, and so on; there are many other solutions, and we have our own set of mechanisms for dealing with inventory overselling.

But as mentioned earlier, this article is about the concurrency optimization of a distributed lock, not about solutions to overselling; overselling is just the business scenario used for illustration.

Distributed locking scheme in high concurrency scenario

Ok, now let's look at the problem: what's wrong with this distributed lock scheme in a high-concurrency scenario?

It's a big problem! Can you spot it? Once the distributed lock is in place, every order for the same item has to acquire the lock on that item's inventory lock key.

For example, every order for an iPhone must acquire the "iphone_stock" lock key. As a result, all order requests for the same item are serialized and processed one after another; go back over the flow above a couple of times and you'll see why.

Let's assume the work done between acquiring and releasing the lock, check the inventory -> create the order -> deduct the inventory, is highly optimized and the whole thing takes about 20 milliseconds, which is already quite good.

One second is 1000 milliseconds, so only 1000 / 20 = 50 requests for this item can be processed sequentially per second. If 50 iPhone order requests arrive within one second, each takes 20 milliseconds, one after another, and the full 1000 milliseconds are used up by exactly those 50 requests.

Take a look at the picture below to get a sense of it.

At this point you can at least see the drawback of simply using a distributed lock to handle inventory overselling.

The defect: when many users order the same item at the same time, the distributed lock serializes them, so a large volume of orders for that item cannot be processed concurrently.

Such a scheme may be acceptable for an ordinary small e-commerce system with low concurrency and no flash-sale scenario. If concurrency is very low, say fewer than 10 requests per second, and there is no instantaneous burst of traffic hammering a single item, it is rare to see 1000 orders per second for the same item; a small e-commerce system simply doesn't have that scenario.

How to optimize distributed locks for high concurrency?

Okay, now we finally get to the real question: what do we do?

The interviewer's constraint is: you must solve overselling with a distributed lock, yet thousands of orders for the iPhone arrive every second. How do you optimize?

Now, based on that calculation, you can only process 50 iPhone orders a second.

The idea is actually simple to state. Anyone who has read the ConcurrentHashMap source code (the JDK 7 version) and its underlying principles will know its core idea: segment-based locking!

The data is divided into multiple segments, each guarded by its own lock, so multiple threads can concurrently modify data in different segments, instead of only one thread being able to exclusively modify the whole map at a time.
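
To make the idea concrete, here is a toy sketch of segment (striped) locking, not the real ConcurrentHashMap source; the class and constant names are just illustrative:

    import java.util.HashMap;
    import java.util.Map;

    // Keys hash onto one of N independent locks, so writes to different
    // segments can proceed in parallel instead of contending on one global lock.
    public class StripedMap<K, V> {
        private static final int SEGMENTS = 16;
        private final Object[] locks = new Object[SEGMENTS];
        private final Map<K, V>[] tables;

        @SuppressWarnings("unchecked")
        public StripedMap() {
            tables = (Map<K, V>[]) new Map[SEGMENTS];
            for (int i = 0; i < SEGMENTS; i++) {
                locks[i] = new Object();
                tables[i] = new HashMap<>();
            }
        }

        private int segmentFor(Object key) {
            return (key.hashCode() & 0x7fffffff) % SEGMENTS;
        }

        public void put(K key, V value) {
            int i = segmentFor(key);
            synchronized (locks[i]) {        // only this segment is locked
                tables[i].put(key, value);
            }
        }

        public V get(K key) {
            int i = segmentFor(key);
            synchronized (locks[i]) {
                return tables[i].get(key);
            }
        }
    }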

Similarly, Java 8 introduced the LongAdder class as an optimization over AtomicLong. It addresses the problem that CAS-style operations, which follow an optimistic-locking idea, can leave a large number of threads spinning in retry loops for a long time under high concurrency.

LongAdder also adopts a similar segmented CAS approach: the counter is split across multiple cells, and if a CAS on one cell fails, it automatically moves to another cell and retries there.
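
A small illustration of the difference (just a sketch; the contention behavior only matters when many threads update the counter at once):

    import java.util.concurrent.atomic.AtomicLong;
    import java.util.concurrent.atomic.LongAdder;

    public class CounterDemo {
        public static void main(String[] args) {
            // AtomicLong: every thread CASes against the same single value,
            // so under heavy contention many threads spin retrying.
            AtomicLong atomicCounter = new AtomicLong();
            atomicCounter.incrementAndGet();

            // LongAdder: updates are spread across internal cells (segments);
            // a contended update simply lands on a different cell, and sum()
            // aggregates all cells when you read the total.
            LongAdder adderCounter = new LongAdder();
            adderCounter.increment();
            System.out.println(adderCounter.sum());   // prints 1
        }
    }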

The optimization idea for the distributed lock is similar. We had previously put this scheme into production in another business scenario of ours, not for the inventory overselling problem.

But inventory overselling is a business scenario that's easy to understand, so let's use it here. Take a look at the picture below:

This is, in fact, segment locking. Say you currently have 1,000 iPhones in stock; you can split that into 20 inventory segments. If you like, each segment can be a field in a database table, something like stock_01, stock_02, and so on, or 20 separate inventory keys in something like Redis.

In short, you spread your 1,000 units of inventory evenly across the segments, 50 units per segment: stock_01 corresponds to 50 units, stock_02 corresponds to 50 units, and so on.

Then, with 1,000 requests coming in per second, you can write a simple random algorithm: each request randomly picks one of the 20 inventory segments and locks only that segment.

Bingo! Now up to 20 order requests can execute at the same time, each one locking a different inventory segment. Inside the business logic, each request operates only on its own segment in the database or Redis: check the inventory -> judge whether it is sufficient -> deduct the inventory.

What does that amount to? Every 20 milliseconds, 20 order requests are processed in parallel, so in one second the system can handle 20 * 50 = 1,000 order requests for the iPhone.
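
Here is a minimal sketch of that idea, assuming Redisson for the per-segment lock; the lock key format, the SEGMENTS constant, and the deductFromSegment(...) persistence helper are all illustrative names, not from the original article:

    import java.util.concurrent.ThreadLocalRandom;
    import org.redisson.api.RLock;
    import org.redisson.api.RedissonClient;

    public class SegmentedStockService {
        static final int SEGMENTS = 20;                // 1,000 units split into 20 x 50
        private final RedissonClient redisson;

        public SegmentedStockService(RedissonClient redisson) {
            this.redisson = redisson;
        }

        /** Pick a random segment so concurrent requests spread across 20 locks. */
        int randomSegment() {
            return ThreadLocalRandom.current().nextInt(SEGMENTS);
        }

        /** Lock one segment, then check and deduct its stock; true on success. */
        boolean tryDeductFromSegment(String productId, int segment, int quantity) {
            RLock lock = redisson.getLock("stock_lock:" + productId + ":" + segment);
            lock.lock();
            try {
                // check this segment's stock (e.g. the stock_07 field or Redis key)
                // and deduct it if there is enough; return false if it is short
                return deductFromSegment(productId, segment, quantity);
            } finally {
                lock.unlock();
            }
        }

        // hypothetical persistence helper: runs the check -> deduct against the
        // segment's row/field in the database, or its key in Redis
        private boolean deductFromSegment(String productId, int segment, int quantity) {
            // ... SELECT / UPDATE on stock_XX, or DECRBY on a Redis key, goes here
            return true;
        }
    }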

Once the data is segmented, there is one pitfall to watch out for: what if an order request locks a segment and then finds that the stock remaining in that segment is insufficient?

In that case you automatically release the lock, immediately switch to the next segment, acquire its lock, and try to process the order again. This fallback process has to be implemented, as sketched below.
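
Continuing the sketch above (this method would live in the same hypothetical SegmentedStockService class), the fallback can start at a random segment and rotate through the others until one of them has enough stock:

    // Fallback: if the chosen segment is short, its lock has already been released
    // inside tryDeductFromSegment, so just move on to the next segment.
    public boolean placeOrder(String productId, int quantity) {
        int start = randomSegment();
        for (int i = 0; i < SEGMENTS; i++) {
            int segment = (start + i) % SEGMENTS;
            if (tryDeductFromSegment(productId, segment, quantity)) {
                return true;      // this segment had enough stock and was deducted
            }
        }
        return false;             // every segment is short, so the order fails
    }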

Are there any disadvantages to distributed lock concurrency optimization?

There certainly are. The biggest one, as you may have noticed, is that it's quite inconvenient: the implementation is rather complex.

  • First of all, you have to store a single piece of data in segments: one perfectly good inventory field is now split into 20 segmented inventory fields.

  • Secondly, every time you process inventory, you have to write your own random algorithm to pick a segment to work on;

  • Finally, if one segment runs out of stock, you have to automatically move on to the next one.

All of this has to be implemented by hand, which takes some effort and is fairly troublesome.

Still, in some of our business scenarios we did exactly this: we used a distributed lock, then had to optimize its concurrency, and went further with the segmented-lock technique. The effect was of course very good: concurrency performance immediately improved by dozens of times.

Subsequent improvements to the optimization scheme

Take the inventory overselling scenario discussed in this article: if you actually play it this way, you will make yourself miserable!

Again, the overselling scenario is just a demonstration here; we will talk separately about other solutions to inventory overselling in a high-concurrency architecture.

