Interviewer psychoanalysis

Honestly, if the interviewer asks you this question, you're going to have to pull out all the stops. Why? Haven't you noticed that the job descriptions of many companies these days say "high concurrency experience preferred"?

If you genuinely have the goods and have built high-concurrency systems at an Internet company, then the offer is basically a safe bet. In that case the interviewer would never ask you this question; it would be foolish of them to.

Suppose you have worked on a high-concurrency system at a well-known e-commerce company, with hundreds of millions of users, billions of requests a day, and tens of thousands or even hundreds of thousands of concurrent requests at peak time. Then you will be grilled on your own system: What does the architecture look like? How is it deployed? How many machines are deployed? How does the cache work? How does MQ work? How does the database work? They will dig deep into how you actually handled the high concurrency.

Because anyone who has really done high concurrency knows that an architecture divorced from the business is just a paper exercise. In a real, complex business scenario under high concurrency, the architecture is never that simple. Can you really solve it by throwing in a Redis and an MQ? Of course not. A real architecture combined with the business is many times more complex than this simple "high-concurrency architecture."

So if an interviewer asks you, "How would you design a high-concurrency system?", then, sorry to say, it is almost certainly because you have not actually worked on one. The interviewer asks this question essentially to see whether you have done your own research and have accumulated some knowledge in this area.

Of course, the ideal hire is someone who has genuinely worked on high concurrency, but such people are scarce and hard to recruit. So the next best thing is to hire someone who has at least studied the topic on their own, rather than someone who knows nothing about it.

So this is when you have to put on a one-man show and show off everything you know about high concurrency!

Analysis of interview questions

In fact, to really understand so-called high concurrency, you have to start from its root: where does high concurrency come from, and why is it such a big deal?

Put simply: in the beginning a system talks directly to the database, but you have to understand that once a database is handling two or three thousand concurrent requests per second, it is basically at its limit. Many companies start out with fairly basic technology, and when the business grows too fast, the system can't take the pressure and goes down.

Of course it goes down. Why wouldn't it? If your database is suddenly hit with 5,000 or 8,000, or even tens of thousands of concurrent requests per second, it will crash, because a database like MySQL simply cannot handle that level of concurrency.

So why is concurrency so high? Because more and more people are on the Internet; many apps, websites and systems carry heavy concurrent traffic, and several thousand requests per second at peak time is perfectly normal. For events like Double Eleven, tens of thousands of requests per second are possible.

So with this level of concurrency, on top of complex business logic, how do you handle it? In case you don't know, here is how you should answer this question:

It can be divided into the following six points:

System split

Cache

MQ

Sharding databases and tables

Read/write splitting

Elasticsearch

System split

Split one system into multiple subsystems, wired together with Dubbo. Then give each subsystem its own database, so instead of one database there are now several, which together can carry higher concurrency.
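For illustration, here is a minimal sketch of what such a split can look like with Apache Dubbo's annotation-style configuration. The `OrderService` contract, the subsystem names and the returned data are made up for this example; a real setup also needs a registry (e.g. ZooKeeper or Nacos) and a Spring/Dubbo bootstrap, which are omitted here.

```java
import org.apache.dubbo.config.annotation.DubboReference;
import org.apache.dubbo.config.annotation.DubboService;

// Shared API module: the RPC contract between the subsystems.
public interface OrderService {
    Order getOrder(long orderId);
}

// Simple data holder shared by provider and consumer (hypothetical).
record Order(long id, String status) implements java.io.Serializable {}

// Order subsystem (provider): exposes the implementation over Dubbo
// and reads/writes only its own order database.
@DubboService
class OrderServiceImpl implements OrderService {
    @Override
    public Order getOrder(long orderId) {
        // In a real subsystem this would query the order database.
        return new Order(orderId, "PAID");
    }
}

// Checkout subsystem (consumer): calls the order subsystem over RPC
// instead of reaching into the order database directly.
class CheckoutFacade {
    @DubboReference
    private OrderService orderService;

    Order loadOrder(long orderId) {
        return orderService.getOrder(orderId);
    }
}
```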

Cache

Cache. You have to use a cache. Most high-concurrency scenarios are read-heavy and write-light, so write a copy of the data both to the database and to the cache, and serve the large volume of reads from the cache. After all, a single Redis node can easily handle tens of thousands of requests per second. So for the read-heavy scenarios in your project that carry the bulk of the requests, consider using a cache to absorb the high concurrency.
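As a concrete illustration, here is a minimal cache-aside read sketch using the Jedis client. The key format, the 5-minute TTL and `loadFromDatabase` are made-up placeholders; production code would also use a connection pool instead of a single `Jedis` instance and guard against cache penetration and avalanche.

```java
import redis.clients.jedis.Jedis;

// Cache-aside read: try Redis first, fall back to the database on a miss,
// then populate the cache with a TTL so hot keys stay in memory.
public class ProductCache {

    private final Jedis jedis = new Jedis("localhost", 6379);

    public String getProductJson(long productId) {
        String key = "product:" + productId;

        String cached = jedis.get(key);
        if (cached != null) {
            return cached;                               // cache hit: no database access
        }

        String fromDb = loadFromDatabase(productId);     // cache miss: hit MySQL once
        jedis.setex(key, 300, fromDb);                   // cache for 5 minutes
        return fromDb;
    }

    // Placeholder for the real MySQL query.
    private String loadFromDatabase(long productId) {
        return "{\"id\":" + productId + ",\"name\":\"demo\"}";
    }
}
```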

MQ

MQ, you have to use MQ. You may still have high-concurrency write scenarios, for example a business operation that hits the database dozens of times, with a frenzy of inserts, updates and deletes. That level of concurrent writes will definitely bring your system down. And you can't just push the writes into Redis: it's a cache, the data can be LRU-evicted at any moment, its data model is extremely simple, and it has no transaction support. So where MySQL is needed, MySQL it must be. What do you do then? Funnel the large volume of write requests into MQ, let them queue up, and have a consumer drain the queue and write to MySQL at a pace MySQL can sustain. So for the scenarios in your project that carry complex write logic, consider how MQ can be used to make the writes asynchronous and improve concurrency. A single MQ node handling tens of thousands of messages per second is also fine, as discussed earlier.
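Below is a rough sketch of this write-buffering idea using Kafka (the same shape works with RocketMQ). The `order-writes` topic, the consumer group and `writeToMysql` are hypothetical; producer configuration, batching, retries and idempotent consumption are all omitted.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderWriteBuffer {

    // Web tier: instead of writing to MySQL directly, publish the write request to a topic.
    public static void submitOrderWrite(KafkaProducer<String, String> producer, String orderJson) {
        producer.send(new ProducerRecord<>("order-writes", orderJson));
    }

    // Consumer tier: drain the queue at a controlled pace and persist each message.
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "order-writer");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("order-writes"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    writeToMysql(record.value());   // the only place that touches the database
                }
            }
        }
    }

    // Placeholder for the real INSERT/UPDATE against MySQL.
    private static void writeToMysql(String orderJson) {
        System.out.println("persisting " + orderJson);
    }
}
```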

Sharding databases and tables

Sharding databases and tables: at the database level you probably still can't avoid the high-concurrency load in the end, so split one database into multiple databases so that together they carry higher concurrency, and split one table into multiple tables so that each table holds less data and SQL runs faster.
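To make the routing concrete, here is a toy sketch of hash-based routing by user id. The shard counts and naming scheme are invented for the example; real projects usually put this logic behind middleware such as Apache ShardingSphere or MyCat, but the underlying rule looks much like this.

```java
// A minimal sketch of hash-based routing for sharded databases and tables.
public class ShardRouter {

    private static final int DB_COUNT = 4;      // e.g. order_db_0 .. order_db_3
    private static final int TABLE_COUNT = 8;   // e.g. order_0 .. order_7 in each database

    public static String route(long userId) {
        int dbIndex = (int) (userId % DB_COUNT);
        int tableIndex = (int) ((userId / DB_COUNT) % TABLE_COUNT);
        return "order_db_" + dbIndex + ".order_" + tableIndex;
    }

    public static void main(String[] args) {
        // All rows for the same user land in the same shard, so single-user
        // queries still hit exactly one database and one table.
        System.out.println(route(10001L));   // prints "order_db_1.order_4"
    }
}
```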

Read/write splitting

Read/write splitting: most of the time the database workload is also read-heavy and write-light, so there is no need to concentrate all requests on a single instance. Use a master/slave architecture: writes go to the master, reads go to the slave libraries. When read traffic grows, just add more slave libraries.
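A bare-bones sketch of such routing in application code is shown below; the class and method names are made up, and it deliberately ignores replication lag and transactions. In Spring projects this is more commonly handled with `AbstractRoutingDataSource` or a database middleware.

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.util.concurrent.atomic.AtomicLong;
import javax.sql.DataSource;

// Writes always go to the master; reads are spread round-robin across the slaves.
public class ReadWriteRouter {

    private final DataSource master;
    private final DataSource[] slaves;
    private final AtomicLong counter = new AtomicLong();

    public ReadWriteRouter(DataSource master, DataSource... slaves) {
        this.master = master;
        this.slaves = slaves;
    }

    public Connection writeConnection() throws SQLException {
        return master.getConnection();              // INSERT / UPDATE / DELETE
    }

    public Connection readConnection() throws SQLException {
        int i = (int) (counter.getAndIncrement() % slaves.length);
        return slaves[i].getConnection();           // SELECT, round-robin over slaves
    }
}
```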

Elasticsearch

Elasticsearch, or ES for short. ES is distributed and can be scaled out at will, so it naturally supports high concurrency: when you need more capacity you just add machines. So relatively simple queries and aggregations can be offloaded to ES, and full-text search is also a natural fit for ES.
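For example, a simple full-text query offloaded to ES can be issued directly against its REST `_search` endpoint; the `products` index and the query body below are hypothetical, and in a real service the official Java client would normally be used instead of a raw HTTP call.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// A simple full-text match query against a hypothetical "products" index,
// sent to Elasticsearch's REST API with the JDK HTTP client.
public class EsSearchDemo {

    public static void main(String[] args) throws Exception {
        String query = """
                { "query": { "match": { "name": "wireless earphones" } }, "size": 10 }
                """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:9200/products/_search"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(query))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The hits come back as JSON; a real service would map them to DTOs.
        System.out.println(response.body());
    }
}
```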


The six points above are basically the things a high-concurrency system is bound to involve. Think them through carefully beforehand, combining the knowledge we have covered, then lay the system out on paper in the interview and explain what to watch out for in each part, as discussed earlier. That shows you have at least some accumulated knowledge on the topic.

Let's face it: the real skill isn't knowing a few technologies or what a high-concurrency architecture looks like in the abstract. Doing high concurrency inside a genuinely complex business system is dozens or even hundreds of times harder than the points above. You need to consider: which data needs sharding and which doesn't, how to join across an unsharded table and a sharded one, which data should go into the cache, and which cached data can actually absorb the high-concurrency reads. You have to analyze a complex business system and then gradually, step by step, evolve its architecture toward high concurrency. That process is very complicated, and once you have done it once and done it well, you will be in great demand in this market.

What most companies really want is not someone who merely knows the basics of high concurrency, RocketMQ, Kafka, Redis and Elasticsearch. The truly valuable experience is having constructed, designed and hardened a highly concurrent architecture, step by step, inside a complex distributed system with hundreds of thousands of lines of code.