First, many newcomers are obsessed with high concurrency, equating "understands high concurrency" with "high skill level", and in the process they ignore low latency and high throughput.

No product starts out with a highly concurrent architecture, but we do need to be ready for heavy traffic. If you really want to work directly on high-concurrency projects, that's fine; let me give you a practical roadmap. The first thing to learn is JMeter, the tool most often used for load-testing high-concurrency projects. Concurrency is usually divided into two types: read concurrency and write concurrency.


Simulating read concurrency, stage 1: under normal circumstances we write an interface that queries the database directly. Every request holds a database connection, so concurrency cannot be very high. Stage 2: we put a cache such as Redis in front of the data. A request first checks the cache and falls back to the database only on a miss. (Caching brings its own problems, such as consistency and cache penetration, but we can set them aside here because the solutions are simple and plentiful.) Adding a cache reduces database pressure and, more importantly, raises the request throughput: the server can handle more requests per second, which is exactly what higher concurrency means.
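The stage-2 read path above is the classic cache-aside pattern. A minimal sketch, using an in-process dict to stand in for Redis and an in-memory SQLite table to stand in for the database (the table and the `get_user` function are illustrative assumptions, not anything from the original setup):

```python
import sqlite3

# A dict plays the role of Redis; SQLite plays the role of the real database.
cache = {}

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users VALUES (1, 'alice')")

def get_user(user_id):
    # Stage 2: check the cache first ...
    key = f"user:{user_id}"
    if key in cache:
        return cache[key]
    # ... and fall back to the database only on a cache miss.
    row = db.execute("SELECT name FROM users WHERE id=?", (user_id,)).fetchone()
    if row is not None:
        cache[key] = row[0]  # populate the cache so later reads skip the DB
        return row[0]
    return None
```

After the first call for a given user, every later read is served from the cache and never touches a database connection, which is where the throughput gain comes from.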

Stage 3: the throughput the cache can deliver now exceeds the maximum concurrency our application server supports. A well-tuned Tomcat can handle around 1.5K concurrent requests (some people say up to 6K, but that depends on the machine's memory and the Tomcat configuration; assume 1.5K for now). To go beyond 1.5K you either add servers or switch to a server that supports more concurrency. Taking the first route, you add one more server and load-balance the traffic across the two Tomcats with Nginx. Since a single Nginx can handle roughly 30K concurrent connections, that leaves room for about 20 Tomcats behind it.
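What Nginx does here, in its default configuration, is round-robin distribution across the upstream Tomcats. A minimal sketch of that idea (the backend host names are made up for illustration):

```python
from itertools import cycle

# Round-robin load balancing: spread incoming requests evenly
# across a pool of backend servers, the way an Nginx upstream
# block does by default.
class RoundRobinBalancer:
    def __init__(self, servers):
        self._servers = cycle(servers)

    def pick(self):
        # Each call returns the next backend in rotation.
        return next(self._servers)

lb = RoundRobinBalancer(["tomcat-1:8080", "tomcat-2:8080"])
```

Calling `lb.pick()` repeatedly alternates between the two Tomcats, so each one sees roughly half the traffic.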

Stage 4: when Nginx itself becomes the concurrency bottleneck, we cluster Nginx. In practice that means splitting one region into several smaller regions, each with its own public-facing Nginx. There are two ways to distribute traffic across them: one is at the network routing layer (DNS-based distribution is a common example), the other is with dedicated load-balancing hardware. I have not operated the hardware kind myself, and it is rarely practical because it is very expensive!

Of course, raising the maximum concurrency of the web tier is not the whole story. At the database level, for example, we need to move to a master-slave setup, shard the data across multiple databases and tables, or adopt a high-throughput store such as MongoDB. The machine configuration also needs to be upgraded along the way.
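Sharding ("splitting databases and tables") usually boils down to routing each record to one of N shards by its key. A minimal sketch, assuming a hypothetical four-shard orders database (names and count are illustrative):

```python
# Modulo routing: each user's orders always land on the same shard,
# so reads and writes for that user hit only one database.
NUM_SHARDS = 4

def shard_for(user_id: int) -> str:
    return f"orders_db_{user_id % NUM_SHARDS}"
```

Real sharding middleware adds rebalancing and cross-shard queries on top, but the routing step is this simple at its core.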

The main difference between write concurrency and read concurrency is that writes usually take longer. Reads are sped up by the cache, but writes still have to reach the database, and a cache does not help there. This is where a message queue (MQ) comes in, eventually scaled out to an MQ cluster. With MQ in place, a request simply puts the write task on the queue, workers drain the queue one task at a time, and the user-facing request returns a success flag immediately without tying up a service thread. This raises the concurrency the system can absorb, but it also forces a change in the business flow. Normally an order request returns only after the order has been written to the database; with the queue, "success" now means the order was accepted into the queue, not that it has been stored. So we tell the user the order was placed, remind them it is still being processed, and have the front end poll every second or so until the order appears, at which point it is confirmed as successful.
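The write path above can be sketched as follows, with an in-process `queue.Queue` standing in for the MQ and a dict standing in for the order table (function names like `place_order` and `check_order` are illustrative assumptions):

```python
import queue
import threading

order_queue = queue.Queue()
order_store = {}  # stands in for the persisted order table

def place_order(order_id, payload):
    # The request thread only enqueues and returns immediately --
    # the order is "accepted", not yet stored.
    order_queue.put((order_id, payload))
    return {"order_id": order_id, "status": "accepted"}

def worker():
    # A background consumer drains the queue and does the slow DB write,
    # one task at a time.
    while True:
        order_id, payload = order_queue.get()
        order_store[order_id] = payload  # the actual "write to the database"
        order_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def check_order(order_id):
    # What the front end polls: has the order landed in storage yet?
    return order_id in order_store
```

The key property is that `place_order` never blocks on the database, so the service thread is freed immediately; durability arrives a moment later, which is exactly why the front end has to poll `check_order`.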

Finally, improving technical ability depends on self-study and practice at work. If there are no opportunities at work, you have to teach yourself. You can prepare a base environment at home: today's high-concurrency technologies are all built on cloud computing, which is essentially virtualization, so a reasonably well-configured host running several virtual machines can roughly simulate a distributed environment. Then learn to master load-testing tools and methods such as JMeter, and you can get started. Teaching yourself technology is an amazing thing: painful, and happy at the same time.