In system design, have you ever run into this problem: a flood of requests puts too much pressure on the database? This article presents a simple solution to this problem in a high-concurrency environment.

Consider a scenario: in a highly concurrent system, a large number of requests hit the database every second. Without considering caching, how can you handle them and reduce the pressure on the database? Some of you might say that is easy: increase bandwidth, add memory, and upgrade the server.

But what if you cannot use those methods? Then you can use request merging: collect the requests that arrive within a time window and submit them to the database as a single query, turning dozens or even hundreds of individual queries into one batch operation.

Of course, there is a prerequisite: the requests must not have strict real-time requirements. In that case, sacrificing a little processing time to reduce the number of network connections and database round trips is a worthwhile trade.

First, let's simulate the scenario without request merging: 1,000 requests are sent using Postman, and the Druid connection pool is used to monitor the database:

As you can see, 1000 database accesses were actually made. In the case of high traffic, this type of access can be dangerous, so reducing database access becomes a priority.
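For reference, the unmerged baseline looks roughly like the sketch below: every incoming request issues its own database query. The controller and the queryByCode method are assumptions for illustration, since the article only shows the monitoring result.

import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class ItemController {

    @Autowired
    ItemService itemService;

    // One HTTP request == one database query, so 1,000 requests mean 1,000 database accesses
    @ResponseBody
    @GetMapping("/query")
    public Map<String, Object> query(@RequestParam String code) {
        return itemService.queryByCode(code);
    }
}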

Coming back to the request merging mentioned earlier, there are several issues to address in order to implement it:

1. At what granularity should requests be merged?

It is recommended to merge by time window rather than by request count. Merging only after the number of queued requests reaches a threshold is risky, because traffic may be low during some periods; if the threshold is never reached, the queued requests would never be executed.

ScheduledExecutorService in Java provides a scheduling mechanism and implements the ExecutorService interface, so it also supports all the functions of a thread pool.
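Here is a minimal, standalone sketch of the fixed-rate scheduling we need; the class name and the printed message are just placeholders.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduleDemo {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        // Run the merge task every 200 ms, starting immediately;
        // each run processes whatever requests have accumulated since the last one.
        scheduler.scheduleAtFixedRate(
                () -> System.out.println("flush the merged requests here"),
                0, 200, TimeUnit.MILLISECONDS);
    }
}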

2. How to store the requests that arrive within a time window:

There are many ways to store the requests. As we know, message queues are commonly used for decoupling in high-concurrency system design, so a message queue would be a very suitable place to hold them. Since this example runs on a single machine, the thread-safe LinkedBlockingQueue is simple and sufficient.
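A quick sketch of how the queue is used: web threads enqueue their requests, and the scheduled task drains everything accumulated so far in one pass. drainTo is shown here as an alternative to the poll loop used in the service later in this article.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueDemo {
    public static void main(String[] args) {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();

        // Producer side: each incoming request is offered to the queue
        queue.offer("code-1");
        queue.offer("code-2");

        // Consumer side: the scheduled task drains the whole backlog into one batch
        List<String> batch = new ArrayList<>();
        queue.drainTo(batch);
        System.out.println("merged batch: " + batch);  // merged batch: [code-1, code-2]
    }
}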

3. How to return the result to each request:

Since Java 1.5, the Future interface has been available for handling asynchronous calls and concurrent tasks. A Future represents the result of an asynchronous task that may not have completed yet; its CompletableFuture implementation (introduced in Java 8) additionally lets us complete the result manually and attach callbacks for success or failure. Simply put, we can use it to hand the result of the batched execution back to each waiting caller.
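A tiny sketch of that hand-off: the caller blocks on future.get(), and another thread (in our case the batch job) calls complete(...) when the result is ready.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class FutureDemo {
    public static void main(String[] args) throws InterruptedException, ExecutionException {
        CompletableFuture<String> future = new CompletableFuture<>();

        // The batch job completes the future once the query result is available
        new Thread(() -> future.complete("result for code-1")).start();

        // The caller blocks here until complete(...) has been called
        System.out.println(future.get());
    }
}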

Now that the merge, execute, and return steps of the request are sorted out, let’s see how they are implemented.

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

import javax.annotation.PostConstruct;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class BatchQueryService {
    // A queue is used to store requests
    private LinkedBlockingQueue<Request> queue = new LinkedBlockingQueue<>();

    @Autowired
    ItemService queryItemService;

    // Encapsulate the request: the queried code plus the future that will carry its result
    class Request {
        String code;
        CompletableFuture<Map<String, Object>> future;

        public String getCode() {
            return code;
        }

        public void setCode(String code) {
            this.code = code;
        }

        public CompletableFuture<Map<String, Object>> getFuture() {
            return future;
        }

        public void setFuture(CompletableFuture<Map<String, Object>> future) {
            this.future = future;
        }
    }

    @PostConstruct
    public void init() {
        ScheduledExecutorService scheduledExecutorService = Executors.newScheduledThreadPool(1);
        scheduledExecutorService.scheduleAtFixedRate(() -> {

            int size = queue.size();
            if (size == 0) {
                return;
            }

            // Take all requests accumulated during this window out of the queue
            List<Request> requests = new ArrayList<>(size);
            for (int i = 0; i < size; i++) {
                Request request = queue.poll();
                requests.add(request);
            }
            System.out.println("Batch processing " + size + " requests");

            // Collect the codes of all merged requests
            List<String> codes = new ArrayList<>();
            for (Request request : requests) {
                codes.add(request.getCode());
            }

            // One batch query to the database instead of `size` single queries
            List<Map<String, Object>> responses = queryItemService.queryByCodes(codes);

            // Index the result set by code --> so each Request can be matched to its own result
            Map<String, Map<String, Object>> responseMap = new HashMap<>();
            for (Map<String, Object> response : responses) {
                String code = response.get("code").toString();
                responseMap.put(code, response);
            }

            // Return the result to each request by completing its future
            for (Request request : requests) {
                Map<String, Object> result = responseMap.get(request.getCode());
                request.future.complete(result);
            }
        }, 0, 200, TimeUnit.MILLISECONDS);
    }

    // Perform a single query according to code
    public Map<String, Object> queryItem(String code) {
        Request request = new Request();
        request.setCode(code);

        CompletableFuture<Map<String, Object>> future = new CompletableFuture<>();
        request.setFuture(future);

        queue.add(request);

        try {
            // Block until the scheduled batch job completes this future
            return future.get();
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
        return null;
    }
}
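The article does not show ItemService itself. As a rough sketch, assuming an item table with a code column and Spring's NamedParameterJdbcTemplate, queryByCodes could turn the merged codes into a single IN query, which is exactly where the saving in database round trips comes from.

import java.util.List;
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.namedparam.MapSqlParameterSource;
import org.springframework.jdbc.core.namedparam.NamedParameterJdbcTemplate;
import org.springframework.stereotype.Service;

@Service
public class ItemService {

    @Autowired
    NamedParameterJdbcTemplate jdbcTemplate;

    // One IN-clause query replaces N single-row lookups
    public List<Map<String, Object>> queryByCodes(List<String> codes) {
        return jdbcTemplate.queryForList(
                "SELECT * FROM item WHERE code IN (:codes)",
                new MapSqlParameterSource("codes", codes));
    }

    // The single-row variant used when requests are not merged
    public Map<String, Object> queryByCode(String code) {
        return jdbcTemplate.queryForMap(
                "SELECT * FROM item WHERE code = :code",
                new MapSqlParameterSource("code", code));
    }
}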

Test the request merge method with 1000 threads:

@ResponseBody
@RequestMapping("/batchQuery")
public String batchQuery() {
    Thread[] threads = new Thread[1000];
    for (int i = 0; i < 1000; i++) {
        int j = i;
        threads[i] = new Thread(new Runnable() {
            @Override
            public void run() {
                queryService.queryItem(j + "");
            }
        });
        threads[i].start();
        try {
            // Spread the 1000 requests out over roughly one second
            Thread.sleep(1);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }
    return "ok";
}

Look at the console output:

Druid monitoring:

We reduced the number of database operations from 1,000 to 7; actual database accesses dropped to 0.7% of the original. That count is roughly what you would expect: the test spreads its 1,000 requests over about one second, so a 200 ms merge window groups them into only a handful of batches. Of course, in a real business environment the timer interval would probably not be as large as 200 ms; the setting here is just to demonstrate the enormous potential of request merging.

Finally, a summary of request merging:

The advantages are obvious: request merging reduces the number of network connections and the pressure on the database, makes fuller use of the system's I/O capacity, and improves throughput.

Of course, it also has limitations: it is only suitable for high-concurrency systems where requests do not demand strict real-time responses. If the system does not face high concurrency, there is no need to use request merging.

Finally

If you found this article helpful, please give it a like and share it. Thank you very much!
