In modern software development, servers are almost always multi-core, so as developers we need to think about how to make code run faster and how to design systems with better performance. Besides optimizing the execution path of the code itself, the most important thing is to take full advantage of the server's multi-core processor and let code run in parallel to improve execution efficiency.

Take, for example, collecting the like count, read count, and forward count of a blog article, and then generating a statistics report once all the statistics are complete. Executed serially, the flow is: like count statistics -> read count statistics -> forward count statistics -> report, and only one function runs at a time. This neither improves the running efficiency of the code nor makes full use of the server's resources. If instead the unrelated functions are executed in parallel and only the final, dependent function remains serial, the flow becomes: the like count, read count, and forward count are collected in parallel, and once all three finish the statistics report is generated. This greatly improves execution efficiency. As for how to implement it, please read on.



As the flowchart shows, the like count, read count, and forward count methods have no dependency on one another, so they are executed in parallel, each completing its own work; the shared method that depends on them runs only after all of them have finished.

For loosely coupled operations like these, we should try to execute them in parallel and avoid blocking, so as to maximize the program's throughput, improve the execution efficiency of each function, and make the fullest use of the server's resources and CPU.

Synchronous API

A synchronous API is just a traditional method call, executed serially. When multiple methods are called, the later ones wait while the first executes; only after the first method finishes and returns its result do the subsequent methods execute, one after another. This style of call is generally known in the industry as a blocking call. It corresponds to the left part of the flowchart above: four functions execute in sequence, each waiting for the previous one to finish. If every function takes 2 seconds, the whole process takes 8 seconds. If an online service behaved like this, it would clearly be unacceptable, and since the servers are multi-core, it is also a great waste of resources, squandering the advantage of multiple cores.
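To make the timing concrete, here is a minimal sketch of the serial flow; the method names, return values, and 2-second sleeps are placeholder assumptions of mine, not code from the original case.

import java.util.concurrent.TimeUnit;

public class SyncDemo {

    // Each statistic is simulated as a blocking call that takes about 2 seconds
    static int getLikeCount()    { sleep(2); return 100; }
    static int getWatchCount()   { sleep(2); return 2000; }
    static int getForwardCount() { sleep(2); return 30; }

    // Generating the report is simulated as another 2-second step
    static void report(int likes, int reads, int forwards) {
        sleep(2);
        System.out.printf("likes=%d, reads=%d, forwards=%d%n", likes, reads, forwards);
    }

    static void sleep(int seconds) {
        try {
            TimeUnit.SECONDS.sleep(seconds);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Blocking calls run one after another: 2s + 2s + 2s + 2s, roughly 8 seconds in total
        report(getLikeCount(), getWatchCount(), getForwardCount());
        System.out.println("total: " + (System.currentTimeMillis() - start) + " ms");
    }
}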

Asynchronous API

An asynchronous API executes methods that have no dependency on one another in parallel, each method running on its own thread. This style is called a non-blocking call: each task thread returns its result to the caller either through a callback or by having the caller wait for the task to finish. The right part of the flowchart above mixes asynchronous and synchronous API calls: the like count, read count, and forward count are invoked as asynchronous APIs, each call running on a different thread without affecting the others; once all three complete, their results are returned to the caller, which then generates the statistics report. This saves a great deal of time: the whole process takes about 4 seconds, greatly speeding up execution and making full use of the multi-core server's resources.
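Java's CompletableFuture is one common way to express the callback style mentioned above; the sketch below is my own illustration, and the class name and placeholder method bodies are assumptions rather than code from the article.

import java.util.concurrent.CompletableFuture;

public class AsyncCallbackDemo {

    // Placeholder statistics methods; in a real service they would query a database or cache
    static int getLikeCount()    { return 100; }
    static int getWatchCount()   { return 2000; }
    static int getForwardCount() { return 30; }

    static void report(int likes, int reads, int forwards) {
        System.out.printf("likes=%d, reads=%d, forwards=%d%n", likes, reads, forwards);
    }

    public static void main(String[] args) {
        // Each statistic runs on its own worker thread, in parallel with the others
        CompletableFuture<Integer> likes    = CompletableFuture.supplyAsync(AsyncCallbackDemo::getLikeCount);
        CompletableFuture<Integer> reads    = CompletableFuture.supplyAsync(AsyncCallbackDemo::getWatchCount);
        CompletableFuture<Integer> forwards = CompletableFuture.supplyAsync(AsyncCallbackDemo::getForwardCount);

        // Callback style: the report runs only after all three results are available,
        // without blocking the calling thread in the meantime
        CompletableFuture<Void> done = CompletableFuture.allOf(likes, reads, forwards)
                .thenRun(() -> report(likes.join(), reads.join(), forwards.join()));

        done.join(); // wait here only so the demo's main thread does not exit early
    }
}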

Case

In the case mentioned above, a report is generated from the article's like count, read count, and forward count. Below I implement both the synchronous and asynchronous execution flows in code.

  • Synchronous execution code
report(getLikeCount(), getWatchCount(), getForwardCount());
  • Asynchronous execution code
// Submit the three independent statistics tasks so that they run in parallel
Future<Integer> futureA = executor.submit(() -> getLikeCount());
Future<Integer> futureB = executor.submit(() -> getWatchCount());
Future<Integer> futureC = executor.submit(() -> getForwardCount());
// get() blocks until each result is ready; the report is generated once all three are done
report(futureA.get(), futureB.get(), futureC.get());
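For reference, here is a minimal runnable version of the asynchronous case; the thread-pool size, the simulated 2-second delays, and the bodies of the getXxxCount/report methods are my own placeholder assumptions.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class AsyncReportDemo {

    // Simulated statistics methods; each sleeps 2 seconds to stand in for real work
    static int getLikeCount()    { sleep(2); return 100; }
    static int getWatchCount()   { sleep(2); return 2000; }
    static int getForwardCount() { sleep(2); return 30; }

    static void report(int likes, int reads, int forwards) {
        sleep(2); // generating the report also takes about 2 seconds
        System.out.printf("likes=%d, reads=%d, forwards=%d%n", likes, reads, forwards);
    }

    static void sleep(int seconds) {
        try {
            TimeUnit.SECONDS.sleep(seconds);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newFixedThreadPool(3);
        long start = System.currentTimeMillis();

        // The three independent statistics run in parallel on the thread pool
        Future<Integer> futureA = executor.submit(() -> getLikeCount());
        Future<Integer> futureB = executor.submit(() -> getWatchCount());
        Future<Integer> futureC = executor.submit(() -> getForwardCount());

        // get() waits for each result; the report starts once all three have completed
        report(futureA.get(), futureB.get(), futureC.get());

        // Roughly 4 seconds: ~2s for the parallel statistics plus ~2s for the report
        System.out.println("total: " + (System.currentTimeMillis() - start) + " ms");
        executor.shutdown();
    }
}

Run as-is, this sketch prints a total close to 4,000 ms, matching the rough estimate given above for the asynchronous flow.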

As you can see, asynchronous execution mainly uses a thread pool combined with Future to carry out the actual logic, so we do not need to worry about the state of the threads themselves, only about the results they return. As for how Future is used and how it works internally, please follow the blogger's other articles.