The concept of the Future

Future models the result of an asynchronous computation at some future moment: it hands back a reference to a result that may not exist yet. A Future can be cancelled, and its future.get() method blocks until the result is available.
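A minimal sketch of this blocking behavior (the class name, sleep duration, and returned value are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureDemo {
    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        // Submit a task; the returned Future is a handle to the eventual result.
        Future<Integer> future = executor.submit(() -> {
            Thread.sleep(100); // simulate a slow computation
            return 42;
        });
        // get() blocks the calling thread until the result is available.
        System.out.println(future.get()); // prints 42
        executor.shutdown();
    }
}
```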

The Future model solves part of the asynchronous programming problem, but not all of it. It has the following limitations:

  1. Two asynchronous results cannot be combined into one (there is no direct way to merge them)
  2. There is no simple way to wait for all the tasks in a collection of Futures to complete
  3. There is no way to wait for only the fastest task in a collection of Futures to complete
  4. No callback can be attached to a Future's result; the only option is to block on get()
  5. A Future cannot be completed programmatically (its value cannot be set by hand)

CompletableFuture

CompletableFuture implements the Future interface and addresses the problems above. The design of CompletableFuture follows a pattern similar to that of Stream: both rely on lambda expressions and the idea of a pipeline. From this point of view, the relationship between CompletableFuture and Future resembles the relationship between Stream and Collection.
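A sketch of how CompletableFuture lifts two of the limitations above, merging two asynchronous results and reacting with a callback instead of blocking (the price values are made up for illustration):

```java
import java.util.concurrent.CompletableFuture;

public class CombineDemo {
    public static void main(String[] args) {
        CompletableFuture<Integer> priceA = CompletableFuture.supplyAsync(() -> 20);
        CompletableFuture<Integer> priceB = CompletableFuture.supplyAsync(() -> 22);

        // Merge two asynchronous results into one (limitation 1)...
        CompletableFuture<Integer> total = priceA.thenCombine(priceB, Integer::sum);

        // ...and attach a callback to the result (limitation 4) instead of blocking on get().
        CompletableFuture<Void> done =
                total.thenAccept(sum -> System.out.println("total = " + sum));

        done.join(); // wait here only so the JVM does not exit before the callback runs
    }
}
```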

Selecting the right thread pool size:

    Number = NCpu * UCpu * (1 + W/C)

where:

  - Number: number of threads
  - NCpu: number of processor cores
  - UCpu: expected CPU utilization (between 0 and 1)
  - W/C: ratio of wait time to compute time
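The formula can be checked with a worked example; the core count, utilization target, and wait/compute ratio below are assumed numbers, not measurements:

```java
public class PoolSize {
    public static void main(String[] args) {
        int nCpu = 4;                 // assume a 4-core machine
        double uCpu = 1.0;            // target 100% CPU utilization
        double waitToCompute = 99.0;  // e.g. 1 ms computing, 99 ms waiting on I/O

        // Number = NCpu * UCpu * (1 + W/C)
        int number = (int) (nCpu * uCpu * (1 + waitToCompute));
        System.out.println(number);   // 4 * 1.0 * 100 = 400 threads
    }
}
```

In a real program the core count would come from Runtime.getRuntime().availableProcessors() and W/C from profiling.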

Parallel streams with CompletableFuture

We now have two ways to perform a parallel computation over a collection: parallel streams and CompletableFuture. CompletableFuture is much more flexible, because we can configure the size of the thread pool to ensure that the overall computation doesn't block waiting for I/O. The recommendations are as follows:

If the operation is compute-intensive and involves no I/O, prefer the Stream interface: the implementation is simpler and more efficient, and when all threads are compute-bound there is no point in creating more threads than there are cores. If, on the other hand, the tasks involve I/O, network calls, or other waiting, CompletableFuture is more flexible: most threads spend their time waiting, so a larger, explicitly sized pool keeps the processors busy, and adding exception handling to the logic makes it easier to find out what triggered the wait.
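The I/O-bound case can be sketched as follows; fetchPrice is a hypothetical remote call simulated with a sleep, and the pool-size cap of 100 is an arbitrary safety limit, not a recommendation:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.Collectors;

public class IoBoundDemo {
    // Hypothetical I/O-bound call: sleeps to simulate a remote request.
    static int fetchPrice(String shop) {
        try {
            Thread.sleep(50);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return shop.length();
    }

    public static void main(String[] args) {
        List<String> shops = List.of("alpha", "beta", "gamma", "delta");

        // Custom executor sized for waiting tasks, capped to avoid unbounded growth.
        ExecutorService executor =
                Executors.newFixedThreadPool(Math.min(shops.size(), 100));

        // First pass: start all requests so they run concurrently.
        List<CompletableFuture<Integer>> futures = shops.stream()
                .map(s -> CompletableFuture.supplyAsync(() -> fetchPrice(s), executor))
                .collect(Collectors.toList());

        // Second pass: join only after every request is in flight.
        List<Integer> prices = futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());

        System.out.println(prices);
        executor.shutdown();
    }
}
```

Starting the futures in one pass and joining in a second is what lets all four simulated requests overlap; mapping directly to join() inside a single stream would serialize them.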