The C++98 standard has no thread library. It was not until C++11 that a standard multithreading library was finally provided, with classes for managing threads, protecting shared data, synchronizing operations between threads, and performing atomic operations. The corresponding header is <thread>, and the class is std::thread.

However, std::thread is still a fairly low-level, system-oriented facility and is not very convenient to use; thread synchronization and retrieving a thread's result are especially troublesome. We cannot simply get a result from thread.join(); instead we must define a shared variable to pass the result out, and handle mutual exclusion between threads ourselves. Fortunately, C++11 also provides a relatively simple asynchronous interface, std::async, with which we can easily create a task and fetch its result through std::future. In the past we used to wrap std::thread to implement our own "async"; now this cross-platform interface can be used directly, which greatly simplifies C++ multithreaded programming.

Take a look at the std::async function prototypes:

```cpp
// (C++11, changed in C++17)
template< class Function, class... Args >
std::future<std::result_of_t<std::decay_t<Function>(std::decay_t<Args>...)>>
    async( Function&& f, Args&&... args );

// (C++11, changed in C++17)
template< class Function, class... Args >
std::future<std::result_of_t<std::decay_t<Function>(std::decay_t<Args>...)>>
    async( std::launch policy, Function&& f, Args&&... args );
```

The first parameter (of the second overload) is the launch policy. There are two policies to choose from:

  • std::launch::async: the thread is created as soon as async is called, and the task starts running immediately.
  • std::launch::deferred: lazy evaluation. No thread is created when async is called; the task runs only when get or wait is called on the returned future.

The default policy is std::launch::async | std::launch::deferred, that is, the combination of the two. What this means in practice is discussed in detail later.

The second argument is the thread function

The thread function can be a plain function, a lambda expression, a bind expression, or any other function object

The third and subsequent arguments are the arguments passed to the thread function

These need no further elaboration

The return value is a std::future

std::future is a template class that provides a mechanism for accessing the result of an asynchronous operation. The name is very apt: it does not hold the result immediately, but it can deliver the result synchronously at some point in the future. We can learn how the asynchronous operation is going by querying the state of the future. std::future_status has three states:

  • deferred: the asynchronous operation has not started yet
  • ready: the asynchronous operation has completed
  • timeout: the asynchronous operation timed out; mainly relevant to std::future::wait_for()

Example:

```cpp
std::future_status status;
do {
    status = future.wait_for(std::chrono::seconds(1));
    if (status == std::future_status::deferred) {
        std::cout << "deferred" << std::endl;
    } else if (status == std::future_status::timeout) {
        std::cout << "timeout" << std::endl;
    } else if (status == std::future_status::ready) {
        std::cout << "ready!" << std::endl;
    }
} while (status != std::future_status::ready);
```

std::future offers three ways to obtain results:

  • get: waits for the asynchronous operation to finish and returns the result
  • wait: waits for the asynchronous operation to finish, but returns nothing
  • wait_for: waits with a timeout and returns the current status, as illustrated in the example above

Now that the std::async function prototype has been introduced, how should it be used?

Basic use of std::async:

```cpp
#include <iostream>
#include <vector>
#include <algorithm>
#include <numeric>
#include <future>
#include <string>
#include <mutex>

std::mutex m;

struct X {
    void foo(int i, const std::string& str) {
        std::lock_guard<std::mutex> lk(m);
        std::cout << str << ' ' << i << '\n';
    }
    void bar(const std::string& str) {
        std::lock_guard<std::mutex> lk(m);
        std::cout << str << '\n';
    }
    int operator()(int i) {
        std::lock_guard<std::mutex> lk(m);
        std::cout << i << '\n';
        return i + 10;
    }
};

template <typename RandomIt>
int parallel_sum(RandomIt beg, RandomIt end)
{
    auto len = end - beg;
    if (len < 1000)
        return std::accumulate(beg, end, 0);

    RandomIt mid = beg + len / 2;
    auto handle = std::async(std::launch::async,
                             parallel_sum<RandomIt>, mid, end);
    int sum = parallel_sum(beg, mid);
    return sum + handle.get();
}

int main()
{
    std::vector<int> v(10000, 1);
    std::cout << "The sum is " << parallel_sum(v.begin(), v.end()) << '\n';

    X x;
    // Calls (&x)->foo(42, "Hello") with the default policy:
    // may print "Hello 42" concurrently, or defer execution
    auto a1 = std::async(&X::foo, &x, 42, "Hello");
    // Calls x.bar("world!") with the deferred policy:
    // prints "world!" when a2.get() or a2.wait() is called
    auto a2 = std::async(std::launch::deferred, &X::bar, x, "world!");
    // Calls X()(43) with the async policy:
    // prints "43" concurrently
    auto a3 = std::async(std::launch::async, X(), 43);

    a2.wait();                      // prints "world!"
    std::cout << a3.get() << '\n';  // prints "53"
    // If a1 has not completed by this point,
    // a1's destructor prints "Hello 42" here
}
```

Possible output:

```
The sum is 10000
43
world!
53
Hello 42
```

std::async encapsulates asynchronous operations for us: we can easily obtain the status and result of an asynchronous task, and even specify the launch policy, without having to care about the details of thread creation.

In-depth understanding of thread creation strategies

  • The std::launch::async policy means the function must be executed asynchronously, i.e. on another thread.
  • The std::launch::deferred policy means the function may run only when get or wait is called on the future returned by std::async. That is, execution is deferred until one of those calls occurs; when get or wait is called, the function executes synchronously, i.e. the caller blocks until the function finishes running. If neither get nor wait is ever called, the function never executes.

Both policies on their own are perfectly clear, but the behavior of the default policy is interesting: it is what you get when you do not specify a policy, i.e. when the first function prototype is used, and it equals std::launch::async | std::launch::deferred. The explanation given by the C++ standard is:

Whether asynchronous execution or lazy evaluation is performed depends on the implementation

```cpp
auto future = std::async(func); // run func using the default launch policy
```

With this policy there is no way to predict whether func will run on a new thread, or even whether it will run at all: func may be scheduled deferred, i.e. run only when get or wait is called, and we cannot predict whether, or on which thread, get or wait will be called.

This flexibility also muddles the use of thread_local variables: if func reads or writes thread-local storage (TLS), we cannot predict which thread's local variables it will touch.

It also affects timeout-based wait loops: because the task may have been scheduled deferred, a call to wait_for or wait_until may return std::future_status::deferred. This means the following loop, which looks as if it must eventually stop, may actually run forever:

```cpp
using namespace std::literals;

void func()                        // func sleeps for a second, then returns
{
    std::this_thread::sleep_for(1s);
}

auto fut = std::async(func);       // run func with the default policy

while (fut.wait_for(100ms) !=      // loop until func has finished,
       std::future_status::ready)  // but this may never happen!
{
    ...
}
```

To avoid the infinite loop, we must check whether the future corresponds to a deferred task. Unfortunately, a future has no member that directly reports whether its task is deferred; a good trick is to call wait_for with a zero timeout and check for future_status::deferred:

```cpp
auto fut = std::async(func);       // (conceptually) run func asynchronously

if (fut.wait_for(0s) == std::future_status::deferred) // the task is deferred
{
    ...   // use fut.get() or fut.wait() to invoke func synchronously
}
else      // the task is not deferred
{
    while (fut.wait_for(100ms) != std::future_status::ready) // cannot loop forever
    {
        ...   // the task is neither deferred nor ready,
              // so do concurrent work until it is ready
    }
    ...       // fut is ready
}
```

One might ask why the default policy is used at all, given all these drawbacks. The answer is that we have been considering worst cases: often we do not care whether the task runs concurrently or synchronously, do not touch thread_local variables, and can accept that the task may never execute. Under those conditions, the default policy is a convenient and efficient scheduling choice.

To sum up, we can conclude the following points:

  • The default scheduling policy of std::async allows a task to run either asynchronously or synchronously.
  • This flexibility leads to uncertainty when thread_local variables are used, implies that the task may never execute, and affects the program logic of timeout-based wait calls.
  • If asynchronous execution is essential, specify the std::launch::async policy explicitly.

Reference article:

API Reference Document

Replace thread creation with C++11's std::async