1. std::call_once usage

In multithreaded programming, a task sometimes needs to be executed exactly once. C++11 provides std::call_once together with std::once_flag for this purpose. If multiple threads invoke the same function through std::call_once with the same once_flag, the function is guaranteed to be executed only once, no matter how many threads race on it.

#include<iostream>
#include<mutex>
#include<thread>
using namespace std;
once_flag init_flag;
 
void init()
{
	cout << "data has been initialized" << endl;
}
 
void fun()
{
	call_once(init_flag, init);
}
 
int main()
{
	thread t1(fun);
	thread t2(fun);
	t1.join();
	t2.join();
	system("pause");
	return 0;
}

2. shared_mutex

C++17 provides std::shared_mutex (C++14 offers the similar std::shared_timed_mutex) to solve the reader-writer problem, i.e. a read-write lock. Unlike a regular mutex, a read-write lock allows either one writer or multiple concurrent readers, but never readers and a writer at the same time. Under read-heavy workloads, read-write locks generally perform better than regular mutexes.

#include <chrono>
#include <iostream>
#include <shared_mutex>
#include <string>
#include <thread>
using namespace std;

shared_mutex g_mutex;
string g_str;

void readLoop()
{
	while (true) {
		this_thread::sleep_for(chrono::milliseconds(100));
		g_mutex.lock_shared();
		cout << g_str;
		g_mutex.unlock_shared();
	}
}

void writeLoop()
{
	int number = 0;
	while (true) {
		this_thread::sleep_for(chrono::milliseconds(100));
		g_mutex.lock();
		g_str = to_string(number++)+"\n";
		g_mutex.unlock();
	}
}

int main()
{
	thread(writeLoop).detach();
	thread(readLoop).detach();
	thread(readLoop).detach();
	system("pause");
}


3. Implementing a thread-safe queue with condition variables

Background: the standard library std::queue is not thread-safe. Here a condition variable is used to implement a simple thread-safe queue.

#include <queue>
#include <memory>
#include <mutex>
#include <condition_variable>
#include <iostream>
#include <thread>

template<typename T>
class threadsave_queue {
private:
  mutable std::mutex mut; // mutable: empty() is const, but locking the mutex modifies it
  std::queue<T> data_queue;
  std::condition_variable data_cond;
public:
  threadsave_queue() {}
  threadsave_queue(threadsave_queue const& other) {
    std::lock_guard<std::mutex> lk(other.mut);
    data_queue = other.data_queue;
  }
  void push(T new_value) {
    std::lock_guard<std::mutex> lk(mut);
    data_queue.push(new_value);
    data_cond.notify_one();
  }
  void wait_and_pop(T& value) {
    std::unique_lock<std::mutex> lk(mut);
    data_cond.wait(lk, [this] { return !data_queue.empty(); });
    value = data_queue.front();
    data_queue.pop();
  }
  std::shared_ptr<T> wait_and_pop() {
    std::unique_lock<std::mutex> lk(mut);
    data_cond.wait(lk, [this] { return !data_queue.empty(); });
    std::shared_ptr<T> res(std::make_shared<T>(data_queue.front()));
    data_queue.pop();
    return res;
  }
  bool empty() const {
    std::lock_guard<std::mutex> lk(mut);
    return data_queue.empty();
  }
};

void make_data(threadsave_queue<int>& tq, int val) { tq.push(val); }
void get_data1(threadsave_queue<int>& tq, int& d1) { tq.wait_and_pop(d1); }
void get_data2(threadsave_queue<int>& tq, int& d1) { auto at = tq.wait_and_pop(); d1 = *at; }

int main() {
  threadsave_queue<int> q1;
  int d1;
  std::thread t1(make_data, std::ref(q1), 10);
  std::thread t2(get_data1, std::ref(q1),std::ref(d1));
  t1.join();
  t2.join();
  std::cout << d1 << std::endl;
  std::thread t3(make_data, std::ref(q1), 20);
  std::thread t4(get_data2, std::ref(q1),std::ref(d1));
  t3.join();
  t4.join();
  std::cout << d1 << std::endl;
  q1.empty();
}

4. packaged_task usage introduction

The packaged_task class template, defined in the <future> header, wraps any callable target (a function, lambda expression, bind expression, or other function object) so that it can be invoked asynchronously. Its return value or thrown exception is stored in a shared state that can be accessed through a std::future object. In short, packaged_task turns an ordinary callable into a task whose result can be retrieved asynchronously.

#include <iostream>   // std::cout
#include <thread>     // std::thread
#include <chrono>
#include <future>
using namespace std;

int Add(int x, int y)
{
    return x + y;
}


void task_lambda()
{
    packaged_task<int(int, int)> task([](int a, int b) { return a + b; });
    future<int> result = task.get_future();
    // run the task (synchronously, on this thread)
    task(2, 10);
    cout << "task_lambda :" << result.get() << "\n";
}

void task_thread()
{
    packaged_task<int(int, int)> task(Add);
    future<int> result = task.get_future();
    // run the task on this thread (not asynchronous)
    task(4, 8);
    cout << "task_thread :" << result.get() << "\n";

    // reset the shared state so the task can be run again
    task.reset();
    result = task.get_future();
    thread td(move(task), 2, 10);
    td.join();
    cout << "task_thread :" << result.get() << "\n";
}

int main(int argc, char *argv[])
{
    task_lambda();
    task_thread();

    return 0;
}

5. Multithreading waits for one-time events

std::promise wraps a value and binds it to a future, making it convenient to retrieve a value from a thread function indirectly, through the future obtained from the promise. The main purpose of a promise is to provide a "set" operation that corresponds to the future's get().


#include<iostream>
#include<future>
#include<thread>
using namespace std;
using namespace std::this_thread;
using namespace std::chrono;

void work(promise<int> &prom)
{
	cout << "Start computing!" << endl;
	sleep_for(seconds(3)); // simulate the computation
	cout << "Computation finished!" << endl;
	prom.set_value(123); // set the result; the future will receive it
}

int main()
{
	// define a promise
	promise<int> prom;
	// get the future bound to the promise
	future<int> result = prom.get_future();
	thread t1(work, ref(prom));
	t1.detach();
	int sum = result.get(); // blocks until the promise sets the value
	cout << "Got result: " << sum << endl;
 
	system("pause");
	return 0;
}