Concurrency in shared state

This section describes how threads can share state with each other. In general, when multiple threads modify the same data at the same time, a data race can produce unpredictable results. Rust provides the mutex to address this problem.

Using Mutex<T>

The Mutex<T> type: Mutex is short for mutual exclusion; a mutex allows only one thread to access the data at any given time:

use std::sync::Mutex;
	
let m = Mutex::new(5);
{
  // Acquire the lock, blocking the current thread until the lock is available
  let mut num = m.lock().unwrap();
  // num is a smart pointer of type MutexGuard<i32>, which acts as a mutable reference to the inner data
  *num = 6;
} // The lock is released automatically when the guard goes out of scope
println!("{:?}", m);
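
The lock does not have to be held until the end of a scope; dropping the MutexGuard releases it immediately. A minimal sketch of releasing the lock early with drop (not part of the original example):

use std::sync::Mutex;

fn main() {
  let m = Mutex::new(5);
  let mut num = m.lock().unwrap();
  *num = 6;
  // Dropping the guard releases the lock right away,
  // so the mutex can be locked again below without blocking forever
  drop(num);
  println!("{:?}", m.lock().unwrap()); // 6
}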

Sharing Mutex<T> across multiple threads

Suppose we spawn 10 threads and each thread increments a shared counter by one. If everything works, the counter will eventually go from 0 to 10:

use std::thread;
use std::sync::Mutex;

// Mutex for counting
let counter = Mutex::new(0);
// Used to store threads
let mut handles = vec![];

for _ in 0..10 {
  let handle = thread::spawn(move || {
    // Error: ownership problem; by the second iteration of the loop, counter has already been moved
    let mut num = counter.lock().unwrap();
    *num += 1;
  });
  handles.push(handle);
}

for handle in handles {
  // Wait for all threads to complete
  handle.join().unwrap();
}

println!("counter: {}", *counter.lock().unwrap());

This does not compile: the first iteration of the loop moves ownership of counter into the first child thread, so by the time the second iteration tries to create the second thread, counter is no longer available.
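
For contrast, moving the mutex into a single thread compiles fine, because ownership is transferred exactly once. A minimal sketch (not part of the original example):

use std::sync::Mutex;
use std::thread;

fn main() {
  let counter = Mutex::new(0);
  // Ownership of counter moves into this one closure, which is allowed
  let handle = thread::spawn(move || {
    let mut num = counter.lock().unwrap();
    *num += 1;
  });
  handle.join().unwrap();
  // counter cannot be used here anymore; it was moved into the thread
}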

Try sharing counter with Rc<T>

Recall that Rc<T> lets multiple owners share data, so we can try it in this multi-threaded scenario:

use std::rc::Rc;
// Wrap Mutex with Rc
let counter = Rc::new(Mutex::new(0));
let mut handles = vec![];

for _ in 0..10 {
  let counter = Rc::clone(&counter);
  let handle = thread::spawn(move || {
    // Error: Rc<Mutex<i32>> cannot be sent between threads safely
    let mut num = counter.lock().unwrap();
    *num += 1;
  });
  handles.push(handle);
}

for handle in handles {
  handle.join().unwrap();
}

println!("counter: {}", *counter.lock().unwrap());

Rc<T> manages a reference count: it increments the count on every call to clone and decrements it when a clone is dropped. However, it does not use any concurrency primitives to guarantee that these count updates cannot be interrupted by another thread, so it is not safe to share across threads.

Using atomic reference counting Arc<T>

Rust provides the Arc<T> type as a replacement for Rc<T> to solve this problem. It behaves like Rc<T> while guaranteeing that it can be used safely in concurrent scenarios:

use std::sync::Arc;
// replace Rc with Arc
let counter = Arc::new(Mutex::new(0));
let mut handles = vec![];

for _ in 0..10 {
  let counter = Arc::clone(&counter);
  let handle = thread::spawn(move || {
    let mut num = counter.lock().unwrap();
    *num += 1;
  });
  handles.push(handle);
}

for handle in handles {
  handle.join().unwrap();
}

println!("counter: {}", *counter.lock().unwrap());
// counter: 10

With Arc<T>, the counter value above is computed correctly.
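
The reference count that Arc<T> maintains can be inspected with Arc::strong_count. A minimal sketch (the printed counts assume nothing else is holding a clone):

use std::sync::{Arc, Mutex};

fn main() {
  let counter = Arc::new(Mutex::new(0));
  println!("{}", Arc::strong_count(&counter)); // 1
  let cloned = Arc::clone(&counter);
  println!("{}", Arc::strong_count(&counter)); // 2
  drop(cloned);
  // The count is updated atomically, so it stays correct even when
  // clones are created and dropped on other threads
  println!("{}", Arc::strong_count(&counter)); // 1
}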

RefCell<T>, Rc<T>, Mutex<T>, Arc<T>

  • Mutex<T> is similar to the RefCell<T> family of types in that it also provides interior mutability. The previous section used RefCell<T> to mutate the contents of an Rc<T>; this section uses Mutex<T> to mutate the contents of an Arc<T> in the same way (a side-by-side sketch follows this list).

  • Using Rc<T> carries the risk of creating reference cycles; using Mutex<T> carries the risk of creating deadlocks.
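
A minimal side-by-side sketch of the two pairings, using a plain integer as the shared value (an illustration, not code from the original text):

use std::cell::RefCell;
use std::rc::Rc;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
  // Single-threaded: Rc for shared ownership, RefCell for interior mutability
  let single = Rc::new(RefCell::new(0));
  let alias = Rc::clone(&single);
  *alias.borrow_mut() += 1;
  println!("{}", *single.borrow()); // 1

  // Multi-threaded: Arc for shared ownership, Mutex for interior mutability
  let shared = Arc::new(Mutex::new(0));
  let alias = Arc::clone(&shared);
  let handle = thread::spawn(move || {
    *alias.lock().unwrap() += 1;
  });
  handle.join().unwrap();
  println!("{}", *shared.lock().unwrap()); // 1
}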

What is a deadlock

A deadlock occurs when an operation needs to lock two resources at the same time and two threads each hold one of the locks while requesting the other; both threads then wait forever:

use std::thread;
use std::time::Duration;
use std::sync::{Arc,Mutex};

// Define two mutex values a and b that can be shared across threads
let a = Arc::new(Mutex::new("a"));
let b = Arc::new(Mutex::new("b"));

// Use a2, b2 in handle1
let a2 = Arc::clone(&a);
let b2 = Arc::clone(&b);
let handle1 = thread::spawn(move || {
  // Acquire the lock on a2, naming the guard a3
  let a3 = a2.lock().unwrap();
  println!("a3:{}", a3); // a3:a
  // Wait 3 seconds
  thread::sleep(Duration::from_secs(3));
  // After 3 seconds, lock b2, named b3
  let b3 = b2.lock().unwrap();
  println!("b3:{}", b3); // No output
});

// Use a4, b4 in handle2
let a4 = Arc::clone(&a);
let b4 = Arc::clone(&b);
let handle2 = thread::spawn(move || {
  // Acquire the lock on b4, naming the guard b5
  let b5 = b4.lock().unwrap();
  println!("b5:{}", b5); // b5:b
  // Wait 3 seconds
  thread::sleep(Duration::from_secs(3));
  // After 3 seconds, try to acquire the lock on a4, named a5
  let a5 = a4.lock().unwrap();
  println!("a5:{}", a5); // No output
});

handle1.join().unwrap();
handle2.join().unwrap();
println!("ok");
// a3:a
// b5:b

The program above ends up deadlocked and never finishes, and it never prints b3 or a5. The reason: handle1 acquires the lock on a (through a2) and holds it, while handle2 acquires the lock on b (through b4) and holds it. Three seconds later, handle1 tries to acquire the lock on b through b2, and handle2 tries to acquire the lock on a through a4 (note that all the a* variables refer to the same a and all the b* variables refer to the same b). Each thread now holds the lock the other is waiting for, so the program deadlocks.
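
A common way to avoid this kind of deadlock is to make every thread acquire the locks in the same order. A minimal sketch of the same example with a consistent a-then-b order (a general technique, not code from the original text):

use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
  let a = Arc::new(Mutex::new("a"));
  let b = Arc::new(Mutex::new("b"));

  let (a2, b2) = (Arc::clone(&a), Arc::clone(&b));
  let handle1 = thread::spawn(move || {
    // Lock a first, then b
    let a3 = a2.lock().unwrap();
    thread::sleep(Duration::from_secs(3));
    let b3 = b2.lock().unwrap();
    println!("handle1: {} {}", *a3, *b3);
  });

  let (a4, b4) = (Arc::clone(&a), Arc::clone(&b));
  let handle2 = thread::spawn(move || {
    // Same order here: a first, then b, so neither thread can end up
    // holding the lock the other one is waiting for
    let a5 = a4.lock().unwrap();
    thread::sleep(Duration::from_secs(3));
    let b5 = b4.lock().unwrap();
    println!("handle2: {} {}", *a5, *b5);
  });

  handle1.join().unwrap();
  handle2.join().unwrap();
  println!("ok");
}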