Table of contents

  • 1. Why thread safety matters
  • 2. Spin locks and mutexes
  • 3. 13 kinds of locks
    • 1. OSSpinLock
    • 2. os_unfair_lock
    • 3. pthread_mutex
    • 4. NSLock
    • 5. NSRecursiveLock
    • 6. NSCondition
    • 7. NSConditionLock
    • 8. dispatch_semaphore
    • 9. dispatch_queue (DISPATCH_QUEUE_SERIAL)
    • 10. @synchronized
    • 11. atomic
    • 12. pthread_rwlock
    • 13. dispatch_barrier_async
  • 4. Lock performance comparison

Why thread safety matters

When multiple threads access the same resource, data can easily become corrupted. The classic example is selling tickets, so let's use it here: suppose we have 50 tickets and 5 windows (threads) selling them, and see what happens.

- (void)ticketTest {
    self.ticketsCount = 50;
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    for (NSInteger i = 0; i < 5; i++) {
        dispatch_async(queue, ^{
            for (int i = 0; i < 10; i++) {
                [self sellingTickets];
            }
        });
    }
}

- (void)sellingTickets {
    int oldTicketsCount = self.ticketsCount;
    sleep(.2);
    oldTicketsCount -= 1;
    self.ticketsCount = oldTicketsCount;
    NSLog(@"Current number of remaining tickets -> %d", oldTicketsCount);
}

We start with 50 tickets and sell 50 times, so there should be 0 left, but the log shows 3 remaining, so there is a thread safety issue here.

Why the problem occurs

The problem occurs when multiple threads read the same value at the same time. For example, threads A and B both read the current count of 10 and each sells a ticket, so two tickets are sold but the total count only decreases by one.

The solution

Use thread synchronization: make threads access the shared resource in a predetermined order. The most common synchronization technique is locking.

Spin locks and mutexes

Spin lock

A spin lock is similar to a mutex, except that it does not put the caller to sleep. If the lock is already held by another execution unit, the caller loops, repeatedly checking whether the holder has released it; hence the name "spin". It is used to give mutually exclusive access to a resource. Because a spin lock never puts the caller to sleep, it can be more efficient than a mutex, but it also has drawbacks:

1. A spin lock keeps occupying the CPU: the waiting thread keeps running (spinning) until it acquires the lock, so if the lock cannot be acquired within a very short time, CPU efficiency drops.
2. Spin locks can cause deadlock, for example when the holder tries to acquire the lock recursively, or when it calls functions that may themselves block or reacquire the lock (in kernel code, e.g. copy_to_user(), copy_from_user(), kmalloc()).

Spin locks should therefore be used with caution. They are only really needed in a preemptible or SMP kernel; in a single-CPU, non-preemptible kernel, spin lock operations compile to nothing. Spin locks are suitable when the lock holder keeps the lock for a very short time.

The mutex

A mutex is a sleep-waiting lock. For example, on a dual-core machine, threads A and B run on Core0 and Core1 respectively. Suppose thread A calls pthread_mutex_lock to acquire the lock protecting a critical section while the lock is held by thread B: thread A blocks, Core0 performs a context switch and puts thread A on the wait queue, and Core0 is then free to run other tasks (such as another thread C) instead of busy-waiting. If thread A instead used pthread_spin_lock to request the lock, it would busy-wait on Core0 until the lock became available.

How the two kinds of locks work

Mutex: the waiting thread goes from sleeping to running, which involves context switching, CPU scheduling, signaling, and other overhead.

Spin lock: the thread stays running (lock -> unlock), looping to test the lock flag; the mechanism is simple.

Comparison: a mutex costs more to acquire up front, but that cost is basically paid once; the size of the critical section and how long the lock is held barely change the mutex's overhead. A spin lock keeps looping and consumes CPU the whole time it waits; its initial cost is lower than a mutex's, but its overhead grows linearly with the time the lock is held.

When to use each lock

A mutex is used when the critical section is held for a long time, for example:

  • 1. The critical section performs I/O operations
  • 2. The critical section code is complex or loops many times
  • 3. Contention for the critical section is fierce
  • 4. The machine has a single-core processor

A spin lock is mainly used when the critical section is held for a very short time and CPU resources are not tight; spin locks are typically used on multi-core servers.

13 kinds of locks

1. OSSpinLock

OSSpinLock is a spin lock. To use it you need to import the header file #import <libkern/OSAtomic.h>.

OSSpinLock lock = OS_SPINLOCK_INIT;
// lock
OSSpinLockLock(&lock);
// unlock
OSSpinLockUnlock(&lock);

demo

#import "OSSpinLockDemo.h"
#import <libkern/OSAtomic.h>
@interface OSSpinLockDemo()
@property (assign, nonatomic) OSSpinLock ticketLock;
@end

@implementation OSSpinLockDemo

- (instancetype)init
{
self = [super init];
if (self) {
self.ticketLock = OS_SPINLOCK_INIT;
}
return self;
}

// sellingTickets
- (void)sellingTickets {
    OSSpinLockLock(&_ticketLock);
    [super sellingTickets];
    OSSpinLockUnlock(&_ticketLock);
}
@end

OSSpinLock has been deprecated since iOS 10.0; os_unfair_lock is used instead. OSSpinLock also has safety problems (related to priority inversion); see the article "OSSpinLock is no longer safe" for details.
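If a project still has to support systems older than iOS 10, one option is to choose the lock at runtime. A minimal sketch, assuming the class imports both <os/lock.h> and <libkern/OSAtomic.h> and keeps an os_unfair_lock ivar (_unfairLock) and a legacy OSSpinLock ivar (_spinLock):

// Assumes ivars: os_unfair_lock _unfairLock; OSSpinLock _spinLock;
- (void)sellingTickets {
    if (@available(iOS 10.0, *)) {
        os_unfair_lock_lock(&_unfairLock);   // waiting threads sleep
        [super sellingTickets];
        os_unfair_lock_unlock(&_unfairLock);
    } else {
        OSSpinLockLock(&_spinLock);          // deprecated busy-wait lock
        [super sellingTickets];
        OSSpinLockUnlock(&_spinLock);
    }
}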

2. os_unfair_lock

os_unfair_lock replaces the unsafe OSSpinLock and is available starting from iOS 10. Threads waiting on an os_unfair_lock sleep instead of spinning.

os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;
// lock
os_unfair_lock_lock(&lock);
// unlock
os_unfair_lock_unlock(&lock);

demo

#import "os_unfair_lockDemo.h"
#import <os/lock.h>
@interface os_unfair_lockDemo()
@property (assign, nonatomic) os_unfair_lock ticketLock;
@end

@implementation os_unfair_lockDemo
- (instancetype)init
{
self = [super init];
if (self) {
self.ticketLock = OS_UNFAIR_LOCK_INIT;
}
return self;
}

- (void)sellingTickets {
    os_unfair_lock_lock(&_ticketLock);
    [super sellingTickets];
    os_unfair_lock_unlock(&_ticketLock);
}
@end

3. pthread_mutex

pthread_mutex is a mutual-exclusion lock; threads waiting for it sleep. You need to import the header file #import <pthread.h>. Usage steps:

  • 1. Initialize the lock attributes
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);

/*
* Mutex type attributes
*/
#define PTHREAD_MUTEX_NORMAL 0
#define PTHREAD_MUTEX_ERRORCHECK 1
#define PTHREAD_MUTEX_RECURSIVE 2
#define PTHREAD_MUTEX_DEFAULT PTHREAD_MUTEX_NORMAL

  • 2. Initialize the lock
// Initialize the lock
pthread_mutex_init(mutex, &attr);
  • 3. Once the lock is initialized, destroy the attribute object
pthread_mutexattr_destroy(&attr);
  • 4. Lock and unlock
pthread_mutex_lock(&_mutex);
pthread_mutex_unlock(&_mutex);
  • 5. Destroy the lock
pthread_mutex_destroy(&_mutex);

Note: we can pass NULL instead of an attribute object to get the default PTHREAD_MUTEX_NORMAL behaviour: pthread_mutex_init(mutex, NULL);

Specific code

#import "pthread_mutexDemo.h"
#import <pthread.h>
@interface pthread_mutexDemo()
@property (assign, nonatomic) pthread_mutex_t ticketMutex;
@end

@implementation pthread_mutexDemo

- (instancetype)init
{
self = [super init];
if (self) {
    // Initialize the attributes
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_DEFAULT);
    // Initialize the lock
    pthread_mutex_init(&_ticketMutex, &attr);
    // Destroy the attributes
    pthread_mutexattr_destroy(&attr);
}
return self;
}

// sellingTickets
- (void)sellingTickets {
    pthread_mutex_lock(&_ticketMutex);
    [super sellingTickets];
    pthread_mutex_unlock(&_ticketMutex);
}
@end

Deadlock: let's change the code a little.

- (void)sellingTickets {
    pthread_mutex_lock(&_ticketMutex);
    [super sellingTickets];
    [self sellingTickets2];
    pthread_mutex_unlock(&_ticketMutex);
}

- (void)sellingTickets2 {
    pthread_mutex_lock(&_ticketMutex);
    NSLog(@"%s", __func__);
    pthread_mutex_unlock(&_ticketMutex);
}

The code above deadlocks: sellingTickets acquires the lock and then calls sellingTickets2, which tries to acquire the same lock again before sellingTickets has released it, so the thread ends up waiting on itself.

But pthread_mutex has a type attribute that solves this problem: PTHREAD_MUTEX_RECURSIVE.

A PTHREAD_MUTEX_RECURSIVE lock allows the same thread to lock the same lock repeatedly without deadlocking. The key point: the same thread and the same lock.

- (instancetype)init
{
self = [super init];
if (self) {
    // Initialize the attributes
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    // Initialize the lock
    pthread_mutex_init(&_ticketMutex, &attr);
    // Destroy the attributes
    pthread_mutexattr_destroy(&attr);
}
return self;
}

Another solution to the above problem is to create a separate lock for sellingTickets2. Since the two methods lock different mutexes, the thread does not deadlock; a minimal sketch is shown below.
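A minimal sketch of the two-lock approach, assuming a second hypothetical ivar _ticketMutex2 that is initialized the same way as _ticketMutex:

// Hypothetical second mutex, initialized in init just like _ticketMutex.
- (void)sellingTickets {
    pthread_mutex_lock(&_ticketMutex);
    [super sellingTickets];
    [self sellingTickets2];
    pthread_mutex_unlock(&_ticketMutex);
}

- (void)sellingTickets2 {
    pthread_mutex_lock(&_ticketMutex2);   // a different lock, so no self-deadlock
    NSLog(@"%s", __func__);
    pthread_mutex_unlock(&_ticketMutex2);
}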

Condition variables (pthread_cond)

// Initialize the attributes
pthread_mutexattr_t attr;
pthread_mutexattr_init(&attr);
pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
// Initialize the lock
pthread_mutex_init(&_mutex, &attr);
// Destroy the attributes
pthread_mutexattr_destroy(&attr);
// Initialize the condition
pthread_cond_init(&_cond, NULL);
// Wait for the condition
pthread_cond_wait(&_cond, &_mutex);
// Wake one thread waiting on the condition
pthread_cond_signal(&_cond);
// Wake all threads waiting on the condition
pthread_cond_broadcast(&_cond);
// Destroy
pthread_mutex_destroy(&_mutex);
pthread_cond_destroy(&_cond);

Use case: suppose we have an array and two threads, one adding elements and one removing them. Removal must not run while the array is empty; in that case the removing thread should wait until an element has been added.

#import "pthread_mutexDemo1.h"
#import <pthread.h>

@interface pthread_mutexDemo1()
@property (assign, nonatomic) pthread_mutex_t mutex;
@property (assign, nonatomic) pthread_cond_t cond;
@property (strong, nonatomic) NSMutableArray *data;
@end

@implementation pthread_mutexDemo1

- (instancetype)init
{
if (self = [super init]) {
    // Initialize the attributes
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    // Initialize the lock
    pthread_mutex_init(&_mutex, &attr);
    // Destroy the attributes
    pthread_mutexattr_destroy(&attr);
    // Initialize the condition
    pthread_cond_init(&_cond, NULL);
    self.data = [NSMutableArray array];
}
return self;
}

- (void)otherTest {
    [[[NSThread alloc] initWithTarget:self selector:@selector(__remove) object:nil] start];
    [[[NSThread alloc] initWithTarget:self selector:@selector(__add) object:nil] start];
}

// Thread 1: remove an element from the array
- (void)__remove {
    pthread_mutex_lock(&_mutex);
    NSLog(@"__remove - begin");

    if (self.data.count == 0) {
        // Wait (releases _mutex until signalled)
        pthread_cond_wait(&_cond, &_mutex);
    }
    [self.data removeLastObject];
    NSLog(@"Deleted element");
    pthread_mutex_unlock(&_mutex);
}

// Thread 2: add an element to the array
- (void)__add {
    pthread_mutex_lock(&_mutex);
    sleep(1);
    [self.data addObject:@"Test"];
    NSLog(@"Added element");
    // Wake one thread waiting on the condition
    pthread_cond_signal(&_cond);
    pthread_mutex_unlock(&_mutex);
}

- (void)dealloc {
    pthread_mutex_destroy(&_mutex);
    pthread_cond_destroy(&_cond);
}

To make the order of events easy to observe, we add sleep(1) in __add.

4. NSLock

NSLock is an object-oriented wrapper around an ordinary (non-recursive) mutex, i.e. one created with pthread_mutex_init(mutex, NULL);.

NSLock conforms to the NSLocking protocol. lock locks, unlock unlocks, tryLock tries to lock and returns NO if it fails, and lockBeforeDate: tries to acquire the lock before the given date, returning NO if the lock cannot be acquired in time.

@protocol NSLocking
- (void)lock;
- (void)unlock;
@end

@interface NSLock : NSObject <NSLocking> {
@private
void *_priv;
}

- (BOOL)tryLock;
- (BOOL)lockBeforeDate:(NSDate *)limit;
@property (nullable, copy) NSString *name;
@end

It is also very simple to use

#import "LockDemo.h"@interface LockDemo() @property (strong, nonatomic) NSLock *ticketLock; @end@implementation LockDemo - (void)sellingTickets{[self.ticketLock lock]; [super sellingTickets]; [self.ticketLock unlock]; } @endCopy the code

5. NSRecursiveLock

NSRecursiveLock wraps a recursive mutex and has the same API as NSLock.

#import "RecursiveLockDemo.h"@interface RecursiveLockDemo() @property (nonatomic,strong) NSRecursiveLock *ticketLock; @end@implementation RecursiveLockDemo - (void)sellingTickets{[self.ticketLock lock]; [super sellingTickets]; [self.ticketLock unlock]; } @endCopy the code

6. NSCondition

NSCondition wraps a mutex and a condition variable (cond); it is more object-oriented and more convenient to use.

@interface NSCondition : NSObject <NSLocking>

- (void)wait;
- (BOOL)waitUntilDate:(NSDate *)limit;
- (void)signal;
- (void)broadcast;

@property (nullable, copy) NSString *name;
@end

For the array example above, we can write it like this with NSCondition:

// Thread 1: remove an element from the array
- (void)__remove {
    [self.condition lock];
    if (self.data.count == 0) {
        // Wait
        [self.condition wait];
    }
    [self.data removeLastObject];
    NSLog(@"Deleted element");
    [self.condition unlock];
}

// Thread 2: add an element to the array
- (void)__add {
    [self.condition lock];
    sleep(1);
    [self.data addObject:@"Test"];
    NSLog(@"Added element");
    [self.condition signal];
    [self.condition unlock];
}

7. NSConditionLock

NSConditionLock is a further encapsulation of NSCondition that lets you attach a specific condition value to the lock.

@interface NSConditionLock : NSObject <NSLocking>
- (instancetype)initWithCondition:(NSInteger)condition;

@property (readonly) NSInteger condition;
- (void)lockWhenCondition:(NSInteger)condition;
- (BOOL)tryLock;
- (BOOL)tryLockWhenCondition:(NSInteger)condition;
- (void)unlockWithCondition:(NSInteger)condition;
- (BOOL)lockBeforeDate:(NSDate *)limit;
- (BOOL)lockWhenCondition:(NSInteger)condition beforeDate:(NSDate *)limit;
@property (nullable, copy) NSString *name;
@end

There are three commonly used methods

  • 1. initWithCondition: initializes the lock and sets the condition value
  • 2. lockWhenCondition:(NSInteger)condition: locks only when the condition value equals condition
  • 3. unlockWithCondition:(NSInteger)condition: unlocks and sets the condition value to condition
@interface NSConditionLockDemo()
@property (strong, nonatomic) NSConditionLock *conditionLock;
@end
@implementation NSConditionLockDemo
- (instancetype)init
{
if (self = [super init]) {
self.conditionLock = [[NSConditionLock alloc] initWithCondition:1];
}
return self;
}

- (void)otherTest
{
[[[NSThread alloc] initWithTarget:self selector:@selector(__one) object:nil] start];
[[[NSThread alloc] initWithTarget:self selector:@selector(__two) object:nil] start];
}

- (void)__one
{
[self.conditionLock lock];
NSLog(@"__one");
sleep(1);
[self.conditionLock unlockWithCondition:2];
}

- (void)__two
{
[self.conditionLock lockWhenCondition:2];
NSLog(@"__two");
[self.conditionLock unlockWithCondition:3];
}
@end

8. dispatch_semaphore

  • semaphore means "semaphore".
  • The initial value of a semaphore can be used to control the maximum number of threads that access a resource concurrently.
  • With an initial value of 1, only one thread may access the resource at a time, which gives thread synchronization.
// Create a semaphore with an initial value of 5
self.semaphore = dispatch_semaphore_create(5);
// If the semaphore value is > 0, decrement it by 1 and continue;
// if it is <= 0, wait (sleep) until it becomes > 0, then decrement it and continue
dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER);
// Increment the semaphore value by 1
dispatch_semaphore_signal(self.semaphore);
@interface dispatch_semaphoreDemo()
@property (strong, nonatomic) dispatch_semaphore_t semaphore;
@end
@implementation dispatch_semaphoreDemo
- (instancetype)init
{
if (self = [super init]) {
self.semaphore = dispatch_semaphore_create(1);
}
return self;
}
- (void)otherTest
{
for (int i = 0; i < 20; i++) {
[[[NSThread alloc] initWithTarget:self selector:@selector(test) object:nil] start];
}
}
- (void)test {
    // If the semaphore value is > 0, decrement it by 1 and continue;
    // if it is <= 0, wait until it becomes > 0, then decrement it and continue
    dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER);
    sleep(2);
    NSLog(@"test - %@", [NSThread currentThread]);
    // Increment the semaphore value by 1
    dispatch_semaphore_signal(self.semaphore);
}
@end

When we run the code, the log lines appear one at a time, separated by the 2-second sleep: although 20 threads are started at once, only one thread accesses the resource at a time.
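To allow a bounded amount of concurrency rather than strict mutual exclusion, raise the initial value. A minimal sketch of the same demo with an initial value of 5:

// Hypothetical variant: initial value 5, so up to 5 threads
// may run the critical section at the same time.
- (instancetype)init
{
    if (self = [super init]) {
        self.semaphore = dispatch_semaphore_create(5);
    }
    return self;
}

- (void)test {
    dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER);
    sleep(2);
    NSLog(@"test - %@", [NSThread currentThread]);   // log lines now appear in groups of up to 5
    dispatch_semaphore_signal(self.semaphore);
}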

9. dispatch_queue (DISPATCH_QUEUE_SERIAL)

A GCD serial queue can also be used to achieve thread synchronization.

dispatch_queue_t queue = dispatch_queue_create("test", DISPATCH_QUEUE_SERIAL);

// Add task 1
dispatch_sync(queue, ^{
    for (int i = 0; i < 2; ++i) {
        NSLog(@"1 - %@", [NSThread currentThread]);
    }
});

// Add task 2
dispatch_sync(queue, ^{
    for (int i = 0; i < 2; ++i) {
        NSLog(@"2 - %@", [NSThread currentThread]);
    }
});

10. @synchronized

@synchronized wraps a recursive mutex: @synchronized(obj) internally generates a recursive lock associated with obj, then locks and unlocks it around the enclosed block.

// sellingTickets
- (void)sellingTickets {
    @synchronized([self class]) {
        [super sellingTickets];
    }
}
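Because the underlying lock is recursive, nesting @synchronized on the same object does not deadlock. A small illustrative sketch:

- (void)sellingTickets {
    @synchronized([self class]) {
        [super sellingTickets];
        @synchronized([self class]) {   // same lock object re-entered on the same thread: no deadlock
            NSLog(@"nested @synchronized");
        }
    }
}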

The implementation of @synchronized can be found in objc4's objc-sync.mm. Entering and leaving a @synchronized block call objc_sync_enter and objc_sync_exit respectively.

objc_sync_enter implementation

int objc_sync_enter(id obj)
{
int result = OBJC_SYNC_SUCCESS;

if (obj) {
SyncData* data = id2data(obj, ACQUIRE);
assert(data);
data->mutex.lock();
} else {
// @synchronized(nil) does nothing
if (DebugNilSync) {
_objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
}
objc_sync_nil();
}

return result;
}

We can see that the lock is acquired with data->mutex.lock(). Let's go into the id2data method and keep looking.

#define LIST_FOR_OBJ(obj) sDataLists[obj].data
static StripedMap<SyncList> sDataLists;

We can see that the SyncData object is retrieved via sDataLists[obj].data, where sDataLists is a global hash table (a StripedMap) keyed by the synchronized object.

For a deeper dive, see the article "More than you want to know about @synchronized".

11. atomic

  • atomic guarantees the atomicity of a property's setter and getter; it is equivalent to placing a synchronization lock inside the getter and setter.
  • See objc4's objc-accessors.mm for the implementation.
  • It does not guarantee that the code using the property is thread-safe, as the sketch below shows.
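A minimal sketch of the difference, using the ticket example (the class and property names here are illustrative):

@interface TicketDemo : NSObject
// The generated setter and getter are atomic...
@property (atomic, assign) int ticketsCount;
@end

@implementation TicketDemo
// ...but this read-modify-write is still a race: two threads can read the
// same old value, and then the count drops by only one for two sales.
- (void)sellingTickets {
    int oldTicketsCount = self.ticketsCount;   // atomic getter
    oldTicketsCount -= 1;
    self.ticketsCount = oldTicketsCount;       // atomic setter
}
@end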

12. pthread_rwlock (read/write lock)

pthread_rwlock is a read/write lock, often used for scenarios such as reading and writing files.

A thread-safe read/write scheme typically has to satisfy the following:

  • 1. Only one thread can write data at a time
  • 2. Multiple threads are allowed to read data at the same time
  • 3. Writing and reading must not happen at the same time
// Initialize the lock
pthread_rwlock_init(&_lock, NULL);
// Read: lock
pthread_rwlock_rdlock(&_lock);
// Read: try to lock
pthread_rwlock_tryrdlock(&_lock);
// Write: lock
pthread_rwlock_wrlock(&_lock);
// Write: try to lock
pthread_rwlock_trywrlock(&_lock);
// Unlock
pthread_rwlock_unlock(&_lock);
// Destroy
pthread_rwlock_destroy(&_lock);
#import <pthread.h>
@interface pthread_rwlockDemo ()
@property (assign, nonatomic) pthread_rwlock_t lock;
@end

@implementation pthread_rwlockDemo

- (instancetype)init
{
self = [super init];
if(self) {// Initialize the lock pthread_rwlock_init(&_lock, NULL); }return self;
}

- (void)otherTest{
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

for (int i = 0; i < 10; i++) {
dispatch_async(queue, ^{
[self read];
});
dispatch_async(queue, ^{
[self write];
});
}
}
- (void)read {
pthread_rwlock_rdlock(&_lock);
sleep(1);
NSLog(@"%s", __func__);
pthread_rwlock_unlock(&_lock);
}
- (void)write
{
pthread_rwlock_wrlock(&_lock);
sleep(1);
NSLog(@"%s", __func__);
pthread_rwlock_unlock(&_lock);
}
- (void)dealloc
{
pthread_rwlock_destroy(&_lock);
}
@end

In the log, several reads can appear within the same second (reads run concurrently), but writes never overlap with reads or with other writes.

13. dispatch_barrier_async

The queue passed in must be a concurrent queue you created yourself with dispatch_queue_create. If you pass a serial queue or a global concurrent queue, this function behaves like dispatch_async.

// The queue must be a concurrent queue created with dispatch_queue_create
self.queue = dispatch_queue_create("rw_queue", DISPATCH_QUEUE_CONCURRENT);

// Read
dispatch_async(self.queue, ^{

});

// Write
dispatch_barrier_async(self.queue, ^{

});
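A minimal multi-read/single-write sketch built on this (class and method names are illustrative, and a queue property is assumed):

// Hypothetical reader/writer; assumes
// @property (strong, nonatomic) dispatch_queue_t queue;
- (instancetype)init
{
    if (self = [super init]) {
        self.queue = dispatch_queue_create("rw_queue", DISPATCH_QUEUE_CONCURRENT);
    }
    return self;
}

- (void)read {
    dispatch_async(self.queue, ^{
        // Plain async blocks may run concurrently with each other
        NSLog(@"%s", __func__);
    });
}

- (void)write {
    dispatch_barrier_async(self.queue, ^{
        // The barrier block runs alone: no reads or other writes overlap it
        NSLog(@"%s", __func__);
    });
}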

Lock performance comparison

Performance sorted from highest to lowest (a rough way to measure this yourself is sketched after the list):

  • 1. os_unfair_lock
  • 2. OSSpinLock
  • 3. dispatch_semaphore
  • 4. pthread_mutex
  • 5. dispatch_queue (DISPATCH_QUEUE_SERIAL)
  • 6. NSLock
  • 7. NSCondition
  • 8. pthread_mutex (recursive)
  • 9. NSRecursiveLock
  • 10. NSConditionLock
  • 11. @synchronized
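The ranking can be sanity-checked with a simple timing loop. A minimal sketch, assuming each lock guards the same trivial critical section (the iteration count and helper names are arbitrary):

#import <Foundation/Foundation.h>
#import <os/lock.h>

// Hypothetical micro-benchmark: time N uncontended lock/unlock pairs.
static const int kIterations = 1000000;

static void benchmarkUnfairLock(void) {
    os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;
    CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
    for (int i = 0; i < kIterations; i++) {
        os_unfair_lock_lock(&lock);
        os_unfair_lock_unlock(&lock);
    }
    NSLog(@"os_unfair_lock: %.2f ms", (CFAbsoluteTimeGetCurrent() - start) * 1000);
}

static void benchmarkNSLock(void) {
    NSLock *lock = [[NSLock alloc] init];
    CFAbsoluteTime start = CFAbsoluteTimeGetCurrent();
    for (int i = 0; i < kIterations; i++) {
        [lock lock];
        [lock unlock];
    }
    NSLog(@"NSLock: %.2f ms", (CFAbsoluteTimeGetCurrent() - start) * 1000);
}

Results vary with contention, hardware, and OS version, so treat the ordering above as a rough guide rather than a fixed rule.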