This article covers iOS multithreading schemes, thread-safety techniques, and the multiple-reader, single-writer pattern.

It’s a bit long, so please be patient.

Process

Each iOS app runs as a single process, with its own virtual address space that holds its runtime data.

Thread

Each process contains one or more threads, which share the process's global variables and heap. Multiple threads can run concurrently within one process, giving the effect of performing several tasks at the same time. On a single processor core this so-called concurrency is really pseudo-concurrency: the operating system rapidly switches back and forth between threads.

The role of multithreading

  • Avoid blocking the main thread. Tasks that take a long time to execute will cause the UI to stutter if they all run on the main thread
  • Split up complex work. For example, images shown by a UITableViewCell can be processed on a separate thread and only handed back to the main thread for display
  • Run multiple tasks in parallel

Multithreading in iOS

  • Pthread: a C API that is hard to use; the caller must manage the thread's life cycle. Hardly anyone uses it directly
  • NSThread: an object-oriented Objective-C wrapper that relies on a run loop to stay alive; the caller still manages the life cycle. Rarely used
  • GCD: a C API where the system manages thread life cycles itself. The mainstream multithreading solution
  • NSOperation: an object-oriented Objective-C layer built on top of GCD; the system manages the life cycle. Also widely used (see the short sketch after this list)
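As a quick illustration of the NSOperation approach, here is a minimal sketch (the queue name downloadQueue and the work inside the blocks are made up for this example):

NSOperationQueue *downloadQueue = [[NSOperationQueue alloc] init];
downloadQueue.maxConcurrentOperationCount = 2;   // limit how many operations run at once

[downloadQueue addOperationWithBlock:^{
    // background work, e.g. decoding an image
    NSLog(@"working on %@", [NSThread currentThread]);

    // hand the result back to the main queue for UI work
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
        NSLog(@"back on main: %@", [NSThread currentThread]);
    }];
}];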

The queue

A queue holds tasks and schedules them onto threads according to the queue's attributes. Queues come in two kinds: serial and concurrent.

  • Serial queue: tasks in the queue execute one at a time, in order
  • Concurrent queue: several tasks from the queue may execute at the same time

Synchronous

A synchronous dispatch (dispatch_sync) blocks the current thread until the submitted task finishes, so tasks execute one after another on the same thread.

Task B can be performed only after task A is complete. If task A takes too long, task B waits for task A to complete.

iOS scenario: while network data is loading, a loading indicator blocks the current view; when loading finishes, the indicator disappears and the page shows the data. If the loading task never completes, the interface stays stuck on the loading screen.

It may cause a deadlock

Asynchronous

An asynchronous dispatch (dispatch_async) returns immediately; the task runs later, usually on another thread.

Let's look at how synchronous and asynchronous dispatch behave on different queues:

- (void)viewDidLoad {
    [super viewDidLoad];
    NSLog(@"main thread %@",[NSThread currentThread]);
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"main dispatch_async_thread:%@",[NSThread currentThread]);
    });
    
    dispatch_sync(dispatch_get_global_queue(0, 0), ^{
        NSLog(@"global dispatch_sync_thread:%@",[NSThread currentThread]);
    });
    
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        NSLog(@"global dispatch_async_thread:%@",[NSThread currentThread]);
    });
    
    dispatch_queue_t squeue = dispatch_queue_create("QUEUE1", DISPATCH_QUEUE_SERIAL);
    dispatch_sync(squeue, ^{
        NSLog(@"squeue dispatch_sync_thread:%@",[NSThread currentThread]);
    });
    dispatch_async(squeue, ^{
        NSLog(@"squeue dispatch_async_thread:%@",[NSThread currentThread]);
    });
    
    dispatch_queue_t cqueue = dispatch_queue_create("QUEUE2", DISPATCH_QUEUE_CONCURRENT);
    dispatch_sync(cqueue, ^{
        NSLog(@"cqueue dispatch_sync_thread:%@",[NSThread currentThread]);
    });
    
    dispatch_async(cqueue, ^{
        NSLog(@"cqueue dispatch_async_thread:%@",[NSThread currentThread]);
    });
}

main thread <NSThread: 0x600003a0c200>{number = 1, name = main}
global dispatch_sync_thread:<NSThread: 0x600003a0c200>{number = 1, name = main}
global dispatch_async_thread:<NSThread: 0x600003a4d980>{number = 2, name = (null)}
squeue dispatch_sync_thread:<NSThread: 0x600003a0c200>{number = 1, name = main}
cqueue dispatch_sync_thread:<NSThread: 0x600003a0c200>{number = 1, name = main}
squeue dispatch_async_thread:<NSThread: 0x600003a4d980>{number = 2, name = (null)}
cqueue dispatch_async_thread:<NSThread: 0x600003a4d980>{number = 2, name = (null)}
main dispatch_async_thread:<NSThread: 0x600003a0c200>{number = 1, name = main}

You can see from the results:

  • Synchronous dispatch: no new thread is started; the block runs on the current (main) thread
  • Asynchronous dispatch: new threads are started for the global, serial, and concurrent queues. The main queue never starts a new thread

Executing tasks asynchronously on the main queue

NSLog(@"1%@",[NSThread currentThread]);
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"main async 2:%@",[NSThread currentThread]);
});
NSLog(@"3%@",[NSThread currentThread]);
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"main async 4:%@",[NSThread currentThread]);
});
NSLog(@"5%@",[NSThread currentThread]);
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"main async 6:%@",[NSThread currentThread]);
});
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"main async 7:%@",[NSThread currentThread]);
});
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"main async 8:%@",[NSThread currentThread]);
});
NSLog(@"9%@",[NSThread currentThread]);
dispatch_async(dispatch_get_main_queue(), ^{
    NSLog(@"main async 10:%@",[NSThread currentThread]);
});

1<NSThread: 0x600001564140>{number = 1, name = main}
3<NSThread: 0x600001564140>{number = 1, name = main}
5<NSThread: 0x600001564140>{number = 1, name = main}
9<NSThread: 0x600001564140>{number = 1, name = main}
main async 2:<NSThread: 0x600001564140>{number = 1, name = main}
main async 4:<NSThread: 0x600001564140>{number = 1, name = main}
main async 6:<NSThread: 0x600001564140>{number = 1, name = main}
main async 7:<NSThread: 0x600001564140>{number = 1, name = main}
main async 8:<NSThread: 0x600001564140>{number = 1, name = main}
main async 10:<NSThread: 0x600001564140>{number = 1, name = main}

From the results: asynchronous tasks dispatched to the main queue wait until the work already running in viewDidLoad finishes, then execute in the order they were submitted. This raises a question: what happens if the code below dispatches a synchronous task to the main queue?

NSLog(@"1%@",[NSThread currentThread]);
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"main sync 2:%@",[NSThread currentThread]);
});
NSLog(@"3%@",[NSThread currentThread]);

The answer: deadlock. The synchronous task cannot start until viewDidLoad finishes, because it is queued behind it on the serial main queue, while viewDidLoad cannot finish because dispatch_sync blocks it until the task completes. Each waits for the other, and everything stalls.

Executing tasks asynchronously on a global queue

dispatch_queue_t gQueue = dispatch_get_global_queue(0, 0);
NSLog(@"1%@",[NSThread currentThread]);
dispatch_async(gQueue, ^{
    NSLog(@"global async 2:%@",[NSThread currentThread]);
});
NSLog(@"3%@",[NSThread currentThread]);
dispatch_async(gQueue, ^{
    NSLog(@"global async 4:%@",[NSThread currentThread]);
});
NSLog(@"5%@",[NSThread currentThread]);
dispatch_async(gQueue, ^{
    NSLog(@"global async 6:%@",[NSThread currentThread]);
});
dispatch_async(gQueue, ^{
    NSLog(@"global async 7:%@",[NSThread currentThread]);
});
dispatch_async(gQueue, ^{
    NSLog(@"global async 8:%@",[NSThread currentThread]);
});
NSLog(@"9%@",[NSThread currentThread]);
dispatch_async(gQueue, ^{
    NSLog(@"global async 10:%@",[NSThread currentThread]);
});

1<NSThread: 0x6000003f0080>{number = 1, name = main}
3<NSThread: 0x6000003f0080>{number = 1, name = main}
global async 2:<NSThread: 0x6000003b1840>{number = 6, name = (null)}
5<NSThread: 0x6000003f0080>{number = 1, name = main}
9<NSThread: 0x6000003f0080>{number = 1, name = main}
global async 4:<NSThread: 0x6000003b2180>{number = 7, name = (null)}
global async 6:<NSThread: 0x6000003b1840>{number = 6, name = (null)}
global async 7:<NSThread: 0x6000003f3380>{number = 5, name = (null)}
global async 8:<NSThread: 0x6000003b2180>{number = 7, name = (null)}
global async 10:<NSThread: 0x6000003ec9c0>{number = 4, name = (null)}

We already concluded that dispatching an asynchronous task to a global queue starts a new thread, so these tasks clearly do not have to wait for viewDidLoad to finish before executing.

Self-created serial queues perform tasks asynchronously

dispatch_queue_t squeue = dispatch_queue_create("QUEUE1", DISPATCH_QUEUE_SERIAL);
NSLog(@"1%@",[NSThread currentThread]);
dispatch_async(squeue, ^{
    NSLog(@"squeue async 2:%@",[NSThread currentThread]);
});
NSLog(@"3%@",[NSThread currentThread]);
dispatch_async(squeue, ^{
    NSLog(@"squeue async 4:%@",[NSThread currentThread]);
});
NSLog(@"5%@",[NSThread currentThread]);
dispatch_async(squeue, ^{
    NSLog(@"squeue async 6:%@",[NSThread currentThread]);
});
dispatch_async(squeue, ^{
    NSLog(@"squeue async 7:%@",[NSThread currentThread]);
});
dispatch_async(squeue, ^{
    NSLog(@"squeue async 8:%@",[NSThread currentThread]);
});
NSLog(@"9%@",[NSThread currentThread]);
dispatch_async(squeue, ^{
    NSLog(@"squeue async 10:%@",[NSThread currentThread]);
});

1<NSThread: 0x60000215c600>{number = 1, name = main}
3<NSThread: 0x60000215c600>{number = 1, name = main}
squeue async 2:<NSThread: 0x600002108140>{number = 6, name = (null)}
5<NSThread: 0x60000215c600>{number = 1, name = main}
9<NSThread: 0x60000215c600>{number = 1, name = main}
squeue async 4:<NSThread: 0x600002108140>{number = 6, name = (null)}
squeue async 6:<NSThread: 0x600002108140>{number = 6, name = (null)}
squeue async 7:<NSThread: 0x600002108140>{number = 6, name = (null)}
squeue async 8:<NSThread: 0x600002108140>{number = 6, name = (null)}
squeue async 10:<NSThread: 0x600002108140>{number = 6, name = (null)}

The tasks again don't wait for viewDidLoad, but only one new thread is created, and the tasks on the serial queue execute in order on it.

Self-created concurrent queues execute tasks asynchronously

dispatch_queue_t cqueue = dispatch_queue_create("QUEUE1", DISPATCH_QUEUE_CONCURRENT);
NSLog(@"1%@",[NSThread currentThread]);
dispatch_async(cqueue, ^{
    NSLog(@"cqueue async 2:%@",[NSThread currentThread]);
});
NSLog(@"3%@",[NSThread currentThread]);
dispatch_async(cqueue, ^{
    NSLog(@"cqueue async 4:%@",[NSThread currentThread]);
});
NSLog(@"5%@",[NSThread currentThread]);
dispatch_async(cqueue, ^{
    NSLog(@"cqueue async 6:%@",[NSThread currentThread]);
});
dispatch_async(cqueue, ^{
    NSLog(@"cqueue async 7:%@",[NSThread currentThread]);
});
dispatch_async(cqueue, ^{
    NSLog(@"cqueue async 8:%@",[NSThread currentThread]);
});
NSLog(@"9%@",[NSThread currentThread]);
dispatch_async(cqueue, ^{
    NSLog(@"cqueue async 10:%@",[NSThread currentThread]);
});

1<NSThread: 0x6000028a03c0>{number = 1, name = main}
3<NSThread: 0x6000028a03c0>{number = 1, name = main}
cqueue async 2:<NSThread: 0x60000289d1c0>{number = 3, name = (null)}
5<NSThread: 0x6000028a03c0>{number = 1, name = main}
cqueue async 4:<NSThread: 0x6000028f8800>{number = 7, name = (null)}
cqueue async 6:<NSThread: 0x60000289d1c0>{number = 3, name = (null)}
cqueue async 7:<NSThread: 0x6000028f8800>{number = 7, name = (null)}
cqueue async 8:<NSThread: 0x6000028f8800>{number = 7, name = (null)}
9<NSThread: 0x6000028a03c0>{number = 1, name = main}
cqueue async 10:<NSThread: 0x6000028f8800>{number = 7, name = (null)}

The result is similar to the global queue: multiple threads are created and the tasks run concurrently.

global_queue

Comparing with the results above, the global queue behaves as a concurrent queue.

main_queue

The main queue is a serial queue.

Deadlock

A deadlock occurs when a synchronous task is added to the main queue from code already running on it. This is not specific to the main queue: calling dispatch_sync on a serial queue from within a task already running on that same queue also deadlocks, and the code below crashes for that reason. It does not happen with concurrent queues.

dispatch_queue_t squeue = dispatch_queue_create("QUEUE1", DISPATCH_QUEUE_SERIAL);
dispatch_sync(squeue, ^{
    NSLog(@"squeue sync 2:%@",[NSThread currentThread]);
    dispatch_sync(squeue, ^{
        NSLog(@"squeue sync 3:%@",[NSThread currentThread]);
    });
});

Thread safety

Multithreading brings convenience but also hidden dangers. When several threads write to the same variable, the result can be wrong. Ticket selling is the classic example.

- (void)viewDidLoad {
    [super viewDidLoad];
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_async(queue, ^{
        for (int i = 0; i < 30; i++) {
            [self saleTickets];
        }
    });
    dispatch_async(queue, ^{
        for (int i = 0; i < 30; i++) {
            [self saleTickets];
        }
    });
    dispatch_async(queue, ^{
        for (int i = 0; i < 40; i++) {
            [self saleTickets];
        }
    });
}

- (void)saleTickets {
    tickets = tickets - 1;
    NSLog(@"sold 1 ticket, %d tickets left", tickets);
}

You can see from the final output that the remaining ticket count is not zero as it should be.

Multiple threads operate on the same memory at the same time

For example, two threads may both read tickets as 98 and both subtract 1, so both write back 97; one sale is lost and the final result is wrong.

Solution 1: Locks

Locks available in iOS:

  • OSSpinLock (spin lock)
  • os_unfair_lock
  • pthread_mutex
  • dispatch_semaphore
  • NSLock
  • NSRecursiveLock
  • NSCondition
  • NSConditionLock
  • @synchronized

Spin lock

When a thread finds a spin lock already held, it busy-waits, repeatedly checking the lock until it is released, and only then continues. Conceptually it is similar to while (locked) {}.

Mutex

When a thread finds a mutex already held, it goes to sleep and is woken up only when another thread unlocks it.
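As a purely illustrative sketch of the difference (not the real OSSpinLock or pthread_mutex implementations), a naive spin lock can be built on a C11 atomic flag:

#import <stdatomic.h>

// Naive spin lock sketch: busy-waits, burning CPU until the flag is cleared.
static atomic_flag spin_flag = ATOMIC_FLAG_INIT;

static void spin_lock(void) {
    // test_and_set returns the previous value; keep spinning while it was already set
    while (atomic_flag_test_and_set(&spin_flag)) {
        /* busy-wait */
    }
}

static void spin_unlock(void) {
    atomic_flag_clear(&spin_flag);
}

A mutex does not spin like this: a thread that cannot get the lock is put to sleep by the kernel and woken when the holder unlocks, so no CPU time is wasted while waiting.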

OSSpinLock
#import <libkern/OSAtomic.h>

- (void)viewDidLoad {
    [super viewDidLoad];
    self.lock = OS_SPINLOCK_INIT;
    [self beginSaleTicket];
}

- (void)saleTickets {
    OSSpinLockLock(&_lock);
    tickets = tickets - 1;
    NSLog(@"sold 1 ticket, %d tickets left", tickets);
    OSSpinLockUnlock(&_lock);
}

- (void)beginSaleTicket {
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_async(queue, ^{
        for (int i = 0; i < 30; i++) {
            [self saleTickets];
        }
    });
    dispatch_async(queue, ^{
        for (int i = 0; i < 30; i++) {
            [self saleTickets];
        }
    });
    dispatch_async(queue, ^{
        for (int i = 0; i < 40; i++) {
            [self saleTickets];
        }
    });
}

The final count is 0, so the ticket-selling code is now thread-safe.

Why was OSSpinLock deprecated
Cause 1: Deadlock
#import <libkern/OSAtomic.h>

- (void)viewDidLoad {
    [super viewDidLoad];
    self.lock = OS_SPINLOCK_INIT;
    [self deadLock];
    NSLog(@"test");
}

- (void)deadLock {
    OSSpinLockLock(&_lock);
    [self lockAgain];
    NSLog(@"wait for lockAgain to complete");
    OSSpinLockUnlock(&_lock);
}

- (void)lockAgain {
    OSSpinLockLock(&_lock);
    NSLog(@"locked");
    OSSpinLockUnlock(&_lock);
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    NSLog(@"touch");
}

The result: nothing is printed, and touchesBegan no longer responds because the main thread is stuck. Why? Let's look.

With the spin lock, deadLock acquires the lock and then calls lockAgain. lockAgain finds the lock already held and busy-waits for it, but the lock can never be released, because deadLock is stuck inside lockAgain and never reaches its unlock. The same thing would happen with a non-recursive mutex.

Cause 2: Priority inversion

Thread scheduling is preemptive: when a higher-priority thread becomes runnable, it preempts CPU time from lower-priority threads.

Suppose there are two threads, A and B, with priority A > B

B acquires the lock first. A then preempts B and busy-waits on the lock, but because B barely gets any CPU time, it cannot finish its critical section and release the lock. With only two threads this might eventually resolve, but when many threads keep preempting each other, the highest-priority task A can be delayed indefinitely, which ultimately hurts the program's performance.

os_unfair_lock

iOS 10 introduced os_unfair_lock as the replacement for OSSpinLock.

#import <os/lock.h>

- (void)viewDidLoad {
    [super viewDidLoad];
    self.lock = OS_UNFAIR_LOCK_INIT;
    [self beginSaleTicket];
}

- (void)saleTickets {
    os_unfair_lock_lock(&_lock);
    tickets = tickets - 1;
    NSLog(@"sold 1 ticket, %d tickets left", tickets);
    os_unfair_lock_unlock(&_lock);
}

os_unfair_lock is not documented as being either a spin lock or a mutex, but the disassembly shows that a thread waiting on it is eventually put to sleep. By the definitions above, os_unfair_lock behaves as a mutex.

pthread_mutex_t

pthread_mutex is a powerful lock that supports several types: a normal lock and a recursive lock, and it can also be combined with condition variables.

#import <pthread.h>

- (void)viewDidLoad {
    [super viewDidLoad];
    pthread_mutex_init(&_lock, NULL);
    [self beginSaleTicket];
}

- (void)saleTickets {
    pthread_mutex_lock(&_lock);
    tickets = tickets - 1;
    NSLog(@"sold 1 ticket, %d tickets left", tickets);
    pthread_mutex_unlock(&_lock);
}

pthread_mutex_init takes two arguments; the second specifies the lock's attributes, including its type. Remember the deadlock above, where the same thread locked the same lock twice? pthread_mutex provides a solution: the recursive lock.

Recursive locking
- (void)viewDidLoad {
    [super viewDidLoad];
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    [self deadLock];
    NSLog(@"test");
}

- (void)deadLock {
    pthread_mutex_lock(&_lock);
    [self lockAgain];
    NSLog(@"wait for lockAgain to complete");
    pthread_mutex_unlock(&_lock);
}

- (void)lockAgain {
    pthread_mutex_lock(&_lock);
    NSLog(@"locked");
    pthread_mutex_unlock(&_lock);
}

As you can see, with the mutex configured as a recursive lock the task runs through smoothly. A recursive lock allows the same thread to acquire the same lock multiple times without deadlocking.

Conditions

When a thread waits on a condition, it releases the lock it currently holds and goes to sleep until another thread signals the condition. The waiting thread then wakes up, re-acquires the lock, and continues.

Suppose we have a scenario: building a wall with bricks. The builder can lay bricks only while bricks are available. With one person building and another delivering bricks, when the builder runs out of bricks they must wait for the carrier to deliver more before continuing.

- (void)viewDidLoad {
    [super viewDidLoad];
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&_lock, &attr);
    pthread_mutexattr_destroy(&attr);
    pthread_cond_init(&_cond, NULL);
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self build];
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self getBrick];
    });
}

- (void)build {
    pthread_mutex_lock(&_lock);
    NSLog(@"start building");
    if (brick == 0) {
        // no bricks: wait for the carrier to deliver some
        NSLog(@"no bricks, waiting for bricks");
        pthread_cond_wait(&_cond, &_lock);
    }
    NSLog(@"building the wall");
    pthread_mutex_unlock(&_lock);
}

- (void)getBrick {
    pthread_mutex_lock(&_lock);
    sleep(1);
    brick += 1;
    NSLog(@"bricks delivered");
    pthread_cond_signal(&_cond);
    pthread_mutex_unlock(&_lock);
}

It's worth noting that after the condition is signalled, the waiting thread does not wake up and re-lock immediately; it has to wait for the signalling thread to unlock the mutex first. So if you have a particularly time-consuming task, do it after the unlock.
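A sketch of that pattern based on the getBrick method above (doSomethingSlow is a hypothetical time-consuming task):

- (void)getBrick {
    pthread_mutex_lock(&_lock);
    brick += 1;
    NSLog(@"bricks delivered");
    pthread_cond_signal(&_cond);   // the builder can only resume after we unlock below
    pthread_mutex_unlock(&_lock);

    // hypothetical slow work, done outside the lock so the builder is not kept waiting
    [self doSomethingSlow];
}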

NSLock
- (void)viewDidLoad {
    [super viewDidLoad];
    self.lock = [NSLock new];
    [self beginSaleTicket];
}

- (void)saleTickets {
    [self.lock lock];
    tickets = tickets - 1;
    NSLog(@"sold 1 ticket, %d tickets left", tickets);
    [self.lock unlock];
}

NSLock encapsulates a mutex lock. I won’t repeat it here

NSRecursiveLock

A wrapper around a recursive mutex; the API is essentially the same as NSLock's.
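A minimal sketch of the earlier deadLock / lockAgain example rewritten with NSRecursiveLock (it assumes a property such as @property (nonatomic, strong) NSRecursiveLock *recursiveLock;):

- (void)viewDidLoad {
    [super viewDidLoad];
    self.recursiveLock = [NSRecursiveLock new];
    [self deadLock];
}

- (void)deadLock {
    [self.recursiveLock lock];
    [self lockAgain];   // the same thread acquires the lock again, which a recursive lock allows
    [self.recursiveLock unlock];
}

- (void)lockAgain {
    [self.recursiveLock lock];
    NSLog(@"locked again without deadlocking");
    [self.recursiveLock unlock];
}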

NSCondition

A wrapper around a mutex plus a condition variable (cond).

- (void)viewDidLoad {
    [super viewDidLoad];
    self.condition = [NSCondition new];
    [self buildWall];
}

- (void)buildWall {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self build];
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self getBrick];
    });
}

- (void)build {
    [self.condition lock];
    NSLog(@"start building");
    if (brick == 0) {
        // no bricks: wait for the carrier
        NSLog(@"no bricks, waiting for bricks");
        [self.condition wait];
    }
    NSLog(@"building the wall");
    [self.condition unlock];
}

- (void)getBrick {
    [self.condition lock];
    sleep(1);
    brick += 1;
    NSLog(@"bricks delivered");
    [self.condition signal];
    [self.condition unlock];
}

NSConditionLock

Built on NSCondition, NSConditionLock lets you attach an integer condition value to the lock, so different waits can be gated on different conditions.

- (void)viewDidLoad {
    [super viewDidLoad];
    self.conditionLock = [[NSConditionLock alloc] initWithCondition:1];
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self stepThree];
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self stepOne];
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [self stepTwo];
    });
}

- (void)stepOne
{
    [self.conditionLock lock];
    NSLog(@"%s",__func__);
    sleep(1);
    [self.conditionLock unlockWithCondition:2];
}

- (void)stepTwo
{
    [self.conditionLock lockWhenCondition:2];
    NSLog(@"%s",__func__);
    sleep(1);
    [self.conditionLock unlockWithCondition:3];
}

- (void)stepThree
{
    [self.conditionLock lockWhenCondition:3];
    NSLog(@"%s",__func__);
    [self.conditionLock unlock];
}
@synchronized

@synchronized is Objective-C syntactic sugar; under the hood it is a wrapper around a mutex.

- (void)viewDidLoad {
    [super viewDidLoad];
    [self beginSaleTicket];
}

- (void)saleTickets {
    @synchronized (self) {
        tickets = tickets - 1;
        NSLog(@"sold 1 ticket, %d tickets left", tickets);
    }
}

Code using @synchronized is very tidy. Testing shows that the lock it uses internally is recursive.
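Under the hood the compiler wraps the protected code in calls to objc_sync_enter / objc_sync_exit, roughly like this (a simplified sketch; the real expansion also adds exception handling):

#import <objc/objc-sync.h>

// Roughly what @synchronized (self) { ... } boils down to
objc_sync_enter(self);    // acquires a recursive lock associated with `self`
tickets = tickets - 1;
NSLog(@"sold 1 ticket, %d tickets left", tickets);
objc_sync_exit(self);     // releases it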

Moving on from the locks available in iOS, let’s talk about their performance

As a rough rule, from fastest to slowest: normal lock > condition lock > recursive lock.

os_unfair_lock > OSSpinLock > pthread_mutex_t > NSLock > NSCondition > pthread_mutex (recursive) > NSRecursiveLock > NSConditionLock > @synchronized

Solution 2: GCD serial queue

As shown above, tasks submitted asynchronously to a serial queue still execute one at a time, so we can create a GCD serial queue to sell the tickets.

- (void)viewDidLoad {
    [super viewDidLoad];
    self.ticketQueue = dispatch_queue_create("ticketQueue", DISPATCH_QUEUE_SERIAL);
    [self beginSaleTicket];
}

- (void)saleTickets {
    tickets = tickets - 1;
    NSLog(@"sold 1 ticket, %d tickets left", tickets);
}

- (void)beginSaleTicket {
    dispatch_async(self.ticketQueue, ^{
        for (int i = 0; i < 30; i++) {
            [self saleTickets];
        }
    });
    dispatch_async(self.ticketQueue, ^{
        for (int i = 0; i < 30; i++) {
            [self saleTickets];
        }
    });
    dispatch_async(self.ticketQueue, ^{
        for (int i = 0; i < 40; i++) {
            [self saleTickets];
        }
    });
}

Solution 3: Semaphore dispatch_semaphore_t

This involves the classic P/V (wait/signal) operations on a semaphore.

P operation (dispatch_semaphore_wait): if the semaphore's value is greater than 0, decrement it by 1 and execute the task; if it is 0 or less, sleep and wait.

V operation (dispatch_semaphore_signal): increment the semaphore's value by 1, waking a waiting thread if there is one.

By creating the semaphore with a value of 1, we limit the critical section to at most one thread at a time.

- (void)viewDidLoad {
    [super viewDidLoad];
    self.ticketQueue = dispatch_get_global_queue(0, 0);
    self.semaphore = dispatch_semaphore_create(1);
    [self beginSaleTicket];
}

- (void)saleTickets {
    dispatch_semaphore_wait(self.semaphore, DISPATCH_TIME_FOREVER);
    tickets = tickets - 1;
    NSLog(@"sold 1 ticket, %d tickets left", tickets);
    dispatch_semaphore_signal(self.semaphore);
}

Read/write safety (multiple readers, single writer)

A file (or any shared resource) that can be read and written concurrently is unsafe: several threads writing at the same time, or a read overlapping a write, are both hazards.

So when operating on the file we want:

  • Multiple threads may read the file at the same time
  • While a write is in progress, the file cannot be read, and no other thread may write

Solution: dispatch_barrier_async

- (void)viewDidLoad {
    [super viewDidLoad];
    self.queue = dispatch_queue_create("rw_queue", DISPATCH_QUEUE_CONCURRENT);
    
    for (int i = 0; i < 10; i++) {
        dispatch_async(self.queue, ^{
            [self read];
        });
        dispatch_async(self.queue, ^{
            [self read];
        });
        dispatch_async(self.queue, ^{
            [self read];
        });
        dispatch_barrier_async(self.queue, ^{
            [self write];
        });
    }
}

- (void)read {
    sleep(1);
    NSLog(@"read");
}

- (void)write
{
    sleep(1);
    NSLog(@"write");
}

As you can see from the output, several reads print at the same time, but only one write prints at a time, and nothing else runs while a write is in progress.

dispatch_barrier_async inserts a barrier into a concurrent queue: the barrier block waits for all previously submitted blocks to finish, runs on its own, and only then are blocks submitted after it allowed to run.

Note

What if we replaced the above concurrent queue with a global queue?

self.queue = dispatch_get_global_queue(0, 0);

2021-06-04 11:53:23.544718+0800 MultiThread[98352:14881477] read
2021-06-04 11:53:23.544718+0800 MultiThread[98352:14881479] write
2021-06-04 11:53:23.544733+0800 MultiThread[98352:14881480] read
2021-06-04 11:53:23.544743+0800 MultiThread[98352:14881485] read
2021-06-04 11:53:23.544718+0800 MultiThread[98352:14881482] read
2021-06-04 11:53:23.544766+0800 MultiThread[98352:14881483] read
2021-06-04 11:53:23.544780+0800 MultiThread[98352:14881490] read
2021-06-04 11:53:23.544780+0800 MultiThread[98352:14881489] write
2021-06-04 11:53:23.544830+0800 MultiThread[98352:14881491] read
2021-06-04 11:53:23.544841+0800 MultiThread[98352:14881492] read
2021-06-04 11:53:23.544882+0800 MultiThread[98352:14881493] write
2021-06-04 11:53:23.544905+0800 MultiThread[98352:14881494] read
2021-06-04 11:53:23.545049+0800 MultiThread[98352:14881496] read
2021-06-04 11:53:23.545064+0800 MultiThread[98352:14881498] read
2021-06-04 11:53:23.545077+0800 MultiThread[98352:14881499] read
2021-06-04 11:53:23.545146+0800 MultiThread[98352:14881501] write
2021-06-04 11:53:23.545083+0800 MultiThread[98352:14881497] write

Those are some of the results. As you can see, even with the barrier, reads and writes still interleave as if everything were plain asynchronous concurrency.

It's worth noting that a GCD barrier only works on concurrent queues we create ourselves.

It does not act as a barrier on a global queue; there, dispatch_barrier_async behaves like a plain dispatch_async. This is a safety measure: global queues are shared system-wide, and if one client could block them with an extremely time-consuming barrier, many other problems would follow.

It's a long article, thanks for reading. If you spot any problems, please leave a comment, and if it helped you at all, please give it a like.