Programmers use limited life to pursue infinite knowledge.

A word up front

First of all, this is not meant to be a clickbait title, nor am I trying to stir things up; I just want to look at multithreading from a different angle. Most of this article analyses how deadlocks come about. My skills are still shallow, but shallow or not, the first step has to be taken. Open Xcode and verify these deadlocks for yourself.

Multithreading tips

Here are three ways to implement multithreading:

  • NSThread
  • GCD
  • NSOperationQueue

Without going into the details of how each one is used, let's take a look at some of their less well-known aspects.

1. Behind the lock

NSLock is implemented on top of POSIX threads (pthread), and it uses a mutex to synchronize threads.

A mutex (mutual exclusion lock) is a basic mechanism the pthread library provides to solve this problem. A mutex is a lock that guarantees three things:

  • Atomicity – Locking a mutex is an atomic operation: the operating system guarantees that once you have locked a mutex, no other thread can lock it at the same time;

  • Singularity – If one thread has locked a mutex, no other thread can lock it until that thread releases the lock;

  • Non-busy waiting – If thread 1 attempts to lock a mutex held by thread 2, thread 1 is suspended and consumes no CPU until thread 2 releases the lock; thread 1 then wakes up, continues executing, and locks the mutex.
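
To make this concrete, here is a minimal sketch (my own example, not from the original post) of NSLock, which is backed by a pthread mutex, protecting a shared counter. Without the lock, increments can be lost to the race.

NSLock *lock = [NSLock new];
__block NSInteger count = 0;
dispatch_queue_t queue = dispatch_queue_create("demo.lock", DISPATCH_QUEUE_CONCURRENT);

for (int i = 0; i < 1000; i++) {
    dispatch_async(queue, ^{
        [lock lock];      // atomic acquire: only one thread can hold the lock
        count += 1;       // threads that arrive while it is held are suspended, not spinning
        [lock unlock];    // a waiting thread wakes up and takes the lock
    });
}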

2. About the life cycle

An NSThread can be terminated immediately with the [NSThread exit] method, which ends the current thread on the spot (and may leak memory, but I won't go into that here). If you call it on the main thread, the main thread exits and the app can no longer respond to events. cancel, by contrast, only sets a flag; if the thread's code does nothing with it, execution simply continues.
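
A sketch of that cancel-as-a-flag behaviour (my own example, assuming iOS 10's initWithBlock:): the thread keeps running unless its body checks the flag itself.

NSThread *worker = [[NSThread alloc] initWithBlock:^{
    while (![[NSThread currentThread] isCancelled]) {
        // do one unit of work, then check the flag again
        [NSThread sleepForTimeInterval:0.1];
    }
    NSLog(@"worker saw the cancel flag and is returning");
}];
[worker start];

// later, from some other thread:
[worker cancel];   // only sets the flag; the loop above has to notice it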

GCD and NSOperationQueue can cancel tasks that have not yet started in the queue, but they cannot do anything about tasks that are already executing.

| Implementation | Thread life cycle | Cancelling tasks |
| --- | --- | --- |
| NSThread | Manual management | Stops execution immediately |
| GCD | Automatic management | Cancels unexecuted tasks in the queue |
| NSOperationQueue | Automatic management | Cancels unexecuted tasks in the queue |
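
For GCD, one way to see the "unexecuted tasks only" rule is the dispatch_block_t cancellation API (a sketch of mine, not from the original; it needs iOS 8+):

dispatch_queue_t queue = dispatch_queue_create("demo.cancel", DISPATCH_QUEUE_SERIAL);

dispatch_async(queue, ^{ [NSThread sleepForTimeInterval:1.0]; });   // keeps the queue busy

dispatch_block_t work = dispatch_block_create(0, ^{ NSLog(@"work ran"); });
dispatch_async(queue, work);
dispatch_block_cancel(work);   // work has not started yet, so it will never run

NSOperation's -cancel behaves similarly: an operation that has not started will not run, while one that is already executing has to check isCancelled itself.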

3. Parallelism and concurrency

A pitfall in many articles about concurrent queues is that they confuse concurrency with parallelism. Let's look at the difference:

In short, parallelism is true simultaneous execution on multiple threads, whereas concurrency is rapid switching between tasks. In general, a multi-core CPU can run multiple threads in parallel, while a single-core CPU really only runs one thread at a time and achieves near-simultaneous execution through time slicing. Judging by the thread usage shown in Xcode, dispatch_async onto a global queue executes tasks in parallel on iOS.
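
A quick way to observe this yourself (my own sketch): dispatch a few tasks onto a global queue and look at which threads they land on in Xcode.

for (int i = 0; i < 4; i++) {
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
        // on a multi-core device these typically print from several different threads
        NSLog(@"task %d on %@", i, [NSThread currentThread]);
    });
}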

4. Queues and threads

Queues store and manage tasks; tasks are executed in the order in which they were added. For a global queue or a concurrent queue, the system creates new threads, as resources allow, to handle the queued tasks. Thread creation, maintenance, and destruction are managed by the operating system, and the queue itself is thread-safe.
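
A small sketch of the difference (mine, not from the original): the serial queue runs its tasks strictly in submission order, while the concurrent queue may hand them to several threads at once.

dispatch_queue_t serial = dispatch_queue_create("demo.serial", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t concurrent = dispatch_queue_create("demo.concurrent", DISPATCH_QUEUE_CONCURRENT);

for (int i = 0; i < 3; i++) {
    dispatch_async(serial, ^{ NSLog(@"serial %d", i); });          // always prints 0, 1, 2
    dispatch_async(concurrent, ^{ NSLog(@"concurrent %d", i); });  // order and threads not guaranteed
}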

When implementing multithreading with NSOperationQueue you can control the maximum number of concurrent operations and the dependencies between them, whereas with GCD you can only choose between concurrent and serial queues.
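
For example (a sketch under my own names, not from the original): maxConcurrentOperationCount caps how many operations run at once, and addDependency: orders them.

NSOperationQueue *opQueue = [NSOperationQueue new];
opQueue.maxConcurrentOperationCount = 2;   // at most two operations execute at the same time

NSBlockOperation *download = [NSBlockOperation blockOperationWithBlock:^{ NSLog(@"download"); }];
NSBlockOperation *parse = [NSBlockOperation blockOperationWithBlock:^{ NSLog(@"parse"); }];
[parse addDependency:download];            // parse will not start until download finishes

[opQueue addOperations:@[download, parse] waitUntilFinished:NO];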

Resource contention

Having multiple threads execute tasks at the same time can improve a program's efficiency and responsiveness, but it inevitably means several threads may operate on the same resource at once. Here is a resource-contention problem I came across a while ago as an example:

@property (nonatomic, strong) NSString *target; 
dispatch_queue_t queue = dispatch_queue_create("parallel", DISPATCH_QUEUE_CONCURRENT);
for (int i = 0; i < 1000000 ; i++) { 
    dispatch_async(queue, ^{ 
        self.target = [NSString stringWithFormat:@"ksddkjalkjd%d",i]; 
    }); 
}

Solutions:

  • Change nonatomic to atomic in @property (nonatomic, strong) NSString *target;.
  • Change the concurrent queue DISPATCH_QUEUE_CONCURRENT to a serial queue DISPATCH_QUEUE_SERIAL.
  • Change the asynchronous dispatch_async to a synchronous dispatch_sync.
  • Wrap the assignment in @synchronized or protect it with a lock.

These all solve the problem from the angle of avoiding simultaneous access; if you have a better approach, feel free to share it.
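
As a sketch of the second fix above (same example as before, only the queue type changed): a serial queue makes the writes to self.target take turns instead of racing.

dispatch_queue_t queue = dispatch_queue_create("serial", DISPATCH_QUEUE_SERIAL);
for (int i = 0; i < 1000000; i++) {
    dispatch_async(queue, ^{
        // only one block runs at a time, so the setter is never re-entered
        self.target = [NSString stringWithFormat:@"ksddkjalkjd%d", i];
    });
}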

Designing a deadlock

Everything has two sides: multithreading can improve efficiency, but it also brings resource contention. Likewise, locking keeps multithreaded data safe, yet a careless lock easily creates another problem: deadlock.

1. NSOperationQueue

NSOperationQueue is highly encapsulated and very simple to use, so it generally causes no fuss; the example below is a deliberately bad one. We usually make tasks execute in order by setting dependencies between NSOperations, but mutual or circular dependencies prevent any of the tasks from ever starting.

NSBlockOperation *blockOperation1 = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"lock 1 start");
    [NSThread sleepForTimeInterval:1];
    NSLog(@"lock 1 over");
}];

NSBlockOperation *blockOperation2 = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"lock 2 start");
    [NSThread sleepForTimeInterval:1];
    NSLog(@"lock 2 over");
}];

NSBlockOperation *blockOperation3 = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"lock 3 start");
    [NSThread sleepForTimeInterval:1];
    NSLog(@"lock 3 over");
}];

// Circular dependency
[blockOperation2 addDependency:blockOperation1];
[blockOperation3 addDependency:blockOperation2];
[blockOperation1 addDependency:blockOperation3];   // the culprit that closes the cycle

// Mutual dependency
// [blockOperation1 addDependency:blockOperation2];
// [blockOperation2 addDependency:blockOperation1];

[_operationQueue addOperation:blockOperation1];
[_operationQueue addOperation:blockOperation2];
[_operationQueue addOperation:blockOperation3];

Has anyone tried making an operation depend on itself? If you're curious, try it:

[blockOperation1 addDependency:blockOperation1];

2. GCD

Most developers know that synchronously dispatching a task onto the main queue from the main thread causes a deadlock, so let's look at a few other situations that cause deadlocks or similar problems.

A. Dispatching synchronously onto the main queue from the main thread triggers an EXC_BAD_INSTRUCTION error:

- (void)deadlock1 {
    dispatch_sync(dispatch_get_main_queue(), ^{
        NSLog(@"task 1 start");
        [NSThread sleepForTimeInterval:1.0];
        NSLog(@"task 1 over");
    });
}

B. Similar to the main-thread case, this nests a synchronous task inside another task on the same serial queue. task 1 can only finish after the nested task 2 completes, but on a serial queue task 2 cannot start until task 1 finishes, so task 1 is doomed by the nesting of task 2.

- (void)deadlock2 {
    dispatch_queue_t queue = dispatch_queue_create("com.xietao3.sync", DISPATCH_QUEUE_SERIAL);
    dispatch_sync(queue, ^{   // an asynchronous outer dispatch deadlocks here as well
        NSLog(@"task 1 start");
        dispatch_sync(queue, ^{
            NSLog(@"task 2 start");
            [NSThread sleepForTimeInterval:1.0];
            NSLog(@"task 2 over");
        });
        NSLog(@"task 1 over");
    });
}

Nesting synchronous tasks is error-prone, but not always fatal: replacing the serial queue (DISPATCH_QUEUE_SERIAL) with a concurrent one (DISPATCH_QUEUE_CONCURRENT) fixes the problem. On a concurrent queue, task 1 still has to wait for the nested task 2 to finish, but task 2 is allowed to start right away instead of waiting for task 1, so there is no mutual waiting and task 1 resumes once task 2 completes.
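
Here is what that fix looks like (my sketch; only the queue type differs from deadlock2, and the method name is my own):

- (void)noDeadlock2 {
    dispatch_queue_t queue = dispatch_queue_create("com.xietao3.sync", DISPATCH_QUEUE_CONCURRENT);
    dispatch_sync(queue, ^{
        NSLog(@"task 1 start");
        dispatch_sync(queue, ^{   // starts immediately on the concurrent queue
            NSLog(@"task 2 start");
            [NSThread sleepForTimeInterval:1.0];
            NSLog(@"task 2 over");
        });
        NSLog(@"task 1 over");    // runs once task 2 has finished
    });
}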

C. Many people assume that asynchronous tasks cannot end up waiting on each other. That is mostly true: even on a serial queue, an asynchronous task simply waits for the current task to finish before it starts. Unless, that is, you add some unhealthy ingredients:

- (void)deadlock3 {
    dispatch_queue_t queue = dispatch_queue_create("com.xietao3.asyn", DISPATCH_QUEUE_SERIAL);
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(0);
    
    dispatch_async(queue, ^{
        __block NSString *str = @"xietao3";                        // thread 1: prepare the data
        dispatch_async(queue, ^{
            str = [NSString stringWithFormat:@"%ld", [str hash]];  // thread 2: process the data
            dispatch_semaphore_signal(semaphore);
        });
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
        NSLog(@"%@", str);                                         // thread 1: use the processed data
    });
}

D. A conventional deadlock: locking a lock that is already held, so the two sides end up waiting on each other.

if (!_lock) _lock = [NSLock new];
dispatch_queue_t queue = dispatch_queue_create("com.xietao3.sync", DISPATCH_QUEUE_CONCURRENT);

[_lock lock];
dispatch_sync(queue, ^{
    [_lock lock];
    [NSThread sleepForTimeInterval:1.0];
    [_lock unlock];
});
[_lock unlock];

The fix is also fairly simple: replace NSLock with the recursive lock NSRecursiveLock. An ordinary lock is like a knob you turn clockwise once to lock and counterclockwise once to unlock; a recursive lock can be turned clockwise twice and then needs two counterclockwise turns before it is unlocked. Here's a recursive example:

- (void)recursivelock:(int)count {
    if (count > 10) return;
    count++;
    if (!_recursiveLock) _recursiveLock = [NSRecursiveLock new];
    [_recursiveLock lock];
    NSLog(@"task%d start", count);
    [self recursivelock:count];
    NSLog(@"task%d over", count);
    [_recursiveLock unlock];
}

3. Other locks

In addition to the mutex and recursive locks mentioned above, other locks include:

  • OSSpinLock (spin lock)
  • pthread_mutex (the low-level implementation behind locks in Objective-C)
  • NSConditionLock (condition lock)
  • NSCondition (the underlying implementation of the condition lock)
  • @synchronized

A condition lock such as NSConditionLock adds a condition on top of a mutex, which makes it more flexible to use than a plain mutex; a spin lock differs in that a waiting thread busy-waits instead of being put to sleep.
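
A small sketch of what that extra flexibility looks like (my own example, not from the original): the integer condition lets a consumer wait for exactly the state it needs.

NSConditionLock *condLock = [[NSConditionLock alloc] initWithCondition:0];

// producer
dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
    [condLock lock];                    // lock regardless of the current condition
    NSLog(@"produce");
    [condLock unlockWithCondition:1];   // unlock and set the condition to 1
});

// consumer
dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
    [condLock lockWhenCondition:1];     // blocks until the condition becomes 1
    NSLog(@"consume");
    [condLock unlockWithCondition:0];
});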

Conclusion

Articles about multithreading and locks are a dime a dozen. This one tries to take a fresh angle and avoid repeating the usual content. I hope it helps, and if anything here is wrong, please point it out.

Original post: juejin.cn/post/684490…