What I understand about iOS concurrent programming

Concurrent programming can be a headache on any platform. Fortunately, concurrent programming on the client side is much easier than on the server side. This post is about concurrent programming on the iOS platform. It’s my habit when I write a blog to introduce the principles first and then the usage. After all, there is better documentation on the official website for API use.

Something primal

To make it easier to understand, here are some related concepts. If you are already familiar with these concepts, you can skip them.

Process of 1.

From the definition of an operating system, a process is the basic unit of resource allocation and scheduling. After a thread is created, the system allocates corresponding resources to it. In iOS, a process can be understood as an App. IOS does not provide an API for creating processes, even if you call fork(), you cannot create new processes. So, when I talk about concurrent programming, I’m talking about threads.

Thread 2.

A thread is the smallest unit of a program’s execution flow. Typically, a process will have multiple threads, or at least one thread. A thread has five states: created, ready, running, blocked, and dead. Threads can share process resources, and all problems are caused by sharing resources.

3. The concurrent

The operating system introduced the concept of threads, is to make the CPU better coordinated operation, give full play to their parallel processing capabilities. For example, in iOS, you can do UI operations in the main thread, and then start other threads to do things that are not related to UI operations, so it’s faster to do both in parallel. That’s the general idea of concurrency.

4. Time slice

A microcosmic amount of CPU time allocated by a time-sharing operating system to each running process (in the preemption kernel, from the time a process starts running until it is preempted). Threads can be thought of as “microprocesses”, so this concept can be applied to threads as well.

General operating systems use time slice rotation algorithm for scheduling, that is, every time scheduling, always select the queue head process ready queue, let it run a preset system time slice on the CPU. A process that has not finished running in its timeslice returns to the end of the thread queue to requeue for the next schedule. The range of time slices varies from operating system to operating system, usually in milliseconds (ms).

4. A deadlock

A deadlock occurs when multiple threads (processes) are waiting for each other during execution because they are competing for resources. There are four necessary conditions for a deadlock to occur:

  • Mutually exclusive: A process uses the allocated resources exclusively. That is, only one process occupies a resource in a period of time. If another process requests the resource at this time, the requester can only wait until the process holding the resource is released.
  • Request and hold conditions: a process that has held at least one resource makes a new request for the resource that has been occupied by another process. In this case, the requesting process blocks but does not release other resources that it has obtained.
  • Inalienable conditions: Resources acquired by a process cannot be taken away until they are used up. They can only be released when they are used up.
  • Loop waiting condition: when a deadlock occurs, there must be a process — resource loop chain, that is, P0 in process set {P0, P1, P2, ··· Pn} is waiting for a resource occupied by P1. P1 is waiting for resources occupied by P2…… Pn is waiting for a resource that has been occupied by P0.

To make it easier to understand, here’s an example: a bridge that allows only one vehicle to cross at a time (mutually exclusive). Two cars, A and B, drove up the bridge from opposite ends and walked to the middle of it. At this point, Car A refuses to retreat (inalienable) and wants to occupy the road occupied by car B; Car B also refuses to back up and wants to occupy the road occupied by car A (request and hold). At this point, A waits for resources occupied by B, and B waits for resources occupied by A (loop waiting). The deadlock phenomenon is formed when the two vehicles are locked.

5. Thread safety

When multiple threads access a shared resource (such as a database) at the same time, data can be corrupted due to timing issues. This is called thread insecurity. For example, if the value of an integer field in the database is 0, two threads write to it at the same time. Thread B doesn’t get it when A is done, it gets it at the same time as A, and then it gets A 1, so it gets it twice, so it should be 2, but it ends up with A 1 in the database. Actual development scenarios can be much more complex than this.

Thread-safe means that there is no problem when multiple threads manipulate this part of the data, such as read and write operations.

Lock

Because threads share process resources, thread safety issues arise in the case of concurrency. To solve this problem, the concept of locks emerged. In a multi-threaded environment, when you access some shared data, you gain access, lock the data, and no other thread can access it until you unlock it.

IOS provides a wide variety of locks, and ibireme’s article provides a performance analysis of these locks, which I have directly copied here:

The following locks are analyzed one by one.

1.OSSpinLock

Ibireme’s article also says that although this lock has the highest performance, it is no longer safe. It is recommended not to use it. Here is a brief description.

OSSpinLock is a spin lock, mainly provides the lock (OSSpinLockLock), try the lock (OSSpinLockTry) unlock (OSSpinLockUnlock) three methods. If a resource fails to be locked, it does not enter the sleep state, but keeps asking (spin), occupying CPU resources, and is not suitable for long-term tasks. During the spin, priority inversion occurs when the low-priority thread loses access to the CPU, fails to complete the task and releases the lock.

So, although it is very high performance, don’t use it. And Apple has made this analogy obsolete.

Spin-locks are similar to mutex locks. The difference lies in the following: Spin-locks are of the busy-waiting type. When a lock fails to be added, it is always in the query state, occupying CPU resources and high efficiency. The mutex is a sleep-waiting lock. After a failed attempt, the mutex will be blocked, and then the mutex will be placed in the wait queue. Because of the context switch, the efficiency is low. In iOS, NSLock is a mutex.

Priority inversion: When a high-priority task accesses a shared resource, the resource is preempted by a low-priority task, blocking the high-priority task. At the same time, the low-priority task is preempted by a lower-priority task, thus failing to release the critical resource in a timely manner. Finally, the task priority is inverted and blocked. (Quoted in Wiki

Bestswifter’s in-depth understanding of locks in iOS development is a great article on the principles of spin locking. Most of my knowledge of locks is here, so I recommend reading the original article.

A spin lock is a loop until it tries to add a lock. The pseudo-code is as follows:

bool lock = false; // Any thread can apply for a lock
do {  
    while(test_and_set(&lock); Test_and_set is an atomic operation that attempts to lock
        Critical section  / / critical region
    lock = false; // Release the lock so that other threads can enter the critical section
        Reminder section // Code that does not require lock protection
}
Copy the code

Use:

OSSpinLock spinLock = OS_SPINLOCK_INIT;
OSSpinLockLock(&spinLock);
// Locked resources
OSSpinLockUnlock(&spinLock);
Copy the code

2.dispatch_semaphore

Dispatch_semaphore is not a lock, it is a semaphore. The differences are as follows:

  • Locking is used for thread exclusive operations. One thread locks a resource and other threads cannot access it until the whole thread is unlocked. Semaphores are used for thread synchronization, where one thread tells other threads that it has completed an action, and then the other threads perform the action.
  • The scope of the lock is between threads; The scope of a semaphore is between threads and processes.
  • Semaphores can sometimes act as locks, among other things, before the first time.
  • If converted to a numeric value, the lock can be thought of as only 0 and 1; Semaphores can be greater than or less than zero and have multiple values.

Dispatch_semaphore can be used in three steps: create, wait, and signal. As follows:

    // create
    dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);

    // thread A
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
        // execute task A
        NSLog(@"task A");
        sleep(10);
        dispatch_semaphore_signal(semaphore);
    });
    
    // thread B
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);
        // execute task B
        NSLog(@"task B");
        dispatch_semaphore_signal(semaphore);
    });
Copy the code

Execution Result:

2018- 05- 03 21:40:09.068586+0800 ConcurrencyTest[44084:1384262] task A
2018- 05- 03 21:40:19.072951+0800 ConcurrencyTest[44084:1384265] task B
Copy the code

Thread A and THREAD B are asynchronous threads that execute their own events and do not interfere with each other. However, according to the console output, B is executed after A has completed 10 seconds of execution, which is obviously blocked. The dispatch_semaphore process is as follows: When a semaphore is created, the signal quantity is 1. When the dispatch_semaphore_wait of thread A is executed, the signal quantity value decreases by 1 and becomes 0. The sleep method blocks the current thread for 10s after task A is executed. At the same time, thread B has executed dispatch_semaphore_wait. Since the semaphore is 0 and thread A has set DISPATCH_TIME_FOREVER, it needs to wait for thread A to sleep for 10s. Execute dispatch_semaphore_signal to set the semaphore to 1.

According to the above description, the principle of dispatch_semaphore is generally understood. The GCD source code defines these methods as follows:

long
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
	long value = dispatch_atomic_dec2o(dsema, dsema_value);
	dispatch_atomic_acquire_barrier();
	if (fastpath(value >= 0)) {
		return 0;
	}
	return _dispatch_semaphore_wait_slow(dsema, timeout);
}

static long
_dispatch_semaphore_wait_slow(dispatch_semaphore_t dsema,
		dispatch_time_t timeout)
{
	long orig;

again:
	// Mach semaphores appear to sometimes spuriously wake up. Therefore,
	// we keep a parallel count of the number of times a Mach semaphore is
	// signaled (6880961).
	while ((orig = dsema->dsema_sent_ksignals)) {
		if (dispatch_atomic_cmpxchg2o(dsema, dsema_sent_ksignals, orig,
				orig - 1)) {
			return 0; }}struct timespec _timeout;
	int ret;

	switch (timeout) {
	default:
		do {
			uint64_t nsec = _dispatch_timeout(timeout);
			_timeout.tv_sec = (typeof(_timeout.tv_sec))(nsec / NSEC_PER_SEC);
			_timeout.tv_nsec = (typeof(_timeout.tv_nsec))(nsec % NSEC_PER_SEC);
			ret = slowpath(sem_timedwait(&dsema->dsema_sem, &_timeout));
		} while (ret == - 1 && errno == EINTR);

		if (ret == - 1&& errno ! = ETIMEDOUT) { DISPATCH_SEMAPHORE_VERIFY_RET(ret);break;
		}
		// Fall through and try to undo what the fast path did to
		// dsema->dsema_value
	case DISPATCH_TIME_NOW:
		while ((orig = dsema->dsema_value) < 0) {
			if (dispatch_atomic_cmpxchg2o(dsema, dsema_value, orig, orig + 1)) {
				errno = ETIMEDOUT;
				return - 1; }}// Another thread called semaphore_signal().
		// Fall through and drain the wakeup.
	case DISPATCH_TIME_FOREVER:
		do {
			ret = sem_wait(&dsema->dsema_sem);
		} while(ret ! =0);
		DISPATCH_SEMAPHORE_VERIFY_RET(ret);
		break;
	}

	goto again;
}
Copy the code

If you don’t want to look at the code, you can just listen to me:

  • calldispatch_semaphore_waitMethod, if the semaphore is greater than 0, directly return; Otherwise, go to the next step.
  • _dispatch_semaphore_wait_slowMethod based on incomingtimeoutIf the parameters are different, use switch-case.
  • If the DISPATCH_TIME_NOW parameter is passed, increment the semaphore by one and return immediately.
  • If a timeout is passed, call the system’ssemaphore_timedwaitMethod to wait until a timeout occurs.
  • If the DISPATCH_TIME_FOREVER parameter is passed in, the system’ssemaphore_waitWait until receivedsingalThe signal.

Dispatch_semaphore_signal = dispatch_semaphore_signal

long
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
	dispatch_atomic_release_barrier();
	long value = dispatch_atomic_inc2o(dsema, dsema_value);
	if (fastpath(value > 0)) {
		return 0;
	}
	if (slowpath(value == LONG_MIN)) {
		DISPATCH_CLIENT_CRASH("Unbalanced call to dispatch_semaphore_signal()");
	}
	return _dispatch_semaphore_signal_slow(dsema);
}
Copy the code
  • Now increase the semaphore by 1 and return if it is greater than 0.
  • Less than 0 returns_dispatch_semaphore_signal_slowThis method calls the kernel semaphore_signal to wake up the semaphore and then returns 1.

3.pthread_mutex

Pthreads is short for POSIX Threads. Pthread_mutex is a mutex, which is a mutex that fails in an attempt to lock. There are three main types of locks: PTHREAD_MUTEX_NORMAL, PTHREAD_MUTEX_ERRORCHECK, and PTHREAD_MUTEX_RECURSIVE.

  • PTHREAD_MUTEX_NORMAL, a normal lock. When a thread locks, the rest of the threads that request the lock form a waiting queue and obtain the lock according to the priority after unlocking. This locking strategy ensures the fairness of resource allocation.
  • PTHREAD_MUTEX_ERRORCHECK, which checks for an error lock and returns EDEADLK if the same thread requests the same lock. Otherwise, the same as PTHREAD_MUTEX_NORMAL.
  • PTHREAD_MUTEX_RECURSIVE, a recursive lock that allows a thread to recursively request a lock.

Use as follows:

    pthread_mutex_t mutex;   / / define the lock
    pthread_mutexattr_t attr; // Define the mutexattr_t variable
    pthread_mutexattr_init(&attr); // Initialize attr as the default
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_NORMAL);  // Set the lock properties
    pthread_mutex_init(&mutex, &attr); / / create a lock

    pthread_mutex_lock(&mutex); / / to apply for the lock
    / / critical region
    pthread_mutex_unlock(&mutex); / / releases the lock
Copy the code

4.NSLock

NSLock is a mutex, an object wrapped in Objective-C. We don’t know how objective-C is implemented, but we can find it in the Swift source code:

.internal var mutex = _PthreadMutexPointer.allocate(capacity: 1)... openfunc lock(a) {
        pthread_mutex_lock(mutex)
    }

open func unlock(a) {
   pthread_mutex_unlock(mutex)
#if os(macOS) || os(iOS)
  // Wakeup any threads waiting in lock(before:)
   pthread_mutex_lock(timeoutMutex)
   pthread_cond_broadcast(timeoutCond)
   pthread_mutex_unlock(timeoutMutex)
#endif
}
Copy the code

You can see that he just wraps pthread_mutex. Just because it is slower than pthread_mutex, is there a few more pushdown operations between method levels?

General use:

NSLock *mutexLock = [NSLock new];
[mutexLock lock];
/ / critical region
[muteLock unlock];
Copy the code

4.NSCondition & NSConditionLock

NSCondition can act as both a lock and a condition variable. You can also find its implementation in the Swift source code:

open class NSCondition: NSObject.NSLocking {
    internal var mutex = _PthreadMutexPointer.allocate(capacity: 1)
    internal var cond = _PthreadCondPointer.allocate(capacity: 1)

    public override init() {
        pthread_mutex_init(mutex, nil)
        pthread_cond_init(cond, nil)}deinit {
        pthread_mutex_destroy(mutex)
        pthread_cond_destroy(cond)
        mutex.deinitialize(count: 1)
        cond.deinitialize(count: 1)
        mutex.deallocate()
        cond.deallocate()
    }
    
    open func lock(a) {
        pthread_mutex_lock(mutex)
    }
    
    open func unlock(a) {
        pthread_mutex_unlock(mutex)
    }
    
    open func wait(a) {
        pthread_cond_wait(cond, mutex)
    }

    open func wait(until limit: Date) -> Bool {
        guard var timeout = timeSpecFrom(date: limit) else {
            return false
        }
        return pthread_cond_timedwait(cond, mutex, &timeout) == 0
    }
    
    open func signal(a) {
        pthread_cond_signal(cond)
    }
    
    open func broadcast(a) {
        pthread_cond_broadcast(cond)
    }
    
    open var name: String?
}
Copy the code

As can be seen, it still follows the NSLocking protocol. The lock method also uses pthread_mutex, and the wait and signal methods use pthread_cond_wait and pthread_cond_signal.

To use NSCondition, lock the critical section to be operated on, and then block the thread with wait method because the condition is not met. After the conditions are met, the signal method is used to notify. Here is a producer-consumer example:

NSCondition *condition = [NSCondition new];
NSMutableArray *products = [NSMutableArray array];

// consume
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [condition lock];
    while (products.count == 0) {
        [condition wait];
    }
    [products removeObjectAtIndex:0];
    [condition unlock];
});

// product
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [condition lock];
    [products addObject:[NSObject new]];
    [condition signal];
    [condition unlock];
});
Copy the code

NSConditionLock is implemented by using NSCondition, following the NSLocking protocol, and here is the swift source code:

open class NSConditionLock : NSObject.NSLocking {
    internal var _cond = NSCondition()

    ...

    open func lock(whenCondition condition: Int) {
        let _ = lock(whenCondition: condition, before: Date.distantFuture)
    }

    open func `try`(a) -> Bool {
        return lock(before: Date.distantPast)
    }
    
    open func tryLock(whenCondition condition: Int) -> Bool {
        return lock(whenCondition: condition, before: Date.distantPast)
    }

    open func unlock(withCondition condition: Int) {
        _cond.lock()
        _thread = nil
        _value = condition
        _cond.broadcast()
        _cond.unlock()
    }

    open func lock(before limit: Date) -> Bool {
        _cond.lock()
        while_thread ! =nil {
            if! _cond.wait(until: limit) { _cond.unlock()return false
            }
        }
        _thread = pthread_self()
        _cond.unlock()
        return true
    }
    
    open func lock(whenCondition condition: Int, before limit: Date) -> Bool {
        _cond.lock()
        while_thread ! =nil|| _value ! = condition {if! _cond.wait(until: limit) { _cond.unlock()return false
            }
        }
        _thread = pthread_self()
        _cond.unlock()
        return true}... }Copy the code

You can see that it uses an NSCondition global variable to implement lock and unlock methods, which is some simple code logic that I won’t go into.

Use NSConditionLock

  • Initializing NSConditionLock sets a condition that can only be locked.
  • -[unlockWithCondition:] The value of condition is changed after the unlock is unlocked.

typedef NS_ENUM(NSInteger.CTLockCondition) {
    CTLockConditionNone = 0.CTLockConditionPlay.CTLockConditionShow
};

- (void)testConditionLock {
    NSConditionLock *conditionLock = [[NSConditionLock alloc] initWithCondition:CTLockConditionPlay];
    
    // thread one
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [conditionLock lockWhenCondition:CTLockConditionNone];
        NSLog(@"thread one");
        sleep(2);
        [conditionLock unlock];
    });
    
    // thread two
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        sleep(1);
        if ([conditionLock tryLockWhenCondition:CTLockConditionPlay]) {
            NSLog(@"thread two");
            [conditionLock unlockWithCondition:CTLockConditionShow];
            NSLog(@"thread two unlocked");
        } else {
            NSLog(@"thread two try lock failed"); }});// thread three
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        sleep(2);
        if ([conditionLock tryLockWhenCondition:CTLockConditionPlay]) {
            NSLog(@"thread three");
            [conditionLock unlock];
            NSLog(@"thread three locked success");
        } else {
            NSLog(@"thread three try lock failed"); }}); }// thread four
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        sleep(4);
        if ([conditionLock tryLockWhenCondition:CTLockConditionShow]) {
            NSLog(@"thread four");
            [conditionLock unlock];
            NSLog(@"thread four unlocked success");
        } else {
            NSLog(@"thread four try lock failed"); }}); }Copy the code

Then look at the output:

2018- 05- 05 16:34:33.801855+0800 ConcurrencyTest[97128:3100768] thread two
2018- 05- 05 16:34:33.802312+0800 ConcurrencyTest[97128:3100768] thread two unlocked
2018- 05- 05 16:34:34.804384+0800 ConcurrencyTest[97128:3100776] thread three try lock failed
2018- 05- 05 16:34:35.806634+0800 ConcurrencyTest[97128:3100778] thread four
2018- 05- 05 16:34:35.806883+0800 ConcurrencyTest[97128:3100778] thread four unlocked success
Copy the code

As can be seen, thread one failed to lock due to inconsistent conditions with initialization, and failed to output log. If thread two meets the conditions, the lock is successfully unlocked and the lock conditions are modified. Thread Three failed to lock using the original locking condition. Thread four successfully locked the lock using the modified conditions.

5. NSRecursiveLock

NSRecursiveLock is a recursive lock. Then here’s the swift source, just the key parts:

open class NSRecursiveLock: NSObject, NSLocking {
    ...
    public override init() {
        super.init()
#if CYGWIN
        var attrib : pthread_mutexattr_t? = nil
#else
        var attrib = pthread_mutexattr_t()
#endif
        withUnsafeMutablePointer(to: &attrib) { attrs in
            pthread_mutexattr_settype(attrs, Int32(PTHREAD_MUTEX_RECURSIVE))
            pthread_mutex_init(mutex, attrs)
        }
    }
    
    ...
}
Copy the code

It is initialized using pthread_mutex_T of type PTHREAD_MUTEX_RECURSIVE. Recursion can be called repeatedly in a thread, and then the bottom will record the lock and unlock times, when the two times are the same, to unlock correctly, release the critical region.

Use examples:

- (void)testRecursiveLock {
    NSRecursiveLock *recursiveLock = [NSRecursiveLock new];
    
    int (^__block fibBlock)(int) = ^ (int num) {
        [recursiveLock lock];
        
        if (num < 0) {
            [recursiveLock unlock];
            return 0;
        }
        
        if (num == 1 || num == 2) {
            [recursiveLock unlock];
            return num;
        }
        int newValue = fibBlock(num - 1) + fibBlock(num - 2);
        [recursiveLock unlock];
        return newValue;
    };
    
    int value = fibBlock(10);
    NSLog(@"value is %d", value);
}
Copy the code

6. @synchronized

@synchronized sacrifices performance for syntactic brevity. If you want to know more, I suggest you read this article. Here’s how it works:

@synchronized’s locking process looks something like this:

@try {
    objc_sync_enter(obj); // lock
    / / critical region
} @finally {
    objc_sync_exit(obj);    // unlock
}

Copy the code

The storage structure of @synchronized is implemented using hash tables. When you pass in an object, a lock is assigned to that object. Lock and object packaged into an object, and then with a lock again packaged into an object, can be understood as value; An algorithm that obtains a value based on the address of an object as a key. It then writes to the hash table as key-value. The structure might look something like this:

And when it’s stored, it’s stored in a hash table structure, not in the order I drew it up here, it’s just a node.

@synchronized is easy to use:

NSMutableArray *elementArray = [NSMutableArray array];
    
@synchronized(elementArray) {
   [elementArray addObject:[NSObject new]];
}
Copy the code

Pthreads

As mentioned earlier, PThreads is short for POSIX Threads. This is something we don’t usually use, so here’s a brief introduction. Pthreads is a POSIX threading standard that defines a set of apis for creating and manipulating threads. The library that implements the POSIX threading standard is often called Pthreads and is commonly used on UNIX-like POSIX systems such as Linux and Solaris.

NSThread

Nsthreads are an encapsulation of Mach threads in the kernel Mach kernel. An NSThread object is a thread. Usage is low, there is nothing to talk about except API usage. If you are already familiar with these apis, you can skip this section.

1. Initialize the thread to execute a task

-[cancel], -[start, and -[main] are used to initialize a NSTherad object. The thread is usually destroyed immediately after execution, or it exits due to some exception.

/** Use the method in the target object as the execution body. You can pass some arguments through the argument. - (instancetype)initWithTarget:(id)target selector:(SEL)selector object:(nullable id)argument; /** Use the block object as the execution body */
- (instancetype)initWithBlock:(void(^) (void))block;

/** class method, the above object method needs to call -[start] method to start the thread, the following two methods do not need to manually start */
+ (void)detachNewThreadWithBlock:(void(^) (void))block;
+ (void)detachNewThreadSelector:(SEL)selector toTarget:(id)target withObject:(nullable id)argument;
Copy the code

2. Execute a task on the main thread

/** The last argument, where you specify at least one mode to perform the selector. If you pass nil or an empty array, the selector doesn't perform, even though the method definition says nullable */
- (void)performSelectorOnMainThread:(SEL)aSelector withObject:(nullable id)arg waitUntilDone:(BOOL)wait modes:(nullable NSArray<NSString *> *)array;

- (void)performSelectorOnMainThread:(SEL)aSelector withObject:(nullable id)arg waitUntilDone:(BOOL)wait;
Copy the code

3. Execute a task on another thread

/** modes the same as the previous */
- (void)performSelector:(SEL)aSelector onThread:(NSThread *)thr withObject:(nullable id)arg waitUntilDone:(BOOL)wait modes:(nullable NSArray<NSString *> *)array;

- (void)performSelector:(SEL)aSelector onThread:(NSThread *)thr withObject:(nullable id)arg waitUntilDone:(BOOL)wait
Copy the code

4. Execute a task in the background thread

- (void)performSelectorInBackground:(SEL)aSelector withObject:(nullable id)arg;
Copy the code

5. Obtain the current thread

@property (class.readonly.strong) NSThread *currentThread;
Copy the code

When using thread-dependent methods, remember to set the name for later debugging. Other parameters, such as priority, are also set.

PerformSelector: A series of methods that are not very safe, use with caution.

Grand Central Dispatch (GCD)

GCD is a set of API based on C implementation, and is open source, if interested, can be down a source code here to study. GCD is by the system to help us deal with multi-threaded scheduling, is very convenient, but also the highest frequency of use. This chapter mainly explains the principle and use of GCD.

Before we get there, let’s have an overview of what the CCP has to offer:

The API provided by the system can fully meet our daily development needs. Each of these modules is explained below.

1. Dispatch Queue

GCD provides us with two types of queues, serial queues and parallel queues. The difference between the two is:

  • In a serial queue, tasks are executed in FIFO order. The first task is executed before the second one is executed.
  • In parallel queues, tasks are executed in FIFO order, as long as the first one is taken to execute, and then the next one starts to execute, and the latter task does not need to wait until the previous task is finished.

In addition, explain the confusing concept of concurrency and parallelism:

  • Concurrency: When individual parts can be executed at the same time, but it is up to the system to decide how.
  • Parallel: Two tasks are executed simultaneously without interfering with each other. Single-core device, the system needs to switch context to achieve concurrency; Multi-core devices, where systems can perform concurrent tasks through parallelism.

Finally, one more concept, synchronous and asynchronous:

  • Synchronization: Tasks executed synchronously block the current thread.
  • Asynchronous: Asynchronously executed tasks do not block the current thread. Whether to start a new thread is managed by the system. If there are currently free threads, use the current thread to execute the asynchronous task. If there are no idle threads and the number of threads does not reach the system maximum, a new thread is started. If the number of threads reaches the upper limit, wait for other threads to complete their tasks.
The queue

When we use it, we usually use these queues:

  • Primary queue – dispatch_get_main_queue: A special serial queue. In GCD, tasks in the method main queue are executed on the main thread. When we want to dispatch to the main thread when we update the UI, we can use this queue.

    - (void)viewDidLoad {
    [super viewDidLoad];
    	dispatch_async(dispatch_get_main_queue(), 	^{
          // UI-related operations
       });
    Copy the code

} ` ` `

  • Global parallel queue -dispatch_get_global_queue: a global parallel queue provided by the system. We can specify parameters to obtain queues of different priorities. The system provides four priorities, so it can also be considered that the system provides us with four parallel queues, which are:

    • DISPATCH_QUEUE_PRIORITY_HIGH
    • DISPATCH_QUEUE_PRIORITY_DEFAULT
    • DISPATCH_QUEUE_PRIORITY_LOW
    • DISPATCH_QUEUE_PRIORITY_BACKGROUND
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    dispatch_async(queue, ^{
        // Related operations
    });
    Copy the code
  • Custom queues: You can define your own serial or parallel queues to perform related tasks. Custom queues are also recommended in development. When creating a custom queue, two parameters are required. One is the name of the queue, which is convenient for us to find the queue when debugging. The naming method is the reverse DNS naming rule. NULL or DISPATCH_QUEUE_SERIAL indicates serial queues and DISPATCH_QUEUE_CONCURRENT indicates parallel queues. In general, do not send NULL as it will reduce readability. DISPATCH_QUEUE_SERIAL_INACTIVE indicates the serial inactive queue. DISPATCH_QUEUE_CONCURRENT_INACTIVE indicates the parallel inactive queue. It needs to be activated to execute block tasks.

    Copy the code

dispatch_queue_t queue = dispatch_queue_create(“com.bool.dispatch”,DISPATCH_QUEUE_SERIAL); ` ` `

  • You can usedispatch_queue_set_specific,dispatch_queue_get_specificdispatch_get_specificMethod to set the associated key to the queue or find the associated object based on the key.

So the system gives us five different queues, the main queue running on the main thread; Global queues with three different priorities; A background queue with a lower priority. In addition, developers can customize serial and parallel queues, and all blocks that are scheduled in these custom queues will eventually be placed in the global queue and thread pool, as described below. Steal a classic picture:

Synchronous VS Asynchronous

Most of the time we use dispatch_asyn() for asynchronous operations because the program is sequential and rarely synchronous. Sometimes we use dispatch_syn() as a lock for protection.

The system maintains a queue. According to THE rules of FIFO, the tasks dispatched to the queue are executed one by one. Sometimes we want to defer some tasks. For example, when the App starts, I want to defer a time-consuming task from the main thread. Try dispatch_asyn().

dispatch_async(dispatch_get_main_queue(), ^{
        // Want to postpone the task
    });
Copy the code

Normally we don’t deadlock with dispatch_asyn(). Deadlocks typically occur when dispatch_SYN () is used. Such as:

dispatch_sync(dispatch_get_main_queue(), ^{
   NSLog(@"dead lock");
});
Copy the code

If you want to write the above, you will get an error when you start. The same is true for:

dispatch_queue_t queue = dispatch_queue_create("com.bool.dispatch", DISPATCH_QUEUE_SERIAL);
    dispatch_async(queue, ^{
        NSLog(@"dispatch asyn");
        dispatch_sync(queue, ^{
            NSLog(@"dispatch asyn -> dispatch syn");
        });
    });
Copy the code

In the above code, the whole block of dispatch_asyn() (called blCOk_ASyn) is appended as a task to the end of the serial queue and executed. Inside block_asyn, there’s dispatch_SYN (). Think block_SYN. Because it is a serial queue, the first one (block_ASyn) needs to be executed, and the second one (block_SYN) needs to be executed. However, to complete block_ASyn, internal block_SYN needs to be executed. Waiting for each other, forming a deadlock.

In real world development, there are more complex deadlock scenarios. But now that the compiler is friendly, we can detect it at compile time.

The basic principle of

For the following lines of code, let’s examine the underlying process:

- (void)viewDidLoad {
    [super viewDidLoad];
    dispatch_queue_t queue = dispatch_queue_create("com.bool.dispatch", DISPATCH_QUEUE_SERIAL);
    dispatch_async(queue, ^{
        NSLog(@"dispatch asyn test");
    });
}
Copy the code

Create a queue

The source code is very long, but there is only one method, the logic is relatively clear, as follows:

/** The method called by the developer */
dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
	return _dispatch_queue_create_with_target(label, attr,
			DISPATCH_TARGET_QUEUE_DEFAULT, true);
}

/** Internal actual call method */
DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_queue_create_with_target(const char *label, dispatch_queue_attr_t dqa,
		dispatch_queue_t tq, bool legacy)
{
	// 1. Preliminary judgment
	if(! slowpath(dqa)) { dqa = _dispatch_get_default_queue_attr(); }else if(dqa->do_vtable ! = DISPATCH_VTABLE(queue_attr)) { DISPATCH_CLIENT_CRASH(dqa->do_vtable,"Invalid queue attribute");
	}

	// 2. Set queue parameters
	dispatch_qos_t qos = _dispatch_priority_qos(dqa->dqa_qos_and_relpri);
#if! HAVE_PTHREAD_WORKQUEUE_QOS
	if (qos == DISPATCH_QOS_USER_INTERACTIVE) {
		qos = DISPATCH_QOS_USER_INITIATED;
	}
	if (qos == DISPATCH_QOS_MAINTENANCE) {
		qos = DISPATCH_QOS_BACKGROUND;
	}
#endif / /! HAVE_PTHREAD_WORKQUEUE_QOS

	_dispatch_queue_attr_overcommit_t overcommit = dqa->dqa_overcommit;
	if(overcommit ! = _dispatch_queue_attr_overcommit_unspecified && tq) {if (tq->do_targetq) {
			DISPATCH_CLIENT_CRASH(tq, "Cannot specify both overcommit and "
					"a non-global target queue"); }}if(tq && ! tq->do_targetq && tq->do_ref_cnt == DISPATCH_OBJECT_GLOBAL_REFCNT) {// Handle discrepancies between attr and target queue, attributes win
		if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
			if (tq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
				overcommit = _dispatch_queue_attr_overcommit_enabled;
			} else{ overcommit = _dispatch_queue_attr_overcommit_disabled; }}if (qos == DISPATCH_QOS_UNSPECIFIED) {
			dispatch_qos_t tq_qos = _dispatch_priority_qos(tq->dq_priority);
			tq = _dispatch_get_root_queue(tq_qos,
					overcommit == _dispatch_queue_attr_overcommit_enabled);
		} else {
			tq = NULL; }}else if(tq && ! tq->do_targetq) {// target is a pthread or runloop root queue, setting QoS or overcommit
		// is disallowed
		if(overcommit ! = _dispatch_queue_attr_overcommit_unspecified) { DISPATCH_CLIENT_CRASH(tq,"Cannot specify an overcommit attribute "
					"and use this kind of target queue");
		}
		if(qos ! = DISPATCH_QOS_UNSPECIFIED) { DISPATCH_CLIENT_CRASH(tq,"Cannot specify a QoS attribute "
					"and use this kind of target queue"); }}else {
		if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
			 // Serial queues default to overcommit!overcommit = dqa->dqa_concurrent ? _dispatch_queue_attr_overcommit_disabled : _dispatch_queue_attr_overcommit_enabled; }}if(! tq) { tq = _dispatch_get_root_queue( qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos, overcommit == _dispatch_queue_attr_overcommit_enabled);if(slowpath(! tq)) { DISPATCH_CLIENT_CRASH(qos,"Invalid queue attribute"); }}// 3. Initialize the queue
	if (legacy) {
		// if any of these attributes is specified, use non legacy classes
		if (dqa->dqa_inactive || dqa->dqa_autorelease_frequency) {
			legacy = false; }}const void *vtable;
	dispatch_queue_flags_t dqf = 0;
	if (legacy) {
		vtable = DISPATCH_VTABLE(queue);
	} else if (dqa->dqa_concurrent) {
		vtable = DISPATCH_VTABLE(queue_concurrent);
	} else {
		vtable = DISPATCH_VTABLE(queue_serial);
	}
	switch (dqa->dqa_autorelease_frequency) {
	case DISPATCH_AUTORELEASE_FREQUENCY_NEVER:
		dqf |= DQF_AUTORELEASE_NEVER;
		break;
	case DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM:
		dqf |= DQF_AUTORELEASE_ALWAYS;
		break;
	}
	if (legacy) {
		dqf |= DQF_LEGACY;
	}
	if (label) {
		const char *tmp = _dispatch_strdup_if_mutable(label);
		if (tmp != label) {
			dqf |= DQF_LABEL_NEEDS_FREE;
			label = tmp;
		}
	}

	dispatch_queue_t dq = _dispatch_object_alloc(vtable,
			sizeof(struct dispatch_queue_s) - DISPATCH_QUEUE_CACHELINE_PAD);
	_dispatch_queue_init(dq, dqf, dqa->dqa_concurrent ?
			DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
			(dqa->dqa_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

	dq->dq_label = label;
#if HAVE_PTHREAD_WORKQUEUE_QOS
	dq->dq_priority = dqa->dqa_qos_and_relpri;
	if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
		dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
	}
#endif
	_dispatch_retain(tq);
	if (qos == QOS_CLASS_UNSPECIFIED) {
		// legacy way of inherithing the QoS from the target
		_dispatch_queue_priority_inherit_from_target(dq, tq);
	}
	if(! dqa->dqa_inactive) { _dispatch_queue_inherit_wlh_from_target(dq, tq); } dq->do_targetq = tq; _dispatch_object_debug(dq,"%s", __func__);
	return _dispatch_introspection_queue_create(dq);
}
Copy the code

According to the flow chart generated by the code, instead of looking at the code directly, the same as below:

According to the flowchart, the steps of this method are as follows:

  • Developer calldispatch_queue_create()Method is called internally_dispatch_queue_create_with_target()Methods.
  • And then we make a preliminary judgment, and most of the time, we don’t pass queue types, it’s always NULL, so this is a slowpath here. If you pass an argument that is not of the specified queue type, the system will think you are mentally retarded and throw an error.
  • Then initialize some configuration items. The main ones are target_queue, overcommit and qos. Target_queue is the dependent target queue, and like any queue submitted task (block), will eventually be placed in the target queue for execution; When overcommit is supported, each time a task is submitted to the queue, a new thread will be started to process it. In this way, a single thread will not be overloaded with too many tasks. Qos is queue priority, as mentioned earlier.
  • Then go to the judgment branch. The target queue of a normal serial queue is a global queue that supports an OverCOMMIT (else branch); The overCOMMIT item can be set only when the tQ object reference count is DISPATCH_OBJECT_GLOBAL_REFCNT (which is never released) and there is no target queue. The priority of the TQ object is DISPATCH_QOS_UNSPECIFIED. The TQ needs to be reset (for the if branch); Other cases (else if branch).
  • Then configure the identity of the queue so that you can easily find your own queue during debugging.
  • use_dispatch_object_allocMethod requests a dispatch_queue_t object space, Dq.
  • Based on incoming information (parallel or serial; Active or inactive) to initialize the queue. The width of the parallel queue is set toDISPATCH_QUEUE_WIDTH_MAXThat is, maximum, without limits; Serial will be set to 1.
  • Get the configuration items above, the target queue, whether overCOMMIT, priority, and DQ binding are supported.
  • Return this queue. A message is returned to facilitate debugging.

Asynchronous execution

This version of the code that executes asynchronously is messy because of the many method splits. The source code is as follows:

/** The developer calls */
void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	uintptr_t dc_flags = DISPATCH_OBJ_CONSUME_BIT;

	_dispatch_continuation_init(dc, dq, work, 0.0, dc_flags);
	_dispatch_continuation_async(dq, dc);
}

/** internal call, package a layer, then call */
DISPATCH_NOINLINE
void
_dispatch_continuation_async(dispatch_queue_t dq, dispatch_continuation_t dc)
{
	_dispatch_continuation_async2(dq, dc,
			dc->dc_flags & DISPATCH_OBJ_BARRIER_BIT);
}

/** the barrier keyword differentiates serial from parallel */
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_async2(dispatch_queue_t dq, dispatch_continuation_t dc,
		bool barrier)
{
	if(fastpath(barrier || ! DISPATCH_QUEUE_USES_REDIRECTION(dq->dq_width))) {/ / serial
		return _dispatch_continuation_push(dq, dc);
	}
	
	/ / parallel
	return _dispatch_async_f2(dq, dc);
}

/** parallel has another layer of calls, which is this method */
DISPATCH_NOINLINE
static void
_dispatch_async_f2(dispatch_queue_t dq, dispatch_continuation_t dc)
{
	if (slowpath(dq->dq_items_tail)) {/ / path
		return _dispatch_continuation_push(dq, dc);
	}

	if(slowpath(! _dispatch_queue_try_acquire_async(dq))) {/ / path
		return _dispatch_continuation_push(dq, dc);
	}
	/ / path
	return _dispatch_async_f_redirect(dq, dc,
			_dispatch_continuation_override_qos(dq, dc));
}

/** is mainly used to redirect */
DISPATCH_NOINLINE
static void
_dispatch_async_f_redirect(dispatch_queue_t dq,
		dispatch_object_t dou, dispatch_qos_t qos)
{
	if(! slowpath(_dispatch_object_is_redirection(dou))) { dou._dc = _dispatch_async_redirect_wrap(dq, dou); } dq = dq->do_targetq;// Find the queue to redirect to
	while (slowpath(DISPATCH_QUEUE_USES_REDIRECTION(dq->dq_width))) {
		if(! fastpath(_dispatch_queue_try_acquire_async(dq))) {break;
		}
		if(! dou._dc->dc_ctxt) { dou._dc->dc_ctxt = (void(*)uintptr_t)_dispatch_queue_autorelease_frequency(dq);
		}
		dq = dq->do_targetq;
	}

	// Synchronous asynchrony eventually calls this method to append tasks to the queuedx_push(dq, dou, qos); }... Omit some call levels,/** core method, using dc_flags to distinguish group, serial, or parallel */
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_invoke_inline(dispatch_object_t dou, voucher_t ov,
		dispatch_invoke_flags_t flags)
{
	dispatch_continuation_t dc = dou._dc, dc1;
	dispatch_invoke_with_autoreleasepool(flags, {
		uintptr_t dc_flags = dc->dc_flags;
		_dispatch_continuation_voucher_adopt(dc, ov, dc_flags);
		if (dc_flags & DISPATCH_OBJ_CONSUME_BIT) { / / parallel
			dc1 = _dispatch_continuation_free_cacheonly(dc);
		} else {
			dc1 = NULL;
		}
		if (unlikely(dc_flags & DISPATCH_OBJ_GROUP_BIT)) { // group
			_dispatch_continuation_with_group_invoke(dc);
		} else { / / serial
			_dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
			_dispatch_introspection_queue_item_complete(dou);
		}
		if(unlikely(dc1)) { _dispatch_continuation_free_to_cache_limit(dc1); }}); _dispatch_perfmon_workitem_inc(); }Copy the code

Instead of looking at the code, look at the diagram:

Describe the process according to the flow chart:

  • First the developer callsdispatch_async()Method, and then creates one internally_dispatch_continuation_initQueue, bind information like queue and block to this DC. I copy the block.
  • It then goes through several levels of calls, mainly to distinguish between parallel and serial.
  • If it’s serial (which is a very common case, so it’s fastPath), it’s dX_push, which means that tasks are appended to a linked list.
  • If it’s parallel, you need to redirect. As we said earlier, tasks that are put on a queue are eventually appended to the target queue in one form or another. in_dispatch_async_f_redirectMethod, find the dependency target queue again, and append to it.
  • After a series of calls, we’re going to be in_dispatch_continuation_invoke_inlineMethod to distinguish between serial and parallel. Because this method is called frequently, it is defined as an inline function. For the serial queue, we use semaphore control, the semaphore is set to wait before execution, and singal is sent after execution. For scheduling groups, we call it after executiondispatch_group_leave.
  • The underlying thread pool is maintained using PThreads, so pThreads are ultimately used to handle these tasks.

Synchronous execution

Synchronous execution, relatively simple, the source code is as follows:

/** The developer calls */
void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
	if (unlikely(_dispatch_block_has_private_data(work))) {
		return _dispatch_sync_block_with_private_data(dq, work, 0);
	}
	dispatch_sync_f(dq, work, _dispatch_Block_invoke(work));
}

/** Internal call */
DISPATCH_NOINLINE
void
dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func)
{
	if (likely(dq->dq_width == 1)) {
		return dispatch_barrier_sync_f(dq, ctxt, func);
	}

	// Global concurrent queues and queues bound to non-dispatch threads
	// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
	if(unlikely(! _dispatch_queue_try_reserve_sync_width(dq))) {return _dispatch_sync_f_slow(dq, ctxt, func, 0);
	}

	_dispatch_introspection_sync_begin(dq);
	if (unlikely(dq->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dq, ctxt, func, 0);
	}
	_dispatch_sync_invoke_and_complete(dq, ctxt, func);
}
Copy the code

Synchronous execution, relatively simple, the general logic is similar. I’m not going to draw a picture, I’m going to describe it:

  • Developer usedispatch_sync()Method, most paths, will be calleddispatch_sync_f()Methods.
  • If it is a serial queue, it passesdispatch_barrier_sync_f()Method to ensure atomic operation.
  • If it is not serial (rarely), we use it_dispatch_introspection_sync_begin_dispatch_sync_invoke_and_completeTo ensure synchronization.
dispatch_after

Dispatch_after is usually used for postponement of some tasks and can be used instead of NSTimer because NSTimer sometimes has too many problems. In a later chapter, I will cover problems in multithreading in general, but I won’t go into detail here. We usually use dispatch_after like this:

- (void)viewDidLoad {
    [super viewDidLoad];
    dispatch_queue_t queue = dispatch_queue_create("com.bool.dispatch", DISPATCH_QUEUE_SERIAL);
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(NSEC_PER_SEC * 2.0f)),queue, ^{
        / / 2.0 second to execute
    });
}
Copy the code

When we transition to a new page, we don’t update some of the views immediately. To get users’ attention, we will update them later, which can be done by using this API.

The source code is as follows:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_after(dispatch_time_t when, dispatch_queue_t queue.void *ctxt, void *handler, bool block)
{
	dispatch_timer_source_refs_t dt;
	dispatch_source_t ds;
	uint64_t leeway, delta;

	if (when == DISPATCH_TIME_FOREVER) {
#if DISPATCH_DEBUG
		DISPATCH_CLIENT_CRASH(0."dispatch_after called with 'when' == infinity");
#endif
		return;
	}

	delta = _dispatch_timeout(when);
	if (delta == 0) {
		if (block) {
			return dispatch_async(queue, handler);
		}
		return dispatch_async_f(queue, ctxt, handler);
	}
	leeway = delta / 10; // <rdar://problem/13447496>

	if (leeway < NSEC_PER_MSEC) leeway = NSEC_PER_MSEC;
	if (leeway > 60 * NSEC_PER_SEC) leeway = 60 * NSEC_PER_SEC;

	// this function can and should be optimized to not use a dispatch source
	ds = dispatch_source_create(&_dispatch_source_type_after, 0.0.queue);
	dt = ds->ds_timer_refs;

	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	if (block) {
		_dispatch_continuation_init(dc, ds, handler, 0.0.0);
	} else {
		_dispatch_continuation_init_f(dc, ds, ctxt, handler, 0.0.0);
	}
	// reference `ds` so that it doesn't show up as a leak
	dc->dc_data = ds;
	_dispatch_trace_continuation_push(ds->_as_dq, dc);
	os_atomic_store2o(dt, ds_handler[DS_EVENT_HANDLER], dc, relaxed);

	if ((int64_t)when < 0) {
		// wall clock
		when = (dispatch_time_t) - ((int64_t)when);
	} else {
		// absolute clock
		dt->du_fflags |= DISPATCH_TIMER_CLOCK_MACH;
		leeway = _dispatch_time_nano2mach(leeway);
	}
	dt->dt_timer.target = when;
	dt->dt_timer.interval = UINT64_MAX;
	dt->dt_timer.deadline = when + leeway;
	dispatch_activate(ds);
}
Copy the code

The _dispatch_after() method is called inside dispatch_after() and the delay time is determined first. If DISPATCH_TIME_FOREVER (never executed), an exception is raised; If the value is 0, the command is executed immediately. Otherwise a pointer to the dispatch_TIMer_source_refs_t structure is created to associate contextual information with it. The timer and block tasks are then associated using the dispatch_source correlation method. When the timer expires, the block task is removed and executed.

dispatch_once

If we have a piece of code that should only be initialized once in the life of the App, then dispatch_once is best. For example, we often use this in singletons:

+ (instancetype)sharedManager {
    static BLDispatchManager *sharedInstance = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        sharedInstance = [[BLDispatchManager alloc] initPrivate];
    });
    
    return sharedInstance;
}
Copy the code

Also used when defining NSDateFormatter:

- (NSString *)todayDateString {
    static NSDateFormatter *formatter = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        formatter = [NSDateFormatter new];
        formatter.locale = [NSLocale localeWithLocaleIdentifier:@"en_US_POSIX"];
        formatter.timeZone = [NSTimeZone timeZoneForSecondsFromGMT:8 * 3600];
        formatter.dateFormat = @"yyyyMMdd";
    });
    
    return [formatter stringFromDate:[NSDate date]];
}
Copy the code

Because this is a common snippet, it is added to the Code snippet in Xcode.

Its source code is as follows:

/** A structure containing the current semaphore, thread port, and pointer to the next node */
typedef struct _dispatch_once_waiter_s {
	volatile struct _dispatch_once_waiter_s *volatile dow_next;
	dispatch_thread_event_s dow_event;
	mach_port_t dow_thread;
} *_dispatch_once_waiter_t;

/** The method we call */
void
dispatch_once(dispatch_once_t *val, dispatch_block_t block)
{
	dispatch_once_f(val, block, _dispatch_Block_invoke(block));
}

/** The actual execution method */
DISPATCH_NOINLINE
void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
#if! DISPATCH_ONCE_INLINE_FASTPATH
	if (likely(os_atomic_load(val, acquire) == DLOCK_ONCE_DONE)) {
		return;
	}
#endif / /! DISPATCH_ONCE_INLINE_FASTPATH
	return dispatch_once_f_slow(val, ctxt, func);
}

DISPATCH_ONCE_SLOW_INLINE
static void
dispatch_once_f_slow(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
#if DISPATCH_GATE_USE_FOR_DISPATCH_ONCE
	dispatch_once_gate_t l = (dispatch_once_gate_t)val;

	if (_dispatch_once_gate_tryenter(l)) {
		_dispatch_client_callout(ctxt, func);
		_dispatch_once_gate_broadcast(l);
	} else {
		_dispatch_once_gate_wait(l);
	}
#else
	_dispatch_once_waiter_t volatile *vval = (_dispatch_once_waiter_t*)val;
	struct _dispatch_once_waiter_s dow = { };
	_dispatch_once_waiter_t tail = &dow, next, tmp;
	dispatch_thread_event_t event;

	if (os_atomic_cmpxchg(vval, NULL, tail, acquire)) {
		dow.dow_thread = _dispatch_tid_self();
		_dispatch_client_callout(ctxt, func);

		next = (_dispatch_once_waiter_t)_dispatch_once_xchg_done(val);
		while(next ! = tail) { tmp = (_dispatch_once_waiter_t)_dispatch_wait_until(next->dow_next); event = &next->dow_event; next = tmp; _dispatch_thread_event_signal(event); }}else {
		_dispatch_thread_event_init(&dow.dow_event);
		next = *vval;
		for (;;) {
			if (next == DISPATCH_ONCE_DONE) {
				break;
			}
			if (os_atomic_cmpxchgv(vval, next, tail, &next, release)) {
				dow.dow_thread = next->dow_thread;
				dow.dow_next = next;
				if (dow.dow_thread) {
					pthread_priority_t pp = _dispatch_get_priority();
					_dispatch_thread_override_start(dow.dow_thread, pp, val);
				}
				_dispatch_thread_event_wait(&dow.dow_event);
				if (dow.dow_thread) {
					_dispatch_thread_override_end(dow.dow_thread, val);
				}
				break;
			}
		}
		_dispatch_thread_event_destroy(&dow.dow_event);
	}
#endif
}
Copy the code

Instead of looking at code, look at graphics (EMMM… After drawing the graph logically, I found that the graph was also quite messy, so I marked the two main branches with different colors) :

Based on this diagram, LET me describe the main process:

  • After we call the dispatch_once() method, internally we mostly call the dispatch_once_f_slow() method, which is the real execution method.

  • Os_atomic_cmpxchg (vval, NULL, tail, acquire

    if (*vval == NULL) {
    	*vval = tail = &dow;
    	return true;
    } else {
    	return false
    }
    Copy the code

    The once_token we initialized, *vval, is actually 0, so it returns true the first time. The method in if() is an atomic operation, meaning that if multiple threads call the method at the same time, only one thread will enter the True branch, and all the others will enter the else branch.

  • Let’s go to the True branch. Once inside, the corresponding block, the corresponding task, is executed. Next then points to *vval, flagged as DISPATCH_ONCE_DONE, which performs the following procedure:

    	next = (_dispatch_once_waiter_t)_dispatch_once_xchg_done(val);
    	// This is true in practice
    	next = *vval;
    	*vval = DISPATCH_ONCE_DONE;
    Copy the code
  • Then tail = &dow. *vval = &dow -> next = *vval -> next = &dow. If no other thread (or call) enters the else branch, &dow does not change. While (tail! = TMP) is not executed, the branch ends.

  • If another thread (or call) enters the else branch, a linked list has been generated waiting for responses. At this point the entry &dow has changed to become the end of the list and *vval is the head of the list. After entering the while loop, it starts to traverse the linked list and sends signals in turn to evoke it.

  • And then these calls into the else branch. Once in the branch, an infinite loop is created until *vval is flagged as DISPATCH_ONCE_DONE.

  • When *vval is not DISPATCH_ONCE_DONE, the node is appended to the end of the list and the semaphore’s wait method is invoked.

The above is all the execution process. As you can see from the source code, atomic operations + semaphores are used to ensure that blocks are only executed multiple times, even in multithreaded situations.

It is easy to explain why a recursive call to dispatch_once causes a deadlock. Look at the following code:

- (void)dispatchOnceTest {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        [self dispatchOnceTest];
    });
}
Copy the code

From the above analysis, all other calls go to the else branch until the block is executed and *vval is DISPATCH_ONCE_DONE. In the second recursive call, the semaphore is in a wait state and cannot be invoked until the first block is finished. But what the first block does is make a second call, which is waited, meaning that the block never finishes executing. The deadlock just happens.

dispatch_apply

Sometimes we use dispatch_apply instead of for loop when there is no sequential dependency. For example, we download a set of images:

/** Use for loop */
- (void)downloadImages:(NSArray <NSURL *> *)imageURLs {
    for (NSURL *imageURL in imageURLs) {
        [selfdownloadImageWithURL:imageURL]; }}/** dispatch_apply */
- (void)downloadImages:(NSArray <NSURL *> *)imageURLs {
    dispatch_queue_t downloadQueue = dispatch_queue_create("com.bool.download", DISPATCH_QUEUE_CONCURRENT);
    dispatch_apply(imageURLs.count, downloadQueue, ^(size_t index) {
        NSURL *imageURL = imageURLs[index];
        [self downloadImageWithURL:imageURL];
    });
}
Copy the code

There are several issues to pay attention to when making the substitution:

  • The tasks must have no ordering dependency; any of them can run first.
  • The substitution usually only makes sense when tasks run concurrently on a concurrent queue; substituting on a serial queue gains nothing.
  • If the array is very small, or each task is very short, the substitution is also pointless: forcing concurrency can cost more than the plain for loop.

The principle doesn't need much space. In short: dispatch_apply is synchronous and blocks the current thread until all the block tasks complete, and when you submit to a concurrent queue, the order in which the iterations execute is not guaranteed.
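If blocking is a concern, one option is to move the whole dispatch_apply off the current thread first. A minimal sketch, reusing the imageURLs array and downloadImageWithURL: method from the example above (the method name here is hypothetical):

- (void)downloadImagesWithoutBlocking:(NSArray <NSURL *> *)imageURLs {
    dispatch_queue_t downloadQueue = dispatch_queue_create("com.bool.download", DISPATCH_QUEUE_CONCURRENT);
    // Hop to a global queue so dispatch_apply blocks that thread, not ours
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
        dispatch_apply(imageURLs.count, downloadQueue, ^(size_t index) {
            [self downloadImageWithURL:imageURLs[index]];
        });
        // dispatch_apply has returned, so every iteration is done
        dispatch_async(dispatch_get_main_queue(), ^{
            // update UI
        });
    });
}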

More often, when we are performing a download task and do not want to block the current thread, we can use dispatch_group.

dispatch_group

Dispatch_group is a good choice when handling batch asynchronous tasks. For the example of downloading images above, we can do this:

- (void)downloadImages:(NSArray <NSURL *> *)imageURLs {
    dispatch_group_t taskGroup = dispatch_group_create();
    dispatch_queue_t queue = dispatch_queue_create("com.bool.group", DISPATCH_QUEUE_CONCURRENT);
    for (NSURL *imageURL in imageURLs) {
        dispatch_group_enter(taskGroup);
        // The download method is asynchronous
        [self downloadImageWithURL:imageURL withQueue:queue completeHandler:^{
            dispatch_group_leave(taskGroup);
        }];
    }
    
    dispatch_group_notify(taskGroup, queue, ^{
        // all task finish
    });
    
    /** dispatch_group_async is roughly enter + async + leave: the group is
        left as soon as the block returns, so it only pairs correctly when
        the work finishes synchronously inside the block. That is why it is
        used less often for tasks like these downloads. */
    dispatch_group_async(taskGroup, queue, ^{
        
    });
}

In terms of principle, this is similar to the dispatch_async() method, which was mentioned earlier. Here’s just one piece of code:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_group_async(dispatch_group_t dg, dispatch_queue_t dq,
		dispatch_continuation_t dc)
{
	dispatch_group_enter(dg);
	dc->dc_data = dg;
	_dispatch_continuation_async(dq, dc);
}

In this code, dispatch_group_enter(dg) is called first, and the continuation then goes down the same path as dispatch_async(), ending up in _dispatch_continuation_invoke_inline(). There the type is checked; if it is a group, the task is executed and then dispatch_group_leave((dispatch_group_t)dou) is called, matching the earlier enter.
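In other words, as a rough mental model (not the literal implementation), dispatch_group_async(group, queue, block) behaves like the manual pairing below, which also explains why a block that merely starts asynchronous work would leave the group too early:

dispatch_group_enter(group);
dispatch_async(queue, ^{
    block();                      // runs the submitted task
    dispatch_group_leave(group);  // leaves as soon as the block returns
});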

That concludes the introduction to Dispatch Queues; roughly 60% of everyday GCD use is covered by the contents above.

2. Dispatch Block

In iOS 8, Apple provided us with a new API: the Dispatch Block. We could always pass a block to dispatch functions as a task, but this is different. As mentioned previously, tasks created with NSOperation can be cancelled while GCD tasks could not; since iOS 8, they can.

Basic usage
  • Create a block and execute it.

    - (void)dispatchBlockTest {
        // No priority specified
        dispatch_block_t dsBlock = dispatch_block_create(0, ^{
            NSLog(@"test block");
        });
        
        // Specify the priority
        dispatch_block_t dsQosBlock = dispatch_block_create_with_qos_class(0, QOS_CLASS_USER_INITIATED, -1, ^{
            NSLog(@"test block");
        });
        
        dispatch_async(dispatch_get_main_queue(), dsBlock);
        dispatch_async(dispatch_get_main_queue(), dsQosBlock);
        
        // Create and execute directly
        dispatch_block_perform(0, ^{
            NSLog(@"test block");
        });
    }

  • Block the current thread and continue execution after the block completes.

    - (void)dispatchBlockTest {
        dispatch_queue_t queue = dispatch_queue_create("com.bool.block", DISPATCH_QUEUE_SERIAL);
        dispatch_block_t dsBlock = dispatch_block_create(0, ^{
            NSLog(@"test block");
        });
        dispatch_async(queue, dsBlock);
        // Wait until the block finishes executing
        dispatch_block_wait(dsBlock, DISPATCH_TIME_FOREVER);
        NSLog(@"block was finished");
    }
  • Get notified after the block executes, then perform other tasks.

    - (void)dispatchBlockTest {
        dispatch_queue_t queue = dispatch_queue_create("com.bool.block", DISPATCH_QUEUE_SERIAL);
        dispatch_block_t dsBlock = dispatch_block_create(0, ^{
            NSLog(@"test block");
        });
        dispatch_async(queue, dsBlock);
        // A notification is delivered after the block executes
        dispatch_block_notify(dsBlock, queue, ^{
            NSLog(@"block was finished, do other thing");
        });
        NSLog(@"execute first");
    }
  • Cancel the block

    - (void)dispatchBlockTest {
        dispatch_queue_t queue = dispatch_queue_create("com.bool.block", DISPATCH_QUEUE_SERIAL);
        dispatch_block_t dsBlock1 = dispatch_block_create(0, ^{
            NSLog(@"test block1");
        });
        dispatch_block_t dsBlock2 = dispatch_block_create(0, ^{
            NSLog(@"test block2");
        });
        dispatch_async(queue, dsBlock1);
        dispatch_async(queue, dsBlock2);
        
        // The second block is cancelled and never executed
        // (see the note after this list)
        dispatch_block_cancel(dsBlock2);
    }
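Note that dispatch_block_cancel() can only prevent a block that has not started yet; it does not interrupt one that is already running. For a long-running block, a cooperative check is possible with dispatch_block_testcancel(). A sketch, assuming the work can be split into small units (the names here are hypothetical):

- (void)cancellableLongTask {
    dispatch_queue_t queue = dispatch_queue_create("com.bool.block", DISPATCH_QUEUE_SERIAL);
    __block dispatch_block_t workBlock = dispatch_block_create(0, ^{
        for (int i = 0; i < 1000; i++) {
            // Returns non-zero once the block has been cancelled
            if (dispatch_block_testcancel(workBlock) != 0) {
                return;
            }
            // ... one unit of work ...
        }
    });
    dispatch_async(queue, workBlock);
    // Later, possibly from another thread:
    dispatch_block_cancel(workBlock);
}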

3. Dispatch Barriers

Dispatch barriers can be understood as scheduling fences and are often used for concurrent read and write operations from multiple threads. For example:

@interface ViewController ()
@property (nonatomic, strong) dispatch_queue_t imageQueue;
@property (nonatomic, strong) NSMutableArray *imageArray;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    
    self.imageQueue = dispatch_queue_create("com.bool.image", DISPATCH_QUEUE_CONCURRENT);
    self.imageArray = [NSMutableArray array];
}

/** The barrier makes sure no other operation touches the array while writing */
- (void)addImage:(UIImage *)image {
    dispatch_barrier_async(self.imageQueue, ^{
        [self.imageArray addObject:image];
        dispatch_async(dispatch_get_main_queue(), ^{
            // update UI
        });
    });
}

/** dispatch_sync here acts as a lock */
- (NSArray <UIImage *> *)images {
    __block NSArray *imagesArray = nil;
    dispatch_sync(self.imageQueue, ^{
        imagesArray = [self.imageArray mutableCopy];
    });
    return imagesArray;
}
@end

It might be easier to understand as a diagram:

dispatch_barrier_async() works like dispatch_async() except that the flags are set differently:

void
dispatch_barrier_async(dispatch_queue_t dq, dispatch_block_t work)
{
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	// dispatch_async() sets only DISPATCH_OBJ_CONSUME_BIT
	uintptr_t dc_flags = DISPATCH_OBJ_CONSUME_BIT | DISPATCH_OBJ_BARRIER_BIT;

	_dispatch_continuation_init(dc, dq, work, 0, 0, dc_flags);
	_dispatch_continuation_push(dq, dc);
}

After that, the task is pushed into the queue. The drain loop then takes tasks from the queue and executes them one by one. When it finds a task flagged as a barrier, it stops handing out queue width, the barrier task executes alone, and the tasks behind it stay queued until it finishes.

DISPATCH_ALWAYS_INLINE
static dispatch_queue_wakeup_target_t
_dispatch_queue_drain(dispatch_queue_t dq, dispatch_invoke_context_t dic,
		dispatch_invoke_flags_t flags, uint64_t *owned_ptr, bool serial_drain)
{
	...
	for (;;) {
		...
first_iteration:
		dq_state = os_atomic_load(&dq->dq_state, relaxed);
		if (unlikely(_dq_state_is_suspended(dq_state))) {
			break;
		}
		if (unlikely(orig_tq != dq->do_targetq)) {
			break;
		}

		if (serial_drain || _dispatch_object_is_barrier(dc)) {
			if (!serial_drain && owned != DISPATCH_QUEUE_IN_BARRIER) {
				if (!_dispatch_queue_try_upgrade_full_width(dq, owned)) {
					goto out_with_no_width;
				}
				owned = DISPATCH_QUEUE_IN_BARRIER;
			}
			next_dc = _dispatch_queue_next(dq, dc);
			if (_dispatch_object_is_sync_waiter(dc)) {
				owned = 0;
				dic->dic_deferred = dc;
				goto out_with_deferred;
			}
		} else {
			if (owned == DISPATCH_QUEUE_IN_BARRIER) {
				// we just ran barrier work items, we have to make their
				// effect visible to other sync work items on other threads
				// that may start coming in after this point, hence the
				// release barrier
				os_atomic_xor2o(dq, dq_state, owned, release);
				owned = dq->dq_width * DISPATCH_QUEUE_WIDTH_INTERVAL;
			} else if (unlikely(owned == 0)) {
				if (_dispatch_object_is_sync_waiter(dc)) {
					// sync "readers" don't observe the limit
					_dispatch_queue_reserve_sync_width(dq);
				} else if (!_dispatch_queue_try_acquire_async(dq)) {
					goto out_with_no_width;
				}
				owned = DISPATCH_QUEUE_WIDTH_INTERVAL;
			}

			next_dc = _dispatch_queue_next(dq, dc);
			if (_dispatch_object_is_sync_waiter(dc)) {
				owned -= DISPATCH_QUEUE_WIDTH_INTERVAL;
				_dispatch_sync_waiter_redirect_or_wake(dq,
						DISPATCH_SYNC_WAITER_NO_UNLOCK, dc);
				continue;
			}
			...
		}
		...
	}
	...
}

4. Dispatch Source

We use dispatch_source very little. It is a wrapper around BSD kernel facilities and is often used to monitor certain events, for example when a debugger breakpoint is hit or cancelled. The events that can be monitored are documented here: https://developer.apple.com/documentation/dispatch/dispatch_source_type_constants?language=objc

  • DISPATCH_SOURCE_TYPE_DATA_ADD: user-defined events
  • DISPATCH_SOURCE_TYPE_DATA_OR: user-defined events
  • DISPATCH_SOURCE_TYPE_MACH_RECV: Mach port receive events
  • DISPATCH_SOURCE_TYPE_MACH_SEND: Mach port send events
  • DISPATCH_SOURCE_TYPE_PROC: process-related events
  • DISPATCH_SOURCE_TYPE_READ: file read events
  • DISPATCH_SOURCE_TYPE_SIGNAL: signal-related events
  • DISPATCH_SOURCE_TYPE_TIMER: timer events
  • DISPATCH_SOURCE_TYPE_VNODE: file attribute change events
  • DISPATCH_SOURCE_TYPE_WRITE: file write events
  • DISPATCH_SOURCE_TYPE_MEMORYPRESSURE: memory pressure events

For example, we can monitor when a breakpoint is hit or cancelled with the following code:

@interface ViewController ()
@property (nonatomic, strong) dispatch_source_t signalSource;
@property (nonatomic, assign) dispatch_once_t signalOnceToken;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    dispatch_once(&_signalOnceToken, ^{
        dispatch_queue_t queue = dispatch_get_main_queue();
        self.signalSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_SIGNAL, SIGSTOP, 0, queue);
        
        if (self.signalSource) {
            dispatch_source_set_event_handler(self.signalSource, ^{
                // Fires when a breakpoint is hit or cancelled
                NSLog(@"debug test");
            });
            dispatch_resume(self.signalSource);
        }
    });
}
@end

dispatch_after() relies on dispatch_source. We can implement a similar timer ourselves:

- (void)customTimer {
    dispatch_source_t timerSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, DISPATCH_TARGET_QUEUE_DEFAULT);
    // Start after 5 seconds, then repeat every 2 seconds, with a small leeway
    dispatch_source_set_timer(timerSource, dispatch_time(DISPATCH_TIME_NOW, 5.0 * NSEC_PER_SEC), 2.0 * NSEC_PER_SEC, 5);
    dispatch_source_set_event_handler(timerSource, ^{
        NSLog(@"dispatch source timer");
    });
    
    // Keep a strong reference, otherwise the source would be released
    self.signalSource = timerSource;
    dispatch_resume(self.signalSource);
}
The basic principle

With dispatch_source, the process looks like this: we create a source, attach it to a queue, and call dispatch_resume(); when the event fires, the source is woken up on the queue and the corresponding block executes. Let's walk through the detailed flow step by step:

  • Creating a source object is similar to creating a queue, so the next steps are very similar to creating a queue.

    dispatch_source_t
    dispatch_source_create(dispatch_source_type_t dst, uintptr_t handle,
    		unsigned long mask, dispatch_queue_t dq)
    {
    	dispatch_source_refs_t dr;
    	dispatch_source_t ds;
    
    	dr = dux_create(dst, handle, mask)._dr;
    	if (unlikely(!dr)) {
    		return DISPATCH_BAD_INPUT;
    	}
    	
    	// Allocate memory
    	ds = _dispatch_object_alloc(DISPATCH_VTABLE(source),
    			sizeof(struct dispatch_source_s));
    	// Initialize it as a queue and configure queue parameters
    	_dispatch_queue_init(ds->_as_dq, DQF_LEGACY, 1,
    			DISPATCH_QUEUE_INACTIVE | DISPATCH_QUEUE_ROLE_INNER);
    	ds->dq_label = "source";
    	ds->do_ref_cnt++; // the reference the manager queue holds
    	ds->ds_refs = dr;
    	dr->du_owner_wref = _dispatch_ptr2wref(ds);
    
    	if (slowpath(!dq)) {
    		dq = _dispatch_get_root_queue(DISPATCH_QOS_DEFAULT, true);
    	} else {
    		_dispatch_retain((dispatch_queue_t _Nonnull)dq);
    	}
    	ds->do_targetq = dq;
    	if (dr->du_is_timer && (dr->du_fflags & DISPATCH_TIMER_INTERVAL)) {
    		_dispatch_source_set_interval(ds, handle);
    	}
    	_dispatch_object_debug(ds, "%s", __func__);
    	return ds;
    }
  • Set the event handler. The handler block is wrapped into a dispatch_continuation_t and stored so it can be executed later; the task is then pushed onto the queue.

    void
    dispatch_source_set_event_handler(dispatch_source_t ds,
    		dispatch_block_t handler)
    {
    	dispatch_continuation_t dc;
    	// Initialize a dispatch_continuation_t from the handler
    	dc = _dispatch_source_handler_alloc(ds, handler, DS_EVENT_HANDLER, true);
    	// After some bookkeeping, the task is pushed onto the queue
    	_dispatch_source_set_handler(ds, DS_EVENT_HANDLER, dc);
    }
  • Call the resume method to start the source. A newly created source is in the suspended state; resume detects this and wakes it up.

    void
    dispatch_resume(dispatch_object_t dou)
    {
    	DISPATCH_OBJECT_TFB(_dispatch_objc_resume, dou);
    	if (dx_vtable(dou._do)->do_suspend) {
    		dx_vtable(dou._do)->do_resume(dou._do, false);
    	}
    }
  • The final step, the core of the asynchrony, is waking the task up and executing it. The earlier queue types end up doing something similar; note that the return type is dispatch_queue_wakeup_target_t, essentially reusing the queue logic throughout. Through a series of checks, this method ensures every source runs on the correct queue: if the current queue does not match the task, the correct queue is returned and the task is redispatched so that it executes there.

    DISPATCH_ALWAYS_INLINE
    static inline dispatch_queue_wakeup_target_t
    _dispatch_source_invoke2(dispatch_object_t dou, dispatch_invoke_context_t dic,
    		dispatch_invoke_flags_t flags, uint64_t *owned)
    {
    	dispatch_source_t ds = dou._ds;
    	dispatch_queue_wakeup_target_t retq = DISPATCH_QUEUE_WAKEUP_NONE;
    	// Get the current queue
    	dispatch_queue_t dq = _dispatch_queue_get_current();
    	dispatch_source_refs_t dr = ds->ds_refs;
    	dispatch_queue_flags_t dqf;
    	...
    	// Timer event processing
    	if (dr->du_is_timer &&
    			os_atomic_load2o(ds, ds_timer_refs->dt_pending_config, relaxed)) {
    		dqf = _dispatch_queue_atomic_flags(ds->_as_dq);
    		if (!(dqf & (DSF_CANCELED | DQF_RELEASED))) {
    			// timer has to be configured on the kevent queue
    			if (dq != dkq) {
    				return dkq;
    			}
    			_dispatch_source_timer_configure(ds);
    		}
    	}
    
    	// Whether the source is installed yet
    	if (!ds->ds_is_installed) {
    		// The source needs to be installed on the kevent queue.
    		if (dq != dkq) {
    			return dkq;
    		}
    		_dispatch_source_install(ds, _dispatch_get_wlh(),
    				_dispatch_get_basepri());
    	}
    
    	// Whether it is suspended; given the earlier checks, this rarely happens
    	if (unlikely(DISPATCH_QUEUE_IS_SUSPENDED(ds))) {
    		// Source suspended by an item drained from the source queue.
    		return ds->do_targetq;
    	}
    
    	// Registration handling
    	if (_dispatch_source_get_registration_handler(dr)) {
    		// The source has been registered and the registration handler needs
    		// to be delivered on the target queue.
    		if (dq != ds->do_targetq) {
    			return ds->do_targetq;
    		}
    		// clears ds_registration_handler
    		_dispatch_source_registration_callout(ds, dq, flags);
    	}
    	...
    	if (!(dqf & (DSF_CANCELED | DQF_RELEASED)) &&
    			os_atomic_load2o(ds, ds_pending_data, relaxed)) {
    		// The source has pending data that must be delivered via a callback
    		// on the target queue; some sources then switch to the manager queue.
    		if (dq == ds->do_targetq) {
    			_dispatch_source_latch_and_call(ds, dq, flags);
    			dqf = _dispatch_queue_atomic_flags(ds->_as_dq);
    			prevent_starvation = dq->do_targetq ||
    					!(dq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT);
    			if (prevent_starvation &&
    					os_atomic_load2o(ds, ds_pending_data, relaxed)) {
    				retq = ds->do_targetq;
    			}
    		} else {
    			return ds->do_targetq;
    		}
    	}
    
    	if ((dqf & (DSF_CANCELED | DQF_RELEASED)) && !(dqf & DSF_DEFERRED_DELETE)) {
    		// A cancelled source is unregistered from the manager queue; after
    		// that, the cancellation handler is delivered to the target queue.
    		if (!(dqf & DSF_DELETED)) {
    			if (dr->du_is_timer && !(dqf & DSF_ARMED)) {
    				// timers can cheat if not armed because there's nothing left
    				// to do on the manager queue and unregistration can happen
    				// on the regular target queue
    			} else if (dq != dkq) {
    				return dkq;
    			}
    			_dispatch_source_refs_unregister(ds, 0);
    			dqf = _dispatch_queue_atomic_flags(ds->_as_dq);
    			if (unlikely(dqf & DSF_DEFERRED_DELETE)) {
    				if (!(dqf & DSF_ARMED)) {
    					goto unregister_event;
    				}
    				// we need to wait for the EV_DELETE
    				return retq ? retq : DISPATCH_QUEUE_WAKEUP_WAIT_FOR_EVENT;
    			}
    		}
    		if (dq != ds->do_targetq &&
    				(_dispatch_source_get_event_handler(dr) ||
    				_dispatch_source_get_cancel_handler(dr) ||
    				_dispatch_source_get_registration_handler(dr))) {
    			retq = ds->do_targetq;
    		} else {
    			_dispatch_source_cancel_callout(ds, dq, flags);
    			dqf = _dispatch_queue_atomic_flags(ds->_as_dq);
    		}
    		prevent_starvation = false;
    	}
    
    	if (_dispatch_unote_needs_rearm(dr) &&
    			!(dqf & (DSF_ARMED|DSF_DELETED|DSF_CANCELED|DQF_RELEASED))) {
    		// Rearm on the manager queue
    		if (dq != dkq) {
    			return dkq;
    		}
    		if (unlikely(dqf & DSF_DEFERRED_DELETE)) {
    			// We can just unregister without rearming
    			goto unregister_event;
    		}
    		if (unlikely(DISPATCH_QUEUE_IS_SUSPENDED(ds))) {
    			// If the source has been suspended, there is no need to rearm
    			// on the manager queue
    			return ds->do_targetq;
    		}
    		if (prevent_starvation && dr->du_wlh == DISPATCH_WLH_ANON) {
    			return ds->do_targetq;
    		}
    		if (unlikely(!_dispatch_source_refs_resume(ds))) {
    			goto unregister_event;
    		}
    		if (!prevent_starvation && _dispatch_wlh_should_poll_unote(dr)) {
    			_dispatch_event_loop_drain(KEVENT_FLAG_IMMEDIATE);
    		}
    	}
    	return retq;
    }

    There are some other methods I won't cover here; if you're interested, read the source, there is a lot of it.

5. Dispatch I/O

We can use Dispatch I/O to read some files quickly, for example:

- (void)readFile {
    NSString *filePath = @"/.../blue and white porcelain";
    dispatch_queue_t queue = dispatch_queue_create("com.bool.readfile", DISPATCH_QUEUE_SERIAL);
    dispatch_fd_t fd = open(filePath.UTF8String, O_RDONLY, 0);
    dispatch_io_t fileChannel = dispatch_io_create(DISPATCH_IO_STREAM, fd, queue, ^(int error) {
        close(fd);
    });
    
    NSMutableData *fileData = [NSMutableData new];
    dispatch_io_set_low_water(fileChannel, SIZE_MAX);
    dispatch_io_read(fileChannel, 0, SIZE_MAX, queue, ^(bool done, dispatch_data_t _Nullable data, int error) {
        if (error == 0 && dispatch_data_get_size(data) > 0) {
            [fileData appendData:(NSData *)data];
        }
        
        if (done) {
            NSString *str = [[NSString alloc] initWithData:fileData encoding:NSUTF8StringEncoding];
            NSLog(@"read file completed, string is:\n %@", str);
        }
    });
}

Output result:

ConcurrencyTest[41479:5357296] read file completed, string is:
 sky blue waits for the misty rain, and I am waiting for you; the moonlight is fished up, blurring the ending

If you're reading a large file, you can do a sliced read: split the file into multiple slices and read them concurrently on asynchronous threads, which is faster.
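A sketch of that idea, assuming a DISPATCH_IO_RANDOM channel so each read can specify its own offset, and a hypothetical slice count of 4 (requires <sys/stat.h> and <fcntl.h>; the method name is also hypothetical):

- (void)readFileInSlices:(NSString *)filePath {
    dispatch_queue_t queue = dispatch_queue_create("com.bool.readfile.slice", DISPATCH_QUEUE_CONCURRENT);
    dispatch_fd_t fd = open(filePath.UTF8String, O_RDONLY, 0);
    
    struct stat fileStat;
    fstat(fd, &fileStat);
    size_t fileSize = (size_t)fileStat.st_size;
    size_t sliceCount = 4; // hypothetical slice count
    size_t sliceSize = (fileSize + sliceCount - 1) / sliceCount;
    
    // A RANDOM channel honors the offset passed to each dispatch_io_read
    dispatch_io_t channel = dispatch_io_create(DISPATCH_IO_RANDOM, fd, queue, ^(int error) {
        close(fd);
    });
    
    dispatch_group_t group = dispatch_group_create();
    for (size_t i = 0; i < sliceCount; i++) {
        dispatch_group_enter(group);
        dispatch_io_read(channel, i * sliceSize, sliceSize, queue,
                         ^(bool done, dispatch_data_t _Nullable data, int error) {
            if (error == 0 && data != NULL) {
                // collect this slice's data into a thread-safe container,
                // remembering which slice it belongs to
            }
            if (done) {
                dispatch_group_leave(group);
            }
        });
    }
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        // all slices have been read; stitch them together in order
    });
}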

A quick look at the source shows that the scheduling logic is similar to the earlier task types; the actual read and write operations call down into low-level system interfaces, which I won't detail here. Dispatch I/O is mostly used to read a large file concurrently to speed up the read.

6. Other

Most of the things in the overview chart have been covered above, but there are a few that haven’t been covered, so here’s a brief description:

  • Dispatch_object. GCD objects are implemented with C functions: you cannot subclass a Dispatch class, nor create one with alloc. GCD provides dispatch_object interfaces that we can use to manage memory, cancel and suspend operations, attach context, and handle log-related work. dispatch_object memory must be managed manually; it does not follow garbage collection. (A short sketch follows this list.)

  • Dispatch_time. You can create custom times for GCD time objects, or use DISPATCH_TIME_NOW and DISPATCH_TIME_FOREVER.
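A minimal sketch of both points; the context pointer and finalizer function here are hypothetical:

// Hypothetical finalizer, called when the queue is destroyed
static void bl_queue_finalizer(void *context) {
    free(context);
}

- (void)dispatchObjectTest {
    dispatch_queue_t queue = dispatch_queue_create("com.bool.other", DISPATCH_QUEUE_SERIAL);
    // Attach a custom context and a finalizer to clean it up
    dispatch_set_context(queue, calloc(1, sizeof(int)));
    dispatch_set_finalizer_f(queue, bl_queue_finalizer);
    
    // A custom dispatch_time 1.5 seconds from now
    dispatch_time_t when = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(1.5 * NSEC_PER_SEC));
    dispatch_after(when, queue, ^{
        int *counter = (int *)dispatch_get_context(queue);
        (*counter)++; // use the attached context
    });
}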

That covers the GCD-related material. The source version used this time is the latest, libdispatch-912.30.4.tar.gz. It differs a lot from earlier versions: the amount of code has grown and the new code is messier, but the basic principles are about the same. For a while I thought an older drop was the latest version...

Operations

Operations are also a set of APIs commonly used in concurrent programming. According to the official documentation, the structure of Operations is as follows:

NSBlockOperation and NSInvocationOperation are subclasses of NSOperation. Compared with GCD, the principles behind Operations are a little easier to understand. Usage and principles follow.

1. NSOperation

Basic usage

Each operation can be considered a task. NSOperation is an abstract class that needs to be subclassed before being used. Fortunately, Apple has implemented two subclasses for us: NSInvocationOperation and NSBlockOperation. We can also define an operation ourselves. Here are the basic uses:

  • Create an NSInvocationOperation object and execute it in the current thread.

    NSInvocationOperation *invocationOperation = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(log) object:nil];
    [invocationOperation start];
  • Create an NSBlockOperation object and execute it (each block may run on a different thread, not necessarily the current one, and not necessarily the same thread as the others).

    NSBlockOperation *blockOpeartion = [NSBlockOperation blockOperationWithBlock:^{
        NSLog(@"block operation");
    }];
    
    // Multiple blocks can be added
    [blockOpeartion addExecutionBlock:^{
        NSLog(@"other block opeartion");
    }];
    [blockOpeartion start];
  • Define your own operation. When we don't need to manage state ourselves, we only implement the main() method; operation state is discussed later.

    @interface BLOperation : NSOperation
    @end
    
    @implementation BLOperation
    - (void)main {
      NSLog(@"BLOperation main method");
    }
    @end
    
    - (void)viewDidLoad {
    	[super viewDidLoad];
     	BLOperation *blOperation = [BLOperation new];
    	[blOperation start];
    }
  • Set dependencies between each operation.

    NSBlockOperation *blockOpeartion1 = [NSBlockOperation blockOperationWithBlock:^{
        NSLog(@"block operation1");
    }];
    NSBlockOperation *blockOpeartion2 = [NSBlockOperation blockOperationWithBlock:^{
        NSLog(@"block operation2");
    }];
    // 2 needs to be executed after 1.
    [blockOpeartion2 addDependency:blockOpeartion1];
  • Queue-related uses, more on that later.

The basic principle

NSOperation has a powerful state machine built in. The life cycle of an operation from initialization to completion corresponds to various states. Here is an image from WWDC 2015 Advanced NSOperations:

An operation starts in the Pending state, meaning it is about to become Ready. Once Ready, it can be executed, entering the Executing state. When execution completes, it moves to the Finished state. Along the way it can be Cancelled from any state except Finished.

NSOperation is not open source, but Swift's Foundation is; there it's called Operation, and you can find its source code in swift-corelibs-foundation. Here is a copy:

open class Operation : NSObject {
    let lock = NSLock()
    internal weak var _queue: OperationQueue?

    // All states are false by default
    internal var _cancelled = false
    internal var _executing = false
    internal var _finished = false
    internal var _ready = false

    // Use a set to hold the operations this one depends on
    internal var _dependencies = Set<Operation>()

    // Dispatch groups that manage the operation and its dependencies
#if DEPLOYMENT_ENABLE_LIBDISPATCH
    internal var _group = DispatchGroup()
    internal var _depGroup = DispatchGroup()
    internal var _groups = [DispatchGroup]()
#endif

    public override init() {
        super.init()
#if DEPLOYMENT_ENABLE_LIBDISPATCH
        _group.enter()
#endif
    }

    internal func _leaveGroups() {
        // assumes lock is taken
#if DEPLOYMENT_ENABLE_LIBDISPATCH
        _groups.forEach() { $0.leave() }
        _groups.removeAll()
        _group.leave()
#endif
    }

    // Default implementation of start: calls main, thread-safe (same below),
    // setting _executing before and after execution.
    open func start() {
        if !isCancelled {
            lock.lock()
            _executing = true
            lock.unlock()
            main()
            lock.lock()
            _executing = false
            lock.unlock()
        }
        finish()
    }

    // Default implementation of finish: marks the _finished state.
    internal func finish() {
        lock.lock()
        _finished = true
        _leaveGroups()
        lock.unlock()
        if let queue = _queue {
            queue._operationFinished(self)
        }
        ...
    }

    // main is empty by default; subclasses must override it.
    open func main() {}

    // cancel only marks the state; the operation is only considered finished
    // after main observes isCancelled and returns.
    open func cancel() {
        lock.lock()
        _cancelled = true
        lock.unlock()
    }

    /** Getters for the state flags, omitted */

    // Whether the task is asynchronous; false by default.
    // This method will never be implemented in OC.
    open var isAsynchronous: Bool {
        return false
    }

    // Adding a dependency puts the operation into the set
    open func addDependency(_ op: Operation) {
        lock.lock()
        _dependencies.insert(op)
        op.lock.lock()
#if DEPLOYMENT_ENABLE_LIBDISPATCH
        _depGroup.enter()
        op._groups.append(_depGroup)
#endif
        op.lock.unlock()
        lock.unlock()
    }

    ...

    // The default queue priority is normal
    open var queuePriority: QueuePriority = .normal

    public var completionBlock: (() -> Void)?
    open func waitUntilFinished() {
#if DEPLOYMENT_ENABLE_LIBDISPATCH
        _group.wait()
#endif
    }

    // Thread priority
    open var threadPriority: Double = 0.5

    /// - Note: Quality of service is not directly supported here since
    ///   there are no QoS class promotions available outside of Darwin targets.
    open var qualityOfService: QualityOfService = .default

    open var name: String?

    internal func _waitUntilReady() {
#if DEPLOYMENT_ENABLE_LIBDISPATCH
        _depGroup.wait()
#endif
        _ready = true
    }
}

The code is very simple; the specific flow can be read directly from the comments, so I won't retell it. NSOperation is thread-safe; when subclassing NSOperation, overridden methods should take care of thread safety too.

2. NSOperationQueue

A lot of NSOperation's fancier capabilities come from pairing it with NSOperationQueue; in practice we use the two in combination. A detailed analysis follows.

Basic usage
  • Operations put in a queue execute automatically; there is no need to call the start method manually.
  • A queue can set a maximum concurrency count. When it is 1, the queue behaves serially. The default is unlimited.
  • A queue's suspended property pauses or resumes operations that have not started yet.
  • Calling -[NSOperationQueue cancelAllOperations] cancels the tasks in a queue.
  • The mainQueue class method returns the main queue; currentQueue returns the current queue.
  • For more information, see the official documentation.

Use examples:

- (void)testOperationQueue {
    NSOperationQueue *operationQueue = [NSOperationQueue new];
    // Set the maximum number of concurrent requests to 3
    [operationQueue setMaxConcurrentOperationCount:3];
    
    NSInvocationOperation *invocationOpeartion = [[NSInvocationOperation alloc] initWithTarget:self selector:@selector(log) object:nil];
    [operationQueue addOperation:invocationOpeartion];
    
    [operationQueue addOperationWithBlock:^{
        NSLog(@"block operation");
        // Return to the main thread to execute the task
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            NSLog(@"execute in main thread");
        }];
    }];
    
    // Pause tasks that have not yet started
    operationQueue.suspended = YES;
    
    // Cancel all tasks
    [operationQueue cancelAllOperations];
}

There is one problem in particular that needs to be explained:

NSOperationQueue is unlike queues in GCD. A GCD queue follows FIFO: the first task enqueued executes first. NSOperationQueue instead executes whichever operation reaches the Ready state first; if several reach Ready at the same time, they execute according to priority.

For example, in the following set of tasks, 4 reaches the Ready state first, so 4 executes first; they do not run in the order 1, 2, 3...
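A small sketch of this behavior; with a maximum concurrency of 1 and both operations Ready immediately, priority decides the order:

NSOperationQueue *queue = [NSOperationQueue new];
queue.maxConcurrentOperationCount = 1;

NSBlockOperation *low = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"low priority");
}];
low.queuePriority = NSOperationQueuePriorityLow;

NSBlockOperation *high = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"high priority");
}];
high.queuePriority = NSOperationQueuePriorityHigh;

// low is added first, but "high priority" typically logs first
[queue addOperations:@[low, high] waitUntilFinished:NO];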

The basic principle

We again find the relevant source code in Swift and analyze it:

// The default maximum concurrency count is Int.max
public extension OperationQueue {
    public static let defaultMaxConcurrentOperationCount: Int = Int.max
}

// Store the operations of each priority in separate arrays; methods
// insert, remove, and dequeue operations.
internal struct _OperationList {
    var veryLow = [Operation]()
    var low = [Operation]()
    var normal = [Operation]()
    var high = [Operation]()
    var veryHigh = [Operation]()
    var all = [Operation]()

    mutating func insert(_ operation: Operation) { ... }
    mutating func remove(_ operation: Operation) { ... }
    mutating func dequeue() -> Operation? { ... }

    var count: Int {
        return all.count
    }
    func map<T>(_ transform: (Operation) throws -> T) rethrows -> [T] {
        return try all.map(transform)
    }
}

open class OperationQueue: NSObject {
    ...
    // A semaphore controls the number of concurrent operations
    var __concurrencyGate: DispatchSemaphore?
    var __underlyingQueue: DispatchQueue? {
        didSet {
            let key = OperationQueue.OperationQueueKey
            oldValue?.setSpecific(key: key, value: nil)
            __underlyingQueue?.setSpecific(key: key, value: Unmanaged.passUnretained(self))
        }
    }
    ...
    internal var _underlyingQueue: DispatchQueue {
        lock.lock()
        if let queue = __underlyingQueue {
            lock.unlock()
            return queue
        } else {
            ...
            // The semaphore value is the maximum concurrency count. Each task
            // waits on the semaphore before it runs (decrementing it) and
            // signals afterwards (incrementing it); when the semaphore is 0,
            // tasks wait until it is greater than 0.
            if maxConcurrentOperationCount == 1 {
                attr = []
                __concurrencyGate = DispatchSemaphore(value: 1)
            } else {
                attr = .concurrent
                if maxConcurrentOperationCount != OperationQueue.defaultMaxConcurrentOperationCount {
                    __concurrencyGate = DispatchSemaphore(value: maxConcurrentOperationCount)
                }
            }
            let queue = DispatchQueue(label: effectiveName, attributes: attr)
            if _suspended {
                queue.suspend()
            }
            __underlyingQueue = queue
            lock.unlock()
            return queue
        }
    }
#endif

    ...

    // Dequeue: each task is taken out of the list for execution
    internal func _dequeueOperation() -> Operation? {
        lock.lock()
        let op = _operations.dequeue()
        lock.unlock()
        return op
    }

    open func addOperation(_ op: Operation) {
        addOperations([op], waitUntilFinished: false)
    }

    // Main execution method: first wait until the operation is ready,
    // then start it if it has not been cancelled.
    internal func _runOperation() {
        if let op = _dequeueOperation() {
            if !op.isCancelled {
                op._waitUntilReady()
                if !op.isCancelled {
                    op.start()
                }
            }
        }
    }

    // Add tasks to the queue. If no priority is specified, execution is
    // faster; otherwise operations are bucketed by priority and executed
    open func addOperations(_ ops: [Operation], waitUntilFinished wait: Bool) {
#if DEPLOYMENT_ENABLE_LIBDISPATCH
        var waitGroup: DispatchGroup?
        if wait {
            waitGroup = DispatchGroup()
        }
#endif
        lock.lock()
        // Add each operation to the list, stored by priority
        ops.forEach { (operation: Operation) -> Void in
            operation._queue = self
            _operations.insert(operation)
        }
        lock.unlock()

        // A DispatchGroup controls enter and leave
        ops.forEach { (operation: Operation) -> Void in
#if DEPLOYMENT_ENABLE_LIBDISPATCH
            if let group = waitGroup {
                group.enter()
            }

            // Use the semaphore to control concurrency
            let block = DispatchWorkItem(flags: .enforceQoS) { () -> Void in
                if let sema = self._concurrencyGate {
                    sema.wait()
                    self._runOperation()
                    sema.signal()
                } else {
                    self._runOperation()
                }
                if let group = waitGroup {
                    group.leave()
                }
            }
            _underlyingQueue.async(group: queueGroup, execute: block)
#endif
        }
#if DEPLOYMENT_ENABLE_LIBDISPATCH
        if let group = waitGroup {
            group.wait()
        }
#endif
    }

    internal func _operationFinished(_ operation: Operation) { ... }

    open func addOperation(_ block: @escaping () -> Swift.Void) { ... }

    // The return value may not be accurate
    open var operations: [Operation] { ... }
    // The return value may not be accurate
    open var operationCount: Int { ... }

    open var maxConcurrentOperationCount: Int = OperationQueue.defaultMaxConcurrentOperationCount

    // Getter & setter for the suspended property; not suspended by default
    internal var _suspended = false
    open var isSuspended: Bool { ... }
    ...
    // The priority with which operations obtain system resources
    open var qualityOfService: QualityOfService = .default

    // Calls the cancel method of each operation in turn
    open func cancelAllOperations() { ... }

    open func waitUntilAllOperationsAreFinished() {
#if DEPLOYMENT_ENABLE_LIBDISPATCH
        queueGroup.wait()
#endif
    }

    static let OperationQueueKey = DispatchSpecificKey<Unmanaged<OperationQueue>>()

    // Get the current queue via GCD's getSpecific method
    open class var current: OperationQueue? {
#if DEPLOYMENT_ENABLE_LIBDISPATCH
        guard let specific = DispatchQueue.getSpecific(key: OperationQueue.OperationQueueKey) else {
            if _CFIsMainThread() {
                return OperationQueue.main
            } else {
                return nil
            }
        }
        return specific.takeUnretainedValue()
#else
        return nil
#endif
    }

    // Define the main queue with a maximum concurrency of 1; this value is
    // returned when the main queue is requested
    private static let _main = OperationQueue(_queue: .main, maxConcurrentOperations: 1)
    open class var main: OperationQueue { ... }
}

The code is long but simple enough to follow through the comments. To recap:

  • When an operation is added to the queue, it is bucketed into the list by priority and executed according to priority. If no priority is set, execution is faster because no bucketing is needed.
  • On dequeue, each operation is taken out in turn; one that has reached the Ready state and has not been cancelled is executed.
  • The concurrencyGate semaphore controls the concurrency count: each task waits on the semaphore before running (decrementing it) and signals afterwards (incrementing it); when the semaphore reaches 0, further tasks wait until it is greater than 0.
  • Locks are taken in essentially every method to keep the queue thread-safe.
Custom NSOperation

To customize an ordinary NSOperation, we only need to override the main method. But since that approach doesn't deal with concurrency, thread termination, or the KVO machinery, it is not recommended for concurrent tasks. Let's see how to customize a concurrent NSOperation.

Some methods that must be implemented:

  • start: called on the thread you want the operation to run from; do not call the super method.
  • main: invoked from start; the body of the task.
  • isExecuting / isFinished: report the execution state, and must fire KVO notifications when they change.
  • isConcurrent: deprecated; replaced by isAsynchronous.
  • isAsynchronous: must return YES for a concurrent operation.
@interface BLOperation ()
@property (nonatomic, assign) BOOL executing;
@property (nonatomic, assign) BOOL finished;
@end

@implementation BLOperation
@synthesize executing;
@synthesize finished;

- (instancetype)init {
    self = [super init];
    if (self) {
        executing = NO;
        finished = NO;
    }
    return self;
}

- (void)start {
    if ([self isCancelled]) {
        [self willChangeValueForKey:@"isFinished"];
        finished = YES;
        [self didChangeValueForKey:@"isFinished"];
        return;
    }
    
    [self willChangeValueForKey:@"isExecuting"];
    [NSThread detachNewThreadSelector:@selector(main) toTarget:self withObject:nil];
    executing = YES;
    [self didChangeValueForKey:@"isExecuting"];
}

- (void)main {
    NSLog(@"main begin");
    @try {
        @autoreleasepool {
            NSLog(@"custom operation");
            NSLog(@"currentThread = %@", [NSThread currentThread]);
            NSLog(@"mainThread = %@", [NSThread mainThread]);
            
            [self willChangeValueForKey:@"isFinished"];
            [self willChangeValueForKey:@"isExecuting"];
            executing = NO;
            finished = YES;
            [self didChangeValueForKey:@"isExecuting"];
            [self didChangeValueForKey:@"isFinished"];
        }
    }
    @catch (NSException *exception) {
        NSLog(@"exception is %@", exception);
    }
    NSLog(@"main end");
}

- (BOOL)isExecuting {
    return executing;
}

- (BOOL)isFinished {
    return finished;
}

- (BOOL)isAsynchronous {
    return YES;
}
@end
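A quick usage note: an operation like this is meant to go into a queue, which calls start for us:

NSOperationQueue *queue = [NSOperationQueue new];
BLOperation *operation = [BLOperation new];
// The queue calls start, which detaches the worker thread defined above
[queue addOperation:operation];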

Finally, let's look at how NSBlockOperation is implemented in the Swift source:

open class BlockOperation: Operation {
    typealias ExecutionBlock = () -> Void
    internal var _block: () -> Void
    internal var _executionBlocks = [ExecutionBlock]()
    
    public init(block: @escaping () -> Void) {
        _block = block
    }
    
    override open func main() {
        lock.lock()
        let block = _block
        let executionBlocks = _executionBlocks
        lock.unlock()
        block()
        executionBlocks.forEach { $0() }
    }
    
    open func addExecutionBlock(_ block: @escaping () -> Void) {
        lock.lock()
        _executionBlocks.append(block)
        lock.unlock()
    }
    
    open var executionBlocks: [() -> Void] {
        lock.lock()
        let blocks = _executionBlocks
        lock.unlock()
        return blocks
    }
}

So that’s the end of the NSOperation stuff.

Some problems in development

In my opinion, this part matters even more than API usage and the underlying principles; after all, we have to build things with them. Concurrent programming has many pitfalls, and here are a few.

1. NSNotification and multithreading

We all know that an NSNotification selector is executed on the thread where the notification was posted. If we post from a background thread but expect the selector to run on the main thread, we need to be careful: in that case, the selector has to dispatch to the main thread itself. Alternatively, use addObserverForName:object:queue:usingBlock: to specify the queue the block executes on.

@implementation BLPostNotification

- (void)postNotification {
    dispatch_queue_t queue = dispatch_queue_create("com.bool.post.notification", DISPATCH_QUEUE_SERIAL);
    dispatch_async(queue, ^{
        // Post the notification from a non-main thread
        // (the notification name is best defined as a constant)
        [[NSNotificationCenter defaultCenter] postNotificationName:@"downloadImage" object:nil];
    });
}
@end

@implementation ImageViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(showImage) name:@"downloadImage" object:nil];
}

- (void)showImage {
    // Need to dispatch to the main thread to update the UI
    dispatch_async(dispatch_get_main_queue(), ^{
        // update UI
    });
}
@end
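For completeness, a sketch of the block-based observer API mentioned above, which lets you choose the queue directly:

[[NSNotificationCenter defaultCenter] addObserverForName:@"downloadImage"
                                                  object:nil
                                                   queue:[NSOperationQueue mainQueue]
                                              usingBlock:^(NSNotification * _Nonnull note) {
    // Runs on the main queue no matter which thread posted
}];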

2. NSTimer and multithreading problems

With NSTimer, a timer must be invalidated on the same thread that created it; otherwise unexpected results may occur. From the official documentation:

However, for a repeating timer, you must invalidate the timer object yourself by calling its invalidate method. Calling this method requests the removal of the timer from the current run loop; as a result, you should always call the invalidate method from the same thread on which the timer was installed.

@interface BLTimerTest ()
@property (nonatomic, strong) dispatch_queue_t queue;
@property (nonatomic, strong) NSTimer *timer;
@end

@implementation BLTimerTest
- (instancetype)init {
    self = [super init];
    if (self) {
        _queue = dispatch_queue_create("com.bool.timer.test", DISPATCH_QUEUE_SERIAL);
    }
    return self;
}

- (void)installTimer {
    dispatch_async(self.queue, ^{
        // Note: the thread this runs on must have a running run loop,
        // or the timer will never fire
        self.timer = [NSTimer scheduledTimerWithTimeInterval:3.0f repeats:YES block:^(NSTimer * _Nonnull timer) {
            NSLog(@"test timer");
        }];
    });
}

- (void)clearTimer {
    dispatch_async(self.queue, ^{
        if ([self.timer isValid]) {
            [self.timer invalidate];
            self.timer = nil;
        }
    });
}
@end

3. Dispatch Once deadlock problem

In development, we often use dispatch_once, but recursive calls cause deadlocks. For example:

- (void)dispatchOnceTest {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        [self dispatchOnceTest];
    });
}

The reason for the deadlock was explained in the Dispatch Once section, so I won't go into it again. Be careful not to cause recursive calls.

4. Dispatch Group problem

When using dispatch_group, dispatch_group_enter(taskGroup) and dispatch_group_leave(taskGroup) must appear in matched pairs; otherwise a crash occurs. Most of the time we get this right, but sometimes we don't, for example in multi-level for loops:

- (void)testDispatchGroup {
    NSString *path = @"";
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSArray *folderList = [fileManager contentsOfDirectoryAtPath:path error:nil];
    dispatch_group_t taskGroup = dispatch_group_create();
    
    for (NSString *folderName in folderList) {
        dispatch_group_enter(taskGroup);
        NSString *folderPath = [@"path" stringByAppendingPathComponent:folderName];
        NSArray *fileList = [fileManager contentsOfDirectoryAtPath:folderPath error:nil];
        for (NSString *fileName in fileList) {
            dispatch_async(_queue, ^{
                // Asynchronous task
                dispatch_group_leave(taskGroup);
            });
        }
    }
}

Here dispatch_group_enter(taskGroup) is called once per folder, while dispatch_group_leave(taskGroup) is called once per file, so enter and leave are not paired. When the nesting gets deep, it's easy to overlook this.
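One possible fix is to move the enter into the inner loop so every leave has a matching enter:

for (NSString *fileName in fileList) {
    dispatch_group_enter(taskGroup);
    dispatch_async(_queue, ^{
        // Asynchronous task
        dispatch_group_leave(taskGroup);
    });
}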

Conclusion

So much for iOS concurrent programming. If I come across best practices, I will add them here. Also, because the article is quite long, there may be mistakes; corrections are welcome and I will fix them.

References

  1. In-depth understanding of locks in iOS development
  2. There’s more to @synchronized than you ever wanted to know
  3. Deep understanding of GCD
  4. GCD source analysis 2 — Dispatch_once article
  5. GCD source code analysis 6 — Dispatch_source chapter
  6. Dispatch
  7. Task Management – Operation
  8. swift-corelibs-foundation
  9. Advanced NSOperations