This series is based on reading the objc4-781, libdispatch-1173.40.5, CF-1151.16, libmalloc-283.100.6, and libclosure-74 sources for iOS, so there is at least a basic understanding of most of the underlying principles; for the algorithm part the plan is simply to work through "剑指 Offer" (Coding Interviews) twice (being able to finish the problems from memory in an IDE; writing them out by hand may still need some practice). Since interview questions are unavoidable, the questions below were collected from interview write-ups around the web and from questions I was asked in my own interviews, and I try to answer them from my own understanding. If anything is wrong, corrections are very welcome.

11. Understanding processes and threads, parallel and concurrent, synchronous and asynchronous.

A process is the running activity of a program on a data set in a computer. It is the basic unit of resource allocation and scheduling in the system. Each process is independent and runs in its own dedicated and protected memory space; processes are the foundation of the operating system's structure.

A process is an instance of a running program. When a program enters the memory to run, it becomes a process. For example, in iOS, the startup of an App is to start a process. A program is a description of instructions, data, and their organization.

In modern thread-oriented computer architectures, processes are containers for threads.

A process is the running activity of a program with independent functionality over a set of data. It can request and own system resources; it is a dynamic concept, an active entity. It is not just the program's code, but also its current activity, represented by the value of the program counter and the contents of the processor's registers.

The concept of a process has two main points. First, a process is an entity. Each process has its own address space, which generally includes a text region, a data region, and a stack region. The text region stores the code executed by the processor; the data region stores variables and dynamically allocated memory used during process execution; the stack region stores the instructions and local variables of active procedure calls. Second, a process is a "program in execution." A program is an inanimate entity; only when the processor gives it life (the operating system executes it) does it become an active entity, which we call a process.

Process switching: for a process to take over the processor, the intermediate data that was saved in the process's private stack when it was last suspended must essentially be loaded back into the processor's registers, and the process's breakpoint must be loaded into the processor's program counter (PC) register; the process then starts being run by the processor, that is, the process now owns the processor. The intermediate data kept in the processor's registers while a process runs is called the process's context, so process switching is essentially a context switch between the suspended process and the process about to run. When a process does not occupy the processor, its context is stored in the process's private stack.

A thread is the smallest unit that the operating system can schedule for execution. It is contained within a process and is the actual unit of execution within the process. A thread is a single sequential flow of control in a process; multiple threads can run concurrently in one process, each performing a different task.

After the program is run, the first thing the system needs to do is to set up a default thread for the program process (in iOS, the App will start a main thread by default), and then the program can add or delete related threads according to the need.

A process can usually contain multiple threads, and they can make use of the resources owned by the process. In operating systems that introduce threads, the process is usually treated as the basic unit of resource allocation and the thread as the basic unit of independent execution and scheduling. Because a thread is smaller than a process and essentially owns no system resources of its own, the cost of scheduling it is much smaller, which allows the degree of concurrent execution among multiple programs in the system to be improved more efficiently.

Multiple threads in the same process share all system resources in that process, such as virtual address space, file descriptors, signal processing, and so on. But multiple threads in the same process have their own call stack, their own register context, and their own thread-local storage.

The benefit of multithreaded programming on a multi-core or multi-CPU machine, or on a CPU with hyper-threading enabled, is obvious: increased program throughput. Even on a single-CPU, single-core computer, multithreading can separate the parts of a process responsible for I/O, human-computer interaction, and other frequently blocked work from the compute-intensive parts, with a dedicated worker thread performing the intensive computation, thereby improving the program's efficiency.

The process is also the unit that preempts the processor for scheduling, and it has a complete virtual address space. When processes are scheduled, different processes have different virtual address spaces, while different threads within the same process share the same address space.

Threads, in contrast, have little to do with resource allocation; they belong to a process and share its resources with the other threads within that process.

Typically a process can contain several threads, which can make use of the resources the process owns. In operating systems that introduce threads, the process is usually regarded as the basic unit of resource allocation and the thread as the basic unit of independent execution and scheduling. Because a thread is smaller than a process and essentially owns no system resources, its scheduling cost is much smaller, which more efficiently improves the degree of concurrent execution among multiple programs in the system and thus significantly improves system resource utilization and throughput.

The differences between threads and processes can be summarized in the following four points:

  1. Address space and other resources (such as open files): processes are independent of each other, while these are shared by the threads of the same process. The threads of one process are not visible to other processes.
  2. Communication: processes communicate through inter-process communication (IPC); threads can communicate by directly reading and writing the process's data segment (such as global variables), but this requires synchronization and mutual-exclusion mechanisms to ensure data consistency.
  3. Scheduling and switching: Thread context switching is much faster than process context switching.
  4. In a multithreaded OS, a process is not an executable entity.

A thread is an entity within a process. A process can have multiple threads, and a thread must have a parent process. Threads do not own system resources, only the data structures necessary for running; they share all resources owned by the parent process with the other threads. Threads can create and destroy other threads, enabling concurrent execution within a program. In general, a thread has three basic states: ready, blocked, and running.

Thread pooling is a form of multithreaded processing in which tasks are added to a queue and started automatically once a thread is available. Thread pool threads are background threads; each uses the default stack size and runs at the default priority.

Thread pool: a pattern of thread usage. Too many threads bring scheduling overhead, which hurts cache locality and overall performance. A thread pool maintains multiple threads waiting for a supervisor to assign tasks that can be executed concurrently. This avoids the cost of creating and destroying threads for short-lived tasks. Thread pools ensure full utilization of the cores while preventing over-scheduling. The number of available threads should depend on the number of available concurrent processors, processor cores, memory, network sockets, and so on; for example, the thread count is often set to the number of CPUs + 2, because too many threads cause extra thread-switching overhead.

A common way to schedule tasks to execute threads is to use synchronous queues, called task queues. Threads in the pool wait for tasks in the queue and put the completed tasks into the completion queue.
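As a loose illustration of the pattern on iOS (not the internal implementation of any system thread pool), an NSOperationQueue with a capped maxConcurrentOperationCount behaves like a small pool of workers pulling tasks from a queue; the queue name and worker count below are hypothetical.

```objc
// A minimal thread-pool-style sketch using NSOperationQueue (assumption: roughly
// "CPU count + 2" workers, as discussed above; tune for the real workload).
NSOperationQueue *pool = [[NSOperationQueue alloc] init];
pool.name = @"com.example.workerPool";          // hypothetical label
pool.maxConcurrentOperationCount = NSProcessInfo.processInfo.activeProcessorCount + 2;

for (int i = 0; i < 10; i++) {
    [pool addOperationWithBlock:^{
        // Short task pulled from the queue and run on one of the pooled worker threads.
        NSLog(@"task %d on %@", i, NSThread.currentThread);
    }];
}
[pool waitUntilAllOperationsAreFinished];       // plays a similar role to a completion queue check
```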

Concurrency: two or more programs executing within the same time period, with temporal overlap (simultaneous at the macroscopic level, sequential at the microscopic level). In an operating system, concurrency means that within a period of time several programs are all somewhere between having started and having finished, all running on the same processor, but at any instant only one of them is actually running on the processor.

Concurrency: when multiple threads are running but the system has only one CPU, it cannot truly run more than one thread at the same time. It can only divide CPU time into several time slices and assign them to the threads; a thread's code runs during its time slice while the other threads are suspended. This approach is called concurrency (Concurrent).

Parallelism: when the system has more than one CPU, threads do not have to take turns on a single CPU. While one CPU executes one thread, another CPU can execute another thread; the two threads do not compete for the same CPU resources and can truly execute simultaneously. This is called parallelism (Parallel).

Difference: concurrency and parallelism are similar but distinct concepts. Parallelism means two or more events occur at the same instant; concurrency means two or more events occur within the same time interval. In a multi-programming environment, concurrency means that over a period of time multiple programs are, at the macroscopic level, running at the same time, but on a single-processor system only one program can execute at any instant, so these programs can only take turns executing. If the computer system has multiple processors, the programs that can execute concurrently can be distributed across them and execute in parallel, each processor handling one of the concurrently executable programs, so that multiple programs truly execute simultaneously.

Task: each execution of a piece of code, for example downloading an image or making a network request.

Queues: Queues are used to organize tasks, and a queue contains multiple tasks.

A queue is a description of tasks: it can contain multiple tasks and is an application-level concept. Threads are the scheduling unit at the system level, a lower-level concept. Multiple tasks in a queue (a concurrent queue) may be distributed across multiple threads for execution.

In iOS, the main thread is a thread, and the main queue refers to the organization of tasks on the main thread.

Tasks in the main queue are executed only on the main thread, but tasks executing on the main thread do not necessarily come from the main queue; they may come from another queue dispatched synchronously. A synchronous operation does not open a new thread, so when you dispatch synchronously to a custom serial or concurrent queue from the main thread, the work is still executed on the main thread.

Thread synchronization: When one thread is working on a memory address, no other thread can work on the memory address until the thread completes the operation, and the other threads are waiting.

Another meaning of asynchrony is asynchronous processing in multithreaded computing. In contrast to synchronous processing, asynchronous processing does not block the current thread to wait for the work to complete; it allows subsequent operations to proceed, and when another thread finishes the work it notifies the caller via a callback.

Synchronous (sync): tasks can only be executed one after another on the current thread; no new thread is started. (Blocks the current thread and waits for the task to complete.)

Asynchronous (async): tasks may be performed on a new thread; async has the ability to start new threads. (Does not block the current thread and does not wait for the task to complete.)
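A minimal sketch of the difference, assuming a custom serial queue (the label is hypothetical): dispatch_sync blocks the caller until the block finishes, while dispatch_async returns immediately.

```objc
dispatch_queue_t queue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

dispatch_async(queue, ^{
    NSLog(@"async task on %@", NSThread.currentThread);   // caller does not wait for this
});
NSLog(@"printed without waiting for the async task");

dispatch_sync(queue, ^{
    NSLog(@"sync task on %@", NSThread.currentThread);    // caller is blocked until this finishes
});
NSLog(@"printed only after the sync task completed");
```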

Please refer to 🔗🔗 :

  • iOS multithreading knowledge architecture construction (I): Basic concepts

12. What are thread locks in iOS?

Locking is a common synchronization tool. A code segment can only be accessed by a limited number of threads at a time. For example, if thread A takes a simple mutex before entering the protected code, another thread B cannot enter the protected code; thread B can access it only after thread A has finished executing the protected code and released the lock.

Locks used in iOS development include OSSpinLock, os_unfair_lock, pthread_mutex_t, NSLock, NSRecursiveLock, NSCondition, NSConditionLock, @synchronized, dispatch_semaphore, and pthread_rwlock_t.

OSSpinLock is a spin lock, and it also has only three operations: lock, unlock, and try lock. It differs from NSLock in that when an NSLock request fails, the thread polls briefly but then enters a waiting state until it is woken up, whereas OSSpinLock keeps polling the whole time, consuming a lot of CPU while waiting, so it is not suitable for longer tasks.

OSSpinLock has a thread-safety problem: it can cause priority inversion, and we should no longer use it under any circumstances. We can instead use os_unfair_lock, Apple's replacement released with iOS 10.0: 'OSSpinLock' is deprecated: first deprecated in iOS 10.0 - Use os_unfair_lock() from &lt;os/lock.h&gt; instead.

OSSpinLock lock = OS_SPINLOCK_INIT; initializes the lock. OSSpinLockLock(&lock); acquires the lock: the lock operation spins, so if locking fails it keeps waiting (blocking the current thread) until the lock is acquired, although various strategies can stop the spinning once the lock succeeds. OSSpinLockUnlock(&lock); releases the lock. OSSpinLockTry(&lock); returns a BOOL indicating whether locking succeeded, and the thread does not block even if locking fails: it returns false if the lock is already held by another thread, true otherwise.
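A minimal sketch of the (deprecated) OSSpinLock API described above; the `count` property is hypothetical.

```objc
#import <libkern/OSAtomic.h>   // OSSpinLock lives here; deprecated since iOS 10

static OSSpinLock spinLock = OS_SPINLOCK_INIT;  // initialize

- (void)increment {
    OSSpinLockLock(&spinLock);                  // busy-waits (spins) until the lock is acquired
    self.count += 1;                            // critical section
    OSSpinLockUnlock(&spinLock);                // release
}

- (void)incrementIfFree {
    if (OSSpinLockTry(&spinLock)) {             // returns true only if the lock was free; never blocks
        self.count += 1;
        OSSpinLockUnlock(&spinLock);
    }
}
```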

The spin lock OSSpinLock is no longer a safe lock: threads waiting for the lock are in a busy-wait state and occupy CPU resources the whole time. (It is similar to a while(1) loop that keeps querying the lock's status. This should be distinguished from the run loop mechanism, which also blocks, but a run loop blocks more like sleep and does not consume CPU.) The busy-wait mechanism makes a spin lock potentially more efficient than other locks, since there is no wake-up-and-sleep overhead, but spin locks are now deprecated because they can cause priority inversion.

For example, suppose thread A has a higher priority than thread B and the intent is that A's task executes first. With OSSpinLock, if thread B accesses the shared resource first and acquires the lock, and thread A then tries to access the shared resource, A spins in the busy-wait state. Because A has high priority it keeps occupying CPU time and never yields its time slice, so B cannot get CPU time to finish its work and release the lock, and neither task completes.

In newer iOS versions, the system maintains five different thread priorities / QoS classes: background, utility, default, user-initiated, and user-interactive. High-priority threads always execute before low-priority threads, and a thread is not disturbed by threads of lower priority than itself. This thread scheduling algorithm breaks spin locks by creating potential priority inversion problems.

Specifically, if a low-priority thread acquires the lock and is accessing the shared resource while a high-priority thread tries to acquire the same lock, the high-priority thread spins and consumes a lot of CPU. The low-priority thread cannot compete with it for CPU time, so its task drags on and it cannot release the lock. This isn't just a theoretical problem; libobjc ran into it often enough that Apple engineers disabled OSSpinLock. Apple engineer Greg Parker mentioned that one solution is a truly unbounded backoff algorithm, which avoids the livelock problem but may still block high-priority threads for tens of seconds when the system is heavily loaded. The alternative, which libobjc currently uses, is a handoff lock algorithm: the lock holder stores its thread ID inside the lock, and the waiter temporarily donates its priority to the holder to avoid priority inversion. In theory this pattern can cause problems under more complex multi-lock conditions, but in practice it has worked well so far. libobjc uses the Mach kernel's thread_switch() and passes a Mach thread port to avoid priority inversion, and it relies on a private parameter option, so developers cannot implement the lock themselves. OSSpinLock, on the other hand, cannot be changed because of binary compatibility. The bottom line is that all types of spin locks on iOS should no longer be used unless the developer can ensure that all threads accessing the lock run at the same priority. - "OSSpinLock is no longer safe"

os_unfair_lock is designed to replace OSSpinLock and is supported from iOS 10 onward. Unlike OSSpinLock, threads waiting for an os_unfair_lock sleep (similar to a run loop) instead of busy-waiting.

os_unfair_lock is a structure:

typedef struct os_unfair_lock_s {
    uint32_t _os_unfair_lock_opaque;
} os_unfair_lock, *os_unfair_lock_t;
  1. os_unfair_lock is a low-level lock. Higher-level locks should usually be the first choice for day-to-day development.
  2. The lock must be unlocked by the same thread that locked it; attempting to unlock it from a different thread triggers an assertion that aborts the process. The os_unfair_lock_assert_owner function checks whether the current thread is the owner of the os_unfair_lock, raising an assertion otherwise.
  3. The lock contains thread ownership information so the system can address priority inversion.
  4. The lock must not be accessed from multiple processes or threads through shared or multiply-mapped memory; the implementation depends on the address of the lock value and the owning process.
  5. It must be initialized with OS_UNFAIR_LOCK_INIT.
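A minimal sketch of os_unfair_lock (iOS 10+); `items` is a hypothetical NSMutableArray property, and lock/unlock happen on the same thread as required above.

```objc
#import <os/lock.h>

static os_unfair_lock unfairLock = OS_UNFAIR_LOCK_INIT;  // must be initialized with OS_UNFAIR_LOCK_INIT

- (void)appendItem:(id)item {
    os_unfair_lock_lock(&unfairLock);       // waiting threads sleep instead of spinning
    [self.items addObject:item];            // critical section
    os_unfair_lock_unlock(&unfairLock);     // must be called from the locking thread
}

- (BOOL)tryAppendItem:(id)item {
    if (os_unfair_lock_trylock(&unfairLock)) {
        [self.items addObject:item];
        os_unfair_lock_unlock(&unfairLock);
        return YES;
    }
    return NO;
}
```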

pthread_mutex_t is the C-language multithreaded mutex. It is a cross-platform lock, and the threads waiting for it sleep. When configured as a recursive lock, the same thread is allowed to lock it repeatedly while other threads wait, still ensuring that multiple threads access the shared resource safely.

PTHREAD_MUTEX_NORMAL     // Default (normal) type: while one thread holds the lock, other threads requesting it form a queue and acquire it in first-in, first-out order.
PTHREAD_MUTEX_ERRORCHECK // Error-checking type: if the same thread requests the same lock again, EDEADLK is returned; otherwise it behaves like the normal type. This guarantees that nested deadlocks cannot occur when relocking is not allowed.
PTHREAD_MUTEX_RECURSIVE  // Recursive lock: allows the same thread to acquire the same lock multiple times and requires a matching number of unlocks.
PTHREAD_MUTEX_DEFAULT    // Adaptive lock, the simplest type: waiters simply re-compete for the lock after it is released, with no wait queue.

pthread_mutex_trylock differs from NSLock's tryLock: tryLock returns YES or NO, while pthread_mutex_trylock returns 0 on success and an error code on failure.
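A minimal sketch of pthread_mutex_t configured as a recursive lock (PTHREAD_MUTEX_RECURSIVE above); the helper function names are hypothetical.

```objc
#import <pthread.h>

static pthread_mutex_t mutex;

static void setupMutex(void) {
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE); // same thread may relock
    pthread_mutex_init(&mutex, &attr);                         // returns 0 on success
    pthread_mutexattr_destroy(&attr);
}

static void recurse(int depth) {
    pthread_mutex_lock(&mutex);          // relocking on the same thread does not deadlock
    if (depth > 0) recurse(depth - 1);
    pthread_mutex_unlock(&mutex);        // each lock must be balanced by an unlock
}

// pthread_mutex_trylock(&mutex) returns 0 on success, otherwise an error code such as EBUSY.
```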

NSLock inherits from NSObject and conforms to the NSLocking protocol. lock locks, unlock unlocks, and tryLock attempts to lock: if it returns YES the lock succeeded, if NO it failed. Keep in mind that the returned BOOL indicates whether this lock attempt succeeded, not whether the lock can ever be acquired, and a failed tryLock does not block the current thread. lockBeforeDate: attempts to acquire the lock before the specified date; if the lock cannot be acquired before that time it returns NO, and until then the current thread is blocked. It can be used when you can estimate how long the previous critical section will take, locking the next section only after that time.

  1. An object-oriented encapsulation of the basic mutex lock; threads waiting for the lock sleep.
  2. Conforms to the NSLocking protocol, which declares only two methods: -(void)lock and -(void)unlock.
  3. Methods you may use:
  4. Initialization is the same as for any other OC object: alloc and init.
  5. -(void)lock; lock.
  6. -(void)unlock; unlock.
  7. -(BOOL)tryLock; try to lock.
  8. -(BOOL)lockBeforeDate:(NSDate *)limit; wait for the lock until a certain point in time.
  9. Calling [self.lock lock] twice in a row on the current thread deadlocks that thread.
  10. Calling [self.lock unlock] twice in a row on the main thread, whether or not the lock is held, does not raise an exception. (Other locks may crash on consecutive unlocks; not yet tested.)
  11. A deadlock on a child thread can also prevent the ViewController from being released.

When a thread locks, the other threads that request the lock will form a waiting queue. According to the first-in, first-out principle, this result can be tested by changing the thread priority.
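A minimal sketch of NSLock with lock, tryLock and lockBeforeDate:; the `tickets` property is hypothetical.

```objc
NSLock *lock = [[NSLock alloc] init];

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    [lock lock];
    self.tickets -= 1;       // critical section
    [lock unlock];
});

if ([lock tryLock]) {        // returns NO immediately if another thread holds the lock
    self.tickets -= 1;
    [lock unlock];
}

if ([lock lockBeforeDate:[NSDate dateWithTimeIntervalSinceNow:1.0]]) {
    self.tickets -= 1;       // acquired within 1 second
    [lock unlock];
}
```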

NSRecursiveLock is a recursive lock that differs from NSLock in that it can be repeatedly locked in the same thread without causing a deadlock. NSRecursiveLock records the number of times the lock was added and unlocked. When the number of times is equal, this thread releases the lock and other threads can successfully add the lock.

  1. Like NSLock it is based on mutex, but on a recursive mutex, so it is a recursive lock.
  2. Conforms to the NSLocking protocol, which declares only two methods: -(void)lock and -(void)unlock.
  3. Methods you may use:
  4. Inherits from NSObject, so initialization is just like any other OC object: alloc and init.
  5. -(void)lock; lock.
  6. -(void)unlock; unlock.
  7. -(BOOL)tryLock; try to lock.
  8. -(BOOL)lockBeforeDate:(NSDate *)limit; wait for the lock until a certain point in time.
  9. Calling lock repeatedly on the same thread does not deadlock, but an equal number of unlock calls must still be made; otherwise another thread trying to acquire the recursive lock will block and deadlock.
  10. A recursive lock allows the same thread to lock multiple times. Other threads entering the lock wait until the previous thread has finished before they can acquire it.
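A minimal sketch contrasting NSRecursiveLock with NSLock: the nested lock below would deadlock with a plain NSLock but is fine with a recursive lock.

```objc
NSRecursiveLock *lock = [[NSRecursiveLock alloc] init];

void (^inner)(void) = ^{
    [lock lock];             // second lock on the same thread: allowed for a recursive lock
    NSLog(@"inner work");
    [lock unlock];
};

[lock lock];                 // first lock
inner();                     // nested locking; with NSLock this call would deadlock
[lock unlock];               // locks and unlocks must balance before other threads can acquire it
```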

An NSCondition object actually acts as both a lock and a thread checker: after locking, the thread decides according to the condition whether to continue running or to wait. Unlike the locks above, a thread waiting on an NSCondition does not poll; it sleeps until another thread holding the lock calls signal or broadcast, at which point the waiting thread wakes up and continues with the subsequent work. In other words, the model for using NSCondition is: 1. Lock the condition object. 2. Test whether it is safe to proceed with the next task; if the boolean is false, call the condition object's wait or waitUntilDate: method to block the thread, and when these methods return go back to step 2 and re-test the boolean, continuing to wait for signals and re-test until it is safe to perform the next task (then do the work and unlock). waitUntilDate: has a time limit: when the specified time is reached it returns NO and execution continues with the next task. A waiting thread is woken when another thread executes [lock signal] or [lock broadcast]; the difference is that signal is a single semaphore-like wake-up that wakes only one waiting thread, while broadcast wakes all waiting threads. If no thread is waiting, neither call has any effect.

  1. Based on mutex plus cond (condition) encapsulation, so it is a mutex that comes with a condition; threads waiting for the lock sleep.
  2. Conforms to the NSLocking protocol, which declares only two methods: -(void)lock and -(void)unlock.
  3. Methods you may use:
  4. Initialization is the same as for any other OC object: alloc and init.
  5. -(void)lock; lock.
  6. -(void)unlock; unlock.
  7. -(BOOL)tryLock; try to lock.
  8. -(BOOL)lockBeforeDate:(NSDate *)limit; wait for the lock until a certain point in time.
  9. -(void)wait; wait on the condition (releases the lock while sleeping, re-acquires it when woken).
  10. -(void)signal; send a signal to wake one thread waiting on the condition; remember the thread receives the signal while in the wait state.
  11. -(void)broadcast; send a broadcast signal to wake all threads waiting on the condition; again, the threads receive the signal while in the wait state.
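A minimal producer/consumer sketch following the lock → test → wait → signal model described above; the buffer and queues are hypothetical.

```objc
NSCondition *condition = [[NSCondition alloc] init];
NSMutableArray *buffer = [NSMutableArray array];

// Consumer
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    [condition lock];
    while (buffer.count == 0) {   // re-test the predicate after every wake-up
        [condition wait];         // releases the lock while sleeping, re-locks on wake
    }
    id item = buffer.lastObject;
    [buffer removeLastObject];
    [condition unlock];
    NSLog(@"consumed %@", item);
});

// Producer
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    [condition lock];
    [buffer addObject:@"data"];
    [condition signal];           // wakes one waiting thread; broadcast would wake all
    [condition unlock];
});
```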

NSConditionLock is similar to NSLock: it inherits from NSObject, conforms to the NSLocking protocol, and supports lock and tryLock. However, it adds a condition property, and each operation has a variant that takes the condition into account, such as tryLockWhenCondition:, so NSConditionLock can be called a conditional lock. The condition-based lock methods only succeed when condition equals the value set at initialization or the value set by the most recent unlock. unlockWithCondition: does not mean "unlock only when the condition is met"; it unlocks and then changes the condition value to the passed-in argument. A plain unlock leaves the condition value unchanged. If init is used, condition defaults to 0. lockWhenCondition: is like lock: if the lock cannot be acquired (or the condition does not match), it blocks the current thread until it can be acquired. tryLockWhenCondition: is like tryLock: it attempts to lock without blocking the current thread, succeeding only if the lock is free and the condition matches. As the above shows, NSConditionLock can be used to express dependencies between tasks.
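A minimal sketch of using NSConditionLock to order two tasks (a simple task dependency, as mentioned above); the condition values 0/1/2 are arbitrary.

```objc
NSConditionLock *conditionLock = [[NSConditionLock alloc] initWithCondition:0];

// Task B: may be submitted first, but only runs after the condition becomes 1.
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    [conditionLock lockWhenCondition:1];   // blocks until the lock is free AND condition == 1
    NSLog(@"task B");
    [conditionLock unlockWithCondition:2]; // unlock and set condition to 2
});

// Task A: runs regardless of the condition.
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    [conditionLock lock];                  // plain lock ignores the condition
    NSLog(@"task A");
    [conditionLock unlockWithCondition:1]; // unlock and set condition to 1, releasing task B
});
```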

The object passed to the @synchronized(object) directive is the unique identifier of the lock: mutual exclusion happens only when the identifier is the same. So if the @synchronized(self) in thread 2 is changed to @synchronized(self.view), thread 2 is no longer blocked. The advantage of @synchronized is that we do not need to explicitly create a lock object in code to get a locking mechanism; as a precaution, the @synchronized block implicitly adds an exception handler that automatically releases the mutex when an exception is thrown. Another benefit of @synchronized is that you don't have to worry about forgetting to unlock. If object is released or set to nil inside the @synchronized(object) {} block, tests show there is no problem, but if object is nil to begin with, the locking is lost. While nil does not work, [NSNull null] does.

Before objc4-750 (iOS 12), @synchronized was a recursive lock built on pthread_mutex_t; since then the implementation has changed to use os_unfair_lock underneath.
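A minimal sketch of @synchronized; both methods are mutually exclusive because they pass the same lock identifier (self). The `items` property is hypothetical.

```objc
- (void)addItem:(id)item {
    @synchronized (self) {               // the object passed in is the identity of the lock
        [self.items addObject:item];
    }                                    // the implicit exception handler releases the lock
}

- (void)removeLastItem {
    @synchronized (self) {               // same identifier (self), so it excludes addItem:
        [self.items removeLastObject];
    }
}
```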

dispatch_semaphore is GCD's synchronization primitive. Only three functions are associated with it: one creates the semaphore, one waits on it, and one signals it. dispatch_semaphore is similar to NSCondition in that both are signal-based synchronization, but NSCondition signals can only be sent, not stored (if no thread is waiting, the signal is lost), whereas dispatch_semaphore stores the signals that are sent. The core of dispatch_semaphore is a semaphore of type dispatch_semaphore_t.

dispatch_semaphore_create(1) creates a semaphore of type dispatch_semaphore_t with an initial value of 1. Note that the argument must be greater than or equal to 0, otherwise dispatch_semaphore_create returns NULL.

dispatch_semaphore_wait(signal, overTime) checks whether the semaphore value is greater than 0. If it is, the value is decremented and the thread is not blocked. If it is 0, the thread enters a waiting state, like NSCondition, until another thread signals the semaphore and wakes it to perform the subsequent work, or until the overTime timeout expires, after which it also proceeds.

dispatch_semaphore_signal(signal) increases the semaphore value by 1 (waking one waiting thread if there is one). A dispatch_semaphore_wait(signal, overTime) call normally pairs with a dispatch_semaphore_signal(signal) call, much like NSLock's lock and unlock. With lock/unlock only one thread can access the protected critical section at a time; if the initial semaphore value of dispatch_semaphore is x, then x threads can access the protected critical section at the same time.

  1. Originally used to control the maximum number of concurrent threads; if we set the concurrency to 1, it can also be treated as a lock.
  2. Methods you may use:
  3. dispatch_semaphore_create(): the value passed in is the maximum concurrency; set it to 1 for a lock-like effect.
  4. dispatch_semaphore_wait(): checks the semaphore value; if it is greater than 0, execution continues (and the value is decremented by 1); if it is 0, the thread waits. The second parameter sets how long to wait, usually forever (DISPATCH_TIME_FOREVER).
  5. dispatch_semaphore_signal(): releases the semaphore, incrementing its value by 1.
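A minimal sketch: a semaphore created with a value of 1 behaves like a mutex, and larger initial values admit that many threads at once.

```objc
dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);   // initial value 1 => mutex-like

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);   // value 1 -> 0, or sleep if already 0
    // critical section
    dispatch_semaphore_signal(semaphore);                        // value 0 -> 1, wakes one waiter
});

// dispatch_semaphore_create(3) would instead let up to 3 threads enter the section concurrently.
```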

The pthread_rwlock_t read-write lock. First, a question: "How do I implement a read-write model?", with the following requirements:

  • Multiple threads can read at the same time.
  • Only one thread can write at a time.
  • Only one can be read or written at a time.

The first thing that comes to mind is pthread_rwlock_t.

  1. Read locking can be held by multiple threads at the same time, while write locking can be held by only one thread at a time; waiting threads sleep.

  2. Methods you may use:

  3. pthread_rwlock_init() initializes a read-write lock.

  4. pthread_rwlock_rdlock() acquires the read lock.

  5. pthread_rwlock_wrlock() acquires the write lock.

  6. pthread_rwlock_unlock() unlocks.

  7. pthread_rwlock_destroy() destroys the lock.
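A minimal sketch of pthread_rwlock_t implementing the multi-read, single-write model; the helper functions are hypothetical.

```objc
#import <pthread.h>

static pthread_rwlock_t rwlock = PTHREAD_RWLOCK_INITIALIZER;  // or pthread_rwlock_init(&rwlock, NULL)

static int readValue(const int *shared) {
    pthread_rwlock_rdlock(&rwlock);   // many threads may hold the read lock at the same time
    int value = *shared;
    pthread_rwlock_unlock(&rwlock);
    return value;
}

static void writeValue(int *shared, int value) {
    pthread_rwlock_wrlock(&rwlock);   // exclusive: waits until there are no readers or writers
    *shared = value;
    pthread_rwlock_unlock(&rwlock);
}

// pthread_rwlock_destroy(&rwlock) releases the lock's resources when it is no longer needed.
```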

dispatch_barrier_async also implements the multi-read, single-write model (see the sketch after the list below).

  1. The queue passed in must be a concurrent queue created manually with dispatch_queue_create(); if a serial queue or a global concurrent queue obtained with dispatch_get_global_queue() is passed in, dispatch_barrier_async behaves the same as dispatch_async.
  2. Methods you may use:
  3. dispatch_queue_create() creates the concurrent queue.
  4. dispatch_barrier_async() submits the asynchronous barrier.
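A minimal multi-read / single-write sketch with dispatch_barrier_async; `_rwQueue` (created with dispatch_queue_create and DISPATCH_QUEUE_CONCURRENT, e.g. in init) and `_dict` are hypothetical ivars.

```objc
// In init (assumption): _rwQueue = dispatch_queue_create("com.example.rw", DISPATCH_QUEUE_CONCURRENT);
//                       _dict = [NSMutableDictionary dictionary];

- (id)objectForKey:(NSString *)key {
    __block id obj;
    dispatch_sync(_rwQueue, ^{           // reads run concurrently with other reads
        obj = self->_dict[key];
    });
    return obj;
}

- (void)setObject:(id)obj forKey:(NSString *)key {
    dispatch_barrier_async(_rwQueue, ^{  // the barrier waits for in-flight reads, then runs alone
        self->_dict[key] = obj;
    });
}
```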

Rough efficiency ordering of the locks (different locks may perform better in different scenarios):

  1. os_unfair_lock (iOS 10 and later)
  2. OSSpinLock (before iOS 10)
  3. dispatch_semaphore (good compatibility across iOS versions)
  4. pthread_mutex_t (good compatibility across iOS versions)
  5. NSLock (encapsulates pthread_mutex_t)
  6. NSCondition (encapsulates pthread_mutex_t)
  7. pthread_mutex_t(recursive) — the preferred recommendation for a recursive lock
  8. NSRecursiveLock (encapsulates pthread_mutex_t)
  9. NSConditionLock (encapsulates NSCondition)
  10. @synchronized
      • before iOS 12: encapsulates pthread_mutex_t
      • iOS 12 and later: encapsulates os_unfair_lock (so after iOS 12 it should no longer be the least efficient; it probably ranks around 3rd or 4th)

os_unfair_lock is the mutex that replaces OSSpinLock, which was deprecated in iOS 10. Spin locks are suitable when:

  • The thread is expected to wait for a short time
  • Multicore processor
  • CPU resources are not tight

Mutex (exclusive) locks are suitable when:

  • The thread is expected to wait a long time
  • Single-core processor
  • The critical area (the part between locking and unlocking) has I/O operations (i.e., long waiting time)

13. Introduction to Pthreads and NSThreads.

The Portable Operating System Interface (POSIX) is a set of related standards defined by the IEEE for the APIs of software running on various UNIX operating systems.

Pthreads generally refers to POSIX threads. POSIX Threads (often abbreviated Pthreads) is POSIX's threading standard, which defines a set of APIs for creating and manipulating threads.

pthread_create is the function for creating threads on UNIX-like operating systems (Unix, Linux, macOS, and so on). Its job is to create a thread (in effect, determining the entry point that the thread will call), and once the thread is created the related thread function starts running. pthread_create returns 0 on success; if an error occurs it returns the error number, and the contents of the pthread_t pointed to by the first parameter are undefined.

Thread merging (joining) is an active way of reclaiming thread resources. A join happens when one process or thread calls pthread_join on another thread. This call blocks the caller until the joined thread terminates; at that point pthread_join reclaims the thread's resources and hands the thread's return value back to the caller. The other reclamation mechanism, corresponding to joining, is thread detachment, whose call is pthread_detach.

Thread detachment means the system reclaims the thread's resources automatically: when a detached thread ends, the system reclaims its resources on its own. Because detachment hands recycling over to the system, the program cannot obtain the detached thread's return value, which is why pthread_detach takes only one parameter, the handle of the thread to detach.

Joining and detaching are both ways of reclaiming thread resources and can be chosen according to the business scenario. Whatever the reason, you must choose one or the other, or you will have a resource leak that is just as troublesome as a memory leak. A minimal sketch follows below.
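A minimal sketch of pthread_create with the two reclamation options above (join or detach); the worker function is hypothetical.

```objc
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg) {
    printf("working on %s\n", (const char *)arg);
    return (void *)42;                    // value handed back through pthread_join
}

int main(void) {
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, "task") != 0) return 1;  // returns 0 on success

    void *result = NULL;
    pthread_join(tid, &result);           // blocks until the thread ends, then reclaims its resources
    // Alternative: pthread_detach(tid); the system reclaims the thread automatically,
    // but its return value can no longer be retrieved.
    return 0;
}
```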

pthread_setcanceltype sets the cancellation type of the current thread. type can be PTHREAD_CANCEL_DEFERRED or PTHREAD_CANCEL_ASYNCHRONOUS, and it takes effect only when the cancel state is enabled. PTHREAD_CANCEL_DEFERRED means that after receiving a cancellation signal the thread continues until the next cancellation point and then exits (recommended, because resource clean-up must happen before the thread terminates to prevent leaks, and manually placed cancellation points let us choose when to clean up). PTHREAD_CANCEL_ASYNCHRONOUS means the thread is cancelled (exits) immediately (not recommended, because it may cause memory leaks and other problems). If oldtype is not NULL, the previous cancellation type is saved into it.

void pthread_testcancel(void); checks whether the thread is in the cancelled state: if so, it cancels (exits); otherwise it returns immediately. This function is executed inside the thread, and the place where it is called is where the thread will exit, so any resources acquired by the thread must be released before calling it, or leaks are easy to cause. A thread is cancelled by sending it a cancel signal, but it is up to the target thread to decide how to handle it: ignore it, terminate immediately, or continue running until a cancellation point, depending on its cancellation state. The default behaviour of a thread that receives a cancel signal (the default state of a pthread_create thread) is to continue running until a cancellation point: a CANCELED state is set and the thread only actually stops once it reaches a cancellation point.

The thread terminates execution by calling the pthread_exit function, just as the process calls exit when it terminates. This function terminates the calling thread and returns a pointer to an object.

Threads within the same process can share memory address space, data exchange between threads can be very fast, which is the most significant advantage of threads. However, multiple threads accessing shared data requires expensive synchronization overhead (locking), can cause synchronization related bugs, and even more troublesome is that some data does not want to be shared at all, which is another disadvantage.

From a modern point of view, in many cases the purpose of using multithreading is not parallel processing of shared data. Rather, with the introduction of multi-core CPUs, it is to make full use of CPU resources and compute in parallel (without mutual interference). In other words, most of the time each thread only cares about its own data and does not need to synchronize with others.

One NSThread object corresponds to one thread. Compared with Pthreads, it operates and manages threads in a more object-oriented way. Although we still manage the thread life cycle manually, that is mostly limited to creation: we can think of creating the NSThread object as creating the thread, and once the task finishes the system handles reclaiming the thread's resources. It is still not as convenient as GCD, where we can completely ignore thread creation and destruction.

Using NSThread's class method (detachNewThreadSelector:toTarget:withObject:) explicitly creates the thread and starts it immediately, with no need to call start.
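A minimal sketch of the common ways to create an NSThread; the `task:` selector and the thread name are hypothetical.

```objc
// 1. Class method: creates the thread and starts it immediately (no start call needed).
[NSThread detachNewThreadSelector:@selector(task:) toTarget:self withObject:@"detached"];

// 2. alloc/init: the thread runs only after start is called.
NSThread *thread = [[NSThread alloc] initWithTarget:self selector:@selector(task:) object:@"manual"];
thread.name = @"com.example.worker";
[thread start];

// 3. Implicit: performSelectorInBackground creates a background thread for the selector.
[self performSelectorInBackground:@selector(task:) withObject:@"background"];
```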


14. GCD brief introduction.

Grand Central Dispatch (GCD) is Apple's relatively recent solution for multi-core programming: code is executed concurrently on multi-core hardware by submitting work to dispatch queues managed by the system. It is used to optimize applications for multi-core processors and other symmetric multiprocessing systems. You can think of dispatch queues as an encapsulation of the underlying multi-core machinery: we only care about operations on the dispatch queue and do not need to care about which core, or even which thread, a task is assigned to (although for further study it is still worth understanding how the main thread and background threads behave).

@protocol OS_dispatch_queue <OS_dispatch_object>
@end

typedef NSObject<OS_dispatch_queue> * dispatch_queue_t;

The OS_dispatch_queue protocol inherits from the OS_dispatch_object protocol, and dispatch_queue_t is defined as an alias for a pointer to an NSObject instance conforming to it. For example, dispatch_queue_t globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0); obtains a global concurrent queue, which is an NSObject pointer conforming to the OS_dispatch_queue protocol (dispatch_queue_t is an NSObject pointer).

Dispatch is an abstract model for expressing concurrency through a simple but powerful API. At the core, Dispatches provide serial FIFO queues to which blocks can be submitted. The blocks submitted to these Dispatch queues are called on a fully managed thread pool. There is no guarantee on which thread the blocks will be called (the system itself fetches available threads from the thread pool). However, it guarantees that only one block submitted to the FIFO Dispatch queue will be called at a time. When multiple queues have blocks to process, the system is free to allocate additional threads to call those blocks concurrently. When the queue becomes empty, these threads are released automatically.

Dispatch queues come in many forms, the most common being the dispatch serial queue (dispatch_queue_serial_t). The system manages a thread pool that handles dispatch queues and invokes the work items submitted to them. Conceptually, a dispatch queue can have its own thread of execution, and interaction between queues is highly asynchronous. Dispatch queues are reference-counted via dispatch_retain() and dispatch_release(). Pending work items submitted to a queue also hold a reference to the queue until they complete. Once all references to a queue are released, the system deallocates the queue (the queue is freed and destroyed).

Dispatch global concurrent queues provide priority buckets on top of a system-managed thread pool, where the system determines the number of threads in the pool based on demand and system load. In particular, the system tries to maintain a good level of concurrency for this resource and creates new threads when too many existing worker threads block in system calls. (One important difference between NSThread and GCD is that the threads behind GCD are created and assigned automatically by the system, whereas NSThread objects are created and started manually.)

Global Concurrent queues are shared resources, so it is the responsibility of each user of this resource not to commit an unlimited amount of work to the pool, especially work that can block, as this can cause the system to produce a large number of threads (aka: Thread explosion).

Work items submitted to Global Concurrent queues are not guaranteed to be sorted relative to the order in which they are submitted, and work items submitted to these queues can be invoked concurrently (concurrency queues, after all).

typedef void (^dispatch_block_t)(void); the everyday work items submitted to queues with GCD are blocks of type dispatch_block_t, taking no arguments and returning void.

The type of blocks submitted to dispatch queues, with no arguments and no return value. When building without Objective-C ARC, block objects allocated to or copied to the heap must be released via a -[release] message or the Block_release() function.

Block allocations declared as literals are stored on the stack. Blocks must be copied to the heap using the Block_copy() function or by sending a -[copy] message.

dispatch_async submits a block for asynchronous execution on a dispatch queue. A call to dispatch_async always returns immediately after the block has been submitted and never waits for the block to be invoked. The familiar non-blocking behaviour of asynchronous calls comes precisely from this: dispatch_async returns right after submitting the block, whereas a dispatch_sync call does not return until the block has finished executing. We may subconsciously feel that the kind of queue the block is sent to affects whether the function returns immediately, but dispatch_async returns immediately and does not block the current thread whether the block is submitted to a concurrent queue or to a serial queue.

In the dispatch_async function, the destination queue (dispatch_queue_t queue) decides whether the blocks in the queue are invoked serially or concurrently. (If queue is a concurrent queue, multiple threads are started and the blocks execute concurrently; if queue is a serial queue other than the main queue, only one thread is started and the blocks execute serially; if it is the main queue, no new thread is started and the blocks execute serially on the main thread.) Blocks that dispatch_async submits to different serial queues are processed in parallel with each other (the serial queues execute their blocks concurrently on different threads).

dispatch_sync submits a block for synchronous execution on a dispatch queue. It is used to submit work items to a dispatch queue like dispatch_async, but dispatch_sync does not return until the work item has completed. (The dispatch_sync function blocks the current thread and does not return until the block submitted to the queue has finished.)

dispatch_apply submits a block to a dispatch queue for parallel invocation (fast iteration).
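A minimal sketch of dispatch_apply: on a concurrent queue the iterations may run in parallel, and dispatch_apply itself returns only after every iteration has finished.

```objc
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(10, queue, ^(size_t index) {
    NSLog(@"iteration %zu on %@", index, NSThread.currentThread);  // may run on several threads
});
NSLog(@"all iterations done");   // only reached after all 10 iterations have completed
```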

The Dispatch Barrier API is a mechanism for submitting barrier blocks to a Dispatch queue, similar to the Dispatch_async/Dispatch_sync API. It can implement an efficient reader/writer scheme.

Barrier blocks behave the same as blocks submitted with the dispatch_async/dispatch_sync API when they are submitted to a global queue or to a queue not created with the DISPATCH_QUEUE_CONCURRENT attribute. (If dispatch_async and dispatch_barrier_async are both sent to dispatch_get_global_queue, the queue executes everything concurrently and the barrier has no effect.)

Queue: The scheduled queue to which the block is submitted. The system keeps the reference on the target queue until the block is complete. The result of passing NULL in this parameter is indeterminate.

Block: block submitted to the destination scheduling queue. This function performs Block_copy and Block_release on behalf of the caller. The result of passing NULL in this parameter is indeterminate.

dispatch_barrier_sync submits a barrier block for synchronous execution on the dispatch queue (blocking the current thread until the barrier block completes).


15. dispatch_object_t

By default, libSystem objects (such as GCD and XPC objects) are declared as Objective-C types when building with the Objective-C compiler, which allows them to participate in ARC, Participate in RR management through the Blocks runtime and leak checking through the static parser and add them to the Cocoa collection.

dispatch_group_wait synchronously waits until all blocks associated with a group have completed or until the specified timeout elapses.

This function waits for the completion of blocks associated with a given scheduling group and returns when all blocks are complete or the specified timeout period expires. (block until the function returns)

If there is no block associated with the scheduling group (that is, the group is empty), this function returns immediately.

The result of calling this function from multiple threads simultaneously using the same scheduling group is uncertain.

After successfully returning this function, the scheduling group is empty. You can release it using dispatch_release, or you can reuse it for another block.

dispatch_group_notify schedules a block to be submitted to a queue when all blocks associated with the group have completed (that is, the block submitted via notify executes once every block associated with the group has finished).

If no block is associated with a scheduling group (that is, the group is empty), the notification block is committed immediately.
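A minimal sketch of dispatch_group_notify and dispatch_group_wait for the behaviour described above; the tasks are hypothetical.

```objc
dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_group_async(group, queue, ^{ NSLog(@"download 1"); });
dispatch_group_async(group, queue, ^{ NSLog(@"download 2"); });

// Asynchronous: submitted to the main queue once every block in the group has completed.
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"all downloads finished");
});

// Or synchronous: block the current thread for at most 5 seconds.
if (dispatch_group_wait(group, dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5 * NSEC_PER_SEC))) != 0) {
    NSLog(@"timed out waiting for the group");
}
```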


16. dispatch_source_t/dispatch_block_t

The Dispatch Framework provides an interface for monitoring the activity of file descriptors, Mach Ports, signals, VFS Nodes, etc. And automatically submit event handler blocks to Dispatch Queues when such activities occur. This interface is called the Dispatch Source API.

DISPATCH_SOURCE_TYPE_TIMER: a dispatch source that submits its event handler block based on a timer. The handle is unused (pass zero for now). The mask specifies flags from dispatch_source_timer_flags_t to apply.

#define DISPATCH_SOURCE_TYPE_TIMER (&_dispatch_source_type_timer)
API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_SOURCE_TYPE_DECL(timer);

void dispatch_source_cancel(dispatch_source_t source); cancellation prevents any further invocation of the event handler block of the specified dispatch source, but does not interrupt an event handler block that is already in progress.

Once the source’s event handler completes, the cancellation handler is committed to the source’s target queue, indicating that it is now safe to close the source’s handle (i.e. file Descriptor or Mach port).
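A minimal sketch of a GCD timer built on DISPATCH_SOURCE_TYPE_TIMER, including the cancel handler mentioned above; `self.timer` is a hypothetical property used to keep the source alive.

```objc
dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0,
                                                 dispatch_get_main_queue());
dispatch_source_set_timer(timer, DISPATCH_TIME_NOW, 1 * NSEC_PER_SEC, 0);
dispatch_source_set_event_handler(timer, ^{
    NSLog(@"tick");                         // fires roughly once per second
});
dispatch_source_set_cancel_handler(timer, ^{
    NSLog(@"timer cancelled; safe to close handles here");
});
dispatch_resume(timer);                     // sources are created in a suspended state
self.timer = timer;                         // keep a strong reference or the source is released

// Later: dispatch_source_cancel(self.timer); stops further event handler invocations.
```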

The __builtin_expect directive, introduced by GCC to allow programmers to tell the compiler which branch is most likely to be executed, is written as __builtin_expect(EXP, N), which means: EXP == N has a high probability, and the CPU will prefetch the instruction of this branch. In this way, the CPU pipeline will reduce the time of waiting for the instruction of the CPU with a high probability, thus improving the CPU efficiency.
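A minimal sketch of __builtin_expect in the likely/unlikely macro style used by codebases such as libdispatch; the function below is hypothetical.

```objc
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

static int process(const void *ptr) {
    if (unlikely(ptr == NULL)) {   // tell the compiler this branch is rarely taken
        return -1;
    }
    return 0;                      // the common path is laid out favourably for the CPU pipeline
}
```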

dispatch_block_t dispatch_block_create(dispatch_block_flags_t flags, dispatch_block_t block); creates a new dispatch block object on the heap from an existing block and the given flags. The supplied block is Block_copy'd to the heap and retained by the newly created dispatch block object.

The returned Dispatch block object is intended to be submitted to the dispatch queue via dispatch_async and related functions, but can also be called directly. Either operation can be performed any number of times, but only the execution of the dispatch block object that has been completed for the first time can be waited with dispatch_block_wait or observed with dispatch_block_notify.

dispatch_block_wait(dispatch_block_t block, dispatch_time_t timeout); waits synchronously (blocking) until execution of the specified dispatch block object completes or the specified timeout elapses. The function returns immediately if execution of the block object has already completed.

You cannot use this interface to wait for multiple executions of the same block object; To do this, use dispatch_group_wait. A single Dispatch block object can wait and execute once, or execute any number of times. The behavior of any other combination is undefined. A commit to the dispatch queue is considered executed even if the cancellation (dispatch_block_cancel) indicates that the block’s code never runs.

void dispatch_block_notify(dispatch_block_t block, dispatch_queue_t queue, dispatch_block_t notification_block); schedules the notification_block to be submitted to the queue after execution of the specified dispatch block object (block) has completed.

If the execution of the observed block object (block) has completed, this function immediately submits a notification_block.

This interface cannot be used to notify multiple executions of the same block. Please use dispatch_group_notify for this purpose.

void dispatch_block_cancel(dispatch_block_t block); asynchronously cancels the specified dispatch block object.

Cancellation causes any future execution of a Dispatch block object to return immediately, but does not affect any execution of a block object that is already in progress.

The release of any resources associated with the block object will be delayed until the next attempt to execute the block object (or any execution that has been completed).

Note: Care needs to be taken to ensure that cancelable block objects do not capture any resources that need to be freed by executing the block (for example, the block calls free (memory allocated with malloc)). If the block is never executed due to cancellation, these resources will be leaked. (For example, we are familiar with the operation that only block execution can break the circular reference ring)
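A minimal sketch of dispatch_block_create with dispatch_block_notify; dispatch_block_wait and dispatch_block_cancel are noted in comments. The queue and messages are hypothetical.

```objc
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_block_t work = dispatch_block_create(0, ^{
    NSLog(@"work item running");
});

// Submitted to the main queue once this execution of `work` completes.
dispatch_block_notify(work, dispatch_get_main_queue(), ^{
    NSLog(@"work finished");
});

dispatch_async(queue, work);

// Alternatively, block the current thread until `work` has run (or 1 second has passed):
//   dispatch_block_wait(work, dispatch_time(DISPATCH_TIME_NOW, (int64_t)NSEC_PER_SEC));
// dispatch_block_cancel(work) prevents any execution that has not started yet.
```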


17. GCD process of obtaining the main queue/obtaining a global queue/creating a custom queue.

#define DISPATCH_QUEUE_SERIAL NULL is the attribute used to create dispatch queues (serial queues) that invoke blocks serially in FIFO order.
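A minimal sketch of the two attributes discussed in this section; the labels are hypothetical. Both calls end up in _dispatch_lane_create_with_target (shown below) with tq == DISPATCH_TARGET_QUEUE_DEFAULT (NULL) and legacy == true.

```objc
dispatch_queue_t serialQueue =
        dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);        // attr == NULL
dispatch_queue_t concurrentQueue =
        dispatch_queue_create("com.example.concurrent", DISPATCH_QUEUE_CONCURRENT);
```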

The _dispatch_lane_create_with_target function creates the queue. Its body boils down to two steps: call dispatch_lane_t dq = _dispatch_object_alloc(vtable, sizeof(struct dispatch_lane_s)); to allocate the structure, and then initialize it.

DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa, dispatch_queue_t tq, bool legacy) {

    // 1. If dqa is DISPATCH_QUEUE_SERIAL (value NULL), an empty dispatch_queue_attr_info_t instance is returned (dispatch_queue_attr_info_t dqai = {};).
    
    // 2. If dqa is DISPATCH_QUEUE_CONCURRENT (the global variable _dispatch_queue_attr_concurrent), a dispatch_queue_attr_info_t instance is returned with dqai_concurrent set to true (dqai_concurrent true indicates a concurrent queue).
    
    // 3. Otherwise, the dispatch_queue_attr_info_t instance is returned after each of its member fields has been assigned from dqa.
    
    dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);

    
    // Step 1: Normalize arguments (qos, overcommit, tq)
    
    dispatch_qos_t qos = dqai.dqai_qos; // (dqai_qos indicates thread priority)
    
    // if HAVE_PTHREAD_WORKQUEUE_QOS is false a dqai_qos switch is performed
#if !HAVE_PTHREAD_WORKQUEUE_QOS
    if (qos == DISPATCH_QOS_USER_INTERACTIVE) {
        // If "user interaction" is the highest priority, then "user start" is the second priority
        dqai.dqai_qos = qos = DISPATCH_QOS_USER_INITIATED;
    }
    
    if (qos == DISPATCH_QOS_MAINTENANCE) {
        // If "QOS_CLASS_MAINTENANCE" is the lowest priority, then "background threads" is the second-to-last priority
        dqai.dqai_qos = qos = DISPATCH_QOS_BACKGROUND;
    }
#endif // !HAVE_PTHREAD_WORKQUEUE_QOS

    // Check whether "overuse (more than the physical core number)" is allowed.
    _dispatch_queue_attr_overcommit_t overcommit = dqai.dqai_overcommit;
    
    if (overcommit != _dispatch_queue_attr_overcommit_unspecified && tq) {
        // If overcommit does not equal "overcommit not specified" and tq is not NULL
        // (dispatch_queue_create passes tq as NULL)
        if (tq->do_targetq) {
            // crash
            DISPATCH_CLIENT_CRASH(tq, "Cannot specify both overcommit and "
                    "a non-global target queue");
        }
    }

    if (tq && dx_type(tq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE) {
        // Handle discrepancies between attr and target queue, attributes win
        // (handle the difference between the attr and the target queue, with the attr taking precedence)
        
        // If the target queue exists and it is a global root queue
        if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
            // If overcommit is not specified
            if (tq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
                // If the priority of the target queue has DISPATCH_PRIORITY_FLAG_OVERCOMMIT, enable overcommit
                overcommit = _dispatch_queue_attr_overcommit_enabled;
            } else {
                // Otherwise, disable it
                overcommit = _dispatch_queue_attr_overcommit_disabled;
            }
        }
        
        // If the priority is not specified, the newly created queue inherits the priority of the target queue
        if (qos == DISPATCH_QOS_UNSPECIFIED) {
            qos = _dispatch_priority_qos(tq->dq_priority);
        }
        
        // NULL out tq
        tq = NULL;
    } else if (tq && !tq->do_targetq) {
        // target is a pthread or runloop root queue, setting QoS or overcommit is disallowed
        // (the target queue is a pthread or runloop root queue; setting QoS or overcommit is not allowed)
        
        if (overcommit != _dispatch_queue_attr_overcommit_unspecified) {
            // Crash if tq exists and overcommit is not unspecified
            DISPATCH_CLIENT_CRASH(tq, "Cannot specify an overcommit attribute "
                    "and use this kind of target queue");
        }
    } else {
        // tq is NULL
        
        if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
            // Serial queues default to overcommit!
            // (dqai_concurrent is false for a serial queue and true for a concurrent queue)
            
            // When dqai.dqai_concurrent is true, overcommit is disabled, otherwise it is enabled
            overcommit = dqai.dqai_concurrent ?
                    _dispatch_queue_attr_overcommit_disabled :
                    _dispatch_queue_attr_overcommit_enabled;
        }
    }
    
    // When tq is NULL, i.e. DISPATCH_TARGET_QUEUE_DEFAULT (value NULL),
    // get a root queue from the global root queue array _dispatch_root_queues as the target queue of the new queue, according to qos and overcommit.
    // (return &_dispatch_root_queues[2 * (qos - 1) + overcommit]; read directly from the array after computing the index from these two parameters.)
    // (👇👇 _dispatch_root_queues)
    
    if (!tq) {
        tq = _dispatch_get_root_queue(
                qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
                overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
        if (unlikely(!tq)) {
            // If no target queue is obtained, crash
            DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
        }
    }

    //
    // Initialize the queue
    //
    
    // legacy passes true by default when the dispatch_queue_create function is called
    if (legacy) {
        // if any of these attributes is specified, use non legacy classes
        // (i.e. if dqai_inactive or dqai_autorelease_frequency is set)
        if (dqai.dqai_inactive || dqai.dqai_autorelease_frequency) {
            legacy = false;
        }
    }
    
    const void *vtable;
    dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
    if (dqai.dqai_concurrent) {
        // Concurrent queue
        vtable = DISPATCH_VTABLE(queue_concurrent); // _dispatch_queue_concurrent_vtable, the function table (vtable) of the queue
    } else {
        // Serial queue
        vtable = DISPATCH_VTABLE(queue_serial); // _dispatch_queue_serial_vtable, the function table (vtable) of the queue
    }
    
    // Autorelease frequency
    switch (dqai.dqai_autorelease_frequency) {
    case DISPATCH_AUTORELEASE_FREQUENCY_NEVER:
        dqf |= DQF_AUTORELEASE_NEVER;
        break;
    case DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM:
        dqf |= DQF_AUTORELEASE_ALWAYS;
        break;
    }
    
    // Queue label
    if (label) {
        // _dispatch_strdup_if_mutable allocates space and copies the string if label is mutable,
        // and returns the original pointer if label is an immutable (constant) string
        const char *tmp = _dispatch_strdup_if_mutable(label);
        
        if (tmp != label) {
            // New space was allocated, so the label must be freed later
            dqf |= DQF_LABEL_NEEDS_FREE;
            // Point label at the copy
            label = tmp;
        }
    }

    // void *_dispatch_object_alloc(const void *vtable, size_t size);
    // dispatch_lane_s is a subclass of dispatch_queue_s.
    
    // dq is a pointer to the dispatch_lane_s structure
    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
            sizeof(struct dispatch_lane_s));
            
    // If dqai.dqai_concurrent is true the input parameter is DISPATCH_QUEUE_WIDTH_MAX (4094) otherwise 1
    // if dqai.dqai_inactive is true, it is inactive; otherwise, it is active
    // #define DISPATCH_QUEUE_ROLE_INNER 0x0000000000000000ull
    // #define DISPATCH_QUEUE_INACTIVE 0x0180000000000000ull
    
    // Initialize dq
    _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
            DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
            (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

    // Queue signature
    dq->dq_label = label;
    
    // Priority
    dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos, dqai.dqai_relpri);
    // overcommit
    if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
        dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
    }
    
    // If the queue is not created inactive
    if (!dqai.dqai_inactive) {
        // The priority of the new queue is inherited from the target queue
        _dispatch_queue_priority_inherit_from_target(dq, tq);
        _dispatch_lane_inherit_wlh_from_target(dq, tq);
    }
    
    // The internal reference count of the target queue increases by 1 (atomic operation)
    _dispatch_retain(tq);
    
    // Set the target queue for the new queue
    dq->do_targetq = tq;
    
    // DEBUG print function
    _dispatch_object_debug(dq, "%s", __func__);
    return _dispatch_trace_queue_create(dq)._dq;
}
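To connect this back to the API, here is a small illustrative snippet (the labels and QoS choice are made up for the example). With no explicit target queue, the new queue's target is picked from _dispatch_root_queues according to qos and overcommit, exactly as analyzed above:

dispatch_queue_t serial = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

// A concurrent queue with an explicit QoS; the attribute feeds dqai_qos above
dispatch_queue_attr_t attr = dispatch_queue_attr_make_with_qos_class(
        DISPATCH_QUEUE_CONCURRENT, QOS_CLASS_UTILITY, 0);
dispatch_queue_t concurrent = dispatch_queue_create("com.example.concurrent", attr);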

_dispatch_root_queues is an array of queues of length 12.

// 6618342 Contact the team that owns the Instrument DTrace probe before renaming this symbol
struct dispatch_queue_global_s _dispatch_root_queues[] = {
    _DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, 0,
        .dq_label = "com.apple.root.maintenance-qos",
        .dq_serialnum = 4,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.maintenance-qos.overcommit",
        .dq_serialnum = 5,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(BACKGROUND, 0,
        .dq_label = "com.apple.root.background-qos",
        .dq_serialnum = 6,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(BACKGROUND, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.background-qos.overcommit",
        .dq_serialnum = 7,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(UTILITY, 0,
        .dq_label = "com.apple.root.utility-qos",
        .dq_serialnum = 8,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(UTILITY, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.utility-qos.overcommit",
        .dq_serialnum = 9,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT, DISPATCH_PRIORITY_FLAG_FALLBACK,
        .dq_label = "com.apple.root.default-qos",
        .dq_serialnum = 10,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT,
            DISPATCH_PRIORITY_FLAG_FALLBACK | DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.default-qos.overcommit",
        .dq_serialnum = 11,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(USER_INITIATED, 0,
        .dq_label = "com.apple.root.user-initiated-qos",
        .dq_serialnum = 12,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(USER_INITIATED, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.user-initiated-qos.overcommit",
        .dq_serialnum = 13,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, 0,
        .dq_label = "com.apple.root.user-interactive-qos",
        .dq_serialnum = 14,
    ),
    _DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
        .dq_label = "com.apple.root.user-interactive-qos.overcommit",
        .dq_serialnum = 15,
    ),
};
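For reference, a minimal sketch of what _dispatch_get_root_queue does, following the subscript formula quoted in the comments above (error handling omitted):

static inline dispatch_queue_global_t
_dispatch_get_root_queue(dispatch_qos_t qos, bool overcommit)
{
    // qos runs from 1 (maintenance) to 6 (user-interactive); each QoS has a normal
    // and an overcommit entry, so the pair for a given QoS starts at 2 * (qos - 1)
    return &_dispatch_root_queues[2 * (qos - 1) + overcommit];
}

For example, qos = DISPATCH_QOS_DEFAULT (4) with overcommit enabled yields subscript 7, i.e. com.apple.root.default-qos.overcommit in the array above.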

The main queue is a global variable, _dispatch_main_q. When we get the main queue in everyday code, we simply read this variable directly.

struct dispatch_queue_static_s _dispatch_main_q = {
    DISPATCH_GLOBAL_OBJECT_HEADER(queue_main), // Inherits from the parent class
    
#if !DISPATCH_USE_RESOLVERS

    // The root queue at subscript DISPATCH_ROOT_QUEUE_IDX_DEFAULT_QOS + !!(overcommit)
    // in _dispatch_root_queues is used as the target queue:
    // #define _dispatch_get_default_queue(overcommit) \
    //         _dispatch_root_queues[DISPATCH_ROOT_QUEUE_IDX_DEFAULT_QOS + \
    //                 !!(overcommit)]._as_dq
    
    .do_targetq = _dispatch_get_default_queue(true),
#endif
    
    .dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(1) | DISPATCH_QUEUE_ROLE_BASE_ANON, // (0xfffull << 41) | 0x0000001000000000ull
    .dq_label = "com.apple.main-thread".// Queue label (queue name)
    .dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1), // The number of concurrent requests is 1, i.e. the serial queue
    .dq_serialnum = 1.// The queue number is 1
};

The dispatch_get_main_queue function that we use to get the main queue is implemented by simply returning the _dispatch_main_q variable.

// Cast
#define DISPATCH_GLOBAL_OBJECT(type, object) ((OS_OBJECT_BRIDGE type)&(object))

DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_CONST DISPATCH_NOTHROW
dispatch_queue_main_t dispatch_get_main_queue(void) {

    return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
    
}
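A quick, illustrative sanity check of the struct above:

// The label matches dq_label above, and DQF_WIDTH(1) is why the main queue behaves serially
NSLog(@"%s", dispatch_queue_get_label(dispatch_get_main_queue()));
// expected output: com.apple.main-thread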

dispatch_get_global_queue also fetches a queue directly from the global root queue array _dispatch_root_queues based on its input parameters.

dispatch_queue_global_t dispatch_get_global_queue(long priority, unsigned long flags) {
    dispatch_assert(countof(_dispatch_root_queues) ==
            DISPATCH_ROOT_QUEUE_COUNT);

    if (flags & ~(unsigned long)DISPATCH_QUEUE_OVERCOMMIT) {
        return DISPATCH_BAD_INPUT;
    }
    dispatch_qos_t qos = _dispatch_qos_from_queue_priority(priority);
    
#if !HAVE_PTHREAD_WORKQUEUE_QOS
    if (qos == QOS_CLASS_MAINTENANCE) {
        qos = DISPATCH_QOS_BACKGROUND;
    } else if (qos == QOS_CLASS_USER_INTERACTIVE) {
        qos = DISPATCH_QOS_USER_INITIATED;
    }
#endif

    if (qos == DISPATCH_QOS_UNSPECIFIED) {
        return DISPATCH_BAD_INPUT;
    }
    
    // Read directly from _dispatch_root_queues.
    return _dispatch_get_root_queue(qos, flags & DISPATCH_QUEUE_OVERCOMMIT);
}
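As a quick illustration, the label of the returned queue should match the _dispatch_root_queues table above:

dispatch_queue_t q = dispatch_get_global_queue(QOS_CLASS_UTILITY, 0);
NSLog(@"%s", dispatch_queue_get_label(q));
// expected output: com.apple.root.utility-qos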

18. Execution flow analysis of the dispatch_async function.

When we submit a task to a queue asynchronously, whether with dispatch_async (a block) or with dispatch_async_f (a function), the submitted task is wrapped as a dispatch_continuation_s. The dispatch_continuation_s structure uses a function pointer (dc_func) to store the task to be performed; when a block task is submitted, dispatch_continuation_s stores the invoke function defined by the block structure rather than the block itself.

The implementation of dispatch_async can be logically divided into two parts. The first part, _dispatch_continuation_init, internally copies the block to the heap (_dispatch_Block_copy(work)) and wraps it into a continuation; the second part, _dispatch_continuation_async, hands the wrapped task function over to a thread for asynchronous execution.
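For reference, a minimal sketch of the entry point that performs those two steps (close to, but not quoted verbatim from, the libdispatch source):

void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME; // the continuation owns (and will release) the block
    dispatch_qos_t qos;

    // Part 1: copy the block to the heap and wrap it into the continuation
    qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
    // Part 2: hand the continuation over to the queue / thread machinery
    _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}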

First, let’s make a summary:

  • Multiple block tasks submitted by the dispatch_async function to the main queue(dispatch_get_main_queue()) are executed serially in the main thread and no new thread is started.

  • Multiple block tasks submitted by dispatch_async to the same custom serial queue start one new thread, and all of those block tasks are executed serially on that thread.

  • Block tasks submitted by dispatch_async to different custom serial queues each get their own new thread, and those queues execute in parallel with one another (see the small sketch after the code example below).

  • Multiple block tasks submitted by dispatch_async to a concurrent queue start new threads and execute in parallel.

  • dispatch_sync: whether a block task is submitted to a serial queue or a concurrent queue, the tasks are executed serially on the current thread and no new thread is started. There is one special case: if the queue is the main queue, the block task can only be executed on the main thread, whether we submit it via dispatch_sync or dispatch_async, because the main queue is bound to the main thread. Of course, the main thread can also execute tasks from our custom queues, as in the following code:

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"👉👉 %@", [NSThread currentThread]);
    
    dispatch_sync(dispatch_get_main_queue(), ^{
        // Here is the main thread 😂
        NSLog(@"👉 %@", [NSThread currentThread]);
    });
    
    dispatch_sync(dispatch_get_global_queue(0, 0), ^{
        // This runs on the same thread as 👉👉
        NSLog(@"👉👉 %@", [NSThread currentThread]);
    });
});
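To illustrate the serial-queue bullets above, a small sketch (queue labels invented for the example):

dispatch_queue_t serialA = dispatch_queue_create("com.example.serialA", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t serialB = dispatch_queue_create("com.example.serialB", DISPATCH_QUEUE_SERIAL);

// A1 and A2 execute serially (per the summary, on one new thread);
// B1 belongs to a different serial queue and can run in parallel on another thread.
dispatch_async(serialA, ^{ NSLog(@"A1 %@", [NSThread currentThread]); });
dispatch_async(serialA, ^{ NSLog(@"A2 %@", [NSThread currentThread]); });
dispatch_async(serialB, ^{ NSLog(@"B1 %@", [NSThread currentThread]); });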

The dispatch_continuation_s structure is defined as follows:

typedef struct dispatch_continuation_s {
    union {
        const void *do_vtable;
        uintptr_t dc_flags;
    };
    
    union {
        pthread_priority_t dc_priority;
        int dc_cache_cnt;
        uintptr_t dc_pad;
    };
    
    struct dispatch_continuation_s *volatile do_next; // Next task
    struct voucher_s *dc_voucher;
    
    // typedef void (*dispatch_function_t)(void *_Nullable); // Function pointer
    
    // A pointer to the function to be executed
    // (for dispatch_async, _dispatch_Block_invoke(work) takes the block's
    //  invoke pointer and wraps the work block as a dispatch_function_t)
    
    dispatch_function_t dc_func; // Function pointer to the block's invoke function
    
    void *dc_ctxt; // Method context (with dispatch_async, this pointer points to blocks in the heap)
    
    void *dc_data; // Related data
    void *dc_other; // Other information
    
} *dispatch_continuation_t;
The block task is initialized into the continuation by _dispatch_continuation_init:
DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t _dispatch_continuation_init(dispatch_continuation_t dc,
        dispatch_queue_class_t dqu, dispatch_block_t work,
        dispatch_block_flags_t flags, uintptr_t dc_flags)
{

    ...
    // dc_flags determines which function pointer is assigned to func
    dispatch_function_t func = _dispatch_Block_invoke(work); // func starts as the block's invoke function
    
    if (dc_flags & DC_FLAG_CONSUME) {
        // The continuation owns the block (the dispatch_async case): func executes the block and then releases it
        func = _dispatch_call_block_and_release;
    }
    
    return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}

When the dispatch_continuation_s is ready, dispatch_async calls _dispatch_continuation_async(dq, dc, qos, dc->dc_flags).

Internally, the _dispatch_continuation_async function uses the dx_push macro, which calls the dq_push function in the vtable of dqu (dispatch_queue_class_t).

A global search for dq_push shows that different queue types assign it different functions: the root queue (.dq_push = _dispatch_root_queue_push), the main queue (.dq_push = _dispatch_main_queue_push), concurrent queues (.dq_push = _dispatch_lane_concurrent_push), serial queues (.dq_push = _dispatch_lane_push), and so on. When the queue parameter of dispatch_async is a custom concurrent queue, _dispatch_lane_concurrent_push is executed.

Roughly speaking, the dc is enqueued, and eventually control reaches the _dispatch_root_queue_poke_slow function, which does two things:

  • Register callbacks with the _dispatch_root_queues_init method
  • Create a thread through a do-while loop, using the pthread_create method

_dispatch_root_queues_init is executed through dispatch_once_f, where the func is _dispatch_root_queues_init_once. Looking at the source of _dispatch_root_queues_init_once, the callback handle it registers for the different kinds of work is _dispatch_worker_thread2.

The block callback executes along the call path: _dispatch_root_queues_init_once -> _dispatch_worker_thread2 -> _dispatch_root_queue_drain -> _dispatch_continuation_pop_inline -> _dispatch_continuation_invoke_inline -> _dispatch_client_callout -> _dispatch_call_block_and_release. Finally the block callback is performed via dx_invoke on a thread taken out of the pool; each dx_push is paired with a dx_invoke.

The idea is that blocks in a concurrent queue are popped and executed on different threads, and the concurrent queue does not wait for the current block to complete before popping the next one: blocks are popped successively and executed on different threads regardless of whether the previous block has finished.


19. Execution flow analysis of the dispatch_sync function.

The first thing to remember is that dispatch_sync does not start a new thread, whether the block task is submitted to a custom serial queue or to a concurrent queue (the main queue is the special case: its tasks always run on the main thread).

The call chain is dispatch_sync -> _dispatch_sync_f -> _dispatch_sync_f_inline.

The dq_width value of a serial queue is 1, that of a custom concurrent queue is 0xffeull, and that of a root queue is 0xfffull. _dispatch_sync_f_inline then branches for serial and concurrent queues:

  • If the dq argument is a serial queue, the _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags) function is executed.
  • If the dq argument is a concurrent queue, the _dispatch_sync_invoke_and_complete function is executed.

Let's look at the simpler concurrent-queue branch first. If the dq passed to dispatch_sync is a concurrent queue, execution proceeds normally: _dispatch_sync_invoke_and_complete -> _dispatch_sync_function_invoke_inline, which has three main steps:

  • Push the task frame: _dispatch_thread_frame_push(&dtf, dq);
  • Execute the task block: _dispatch_client_callout(ctxt, func);
  • Pop the task frame: _dispatch_thread_frame_pop(&dtf);

As can be seen from the implementation, a task frame is pushed first, then the block is executed, and then the frame is popped, so the tasks are executed sequentially.

dispatch_queue_t custom_Queue = dispatch_queue_create("DISPATCH_QUEUE_SERIAL", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(custom_Queue, ^{
    NSLog(@"👉 % @", [NSThread currentThread]);
    dispatch_sync(custom_Queue, ^{
        NSLog(@"👉 👉 % @", [NSThread currentThread]);
        dispatch_sync(custom_Queue, ^{
            NSLog(@"👉 👉 👉 % @", [NSThread currentThread]);
        });
        NSLog(@"👉 👉 👉 👉 % @", [NSThread currentThread]);
    });
    NSLog(@"👉 👉 👉 👉 👉 % @", [NSThread currentThread]);
});

// Console print:
👉 <NSThread: 0x60000348d200>{number = 5, name = (null)}
👉👉 <NSThread: 0x60000348d200>{number = 5, name = (null)}
👉👉👉 <NSThread: 0x60000348d200>{number = 5, name = (null)}
👉👉👉👉 <NSThread: 0x60000348d200>{number = 5, name = (null)}
👉👉👉👉👉 <NSThread: 0x60000348d200>{number = 5, name = (null)}

If custom_Queue were defined as a serial queue, this code would crash. The reason is that in a serial queue a block task must complete and be dequeued before the next block task can execute, whereas a concurrent queue does not wait for the previous block to complete before dequeuing a new one: it can execute the new block, let it finish and leave the queue, and then continue the previously unfinished block (all on the same thread in this case).

If the dq parameter is a serial queue, the _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags) branch is executed, and this is where deadlocks come in. The non-deadlocking execution flow is the same as the concurrent-queue case, so here we focus only on deadlocks.

The most common main-thread deadlock shows up as a crashed call stack stopped at the __DISPATCH_WAIT_FOR_QUEUE__ function.

The essence of a deadlock is tasks in a serial queue waiting on each other in a cycle: in a synchronous call, if the queue currently being drained and the queue being waited on are the same queue, a deadlock results. The deadlock detection lives in the _dispatch_sync_f_slow function call.

The _dispatch_sync_f_slow function generates the task information, which is then pushed by _dispatch_trace_item_push and stored in our synchronization (FIFO) queue for execution.

The __DISPATCH_WAIT_FOR_QUEUE__ function reads the queue state to see whether it is waiting, and uses the XOR comparison in _dq_state_drain_locked_by to determine the waiting relationship between the queue and the thread. If both are waiting on each other, YES is returned and the deadlock crash is raised.

In other words, _dispatch_sync first gets the TID of the current thread and the queue state returned by the underlying system, then compares the queue's waiting state with that TID. If they match, the queue is waiting on the very thread that is draining it, which is a deadlock, so it crashes.
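A minimal sketch of the deadlock described above, assuming the snippet is already running on the main thread (for example inside viewDidLoad):

// The main queue is draining the current task; dispatch_sync asks it to finish
// this new block before returning, so the two wait on each other forever.
// The call stack stops in __DISPATCH_WAIT_FOR_QUEUE__ and the process crashes.
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"never reached");
});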


20. How dispatch_once works, and how it ensures the block is executed only once globally in the application.

dispatch_once ensures that a task is executed only once, even when it is invoked from multiple threads simultaneously. It is often used to create singletons, perform method swizzling, and so on.

dispatch_once is a synchronous function: it blocks the current thread until the block returns.

The block parameter of dispatch_once is called only once globally, even in a multi-threaded environment. So how are threads locked or blocked when multiple threads call dispatch_once at the same time?

#ifdef __BLOCKS__
void
dispatch_once(dispatch_once_t *val, dispatch_block_t block)
{
    dispatch_once_f(val, block, _dispatch_Block_invoke(block));
}
#endif

The dispatch_once_t parameter val acts as a predicate for the dispatch_once function and must be initialized to zero. dispatch_once decides whether to execute the submitted block based on whether val is zero. (Static and global variables default to zero.)
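The typical usage pattern, with the predicate as a static (zero-initialized) variable; MyManager is just an illustrative class name:

+ (instancetype)sharedManager {
    static MyManager *shared = nil;
    static dispatch_once_t onceToken; // static, so it starts at 0 as required
    dispatch_once(&onceToken, ^{
        shared = [[MyManager alloc] init]; // runs exactly once, even under concurrent calls
    });
    return shared;
}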

dispatch_once_gate_t is a pointer to the dispatch_once_gate_s structure, which contains only a union. The val parameter passed to dispatch_once is cast to the dispatch_once_gate_t type.

typedef struct dispatch_once_gate_s {
    union {
        dispatch_gate_s dgo_gate;
        uintptr_t dgo_once;
    };
} dispatch_once_gate_s, *dispatch_once_gate_t;

DLOCK_ONCE_UNLOCKED and DLOCK_ONCE_DONE describe the state before and after dispatch_once executes: DLOCK_ONCE_UNLOCKED indicates that dispatch_once has not yet executed, and DLOCK_ONCE_DONE indicates that it has.

#define DLOCK_ONCE_UNLOCKED   ((uintptr_t)0)
#define DLOCK_ONCE_DONE   (~(uintptr_t)0)

When dispatch_once_f is called from multiple threads, _dispatch_once_gate_tryenter atomically checks whether l (dispatch_once_gate_t l = (dispatch_once_gate_t)val) is non-zero. Non-zero means the function submitted to dispatch_once_f has already executed (or is currently executing); zero means it has not executed yet.

If l->dgo_once is zero, _dispatch_once_gate_tryenter sets l->dgo_once to the ID of the current thread. At the end of dispatch_once_f, the _dispatch_once_gate_broadcast function sets l->dgo_once to DLOCK_ONCE_DONE. (_dispatch_lock_value_for_self retrieves the ID of the current thread.)
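Putting the pieces together, a simplified sketch of the dispatch_once_f flow described here (not the literal source; it keeps only the functions mentioned in the text):

void
dispatch_once_f(dispatch_once_t *val, void *ctxt, dispatch_function_t func)
{
    dispatch_once_gate_t l = (dispatch_once_gate_t)val;

    if (os_atomic_load(&l->dgo_once, acquire) == DLOCK_ONCE_DONE) {
        return; // fast path: the submitted function has already run
    }
    if (_dispatch_once_gate_tryenter(l)) {
        // We won the race: dgo_once now holds the current thread's ID
        _dispatch_client_callout(ctxt, func);  // run the submitted function exactly once
        _dispatch_once_gate_broadcast(l);      // set DLOCK_ONCE_DONE and wake any waiters
    } else {
        _dispatch_once_wait(l);                // another thread is running it: wait for DONE
    }
}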

Each time _dispatch_once_gate_tryenter succeeds, l->dgo_once is set to the calling thread's ID; this corresponds to the v == value_self comparison in the _dispatch_once_gate_broadcast function. If dispatch_once_f was called from a single thread, no other thread is blocked waiting, so there is no need for a wake-up operation. In a multi-threaded environment, _dispatch_once_gate_tryenter is raced by different threads and v is updated with the ID of the latest calling thread; inside _dispatch_once_gate_broadcast, value_self is the ID of the thread that first called dispatch_once_f, while v reflects the last caller. Those other threads are blocked waiting for the function submitted to dispatch_once_f to complete, so when it has completed, the blocked waiting threads need to be woken up.

_dispatch_lock_value_for_self retrieves the ID of the current thread, which is assigned to val (the dgo_once member). val holds that thread ID while the function submitted to dispatch_once_f is executing, and is set to DLOCK_ONCE_DONE once it has completed. For the usual static dispatch_once_t onceToken used with dispatch_once, onceToken is 0 before dispatch_once runs; the initial value must be 0, otherwise the block passed to dispatch_once will never execute. After dispatch_once has run, printing onceToken shows the value -1; if we manually reset onceToken to 0, the block submitted to dispatch_once can be executed again.

The _dispatch_once_wait function uses a do-while loop to keep reading &dgo->dgo_once until it becomes DLOCK_ONCE_DONE. Some older articles say _dispatch_thread_semaphore_wait is used to block the thread, but that has since been updated: in the multi-threaded case only the do-while loop is used to wait, avoiding thread wake-ups and other resource-consuming operations and improving the performance of dispatch_once under concurrent calls.

🎉🎉🎉 To be continued…