This article is a set of study notes, essentially a memo, on the WWDC session "Building Responsive and Efficient Apps with GCD" (developer.apple.com/wwdc15/718).

What is GCD

GCD (Grand Central Dispatch) is a multithreading framework developed by Apple; with a simple API it creates the threads needed to perform our tasks.

GCD dispatch types

There are two kinds of GCD dispatch:

Asynchronous dispatch can start a new thread; the call returns immediately, so the current function continues without waiting for the task.

Synchronous dispatch does not start a new thread; the call blocks until the task finishes, so the current function cannot continue in the meantime.
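The difference can be seen in a minimal sketch (the queue label is a placeholder). The caller-side appends are funneled through the same serial queue so all writes to `log` are serialized:

```swift
import Dispatch

let queue = DispatchQueue(label: "com.example.demo")  // hypothetical label
var log: [String] = []

// sync: blocks the caller until the block has finished on the queue.
queue.sync { log.append("sync task") }
log.append("after sync")  // always runs after "sync task"

// async: returns immediately; the block runs later, in FIFO order.
queue.async { log.append("async task") }
// The caller continues without waiting for "async task". This append goes
// through the queue too, so it is serialized after the async block.
queue.sync { log.append("after async") }

print(log)  // ["sync task", "after sync", "async task", "after async"]
```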

The advantages of the GCD

GCD is Apple's solution for multi-core parallel computing. GCD automatically makes use of the available CPU cores (dual-core, quad-core, and so on), and it automatically manages the life cycle of its threads: engineers only need to tell GCD what to execute, without writing any thread-management code.

The GCD queue

Main Queue

The main queue has the highest priority and runs on the main thread. All UI updates should be performed on the main thread; updating the UI anywhere else can crash the application.

Global Queue

According to QoS (Quality of Service), global queues are classified into four main levels plus a default level, from highest to lowest:

1. userInteractive - comparable to the main thread; the work is effectively instantaneous
2. userInitiated - the work takes a few seconds or less
3. utility - the work takes seconds to minutes
4. background - the work takes minutes or longer

Default is what the system infers when no QoS is specified; it sits between userInitiated and utility.

Custom Queue

A custom queue is a queue you create yourself, and it can be given whatever QoS it requires.
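Creating one is a single call (the labels below are placeholders); a custom queue is serial by default, and can be made concurrent with an attribute:

```swift
import Dispatch

// A serial custom queue with utility QoS (label is a placeholder).
let serial = DispatchQueue(label: "com.example.worker", qos: .utility)

// A concurrent custom queue: its blocks may run in parallel.
let concurrent = DispatchQueue(label: "com.example.pool",
                               qos: .userInitiated,
                               attributes: .concurrent)

serial.sync { print("runs on a serial utility queue") }
concurrent.sync { print("runs on a concurrent userInitiated queue") }
```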

Verify the priority of different queues with asynchronous dispatch
```swift
DispatchQueue.global(qos: .userInitiated).async {
    for i in 0...5 { print("userInitiated,\(i)") }
}
DispatchQueue.global(qos: .userInteractive).async {
    for i in 0...5 { print("userInteractive-----,\(i)") }
}

// Printed output (one possible interleaving):
// userInteractive-----,0
// userInitiated,0
// userInteractive-----,1
// userInitiated,1
// userInteractive-----,2
// userInitiated,2
// userInteractive-----,3
// userInitiated,3
// userInteractive-----,4
// userInteractive-----,5
// userInitiated,4
// userInitiated,5
```

Because userInteractive has a higher priority than userInitiated, its blocks tend to be scheduled first, so its lines print sooner.

Verify the priority of different queues with synchronous dispatch
```swift
DispatchQueue.global(qos: .userInteractive).sync {
    for i in 0...5 { print("userInteractive-----,\(i)") }
}
DispatchQueue.global(qos: .userInitiated).sync {
    for i in 0...5 { print("userInitiated,\(i)") }
}

// Printed output:
// userInteractive-----,0
// userInteractive-----,1
// userInteractive-----,2
// userInteractive-----,3
// userInteractive-----,4
// userInteractive-----,5
// userInitiated,0
// userInitiated,1
// userInitiated,2
// userInitiated,3
// userInitiated,4
// userInitiated,5
```

With synchronous dispatch, the userInteractive block runs to completion before the userInitiated block begins. Note that this ordering comes from blocking rather than from QoS: each sync call blocks the caller until its block finishes, so the second call cannot even start until the first has returned.

Understand some of the connections between iOS Runloop and GCD

Main Thread runloop

In this figure, the leftmost part shows how the run loop on the main thread processes and responds to events: the main run loop receives events, processes them, and then sends them through the responder chain to the right place for handling.

The GCD handles background events

With the methods GCD provides, time-consuming operations can be moved off the main thread, so the app runs more smoothly and responds faster. Note ⚠️: when using GCD you need to be careful to avoid thread explosion and deadlocks, and moving work off the main thread is not a cure-all. If a task consumes a great deal of memory or CPU, GCD cannot help you; the only option is to break the task down step by step.

Main Thread runloop Receives the processing result

When GCD completes the work, it returns the result to the main thread, which gives feedback to the user, so the main thread stays responsive to interaction. This is the most typical pattern for handling events asynchronously.
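The pattern looks like this in a minimal sketch; `loadData` and `updateUI` are hypothetical stand-ins for slow work and a UI update:

```swift
import Dispatch

// Hypothetical stand-ins: slow work, and a "UI update".
func loadData() -> [Int] { Array(0..<1000) }
func updateUI(with items: [Int]) { print("showing \(items.count) items") }

DispatchQueue.global(qos: .userInitiated).async {
    let items = loadData()            // heavy work off the main thread
    DispatchQueue.main.async {
        updateUI(with: items)         // hop back to the main queue for UI
    }
}
// In a real app the main run loop delivers the main-queue block;
// a bare command-line program would need dispatchMain() to see it run.
```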

Time-consuming task

With dispatch_block_create_with_qos_class, a GCD block can be given the QoS class QOS_CLASS_UTILITY (the "UT" in the figure). For this QoS, the system optimizes power consumption for large computations, I/O, networking, and complex data processing.
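In Swift the same idea is usually expressed with DispatchWorkItem, which wraps the C-level dispatch_block_create_with_qos_class. A minimal sketch:

```swift
import Dispatch

var result = 0

// Tag a long-running block with utility QoS so the system optimizes for
// energy efficiency rather than latency.
let crunch = DispatchWorkItem(qos: .utility, flags: .enforceQoS) {
    result = (1...1_000_000).reduce(0, +)
}

DispatchQueue.global(qos: .utility).async(execute: crunch)
crunch.wait()      // wait() synchronizes, so reading `result` here is safe
print(result)      // 500000500000
```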

Using a serial queue with synchronous dispatch as a lock

Synchronous dispatch onto a serial queue can be used like a lock. Because other threads or queues may also be accessing the same data structure, a thread calls the "synchronous call" function on the queue in order to gain exclusive access to that data. If the queue is busy, the calling thread stops and waits for exclusive access; once it gets it, the queue runs the block on the calling thread, at that thread's own QoS.

But this can cause priority inversion: a higher-priority thread ends up waiting for lower-priority work. The system can resolve it by raising the QoS of the waited-on work for the duration of the wait. This boosting happens automatically if you use serial queues with synchronous calls or block-waiting APIs, and if you use pthread mutexes or any API built on top of them, such as NSLock.
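A sketch of the serial-queue-as-lock idea (class and label names are hypothetical):

```swift
import Dispatch

// A serial queue used as a lock around shared mutable state.
final class Counter {
    private let queue = DispatchQueue(label: "com.example.counter") // hypothetical label
    private var value = 0

    // sync gives the caller exclusive access; the block runs at the
    // caller's QoS, and waiters can boost the queue to avoid inversion.
    func increment() { queue.sync { value += 1 } }
    var current: Int { queue.sync { value } }
}

let counter = Counter()
// 100 concurrent increments, all serialized through the queue.
DispatchQueue.concurrentPerform(iterations: 100) { _ in counter.increment() }
print(counter.current)  // 100
```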

How does GCD use temporary threads to complete tasks

Suppose the thread pool holds two threads and, as in the figure, the queue contains three blocks of code. GCD cannot give three blocks two threads at once: 1. the two threads are assigned to the first two blocks; 2. when one of them finishes, its thread is reassigned to the third block.

About the GCD thread wait

This works fine until some block in our program needs a resource that is not yet available. We call this waiting: a thread waits and is deferred when it needs something such as I/O or a resource lock.

About Thread proliferation

So imagine four of these blocks executing on two different threads, and the first two say "I need to do I/O." Fine: we issue the I/O to disk, but then we have to wait for it to come back. While a thread is waiting and there is still work to be done, GCD brings up another thread to execute the next block on the queue, and so on.

The problem is scale: with only four blocks this is fine, but with many blocks that all want to wait, we get a thread surge.

If they all stop waiting at the same time, there is a lot of resource contention, which is very bad for performance.

It is also a little dangerous, because there is a limit on how many threads can be created, and what do we do with new work once we run out of threads? This can lead to deadlocks.
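One common way to keep the fan-out bounded is a counting semaphore that throttles submissions, sketched below (the limit of 4 is arbitrary):

```swift
import Dispatch

let gate = DispatchSemaphore(value: 4)   // at most 4 tasks in flight
let group = DispatchGroup()
let done = DispatchQueue(label: "done")  // serializes the counter
var completed = 0

for _ in 0..<20 {
    gate.wait()                          // throttle before submitting
    DispatchQueue.global(qos: .utility).async(group: group) {
        done.sync { completed += 1 }     // the "work"
        gate.signal()                    // free a slot for the next task
    }
}
group.wait()                             // wait for all 20 tasks
print(completed)  // 20
```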

Main thread deadlock

If code that is already running on a serial queue (including the main queue) decides to dispatch synchronously onto that same serial queue, there is no thread available to run the new block: the queue cannot start it until the current block finishes, and the current block is blocked waiting for it. The sync call blocks forever. That is the classic deadlock situation.
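The trap, sketched with the fatal call shown commented out (queue label is a placeholder):

```swift
import Dispatch

let serial = DispatchQueue(label: "com.example.serial")  // hypothetical label
var ran = false

serial.sync {
    // We are now running on `serial`. Calling serial.sync from here would
    // ask the queue to run a second block before the current one finishes --
    // the queue waits on itself and this thread blocks forever:
    //
    // serial.sync { }   // <- classic deadlock, shown commented out
    ran = true
}

// The same trap exists on the main thread: code running on the main queue
// that calls DispatchQueue.main.sync { } hangs the app.
print(ran)  // true
```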

Recognize common threads

Manager thread

In almost any application that uses GCD, you will see the manager thread, which is responsible for servicing dispatch sources. You can recognize it because the dispatch manager routine sits at the root of its stack.

Idle GCD thread

(workq_kernreturn) marks idle threads in the thread pool. At the bottom of the stack, the start-work-queue-thread frame indicates that it is a GCD thread; the workq_kernreturn frame at the top indicates that it is currently idle.

Active GCD thread

Idle main thread