In development we often use GCD to handle asynchronous work. It feels familiar yet unfamiliar: concepts such as what GCD is, what a task is, the difference between serial and concurrent queues, synchronous versus asynchronous functions, how queues and functions combine, and how GCD is wrapped at the lower level all remain vague. Here we will analyze them one by one.

1. GCD-related concepts


GCD stands for Grand Central Dispatch, a pure C API that provides many powerful functions.

Advantages of GCD:

  • GCD is Apple’s solution for multi-core parallel computing
  • GCD automatically makes use of multiple CPU cores (e.g., dual-core, quad-core)
  • GCD automatically manages the thread lifecycle (creating threads, scheduling tasks, destroying threads)
  • The programmer only tells GCD what tasks to perform, without writing any thread management code

Summary: GCD adds tasks to queues and specifies the function used to execute them.

2. Tasks

Tasks are encapsulated as blocks, which have no parameters and no return value. Tasks are scheduled through queues and executed by threads.

How are tasks encapsulated and invoked? That's a question!

3. Functions

Functions that execute tasks fall into two categories: asynchronous and synchronous.

  • Asynchronous function: dispatch_async
    • The next statement executes without waiting for the current task to complete
    • Can start a new thread to execute the block’s task
    • Asynchrony is the hallmark of multithreading
  • Synchronous function: dispatch_sync
    • The next statement executes only after the current task completes
    • Does not start a new thread
    • Executes the block’s task on the current thread

4. Queues

There are two types of queues: serial and concurrent. Tasks are arranged differently in each. Tasks are scheduled by queues and executed by threads drawn from the thread pool.

Both serial and concurrent queues follow the FIFO principle: first in, first scheduled. How long a task takes depends on the task’s complexity.

  • Serial queue: a narrow path; tasks are arranged in order and executed one by one
  • Concurrent queue: a wide channel; multiple tasks may execute at the same time

What a queue is, how it is encapsulated, and how it schedules tasks is what we need to look at next.

5. Queues and functions

This clarifies the relationship between queues, functions, and tasks: a queue schedules tasks, and a function executes them. So how do the different combinations of queues and functions behave?

  • Synchronous function + serial queue
    1. No thread is started; tasks execute on the current thread
    2. Tasks execute sequentially, one after another
    3. Can block the current thread (and may deadlock)
  • Synchronous function + concurrent queue
    1. No thread is started; tasks execute on the current thread
    2. Tasks execute one after another
  • Asynchronous function + serial queue
    1. Starts one new thread
    2. Tasks execute one after another
  • Asynchronous function + concurrent queue
    1. Starts threads, so tasks need not run on the current thread
    2. Tasks execute asynchronously in no fixed order, as scheduled by the CPU

2. Case analysis of GCD

1. Analyzing task timing

Let’s use a case study to analyze the time consumed when tasks are executed via different functions. See below:

  1. Between the two timestamps nothing is done except creating a serial queue, which takes 0.000001 seconds, as shown below:

  2. Executing the test method directly on the main thread raises the time to 0.000142 seconds.

  3. Putting the task into a serial queue and executing it synchronously takes 0.000117 seconds. See below:

  4. Putting the task into a serial queue and executing it asynchronously (the asynchronous function starts a thread to run the task) takes 0.000007 seconds. See below:

The following conclusions can be drawn from the above cases:

  • In every case, executing a task costs some time, however small
  • The asynchronous call itself returns quickly, and asynchrony is what we use to solve concurrency and multithreading problems in development

Next, let’s analyze some more representative cases.

2. Add a synchronous task to the main queue

What happens if we add a task to the current main queue and execute it synchronously? It crashes! See below:

Because the default queue of the current process is the main queue, which is a serial queue, the statements are enqueued in this order:

  1. NSLog(@"0")
  2. the dispatch_sync block
  3. NSLog(@"2")

The block in step 2 adds a new task, NSLog(@"1"), to the same main queue. Now the two are waiting on each other, which is what we commonly call a deadlock! Why? See below:

Because the main queue is serial, NSLog(@"1") cannot run until the current task, which ends with NSLog(@"2"), has finished; but the current task is blocked inside dispatch_sync, which cannot return until NSLog(@"1") has run. The main queue thus enters a state of mutual waiting, which is a deadlock! To verify, see the following figure:

How can this be solved? Replace the main queue with a custom serial queue. See below:

3. Add asynchronous tasks to the concurrent queue

In the following case the queue is concurrent, a wide channel, so this nesting does not cause a crash! See below:

What does the result look like?

  1. Because the tasks are of roughly equal complexity, the print order is approximately 1-5-2-4-3;
  2. dispatch_async starts a new thread to execute the tasks.

To verify, see the following figure:

  • What happens if the asynchronous function is changed to a synchronous one?

    Because the queue is concurrent, the queued tasks do not block one another; and because execution is synchronous, no new threads are started and the tasks execute in order. See below:

4. Add a synchronous task to a serial queue

Refer to the following example: create a serial queue and synchronously add task 3 to it. What happens?

  • This case is the same as adding a synchronous task to the main queue, which is also a serial queue, so it crashes too! Why? See the following task diagram for the serial queue:

    Once task 2 is on the serial queue, NSLog(@"3") cannot run until the current task, which ends with NSLog(@"4"), has finished; but the current task is blocked inside dispatch_sync, which cannot return until NSLog(@"3") has run. The serial queue thus enters a state of mutual waiting, which is a deadlock! Although we are on a child thread here, the queue still follows FIFO rules and must finish the current statement before executing the next one. So it crashes!


5. Concurrent queue multitasking

What happens when you add synchronous and asynchronous tasks to a concurrent queue? See the following example:

Result analysis:

  • Ten tasks are dispatched in total
  • Task 3 is dispatched synchronously, so Task 3 must come before Task 0
  • The order of Task 1, Task 2, and Task 3 is uncertain
  • Task 7, Task 8, and Task 9 must come after Task 0

3. Queue source exploration

To study the internal implementation we need the source code. Viewing the assembly shows no source is available locally, but setting a symbolic breakpoint on dispatch_queue_create reveals where dispatch comes from. See below:

GCD lives in the libdispatch.dylib dynamic library. Download the libdispatch source code and let’s explore GCD!

1. The main queue

  1. dispatch_get_main_queue() analysis

    Enter the main queue definition as shown below:

    From the comments above we can see two things:

    1. The main queue is used in the application for the main program and the main RunLoop
    2. The main queue is created before the main function runs, during the dyld load phase of the application

    The main queue is obtained through DISPATCH_GLOBAL_OBJECT. A global search for DISPATCH_GLOBAL_OBJECT shows it is a macro, defined as follows:

    #define DISPATCH_GLOBAL_OBJECT(type, object) ((type)&(object))

    The first parameter is a queue type and the second is the main queue object; the main queue can be understood as the type wrapped around that object. A global search for _dispatch_main_q = shows where the main queue is initialized, as shown below:

    The main queue is a structure whose header comes from the DISPATCH_GLOBAL_OBJECT_HEADER macro. A global search shows DISPATCH_GLOBAL_OBJECT_HEADER defined as follows:

  2. Finding the main queue definition through its label

    The analysis above is admittedly a bit of a shortcut; this time let’s locate the main queue by its label. When debugging on the main thread, the run stack shows the following:

    In the figure you can see the thread and the stack of tasks it is executing. On the main thread the queue is of serial type, and its corresponding label is shown in the figure.

    Print a custom serial queue, a concurrent queue, the main queue, and a global queue. See below:

    When creating a custom queue you pass in the queue’s name, which is its label, along with the queue type. As the printout above shows, each queue has a corresponding name. We search the libdispatch source globally for the queue’s label. See below:

    The final location is the same as in the dispatch_get_main_queue() analysis: the main queue is a structure, with its label set by default.

  3. So where is the main queue initialized?

    When we looked at the comment on the main queue definition earlier, it noted that the main queue is initialized before the main function. When we explored dyld and the initialization of _objc_init, we saw the order libSystem_init -> libdispatch_init -> _objc_init. So is the main queue initialized in the dispatch_init() method?

    Indeed, dispatch_init() is where the main queue is initialized: the default queue is set up, and the main queue’s address is bound to the current queue and to the main thread.

  4. Conclusion

    The main queue is created in dispatch_init(), which is called while the application loads, before the main function. It is the current default queue and is bound to the main thread.

The main queue is a serial queue, but where is it determined to be serial? That comes in the analysis below!

2. Global queue

  • dispatch_get_global_queue(0, 0)

    To enter the global queue definition, see the following figure:

    When creating a global concurrent queue, parameters can be passed to obtain different queues based on quality of service or priority. From this we can infer that there should be a global collection maintaining these concurrent queues.

  • Searching from the global queue’s definition

    Get a set of queues that provide different global queues according to different qualities of service, as shown below:

  • Conclusion

    The system maintains a collection of global queues, providing different ones based on quality of service or priority. The default we use in development is dispatch_get_global_queue(0, 0).

3. The custom queue creation process

Because the libdispatch source cannot be compiled and run directly, we can only locate and analyze the key steps one by one. Globally search for dispatch_queue_create(const) to find the entry point:

  • label: the queue name

  • attr: the type of queue to create

  • _dispatch_lane_create_with_target is then called with the default target queue type

    As the figure above shows, the default target queue type is defined as NULL.

Then search globally for _dispatch_lane_create_with_target(const) to locate where the queue was created.

// label -> name
// dqa   -> attribute: the type to create (serial or concurrent)
// tq    -> target queue; the default (serial) is NULL
DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
        dispatch_queue_t tq, bool legacy)
{
    // dqai carries the attributes passed in via dqa: serial or concurrent
    dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);

    //
    // Step 1: Normalize arguments (qos, overcommit, tq)
    //
    dispatch_qos_t qos = dqai.dqai_qos;
#if !HAVE_PTHREAD_WORKQUEUE_QOS
    if (qos == DISPATCH_QOS_USER_INTERACTIVE) {
        dqai.dqai_qos = qos = DISPATCH_QOS_USER_INITIATED;
    }
    if (qos == DISPATCH_QOS_MAINTENANCE) {
        dqai.dqai_qos = qos = DISPATCH_QOS_BACKGROUND;
    }
#endif // !HAVE_PTHREAD_WORKQUEUE_QOS

    _dispatch_queue_attr_overcommit_t overcommit = dqai.dqai_overcommit;
    if (overcommit != _dispatch_queue_attr_overcommit_unspecified && tq) {
        if (tq->do_targetq) {
            DISPATCH_CLIENT_CRASH(tq, "Cannot specify both overcommit and "
                    "a non-global target queue");
        }
    }

    if (tq && dx_type(tq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE) {
        // Handle discrepancies between attr and target queue, attributes win
        if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
            if (tq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
                overcommit = _dispatch_queue_attr_overcommit_enabled;
            } else {
                overcommit = _dispatch_queue_attr_overcommit_disabled;
            }
        }
        if (qos == DISPATCH_QOS_UNSPECIFIED) {
            qos = _dispatch_priority_qos(tq->dq_priority);
        }
        tq = NULL;
    } else if (tq && !tq->do_targetq) {
        // target is a pthread or runloop root queue, setting QoS or overcommit
        // is disallowed
        if (overcommit != _dispatch_queue_attr_overcommit_unspecified) {
            DISPATCH_CLIENT_CRASH(tq, "Cannot specify an overcommit attribute "
                    "and use this kind of target queue");
        }
    } else {
        if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
            // Serial queues default to overcommit!
            overcommit = dqai.dqai_concurrent ?
                    _dispatch_queue_attr_overcommit_disabled :
                    _dispatch_queue_attr_overcommit_enabled;
        }
    }
    if (!tq) {
        tq = _dispatch_get_root_queue(
                qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
                overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
        if (unlikely(!tq)) {
            DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
        }
    }

    //
    // Step 2: Initialize the queue
    //
    if (legacy) {
        // if any of these attributes is specified, use non legacy classes
        if (dqai.dqai_inactive || dqai.dqai_autorelease_frequency) {
            legacy = false;
        }
    }

    const void *vtable;
    dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
    if (dqai.dqai_concurrent) {
        // OS_dispatch_##name##_class -> OS_dispatch_queue_concurrent,
        // the class concatenated by macro definition
        vtable = DISPATCH_VTABLE(queue_concurrent);
    } else {
        vtable = DISPATCH_VTABLE(queue_serial);
    }
    switch (dqai.dqai_autorelease_frequency) {
    case DISPATCH_AUTORELEASE_FREQUENCY_NEVER:
        dqf |= DQF_AUTORELEASE_NEVER;
        break;
    case DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM:
        dqf |= DQF_AUTORELEASE_ALWAYS;
        break;
    }
    if (label) {
        const char *tmp = _dispatch_strdup_if_mutable(label);
        if (tmp != label) {
            dqf |= DQF_LABEL_NEEDS_FREE;
            label = tmp;
        }
    }

    // alloc + init: the queue is an object
    // OS_dispatch_queue_serial / OS_dispatch_queue_concurrent
    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
            sizeof(struct dispatch_lane_s));
    // DISPATCH_QUEUE_WIDTH_MAX vs 1 distinguishes concurrent from serial
    _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
            DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
            (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

    dq->dq_label = label;
    dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
            dqai.dqai_relpri);
    if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
        dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
    }
    if (!dqai.dqai_inactive) {
        _dispatch_queue_priority_inherit_from_target(dq, tq);
        _dispatch_lane_inherit_wlh_from_target(dq, tq);
    }
    _dispatch_retain(tq);
    dq->do_targetq = tq;
    _dispatch_object_debug(dq, "%s", __func__);
    return _dispatch_trace_queue_create(dq)._dq;  // returns dq
}

Let’s focus on the core points of this method. Its return value is dq: although _dispatch_trace_queue_create is called to wrap dq, it still returns dq, so dq is the focus of our research!

  • The dq creation process

    // OS_dispatch_queue_serial / OS_dispatch_queue_concurrent
    // the object's isa will point at the vtable class
    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
            sizeof(struct dispatch_lane_s));
    // DISPATCH_QUEUE_WIDTH_MAX vs 1 distinguishes concurrent from serial
    _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
            DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
            (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

    dq->dq_label = label;
    dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
            dqai.dqai_relpri);
    if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
        dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
    }
    if (!dqai.dqai_inactive) {
        _dispatch_queue_priority_inherit_from_target(dq, tq);
        _dispatch_lane_inherit_wlh_from_target(dq, tq);
    }
    _dispatch_retain(tq);
    dq->do_targetq = tq;
    _dispatch_object_debug(dq, "%s", __func__);
    • Call the _dispatch_object_alloc method to initialize and allocate memory

      The _dispatch_object_alloc method initializes the queue object. Much like NSObject initialization, it is passed the template (class) and size of the object to create. See below:

      The initialization is completed by _os_object_alloc_realized. In this function memory is allocated with calloc, and isa is pointed at cls, which is the vtable. See below:

    • _dispatch_queue_init is the constructor

      When this method is called, the third argument is:

      dqai.dqai_concurrent ? DISPATCH_QUEUE_WIDTH_MAX : 1

      In other words, this argument distinguishes the queue type: a serial queue passes 1, and a concurrent queue passes DISPATCH_QUEUE_WIDTH_MAX. Its definition is shown in the following figure:

      Entering the _dispatch_queue_init constructor we find an important clue: DQF_WIDTH(width) records the queue’s width, which is what distinguishes serial queues from concurrent queues, as shown in the following figure:

      At this point we can also determine the type of the main queue: its structure definition uses DQF_WIDTH(1), so the main queue is a serial queue.

    • Setting dq’s properties, such as dq_label and dq_priority

The construction of dq depends on vtable, dqai, and dqf, so where do those come from? Let’s continue the analysis.

  • dqai initialization

    dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);

    In the first line of _dispatch_lane_create_with_target, dqai is initialized, where dqa is the type of queue to create. Check the source of _dispatch_queue_attr_to_info:

    When dqai is initialized, the type of dqa is examined: for a concurrent queue, dqai_concurrent is set to true; otherwise the queue defaults to serial. Later, when dq is constructed with _dispatch_queue_init, the queue type is encoded via DQF_WIDTH(width): width = 1 for a serial queue, otherwise it is a concurrent queue.

  • vtable

    What is a vtable? You can think of it as a class, a template from which queues are constructed. The vtable initialization process is in the following source:

    const void *vtable;
    if (dqai.dqai_concurrent) {
        // OS_dispatch_##name##_class -> OS_dispatch_queue_concurrent,
        // the class type concatenated by macro definition
        vtable = DISPATCH_VTABLE(queue_concurrent);
    } else {
        vtable = DISPATCH_VTABLE(queue_serial);
    }

    Based on the queue type that dqai recorded earlier, a different vtable is initialized. DISPATCH_VTABLE is a macro; a global search for its definition leads to the following chain:

    #define DISPATCH_VTABLE(name) DISPATCH_OBJC_CLASS(name)
    #define DISPATCH_OBJC_CLASS(name) (&DISPATCH_CLASS_SYMBOL(name))
    #define DISPATCH_CLASS_SYMBOL(name) OS_dispatch_##name##_class

    The parameter passed to DISPATCH_VTABLE varies with the queue type.

    • Concurrent queue

      With queue_concurrent passed in, the queue’s class is OS_dispatch_queue_concurrent

    • Serial queue

      With queue_serial passed in, the queue’s class is OS_dispatch_queue_serial

    So the vtable corresponds to the queue’s type: the class name is produced by macro concatenation, which matches the queue types we see at the application level, as shown in the following figure:

  • Conclusion

    We can now summarize the creation of a custom queue. The underlying implementation wraps the label (queue name) and queue type passed in from the upper layer; initializes the corresponding vtable, i.e. the queue’s class (template class), according to that type; completes memory allocation and construction through the alloc and init steps; points the object’s isa at the vtable; and records the queue’s width to distinguish serial from concurrent.

4. Function execution

As summarized above, the functions that execute tasks fall into synchronous and asynchronous ones:

  • Synchronous function: dispatch_sync
  • Asynchronous function: dispatch_async

So what is the execution logic of these functions? How are tasks in a queue scheduled and processed? And where, at the bottom, does an asynchronous function create its thread?

1. The synchronous function

In the libdispatch.dylib source code, a global search for dispatch_sync(dis) found the entry to the synchronization function. See below:

Its two parameters are the queue dq and the task work. Continue tracing: dispatch_sync calls _dispatch_sync_f. Globally search for _dispatch_sync_f(dis), see below:

Continue tracing the source code and search globally for _dispatch_sync_f_inline(dis).

At this point:

  • Encapsulation process
    1. dispatch_sync
    2. _dispatch_sync_f
    3. _dispatch_sync_f_inline
  • Parameters
    1. dq: the queue
    2. ctxt: the work task
    3. func: a function produced by the macro _dispatch_Block_invoke
          #define _dispatch_Block_invoke(bb) \
              ((dispatch_function_t)((struct Block_layout *)bb)->invoke)

So in _dispatch_sync_f_inline, which branch should we focus on? Let’s briefly analyze the first judgment:

if (likely(dq->dq_width == 1)) {
    return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
}

As analyzed above, dq_width == 1 means a serial queue, so this branch handles the synchronous process for serial queues (including the main queue). So what is the difference between _dispatch_sync_f_slow and _dispatch_sync_recurse? We can tell with symbolic breakpoints: setting them shows that execution reaches _dispatch_sync_f_slow. See below:

Enter the _dispatch_sync_f_slow method, as shown in the following figure:

Which step do we need to study? Clearly it is unlikely that NULL is passed, so the flow reaches the _dispatch_sync_function_invoke method, which we again confirm with a symbolic breakpoint. See below:

Inside _dispatch_sync_function_invoke, the _dispatch_sync_function_invoke_inline method is invoked, as shown in the following figure:

Continue tracing the source code into the _dispatch_sync_function_invoke_inline implementation, as shown below:

There are many methods here; which is the key one to study? Let’s look at the synchronous function’s call stack via bt, as shown below:

The _dispatch_client_callout method finally performs the call that executes the task. Enter _dispatch_client_callout, see the following figure:

The search shows there are several _dispatch_client_callout variants, but they all end up invoking the work through func.

  • Conclusion

    In the synchronous path, the work task is executed through func, which is _dispatch_Block_invoke.

2. Asynchronous functions

In the libdispatch.dylib source code, a global search for dispatch_async(dis) found the entry for asynchronous functions. See below:

  • Encapsulation of the task _dispatch_continuation_init

    In this method the work is wrapped; _dispatch_continuation_init can be thought of as the task’s packaging step. For the implementation, go into _dispatch_continuation_init:

    The _dispatch_Block_invoke function looks familiar: it was the synchronous function’s func. But here there is an additional judgment, as shown in the following code:

    dispatch_function_t func = _dispatch_Block_invoke(work);
    if (dc_flags & DC_FLAG_CONSUME) {
        func = _dispatch_call_block_and_release;
    }

    The initial value of dc_flags is DC_FLAG_CONSUME, so the & test passes, the branch is taken, and func is reset to _dispatch_call_block_and_release.

    Enter _dispatch_continuation_init_f, as shown below:

    Here work and func are wrapped into the dc, and the task’s priority is handled at the same time. Because asynchronous tasks are invoked later by threads scheduled by the CPU, the priority must be recorded here.

Back in the implementation of dispatch_async: after the task is packaged, the _dispatch_continuation_async method is called to handle the asynchronous path. See below:

dx_push takes three parameters: the queue, dc (the wrapped task), and qos. A global search for dx_push shows where the macro is defined.

The vtable here is the template class from our analysis of queue creation and initialization, i.e. the class corresponding to the queue; dx_push dispatches through the vtable’s dq_push entry, so each queue class provides its own dq_push implementation.

The underlying layer provides different entry points for different queue types; a global concurrent queue, for example, calls _dispatch_root_queue_push. Taking that as the entry, globally search for the implementation of _dispatch_root_queue_push:

This step only does some judgment and wrapping, eventually reaching the last line, _dispatch_root_queue_push_inline; we continue tracing from there:

_dispatch_root_queue_push_inline calls _dispatch_root_queue_poke, whose core path is _dispatch_root_queue_poke_slow, as shown in the following figure:

  • _dispatch_root_queue_poke_slow implementation

    _dispatch_root_queues_init() is a key step inside _dispatch_root_queue_poke_slow, as shown below:

    Enter the _dispatch_root_queues_init() method, which is protected by a singleton, as shown in the following figure:

  • The singleton is _dispatch_root_queues_init_once

    Let’s go into the _dispatch_root_queues_init_once method — what happens there? See below:

    Here the thread pool is initialized and the work queues are configured and initialized, which explains why _dispatch_root_queues_init_once is guarded by a singleton: repeated initialization must be avoided.

    One key setting is the execution function, which is set to _dispatch_worker_thread2. See the following code:

    cfg.workq_cb = _dispatch_worker_thread2;

    We can verify by printing the run stack with bt that the asynchronous function’s task is ultimately invoked from _dispatch_worker_thread2. See the figure below:

  • Conclusion

    Tracing the asynchronous path shows that the system executes a different dq_push for each queue type, completes the thread pool initialization and work queue configuration behind a singleton, and finally invokes the tasks of asynchronous functions through _dispatch_worker_thread2.

5. Open questions

We have analyzed the application of GCD, its underlying principles, and its function execution logic, but some questions remain unanswered, such as:

  • Where are the threads for asynchronous functions created?
  • The underlying layer completes task execution through _dispatch_worker_thread2, so where is that call initiated?
  • What is the logic behind singletons in GCD?

There are a few things we haven’t explored in the _dispatch_root_queue_poke_slow method!

The next article will be more in-depth analysis!