Last time we introduced some concepts of multithreading; this time we will focus on GCD, a common multithreading technology used in iOS development.

The concept of GCD

  • GCD is Apple’s solution for multi-core parallel computing
  • GCD automatically utilizes multiple CPU cores (e.g. dual-core, quad-core)
  • GCD automatically manages thread lifecycles (creating threads, scheduling tasks, destroying threads)
  • The programmer just tells GCD what tasks to perform and doesn’t have to write any thread-management code.

In summary, using GCD means adding a task to a queue and specifying the function that executes it.

Basic use of GCD

This is how we normally use GCD:

// Create a task block
dispatch_block_t block = ^{
  NSLog(@"this is the task");
};
// Create a queue
dispatch_queue_t queue = dispatch_queue_create("com.lg.cn", NULL);
// Add the task to the queue asynchronously
dispatch_async(queue, block);

In summary, there are three steps:

  • Create the task block (dispatch_block_t)

  • Create the queue (dispatch_queue_t)

  • Add the task to the queue and execute it with a task function (dispatch_async or dispatch_sync)

    Two other concepts that we’re actually familiar with are functions and queues.

    • Functions include the synchronous function (dispatch_sync) and the asynchronous function (dispatch_async).
    • Queues include serial queues (DISPATCH_QUEUE_SERIAL) and concurrent queues (DISPATCH_QUEUE_CONCURRENT).

The main queue

The main queue (dispatch_queue_main_t) is created when the application starts. It is the queue the main thread runs on, and it exists throughout the application’s lifetime. From the comment on the dispatch_get_main_queue function we can see that the main queue is a serial queue, which makes sense: tasks in a serial queue execute one after another, and tasks on the main thread behave the same way.

The comment notes that it is a serial queue, though not exactly a standard serial queue:

/*
 * Because the main queue doesn't behave entirely like a regular serial queue,
 * it may have unwanted side-effects when used in processes that are not UI apps
 * (daemons). For such processes, the main queue should be avoided.
 *
 * @see dispatch_queue_main_t
 *
 * @result
 * Returns the main queue. This queue is created automatically on behalf of
 * the main thread before main() is called.
 */
DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_CONST DISPATCH_NOTHROW
dispatch_queue_main_t
dispatch_get_main_queue(void)
{
  return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
}

Let’s download the libdispatch source code and look at dispatch_get_main_queue. It returns DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q), which is a macro:

#define DISPATCH_GLOBAL_OBJECT(type, object) (static_cast<type>(&(object)))

You can see that the first parameter, type, is the type, and the second parameter, object, receives _dispatch_main_q as its real argument, so let’s search globally for _dispatch_main_q:

struct dispatch_queue_static_s _dispatch_main_q = {
  DISPATCH_GLOBAL_OBJECT_HEADER(queue_main),
#if !DISPATCH_USE_RESOLVERS
  .do_targetq = _dispatch_get_default_queue(true),
#endif
  .dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(1) |
      DISPATCH_QUEUE_ROLE_BASE_ANON,
  .dq_label = "com.apple.main-thread",
  .dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),
  .dq_serialnum = 1,
};

We find that _dispatch_main_q is a global structure: underneath, the main queue (dispatch_queue_main_t) is really a dispatch_queue_static_s structure.

How the source distinguishes serial and concurrent queues

As we saw above, a GCD queue is essentially a dispatch_queue_static_s structure. Which member of the structure indicates whether the queue is serial or concurrent? We can find the answer in the source. A queue is created with the dispatch_queue_create function, whose second argument is the queue type.

dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
  return _dispatch_lane_create_with_target(label, attr,
      DISPATCH_TARGET_QUEUE_DEFAULT, true);
}

Following the call, we find the _dispatch_lane_create_with_target function:

DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
    dispatch_queue_t tq, bool legacy)
{
  dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);

  //
  // Step 1: Normalize arguments (qos, overcommit, tq)
  //
  dispatch_qos_t qos = dqai.dqai_qos;
#if !HAVE_PTHREAD_WORKQUEUE_QOS
  if (qos == DISPATCH_QOS_USER_INTERACTIVE) {
    dqai.dqai_qos = qos = DISPATCH_QOS_USER_INITIATED;
  }
  if (qos == DISPATCH_QOS_MAINTENANCE) {
    dqai.dqai_qos = qos = DISPATCH_QOS_BACKGROUND;
  }
#endif // !HAVE_PTHREAD_WORKQUEUE_QOS

  _dispatch_queue_attr_overcommit_t overcommit = dqai.dqai_overcommit;
  if (overcommit != _dispatch_queue_attr_overcommit_unspecified && tq) {
    if (tq->do_targetq) {
      DISPATCH_CLIENT_CRASH(tq, "Cannot specify both overcommit and "
          "a non-global target queue");
    }
  }

  if (tq && dx_type(tq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE) {
    // Handle discrepancies between attr and target queue, attributes win
    if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
      if (tq->dq_priority & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) {
        overcommit = _dispatch_queue_attr_overcommit_enabled;
      } else {
        overcommit = _dispatch_queue_attr_overcommit_disabled;
      }
    }
    if (qos == DISPATCH_QOS_UNSPECIFIED) {
      qos = _dispatch_priority_qos(tq->dq_priority);
    }
    tq = NULL;
  } else if (tq && !tq->do_targetq) {
    // target is a pthread or runloop root queue, setting QoS or overcommit
    // is disallowed
    if (overcommit != _dispatch_queue_attr_overcommit_unspecified) {
      DISPATCH_CLIENT_CRASH(tq, "Cannot specify an overcommit attribute "
          "and use this kind of target queue");
    }
  } else {
    if (overcommit == _dispatch_queue_attr_overcommit_unspecified) {
      // Serial queues default to overcommit!
      overcommit = dqai.dqai_concurrent ?
          _dispatch_queue_attr_overcommit_disabled :
          _dispatch_queue_attr_overcommit_enabled;
    }
  }
  if (!tq) {
    tq = _dispatch_get_root_queue(
        qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
        overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
    if (unlikely(!tq)) {
      DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
    }
  }

  //
  // Step 2: Initialize the queue
  //
  if (legacy) {
    // if any of these attributes is specified, use non legacy classes
    if (dqai.dqai_inactive || dqai.dqai_autorelease_frequency) {
      legacy = false;
    }
  }

  const void *vtable;
  dispatch_queue_flags_t dqf = legacy ? DQF_MUTABLE : 0;
  if (dqai.dqai_concurrent) {
    vtable = DISPATCH_VTABLE(queue_concurrent);
  } else {
    vtable = DISPATCH_VTABLE(queue_serial);
  }
  switch (dqai.dqai_autorelease_frequency) {
  case DISPATCH_AUTORELEASE_FREQUENCY_NEVER:
    dqf |= DQF_AUTORELEASE_NEVER;
    break;
  case DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM:
    dqf |= DQF_AUTORELEASE_ALWAYS;
    break;
  }
  if (label) {
    const char *tmp = _dispatch_strdup_if_mutable(label);
    if (tmp != label) {
      dqf |= DQF_LABEL_NEEDS_FREE;
      label = tmp;
    }
  }

  dispatch_lane_t dq = _dispatch_object_alloc(vtable,
      sizeof(struct dispatch_lane_s));
  _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
      DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
      (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

  dq->dq_label = label;
  dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
      dqai.dqai_relpri);
  if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
    dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
  }
  if (!dqai.dqai_inactive) {
    _dispatch_queue_priority_inherit_from_target(dq, tq);
    _dispatch_lane_inherit_wlh_from_target(dq, tq);
  }
  _dispatch_retain(tq);
  dq->do_targetq = tq;
  _dispatch_object_debug(dq, "%s", __func__);
  return _dispatch_trace_queue_create(dq)._dq;
}

This is a long function, but the return value _dispatch_trace_queue_create(dq)._dq tells us where to look: how dq is created and how its members are assigned.

dispatch_lane_t dq = _dispatch_object_alloc(vtable,
      sizeof(struct dispatch_lane_s));
  _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
      DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
      (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

The implementation of _dispatch_queue_init:

static inline dispatch_queue_class_t
_dispatch_queue_init(dispatch_queue_class_t dqu, dispatch_queue_flags_t dqf,
    uint16_t width, uint64_t initial_state_bits)
{
  uint64_t dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(width);
  dispatch_queue_t dq = dqu._dq;
​
  dispatch_assert((initial_state_bits & ~(DISPATCH_QUEUE_ROLE_MASK |
      DISPATCH_QUEUE_INACTIVE)) == 0);
​
  if (initial_state_bits & DISPATCH_QUEUE_INACTIVE) {
    dq->do_ref_cnt += 2; // rdar://8181908 see _dispatch_lane_resume
    if (dx_metatype(dq) == _DISPATCH_SOURCE_TYPE) {
      dq->do_ref_cnt++; // released when DSF_DELETED is set
    }
  }
​
  dq_state |= initial_state_bits;
  dq->do_next = DISPATCH_OBJECT_LISTLESS;
  dqf |= DQF_WIDTH(width);
  os_atomic_store2o(dq, dq_atomic_flags, dqf, relaxed);
  dq->dq_state = dq_state;
  dq->dq_serialnum =
      os_atomic_inc_orig(&_dispatch_queue_serial_numbers, relaxed);
  return dqu;
}

Notice how the third parameter, width, is stored: dqf |= DQF_WIDTH(width); That is:

  • width = 1 indicates a serial queue
  • width = DISPATCH_QUEUE_WIDTH_MAX indicates a concurrent queue, where DISPATCH_QUEUE_WIDTH_MAX is defined as follows:

    #define DISPATCH_QUEUE_WIDTH_FULL     0x1000ull
    #define DISPATCH_QUEUE_WIDTH_MAX  (DISPATCH_QUEUE_WIDTH_FULL - 2)

Inheritance chain for dispatch_queue_t

So what is the inheritance chain of dispatch_queue_t? If we Cmd+click dispatch_queue_t in our code, we jump to DISPATCH_DECL(dispatch_queue);, which is the definition of dispatch_queue_t. Let’s search the libdispatch source for where DISPATCH_DECL is defined.

#define DISPATCH_DECL(name) \
    typedef struct name##_s : public dispatch_object_s {} *name##_t
///dispatch_queue_t -> dispatch_queue_s -> dispatch_object_s

You can see that the inheritance chain is dispatch_queue_t -> dispatch_queue_s -> dispatch_object_s

Let’s look at the structures.

struct dispatch_queue_s {
  DISPATCH_QUEUE_CLASS_HEADER(queue, void *__dq_opaque1);
  /* 32bit hole on LP64 */
} DISPATCH_ATOMIC64_ALIGN;

Continue with the _DISPATCH_QUEUE_CLASS_HEADER macro:

#define _DISPATCH_QUEUE_CLASS_HEADER(x, __pointer_sized_field__) \
  DISPATCH_OBJECT_HEADER(x); \
  __pointer_sized_field__; \
  DISPATCH_UNION_LE(uint64_t volatile dq_state, \
      dispatch_lock dq_state_lock, \
      uint32_t dq_state_bits \
  )
#endif

Continue searching from DISPATCH_OBJECT_HEADER:

#define _DISPATCH_OBJECT_HEADER(x) \
  struct _os_object_s _as_os_obj[0]; \
  OS_OBJECT_STRUCT_HEADER(dispatch_##x); \
  struct dispatch_##x##_s *volatile do_next; \
  struct dispatch_queue_s *do_targetq; \
  void *do_ctxt; \
  union { \
    dispatch_function_t DISPATCH_FUNCTION_POINTER do_finalizer; \
    void *do_introspection_ctxt; \
  }

The final link in the chain is _os_object_s, so the full inheritance chain is:

dispatch_queue_t -> dispatch_queue_s -> dispatch_object_s -> _os_object_s

Function call timing

dispatch_sync(queue, ^{
    NSLog(@"function called");
});

In this section, we explore when the block parameter of the function is called. Let’s use the synchronous function as an example:

DISPATCH_NOINLINE
void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
  uintptr_t dc_flags = DC_FLAG_BLOCK;
  if (unlikely(_dispatch_block_has_private_data(work))) {
    return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
  }
  _dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}

work is the block we pass in, so let’s look at the code associated with the work parameter.

Implementation of the _dispatch_Block_invoke function:

#define _dispatch_Block_invoke(bb) \
    ((dispatch_function_t)((struct Block_layout *)bb)->invoke)

As you can see, _dispatch_Block_invoke casts the block to a struct Block_layout and takes its invoke member, the function pointer that runs the block’s body.

Let’s look at the implementation of _dispatch_sync_f:

static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
    uintptr_t dc_flags)
{
  _dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}

Continue to trace the _dispatch_sync_f_inline function:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
    dispatch_function_t func, uintptr_t dc_flags)
{
  if (likely(dq->dq_width == 1)) {
    return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
  }
  if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
    DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
  }

  dispatch_lane_t dl = upcast(dq)._dl;
  // Global concurrent queues and queues bound to non-dispatch threads
  // always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
  if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
    return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
  }

  if (unlikely(dq->do_targetq->do_targetq)) {
    return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
  }
  _dispatch_introspection_sync_begin(dl);
  _dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
      _dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}

The ctxt and func parameters of _dispatch_sync_f_inline carry the block, so let’s set a symbolic breakpoint in a demo project to see which branch is executed:

What we find is that the _dispatch_sync_f_slow function is actually called.

So let’s look at the implementation of _dispatch_sync_f_slow:

DISPATCH_NOINLINE
static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
    dispatch_function_t func, uintptr_t top_dc_flags,
    dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
  dispatch_queue_t top_dq = top_dqu._dq;
  dispatch_queue_t dq = dqu._dq;
  if (unlikely(!dq->do_targetq)) {
    return _dispatch_sync_function_invoke(dq, ctxt, func);
  }

  pthread_priority_t pp = _dispatch_get_priority();
  struct dispatch_sync_context_s dsc = {
    .dc_flags    = DC_FLAG_SYNC_WAITER | dc_flags,
    .dc_func     = _dispatch_async_and_wait_invoke,
    .dc_ctxt     = &dsc,
    .dc_other    = top_dq,
    .dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
    .dc_voucher  = _voucher_get(),
    .dsc_func    = func,
    .dsc_ctxt    = ctxt,
    .dsc_waiter  = _dispatch_tid_self(),
  };

  _dispatch_trace_item_push(top_dq, &dsc);
  __DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);

  if (dsc.dsc_func == NULL) {
    // dsc_func being cleared means that the block ran on another thread ie.
    // case (2) as listed in _dispatch_async_and_wait_f_slow.
    dispatch_queue_t stop_dq = dsc.dc_other;
    return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
  }

  _dispatch_introspection_sync_begin(top_dq);
  _dispatch_trace_item_pop(top_dq, &dsc);
  _dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func, top_dc_flags
      DISPATCH_TRACE_ARG(&dsc));
}

The parameters of interest are still ctxt and func. As in the previous step, let’s set symbolic breakpoints on _dispatch_sync_invoke_and_complete_recurse and _dispatch_sync_function_invoke to see which code is executed.

The _dispatch_sync_function_invoke function is actually called:

static void
_dispatch_sync_function_invoke(dispatch_queue_class_t dq, void *ctxt,
    dispatch_function_t func)
{
  _dispatch_sync_function_invoke_inline(dq, ctxt, func);
}

Its inline version pushes a thread frame and then hands ctxt and func on to _dispatch_client_callout:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_class_t dq, void *ctxt,
    dispatch_function_t func)
{
  dispatch_thread_frame_s dtf;
  _dispatch_thread_frame_push(&dtf, dq);
  _dispatch_client_callout(ctxt, func);
  _dispatch_perfmon_workitem_inc();
  _dispatch_thread_frame_pop(&dtf);
}
Copy the code

ctxt and func are finally called in the _dispatch_client_callout function, which has multiple implementations:

DISPATCH_NOINLINE
void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
  _dispatch_get_tsd_base();
  void *u = _dispatch_get_unwind_tsd();
  if (likely(!u)) return f(ctxt);
  _dispatch_set_unwind_tsd(NULL);
  f(ctxt);
  _dispatch_free_unwind_tsd();
  _dispatch_set_unwind_tsd(u);
}
#undef _dispatch_client_callout
void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
  @try {
    return f(ctxt);
  }
  @catch (...) {
    objc_terminate();
  }
}

There are multiple implementations, but the code that calls the block is always f(ctxt).

So the call chain for a block is: dispatch_sync -> _dispatch_sync_f -> _dispatch_sync_f_inline -> _dispatch_sync_f_slow -> _dispatch_sync_function_invoke -> _dispatch_client_callout -> f(ctxt).

Similarly, the asynchronous function dispatch_async can be explored the same way, and its final call is also f(ctxt). Interested readers can dig into it themselves.