The first two articles focused on GCD queues, task scheduling, deadlocks, and singletons; this article starts exploring other GCD functions.

Barrier function

The most direct use of the barrier function is to control task ordering and synchronization:

  • dispatch_barrier_async: the barrier block runs only after all tasks submitted to the queue before it have completed.
  • dispatch_barrier_sync: has the same ordering effect, but it also blocks the current thread, so the code after it has to wait.
  • A barrier can only control tasks submitted to the same concurrent queue.

Using the barrier function

We usually use the barrier function as follows:

 dispatch_queue_t concurrentQueue = dispatch_queue_create("jason", DISPATCH_QUEUE_CONCURRENT);
 dispatch_async(concurrentQueue, ^{
     NSLog(@"123");
 });
 dispatch_async(concurrentQueue, ^{
     sleep(1);
     NSLog(@"456");
 });
 dispatch_barrier_async(concurrentQueue, ^{
     NSLog(@"----%@-----",[NSThread currentThread]);
 });
 dispatch_async(concurrentQueue, ^{
     NSLog(@"789");
 });
 NSLog(@"10 11 12");

We print the result:

123
10 11 12
456
----<NSThread: 0x6000012ca180>{number = 2, name = (null)}-----
789

You can see that the barrier block runs only after the previously submitted tasks finish, and the task submitted after it ("789") has to wait for the barrier. Because the barrier is asynchronous, it does not block the current thread, which is why "10 11 12" prints immediately.

Let’s try it again with the synchronous barrier function:

 dispatch_barrier_sync(concurrentQueue, ^{
     NSLog(@"----%@-----",[NSThread currentThread]);
 });

Print result:

123
456
----<NSThread: 0x6000025a4140>{number = 1, name = main}-----
10 11 12
789

dispatch_barrier_sync blocks the current thread, so NSLog(@"10 11 12"); does not execute until the barrier block has finished.

Note: another point to be aware of is that the barrier function cannot block a global concurrent queue.
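A minimal sketch to see this for yourself (the task bodies below are illustrative, not from the original example); on a global concurrent queue the barrier behaves like an ordinary dispatch_async and does not wait:

dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);
dispatch_async(globalQueue, ^{
    sleep(1);
    NSLog(@"task 1");
});
dispatch_barrier_async(globalQueue, ^{
    // On a global concurrent queue this does NOT wait for "task 1" to finish.
    NSLog(@"barrier");
});
dispatch_async(globalQueue, ^{
    NSLog(@"task 2");
});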

Next, let's look at the barrier function's implementation at the source level and see why a global concurrent queue cannot be blocked by a barrier.

Underlying principle of the barrier function

Let’s take the synchronous dispatch_barrier_sync function as an example.

The debug code adds a sleep(30) inside an earlier task so that the barrier call has to wait for the previous block to finish, which makes it convenient to pause and inspect what the barrier is doing.
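The original debug snippet is not reproduced here; a minimal reconstruction of what it likely looks like, based on the description above (the queue label and log text are assumptions), is:

dispatch_queue_t concurrentQueue = dispatch_queue_create("jason", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(concurrentQueue, ^{
    sleep(30);          // keep the first task running so the barrier has to wait
    NSLog(@"task 1 done");
});
dispatch_barrier_sync(concurrentQueue, ^{
    NSLog(@"barrier");  // break here and step into libdispatch while task 1 is still sleeping
});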

Let’s search for dispatch_barrier_sync in the libdispatch source:

void
dispatch_barrier_sync(dispatch_queue_t dq, dispatch_block_t work)
{
  uintptr_t dc_flags = DC_FLAG_BARRIER | DC_FLAG_BLOCK;
  if (unlikely(_dispatch_block_has_private_data(work))) {
    return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
  }
  _dispatch_barrier_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}

Call _dispatch_barrier_sync_f and continue:

static void
_dispatch_barrier_sync_f(dispatch_queue_t dq, void *ctxt,
    dispatch_function_t func, uintptr_t dc_flags)
{
  _dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
}

Call the _dispatch_barrier_sync_f_inline function:

static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
    dispatch_function_t func, uintptr_t dc_flags)
{
  dispatch_tid tid = _dispatch_tid_self();

  if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
    DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
  }

  dispatch_lane_t dl = upcast(dq)._dl;
  // The more correct thing to do would be to merge the qos of the thread
  // that just acquired the barrier lock into the queue state.
  //
  // However this is too expensive for the fast path, so skip doing it.
  // The chosen tradeoff is that if an enqueue on a lower priority thread
  // contends with this fast path, this thread may receive a useless override.
  //
  // Global concurrent queues and queues bound to non-dispatch threads
  // always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
  if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
    return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
        DC_FLAG_BARRIER | dc_flags);
  }

  if (unlikely(dl->do_targetq->do_targetq)) {
    return _dispatch_sync_recurse(dl, ctxt, func, DC_FLAG_BARRIER | dc_flags);
  }
  _dispatch_introspection_sync_begin(dl);
  _dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
      DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
          dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}

Here we add symbolic breakpoints on the possible branches to see which one is taken. Hitting the breakpoints shows that _dispatch_sync_f_slow is the one actually called.

static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
    dispatch_function_t func, uintptr_t top_dc_flags,
    dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
  dispatch_queue_t top_dq = top_dqu._dq;
  dispatch_queue_t dq = dqu._dq;
  if (unlikely(!dq->do_targetq)) {
    return _dispatch_sync_function_invoke(dq, ctxt, func);
  }

  pthread_priority_t pp = _dispatch_get_priority();
  struct dispatch_sync_context_s dsc = {
    .dc_flags    = DC_FLAG_SYNC_WAITER | dc_flags,
    .dc_func     = _dispatch_async_and_wait_invoke,
    .dc_ctxt     = &dsc,
    .dc_other    = top_dq,
    .dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
    .dc_voucher  = _voucher_get(),
    .dsc_func    = func,
    .dsc_ctxt    = ctxt,
    .dsc_waiter  = _dispatch_tid_self(),
  };

  _dispatch_trace_item_push(top_dq, &dsc);
  __DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);

  if (dsc.dsc_func == NULL) {
    // dsc_func being cleared means that the block ran on another thread ie.
    // case (2) as listed in _dispatch_async_and_wait_f_slow.
    dispatch_queue_t stop_dq = dsc.dc_other;
    return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
  }

  _dispatch_introspection_sync_begin(top_dq);
  _dispatch_trace_item_pop(top_dq, &dsc);
  _dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func, top_dc_flags
      DISPATCH_TRACE_ARG(&dsc));
}

We are already familiar with _dispatch_sync_f_slow, which we explored in the previous article, so we keep adding symbolic breakpoints and continue.

One detail is that __DISPATCH_WAIT_FOR_QUEUE__ is where the wait for the earlier sleep happens. During that wait, dq_push and _dispatch_lane_concurrent_push are called, followed by _dispatch_lane_push -> _dispatch_lane_push_waiter, which blocks the current queue.

  • A custom concurrent queue executes _dispatch_lane_wakeup, which contains the waiting logic for the barrier.
  • A global concurrent queue executes _dispatch_root_queue_wakeup, which has no such waiting logic, so nothing is blocked (see the abridged source below).
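For reference, here are the two wakeup functions, abridged from the libdispatch source (the exact bodies may differ slightly between releases, so treat this as a sketch and check your local copy):

void
_dispatch_lane_wakeup(dispatch_lane_class_t dqu, dispatch_qos_t qos,
    dispatch_wakeup_flags_t flags)
{
  dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;

  // A custom (lane) queue checks for a completing barrier and takes the
  // barrier-complete path, i.e. it knows how to wait for and finish a barrier.
  if (unlikely(flags & DISPATCH_WAKEUP_BARRIER_COMPLETE)) {
    return _dispatch_lane_barrier_complete(dqu, qos, flags);
  }
  if (_dispatch_queue_class_probe(dqu)) {
    target = DISPATCH_QUEUE_WAKEUP_TARGET;
  }
  return _dispatch_queue_wakeup(dqu, qos, flags, target);
}

void
_dispatch_root_queue_wakeup(dispatch_queue_global_t dq,
    DISPATCH_UNUSED dispatch_qos_t qos, dispatch_wakeup_flags_t flags)
{
  // A root (global) queue has no barrier handling at all.
  if (!(flags & DISPATCH_WAKEUP_BLOCK_WAIT)) {
    DISPATCH_INTERNAL_CRASH(dq->dq_priority,
        "Don't try to wake up or override a root queue");
  }
  if (flags & DISPATCH_WAKEUP_CONSUME_2) {
    return _dispatch_release_2_tailcall(dq);
  }
}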

When execution on the queue is complete, the _dispatch_lane_class_barrier_complete function is called, along the chain dx_wakeup -> _dispatch_lane_wakeup -> _dispatch_queue_wakeup.

Finally, _dispatch_client_callout is called and the block passed to the synchronous barrier is executed.

Semaphores

Use of semaphores

First look at the usage code:

dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_semaphore_t sem = dispatch_semaphore_create(0);

dispatch_async(queue, ^{
    dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // sem starts at 0, so this waits
    NSLog(@"Perform task 1");
    NSLog(@"Task 1 completed");
});

dispatch_async(queue, ^{
    sleep(2);
    NSLog(@"Perform task 2");
    NSLog(@"Task 2 completed");
    dispatch_semaphore_signal(sem); // signal: sem + 1
});

Semaphores have three main functions:

  • dispatch_semaphore_create: creates a semaphore
  • dispatch_semaphore_wait: waits on (decrements) the semaphore
  • dispatch_semaphore_signal: signals (increments) the semaphore

Together they can be used to control the maximum number of tasks GCD runs concurrently, as sketched below.
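A minimal sketch of that usage, assuming a cap of 2 concurrent tasks (the cap and the loop are illustrative, not from the article):

dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_semaphore_t sem = dispatch_semaphore_create(2); // at most 2 tasks run at once

for (int i = 0; i < 10; i++) {
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // take a slot (value--)
        NSLog(@"task %d running", i);
        sleep(1);
        dispatch_semaphore_signal(sem);                      // release the slot (value++)
    });
}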

Underlying principles of semaphores

Exploring the underlying principle really means exploring these three functions, so let's look at them in turn.

dispatch_semaphore_wait

Search dispatch_semaphore_wait globally.

intptr_t
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
  //--1
  long value = os_atomic_dec2o(dsema, dsema_value, acquire);
  if (likely(value >= 0)) {
    return 0;
  }
  return _dispatch_semaphore_wait_slow(dsema, timeout);
}

In the previous example we created the semaphore with a value of 0; after the decrement the value is less than 0, so the _dispatch_semaphore_wait_slow function is executed.

DISPATCH_NOINLINE
static intptr_t
_dispatch_semaphore_wait_slow(dispatch_semaphore_t dsema,
    dispatch_time_t timeout)
{
  long orig;

  _dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
  switch (timeout) {
  default:
    if (!_dispatch_sema4_timedwait(&dsema->dsema_sema, timeout)) {
      break;
    }
    // Fall through and try to undo what the fast path did to
    // dsema->dsema_value
  case DISPATCH_TIME_NOW:
    orig = dsema->dsema_value;
    while (orig < 0) {
      if (os_atomic_cmpxchgv2o(dsema, dsema_value, orig, orig + 1,
          &orig, relaxed)) {
        return _DSEMA4_TIMEOUT();
      }
    }
    // Another thread called semaphore_signal().
    // Fall through and drain the wakeup.
  case DISPATCH_TIME_FOREVER:
    _dispatch_sema4_wait(&dsema->dsema_sema);
    break;
  }
  return 0;
}

We can see that the actual waiting is done by _dispatch_sema4_wait:

void
_dispatch_sema4_wait(_dispatch_sema4_t *sema)
{
  int ret = 0;
  do {
    ret = sem_wait(sema);
  } while (ret == -1 && errno == EINTR);
  DISPATCH_SEMAPHORE_VERIFY_RET(ret);
}

sem_wait is a C library function; the do-while loop keeps waiting (retrying while the call is interrupted) until the semaphore value satisfies the condition.

dispatch_semaphore_signal

Search dispatch_semaphore_signal globally:

intptr_t
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
  //++1
  long value = os_atomic_inc2o(dsema, dsema_value, release);
  if (likely(value > 0)) {
    return 0;
  }
  if (unlikely(value == LONG_MIN)) {
    DISPATCH_CLIENT_CRASH(value,
        "Unbalanced call to dispatch_semaphore_signal()");
  }
  return _dispatch_semaphore_signal_slow(dsema);
}

If the value is still <= 0 after the increment, there are threads waiting, so execution goes to _dispatch_semaphore_signal_slow, which handles waking them.
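For reference, the slow path itself is short (abridged from the libdispatch source; check your copy for the exact version):

DISPATCH_NOINLINE
intptr_t
_dispatch_semaphore_signal_slow(dispatch_semaphore_t dsema)
{
  _dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
  _dispatch_sema4_signal(&dsema->dsema_sema, 1); // wake one waiter
  return 1;
}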

summary

  • dispatch_semaphore_create: initializes the semaphore.
  • dispatch_semaphore_wait: decrements the semaphore's value (value--) and waits if the result is negative.
  • dispatch_semaphore_signal: increments the semaphore's value (value++) and wakes a waiter if needed (a combined usage sketch follows below).
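Putting wait and signal together, a semaphore can also make an asynchronous call behave synchronously; a minimal sketch, where fetchValueAsync is a hypothetical helper defined only so the example is self-contained:

// Hypothetical async API, defined here just for illustration.
static void fetchValueAsync(void (^completion)(NSInteger value)) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        sleep(1);
        completion(42);
    });
}

// Caller: blocks until the async work signals the semaphore.
__block NSInteger result = 0;
dispatch_semaphore_t sem = dispatch_semaphore_create(0);
fetchValueAsync(^(NSInteger value) {
    result = value;
    dispatch_semaphore_signal(sem);                  // value++
});
dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // value--, waits until signaled
NSLog(@"result = %ld", (long)result);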

Dispatch group

Dispatch group usage

Sometimes we need to wait for several requests to return data before proceeding to the next step:

dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

dispatch_group_async(group, queue, ^{
    // download task 1
    sleep(2);
});

dispatch_group_async(group, queue, ^{
    // download task 2
    sleep(2);
});

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // next step after all tasks have finished
});

Or:

dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

dispatch_group_enter(group);
dispatch_async(queue, ^{
    // download task 1
    sleep(2);
    dispatch_group_leave(group);
});

dispatch_group_enter(group);
dispatch_async(queue, ^{
    // download task 2
    dispatch_group_leave(group);
});

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // next step after all tasks have finished
});

Underlying principles of dispatch groups

We analyze three aspects: how a group controls synchronization, how dispatch_group_enter pairs with dispatch_group_leave, and how dispatch_group_async works.

dispatch_group_create function
dispatch_group_t
dispatch_group_create(void)
{
  return _dispatch_group_create_with_count(0);
}

It basically creates the group's underlying data structure:

DISPATCH_ALWAYS_INLINE
static inline dispatch_group_t
_dispatch_group_create_with_count(uint32_t n)
{
  dispatch_group_t dg = _dispatch_object_alloc(DISPATCH_VTABLE(group),
      sizeof(struct dispatch_group_s));
  dg->do_next = DISPATCH_OBJECT_LISTLESS;
  dg->do_targetq = _dispatch_get_default_queue(false);
  if (n) {
    os_atomic_store2o(dg, dg_bits,
        (uint32_t)-n * DISPATCH_GROUP_VALUE_INTERVAL, relaxed);
    os_atomic_store2o(dg, do_ref_cnt, 1, relaxed); // <rdar://22318411>
  }
  return dg;
}
dispatch_group_enter function
void
dispatch_group_enter(dispatch_group_t dg)
{
  // The value is decremented on a 32bits wide atomic so that the carry
  // for the 0 -> -1 transition is not propagated to the upper 32bits.
  uint32_t old_bits = os_atomic_sub_orig2o(dg, dg_bits,
      DISPATCH_GROUP_VALUE_INTERVAL, acquire);
  uint32_t old_value = old_bits & DISPATCH_GROUP_VALUE_MASK;
  if (unlikely(old_value == 0)) {
    _dispatch_retain(dg); // <rdar://problem/22318411>
  }
  if (unlikely(old_value == DISPATCH_GROUP_VALUE_MAX)) {
    DISPATCH_CLIENT_CRASH(old_bits,
        "Too many nested calls to dispatch_group_enter()");
  }
}

The count starts at 0; enter decrements it, so it becomes -1.

dispatch_group_leave function
void
dispatch_group_leave(dispatch_group_t dg)
{
  // The value is incremented on a 64bits wide atomic so that the carry for
  // the -1 -> 0 transition increments the generation atomically.
  uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state,
      DISPATCH_GROUP_VALUE_INTERVAL, release);
  uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);

  if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) { // old_state == -1
    old_state += DISPATCH_GROUP_VALUE_INTERVAL;
    do {
      new_state = old_state;
      if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
        new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
        new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
      } else {
        // If the group was entered again since the atomic_add above,
        // we can't clear the waiters bit anymore as we don't know for
        // which generation the waiters are for
        new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
      }
      if (old_state == new_state) break;
    } while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state,
        old_state, new_state, &old_state, relaxed)));
    return _dispatch_group_wake(dg, old_state, true); // wake up notify
  }

  if (unlikely(old_value == 0)) {
    DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
        "Unbalanced call to dispatch_group_leave()");
  }
}

As long as old_state != new_state, execution stays in the while loop; once they are equal, _dispatch_group_wake is invoked to wake the blocked notify/wait.

dispatch_group_notify
static inline void
_dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
    dispatch_continuation_t dsn)
{
  uint64_t old_state, new_state;
  dispatch_continuation_t prev;

  dsn->dc_data = dq;
  _dispatch_retain(dq);

  prev = os_mpsc_push_update_tail(os_mpsc(dg, dg_notify), dsn, do_next);
  if (os_mpsc_push_was_empty(prev)) _dispatch_retain(dg);
  os_mpsc_push_update_prev(os_mpsc(dg, dg_notify), prev, dsn, do_next);
  if (os_mpsc_push_was_empty(prev)) {
    os_atomic_rmw_loop2o(dg, dg_state, old_state, new_state, release, {
      new_state = old_state | DISPATCH_GROUP_HAS_NOTIFS;
      if ((uint32_t)old_state == 0) {
        os_atomic_rmw_loop_give_up({
          // wake the notify block (callout)
          return _dispatch_group_wake(dg, new_state, false);
        });
      }
    });
  }
}

The block is woken for execution when old_state == 0, i.e. after the matching dispatch_group_leave has brought the count back to 0.

dispatch_group_async
void
dispatch_group_async(dispatch_group_t dg, dispatch_queue_t dq,
    dispatch_block_t db)
{
  dispatch_continuation_t dc = _dispatch_continuation_alloc();
  uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_GROUP_ASYNC;
  dispatch_qos_t qos;

  qos = _dispatch_continuation_init(dc, dq, db, 0, dc_flags);
  _dispatch_continuation_group_async(dg, dq, dc, qos);
}

static inline void
_dispatch_continuation_group_async(dispatch_group_t dg, dispatch_queue_t dq,
    dispatch_continuation_t dc, dispatch_qos_t qos)
{
  dispatch_group_enter(dg); // enter
  dc->dc_data = dg;
  _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}

static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
    dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
  if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
    _dispatch_trace_item_push(dqu, dc);
  }
#else
  (void)dc_flags;
#endif
  return dx_push(dqu._dq, dc, qos); // call the queue's push function
}

dispatch_group_enter is called inside _dispatch_continuation_group_async, and dispatch_group_leave is executed after dx_push, in the callout function:

DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_continuation_with_group_invoke(dispatch_continuation_t dc)
{
  struct dispatch_object_s *dou = dc->dc_data;
  unsigned long type = dx_type(dou);
  if (type == DISPATCH_GROUP_TYPE) {
    _dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
    _dispatch_trace_item_complete(dc);
    // execute the leave method
    dispatch_group_leave((dispatch_group_t)dou);
  } else {
    DISPATCH_INTERNAL_CRASH(dx_type(dou), "Unexpected object type");
  }
}

summary

  • dispatch_group_enter and dispatch_group_leave come in pairs.
  • dispatch_group_enter decrements the group value (0 -> -1).
  • dispatch_group_leave increments the group value (-1 -> 0).
  • dispatch_group_notify checks at the bottom whether the group's state is 0; if it is, it notifies the block to execute.
  • Tasks can be woken in two ways: 1. dispatch_group_leave; 2. dispatch_group_notify.
  • dispatch_group_async is equivalent to enter + leave; its underlying implementation contains a matching enter/leave pair (see the sketch below).
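As a quick illustration of that last point, dispatch_group_async can be approximated with a manual enter/leave pair; my_group_async below is a hypothetical helper for illustration, not a libdispatch API:

// Hypothetical helper showing the enter + async + leave pattern that
// dispatch_group_async performs under the hood.
static void my_group_async(dispatch_group_t group, dispatch_queue_t queue,
                           dispatch_block_t block) {
    dispatch_group_enter(group);        // group value: 0 -> -1
    dispatch_async(queue, ^{
        block();                        // run the work
        dispatch_group_leave(group);    // group value: -1 -> 0, may wake notify
    });
}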

dispatch_source

The most common dispatch_source scenario we use is a timer:

- (void)timeDone {
    // countdown time
    __block int timeout = 3;
    dispatch_queue_t globalQueue = dispatch_get_global_queue(0, 0);
    // create the timer
    dispatch_source_t timer = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0, globalQueue);
    // dispatch_source_set_timer parameters:
    //  - source:   the dispatch source
    //  - start:    when the timer first fires. The type is dispatch_time_t, an opaque type we
    //              cannot manipulate directly; use dispatch_time or dispatch_walltime to create
    //              it. The constants DISPATCH_TIME_NOW and DISPATCH_TIME_FOREVER are often useful.
    //  - interval: the firing interval
    //  - leeway:   the allowed accuracy (tolerance)
    dispatch_source_set_timer(timer, dispatch_walltime(NULL, 0), 1.0 * NSEC_PER_SEC, 0);
    // event handler, called every time the timer fires
    dispatch_source_set_event_handler(timer, ^{
        if (timeout <= 0) {
            // cancel the dispatch source
            dispatch_source_cancel(timer);
        } else {
            timeout--;
            // update on the main queue
            dispatch_async(dispatch_get_main_queue(), ^{
                NSLog(@"Countdown - %d", timeout);
            });
        }
    });
    // start the timer
    dispatch_resume(timer);
}
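One design detail worth calling out: dispatch sources are created in a suspended state, which is why the final dispatch_resume(timer) call is required before the timer starts firing.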