
The dispatch_group implementation is built on the low-level dispatch_semaphore implementation introduced earlier. Before looking at the source code, let's look at how it is used. Consider a scenario with a time-consuming operation A and two network requests B and C, where the page should be refreshed only after A, B, and C have all finished. We can do this with dispatch_group. Here's the key code:

- (void)viewDidLoad {
    [super viewDidLoad];
    __block NSInteger number = 0;
    dispatch_group_t group = dispatch_group_create();

    // A: time-consuming operation
    dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        sleep(3);
        number += 2222;
    });

    // B: network request
    dispatch_group_enter(group);
    [self sendRequestWithCompletion:^(id response) {
        number += [response integerValue];
        dispatch_group_leave(group);
    }];

    // C: network request
    dispatch_group_enter(group);
    [self sendRequestWithCompletion:^(id response) {
        number += [response integerValue];
        dispatch_group_leave(group);
    }];

    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        NSLog(@"%zd", number);
    });
}

- (void)sendRequestWithCompletion:(void (^)(id response))completion {
    // Simulate a network request
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
    dispatch_async(queue, ^{
        sleep(2);
        dispatch_async(dispatch_get_main_queue(), ^{
            if (completion) completion(@1111);
        });
    });
}

Based on the flow above, let's walk through the dispatch_group APIs one by one.

dispatch_group_create

dispatch_group_t
dispatch_group_create(void)
{
	return (dispatch_group_t)dispatch_semaphore_create(LONG_MAX);
}

dispatch_group_create simply creates a dispatch_semaphore whose initial value is LONG_MAX.

dispatch_group_async

void
dispatch_group_async(dispatch_group_t dg, dispatch_queue_t dq,
		dispatch_block_t db)
{
	dispatch_group_async_f(dg, dq, _dispatch_Block_copy(db),
			_dispatch_call_block_and_release);
}

dispatch_group_async is just a thin wrapper around dispatch_group_async_f.

dispatch_group_async_f

void
dispatch_group_async_f(dispatch_group_t dg, dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func)
{
	dispatch_continuation_t dc;

	_dispatch_retain(dg);
	dispatch_group_enter(dg);

	dc = fastpath(_dispatch_continuation_alloc_cacheonly());
	if (!dc) {
		dc = _dispatch_continuation_alloc_from_heap();
	}

	dc->do_vtable = (void *)(DISPATCH_OBJ_ASYNC_BIT | DISPATCH_OBJ_GROUP_BIT);
	dc->dc_func = func;
	dc->dc_ctxt = ctxt;
	dc->dc_group = dg;

	// No fastpath/slowpath hint because we simply don't know
	if (dq->dq_width != 1 && dq->do_targetq) {
		return _dispatch_async_f2(dq, dc);
	}

	_dispatch_queue_push(dq, dc);
}

dispatch_group_async_f is very similar to dispatch_async_f. The differences are that dispatch_group_async_f additionally calls dispatch_group_enter(dg), and that it sets the DISPATCH_OBJ_GROUP_BIT flag in the do_vtable assignment. Since dispatch_group_enter is called here, a matching dispatch_group_leave must happen somewhere; it happens in _dispatch_continuation_pop, when the queue actually executes the continuation. Here is the relevant fragment of _dispatch_continuation_pop:

	_dispatch_client_callout(dc->dc_ctxt, dc->dc_func);
	if (dg) {
		// The continuation belongs to a group: balance the earlier
		// dispatch_group_enter now that the task has run.
		dispatch_group_leave(dg);
		_dispatch_release(dg);
	}

So for dispatch_group_async_f, dispatch_group_leave is called from _dispatch_continuation_pop.

Here’s a summary of how dispatch_group_async_f works:

  1. Call dispatch_group_enter;
  2. Record the block, queue, and group in a dispatch_continuation_t structure and push it onto the queue;
  3. When _dispatch_continuation_pop executes the task, it checks whether the task belongs to a group; if so, it calls dispatch_group_leave after the task runs to restore the semaphore balance (see the sketch below).
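
In other words, ignoring the continuation caching details, dispatch_group_async behaves roughly like the hand-rolled helper below (a conceptual sketch only; my_group_async is a hypothetical name, not a libdispatch function):

static void
my_group_async(dispatch_group_t group, dispatch_queue_t queue, dispatch_block_t block)
{
	dispatch_group_enter(group);      // step 1: decrement the group's semaphore
	dispatch_async(queue, ^{          // step 2: enqueue the work as an ordinary async task
		block();                      // run the task
		dispatch_group_leave(group);  // step 3: increment again, possibly waking the group
	});
}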

dispatch_group_enter

void
dispatch_group_enter(dispatch_group_t dg)
{
	dispatch_semaphore_t dsema = (dispatch_semaphore_t)dg;

	(void)dispatch_semaphore_wait(dsema, DISPATCH_TIME_FOREVER);
}

dispatch_group_enter casts the dispatch_group_t to a dispatch_semaphore_t and calls dispatch_semaphore_wait, which atomically decrements dsema_value by 1. In other words, dispatch_group_enter is just a wrapper around dispatch_semaphore_wait.

dispatch_group_leave

void
dispatch_group_leave(dispatch_group_t dg)
{
	dispatch_semaphore_t dsema = (dispatch_semaphore_t)dg;

	dispatch_atomic_release_barrier();
	long value = dispatch_atomic_inc2o(dsema, dsema_value); // atomically increment dsema_value by 1
	if (slowpath(value == LONG_MIN)) {
		// Overflow: dispatch_group_leave was called before/without dispatch_group_enter
		DISPATCH_CLIENT_CRASH("Unbalanced call to dispatch_group_leave()");
	}
	if (slowpath(value == dsema->dsema_orig)) {
		// All tasks have finished; wake up the group
		(void)_dispatch_group_wake(dsema);
	}
}

dispatch_group_leave casts the dispatch_group_t to a dispatch_semaphore_t and atomically increments dsema_value by 1. If the resulting value is LONG_MIN, the program crashes; if it equals dsema_orig, all tasks have completed and _dispatch_group_wake is called to wake up the group. Since enter atomically decremented the value by 1, leave must atomically increment it by 1 to restore the balance.
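
To make the counting concrete, here is a small trace of the semaphore value in the balanced case, based on the implementation above (where dsema_orig is LONG_MAX):

dispatch_group_t group = dispatch_group_create(); // dsema_value == dsema_orig == LONG_MAX
dispatch_group_enter(group);                      // dsema_value == LONG_MAX - 1
dispatch_group_enter(group);                      // dsema_value == LONG_MAX - 2
dispatch_group_leave(group);                      // dsema_value == LONG_MAX - 1
dispatch_group_leave(group);                      // dsema_value == LONG_MAX == dsema_orig, _dispatch_group_wake runs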

Here is the relationship between enter and leave:

  1. dispatch_group_enter and dispatch_group_leave must be used in pairs. If dispatch_group_enter is called without a matching dispatch_group_leave, value never returns to dsema_orig, so the wake-up logic is never reached: the dispatch_group_notify task is never executed, and a thread blocked in dispatch_group_wait never receives its signal.

  2. dispatch_group_enter must come before dispatch_group_leave. If dispatch_group_leave is called more times than dispatch_group_enter, or before it, the atomic increment pushes value past LONG_MAX; the value overflows to LONG_MIN, the value == LONG_MIN check fires, and the program crashes. Both failure modes are sketched below.
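
A minimal sketch of the two failure modes described above (illustration only; the second snippet deliberately crashes):

// Case 1: enter without a matching leave. The value never returns to
// dsema_orig, so the notify block never runs and dispatch_group_wait
// never gets signaled.
dispatch_group_t group1 = dispatch_group_create();
dispatch_group_enter(group1);
dispatch_group_notify(group1, dispatch_get_main_queue(), ^{
	NSLog(@"never reached");
});

// Case 2: leave without a matching enter. The atomic increment overflows
// LONG_MAX into LONG_MIN and libdispatch aborts with
// "Unbalanced call to dispatch_group_leave()".
dispatch_group_t group2 = dispatch_group_create();
dispatch_group_leave(group2); // crash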

dispatch_group_notify

void
dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
		dispatch_block_t db)
{
	dispatch_group_notify_f(dg, dq, _dispatch_Block_copy(db),
			_dispatch_call_block_and_release);
}

dispatch_group_notify is just a wrapper around dispatch_group_notify_f.

dispatch_group_notify_f

void
dispatch_group_notify_f(dispatch_group_t dg, dispatch_queue_t dq, void *ctxt,
		void (*func)(void *))
{
	dispatch_semaphore_t dsema = (dispatch_semaphore_t)dg;
	struct dispatch_sema_notify_s *dsn, *prev;

	// FIXME -- this should be updated to use the continuation cache
	while (!(dsn = calloc(1, sizeof(*dsn)))) {
		sleep(1);
	}

	dsn->dsn_queue = dq;
	dsn->dsn_ctxt = ctxt;
	dsn->dsn_func = func;
	_dispatch_retain(dq);
	dispatch_atomic_store_barrier();
	// Append the new node: swap it into dsema_notify_tail and get the old tail
	prev = dispatch_atomic_xchg2o(dsema, dsema_notify_tail, dsn);
	if (fastpath(prev)) {
		prev->dsn_next = dsn;
	} else {
		_dispatch_retain(dg);
		(void)dispatch_atomic_xchg2o(dsema, dsema_notify_head, dsn);
		if (dsema->dsema_value == dsema->dsema_orig) {
			// The group is already empty; wake it so the notify fires right away
			_dispatch_group_wake(dsema);
		}
	}
}

So the dispatch_group_notify function simply stores all callback notifications in a linked list, waiting to be called.
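
Based on the fields used in the code above, each node of that list looks roughly like the struct sketched below (the layout is inferred from the excerpt, not copied from the libdispatch headers):

struct dispatch_sema_notify_s {
	struct dispatch_sema_notify_s *volatile dsn_next; // next notification in the singly linked list
	dispatch_queue_t dsn_queue;                       // queue the callback should be submitted to
	void *dsn_ctxt;                                   // context pointer handed to the callback
	void (*dsn_func)(void *);                         // the callback itself
};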

_dispatch_group_wake

static long
_dispatch_group_wake(dispatch_semaphore_t dsema)
{
	struct dispatch_sema_notify_s *next, *head, *tail = NULL;
	long rval;

	// Take the notify list: set dsema_notify_head to NULL and keep the old head
	head = dispatch_atomic_xchg2o(dsema, dsema_notify_head, NULL);
	if (head) {
		// snapshot before anything is notified/woken <rdar://problem/8554546>
		// Also set dsema_notify_tail to NULL and keep the old tail
		tail = dispatch_atomic_xchg2o(dsema, dsema_notify_tail, NULL);
	}
	// Reset dsema_group_waiters to 0 and get its previous value
	rval = dispatch_atomic_xchg2o(dsema, dsema_group_waiters, 0);
	if (rval) {
		// Call semaphore_signal once per waiter so that dispatch_group_wait returns
		// wake group waiters
#if USE_MACH_SEM
		_dispatch_semaphore_create_port(&dsema->dsema_waiter_port);
		do {
			kern_return_t kr = semaphore_signal(dsema->dsema_waiter_port);
			DISPATCH_SEMAPHORE_VERIFY_KR(kr);
		} while (--rval);
#elif USE_POSIX_SEM
		do {
			int ret = sem_post(&dsema->dsema_sem);
			DISPATCH_SEMAPHORE_VERIFY_RET(ret);
		} while (--rval);
#endif
	}
	if (head) {
		// Walk the notify list and submit each stored callback with dispatch_async_f
		// async group notify blocks
		do {
			dispatch_async_f(head->dsn_queue, head->dsn_ctxt, head->dsn_func);
			_dispatch_release(head->dsn_queue);
			next = fastpath(head->dsn_next);
			if (!next && head != tail) {
				while (!(next = fastpath(head->dsn_next))) {
					_dispatch_hardware_pause();
				}
			}
			free(head);
		} while ((head = next));
		_dispatch_release(dsema);
	}
	return 0;
}

_dispatch_group_wake has two functions:

  1. Call semaphore_signal to wake up the threads waiting on the group's semaphore, causing dispatch_group_wait to return.

  2. Traverse the linked list and call dispatch_async_f to asynchronously execute the notify callbacks, that is, the blocks registered via dispatch_group_notify.

Now that we have a pretty good idea of the dispatch_group flow (enter decrements the semaphore, leave increments it, and when the value returns to its original LONG_MAX the group wakes its waiters and fires its notify blocks), let's move on to the waiting side.

dispatch_group_wait

long
dispatch_group_wait(dispatch_group_t dg, dispatch_time_t timeout)
{
	dispatch_semaphore_t dsema = (dispatch_semaphore_t)dg;

	if (dsema->dsema_value == dsema->dsema_orig) {
		// There are no tasks left to wait for
		return 0;
	}
	if (timeout == 0) {
		// A zero timeout means return immediately with a timeout result
#if USE_MACH_SEM
		return KERN_OPERATION_TIMED_OUT;
#elif USE_POSIX_SEM
		errno = ETIMEDOUT;
		return (-1);
#endif
	}
	return _dispatch_group_wait_slow(dsema, timeout);
}

dispatch_group_wait blocks the current thread until every task in the group has completed, or until the timeout expires.
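
As a usage sketch (the queue, the simulated work, and the 5-second timeout are arbitrary illustration choices), dispatch_group_wait returns 0 when the group emptied in time and a non-zero value on timeout:

dispatch_group_t group = dispatch_group_create();
dispatch_group_async(group, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
	sleep(1); // simulate some work
});

// Block the calling thread (avoid doing this on the main thread) until every
// task in the group has finished, or until 5 seconds have elapsed.
dispatch_time_t timeout = dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5 * NSEC_PER_SEC));
if (dispatch_group_wait(group, timeout) == 0) {
	NSLog(@"all tasks in the group finished");
} else {
	NSLog(@"timed out waiting for the group");
}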

_dispatch_group_wait_slow

static long
_dispatch_group_wait_slow(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
	long orig;

again:
	// check before we cause another signal to be sent by incrementing
	// dsema->dsema_group_waiters
	if (dsema->dsema_value == dsema->dsema_orig) {
		return _dispatch_group_wake(dsema);
	}
	// Mach semaphores appear to sometimes spuriously wake up. Therefore,
	// we keep a parallel count of the number of times a Mach semaphore is
	// signaled (6880961).
	(void)dispatch_atomic_inc2o(dsema, dsema_group_waiters);
	// check the values again in case we need to wake any threads
	if (dsema->dsema_value == dsema->dsema_orig) {
		return _dispatch_group_wake(dsema);
	}

#if USE_MACH_SEM
	mach_timespec_t _timeout;
	kern_return_t kr;

	_dispatch_semaphore_create_port(&dsema->dsema_waiter_port);

	// From xnu/osfmk/kern/sync_sema.c:
	// wait_semaphore->count = -1; /* we don't keep an actual count */
	//
	// The code above does not match the documentation, and that fact is
	// not surprising. The documented semantics are clumsy to use in any
	// practical way. The above hack effectively tricks the rest of the
	// Mach semaphore logic to behave like the libdispatch algorithm.
	switch (timeout) {
	default:
		do {
			uint64_t nsec = _dispatch_timeout(timeout);
			_timeout.tv_sec = (typeof(_timeout.tv_sec))(nsec / NSEC_PER_SEC);
			_timeout.tv_nsec = (typeof(_timeout.tv_nsec))(nsec % NSEC_PER_SEC);
			kr = slowpath(semaphore_timedwait(dsema->dsema_waiter_port,
					_timeout));
		} while (kr == KERN_ABORTED);

		if (kr != KERN_OPERATION_TIMED_OUT) {
			DISPATCH_SEMAPHORE_VERIFY_KR(kr);
			break;
		}
		// Fall through and try to undo the earlier change to
		// dsema->dsema_group_waiters
	case DISPATCH_TIME_NOW:
		while ((orig = dsema->dsema_group_waiters)) {
			if (dispatch_atomic_cmpxchg2o(dsema, dsema_group_waiters, orig,
					orig - 1)) {
				return KERN_OPERATION_TIMED_OUT;
			}
		}
		// Another thread called semaphore_signal().
		// Fall through and drain the wakeup.
	case DISPATCH_TIME_FOREVER:
		do {
			kr = semaphore_wait(dsema->dsema_waiter_port);
		} while (kr == KERN_ABORTED);
		DISPATCH_SEMAPHORE_VERIFY_KR(kr);
		break;
	}
#elif USE_POSIX_SEM
#endif

	goto again;
}

From the code above we can see that the logic of _dispatch_group_wait_slow is very close to that of _dispatch_semaphore_wait_slow; both wait on a Mach kernel semaphore. The difference lies in what happens when the wait ends: _dispatch_semaphore_wait_slow simply returns, whereas _dispatch_group_wait_slow calls _dispatch_group_wake.

Conclusion

  1. dispatch_group is, in this implementation, a semaphore whose initial value is LONG_MAX.

  2. dispatch_group_enter and dispatch_group_leave must be used in pairs; they may be nested.

  3. If dispatch_group_enter is called more times than dispatch_group_leave, the group never empties, so the dispatch_group_notify task is never executed and a thread blocked in dispatch_group_wait never receives its signal. If dispatch_group_leave is called more times than dispatch_group_enter, the program crashes.