What this chapter covers

  1. Barrier functions
  2. Semaphores
  3. Dispatch groups
  4. dispatch_source

Goals of this chapter

  1. Become familiar with how barrier functions, semaphores, dispatch groups, and dispatch_source are used
  2. Understand the underlying implementation

Barrier function

There are two barrier functions: the synchronous dispatch_barrier_sync and the asynchronous dispatch_barrier_async. Their behavior parallels that of dispatch_sync and dispatch_async. Let's look at how they are used first.

The defining characteristic of a barrier function is that it blocks the queue: everything submitted before the barrier must finish before the barrier runs, and everything submitted after it waits for the barrier.

Limitations of barrier functions: 1. They have no effect on global concurrent queues, so use a custom concurrent queue. 2. The barrier must be submitted to the same queue as the tasks it orders, so a barrier cannot span queues; it is fine for simple cases, but for the cross-queue problem a dispatch group is recommended instead.

Usage

// What is the execution order on a concurrent queue?
dispatch_queue_t concurrentQueue = dispatch_queue_create("cooci", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(concurrentQueue, ^{
	NSLog(@"1");
});
dispatch_async(concurrentQueue, ^{
	sleep(1);
	NSLog(@"2");
});
dispatch_barrier_async(concurrentQueue, ^{
	NSLog(@"----%@-----", [NSThread currentThread]);
	sleep(2);
});
dispatch_async(concurrentQueue, ^{
	NSLog(@"3");
});
dispatch_async(concurrentQueue, ^{
	NSLog(@"4");
});
NSLog(@"5");
// dispatch_barrier_async: one possible order is 5, 1, 2, [NSThread currentThread], 3, 4.
// dispatch_barrier_sync: 1, 2, [NSThread currentThread], then 3, 4, 5 (the relative order of 3, 4 and 5 is not fixed).
// Note that the synchronous barrier also blocks the current thread.

Using the barrier function as a lock

// NSMutableArray is not thread safe
dispatch_queue_t concurrentQueue = dispatch_queue_create("cooci", DISPATCH_QUEUE_CONCURRENT);
for (int i = 0; i < 1000; i++) {
	dispatch_async(concurrentQueue, ^{
		NSString *imageName = [NSString stringWithFormat:@"%d.jpg", (i % 10)];
		NSURL *url = [[NSBundle mainBundle] URLForResource:imageName withExtension:nil];
		NSData *data = [NSData dataWithContentsOfURL:url];
		UIImage *image = [UIImage imageWithData:data];
		// Without the barrier, concurrent mutation can crash on a wild pointer.
		dispatch_barrier_async(concurrentQueue, ^{
			[self.marray addObject:image];
		});
	});
}

Synchronous barrier

Because of its synchronous nature, a synchronous barrier blocks the current thread. Its source is similar to dispatch_sync; in fact, dispatch_sync on a serial queue also goes through _dispatch_barrier_sync_f. The underlying code paths largely converge.

dispatch_barrier_sync

void
dispatch_barrier_sync(dispatch_queue_t dq, dispatch_block_t work)
{
	uintptr_t dc_flags = DC_FLAG_BARRIER | DC_FLAG_BLOCK;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
	}
	// _dispatch_barrier_sync_f is an intermediate function that calls
	// _dispatch_barrier_sync_f_inline(dq, ctxt, func, dc_flags);
	_dispatch_barrier_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}

_dispatch_barrier_sync_f_inline

// DC_FLAG_BARRIER identifies the barrier; this function is otherwise very similar to _dispatch_sync_f_inline

static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	dispatch_tid tid = _dispatch_tid_self();

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}

	dispatch_lane_t dl = upcast(dq)._dl;
	// The more correct thing to do would be to merge the qos of the thread
	// that just acquired the barrier lock into the queue state.
	//
	// However this is too expensive for the fast path, so skip doing it.
	// The chosen tradeoff is that if an enqueue on a lower priority thread
	// contends with this fast path, this thread may receive a useless override.
	//
	// Global concurrent queues and queues bound to non-dispatch threads
	// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
	if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
		// the slow path: this is where the barrier, too, can deadlock
		return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
				DC_FLAG_BARRIER | dc_flags);
	}

	if (unlikely(dl->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func,
				DC_FLAG_BARRIER | dc_flags);
	}
	_dispatch_introspection_sync_begin(dl);
	_dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
			DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
					dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}

_dispatch_sync_recurse

This function, which was not shown in the last section, is not critical. It just loops (do-while) through the target-queue chain, checking whether there are still tasks to drain. This matches the barrier's contract: before the barrier's task can execute, every task queued ahead of it must finish. Only then can the code after the barrier run.

static void
_dispatch_sync_recurse(dispatch_lane_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	dispatch_tid tid = _dispatch_tid_self();
	dispatch_queue_t tq = dq->do_targetq;

	do {
		// serial target queue
		if (likely(tq->dq_width == 1)) {
			if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(tq, tid))) {
				return _dispatch_sync_f_slow(dq, ctxt, func, dc_flags, tq,
						DC_FLAG_BARRIER);
			}
		} else {
			dispatch_queue_concurrent_t dl = upcast(tq)._dl;
			if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
				return _dispatch_sync_f_slow(dq, ctxt, func, dc_flags, tq, 0);
			}
		}
		tq = tq->do_targetq;
	} while (unlikely(tq->do_targetq));

	_dispatch_introspection_sync_begin(dq);
	_dispatch_sync_invoke_and_complete_recurse(dq, ctxt, func, dc_flags
			DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
					dq, ctxt, func, dc_flags)));
}

_dispatch_sync_invoke_and_complete_recurse

This is an intermediate transition: invoke the function, then recurse to complete.

static void
_dispatch_sync_invoke_and_complete_recurse(dispatch_queue_class_t dq,
		void *ctxt, dispatch_function_t func, uintptr_t dc_flags
		DISPATCH_TRACE_ARG(void *dc))
{
	_dispatch_sync_function_invoke_inline(dq, ctxt, func);
	_dispatch_trace_item_complete(dc);
	_dispatch_sync_complete_recurse(dq._dq, NULL, dc_flags);
}

_dispatch_sync_complete_recurse

The plain synchronization function goes through this same completion path; the deeper you go, the more GCD's code paths converge, with the different behaviors coming from the flags each front-end API sets. This function also explains why global concurrent queues get no barrier effect.

static void
_dispatch_sync_complete_recurse(dispatch_queue_t dq, dispatch_queue_t stop_dq,
		uintptr_t dc_flags)
{
	bool barrier = (dc_flags & DC_FLAG_BARRIER);
	do {
		if (dq == stop_dq) return;
		if (barrier) {
			// dx_wakeup is a macro; which function it resolves to depends on the queue:
			// global concurrent queue -> _dispatch_root_queue_wakeup
			// custom concurrent queue -> _dispatch_lane_wakeup
			dx_wakeup(dq, 0, DISPATCH_WAKEUP_BARRIER_COMPLETE);
		} else {
			_dispatch_lane_non_barrier_complete(upcast(dq)._dl, 0);
		}
		dq = dq->do_targetq;
		barrier = (dq->dq_width == 1);
	} while (unlikely(dq->do_targetq));
}

Global versus custom concurrent queues: the wakeup functions

_dispatch_root_queue_wakeup (global concurrent queue)

You can see that the global queue's wakeup does not handle the barrier flag at all. Why? Because a global concurrent queue is not yours alone: the system may also be using it, and if you suddenly put a barrier in it you would be blocking work you do not own.

void
_dispatch_root_queue_wakeup(dispatch_queue_global_t dq,
		DISPATCH_UNUSED dispatch_qos_t qos, dispatch_wakeup_flags_t flags)
{
	if (!(flags & DISPATCH_WAKEUP_BLOCK_WAIT)) {
		DISPATCH_INTERNAL_CRASH(dq->dq_priority,
				"Don't try to wake up or override a root queue");
	}
	if (flags & DISPATCH_WAKEUP_CONSUME_2) {
		return _dispatch_release_2_tailcall(dq);
	}
}

_dispatch_lane_wakeup (custom concurrent queue)

void
_dispatch_lane_wakeup(dispatch_lane_class_t dqu, dispatch_qos_t qos,
		dispatch_wakeup_flags_t flags)
{
	dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;

	// the barrier case is handled here
	if (unlikely(flags & DISPATCH_WAKEUP_BARRIER_COMPLETE)) {
		return _dispatch_lane_barrier_complete(dqu, qos, flags);
	}
	if (_dispatch_queue_class_probe(dqu)) {
		target = DISPATCH_QUEUE_WAKEUP_TARGET;
	}
	return _dispatch_queue_wakeup(dqu, qos, flags, target);
}

_dispatch_lane_barrier_complete

Seeing this function is almost enough; if you are interested in the completion code you can follow it further. The barrier handling happens here as well.

static void
_dispatch_lane_barrier_complete(dispatch_lane_class_t dqu, dispatch_qos_t qos,
		dispatch_wakeup_flags_t flags)
{
	dispatch_queue_wakeup_target_t target = DISPATCH_QUEUE_WAKEUP_NONE;
	dispatch_lane_t dq = dqu._dl;

	if (dq->dq_items_tail && !DISPATCH_QUEUE_IS_SUSPENDED(dq)) {
		struct dispatch_object_s *dc = _dispatch_queue_get_head(dq);
		// A serial queue, or a barrier at the head, waits here;
		// otherwise it goes on to the completion code below.
		if (likely(dq->dq_width == 1 || _dispatch_object_is_barrier(dc))) {
			if (_dispatch_object_is_waiter(dc)) {
				return _dispatch_lane_drain_barrier_waiter(dq, dc, flags, 0);
			}
		} else if (dq->dq_width > 1 && !_dispatch_object_is_barrier(dc)) {
			// concurrent queue with no barrier at the head: drain the non-barrier items
			return _dispatch_lane_drain_non_barriers(dq, dc, flags);
		}

		if (!(flags & DISPATCH_WAKEUP_CONSUME_2)) {
			_dispatch_retain_2(dq);
			flags |= DISPATCH_WAKEUP_CONSUME_2;
		}
		target = DISPATCH_QUEUE_WAKEUP_TARGET;
	}

	uint64_t owned = DISPATCH_QUEUE_IN_BARRIER +
			dq->dq_width * DISPATCH_QUEUE_WIDTH_INTERVAL;
	return _dispatch_lane_class_barrier_complete(dq, qos, flags, target, owned);
}

Asynchronous barrier

An asynchronous barrier blocks only its queue, not the calling thread.

dispatch_barrier_async

See the previous chapter on asynchronous functions; the flow is essentially identical, so it is not shown again. The difference is that the DC_FLAG_BARRIER flag is carried along into the _dispatch_lane_push call, and the wakeup goes through the same path the synchronous barrier uses.

void
dispatch_barrier_async(dispatch_queue_t dq, dispatch_block_t work)
{
	dispatch_continuation_t dc = _dispatch_continuation_alloc();
	uintptr_t dc_flags = DC_FLAG_CONSUME | DC_FLAG_BARRIER;
	dispatch_qos_t qos;

	qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
	_dispatch_continuation_async(dq, dc, qos, dc_flags);
}

Semaphores

An aside: dispatch groups used to be built on top of semaphores, but nowadays groups have their own implementation.

Semaphore usage scenarios: 1. as a lock; 2. to cap the maximum number of concurrent tasks.

The main apis of semaphores:

  1. dispatch_semaphore_create: creates a semaphore. Created with a value of 1, it can be used as a lock.
  2. dispatch_semaphore_wait: waits on (decrements) the semaphore.
  3. dispatch_semaphore_signal: signals (increments, i.e. releases) the semaphore; used in pairs with 2.

Usage

We can use semaphores to throttle concurrent work such as uploads, or as a synchronization lock.

// Task 1 and task 2 can execute at the same time as each other? No: the
// semaphore lets one task through at a time, so task 3 waits its turn.
// If the initial value were 2, tasks 1, 2 and 3 could all run together.
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
// while the value stays >= 0 after the decrement, wait returns immediately
dispatch_semaphore_t sem = dispatch_semaphore_create(1);
dispatch_async(queue, ^{
	dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
	NSLog(@"perform task 1");
	NSLog(@"task 1 completed");
});
dispatch_async(queue, ^{
	NSLog(@"perform task 2");
	NSLog(@"task 2 completed");
	dispatch_semaphore_signal(sem);
});
dispatch_async(queue, ^{
	dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
	NSLog(@"perform task 3");
	NSLog(@"task 3 completed");
	dispatch_semaphore_signal(sem);
});
If the code above is changed to the following, task 2 executes first and task 1 after it. Why? Because task 1's wait blocks until the signal arrives to say "stop waiting".

dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
dispatch_semaphore_t sem = dispatch_semaphore_create(0);
dispatch_async(queue, ^{
	dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
	NSLog(@"perform task 1");
	NSLog(@"task 1 completed");
});
dispatch_async(queue, ^{
	NSLog(@"perform task 2");
	NSLog(@"task 2 completed");
	dispatch_semaphore_signal(sem);
});

dispatch_semaphore_signal

intptr_t
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
	// atomically increment the value
	long value = os_atomic_inc2o(dsema, dsema_value, release);
	if (likely(value > 0)) {
		return 0;
	}
	if (unlikely(value == LONG_MIN)) {
		DISPATCH_CLIENT_CRASH(value,
				"Unbalanced call to dispatch_semaphore_signal()");
	}
	return _dispatch_semaphore_signal_slow(dsema);
}

dispatch_semaphore_wait

Waits until the semaphore's value allows it to proceed (the decremented value is still non-negative).

intptr_t
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
	// atomically decrement the value
	long value = os_atomic_dec2o(dsema, dsema_value, acquire);
	if (likely(value >= 0)) {
		return 0;
	}
	return _dispatch_semaphore_wait_slow(dsema, timeout);
}

_dispatch_semaphore_wait_slow

static intptr_t
_dispatch_semaphore_wait_slow(dispatch_semaphore_t dsema,
		dispatch_time_t timeout)
{
	long orig;

	_dispatch_sema4_create(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
	// timeout is the second argument passed to wait,
	// usually DISPATCH_TIME_NOW or DISPATCH_TIME_FOREVER
	switch (timeout) {
	default:
		if (!_dispatch_sema4_timedwait(&dsema->dsema_sema, timeout)) {
			break;
		}
		// Fall through and try to undo what the fast path did to
		// dsema->dsema_value
	case DISPATCH_TIME_NOW:
		orig = dsema->dsema_value;
		// DISPATCH_TIME_NOW: give the decrement back and report a timeout
		while (orig < 0) {
			if (os_atomic_cmpxchgv2o(dsema, dsema_value, orig, orig + 1,
					&orig, relaxed)) {
				return _DSEMA4_TIMEOUT();
			}
		}
		// Another thread called semaphore_signal().
		// Fall through and drain the wakeup.
	case DISPATCH_TIME_FOREVER:
		// _dispatch_sema4_wait ends up in the kernel's semaphore_wait;
		// internally it loops (do-while) until the semaphore is signaled,
		// so "waiting" is really just spinning on that condition
		_dispatch_sema4_wait(&dsema->dsema_sema);
		break;
	}
	return 0;
}

Dispatch groups

The main APIs

  1. dispatch_group_create: creates a group.

  2. dispatch_group_async: submits a task to the group (it wraps the enter/leave pair from item 5 around the task).

  3. dispatch_group_notify: notifies when the group's tasks have completed.

  4. dispatch_group_wait: waits (with a timeout) for the group's tasks to finish.

  5. dispatch_group_enter and dispatch_group_leave: used as a pair around dispatch_async to manage group membership manually.

Usage

dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
// tasks 1 and 2 do not interfere with each other
dispatch_group_async(group, queue, ^{
	// perform task 1
});
// equivalent to dispatch_group_async:
dispatch_group_enter(group);
dispatch_async(queue, ^{
	// perform task 2
	dispatch_group_leave(group);
});
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
	// the group's tasks have all completed
});

Analysis

Groups used to be a wrapper around semaphores; now they have their own implementation.

dispatch_group_t
dispatch_group_create(void)
{
	return _dispatch_group_create_with_count(0);
}

_dispatch_group_create_with_count

Similar to the semaphore's create; it just allocates and initializes the group.

static inline dispatch_group_t
_dispatch_group_create_with_count(uint32_t n)
{
        
	dispatch_group_t dg = _dispatch_object_alloc(DISPATCH_VTABLE(group),
			sizeof(struct dispatch_group_s));
	dg->do_next = DISPATCH_OBJECT_LISTLESS;
	dg->do_targetq = _dispatch_get_default_queue(false);
	if (n) {
		os_atomic_store2o(dg, dg_bits,
				(uint32_t)-n * DISPATCH_GROUP_VALUE_INTERVAL, relaxed);
		os_atomic_store2o(dg, do_ref_cnt, 1, relaxed); // <rdar://22318411>
	}
	return dg;
}

dispatch_group_enter

Entering a group takes the value to -1, but unlike a semaphore's wait, this function does not block; the waiting happens elsewhere. Compare dispatch_group_leave with this function.

void
dispatch_group_enter(dispatch_group_t dg)
{
	// The value is decremented on a 32bits wide atomic so that the carry
	// for the 0 -> -1 transition is not propagated to the upper 32bits.
	uint32_t old_bits = os_atomic_sub_orig2o(dg, dg_bits,
			DISPATCH_GROUP_VALUE_INTERVAL, acquire);
	uint32_t old_value = old_bits & DISPATCH_GROUP_VALUE_MASK;
	if (unlikely(old_value == 0)) {
		_dispatch_retain(dg); // <rdar://problem/22318411>
	}
	if (unlikely(old_value == DISPATCH_GROUP_VALUE_MAX)) {
		DISPATCH_CLIENT_CRASH(old_bits,
				"Too many nested calls to dispatch_group_enter()");
	}
}

dispatch_group_leave

Leaving the group is what wakes up dispatch_group_notify: enter changes the value from 0 to -1, leave changes it back from -1 to 0, and on that transition the notify block is executed.

void
dispatch_group_leave(dispatch_group_t dg)
{
	// The value is incremented on a 64bits wide atomic so that the carry
	// for the -1 -> 0 transition increments the generation atomically.
	// (-1 in hex is the large value 0xffffffff...)
	uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state,
			DISPATCH_GROUP_VALUE_INTERVAL, release);
	uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);

	// old_value == DISPATCH_GROUP_VALUE_1 means the count is going -1 -> 0
	if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) {
		old_state += DISPATCH_GROUP_VALUE_INTERVAL;
		do {
			new_state = old_state;
			if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
				new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
				new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
			} else {
				// If the group was entered again since the atomic_add above,
				// we can't clear the waiters bit anymore as we don't know for
				// which generation the waiters are for
				new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
			}
			if (old_state == new_state) break;
		} while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state,
				old_state, new_state, &old_state, relaxed)));
		// wake up the notify blocks
		return _dispatch_group_wake(dg, old_state, true);
	}

	if (unlikely(old_value == 0)) {
		DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
				"Unbalanced call to dispatch_group_leave()");
	}
}

_dispatch_group_notify

Registers the notify block to run when the group's tasks complete.

static inline void
_dispatch_group_notify(dispatch_group_t dg, dispatch_queue_t dq,
		dispatch_continuation_t dsn)
{
	uint64_t old_state, new_state;
	dispatch_continuation_t prev;

	dsn->dc_data = dq;
	_dispatch_retain(dq);

	prev = os_mpsc_push_update_tail(os_mpsc(dg, dg_notify), dsn, do_next);
	if (os_mpsc_push_was_empty(prev)) _dispatch_retain(dg);
	os_mpsc_push_update_prev(os_mpsc(dg, dg_notify), prev, dsn, do_next);
	if (os_mpsc_push_was_empty(prev)) {
		os_atomic_rmw_loop2o(dg, dg_state, old_state, new_state, release, {
			new_state = old_state | DISPATCH_GROUP_HAS_NOTIFS;
			if ((uint32_t)old_state == 0) {
				// the group value is already back to 0: wake up immediately
				os_atomic_rmw_loop_give_up({
					return _dispatch_group_wake(dg, new_state, false);
				});
			}
		});
	}
}

dispatch_source

dispatch_source has a very small CPU footprint and tries not to hog resources. Any thread can call dispatch_source_merge_data on it, which causes the handler it defines (the block) to execute; this is called a custom event, one of the event types dispatch_source can handle. Whether the block runs is controlled by those conditions, and it is not tied to a runloop; it is driven by the workloop underneath.

Addendum: a handle is a pointer to a pointer, referring to a class or structure closely tied to the system, for example an instance handle (HINSTANCE), a bitmap handle (HBITMAP), a device context handle (HDC), an icon handle (HICON), and the generic handle (HANDLE).

A few common functions:

  1. dispatch_source_create: parameters are type (the kind of event source, e.g. DISPATCH_SOURCE_TYPE_TIMER for a timer), handle (the source handle, pass 0), mask (the event flag mask, pass 0), and the queue.
  2. dispatch_source_set_event_handler: sets the source's event callback.
  3. dispatch_source_merge_data: merges data into the source event.
  4. dispatch_source_get_data: gets the source event's data, i.e. reads what 3 merged in.
  5. dispatch_resume: resumes the source.
  6. dispatch_suspend: suspends the source.

Usage scenarios

@interface ViewController ()
@property (weak, nonatomic) IBOutlet UIProgressView *progressView;
@property (nonatomic, strong) dispatch_source_t source;
@property (nonatomic, strong) dispatch_queue_t queue;
@property (nonatomic, assign) NSUInteger totalComplete;
@property (nonatomic) BOOL isRunning;
@end

@implementation ViewController

- (void)viewDidLoad {
	[super viewDidLoad];
	self.totalComplete = 0;
	self.queue = dispatch_queue_create("seginal", NULL);
	self.source = dispatch_source_create(DISPATCH_SOURCE_TYPE_DATA_ADD, 0, 0,
			dispatch_get_main_queue());
	dispatch_source_set_event_handler(self.source, ^{
		NSLog(@"%@", [NSThread currentThread]);
		NSUInteger value = dispatch_source_get_data(self.source);
		self.totalComplete += value;
		NSLog(@"progress: %.2f", self.totalComplete / 100.0);
		self.progressView.progress = self.totalComplete / 100.0;
	});
	self.isRunning = YES;
	dispatch_resume(self.source);
}

- (IBAction)didClickStartOrPauseAction:(id)sender {
	if (self.isRunning) {
		dispatch_suspend(self.source);
		dispatch_suspend(self.queue);
		NSLog(@"paused");
		self.isRunning = NO;
		[sender setTitle:@"paused" forState:UIControlStateNormal];
	} else {
		dispatch_resume(self.source);
		dispatch_resume(self.queue);
		NSLog(@"resumed");
		self.isRunning = YES;
		[sender setTitle:@"running" forState:UIControlStateNormal];
	}
}

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
	NSLog(@"touched");
	for (int i = 0; i < 100; i++) {
		dispatch_async(self.queue, ^{
			if (!self.isRunning) {
				NSLog(@"paused");
				return;
			}
			sleep(1);
			// merge 1 into the source's pending data
			dispatch_source_merge_data(self.source, 1);
		});
	}
}

@end

Timer (usage only)

- (instancetype)initWithTarget:(id)target withSelector:(SEL)selector
                      withTime:(double)time withDelayTime:(double)delay {
	self = [super init];
	if (self) {
		self.queue = dispatch_get_global_queue(0, 0);
		self.source = dispatch_source_create(DISPATCH_SOURCE_TYPE_TIMER, 0, 0,
				self.queue);
		dispatch_time_t start = dispatch_time(DISPATCH_TIME_NOW,
				(delay * NSEC_PER_SEC));
		// leeway is the timer's allowed slack: the smaller it is, the more
		// accurate the timer and the heavier the CPU load
		uint64_t leeway = 1 * NSEC_PER_SEC;
		dispatch_source_set_timer(self.source, start, time * NSEC_PER_SEC, leeway);
		dispatch_source_set_event_handler(self.source, ^{
			if ([target respondsToSelector:selector]) {
				[target performSelector:selector];
			}
		});
	}
	return self;
}

// start
- (void)start {
	dispatch_resume(self.source);
}

// suspend
- (void)suspend {
	dispatch_suspend(self.source);
}

// cancel
- (void)cancle {
	dispatch_source_cancel(self.source);
}