This is the fourth day of my participation in the August More Text Challenge. For details, see: August More Text Challenge.

A note up front: exploring the underlying principles of iOS is a road I keep walking alongside my day-to-day development and learning. I record my journey of discovery here in the hope that it will also be helpful to readers.

The directory is as follows:

  1. iOS underlying principles: exploring alloc
  2. iOS underlying principles: struct memory alignment
  3. iOS underlying principles: the nature of objects & the underlying implementation of isa
  4. iOS underlying principles: isa and class (part 1)
  5. iOS underlying principles: isa and class (part 2)
  6. iOS underlying principles: isa and class (part 3)
  7. iOS underlying principles: the nature of Runtime & methods
  8. iOS underlying principles: objc_msgSend
  9. iOS underlying principles: the Runtime slow method lookup process
  10. iOS underlying principles: dynamic method resolution
  11. iOS underlying principles: the message forwarding process
  12. iOS underlying principles: application loading (part 1)
  13. iOS underlying principles: application loading (part 2)
  14. iOS underlying principles: class loading
  15. iOS underlying principles: category loading
  16. iOS underlying principles: associated objects
  17. iOS underlying principles: KVC
  18. iOS underlying principles: KVO
  19. iOS underlying principles: rewriting KVO
  20. iOS underlying principles: multithreading
  21. iOS underlying principles: GCD functions and queues

A summary column for the series:

  • Summary of the phase of iOS underlying principle exploration

A column for collected details:

  • Summary of iOS development details

preface

In the previous chapter, GCD functions and queues, we focused on the concepts of GCD and summarized how its functions and queues are used. Starting today, we will study the underlying data structures of GCD and the execution logic of synchronous and asynchronous tasks, and explore usage techniques, underlying principles, and common interview pitfalls. Without further ado, let's start today's content!

The underlying data structure of GCD

Having covered the theory of GCD in the previous chapter, what interests us most now is how GCD is implemented at the bottom. Next, we will work through the underlying implementation of GCD step by step, starting from the calls made at the Objective-C layer.

Start with an interview question

    // What is the print order?

    dispatch_queue_t queue = dispatch_queue_create("com.superman.cn", DISPATCH_QUEUE_CONCURRENT);

    dispatch_async(queue, ^{
        NSLog(@"1");
    });
    
    dispatch_async(queue, ^{
        NSLog(@"2");
    });

    dispatch_sync(queue, ^{ NSLog(@"3"); });
    
    NSLog(@"0");

    dispatch_async(queue, ^{
        NSLog(@"7");
    });
    dispatch_async(queue, ^{
        NSLog(@"8");
    });
    dispatch_async(queue, ^{
        NSLog(@"9");
    });

First of all, queue is a concurrent queue. 1 and 2 are submitted asynchronously, so their order is not guaranteed. 3 is submitted synchronously, so the relative order of 1, 2, and 3 is still uncertain; however, dispatch_sync blocks the caller, so 3 must print before 0. 7, 8, and 9 are submitted asynchronously after 0, so their order among themselves is not fixed either, but since dispatching an asynchronous task onto a thread takes time, 7, 8, and 9 will almost certainly print after 0.

The queue

Queues come in two kinds: serial queues and concurrent queues. In daily use we generally encounter the following four:

    // Serial queue
    dispatch_queue_t serial = dispatch_queue_create("kc", DISPATCH_QUEUE_SERIAL);
    
    // Concurrent queue
    dispatch_queue_t conque = dispatch_queue_create("cooci", DISPATCH_QUEUE_CONCURRENT);
    
    // Main queue
    dispatch_queue_t mainQueue = dispatch_get_main_queue();

    // Global concurrent queue
    dispatch_queue_t globQueue = dispatch_get_global_queue(0, 0);

We all know the main queue is a serial queue, but why is it a serial queue?

As usual, we first command-click in to look at the declaration:

/*!
 * @function dispatch_get_main_queue
 *
 * @abstract
 * Returns the default queue that is bound to the main thread.
 *
 * @discussion
 * In order to invoke blocks submitted to the main queue, the application must
 * call dispatch_main(), NSApplicationMain(), or use a CFRunLoop on the main
 * thread.
 *
 * The main queue is meant to be used in application context to interact with
 * the main thread and the main runloop.
 *
 * Because the main queue doesn't behave entirely like a regular serial queue,
 * it may have unwanted side-effects when used in processes that are not UI
 * apps (daemons). For such processes, the main queue should be avoided.
 *
 * @see dispatch_queue_main_t
 *
 * @result
 * Returns the main queue. This queue is created automatically on behalf of
 * the main thread before main() is called.
 */
DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_CONST DISPATCH_NOTHROW
dispatch_queue_main_t
dispatch_get_main_queue(void)
{
	return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
}

The comment explains that dispatch_get_main_queue returns the queue bound to the main thread. Below, we verify through the source code that it is indeed a serial queue.

Next, set a breakpoint and look at the stack information:

We can see that the underlying _block_invoke is called, and that the call comes from libdispatch.dylib. So let's go to the Apple Source Browser, download a copy of the libdispatch source code, and continue the analysis there.

Dispatch_get_main_queue source analysis

Once the source code is downloaded and opened, search directly for the dispatch_get_main_queue we want to study:

dispatch_queue_main_t
dispatch_get_main_queue(void)
{
    return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
}

Next, look for DISPATCH_GLOBAL_OBJECT

#define DISPATCH_GLOBAL_OBJECT(type, object) ((OS_OBJECT_BRIDGE type)&(object))
...
#define DISPATCH_GLOBAL_OBJECT(type, object) (static_cast<type>(&(object)))
...
#define DISPATCH_GLOBAL_OBJECT(type, object) ((type)&(object))

As you can see, the second argument, _dispatch_main_q, is the object, and as the name suggests it is the actual main-queue object; the first argument is the type it is cast to.
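As a toy illustration of that macro pattern (the names here are made up for the sketch, not libdispatch's real types), taking the address of a static global object and casting it to the handle type looks like this:

```c
#include <string.h>

/* Hypothetical stand-ins for libdispatch's types, for illustration only. */
typedef struct my_queue_s { const char *label; } my_queue_s;
typedef struct my_queue_s *my_queue_t;

/* Mirrors the plain-C variant of the macro:
 *   #define DISPATCH_GLOBAL_OBJECT(type, object) ((type)&(object))
 * i.e. take the address of a static global and cast it to the handle type. */
#define MY_GLOBAL_OBJECT(type, object) ((type)&(object))

static struct my_queue_s my_main_q = { .label = "com.example.main-thread" };

/* Every call hands back the same pre-created global object. */
static my_queue_t my_get_main_queue(void) {
    return MY_GLOBAL_OBJECT(my_queue_t, my_main_q);
}
```

This also shows why dispatch_get_main_queue never allocates: the queue already exists as a global before main() runs, and the accessor only returns its address.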

So the focus shifts to _dispatch_main_q. Searching for _dispatch_main_q directly returns far too many results, none of them the definition we want. Since we care about how it is assigned, searching for `_dispatch_main_q =` instead lands us on what we want:

// 6618342 Contact the team that owns the Instrument DTrace probe before
// renaming this symbol
struct dispatch_queue_static_s _dispatch_main_q = {
	DISPATCH_GLOBAL_OBJECT_HEADER(queue_main),
#if !DISPATCH_USE_RESOLVERS
	.do_targetq = _dispatch_get_default_queue(true),
#endif
	.dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(1) |
			DISPATCH_QUEUE_ROLE_BASE_ANON,
	.dq_label = "com.apple.main-thread",
	.dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),
	.dq_serialnum = 1,
};

Having found the definition, we also see the assignment to its label: the com.apple.main-thread we see when printing the main queue. In fact, searching for com.apple.main-thread directly would also have located this spot in the source.

Ok, so how do we tell from this definition that the queue is serial? Looking over its fields, dq_atomic_flags and dq_serialnum look the most relevant to serialness:

.dq_atomic_flags = DQF_THREAD_BOUND | DQF_WIDTH(1),

.dq_serialnum = 1,

At this point, we cannot yet conclude that these two fields are what makes a queue serial. We first need to confirm which attributes a serial queue necessarily has; only then, combined with what is assigned to _dispatch_main_q here, can we say that _dispatch_main_q is a serial queue.
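The width check itself is easy to model. The following plain-C sketch mirrors how DQF_WIDTH packs the queue width into the low bits of dq_atomic_flags; the constant values are assumptions for illustration, not necessarily Apple's exact bit layout:

```c
#include <stdint.h>
#include <stdbool.h>

typedef uint32_t queue_flags_t;

/* Illustrative constants modeled on libdispatch's layout: the queue width
 * lives in the low 16 bits of dq_atomic_flags, and other flags sit above
 * it. The exact bit positions here are assumptions for the sketch. */
#define DQF_WIDTH_MASK   0x0000ffffu
#define DQF_WIDTH(n)     ((queue_flags_t)(uint16_t)(n))
#define DQF_THREAD_BOUND 0x00010000u

static uint16_t queue_width(queue_flags_t dqf) {
    return (uint16_t)(dqf & DQF_WIDTH_MASK);
}

/* A width of 1 means only one task may run at a time: a serial queue. */
static bool queue_is_serial(queue_flags_t dqf) {
    return queue_width(dqf) == 1;
}
```

With this model, `DQF_THREAD_BOUND | DQF_WIDTH(1)` (the main queue's flags) yields a width of 1, i.e. a serial queue, while a large width means concurrency.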

Now let’s look at the characteristics of serial queues and concurrent queues, starting with the creation.

dispatch_queue_create

dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
	return _dispatch_lane_create_with_target(label, attr,
			DISPATCH_TARGET_QUEUE_DEFAULT, true);
}

_dispatch_lane_create_with_target

dispatch_queue_create leads us to _dispatch_lane_create_with_target.

Since we care about the queue that the creation function returns, inside its implementation we focus on the return value.

    // Trace is the underlying trace of apple
    // The dq is important here
    return _dispatch_trace_queue_create(dq)._dq;

Then follow dq:

DISPATCH_NOINLINE
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
		dispatch_queue_t tq, bool legacy)
{
	...
	// Allocate memory
	dispatch_lane_t dq = _dispatch_object_alloc(vtable,
			sizeof(struct dispatch_lane_s));
	// Initialize
	// dqai.dqai_concurrent ? DISPATCH_QUEUE_WIDTH_MAX : 1
	// concurrent - DISPATCH_QUEUE_WIDTH_MAX
	// serial     - 1
	_dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
			DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
			(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

	dq->dq_label = label;
	dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
			dqai.dqai_relpri);
	if (overcommit == _dispatch_queue_attr_overcommit_enabled) {
		dq->dq_priority |= DISPATCH_PRIORITY_FLAG_OVERCOMMIT;
	}
	if (!dqai.dqai_inactive) {
		_dispatch_queue_priority_inherit_from_target(dq, tq);
		_dispatch_lane_inherit_wlh_from_target(dq, tq);
	}
	_dispatch_retain(tq);
	dq->do_targetq = tq;
	_dispatch_object_debug(dq, "%s", __func__);
	return _dispatch_trace_queue_create(dq)._dq;
}

_dispatch_queue_init

static inline dispatch_queue_class_t
_dispatch_queue_init(dispatch_queue_class_t dqu, dispatch_queue_flags_t dqf, uint16_t width, uint64_t initial_state_bits)
{
	uint64_t dq_state = DISPATCH_QUEUE_STATE_INIT_VALUE(width);
	dispatch_queue_t dq = dqu._dq;

	dispatch_assert((initial_state_bits & ~(DISPATCH_QUEUE_ROLE_MASK |
			DISPATCH_QUEUE_INACTIVE)) == 0);

	if (initial_state_bits & DISPATCH_QUEUE_INACTIVE) {
		dq->do_ref_cnt += 2; // rdar://8181908 see _dispatch_lane_resume
		if (dx_metatype(dq) == _DISPATCH_SOURCE_TYPE) {
			dq->do_ref_cnt++; // released when DSF_DELETED is set
		}
	}

	dq_state |= initial_state_bits;
	dq->do_next = DISPATCH_OBJECT_LISTLESS;
	dqf |= DQF_WIDTH(width);
	os_atomic_store2o(dq, dq_atomic_flags, dqf, relaxed);
	dq->dq_state = dq_state;
	dq->dq_serialnum =
			os_atomic_inc_orig(&_dispatch_queue_serial_numbers, relaxed);
	return dqu;
}

Inside _dispatch_queue_init, the third parameter, width, feeds into:

dqf |= DQF_WIDTH(width);

So a queue whose DQF_WIDTH is 1 is a serial queue.

As for dq->dq_serialnum, it is just an identifier; for custom queues it starts at 17:

_dispatch_queue_serial_numbers

unsigned long volatile _dispatch_queue_serial_numbers = DISPATCH_QUEUE_SERIAL_NUMBER_INIT;

// skip zero
// 1 - main_q              // where 1 represents the main queue
// 2 - mgr_q
// 3 - mgr_root_q
// 4,5,6,7,8,9,10,11,12,13,14,15 - global queues
// 17 - workloop_fallback_q
// we use 'xadd' on Intel, so the initial value == next assigned
#define DISPATCH_QUEUE_SERIAL_NUMBER_INIT 17
extern unsigned long volatile _dispatch_queue_serial_numbers;
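The behavior of that counter can be sketched in a few lines. Atomicity is omitted here; the key point is that os_atomic_inc_orig returns the value *before* incrementing, so with an initial value of 17 the first user-created queue is numbered 17:

```c
/* Simplified, non-atomic model of _dispatch_queue_serial_numbers:
 * serial numbers 1-15 are reserved for the built-in queues, and
 * user-created queues start at 17. The real counter is incremented
 * atomically; this sketch drops that detail. */
#define QUEUE_SERIAL_NUMBER_INIT 17

static unsigned long queue_serial_numbers = QUEUE_SERIAL_NUMBER_INIT;

/* Models os_atomic_inc_orig: return the old value, then increment. */
static unsigned long next_queue_serialnum(void) {
    return queue_serial_numbers++;
}
```

Each dispatch_queue_create call would draw the next number from this counter, which is why dq_serialnum identifies a queue but says nothing about whether it is serial or concurrent.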

Here, we look at the global concurrent queue:

dispatch_queue_global_s

struct dispatch_queue_global_s _dispatch_root_queues[] = {
#define _DISPATCH_ROOT_QUEUE_IDX(n, flags) \
		((flags & DISPATCH_PRIORITY_FLAG_OVERCOMMIT) ? \
		DISPATCH_ROOT_QUEUE_IDX_##n##_QOS_OVERCOMMIT : \
		DISPATCH_ROOT_QUEUE_IDX_##n##_QOS)
#define _DISPATCH_ROOT_QUEUE_ENTRY(n, flags, ...) \
	[_DISPATCH_ROOT_QUEUE_IDX(n, flags)] = { \
		DISPATCH_GLOBAL_OBJECT_HEADER(queue_global), \
		.dq_state = DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE, \
		.do_ctxt = _dispatch_root_queue_ctxt(_DISPATCH_ROOT_QUEUE_IDX(n, flags)), \
		.dq_atomic_flags = DQF_WIDTH(DISPATCH_QUEUE_WIDTH_POOL), \
		.dq_priority = flags | ((flags & DISPATCH_PRIORITY_FLAG_FALLBACK) ? \
				_dispatch_priority_make_fallback(DISPATCH_QOS_##n) : \
				_dispatch_priority_make(DISPATCH_QOS_##n, 0)), \
		__VA_ARGS__ \
	}
	_DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, 0,
		.dq_label = "com.apple.root.maintenance-qos",
		.dq_serialnum = 4,
	),
	_DISPATCH_ROOT_QUEUE_ENTRY(MAINTENANCE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
		.dq_label = "com.apple.root.maintenance-qos.overcommit",
		.dq_serialnum = 5,
	),
	_DISPATCH_ROOT_QUEUE_ENTRY(BACKGROUND, 0,
		.dq_label = "com.apple.root.background-qos",
		.dq_serialnum = 6,
	),
	_DISPATCH_ROOT_QUEUE_ENTRY(BACKGROUND, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
		.dq_label = "com.apple.root.background-qos.overcommit",
		.dq_serialnum = 7,
	),
	_DISPATCH_ROOT_QUEUE_ENTRY(UTILITY, 0,
		.dq_label = "com.apple.root.utility-qos",
		.dq_serialnum = 8,
	),
	_DISPATCH_ROOT_QUEUE_ENTRY(UTILITY, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
		.dq_label = "com.apple.root.utility-qos.overcommit",
		.dq_serialnum = 9,
	),
	_DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT, DISPATCH_PRIORITY_FLAG_FALLBACK,
		.dq_label = "com.apple.root.default-qos",
		.dq_serialnum = 10,
	),
	_DISPATCH_ROOT_QUEUE_ENTRY(DEFAULT,
			DISPATCH_PRIORITY_FLAG_FALLBACK | DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
		.dq_label = "com.apple.root.default-qos.overcommit",
		.dq_serialnum = 11,
	),
	_DISPATCH_ROOT_QUEUE_ENTRY(USER_INITIATED, 0,
		.dq_label = "com.apple.root.user-initiated-qos",
		.dq_serialnum = 12,
	),
	_DISPATCH_ROOT_QUEUE_ENTRY(USER_INITIATED, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
		.dq_label = "com.apple.root.user-initiated-qos.overcommit",
		.dq_serialnum = 13,
	),
	_DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, 0,
		.dq_label = "com.apple.root.user-interactive-qos",
		.dq_serialnum = 14,
	),
	_DISPATCH_ROOT_QUEUE_ENTRY(USER_INTERACTIVE, DISPATCH_PRIORITY_FLAG_OVERCOMMIT,
		.dq_label = "com.apple.root.user-interactive-qos.overcommit",
		.dq_serialnum = 15,
	),
};

This further verifies that dq_serialnum is just an identifier; the QoS class is the first argument passed to _DISPATCH_ROOT_QUEUE_ENTRY.

Here dispatch_queue_global_s is a struct, and _dispatch_root_queues is a ready-made collection: the global queue matching the parameters you pass is simply fetched from this array at any time.

Notice the dedicated queue types, distinct from dispatch_queue_t:

    // Main queue type
    dispatch_queue_main_t
    // Global concurrent queue type
    dispatch_queue_global_t

As you can see, the queue types are not all the same, yet when we create a queue we receive it in a dispatch_queue_t. So there must be a common underlying type; let's infer it from the type definitions.

DISPATCH_DECL(dispatch_queue);
...
#define DISPATCH_DECL(name) OS_OBJECT_DECL_SUBCLASS(name, dispatch_object)
...
#define OS_OBJECT_DECL_SUBCLASS(name, super) \
		OS_OBJECT_DECL_IMPL(name, NSObject, <OS_OBJECT_CLASS(super)>)
...
#define OS_OBJECT_DECL_IMPL(name, adhere, ...) \
	OS_OBJECT_DECL_PROTOCOL(name, __VA_ARGS__) \
	typedef adhere<OS_OBJECT_CLASS(name)> \
		* OS_OBJC_INDEPENDENT_CLASS name##_t
...
#define OS_OBJECT_DECL_PROTOCOL(name, ...) \
	@protocol OS_OBJECT_CLASS(name) __VA_ARGS__ \
	@end
...
#define OS_OBJECT_CLASS(name) OS_##name

#define DISPATCH_DECL()

DISPATCH_DECL(dispatch_queue);
...
#define DISPATCH_DECL(name) \
		typedef struct name##_s : public dispatch_object_s {} *name##_t

Expanding it, this is what we get:

typedef struct dispatch_queue_s : public dispatch_object_s {} *dispatch_queue_t

So dispatch_queue_t is a pointer to struct dispatch_queue_s, which inherits from dispatch_object_s. If this seems confusing, compare it with our classes: Class points to objc_class, and objc_class inherits from objc_object.

Inheritance chain for dispatch_object_s

Just below the macro definitions above in the source, there is a typedef union:

typedef union {
    struct _os_object_s *_os_obj;
    struct dispatch_object_s *_do;
    struct dispatch_queue_s *_dq;
    struct dispatch_queue_attr_s *_dqa;
    struct dispatch_group_s *_dg;
    struct dispatch_source_s *_ds;
    struct dispatch_channel_s *_dch;
    struct dispatch_mach_s *_dm;
    struct dispatch_mach_msg_s *_dmsg;
    struct dispatch_semaphore_s *_dsema;
    struct dispatch_data_s *_ddata;
    struct dispatch_io_s *_dchannel;
} dispatch_object_t DISPATCH_TRANSPARENT_UNION;
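The idea of this union — one parameter type that can carry a pointer to any dispatch object — can be shown in miniature. The types below are invented for the sketch; the real dispatch_object_t additionally carries `__attribute__((transparent_union))` (DISPATCH_TRANSPARENT_UNION) so callers can pass any member pointer directly:

```c
/* Every "subclass" begins with the same common header. */
struct base_s  { int kind; };
struct queue_s { struct base_s base; int width; };

/* Miniature model of dispatch_object_t: a union of pointer types, so one
 * function signature can accept any dispatch object "subclass". */
typedef union {
    struct base_s  *_do;
    struct queue_s *_dq;
} object_t;

static int object_kind(object_t obj) {
    /* Reading through _do is fine whichever pointer was stored, because
     * every member type starts with the common header at offset 0. */
    return obj._do->kind;
}

static struct queue_s g_q = { .base = { .kind = 7 }, .width = 1 };

static int union_demo(void) {
    object_t o = { ._dq = &g_q };   /* store as a queue pointer... */
    return object_kind(o);          /* ...read through the base view */
}
```

This is how a function like _dispatch_queue_init can take a dispatch_queue_class_t and pull out `dqu._dq` without any explicit casting at the call site.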

dispatch_object_t is thus a transparent union that can hold a pointer to any dispatch object type, dispatch_object_s among them. Next, look at dispatch_object_s itself:

struct dispatch_object_s {
	_DISPATCH_OBJECT_HEADER(object);
};
...
#define _DISPATCH_OBJECT_HEADER(x) \
	struct _os_object_s _as_os_obj[0]; \
	OS_OBJECT_STRUCT_HEADER(dispatch_##x); \
	struct dispatch_##x##_s *volatile do_next; \
	struct dispatch_queue_s *do_targetq; \
	void *do_ctxt; \
	union { \
		dispatch_function_t DISPATCH_FUNCTION_POINTER do_finalizer; \
		void *do_introspection_ctxt; \
	}
...
#define OS_OBJECT_STRUCT_HEADER(x) \
	_OS_OBJECT_HEADER(\
	const void *_objc_isa, \
	do_ref_cnt, \
	do_xref_cnt); \
	const struct x##_vtable_s *do_vtable
...
#define _OS_OBJECT_HEADER(isa, ref_cnt, xref_cnt) \
	isa; /* must be pointer-sized and use __ptrauth_objc_isa_pointer */ \
	int volatile ref_cnt; \
	int volatile xref_cnt

Data Structure summary

Unwrapping the macros layer by layer, the whole inheritance chain comes out as follows:

dispatch_queue_t points to dispatch_queue_s; dispatch_queue_s inherits from dispatch_object_s, which is in turn rooted in _os_object_s.
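This kind of C "inheritance" works by embedding the base struct as the first member, so a pointer to the subclass can be cast to a pointer to the base. A minimal sketch, with illustrative names standing in for _os_object_s, dispatch_object_s, and dispatch_queue_s:

```c
#include <stddef.h>

/* Each "subclass" embeds its base struct as the first member. */
struct os_object_s   { int ref_cnt; };
struct disp_object_s { struct os_object_s os_base; void *do_targetq; };
struct disp_queue_s  { struct disp_object_s do_base; const char *dq_label; };

static int queue_ref_cnt(struct disp_queue_s *dq) {
    /* Valid because do_base sits at offset 0 of disp_queue_s, and
     * os_base sits at offset 0 of disp_object_s: the base header is
     * always at the very start of the object. */
    return ((struct os_object_s *)dq)->ref_cnt;
}

static struct disp_queue_s g_queue = {
    .do_base = { .os_base = { .ref_cnt = 3 }, .do_targetq = 0 },
    .dq_label = "com.example.queue",
};
```

Any code that only needs the common header (reference counts, vtable pointer) can treat every dispatch object uniformly through the base view.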

How is the dispatch_sync block called

The use of dispatch_sync

dispatch_sync(dispatch_get_global_queue(0, 0), ^{
    NSLog(@"SuperMan function Analysis");
});

In the demo above, our most common usage, we submit a block task to dispatch_get_global_queue. We know that a block defined on its own only runs when block() is called, yet with dispatch_sync we never see such a call and the task in the block still executes. How does that work? Let's explore the internal implementation.

First, the internal implementation:

void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
	uintptr_t dc_flags = DC_FLAG_BLOCK;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
	}
	_dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}
...
#define _dispatch_Block_invoke(bb) \
		((dispatch_function_t)((struct Block_layout *)bb)->invoke)
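_dispatch_Block_invoke reads the invoke function pointer out of the block's memory layout. A hand-rolled model of that trick, with the Block_layout fields abbreviated from the Clang blocks ABI:

```c
#include <stdint.h>

typedef void (*dispatch_function_t)(void *);

/* Abbreviated Block_layout per the Clang blocks ABI: a block object is a
 * struct whose invoke member is the function the block runs. The real
 * layout has more fields (descriptor, captured variables, ...). */
struct Block_layout {
    void *isa;
    int32_t flags;
    int32_t reserved;
    void (*invoke)(void *);
};

/* Mirrors: #define _dispatch_Block_invoke(bb) \
 *     ((dispatch_function_t)((struct Block_layout *)bb)->invoke) */
#define BLOCK_INVOKE(bb) \
    ((dispatch_function_t)((struct Block_layout *)(bb))->invoke)

static int g_ran = 0;
static void my_invoke(void *blk) { (void)blk; g_ran = 1; }

static struct Block_layout g_block = { .invoke = my_invoke };

static int run_block(void) {
    /* Calling block() really means invoke(block): the block itself is
     * passed as the first argument so invoke can reach captured state. */
    BLOCK_INVOKE(&g_block)(&g_block);
    return g_ran;
}
```

So dispatch_sync never needs to "call the block" in the Objective-C sense; it extracts the invoke pointer up front and carries it down the call chain as an ordinary C function pointer plus context.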

_dispatch_sync_f

// Intermediate layer encapsulation
static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
		uintptr_t dc_flags)
{
    _dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}


_dispatch_sync_f_inline

static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
		dispatch_function_t func, uintptr_t dc_flags)
{
	if (likely(dq->dq_width == 1)) {
		return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
	}

	if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
		DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
	}

	dispatch_lane_t dl = upcast(dq)._dl;
	// Global concurrent queues and queues bound to non-dispatch threads
	// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
	if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
		return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
	}

	if (unlikely(dq->do_targetq->do_targetq)) {
		return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
	}
	_dispatch_introspection_sync_begin(dl);
	_dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
			_dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}

We cannot tell statically which branch will be taken, but we can find out with a breakpoint.

Code execution goes to _dispatch_sync_f_slow:

_dispatch_sync_f_slow

static void
_dispatch_sync_f_slow(dispatch_queue_class_t top_dqu, void *ctxt,
		dispatch_function_t func, uintptr_t top_dc_flags,
		dispatch_queue_class_t dqu, uintptr_t dc_flags)
{
	dispatch_queue_t top_dq = top_dqu._dq;
	dispatch_queue_t dq = dqu._dq;
	if (unlikely(!dq->do_targetq)) {
		return _dispatch_sync_function_invoke(dq, ctxt, func);
	}

	pthread_priority_t pp = _dispatch_get_priority();
	struct dispatch_sync_context_s dsc = {
		.dc_flags    = DC_FLAG_SYNC_WAITER | dc_flags,
		.dc_func     = _dispatch_async_and_wait_invoke,
		.dc_ctxt     = &dsc,
		.dc_other    = top_dq,
		.dc_priority = pp | _PTHREAD_PRIORITY_ENFORCE_FLAG,
		.dc_voucher  = _voucher_get(),
		.dsc_func    = func,
		.dsc_ctxt    = ctxt,
		.dsc_waiter  = _dispatch_tid_self(),
	};

	_dispatch_trace_item_push(top_dq, &dsc);
	__DISPATCH_WAIT_FOR_QUEUE__(&dsc, dq);

	if (dsc.dsc_func == NULL) {
		// dsc_func being cleared means that the block ran on another thread ie.
		// case (2) as listed in _dispatch_async_and_wait_f_slow.
		dispatch_queue_t stop_dq = dsc.dc_other;
		return _dispatch_sync_complete_recurse(top_dq, stop_dq, top_dc_flags);
	}

	_dispatch_introspection_sync_begin(top_dq);
	_dispatch_trace_item_pop(top_dq, &dsc);
	_dispatch_sync_invoke_and_complete_recurse(top_dq, ctxt, func,top_dc_flags
			DISPATCH_TRACE_ARG(&dsc));
}

Stepping on with the breakpoint, we arrive at _dispatch_sync_function_invoke:

_dispatch_sync_function_invoke

static void
_dispatch_sync_function_invoke(dispatch_queue_class_t dq, void *ctxt,
		dispatch_function_t func)
{
	_dispatch_sync_function_invoke_inline(dq, ctxt, func);
}

_dispatch_sync_function_invoke_inline

static inline void
_dispatch_sync_function_invoke_inline(dispatch_queue_class_t dq, void *ctxt,
		dispatch_function_t func)
{
	dispatch_thread_frame_s dtf;
	_dispatch_thread_frame_push(&dtf, dq);
	_dispatch_client_callout(ctxt, func);
	_dispatch_perfmon_workitem_inc();
	_dispatch_thread_frame_pop(&dtf);
}

Here ctxt and func are handed to _dispatch_client_callout(ctxt, func):

_dispatch_client_callout

void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
	_dispatch_get_tsd_base();
	void *u = _dispatch_get_unwind_tsd();
	if (likely(!u)) return f(ctxt);
	_dispatch_set_unwind_tsd(NULL);
	f(ctxt);
	_dispatch_free_unwind_tsd();
	_dispatch_set_unwind_tsd(u);
}

And here it executes f(ctxt), which is the call that runs our block.
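The shape of that callout — save a thread-local slot, run the user callback, restore the slot, with a fast path when the slot is empty — can be modeled like this. This is a simplified sketch, not the real TSD machinery:

```c
#include <stddef.h>

typedef void (*dispatch_function_t)(void *);

/* Stand-in for the per-thread "unwind" slot the real code manages
 * through _dispatch_get_unwind_tsd() and friends. */
static _Thread_local void *unwind_tsd;

static void client_callout(void *ctxt, dispatch_function_t f) {
    void *u = unwind_tsd;
    if (u == NULL) { f(ctxt); return; }  /* fast path: just f(ctxt) */
    unwind_tsd = NULL;                   /* clear around the callback */
    f(ctxt);
    unwind_tsd = u;                      /* restore the saved value */
}

static void bump(void *ctxt) { *(int *)ctxt += 1; }

static int callout_demo(void) {
    int n = 0;
    client_callout(&n, bump);   /* empty slot, so this is plain f(ctxt) */
    return n;
}
```

In the common case the guard costs one thread-local read and the whole function collapses to a direct f(ctxt), which is why the block appears to "just run".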

Finally, put a breakpoint in the sync block to verify:

How is the dispatch_async task block called

dispatch_async

void
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
    dispatch_continuation_t dc = _dispatch_continuation_alloc();
    uintptr_t dc_flags = DC_FLAG_CONSUME;
    dispatch_qos_t qos;
    // Task encapsulation and priority encapsulation
    // An asynchronous function means that an asynchronous call will not be ordered. Task callbacks are asynchronous and related to CPU scheduling
    // If you need to call it at some point, you can call it again
    qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
    _dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}

_dispatch_continuation_init

static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc, dispatch_queue_class_t dqu, dispatch_block_t work, dispatch_block_flags_t flags, uintptr_t dc_flags)
{
	void *ctxt = _dispatch_Block_copy(work);

	dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
	if (unlikely(_dispatch_block_has_private_data(work))) {
		dc->dc_flags = dc_flags;
		dc->dc_ctxt = ctxt;
		// will initialize all fields but requires dc_flags & dc_ctxt to be set
		return _dispatch_continuation_init_slow(dc, dqu, flags);
	}

	dispatch_function_t func = _dispatch_Block_invoke(work);
	if (dc_flags & DC_FLAG_CONSUME) {
		func = _dispatch_call_block_and_release;
	}
	return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}

_dispatch_continuation_init_f

static inline dispatch_qos_t
_dispatch_continuation_init_f(dispatch_continuation_t dc,
		dispatch_queue_class_t dqu, void *ctxt, dispatch_function_t f,
		dispatch_block_flags_t flags, uintptr_t dc_flags)
{
	pthread_priority_t pp = 0;
	dc->dc_flags = dc_flags | DC_FLAG_ALLOCATED;
	dc->dc_func = f;
	dc->dc_ctxt = ctxt;
	// in this context DISPATCH_BLOCK_HAS_PRIORITY means that the priority
	// should not be propagated, only taken from the handler if it has one
	if(! (flags & DISPATCH_BLOCK_HAS_PRIORITY)) { pp = _dispatch_priority_propagate(); } _dispatch_continuation_voucher_set(dc, flags);return _dispatch_continuation_priority_set(dc, dqu, pp, flags);
}

In the synchronous path the block was simply invoked with f(ctxt); here, instead, we have:

    dc->dc_func = f;
    dc->dc_ctxt = ctxt;

a wrapping: func and ctxt are packaged up and assigned into the continuation dc. At the end, _dispatch_continuation_priority_set also handles the priority.
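The essence of the continuation — stash func and ctxt now, invoke them later — can be sketched as follows (fields abbreviated, names modeled on dispatch_continuation_s):

```c
#include <stddef.h>

typedef void (*dispatch_function_t)(void *);

/* Abbreviated model of dispatch_continuation_s: dispatch_async wraps the
 * block's invoke pointer and its context so the task can be executed
 * later, when a worker thread drains the queue. */
struct continuation_s {
    dispatch_function_t dc_func;
    void *dc_ctxt;
};

static void continuation_init(struct continuation_s *dc,
                              dispatch_function_t f, void *ctxt) {
    dc->dc_func = f;     /* saved now... */
    dc->dc_ctxt = ctxt;
}

static void continuation_invoke(struct continuation_s *dc) {
    dc->dc_func(dc->dc_ctxt);   /* ...called later by the worker */
}

static void add_one(void *p) { *(int *)p += 1; }

static int continuation_demo(void) {
    int n = 41;
    struct continuation_s dc;
    continuation_init(&dc, add_one, &n);
    /* Nothing has run yet; invoking the stored continuation performs it. */
    continuation_invoke(&dc);
    return n;
}
```

This separation of "package the work" from "perform the work" is exactly what makes async possible: the caller returns as soon as the continuation is stored.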

In the qos line of dispatch_async, the task and its priority are encapsulated. Because an asynchronous call returns immediately and the block executes out of order, whenever CPU scheduling gets to it, the system must save the task first and wait for a worker to pick it up. So, continue with the following line:

_dispatch_continuation_async


#define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z)

...

static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
		dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
#if DISPATCH_INTROSPECTION
	if (!(dc_flags & DC_FLAG_NO_INTROSPECTION)) {
		_dispatch_trace_item_push(dqu, dc);
	}
#else
	(void)dc_flags;
#endif
	return dx_push(dqu._dq, dc, qos);
}

dx_vtable(x)->dq_push(x, y, z)

The question is what the -> ends up calling. dq_push is assigned in many places, and this is where it clicks: dx_vtable(x) looks up the vtable of the concrete queue class, so whether the current _dq is a serial queue or a global concurrent queue determines which dq_push implementation actually runs.
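The dx_push pattern is classic C vtable dispatch: the object's header holds a pointer to a table of function pointers, and the macro just indirects through it. A miniature model (names invented for the sketch; the real dx_push also takes a qos argument):

```c
/* Each queue class installs its own dq_push in a shared vtable shape,
 * modeled on: #define dx_push(x, y, z) dx_vtable(x)->dq_push(x, y, z) */
struct queue;
typedef struct {
    const char *do_type;
    int (*dq_push)(struct queue *q, int item);
} vtable_s;

struct queue {
    const vtable_s *do_vtable;   /* dx_vtable(x) reads this field */
    int last_item;
};

static int serial_push(struct queue *q, int item) { q->last_item = item; return 1; }
static int global_push(struct queue *q, int item) { q->last_item = item; return 2; }

static const vtable_s serial_vtable = { "lane",         serial_push };
static const vtable_s global_vtable = { "queue_global", global_push };

#define dx_vtable(x)  ((x)->do_vtable)
#define dx_push(x, y) dx_vtable(x)->dq_push((x), (y))

static int push_demo(void) {
    struct queue sq = { &serial_vtable, 0 };
    struct queue gq = { &global_vtable, 0 };
    /* The same macro invocation lands in different implementations. */
    return dx_push(&sq, 10) + dx_push(&gq, 20);
}
```

This is why a global queue submission ends up in _dispatch_root_queue_push while other queue classes take different paths: the vtable chooses the implementation at runtime.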

dq_push

...
DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_global, lane,
	.do_type     = DISPATCH_QUEUE_GLOBAL_ROOT_TYPE,
	.do_dispose  = _dispatch_object_no_dispose,
	.do_debug    = _dispatch_queue_debug,
	.do_invoke   = _dispatch_object_no_invoke,
	.dq_activate = _dispatch_queue_no_activate,
	.dq_wakeup   = _dispatch_root_queue_wakeup,
	.dq_push     = _dispatch_root_queue_push,
);
...

We could keep stepping through from here, but it is a little quicker to look at the stack information:

bt

And you can see that the chain ends in _dispatch_client_callout:

void
_dispatch_client_callout(void *ctxt, dispatch_function_t f)
{
	_dispatch_get_tsd_base();
	void *u = _dispatch_get_unwind_tsd();
	if(likely(! u))return f(ctxt);
	_dispatch_set_unwind_tsd(NULL);
	f(ctxt);
	_dispatch_free_unwind_tsd();
	_dispatch_set_unwind_tsd(u);
}

Ultimately, this is where the task is executed.

dispatch_async -> _dispatch_continuation_async -> dx_push -> _dispatch_root_queue_push -> _dispatch_root_queue_push_inline -> _dispatch_root_queue_poke -> _dispatch_root_queue_poke_slow -> _dispatch_root_queues_init -> dispatch_once_f(&_dispatch_root_queues_pred, NULL, _dispatch_root_queues_init_once) -> _dispatch_worker_thread2 -> _dispatch_root_queue_drain -> _dispatch_queue_override_invoke -> _dispatch_client_callout -> _dispatch_call_block_and_release

supplement

_dispatch_queue_override_invoke

We did not find an explicit call to _dispatch_queue_override_invoke during the source walk, but the stack information shows it is reached after _dispatch_root_queue_drain. We will clear this question up in the next section.

The singleton

static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    instance = [[self alloc] init];  // instance: the shared object being created
});

The two key parameters in the singleton are onceToken and the block.

There are two core points:

  1. Why is the singleton's block called only once?
  2. How does the singleton's block get called?

It is called only once because: internally, val (our onceToken) is wrapped as a dispatch_once_gate_t, a variable used for atomic access to an underlying uintptr_t value v. The onceToken is a static variable, and each singleton call site has its own, which guarantees uniqueness. The implementation reads it atomically with os_atomic_load: if the value read is DLOCK_ONCE_DONE, the work has already been performed once and the function returns immediately. The first time the code runs, the thread locks for safety, ensuring the task executes exactly once and preventing the same onceToken from being processed by multiple threads at the same time. Under the lock, the block is invoked; when the call completes, the lock is released and v is set to DLOCK_ONCE_DONE, so the block will never be called again. This is how the singleton's uniqueness is guaranteed.
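Stripped of the atomics and the lock, the once logic reduces to a token check, a single invocation, and marking the token done. A deliberately single-threaded sketch (the real dispatch_once does the load atomically and serializes racing first callers with a lock; both are omitted here):

```c
#include <stdint.h>

typedef void (*once_block_t)(void *);

/* Stand-in for DLOCK_ONCE_DONE: a sentinel meaning "already ran". */
#define ONCE_DONE ((uintptr_t)~0ul)

static void my_once(uintptr_t *token, void *ctxt, once_block_t block) {
    if (*token == ONCE_DONE) return;  /* fast path: already initialized */
    block(ctxt);                      /* first caller runs the block */
    *token = ONCE_DONE;               /* later calls return immediately */
}

static int g_init_count = 0;
static void init_once(void *ctxt) { (void)ctxt; g_init_count++; }

static int once_demo(void) {
    uintptr_t token = 0;
    my_once(&token, 0, init_once);
    my_once(&token, 0, init_once);
    my_once(&token, 0, init_once);
    return g_init_count;   /* the block ran exactly once */
}
```

The token's lifetime is why onceToken must be static: a fresh stack variable would start at 0 on every call and defeat the whole mechanism.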

End with the interview questions

// How many are printed in each case?

- (void)MTDemo {
    while (self.num < 5) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            self.num++;
        });
    }
    NSLog(@"end : %d", self.num);
}

- (void)KSDemo {
    for (int i = 0; i < 10000; i++) {
        dispatch_async(dispatch_get_global_queue(0, 0), ^{
            self.num++;
        });
    }
    NSLog(@"end : %d", self.num);
}

Answer:

  1. >= 5
  2. <= 10000

Because:

  • First, the while loop does not exit until self.num has reached 5; it keeps submitting tasks in the meantime, and it is unclear how many extra tasks were queued and how many of them run before the NSLog, so the printed value is at least 5.

  • Second, the for loop simply submits 10000 asynchronous tasks; the NSLog at the end runs regardless of whether they have all finished, so the printed value is at most 10000.

  • Whether a task runs synchronously or asynchronously, scheduling it onto a thread from the queue takes some time.

These two interview questions tell us that data is not safe when tasks mutate it asynchronously.
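The unsafety comes from self.num++ compiling to a load, an add, and a store, which two threads can interleave. This deterministic sketch hand-interleaves the three steps to show the lost update (no real threads are involved; the interleaving is written out explicitly):

```c
/* Why num++ is unsafe: the increment is really three steps
 * (load, add, store). Interleaving two "threads'" steps by hand
 * shows how one increment gets lost. */
static int lost_update_demo(void) {
    int num = 0;

    int a = num;   /* thread A loads 0            */
    int b = num;   /* thread B loads 0            */
    a = a + 1;     /* thread A computes 1         */
    b = b + 1;     /* thread B computes 1         */
    num = a;       /* thread A stores 1           */
    num = b;       /* thread B stores 1 - A's increment is lost */

    return num;    /* 1, not 2 */
}
```

With real threads the interleaving is nondeterministic, which is exactly why the while-loop version can submit more than 5 increments before the condition finally observes 5.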

So how can we keep our data safe? The operations on the data must be protected with locks, which we will explore in detail in a later chapter.

extension

atomic

Even declaring num as atomic is not safe, because atomic only protects the setter and getter; it cannot protect a compound read-modify-write like num++ performed from multiple threads.