Have you ever been confused by the terms synchronous, asynchronous, serial queue, and concurrent queue during iOS development and interviews? And when they are combined — synchronous dispatch to a serial queue, asynchronous dispatch to a serial queue, synchronous dispatch to a concurrent queue, asynchronous dispatch to a concurrent queue — are you unsure what happens at runtime in each case? This article tries to clarify the differences between them from the perspective of the source code, so that you can better grasp the GCD interfaces both in development and in interviews.
1. Data structure
dispatch_queue_t
dispatch_queue_t is a pointer to struct dispatch_queue_s. When we declare a dispatch_queue_t in code, it expands as follows in an Objective-C context.
DISPATCH_DECL(dispatch_queue);
// The final expansion is as follows
@protocol OS_dispatch_queue <OS_dispatch_object>
@end
typedef NSObject<OS_dispatch_queue> * __attribute__((objc_independent_class)) dispatch_queue_t;
// OS_dispatch_object is defined by the following macro
OS_OBJECT_DECL_CLASS(dispatch_object);
// The final expansion is as follows
@protocol OS_dispatch_object <NSObject>
@end
typedef NSObject<OS_dispatch_object> * __attribute__((objc_independent_class)) dispatch_object_t;
What is OS_dispatch_queue for? On the Objective-C side it is just a stub class. If we create a demo project, create a queue, and inspect the returned dispatch_queue_t pointer at a breakpoint, we can see that the corresponding Objective-C class is OS_dispatch_queue and that it has no member variables. The object itself is allocated with calloc according to the layout of struct dispatch_queue_s, so we can treat the allocated object as a dispatch_queue_s and modify it through that view. In other words, dispatch_queue_s and OS_dispatch_queue are two identities of the same object — somewhat like toll-free bridging, although not exactly: the methods and functions of the two sides are not interchangeable. The isa of this class points to a dispatch_queue_vtable_s structure, which records the addresses of the functions to call for the various operations on this queue.
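To see this for yourself, here's a quick check you can run in a demo project (illustrative; the exact class name printed varies by OS version):

dispatch_queue_t q = dispatch_queue_create("test", DISPATCH_QUEUE_SERIAL);
// The Objective-C identity of the object: prints an OS_dispatch_queue
// variant (e.g. "OS_dispatch_queue_serial" on recent SDKs).
NSLog(@"%@", NSStringFromClass([q class]));
// The class exposes no instance variables; the storage behind the
// pointer is laid out as struct dispatch_queue_s.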
Common structure and inheritance relationships in GCD
As noted in the queue_internal header, the following two diagrams illustrate the inheritance relationships between structures commonly used by API users and structures used internally in the GCD source code.
Figure 1 shows that dispatch_queue_serial_t, dispatch_queue_global_t, and dispatch_queue_concurrent_t all inherit from dispatch_queue_t, and that the main-queue type dispatch_queue_main_t inherits from dispatch_queue_serial_t. This is where the familiar conclusion that the main queue is a serial queue comes from.
Each _t type is a pointer to one of the _s structures, and dispatch_object_t is a union type that can point to any of them; it is used for conversions between the structures.
You might wonder how C can establish such an inheritance relationship. From the source we can see that GCD declares these structures through layers of macro definitions that force a parent structure and its child to share the same set of leading members. This also makes the code harder to read — recovering the full declaration of any one structure takes some effort.
The other question is how to convert between a base type and a subclass. GCD uses a union that enumerates the structure types a pointer may refer to. This is only a rough imitation of objects, so casts between the pointer types must be handled carefully: conversions between struct types are not type-safe, and the compiler will not catch mistakes for us.
typedef union {
struct _os_object_s* _os_obj;
struct dispatch_object_s* _do;
struct dispatch_queue_s* _dq;
struct dispatch_queue_attr_s* _dqa;
struct dispatch_group_s* _dg;
struct dispatch_source_s* _ds;
struct dispatch_mach_s* _dm;
struct dispatch_mach_msg_s* _dmsg;
struct dispatch_semaphore_s* _dsema;
struct dispatch_data_s* _ddata;
struct dispatch_io_s* _dchannel;
} dispatch_object_t DISPATCH_TRANSPARENT_UNION;
typedef struct dispatch_queue_s *dispatch_queue_t;
So dispatch_object_t is declared as a union whose members are these pointers, mirroring the base-class idea above: each structure pointer in the union is one of the "subclass" types. Object orientation, then, is not tied to any particular programming language — you can write perfectly object-oriented code in C. Now let's look at the declaration of dispatch_queue_s.
struct dispatch_queue_s {
struct dispatch_object_s _as_do[0];
struct _os_object_s _as_os_obj[0];
const struct dispatch_queue_vtable_s *do_vtable;
int volatile do_ref_cnt;
int volatile do_xref_cnt;
struct dispatch_queue_s *volatile do_next;
struct dispatch_queue_s *do_targetq;
void *do_ctxt;
void *do_finalizer;
DISPATCH_UNION_LE(uint64_t volatile dq_state, dispatch_lock dq_state_lock, uint32_t dq_state_bits );
void *__dq_opaque1;
unsigned long dq_serialnum;
const char *dq_label;
DISPATCH_UNION_LE(uint32_t volatile dq_atomic_flags, const uint16_t dq_width, const uint16_t __dq_opaque2 );
dispatch_priority_t dq_priority;
union {
struct dispatch_queue_specific_head_s *dq_specific_head;
struct dispatch_source_refs_s *ds_refs;
struct dispatch_timer_source_refs_s *ds_timer_refs;
struct dispatch_mach_recv_refs_s *dm_recv_refs;
};
int volatile dq_sref_cnt;
};
Several of the member variables deserve a note:

_as_do and _as_os_obj
Zero-length arrays — a compiler extension rather than standard C. The declared members occupy no space and exist only for converting between the different types.

do_vtable
Defines the operations on the queue: how to push onto it, how to wake it up, and so on. Its main purpose is to decouple queue operations from the queue structure itself. Viewed as an Objective-C class, the isa also points to this vtable.

do_ref_cnt
A reference count, similar in spirit to the ARC rules.

dq_label
Stores the name of the queue.
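To make the "inheritance" trick concrete, here is a minimal sketch of the pattern in C — illustrative names only, not libdispatch's actual macros: a "subclass" repeats the parent's leading members, and a transparent union lets one function accept any of the pointer types.

// Illustrative only: "inheritance" via an identical member prefix.
struct base_s {
    int ref_cnt;              // every "class" starts with this prefix
};
struct queue_s {
    int ref_cnt;              // same leading member as base_s
    const char *label;        // "subclass" members follow
};
typedef union {
    struct base_s *_base;
    struct queue_s *_queue;
} object_t __attribute__((__transparent_union__));

// Accepts either pointer type without an explicit cast.
static void object_retain(object_t obj) {
    obj._base->ref_cnt++;     // safe: the member prefixes are identical
}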
2. Serial and concurrent queues
The most common way we use GCD is to create a queue, usually with the following code.
// Serial queue
dispatch_queue_t sq = dispatch_queue_create("test", DISPATCH_QUEUE_SERIAL);
// Concurrent queue
dispatch_queue_t cq = dispatch_queue_create("testcq", DISPATCH_QUEUE_CONCURRENT);
There are two points to focus on:
First, look at the types of the two attr arguments.
The first, DISPATCH_QUEUE_SERIAL, is a macro that expands to NULL and represents the default attribute. The second, DISPATCH_QUEUE_CONCURRENT, expands to
DISPATCH_GLOBAL_OBJECT(dispatch_queue_attr_t,_dispatch_queue_attr_concurrent)
Start with dispatch_queue_attr_t: it is the pointer type for struct dispatch_queue_attr_s. The instance behind DISPATCH_QUEUE_CONCURRENT is _dispatch_queue_attr_concurrent, a globally initialized variable. It lives in the _dispatch_queue_attrs[] array — every dispatch_queue_attr_t pointer points into this array. Each element corresponds to one combination of the fields described by dispatch_queue_attr_info_t, such as whether the queue is concurrent, its QoS, and so on; the same dispatch_queue_attr_s layout thus appears at different indices with different attribute combinations. The following function converts a dispatch_queue_attr_info_t into a dispatch_queue_attr_t. As you can see, the array index is computed from the fields of dispatch_queue_attr_info_t; the reverse conversion simply inverts this calculation and is not described here.
static dispatch_queue_attr_t
_dispatch_queue_attr_from_info(dispatch_queue_attr_info_t dqai)
{
size_t idx = 0;
idx *= DISPATCH_QUEUE_ATTR_OVERCOMMIT_COUNT;
idx += dqai.dqai_overcommit;
idx *= DISPATCH_QUEUE_ATTR_AUTORELEASE_FREQUENCY_COUNT;
idx += dqai.dqai_autorelease_frequency;
idx *= DISPATCH_QUEUE_ATTR_QOS_COUNT;
idx += dqai.dqai_qos;
idx *= DISPATCH_QUEUE_ATTR_PRIO_COUNT;
idx += (size_t)(-dqai.dqai_relpri);
idx *= DISPATCH_QUEUE_ATTR_CONCURRENCY_COUNT;
idx += !dqai.dqai_concurrent;
idx *= DISPATCH_QUEUE_ATTR_INACTIVE_COUNT;
idx += dqai.dqai_inactive;
return (dispatch_queue_attr_t)&_dispatch_queue_attrs[idx];
}
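The calculation above is a mixed-radix encoding: each attribute acts as a "digit" whose base is the number of values that attribute can take, so every combination maps to a unique array slot. A simplified sketch with hypothetical dimension sizes (the real DISPATCH_QUEUE_ATTR_*_COUNT constants play the same role):

// Illustrative mixed-radix index; the dimension sizes are hypothetical.
enum { OVERCOMMIT_COUNT = 3, QOS_COUNT = 7, CONCURRENCY_COUNT = 2 };

static size_t attr_index(int overcommit, int qos, int concurrent) {
    size_t idx = 0;
    idx = idx * OVERCOMMIT_COUNT + (size_t)overcommit;   // digit 1
    idx = idx * QOS_COUNT + (size_t)qos;                 // digit 2
    // digit 3 is negated, matching !dqai.dqai_concurrent in the real code
    idx = idx * CONCURRENCY_COUNT + (size_t)!concurrent;
    return idx; // unique slot in a flat attribute array
}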
Second, the dispatch_queue_create method
dispatch_queue_t
dispatch_queue_create(const char *label, dispatch_queue_attr_t attr)
{
return _dispatch_lane_create_with_target(label, attr,
DISPATCH_TARGET_QUEUE_DEFAULT, true);
}
This function returns a dispatch_queue_t pointing to the allocated queue structure. dispatch_queue_create ultimately creates the queue through _dispatch_lane_create_with_target, so let's look at that function's implementation.
_dispatch_lane_create_with_target
_dispatch_lane_create_with_target(label, attr, DISPATCH_TARGET_QUEUE_DEFAULT, true)
The first argument is the name of the queue to create; attr is the attribute structure referred to by DISPATCH_QUEUE_SERIAL or DISPATCH_QUEUE_CONCURRENT above; DISPATCH_TARGET_QUEUE_DEFAULT is a macro that expands to NULL; and the last parameter, legacy, is true.
static dispatch_queue_t
_dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
dispatch_queue_t tq, bool legacy)
{
// [1]
dispatch_queue_attr_info_t dqai = _dispatch_queue_attr_to_info(dqa);
qos = ...
overcommit = ...
// [2]
if (!tq) {
tq = _dispatch_get_root_queue(
qos == DISPATCH_QOS_UNSPECIFIED ? DISPATCH_QOS_DEFAULT : qos,
overcommit == _dispatch_queue_attr_overcommit_enabled)->_as_dq;
if (unlikely(!tq)) {
DISPATCH_CLIENT_CRASH(qos, "Invalid queue attribute");
}
}
// [3]
legacy = ...
const void *vtable = ...
dqf = ...
label = ...
// [4]
dispatch_lane_t dq = _dispatch_object_alloc(vtable, sizeof(struct dispatch_lane_s));
// [5]
_dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
(dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));
// [6]
dq->dq_label = label;
dq->dq_priority = ...
// [7]
_dispatch_retain(tq);
dq->do_targetq = tq;
// [8]
return dq._dq;
}
To highlight the main steps of the creation process, the excerpt above omits the handling of some parameters. The numbered markers are explained below.
[1] Initialize qos and overcommit from the dispatch_queue_attr_t.
[2] Set the tq variable. The interesting part is how tq — the target queue, i.e. the root queue that the queue we create hangs off — is obtained through _dispatch_get_root_queue. GCD maintains a global pool of queues, numbered as follows: 0 is skipped, 1 is main_q, 2 is mgr_q, 3 is mgr_root_q, 4–15 are the global root queues, and 17 is workloop_fallback_q. The qos and overcommit parameters are mapped to an index into this pool to fetch the queue. Notice that the target queue of a newly created queue is one of the global root queues; this is also why you can tell whether a queue is itself a global root queue by checking whether it has a do_targetq.
[3] Set legacy, vtable, dqf, and label according to various conditions.
[4] _dispatch_object_alloc allocates the queue object on the heap. isa and do_vtable are two names for the same pointer, much like the union structures above.
[5] _dispatch_queue_init initializes the member variables of the structure. Note that the width passed in is DISPATCH_QUEUE_WIDTH_MAX for a concurrent queue and 1 for a serial queue; this dq_width is what later distinguishes the two kinds of queue.
[6] Set the label and priority of the queue.
[7] Set the target queue of the newly created dq to the tq obtained above, and increase tq's reference count.
[8] Finally, the queue pointer is returned, and this is the entire process of creating the queue.
Here’s a summary of the entire creation process, as shown in the figure below.
Creating a queue, then, just allocates the queue object in memory, initializes its member variables, and stores the queue's attributes. As you can see, none of this involves the underlying threads yet; only when we dispatch a task with dispatch_async or dispatch_sync are threads created, as needed, to execute it. The following sections explore how asynchronous and synchronous dispatch reach the underlying threads through their call chains.
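The target-queue relationship set up in step [2] is also visible through public API; a small usage sketch with standard GCD calls (queue names are arbitrary):

// A newly created queue targets one of the root queues by default;
// dispatch_set_target_queue retargets it, e.g. to change its QoS.
dispatch_queue_t q = dispatch_queue_create("test.retarget", DISPATCH_QUEUE_SERIAL);
dispatch_set_target_queue(q,
        dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0));
dispatch_async(q, ^{
    NSLog(@"runs with the target queue's QoS");
});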
3. Asynchronous dispatch (dispatch_async)
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
The asynchronous dispatch interface is shown above. It takes two parameters: dq is the queue to add the task to, and work is the task we want to execute, a block.
dq can be one of two kinds — a serial queue or a concurrent queue — and a different code path executes depending on which. Below is the kind of snippet we often write to submit a task to each queue.
// Asynchronous dispatch to a serial queue
dispatch_queue_t sq = dispatch_queue_create("test", DISPATCH_QUEUE_SERIAL);
dispatch_async(sq, ^{
NSLog(@"hello");
});
// Asynchronous dispatch to a concurrent queue
dispatch_queue_t cq = dispatch_queue_create("test", DISPATCH_QUEUE_CONCURRENT);
dispatch_async(cq, ^{
NSLog(@"hello");
});
The following first describes the parts of the dispatch common to both kinds of queue, and then explains where they diverge. The implementation of dispatch_async is as follows.
dispatch_async(dispatch_queue_t dq, dispatch_block_t work)
{
// [9]
dispatch_continuation_t dc = _dispatch_continuation_alloc();
// [10]
uintptr_t dc_flags = DC_FLAG_CONSUME;
dispatch_qos_t qos;
// [11]
qos = _dispatch_continuation_init(dc, dq, work, 0, dc_flags);
// [12]
_dispatch_continuation_async(dq, dc, qos, dc->dc_flags);
}
[9] _dispatch_continuation_alloc allocates a continuation structure. It implements a per-thread caching mechanism: a list of continuations is maintained and reused, avoiding frequent memory allocation and improving performance.
[10] dispatch_continuation_t describes a GCD task. Its dc_flags describes the task type — barrier, block, group, and so on; see the definitions of DC_FLAG_SYNC_WAITER and the other macros. DC_FLAG_CONSUME, used here, indicates that the continuation's resources are released in the course of execution; this flag is normally set for asynchronous dispatch.
[11] _dispatch_continuation_init initializes the continuation. Here is the implementation.
DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init_f(dispatch_continuation_t dc,
dispatch_queue_class_t dqu, void *ctxt, dispatch_function_t f,
dispatch_block_flags_t flags, uintptr_t dc_flags)
{
pthread_priority_t pp = 0;
dc->dc_flags = dc_flags | DC_FLAG_ALLOCATED;
dc->dc_func = f;
dc->dc_ctxt = ctxt;
pp = ...
_dispatch_continuation_voucher_set(dc, flags);
return _dispatch_continuation_priority_set(dc, dqu, pp, flags);
}
DISPATCH_ALWAYS_INLINE
static inline dispatch_qos_t
_dispatch_continuation_init(dispatch_continuation_t dc,
dispatch_queue_class_t dqu, dispatch_block_t work,
dispatch_block_flags_t flags, uintptr_t dc_flags)
{
void *ctxt = _dispatch_Block_copy(work);
dc_flags |= DC_FLAG_BLOCK | DC_FLAG_ALLOCATED;
// ...
dispatch_function_t func = _dispatch_Block_invoke(work);
if (dc_flags & DC_FLAG_CONSUME) {
func = _dispatch_call_block_and_release;
}
return _dispatch_continuation_init_f(dc, dqu, ctxt, func, flags, dc_flags);
}
You can see that mainly the flags, ctxt, func, and priority fields of the continuation are set; the block we want to execute is stored through the func and ctxt fields.
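For reference, the func installed when DC_FLAG_CONSUME is set, _dispatch_call_block_and_release, is essentially the following (paraphrased from libdispatch's init.c): it invokes the copied block, then balances the _dispatch_Block_copy made during initialization.

void
_dispatch_call_block_and_release(void *block)
{
    void (^b)(void) = block;
    b();              // run the user's block
    Block_release(b); // balance the copy made in _dispatch_continuation_init
}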
[12] Finally, the continuation is handed to the _dispatch_continuation_async function. Below is its implementation, with some debugging and performance-probe code removed.
static inline void
_dispatch_continuation_async(dispatch_queue_class_t dqu,
dispatch_continuation_t dc, dispatch_qos_t qos, uintptr_t dc_flags)
{
// ... some debugging and performance-probe handling
return dx_push(dqu._dq, dc, qos);
}
dx_push is a macro that eventually expands to
dx_push(dqu._dq, dc, qos)
-> dx_vtable(dqu._dq)->dq_push(dqu._dq, dc, qos)
-> (&(dqu._dq)->do_vtable->_os_obj_vtable)->dq_push(dqu._dq, dc, qos)
Here, every type of queue involved in GCD (serial queues, concurrent queues, the manager queue, and so on) is abstracted behind this call. Each queue type is assigned its own function pointers, such as dq_push (how to enqueue for execution) and dq_wakeup (how to wake the queue), so code operating on a queue need not care about the concrete implementation details — invocation is decoupled from implementation, the same decoupling that method overriding achieves in object-oriented programming.
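A minimal hand-rolled sketch of this kind of vtable dispatch in C (names simplified; not libdispatch's actual macros):

struct queue_s; // forward declaration

struct vtable_s {
    // one function pointer per operation, filled in per queue type
    void (*dq_push)(struct queue_s *dq, void *item, int qos);
    void (*dq_wakeup)(struct queue_s *dq, int qos, int flags);
};

struct queue_s {
    const struct vtable_s *do_vtable; // the "isa" of this C object
};

// Callers dispatch through the vtable and never name the concrete type.
#define dx_push(dq, item, qos) \
        ((dq)->do_vtable->dq_push((dq), (item), (qos)))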
When a queue is created, its do_vtable is set up accordingly. The do_vtable pointers of serial and concurrent queues point to the following structures.
DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_serial, lane,
.do_type = DISPATCH_QUEUE_SERIAL_TYPE,
.do_dispose = _dispatch_lane_dispose,
.do_debug = _dispatch_queue_debug,
.do_invoke = _dispatch_lane_invoke,
.dq_activate = _dispatch_lane_activate,
.dq_wakeup = _dispatch_lane_wakeup,
.dq_push = _dispatch_lane_push,
);
DISPATCH_VTABLE_SUBCLASS_INSTANCE(queue_concurrent, lane,
.do_type = DISPATCH_QUEUE_CONCURRENT_TYPE,
.do_dispose = _dispatch_lane_dispose,
.do_debug = _dispatch_queue_debug,
.do_invoke = _dispatch_lane_invoke,
.dq_activate = _dispatch_lane_activate,
.dq_wakeup = _dispatch_lane_wakeup,
.dq_push = _dispatch_lane_concurrent_push,
);
At this point, the program executes different code depending on the queue passed to dispatch_async. The following sections describe how dispatch_async proceeds when a serial queue and when a concurrent queue is passed in.
Asynchronous dispatch to a serial queue
The _dispatch_lane_push method inserts the continuation into the queue's task list, pointed to by dq_items_tail and dq_items_head. Finally, _dispatch_lane_push calls dq_wakeup to wake the queue; we saw above that for a serial queue dq_wakeup points to _dispatch_lane_wakeup. Here's the simplified code.
void
_dispatch_queue_wakeup(dispatch_queue_class_t dqu, dispatch_qos_t qos,
dispatch_wakeup_flags_t flags, dispatch_queue_wakeup_target_t target)
{
dispatch_queue_t dq = dqu._dq;
if (target) {
uint64_t old_state, new_state, enqueue;
enqueue = ...;
qos = _dispatch_queue_wakeup_qos(dq, qos);
old_state = ...;
new_state = ...;
if (likely((old_state ^ new_state) & enqueue)) {
dispatch_queue_t tq = ...
return _dispatch_queue_push_queue(tq, dq, new_state);
}
}
}
The dq_state field is checked to decide whether the current queue should be enqueued onto its target queue. Since dq_state may be accessed by multiple threads, all access to the field is atomic. _dispatch_queue_wakeup typically ends up in the _dispatch_queue_push_queue method.
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_queue_push_queue(dispatch_queue_t tq, dispatch_queue_class_t dq,
uint64_t dq_state)
{
#if DISPATCH_USE_KEVENT_WORKLOOP
if (likely(_dq_state_is_base_wlh(dq_state))) {
return _dispatch_event_loop_poke((dispatch_wlh_t)dq._dq, dq_state,
DISPATCH_EVENT_LOOP_CONSUME_2);
}
#endif // DISPATCH_USE_KEVENT_WORKLOOP
return dx_push(tq, dq, _dq_state_max_qos(dq_state));
}
As you can see, there are two ways to schedule the queue for execution; the KEVENT_WORKLOOP path is the one most Apple platforms use today. In the second path, the newly created queue is pushed onto the system's global queue, and GCD manages the tasks of newly created queues by managing the system's global queues. The KEVENT_WORKLOOP path is discussed below.
Notice that in the KEVENT_WORKLOOP path the target-queue parameter is not needed. Execution then reaches _dispatch_event_loop_poke which, as the name suggests, pokes the event loop: the system hands time slices to the queues according to its policies and then executes the tasks on them. This layer accommodates several event-distribution mechanisms — queue management could also be implemented over the poll or select system calls — but the KEVENT_WORKLOOP mechanism common today ultimately executes _dispatch_kevent_workloop_poke.
static void
_dispatch_kevent_workloop_poke(dispatch_wlh_t wlh, uint64_t dq_state,
uint32_t flags)
{
uint32_t kev_flags = KEVENT_FLAG_IMMEDIATE | KEVENT_FLAG_ERROR_EVENTS;
dispatch_kevent_s ke;
int action;
action = _dispatch_event_loop_get_action_for_state(dq_state);
override:
// [!]
_dispatch_kq_fill_workloop_event(&ke, action, wlh, dq_state);
// [!]
if (_dispatch_kq_poll(wlh, &ke, 1, &ke, 1, NULL, NULL, kev_flags)) {
// ... error handler
}
if (!(flags & DISPATCH_EVENT_LOOP_OVERRIDE)) {
// Consume the reference that kept the workloop valid
// for the duration of the syscall.
return _dispatch_release_tailcall((dispatch_queue_t)wlh);
}
if (flags & DISPATCH_EVENT_LOOP_CONSUME_2) {
return _dispatch_release_2_tailcall((dispatch_queue_t)wlh);
}
}
Two calls matter here: _dispatch_kq_fill_workloop_event(&ke, action, wlh, dq_state) and _dispatch_kq_poll(wlh, &ke, 1, &ke, 1, NULL, NULL, kev_flags).
_dispatch_kq_fill_workloop_event fills in a dispatch_kevent_s structure according to the queue's attributes, wrapping up the request to be made of the kernel; NOTE_WL_THREAD_REQUEST indicates that we want to request a thread from the kernel.
The implementation of _dispatch_kq_poll is as follows, with some exception handling and invalid branch code removed.
DISPATCH_NOINLINE
static int
_dispatch_kq_poll(dispatch_wlh_t wlh, dispatch_kevent_t ke, int n,
dispatch_kevent_t ke_out, int n_out, void *buf, size_t *avail,
uint32_t flags)
{
bool kq_initialized = false;
int r = 0;
// [12]
dispatch_once_f(&_dispatch_kq_poll_pred, &kq_initialized, _dispatch_kq_init);
retry:
{
flags |= KEVENT_FLAG_WORKLOOP;
if (!(flags & KEVENT_FLAG_ERROR_EVENTS)) {
flags |= KEVENT_FLAG_DYNAMIC_KQ_MUST_EXIST;
}
r = kevent_id((uintptr_t)wlh, ke, n, ke_out, n_out, buf, avail, flags);
#endif // DISPATCH_USE_KEVENT_WORKLOOP
}
if (unlikely(r == -1)) {
// error handler
}
return r;
}
Here the kevent_id system call is issued with the dispatch_kevent_s structure built in the previous function, and we wait for the requested event to be scheduled. The main job of the kqueue and kevent system calls here is to ask the system to allocate a thread and to notify user space when the request completes. kqueue is an extensible kernel event notification interface, similar to epoll on Linux.
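For readers who haven't used kqueue directly, here is a minimal standalone example using the public kqueue/kevent API (kevent_id, used by GCD above, is a private variant of the same mechanism):

#include <sys/event.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int kq = kqueue();                     // create a kernel event queue
    struct kevent change, event;
    // Register a one-shot timer event that fires after 1000 ms.
    EV_SET(&change, 1, EVFILT_TIMER, EV_ADD | EV_ONESHOT, 0, 1000, NULL);
    kevent(kq, &change, 1, NULL, 0, NULL); // register, don't wait
    int n = kevent(kq, NULL, 0, &event, 1, NULL); // block until it fires
    if (n > 0) {
        printf("timer fired, ident %lu\n", (unsigned long)event.ident);
    }
    close(kq);
    return 0;
}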
[12] The _dispatch_kq_init method is called only once; it handles the initialization of the kqueue, the root queues, and the manager queue. From the code you can see that it calls _dispatch_root_queues_init_once, which in turn calls _pthread_workqueue_init_with_kevent, implemented in the pthread library. That library handles the interaction with the kernel and ultimately guarantees that when the kqueue/kevent request we issued completes, control comes back to the _dispatch_workloop_worker_thread entry point. I won't go into the details of this path here; interested readers can look at the pthread and kernel sources.
_dispatch_workloop_worker_thread invokes the queue's invoke function through its do_vtable — for a serial queue, the _dispatch_lane_invoke function — which finally takes the tasks to be executed off the queue's task list and updates the queue's state as tasks complete. The call stack is as follows.
_dispatch_lane_invoke calls the _dispatch_lane_serial_drain method, whose main job is to traverse the task list of the serial queue in first-in, first-out order.
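Conceptually the drain is just a FIFO walk of the task list. A simplified sketch — no atomics, width, or state handling, unlike the real _dispatch_lane_serial_drain:

// Illustrative only: pop continuations head-first and invoke them.
struct continuation_s {
    struct continuation_s *do_next;
    void (*dc_func)(void *);
    void *dc_ctxt;
};

static void serial_drain(struct continuation_s **head) {
    while (*head) {
        struct continuation_s *dc = *head;
        *head = dc->do_next;      // unlink first
        dc->dc_func(dc->dc_ctxt); // then run the task
    }
}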
Asynchronous dispatch to a concurrent queue
Here is the implementation of _dispatch_lane_concurrent_push. In the common case, a concurrent queue ends up calling the _dispatch_continuation_redirect_push method to add the task.
void
_dispatch_lane_concurrent_push(dispatch_lane_t dq, dispatch_object_t dou,
dispatch_qos_t qos)
{
// <rdar://problem/24738102&24743140> reserving non barrier width
// doesn't fail if only the ENQUEUED bit is set (unlike its barrier
// width equivalent), so we have to check that this thread hasn't
// enqueued anything ahead of this call or we can break ordering
if (dq->dq_items_tail == NULL &&
!_dispatch_object_is_waiter(dou) &&
!_dispatch_object_is_barrier(dou) &&
_dispatch_queue_try_acquire_async(dq)) {
return _dispatch_continuation_redirect_push(dq, dou, qos);
}
_dispatch_lane_push(dq, dou, qos);
}
The implementation of _dispatch_continuation_redirect_push is as follows.
static void
_dispatch_continuation_redirect_push(dispatch_lane_t dl,
dispatch_object_t dou, dispatch_qos_t qos)
{
dou._dc = ...
dispatch_queue_t dq = dl->do_targetq;
if (!qos) qos = _dispatch_priority_qos(dq->dq_priority);
dx_push(dq, dou, qos);
}
The continuation is re-wrapped, replacing the function its invoke points to, and the re-wrapped continuation is pushed onto the queue's do_targetq — one of the system's global root queues. Because tasks are redirected to the root queue in this way, the concurrent queue's own task list stays empty. The dx_push on the root queue resolves to the _dispatch_root_queue_push method.
DISPATCH_NOINLINE
void
_dispatch_root_queue_push(dispatch_queue_global_t rq, dispatch_object_t dou,
dispatch_qos_t qos)
{
#if HAVE_PTHREAD_WORKQUEUE_QOS
if (_dispatch_root_queue_push_needs_override(rq, qos)) {
return _dispatch_root_queue_push_override(rq, dou, qos);
}
#else
(void)qos;
#endif
_dispatch_root_queue_push_inline(rq, dou, dou, 1);
}
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_root_queue_push_inline(dispatch_queue_global_t dq,
dispatch_object_t _head, dispatch_object_t _tail, int n)
{
struct dispatch_object_s *hd = _head._do, *tl = _tail._do;
if (unlikely(os_mpsc_push_list(os_mpsc(dq, dq_items), hd, tl, do_next))) {
return _dispatch_root_queue_poke(dq, n, 0);
}
}
This method appends the re-wrapped continuation to the root queue and then calls _dispatch_root_queue_poke to start task scheduling on the root queue, which in turn calls the _dispatch_root_queue_poke_slow method to schedule the concurrent queue's tasks.
DISPATCH_NOINLINE
static void
_dispatch_root_queue_poke_slow(dispatch_queue_global_t dq, int n, int floor)
{
int remaining = n;
int r = ENOSYS;
_dispatch_root_queues_init();
#if !DISPATCH_USE_INTERNAL_WORKQUEUE
#if DISPATCH_USE_PTHREAD_ROOT_QUEUES
if (dx_type(dq) == DISPATCH_QUEUE_GLOBAL_ROOT_TYPE)
#endif
{
r = _pthread_workqueue_addthreads(remaining,
_dispatch_priority_to_pp_prefer_fallback(dq->dq_priority));
(void)dispatch_assume_zero(r);
return;
}
#endif // !DISPATCH_USE_INTERNAL_WORKQUEUE
}
This function calls _pthread_workqueue_addthreads, which lives in the pthread library and requests threads for the workqueue via a system call. When the system call completes, the kernel calls back into the _dispatch_worker_thread2 method that was registered during root-queue initialization. The stack is shown below.
_dispatch_root_queue_drain then pulls out the continuation we added earlier and executes it, finally running the code in the block we passed in.
Summary
The figure below summarizes the asynchronous dispatch flow described above for the two kinds of queues.
As you can see, the implementation of asynchronous dispatch involves both the pthread library and kernel-provided system calls, as shown in the figure below.
Both serial and concurrent queues end up making system calls to request threads.
A serial queue initiates the request through the kqueue (kevent_id) system call, whereas a concurrent queue simply requests that threads be added. The reason for the difference: if we add multiple tasks to a serial queue, the tasks must execute on a single thread, one at a time and in order. The kevent_id system call passes a dispatch_kevent_s structure that identifies the thread corresponding to the queue, so each task added is guaranteed to land on that thread.
For a concurrent queue, multiple tasks in the queue may execute at the same time, so adding a task can directly add a new thread. Of course, not every added task spawns a new thread: once an allocated thread finishes one of the concurrent queue's tasks, it can be reused to execute new ones, much like a thread pool. As the code shows, no per-queue concurrency width is enforced anywhere, which is why the API provides no way to specify the concurrency of a concurrent queue directly.
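Because no per-queue width can be set, a common workaround when you do need to bound in-flight work on a concurrent queue is a counting semaphore. A sketch using standard GCD calls (the queue name and slot count are arbitrary):

dispatch_queue_t cq = dispatch_queue_create("test.limited", DISPATCH_QUEUE_CONCURRENT);
dispatch_semaphore_t slots = dispatch_semaphore_create(4); // at most 4 in flight
for (int i = 0; i < 100; i++) {
    // Blocks the submitting thread whenever all 4 slots are taken.
    dispatch_semaphore_wait(slots, DISPATCH_TIME_FOREVER);
    dispatch_async(cq, ^{
        // ... do work ...
        dispatch_semaphore_signal(slots); // free the slot
    });
}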
4. Synchronous dispatch (dispatch_sync)
Let's look at the implementation of synchronous dispatch, which again can target either a serial or a concurrent queue.
dispatch_queue_t sq = dispatch_queue_create("test", DISPATCH_QUEUE_SERIAL);
dispatch_sync(sq, ^{
NSLog(@"hello");
});
dispatch_queue_t cq = dispatch_queue_create("test", DISPATCH_QUEUE_CONCURRENT);
dispatch_sync(cq, ^{
NSLog(@"hello");
});
The implementation of dispatch_sync is as follows.
DISPATCH_NOINLINE
void
dispatch_sync(dispatch_queue_t dq, dispatch_block_t work)
{
uintptr_t dc_flags = DC_FLAG_BLOCK;
if (unlikely(_dispatch_block_has_private_data(work))) {
return _dispatch_sync_block_with_privdata(dq, work, dc_flags);
}
_dispatch_sync_f(dq, work, _dispatch_Block_invoke(work), dc_flags);
}
DISPATCH_NOINLINE
static void
_dispatch_sync_f(dispatch_queue_t dq, void *ctxt, dispatch_function_t func,
uintptr_t dc_flags)
{
_dispatch_sync_f_inline(dq, ctxt, func, dc_flags);
}
The implementation of _dispatch_sync_f_inline is as follows.
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_sync_f_inline(dispatch_queue_t dq, void *ctxt,
dispatch_function_t func, uintptr_t dc_flags)
{
if (likely(dq->dq_width == 1)) {
return _dispatch_barrier_sync_f(dq, ctxt, func, dc_flags);
}
if (unlikely(dx_metatype(dq) != _DISPATCH_LANE_TYPE)) {
DISPATCH_CLIENT_CRASH(0, "Queue type doesn't support dispatch_sync");
}
dispatch_lane_t dl = upcast(dq)._dl;
// Global concurrent queues and queues bound to non-dispatch threads
// always fall into the slow case, see DISPATCH_ROOT_QUEUE_STATE_INIT_VALUE
if (unlikely(!_dispatch_queue_try_reserve_sync_width(dl))) {
return _dispatch_sync_f_slow(dl, ctxt, func, 0, dl, dc_flags);
}
if (unlikely(dq->do_targetq->do_targetq)) {
return _dispatch_sync_recurse(dl, ctxt, func, dc_flags);
}
_dispatch_introspection_sync_begin(dl);
_dispatch_sync_invoke_and_complete(dl, ctxt, func DISPATCH_TRACE_ARG(
_dispatch_trace_item_sync_push_pop(dq, ctxt, func, dc_flags)));
}
dq->dq_width == 1 comes from the call to _dispatch_queue_init during serial queue creation (step [5] above), so synchronous dispatch to a serial queue ultimately calls _dispatch_barrier_sync_f. Conversely, if we dispatch to a concurrent queue, the code calls the _dispatch_sync_invoke_and_complete function directly. The following sections cover serial queues and concurrent queues in turn.
Synchronous dispatch to a serial queue
On a serial queue, the following is called next.
DISPATCH_ALWAYS_INLINE
static inline void
_dispatch_barrier_sync_f_inline(dispatch_queue_t dq, void *ctxt,
dispatch_function_t func, uintptr_t dc_flags)
{
dispatch_tid tid = _dispatch_tid_self();
dispatch_lane_t dl = upcast(dq)._dl;
if (unlikely(!_dispatch_queue_try_acquire_barrier_sync(dl, tid))) {
return _dispatch_sync_f_slow(dl, ctxt, func, DC_FLAG_BARRIER, dl,
DC_FLAG_BARRIER | dc_flags);
}
if (unlikely(dl->do_targetq->do_targetq)) {
return _dispatch_sync_recurse(dl, ctxt, func,
DC_FLAG_BARRIER | dc_flags);
}
_dispatch_lane_barrier_sync_invoke_and_complete(dl, ctxt, func
DISPATCH_TRACE_ARG(_dispatch_trace_item_sync_push_pop(
dq, ctxt, func, dc_flags | DC_FLAG_BARRIER)));
}
Our queue has no multi-level target-queue dependencies, so the recursive path is not taken, and _dispatch_introspection_sync_begin merely supports debugging and performance statistics, so we can ignore it. Incidentally, this is where the well-known deadlock comes from: if, while running on a serial queue, we synchronously submit another task to the same serial queue, the second dispatch_sync cannot acquire the barrier and goes to _dispatch_sync_f_slow; there, the __DISPATCH_WAIT_FOR_QUEUE__ function detects that the queue to be waited on is the same queue the current thread already holds, and triggers a crash. Back to the main path: execution reaches _dispatch_lane_barrier_sync_invoke_and_complete.
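The classic deadlock described above, in code — do not do this:

dispatch_queue_t sq = dispatch_queue_create("test.serial", DISPATCH_QUEUE_SERIAL);
dispatch_async(sq, ^{
    // We are already running on sq. A nested dispatch_sync to the same
    // serial queue can never acquire the barrier, so it falls into
    // _dispatch_sync_f_slow and __DISPATCH_WAIT_FOR_QUEUE__ crashes.
    dispatch_sync(sq, ^{
        NSLog(@"never reached");
    });
});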
DISPATCH_NOINLINE
static void
_dispatch_lane_barrier_sync_invoke_and_complete(dispatch_lane_t dq,
void *ctxt, dispatch_function_t func DISPATCH_TRACE_ARG(void *dc))
{
_dispatch_sync_function_invoke_inline(dq, ctxt, func);
_dispatch_trace_item_complete(dc);
if (unlikely(dq->dq_items_tail || dq->dq_width > 1)) {
return _dispatch_lane_barrier_complete(dq, 0, 0);
}
const uint64_t fail_unlock_mask = ...;
uint64_t old_state, new_state;
// similar to _dispatch_queue_drain_try_unlock
os_atomic_rmw_loop2o(dq, dq_state, old_state, new_state, release, {
new_state = old_state - DISPATCH_QUEUE_SERIAL_DRAIN_OWNED;
new_state &= ~DISPATCH_QUEUE_DRAIN_UNLOCK_MASK;
new_state &= ~DISPATCH_QUEUE_MAX_QOS_MASK;
if (unlikely(old_state & fail_unlock_mask)) {
os_atomic_rmw_loop_give_up({
return _dispatch_lane_barrier_complete(dq, 0, 0);
});
}
});
if (_dq_state_is_base_wlh(old_state)) {
_dispatch_event_loop_assert_not_owned((dispatch_wlh_t)dq);
}
}
The _dispatch_sync_function_invoke_inline method saves the current thread's information, executes the submitted block task on the current thread, and then restores the thread information; finally, it cleans up once the task has completed.
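The os_atomic_rmw_loop2o seen above is GCD's compare-and-swap retry pattern for dq_state. A sketch of the same idea using C11 atomics (field and bit names simplified):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

// Read the state, compute a new value, and retry if another thread raced us.
static bool try_set_bit(_Atomic uint64_t *dq_state, uint64_t bit) {
    uint64_t old_state = atomic_load_explicit(dq_state, memory_order_relaxed);
    uint64_t new_state;
    do {
        if (old_state & bit) {
            return false;            // "give up": someone else set it first
        }
        new_state = old_state | bit; // compute the desired state
    } while (!atomic_compare_exchange_weak_explicit(dq_state, &old_state,
            new_state, memory_order_release, memory_order_relaxed));
    return true;                     // we performed the transition
}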
Synchronous dispatch to a concurrent queue
In the _dispatch_sync_f_inline function, a concurrent queue proceeds to _dispatch_sync_invoke_and_complete.
DISPATCH_NOINLINE
static void
_dispatch_sync_invoke_and_complete(dispatch_lane_t dq, void *ctxt,
dispatch_function_t func DISPATCH_TRACE_ARG(void *dc))
{
_dispatch_sync_function_invoke_inline(dq, ctxt, func);
_dispatch_trace_item_complete(dc);
_dispatch_lane_non_barrier_complete(dq, 0);
}
_dispatch_sync_function_invoke_inline first saves the current thread's context, then executes the submitted task, and then restores the previous context. _dispatch_lane_non_barrier_complete handles the cleanup after the task completes — whether to release the queue, and so on.
Summary
As you can see, synchronous dispatch is much simpler to implement than asynchronous dispatch: the submitted task is executed directly on the current thread. The serial case is slightly more involved because multiple tasks cannot run at the same time, so the code must determine whether a task is already executing on the queue. A concurrent queue, by contrast, does not care whether other tasks are currently executing and can run the task on the current thread directly — though after the task completes, the queue's state still needs to be updated.
5. To summarize
This article has briefly introduced how serial and concurrent queues are created and how submitted synchronous and asynchronous tasks are executed. There are many implementation details that cannot all be covered here and that remain to be explored. The full GCD implementation is not just the code in libdispatch; it also depends closely on the pthread library and the XNU kernel.
Back to the original question: concurrent versus serial and synchronous versus asynchronous are two independent pairs of concepts. Concurrent and serial describe how tasks may be executed when we submit multiple tasks to the same queue: concurrent means submitted tasks may execute on multiple threads simultaneously, while serial means the queue's tasks execute one by one, on one particular thread at a time.
Synchronous and asynchronous describe the relationship between the currently executing task and the task being added to the queue. Synchronous dispatch suspends the current task until the task added to the queue has finished executing; as the implementation of dispatch_sync shows, the submitted task normally runs on the current thread, after which control returns to the caller and the subsequent code continues. Asynchronous dispatch, by contrast, merely submits the task to the queue and carries on with the current work; when the queued task runs is decided by the underlying thread scheduling, and in general it will not run on the current thread. As we saw in the implementation, an asynchronous dispatch to a serial queue returns to the dispatch_async caller right after the kevent_id system call, and the submitted task is executed on another thread at a time that depends on the kernel's kqueue implementation.
Hi, I am Changtian from Kuaishou E-commerce.
The Kuaishou e-commerce wireless technology team is hiring 🎉🎉🎉! We are the company's core business line, home to all kinds of experts and full of opportunities and challenges. As the business grows rapidly, the team is expanding quickly as well. Welcome to join us and build world-class e-commerce products together.
Hot positions: Android/iOS senior developer, Android/iOS expert, Java architect, product manager (e-commerce background), test development... Plenty of headcount is waiting for you ~
For an internal referral, send your resume to [email protected] and mention my name ~ 😘