Grand Central Dispatch (GCD) is a relatively new solution to multi-core programming developed by Apple.

Dispatch queues, managed by the system, execute code concurrently on multi-core hardware: we submit work to a dispatch queue and the system schedules it. GCD is used to help applications take advantage of multi-core processors and other symmetric multiprocessing systems. The Dispatch queue can be understood as an encapsulation of the underlying multi-core machinery: we only need to care about operating on the Dispatch queue, and do not need to care about which core a task is assigned to, or even which thread the task is executed in (although understanding the main thread and sub-threads is of course still necessary for further study).

Grand Central Dispatch (GCD)

The GCD overview

Dispatch, also known as Grand Central Dispatch (GCD), contains language features, runtime libraries, and system enhancements. These features provide systematic and comprehensive improvements to support concurrent code execution on multi-core hardware in macOS, iOS, watchOS, and tvOS.

Extensions have been made to the BSD subsystem, Core Foundation, and the Cocoa APIs to use these enhancements to help both the system and applications run faster, more efficiently, and more responsively. Consider how difficult it is for a single application to effectively use multiple cores, let alone do so on different computers with different numbers of computing cores, or in environments where multiple applications compete for those cores. Running at the system level, GCD can better meet the needs of all running applications, matching them to available system resources in a balanced manner.

Dispatch Objects and ARC

GCD is open source. We will first finish learning how to use the GCD API, and then go directly into its source code to explore how it works; for now, a general conceptual understanding is enough.

What are Dispatch Objects? There are several types of Dispatch Objects, including dispatch_queue_t, dispatch_group_t, and dispatch_source_t. The basic Dispatch Object interface allows you to manage memory, pause and resume execution, define Object context, record task data, and more.

By default, when built with the Objective-C compiler, dispatch objects are declared as Objective-C types. This allows them to participate in ARC and enables leak checking by the static analyzer. It also allows you to add them to Cocoa collections (NSMutableArray, NSMutableDictionary, and so on).

When building an application using the Objective-C compiler, all dispatch objects are Objective-C objects. Therefore, when automatic reference counting (ARC) is enabled, dispatch objects are automatically retained and released, just like any other Objective-C object. If ARC is not enabled (under MRC), the dispatch_retain and dispatch_release functions (or Objective-C semantics) are required to retain and release dispatch objects. You cannot use the Core Foundation retain/release functions on them.

If you need to use retain and release semantics in an ARC-enabled application (to maintain compatibility with existing code, or to support lower deployment targets), you can disable Objective-C based dispatch objects by adding -DOS_OBJECT_USE_OBJC=0 to the compiler flags.
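
As a minimal sketch (assuming dispatch objects are not Objective-C objects, i.e. MRC or building with -DOS_OBJECT_USE_OBJC=0; the queue label is illustrative), manual reference counting looks like this:

// Manual reference counting of a dispatch object -- only needed when
// dispatch objects are NOT managed by ARC (MRC, or -DOS_OBJECT_USE_OBJC=0).
dispatch_queue_t queue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_async(queue, ^{
    // The pending block retains the queue until it completes.
});
dispatch_release(queue); // balance the +1 reference returned by dispatch_queue_create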

Types in GCD

To get a better understanding of GCD, let's start with the common types we use in the GCD API on a daily basis. Understanding the specific definitions of these types will help us understand how GCD is used and its internal implementation logic. In the queue.h header file, dispatch_queue_t is a macro definition: DISPATCH_DECL(dispatch_queue); note that there is no _t inside the parentheses, so where does the little _t tail come from? Let's take a look at the DISPATCH_DECL macro. The series of macros involved are defined in usr/include/os/object.h.

Let’s just look at GCD in Objective-C for the sake of clarity. (Swift/C++/C conversions will be covered in the next article.)

  • DISPATCH_DECL macro definition:
#define DISPATCH_DECL(name) OS_OBJECT_DECL_SUBCLASS(name, dispatch_object)

DISPATCH_DECL(dispatch_queue) ➡️ OS_OBJECT_DECL_SUBCLASS(dispatch_queue, dispatch_object)
  • OS_OBJECT_DECL_SUBCLASS macro definition:
#define OS_OBJECT_DECL_SUBCLASS(name, super) \
        OS_OBJECT_DECL_IMPL(name, <OS_OBJECT_CLASS(super)>)

OS_OBJECT_DECL_SUBCLASS(dispatch_queue, dispatch_object) ➡️ OS_OBJECT_DECL_IMPL(dispatch_queue, <OS_OBJECT_CLASS(dispatch_object)>)
  • OS_OBJECT_CLASS macro definition (the ## operator can be used in the replacement part of a function-like macro: it pastes two tokens together into a single token, providing a means of concatenating actual arguments during macro expansion):
#define OS_OBJECT_CLASS(name) OS_##name

<OS_OBJECT_CLASS(dispatch_object)> ➡️ <OS_dispatch_object>
OS_OBJECT_DECL_IMPL(dispatch_queue, <OS_OBJECT_CLASS(dispatch_object)>) ➡️ OS_OBJECT_DECL_IMPL(dispatch_queue, <OS_dispatch_object>)
  • OS_OBJECT_DECL_IMPL macro definition:
#define OS_OBJECT_DECL_IMPL(name, ...) \
        OS_OBJECT_DECL_PROTOCOL(name, __VA_ARGS__) \
        typedef NSObject<OS_OBJECT_CLASS(name)> \
                * OS_OBJC_INDEPENDENT_CLASS name##_t
        
OS_OBJECT_DECL_IMPL(dispatch_queue, <OS_dispatch_object>) ➡️ OS_OBJECT_DECL_PROTOCOL(dispatch_queue, <OS_dispatch_object>) \
    typedef NSObject<OS_dispatch_queue> \
            * OS_OBJC_INDEPENDENT_CLASS dispatch_queue_t
  • OS_OBJC_INDEPENDENT_CLASS macro definition:
#if __has_attribute(objc_independent_class)
#define OS_OBJC_INDEPENDENT_CLASS __attribute__((objc_independent_class))
#endif // __has_attribute(objc_independent_class)

#ifndef OS_OBJC_INDEPENDENT_CLASS
#define OS_OBJC_INDEPENDENT_CLASS
#endif

OS_OBJECT_DECL_PROTOCOL(dispatch_queue, <OS_dispatch_object>) \
typedef NSObject<OS_dispatch_queue> \
        * OS_OBJC_INDEPENDENT_CLASS dispatch_queue_t ➡️ OS_OBJECT_DECL_PROTOCOL(dispatch_queue, <OS_dispatch_object>) \
typedef NSObject<OS_dispatch_queue> \
        * dispatch_queue_t
  • OS_OBJECT_DECL_PROTOCOL macro definition:
#define OS_OBJECT_DECL_PROTOCOL(name, ...) \
        @protocol OS_OBJECT_CLASS(name) __VA_ARGS__ \
        @end

OS_OBJECT_DECL_PROTOCOL(dispatch_queue, <OS_dispatch_object>) \
typedef NSObject<OS_dispatch_queue> \
        * dispatch_queue_t ➡️

@protocol OS_dispatch_queue <OS_dispatch_object> \
@end \
typedef NSObject<OS_dispatch_queue> \
        * dispatch_queue_t

Putting this chain of macro definitions together, DISPATCH_DECL(dispatch_queue); expands to:

@protocol OS_dispatch_queue <OS_dispatch_object>
@end

typedef NSObject<OS_dispatch_queue> * dispatch_queue_t;

The OS_dispatch_queue protocol inherits from the OS_dispatch_object protocol, and dispatch_queue_t is defined as an alias for a pointer to an NSObject instance that conforms to that protocol. For example, dispatch_queue_t globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0); returns a global concurrent queue object, which is an NSObject pointer conforming to the OS_dispatch_queue protocol (that is, dispatch_queue_t is an NSObject pointer).

Where does the OS_dispatch_object protocol come from? It comes from the OS_OBJECT_DECL_CLASS(dispatch_object); macro. The following is an interpretation of what it expands to.

/*
* By default, dispatch objects are declared as Objective-C types when building
* with an Objective-C compiler. This allows them to participate in ARC, in RR
* management by the Blocks runtime and in leaks checking by the static
* analyzer, and enables them to be added to Cocoa collections.
* See <os/object.h> for details.
*/

OS_OBJECT_DECL_CLASS(dispatch_object);

// Verify that dispatch_queue_t is added to the OC collection:
dispatch_queue_t globalQueue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
NSMutableArray *array = [NSMutableArray array];
[array addObject:globalQueue];
NSLog(@"array: %@", array);

// Call isKindOfClass to determine the type
NSLog(@"🍑 🍑 % d", [globalQueue isKindOfClass:[NSObject class]]);
// The console prints like this:

2020- 11- 16 18:27:24.218954+0800 Simple_iOS[69017:9954014] array:  ( ⬅️ dispatch_queue_tIs added to NSMutableArray"<OS_dispatch_queue_global: com.apple.root.default-qos>"
)
2020- 11- 16 18:27:24.219246+0800 Simple_iOS[69017:9954014] 🍑 🍑1⬅ ️dispatch_queue_tIs a pointer to type NSObjectCopy the code
  • OS_OBJECT_DECL_CLASS macro definition:
#define OS_OBJECT_DECL_CLASS(name) \
        OS_OBJECT_DECL(name)

OS_OBJECT_DECL_CLASS(dispatch_object) ➡️ OS_OBJECT_DECL(dispatch_object)
  • OS_OBJECT_DECL macro definition:
#define OS_OBJECT_DECL(name, ...) \
        OS_OBJECT_DECL_IMPL(name, <NSObject>)

OS_OBJECT_DECL(dispatch_object) ➡️ OS_OBJECT_DECL_IMPL(dispatch_object, <NSObject>)
  • OS_OBJECT_DECL_IMPL macro definition:
#define OS_OBJECT_DECL_IMPL(name, ...) \
        OS_OBJECT_DECL_PROTOCOL(name, __VA_ARGS__) \
        typedef NSObject<OS_OBJECT_CLASS(name)> \
                * OS_OBJC_INDEPENDENT_CLASS name##_t
                
OS_OBJECT_DECL_IMPL(dispatch_object, <NSObject>) ➡️ OS_OBJECT_DECL_PROTOCOL(dispatch_object, <NSObject>) \
typedef NSObject<OS_dispatch_object> \
        * OS_OBJC_INDEPENDENT_CLASS dispatch_object_t ➡️

@protocol OS_dispatch_object <NSObject> \
@end \
typedef NSObject<OS_dispatch_object> \
        * dispatch_object_t

OS_OBJECT_DECL_CLASS(dispatch_object); That is:

@protocol OS_dispatch_object <NSObject> 
@end

typedef NSObject<OS_dispatch_object> * dispatch_object_t;  

OS_dispatch_object is a protocol that inherits from the NSObject protocol, and dispatch_object_t is defined as an alias for a pointer to an NSObject instance that conforms to that protocol. (In Objective-C terms: dispatch_object_t is an NSObject pointer.)

The OS_OBJECT_DECL_CLASS(name) macro defines a protocol derived from the NSObject protocol, whose name is name prefixed with OS_, and defines an alias, name suffixed with _t, for a pointer to an NSObject instance that conforms to that protocol.

#define DISPATCH_DECL_SUBCLASS(name, base) OS_OBJECT_DECL_SUBCLASS(name, base) additionally lets you specify the protocol from which the new protocol inherits (rather than the defaults NSObject and OS_dispatch_object), provided the base protocol has already been defined.

The following is interpreted in an Objective-C context. (Swift/C++/C conversions will be covered in the next article.)

dispatch_queue_t

Dispatch is an abstract model for expressing concurrency through a simple but powerful API. At its core, Dispatch provides serial FIFO queues to which blocks can be submitted. Blocks submitted to these dispatch queues are invoked on a fully managed thread pool. There is no guarantee as to which thread a block will be invoked on (the system fetches an available thread from the thread pool). However, it is guaranteed that only one block submitted to a given FIFO dispatch queue is invoked at a time. When multiple queues have blocks to process, the system is free to allocate additional threads to invoke those blocks concurrently. When the queues become empty, these threads are released automatically.

DISPATCH_DECL(dispatch_queue);

The macro expands to:

@protocol OS_dispatch_queue <OS_dispatch_object>
@end

typedef NSObject<OS_dispatch_queue> * dispatch_queue_t;

Dispatch Queues call the work items submitted to them.

Dispatch queues come in many forms, the most common being the dispatch serial queue (dispatch_queue_serial_t). The system manages a thread pool that processes dispatch queues and invokes the work items submitted to them. Conceptually, a dispatch queue can have its own thread of execution, and the interaction between queues is highly asynchronous. Dispatch queues are reference counted via dispatch_retain() and dispatch_release(). Pending work items submitted to a queue also hold a reference to the queue until they have completed. Once all references to a queue are released, the system deallocates the queue (the queue is freed and destroyed).
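
A minimal usage sketch (the queue label is illustrative): create a serial queue and submit blocks to it; the blocks run one at a time in FIFO order, and under ARC the queue is released automatically when the last reference goes away.

dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);
dispatch_async(serialQueue, ^{ NSLog(@"task 1"); }); // invoked first
dispatch_async(serialQueue, ^{ NSLog(@"task 2"); }); // invoked only after task 1 has finished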

dispatch_queue_global_t

DISPATCH_DECL_SUBCLASS(dispatch_queue_global, dispatch_queue);

The macro expands to:

@protocol OS_dispatch_queue_global <OS_dispatch_queue>
@end

typedef NSObject<OS_dispatch_queue_global> * dispatch_queue_global_t;

OS_dispatch_queue_global is a protocol inherited from the OS_dispatch_queue protocol, and dispatch_queue_global_t is defined as an alias for a pointer to an NSObject instance that conforms to this protocol.

Dispatch Global Concurrent Queues are abstractions around a system thread pool that invoke work items submitted to a dispatch queue.

Dispatch Global Concurrent Queues provide priority buckets (to be discussed later when we read the source code) on top of a system-managed thread pool. The number of threads allocated to this pool is determined by the system based on demand and system load. In particular, the system tries to maintain a good level of concurrency for this resource, and new threads are created when too many existing worker threads block in system calls. (One important difference between NSThread and GCD is that threads in GCD are automatically created and managed by the system, while NSThreads are created and started manually.)

Global concurrent queues are a shared resource, so it is the responsibility of every user of this resource not to submit an unbounded amount of work to the pool, especially work that may block, as this can cause the system to spawn a large number of threads (a.k.a. thread explosion).

Work items submitted to the global concurrent queues are not guaranteed to be invoked in the order in which they were submitted, and work items submitted to these queues may be invoked concurrently (they are concurrent queues, after all).

The dispatch global concurrent queues are well-known global objects returned by the dispatch_get_global_queue function and cannot be modified. Functions such as dispatch_suspend, dispatch_resume, and dispatch_set_context have no effect when called on this type of queue.

dispatch_queue_serial_t

DISPATCH_DECL_SUBCLASS(dispatch_queue_serial, dispatch_queue);

The macro expands to:

@protocol OS_dispatch_queue_serial <OS_dispatch_queue>
@end

typedef NSObject<OS_dispatch_queue_serial> * dispatch_queue_serial_t;

OS_dispatch_queue_serial is a protocol inherited from the OS_dispatch_queue protocol, and dispatch_queue_serial_t is defined as an alias for a pointer to an NSObject instance that conforms to this protocol.

Dispatch Serial queues call the work items submitted to them sequentially in FIFO order.

Dispatch Serial queues are lightweight objects to which work items can be submitted to be invoked in FIFO order. Serial queues can only invoke one work item at a time, but independent serial queues can invoke their work items independently and concurrently relative to each other.

Serial queues can target each other (via dispatch_set_target_queue). The serial queue at the bottom of such a queue hierarchy provides an exclusion context: at any given time, at most one work item submitted to any queue in the hierarchy will run. Such a hierarchy provides a natural structure for organizing application subsystems.

Create a serial queue by passing a dispatch queue attribute derived from DISPATCH_QUEUE_SERIAL to dispatch_queue_create_with_target. (The serial queue creation process will be interpreted later when we read the source code.)

dispatch_queue_main_t

DISPATCH_DECL_SUBCLASS(dispatch_queue_main, dispatch_queue_serial);

The macro expands to:

@protocol OS_dispatch_queue_main <OS_dispatch_queue_serial>
@end

typedef NSObject<OS_dispatch_queue_main> * dispatch_queue_main_t;

OS_dispatch_queue_main is a protocol derived from the OS_dispatch_queue_serial protocol, and dispatch_queue_main_t is defined as an alias for a pointer to an NSObject instance that conforms to that protocol. (From this we can see that the main queue is a special serial queue.)

dispatch_queue_main_t is the type of the default queue bound to the main thread.

The main queue is a serial queue (dispatch_queue_serial_t) that is bound to the main thread of the application. To have work items submitted to the main queue invoked, the application must call dispatch_main, NSApplicationMain, or use a CFRunLoop on the main thread.

The main queue is a well-known global object that is automatically created on behalf of the main thread during process initialization; it is returned by dispatch_get_main_queue and cannot be modified. Functions such as dispatch_suspend, dispatch_resume, and dispatch_set_context have no effect when called on this type of queue (there is only one main queue, while there are multiple global concurrent queues).

dispatch_queue_concurrent_t

DISPATCH_DECL_SUBCLASS(dispatch_queue_concurrent, dispatch_queue);

The macro expands to:

@protocol OS_dispatch_queue_concurrent <OS_dispatch_queue>
@end

typedef NSObject<OS_dispatch_queue_concurrent> * dispatch_queue_concurrent_t;

OS_dispatch_queue_concurrent is a protocol inherited from the OS_dispatch_queue protocol, and dispatch_queue_concurrent_t is defined as an alias for a pointer to an NSObject instance that conforms to that protocol.

Dispatch concurrent queues invoke the work items submitted to them concurrently, and admit a notion of barrier work items. (A barrier work item is submitted to the queue by calling the dispatch_barrier_async function.)

Dispatch concurrent queues are lightweight objects to which regular and barrier work items can be submitted. Barrier work items are invoked exclusively of any other kind of work item (in FIFO order with respect to other barriers): a barrier work item does not run until every work item submitted before it has completed, and work items submitted after the barrier run only once the barrier work item itself has completed.

Regular work items can be invoked concurrently on the same concurrent queue, in any order. However, regular work items are not invoked until any previously submitted barrier work items have completed.

In other words, if serial queues are equivalent to mutex in a Dispatch world, concurrent queues are equivalent to reader-writer locks, where regular items are readers and barriers are writers.

Create a concurrent queue by passing a dispatch queue attribute derived from DISPATCH_QUEUE_CONCURRENT to dispatch_queue_create_with_target.

Note: dispatch concurrent queues currently do not implement priority-inversion avoidance in the case where lower-priority regular work items (readers) are being invoked and are preventing a higher-priority barrier work item (writer) from running.
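
A sketch of the reader-writer pattern described above, using a hypothetical cache guarded by a concurrent queue: reads may run concurrently, while dispatch_barrier_async gives the write exclusive access.

dispatch_queue_t rwQueue = dispatch_queue_create("com.example.rw", DISPATCH_QUEUE_CONCURRENT);
NSMutableDictionary *cache = [NSMutableDictionary dictionary];

// Reader: may run concurrently with other readers.
dispatch_async(rwQueue, ^{
    NSLog(@"read: %@", cache[@"key"]);
});

// Writer: waits for earlier work items, runs alone, then lets later work items run.
dispatch_barrier_async(rwQueue, ^{
    cache[@"key"] = @"newValue";
});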

dispatch_block_t

The work items we submit to queues in everyday GCD use are blocks of type dispatch_block_t: no parameters and a void return value.

typedef void (^dispatch_block_t)(void);

The type of blocks submitted to dispatch queues, with no arguments and no return value. When building without Objective-C ARC, block objects allocated to or copied to the heap must be released via a -[release] message or the Block_release() function.

Block literals are allocated on the stack, so the following construct is invalid:

dispatch_block_t block;
if (x) {
    block = ^{ printf("true\n"); };
} else {
    block = ^{ printf("false\n"); };
}
block(); // unsafe!!!

What happened behind the scenes:

if (x) {
    struct Block __tmp_1 = ...; // setup details
    block = &__tmp_1;
} else {
    struct Block __tmp_2 = ...; // setup details
    block = &__tmp_2;
}

As shown in the example, the address of a stack variable is escaping its scope. That is a classic C bug. Instead, the block must be copied to the heap using the Block_copy() function or by sending it a -[copy] message. (If you are familiar with the internal structure of blocks, this will feel very familiar.)
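
A sketch of the safe version of the code above (written for MRC, where the copy must be explicit; under ARC, assigning to a strong dispatch_block_t variable performs the copy automatically; Block_copy/Block_release come from <Block.h>):

dispatch_block_t block;
if (x) {
    block = Block_copy(^{ printf("true\n"); });  // copy the literal to the heap
} else {
    block = Block_copy(^{ printf("false\n"); });
}
block();              // safe: the block now lives on the heap
Block_release(block);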

dispatch_async

dispatch_async submits a block for asynchronous execution on a dispatch queue.

#ifdef __BLOCKS__
API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
dispatch_async(dispatch_queue_t queue, dispatch_block_t block);
#endif

The dispatch_async function is the basic mechanism for submitting blocks to a dispatch queue. A call to dispatch_async always returns immediately after the block has been submitted and never waits for the block to be invoked. The familiar asynchronous call does not block because dispatch_async returns immediately after the block has been submitted, whereas a dispatch_sync call does not return until the block has been executed. We may subconsciously assume that the type of queue the block is submitted to affects whether the function returns immediately, but dispatch_async returns immediately whether the block is submitted to a concurrent queue or a serial queue, and it never blocks the current thread.

The target queue (dispatch_queue_t queue) determines whether the block is invoked serially or concurrently with respect to other blocks submitted to that same queue. (If queue is a concurrent queue, multiple threads may be used to execute the blocks concurrently; if queue is a serial queue other than the main queue, a single thread executes the blocks serially; if queue is the main queue, no new thread is started and the blocks execute serially on the main thread.) Blocks submitted via dispatch_async to different, independent serial queues are processed in parallel with respect to each other (each serial queue executes its own blocks serially, but the queues run concurrently on different threads).

Queue: Target scheduling queue to which blocks are submitted. The system keeps the reference on the target queue until the block call completes. The result of passing NULL in this parameter is indeterminate.

Block: block submitted to the destination scheduling queue. This function executes the Block_copy() and Block_release() functions on behalf of the caller. The result of passing NULL in this parameter is indeterminate.
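
For example (a sketch; the work inside the block is illustrative), dispatch_async returns immediately and the block runs later on a thread from the pool:

dispatch_queue_t queue = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);
dispatch_async(queue, ^{
    NSLog(@"runs asynchronously on a pool thread");
});
NSLog(@"printed immediately, without waiting for the block");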

dispatch_function_t

dispatch_function_t is a function pointer type: the function returns void and takes a void * (nullable) argument.

typedef void (*dispatch_function_t)(void *_Nullable);

dispatch_async_f

dispatch_async_f submits a function for asynchronous execution on a dispatch queue.

API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL3 DISPATCH_NOTHROW
void
dispatch_async_f(dispatch_queue_t queue,
        void *_Nullable context, dispatch_function_t work);

For details, see dispatch_async.

Queue: The target scheduling queue to which the function is submitted. The system keeps the reference on the target queue until the function returns after execution. The result of passing NULL in this parameter is indeterminate.

Context: The context parameter defined by the application to be passed to the function as an argument when the work function executes.

Work: An application-defined function called on the target queue. The first argument passed to this function is the context argument supplied to dispatch_async_f. The result of passing NULL in this parameter is indeterminate.
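
A sketch of dispatch_async_f with a plain C function and a context pointer (the function name and context are illustrative; the caller is responsible for the lifetime of whatever context points to):

static void work_function(void *context) {
    const char *message = context;
    printf("%s\n", message);
}

// ... somewhere else ...
dispatch_queue_t queue = dispatch_get_global_queue(QOS_CLASS_UTILITY, 0);
dispatch_async_f(queue, (void *)"hello from dispatch_async_f", work_function);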

dispatch_sync

dispatch_sync submits a block for synchronous execution on a dispatch queue.

#ifdef __BLOCKS__
API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
dispatch_sync(dispatch_queue_t queue, DISPATCH_NOESCAPE dispatch_block_t block);
#endif

The dispatch_sync function is used to submit work items to a Dispatch queue, like the dispatch_async function, but dispatch_sync does not return until the work item is complete. (the dispatch_sync function does not return until the block submitted to the queue is complete, blocking the current thread).

Work items submitted to the queue using the dispatch_sync function do not comply with certain queue properties of the queue (such as automatic release frequency and QOS classes) when invoked.

Calling dispatch_sync targeting the current queue results in deadlock (if you call dispatch_sync from a serial queue, including the main queue, to submit a block to that same serial queue, it is guaranteed to deadlock). Using dispatch_sync together with mutexes can also cause multi-party deadlocks; prefer dispatch_async where possible.

Unlike dispatch_async, no retain is performed on the target queue. Because the call to this function is synchronous, dispatch_sync “borrows” the caller’s reference.

As an optimization, dispatch_sync invokes the work item on the thread that submitted it, unless the queue passed in is the main queue or a queue that targets it (see dispatch_queue_main_t, dispatch_set_target_queue).

Queue: Target scheduling queue to which blocks are submitted. The result of passing NULL in this parameter is indeterminate.

Block: Block to be called on the destination scheduling queue. The result of passing NULL in this parameter is indeterminate.
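
A sketch of dispatch_sync, plus the classic mistake that deadlocks (queue label illustrative):

dispatch_queue_t serialQueue = dispatch_queue_create("com.example.sync", DISPATCH_QUEUE_SERIAL);

__block NSInteger result = 0;
dispatch_sync(serialQueue, ^{
    result = 42; // dispatch_sync returns only after this block completes
});
NSLog(@"result = %ld", (long)result);

// Deadlock: synchronously submitting to the queue you are currently running on.
// dispatch_sync(serialQueue, ^{ NSLog(@"never reached"); }); // never do this from a block already on serialQueue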

dispatch_sync_f

dispatch_sync_f submits a function for synchronous execution on a dispatch queue.

API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL3 DISPATCH_NOTHROW
void
dispatch_sync_f(dispatch_queue_t queue,
        void *_Nullable context, dispatch_function_t work);

For details, see dispatch_sync.

Queue: The target scheduling queue to which the function is submitted. The result of passing NULL in this parameter is indeterminate.

Context: The context parameter defined by the application to be passed to the function as an argument when the work function executes.

Work: An application-defined function called on the target queue. The first argument passed to this function is the context argument supplied to dispatch_sync_f. The result of passing NULL in this parameter is indeterminate.

dispatch_async_and_wait

dispatch_async_and_wait submits a block for synchronous execution on a dispatch queue.

#ifdef __BLOCKS__
API_AVAILABLE(macos(10.14), ios(12.0), tvos(12.0), watchos(5.0))
DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
dispatch_async_and_wait(dispatch_queue_t queue,
        DISPATCH_NOESCAPE dispatch_block_t block);
#endif

dispatch_async_and_wait submits a work item to a dispatch queue, like dispatch_async, but it does not return until the work item has completed. Like functions in the dispatch_sync family, dispatch_async_and_wait is subject to deadlock (see dispatch_sync).

However, dispatch_async_and_wait differs from the dispatch_sync family in two fundamental respects: how it respects queue attributes and how it chooses the execution context that invokes the work item.

Work items submitted to a queue with dispatch_async_and_wait observe all attributes of that queue when invoked (including autorelease frequency and QoS class).

When the runtime has brought up a thread to invoke asynchronous work items already submitted to the specified queue, that servicing thread is also used to execute synchronous work submitted to the queue via dispatch_async_and_wait. However, if the runtime has no thread serving the specified queue (because it has no enqueued work items, or only synchronous work items), then dispatch_async_and_wait invokes the work item on the calling thread, similar to the dispatch_sync family.

As an exception, if the queue the work is submitted to does not target a global concurrent queue (for example, because it targets the main queue), the work item will never be invoked by the thread calling dispatch_async_and_wait.

In other words, dispatch_async_and_wait is similar to creating a work item with dispatch_block_create, submitting it to the queue, and then waiting for it, as shown in the following code example. However, dispatch_async_and_wait is significantly more efficient when it does not have to bring up a new thread to execute the work item (because it uses the stack of the submitting thread and requires no heap allocation).

dispatch_block_t b = dispatch_block_create(0, block);
dispatch_async(queue, b);
dispatch_block_wait(b, DISPATCH_TIME_FOREVER);
Block_release(b);

Queue: Target scheduling queue to which blocks are submitted. The result of passing NULL in this parameter is indeterminate.

Block: Block to be called on the destination scheduling queue. The result of passing NULL in this parameter is indeterminate.

dispatch_async_and_wait_f

dispatch_async_and_wait_f submits a function for synchronous execution on a dispatch queue.

API_AVAILABLE(macos(10.14), ios(12.0), tvos(12.0), watchos(5.0))
DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL3 DISPATCH_NOTHROW
void
dispatch_async_and_wait_f(dispatch_queue_t queue,
        void *_Nullable context, dispatch_function_t work);

For details, see dispatch_async_and_wait.

Queue: The target scheduling queue to which the function is submitted. The result of passing NULL in this parameter is indeterminate.

Context: The context parameter defined by the application to be passed to the function as an argument when the work function executes.

Work: An application-defined function called on the target queue. The first argument passed to this function is the context argument supplied to dispatch_async_and_wait_f. The result of passing NULL in this parameter is indeterminate.

DISPATCH_APPLY_AUTO

DISPATCH_APPLY_AUTO is simply the constant 0 force-cast to dispatch_queue_t. The DISPATCH_APPLY_AUTO_AVAILABLE macro is defined as 1 on macOS 10.9 and later, iOS 7.0 and later, and any tvOS or watchOS version, and 0 otherwise.

#if DISPATCH_APPLY_AUTO_AVAILABLE
#define DISPATCH_APPLY_AUTO ((dispatch_queue_t _Nonnull)0) // change 0 to dispatch_queue_t
#endif

A constant passed to dispatch_apply or dispatch_apply_f to request that the system automatically use a worker thread as close to the configuration of the current thread as possible.

Passing this constant as the queue parameter when submitting a block for parallel invocation automatically uses the global concurrent queue that best matches the caller’s quality of service. (That is, pass DISPATCH_APPLY_AUTO as the dispatch_queue_t queue parameter of dispatch_apply or dispatch_apply_f.)

Note: You should not assume which global concurrent queues will actually be used. Use this constant to deploy backwards to macOS 10.9, iOS 7.0, and any tvOS or watchOS release.

dispatch_apply

dispatch_apply submits a block to a dispatch queue for parallel invocation (fast iteration).

#ifdef __BLOCKS__
API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_NONNULL3 DISPATCH_NOTHROW
void
dispatch_apply(size_t iterations,
        dispatch_queue_t DISPATCH_APPLY_QUEUE_ARG_NULLABILITY queue,
        DISPATCH_NOESCAPE void (^block)(size_t));
#endif

A block is submitted to the scheduling queue for parallel invocation. This function waits for the task block to complete before returning, and if the specified queue is concurrent, the block can be called concurrently, so it must be a reentrant safe block.

Each invocation of this block passes the current iteration index.

Iterations: Specifies the number of iterations to be executed.

Queue: Scheduling queue to which blocks are submitted. The preferred value to pass is DISPATCH_APPLY_AUTO to automatically use the queue appropriate for the calling thread.

Block: The block to be called has a specified number of iterations. The result of passing NULL in this parameter is indeterminate.

Normally we would iterate with a for loop, but GCD gives us the fast-iteration function dispatch_apply. dispatch_apply adds a task to the specified queue the specified number of times and waits for all of those tasks to complete. If dispatch_apply is used on a serial queue, the iterations execute sequentially and synchronously, just like a for loop, which misses the point of fast iteration; we can use a concurrent queue for asynchronous execution instead. For example, to iterate over the numbers 0 to 5, a for loop takes one element at a time, while dispatch_apply can traverse multiple numbers simultaneously on multiple threads. Also, dispatch_apply waits for all tasks to complete, on both serial and concurrent queues, just like a synchronous operation or the dispatch_group_wait function of a dispatch group. Because tasks execute asynchronously on a concurrent queue, the duration of each task varies, and so does the order in which the tasks finish, but an "apply - end" log placed after the call is always printed last, because dispatch_apply waits for all tasks to complete.
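
A sketch of the fast-iteration pattern just described; DISPATCH_APPLY_AUTO lets the system pick an appropriate global concurrent queue, and dispatch_apply returns only after every iteration has finished:

dispatch_apply(6, DISPATCH_APPLY_AUTO, ^(size_t index) {
    NSLog(@"iteration %zu", index); // iterations may run concurrently, in any order
});
NSLog(@"apply - end");              // always printed last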

dispatch_apply_f

dispatch_apply_f submits a function to a dispatch queue for parallel invocation (fast iteration).

API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_NONNULL4 DISPATCH_NOTHROW
void
dispatch_apply_f(size_t iterations,
        dispatch_queue_t DISPATCH_APPLY_QUEUE_ARG_NULLABILITY queue,
        void *_Nullable context, void (*work)(void *_Nullable, size_t));

For details, see dispatch_apply.

dispatch_get_current_queue

dispatch_get_current_queue returns the queue on which the currently executing block is running.

API_DEPRECATED("unsupported interface".macos(10.6.10.9), ios(4.0.6.0)) // It has been abandoned. Do not reuse it
DISPATCH_EXPORT DISPATCH_PURE DISPATCH_WARN_RESULT DISPATCH_NOTHROW
dispatch_queue_t
dispatch_get_current_queue(void);
Copy the code

When dispatch_get_current_queue() is called outside the context of a submitted block, it returns the default concurrent queue.

It is recommended for debugging and logging only: code must not make any assumptions about the returned queue unless it is one of the global queues or a queue the code itself created. Code must not assume that synchronous execution onto a queue is safe from deadlock if that queue is not the one returned by dispatch_get_current_queue().

When dispatch_get_current_queue() is called on the main thread, it may or may not return the same value as dispatch_get_main_queue(). Comparing the two is not a valid way to test whether code is executing on the main thread (see dispatch_assert_queue and dispatch_assert_queue_not).

Returns the current queue. This feature is deprecated and will be removed in a future release.
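
As the note above suggests, instead of comparing queues you can assert the execution context with dispatch_assert_queue / dispatch_assert_queue_not (available since macOS 10.12 / iOS 10). A minimal sketch, with an illustrative queue:

dispatch_queue_t workQueue = dispatch_queue_create("com.example.work", DISPATCH_QUEUE_SERIAL);
dispatch_async(workQueue, ^{
    dispatch_assert_queue(workQueue);                     // passes: the block runs on workQueue
    dispatch_assert_queue_not(dispatch_get_main_queue()); // passes: we are not on the main queue
});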

dispatch_get_main_queue

dispatch_get_main_queue returns the default queue bound to the main thread. (_dispatch_main_q is a global variable; the main thread and the main queue are created automatically when the program starts.)

API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT
struct dispatch_queue_s _dispatch_main_q; // _dispatch_main_q is a global dispatch_queue_s structure variable

DISPATCH_INLINE DISPATCH_ALWAYS_INLINE DISPATCH_CONST DISPATCH_NOTHROW
dispatch_queue_main_t
dispatch_get_main_queue(void)
{
    // the DISPATCH_GLOBAL_OBJECT macro converts _dispatch_main_q to dispatch_queue_main_t and returns.
    return DISPATCH_GLOBAL_OBJECT(dispatch_queue_main_t, _dispatch_main_q);
}

#define OS_OBJECT_BRIDGE __bridge

// This macro definition is also very simple. It simply converts object to type type
#define DISPATCH_GLOBAL_OBJECT(type, object) ((OS_OBJECT_BRIDGE type)&(object))

To call the block submitted to the main queue, the application must call dispatch_main(), NSApplicationMain(), or use CFRunLoop on the main thread.

The main queue is used to interact with the main thread and the main runloop in the application context.

Because the main queue does not behave entirely like a regular serial queue, it may have harmful side effects when used in processes that are not UI applications (daemons). For such processes, the main queue should be avoided.

Returns the main queue. This queue is automatically created on behalf of the main thread before main() is called. (_dispatch_main_q)
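
A typical sketch: do the heavy work on a background queue, then hop back to the main queue for UI updates (the work inside the blocks is illustrative).

dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    NSData *data = nil; // ... load or compute something expensive ...
    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the main thread: safe to touch UIKit/AppKit here.
        NSLog(@"finished, data = %@", data);
    });
});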

dispatch_queue_priority_t

dispatch_queue_priority_t is the type of queue priority values.

#define DISPATCH_QUEUE_PRIORITY_HIGH 2
#define DISPATCH_QUEUE_PRIORITY_DEFAULT 0
#define DISPATCH_QUEUE_PRIORITY_LOW (-2)
#define DISPATCH_QUEUE_PRIORITY_BACKGROUND INT16_MIN

typedef long dispatch_queue_priority_t;

DISPATCH_QUEUE_PRIORITY_HIGH: items dispatched to the queue run at high priority; that is, the queue is scheduled for execution before any default-priority or low-priority queue.

DISPATCH_QUEUE_PRIORITY_DEFAULT: Items scheduled to queues will run with default priority, that is, queue execution will be scheduled after all high-priority queues have been scheduled, but before any lower-priority queues have been scheduled.

DISPATCH_QUEUE_PRIORITY_LOW: items dispatched to the queue run at low priority; that is, the queue is scheduled for execution after all default-priority and high-priority queues have been scheduled.

DISPATCH_QUEUE_PRIORITY_BACKGROUND: items dispatched to the queue run at background priority; that is, the queue is scheduled for execution after all higher-priority queues have been scheduled, and the system runs items on this queue on a thread with background status as per setpriority(2) (that is, disk I/O is throttled and the thread's scheduling priority is set to the lowest value).

dispatch_get_global_queue

dispatch_get_global_queue returns a well-known global concurrent queue of a given quality-of-service class (qos_class_t) or priority (as defined by dispatch_queue_priority_t).

API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_CONST DISPATCH_WARN_RESULT DISPATCH_NOTHROW
dispatch_queue_global_t
dispatch_get_global_queue(long identifier, unsigned long flags);

Identifier: A quality-of-service class defined in qos_class_t or a priority defined in dispatch_queue_priority_t.

It is recommended to use quality of service class values to identify well-known global concurrent queues:

  • QOS_CLASS_USER_INTERACTIVE
  • QOS_CLASS_USER_INITIATED
  • QOS_CLASS_DEFAULT
  • QOS_CLASS_UTILITY
  • QOS_CLASS_BACKGROUND

Global concurrent queues can still be identified by their priorities, which map to the following QOS classes:

  • DISPATCH_QUEUE_PRIORITY_HIGH: QOS_CLASS_USER_INITIATED
  • DISPATCH_QUEUE_PRIORITY_DEFAULT: QOS_CLASS_DEFAULT
  • DISPATCH_QUEUE_PRIORITY_LOW: QOS_CLASS_UTILITY
  • DISPATCH_QUEUE_PRIORITY_BACKGROUND: QOS_CLASS_BACKGROUND

Flags: Reserved for future use. Passing any value other than zero may result in NULL being returned, so in everyday use just pass 0.

Result: Returns the global queue for the request, or NULL if the global queue does not exist.
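
For example (the flags argument must be 0):

dispatch_queue_t utilityQueue = dispatch_get_global_queue(QOS_CLASS_UTILITY, 0);
dispatch_async(utilityQueue, ^{
    NSLog(@"utility-QoS work");
});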

dispatch_queue_attr_t

Attributes of a scheduling queue.

DISPATCH_DECL(dispatch_queue_attr);

The macro expands to:

@protocol OS_dispatch_queue_attr <OS_dispatch_object>
@end

typedef NSObject<OS_dispatch_queue_attr> * dispatch_queue_attr_t;

OS_dispatch_queue_attr is a protocol derived from the OS_dispatch_object protocol, and dispatch_queue_attr_t is defined as an alias for a pointer to an NSObject instance that conforms to that protocol. (So dispatch_queue_attr_t is also an NSObject pointer.)

dispatch_queue_attr_make_initially_inactive

dispatch_queue_attr_make_initially_inactive returns an attribute value that can be supplied to dispatch_queue_create or dispatch_queue_create_with_target, so that the queue you create is initially inactive.

API_AVAILABLE(macos(10.12), ios(10.0), tvos(10.0), watchos(3.0))
DISPATCH_EXPORT DISPATCH_WARN_RESULT DISPATCH_PURE DISPATCH_NOTHROW
dispatch_queue_attr_t
dispatch_queue_attr_make_initially_inactive(
        dispatch_queue_attr_t _Nullable attr);

Scheduling queues can be created in an inactive state. Queues in this state must be activated before any blocks associated with them can be called.

A queue that is inactive cannot be released and dispatch_activate must be called before the last reference to the queue created with this attribute is released.

You can change the target queue of an inactive queue using dispatch_set_target_queue. Once the initially inactive queue is activated, changes to the target queue are no longer allowed.

Attr: The queue attribute value to combine with the initially-inactive attribute.

Return: An attribute value that can be supplied to dispatch_queue_create or dispatch_queue_create_with_target. The new value combines the attributes specified by the attr parameter with the initially-inactive attribute.
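
A sketch of an initially inactive queue (label illustrative): blocks can be queued up front, but nothing runs until dispatch_activate is called.

dispatch_queue_attr_t attr = dispatch_queue_attr_make_initially_inactive(DISPATCH_QUEUE_SERIAL);
dispatch_queue_t queue = dispatch_queue_create("com.example.inactive", attr);

dispatch_async(queue, ^{ NSLog(@"runs only after activation"); });

// ... further configuration is still allowed here, e.g. dispatch_set_target_queue ...

dispatch_activate(queue); // the queued block can now be invoked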

DISPATCH_QUEUE_SERIAL

The DISPATCH_QUEUE_SERIAL macro is simply NULL; that is, in dispatch_queue_t serialQueue = dispatch_queue_create("com.com", DISPATCH_QUEUE_SERIAL); the attribute used to create the serial queue is just NULL.

#define DISPATCH_QUEUE_SERIAL NULL

Properties that can be used to create scheduling queues that serially invoke blocks in FIFO order. (dispatch_queue_serial_t)

DISPATCH_QUEUE_SERIAL_INACTIVE

#define DISPATCH_QUEUE_SERIAL_INACTIVE \
        dispatch_queue_attr_make_initially_inactive(DISPATCH_QUEUE_SERIAL)

Property that can be used to create scheduling queues that invoke blocks sequentially in FIFO order. This property is initially inactive.

DISPATCH_QUEUE_CONCURRENT

An attribute that can be used to create a dispatch queue that invokes blocks concurrently and supports barrier blocks submitted through the dispatch barrier API. (Both regular blocks and barrier blocks are accepted.)

#define DISPATCH_GLOBAL_OBJECT(type, object) ((OS_OBJECT_BRIDGE type)&(object))

#define DISPATCH_QUEUE_CONCURRENT \
        DISPATCH_GLOBAL_OBJECT(dispatch_queue_attr_t, \
        _dispatch_queue_attr_concurrent)
API_AVAILABLE(macos(10.7), ios(4.3))
DISPATCH_EXPORT
struct dispatch_queue_attr_s _dispatch_queue_attr_concurrent; // There is a global variable of type dispatch_queue_attr_s.

Similarly, the DISPATCH_QUEUE_CONCURRENT macro definition force-casts the global variable _dispatch_queue_attr_concurrent to dispatch_queue_attr_t.

DISPATCH_QUEUE_CONCURRENT_INACTIVE

An attribute that can be used to create a dispatch queue that invokes blocks concurrently and supports barrier blocks submitted through the dispatch barrier API (dispatch_barrier_async), and that is initially inactive.

#define DISPATCH_QUEUE_CONCURRENT_INACTIVE \
        dispatch_queue_attr_make_initially_inactive(DISPATCH_QUEUE_CONCURRENT)

dispatch_autorelease_frequency_t

dispatch_autorelease_frequency_t is the type of the values passed to the dispatch_queue_attr_make_with_autorelease_frequency function.

// Enumerate macro definitions
#define DISPATCH_ENUM(name, type, ...) \
typedef enum : type { __VA_ARGS__ } __DISPATCH_ENUM_ATTR name##_t

DISPATCH_ENUM(dispatch_autorelease_frequency, unsigned long,
    DISPATCH_AUTORELEASE_FREQUENCY_INHERIT DISPATCH_ENUM_API_AVAILABLE(
            macos(10.12), ios(10.0), tvos(10.0), watchos(3.0)) = 0,
    DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM DISPATCH_ENUM_API_AVAILABLE(
            macos(10.12), ios(10.0), tvos(10.0), watchos(3.0)) = 1,
    DISPATCH_AUTORELEASE_FREQUENCY_NEVER DISPATCH_ENUM_API_AVAILABLE(
            macos(10.12), ios(10.0), tvos(10.0), watchos(3.0)) = 2,
);

DISPATCH_AUTORELEASE_FREQUENCY_INHERIT: dispatch queues with this autorelease frequency inherit the behavior from their target queue; this is the default behavior for manually created queues.

DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM: a dispatch queue with this autorelease frequency pushes and pops an autorelease pool around the execution of every block asynchronously submitted to it. See dispatch_queue_attr_make_with_autorelease_frequency().

DISPATCH_AUTORELEASE_FREQUENCY_NEVER: dispatch queues with this autorelease frequency never set up an individual autorelease pool around the execution of blocks asynchronously submitted to them; this is the behavior of the global concurrent queues.

dispatch_queue_attr_make_with_autorelease_frequency

dispatch_queue_attr_make_with_autorelease_frequency returns a dispatch queue attribute value with its autorelease frequency set to the specified value.

API_AVAILABLE(macos(10.12), ios(10.0), tvos(10.0), watchos(3.0))
DISPATCH_EXPORT DISPATCH_WARN_RESULT DISPATCH_PURE DISPATCH_NOTHROW
dispatch_queue_attr_t
dispatch_queue_attr_make_with_autorelease_frequency(
        dispatch_queue_attr_t _Nullable attr,
        dispatch_autorelease_frequency_t frequency);

When the queue uses the per-work-item autorelease frequency (either directly or inherited from its target queue), any block submitted to it asynchronously (via dispatch_async, dispatch_barrier_async, dispatch_group_notify, etc.) is executed as if surrounded by an individual Objective-C @autoreleasepool scope.

The automatic release frequency has no effect on blocks synchronously submitted to queues (via dispatch_sync, dispatch_barrier_sync).

Global concurrent queues have the behavior DISPATCH_AUTORELEASE_FREQUENCY_NEVER. By default, DISPATCH_AUTORELEASE_FREQUENCY_INHERIT is used for manually created scheduling queues.

Queues created using this property cannot change the target queue after activation. See dispatch_set_target_queue and dispatch_activate.

Attr: Queue attribute value to be combined with the specified auto release frequency or NULL.

Frequency: Automatic release frequency of requests.

Return: Returns an attribute value that can be supplied to dispatch_queue_create, or NULL if the requested autorelease frequency is invalid. The new value combines the attributes specified by the attr parameter with the selected autorelease frequency.
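
A sketch combining a serial attribute with a per-work-item autorelease frequency (equivalent to the DISPATCH_QUEUE_SERIAL_WITH_AUTORELEASE_POOL convenience macro shown below):

dispatch_queue_attr_t attr = dispatch_queue_attr_make_with_autorelease_frequency(
        DISPATCH_QUEUE_SERIAL, DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM);
dispatch_queue_t queue = dispatch_queue_create("com.example.autorelease", attr);

dispatch_async(queue, ^{
    // Each asynchronously submitted block behaves as if wrapped in its own @autoreleasepool.
});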

DISPATCH_QUEUE_SERIAL_WITH_AUTORELEASE_POOL

DISPATCH_QUEUE_SERIAL_WITH_AUTORELEASE_POOL: an attribute that can be used to create a dispatch queue that serially invokes blocks in FIFO order, and surrounds the execution of any block submitted to it asynchronously with the equivalent of an individual Objective-C @autoreleasepool scope.

#define DISPATCH_QUEUE_SERIAL_WITH_AUTORELEASE_POOL \
        dispatch_queue_attr_make_with_autorelease_frequency(\
                DISPATCH_QUEUE_SERIAL, DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM)

DISPATCH_QUEUE_CONCURRENT_WITH_AUTORELEASE_POOL

DISPATCH_QUEUE_CONCURRENT_WITH_AUTORELEASE_POOL: dispatch queues created with this attribute invoke blocks concurrently and support barrier blocks submitted with the dispatch barrier API; they also surround the execution of any block submitted to them asynchronously with the equivalent of an individual Objective-C @autoreleasepool scope.

#define DISPATCH_QUEUE_CONCURRENT_WITH_AUTORELEASE_POOL \
        dispatch_queue_attr_make_with_autorelease_frequency(\
                DISPATCH_QUEUE_CONCURRENT, DISPATCH_AUTORELEASE_FREQUENCY_WORK_ITEM)

dispatch_qos_class_t

An alias for qos_class_t.

#if __has_include(<sys/qos.h>)
typedef qos_class_t dispatch_qos_class_t;
#else
typedef unsigned int dispatch_qos_class_t;
#endif

qos_class_t

An abstract thread quality-of-service (QoS) classification.

// __QOS_ENUM enumerates macro definitions
#define __QOS_ENUM(name, type, ...) enum { __VA_ARGS__ }; typedef type name##_t
#define __QOS_CLASS_AVAILABLE(...)

#if defined(__cplusplus) || defined(__OBJC__) || __LP64__
#if defined(__has_feature) && defined(__has_extension)
#if __has_feature(objc_fixed_enum) || __has_extension(cxx_strong_enums)
#undef __QOS_ENUM
#define __QOS_ENUM(name, type, ...) typedef enum : type { __VA_ARGS__ } name##_t
#endif
#endif
#if __has_feature(enumerator_attributes)
#undef __QOS_CLASS_AVAILABLE
#define __QOS_CLASS_AVAILABLE __API_AVAILABLE
#endif
#endif

// See here for enumeration values
__QOS_ENUM(qos_class, unsigned int,
    QOS_CLASS_USER_INTERACTIVE
            __QOS_CLASS_AVAILABLE(macos(10.10), ios(8.0)) = 0x21,
    QOS_CLASS_USER_INITIATED
            __QOS_CLASS_AVAILABLE(macos(10.10), ios(8.0)) = 0x19,
    QOS_CLASS_DEFAULT
            __QOS_CLASS_AVAILABLE(macos(10.10), ios(8.0)) = 0x15,
    QOS_CLASS_UTILITY
            __QOS_CLASS_AVAILABLE(macos(10.10), ios(8.0)) = 0x11,
    QOS_CLASS_BACKGROUND
            __QOS_CLASS_AVAILABLE(macos(10.10), ios(8.0)) = 0x09,
    QOS_CLASS_UNSPECIFIED
            __QOS_CLASS_AVAILABLE(macos(10.10), ios(8.0)) = 0x00,
);


A thread quality-of-service (QoS) class is an ordered, abstract representation of the nature of the work expected to be performed by a pthread, dispatch queue, or NSOperation. Each class specifies the maximum thread scheduling priority for that band (which can be combined with a relative priority offset within the band), as well as quality-of-service characteristics such as timer latency, CPU throughput, I/O throughput, network socket traffic management behavior, and so on.

The system makes a best effort to allocate available resources to every QOS class. Quality-of-service degradation only occurs during system resource contention, proportionally to the QOS class. That said, QOS classes representing user-initiated work attempt to achieve peak throughput, while other QOS classes attempt to achieve peak energy and thermal efficiency, even in the absence of contention. Finally, the use of QOS classes does not allow threads to supersede any limits that may apply to the overall process.

  • QOS_CLASS_USER_INTERACTIVE: indicates that the work performed by this thread is user-interactive. Such work is requested to run at a high priority relative to other work on the system. Specifying this QOS class is a request to run with nearly all available system CPU and I/O bandwidth, even under contention. This is not an energy-efficient QOS class for large tasks. Its use should be limited to critical interactions with the user, such as handling events on the main event loop, view drawing, animation, and so on. (It can be understood as telling the system to process user-interaction events with higher priority.)
  • QOS_CLASS_USER_INITIATED: indicates that the work performed by this thread was initiated by the user and that the user is likely waiting for the result. Such work is requested to run at a priority below critical user-interactive work, but above other work on the system. This is not an energy-efficient QOS class for large tasks. Its use should be limited to operations short enough that the user is unlikely to switch tasks while waiting for the result. Typical user-initiated work will indicate progress by showing placeholder content or a modal user interface.
  • QOS_CLASS_DEFAULT: indicates the default QOS class used by the system in the absence of more specific QOS information. Such work is requested to run at a priority below critical user-interactive and user-initiated work, but above utility and background tasks. Threads created by pthread_create without a QOS class attribute default to QOS_CLASS_DEFAULT. This QOS class value is not intended to be used as a work classification and should only be set when propagating or restoring QOS class values provided by the system.
  • QOS_CLASS_UTILITY: indicates that the work performed by this thread may or may not have been initiated by the user and that the user is unlikely to be waiting for the result immediately. Such work is requested to run at a priority below critical user-interactive and user-initiated work, but above low-level system maintenance tasks. Use of this QOS class indicates that the work should be carried out in an energy-efficient manner. The progress of utility work may or may not be shown to the user, but the effect of the work is visible to the user.
  • QOS_CLASS_BACKGROUND: indicates that the work performed by this thread was not initiated by the user and that the user may be unaware of the result. Such work is requested to run at a priority below other work, and use of this QOS class indicates that the work should be run in the most energy-efficient and thermally-efficient manner.
  • QOS_CLASS_UNSPECIFIED: indicates that QOS information is missing or deleted. As an API return value, it may indicate that the thread or pthread attribute configuration is incompatible with the old API or conflicts with QOS class systems.

QOS_MIN_RELATIVE_PRIORITY

Minimum relative priority that can be specified in the QOS class. These priorities are relative only within a given QOS class and are only meaningful to the current process.

#define QOS_MIN_RELATIVE_PRIORITY (-15)

dispatch_queue_attr_make_with_qos_class

dispatch_queue_attr_make_with_qos_class returns an attribute value that can be supplied to dispatch_queue_create or dispatch_queue_create_with_target, in order to assign a QOS class (parameter: qos_class) and a relative priority (parameter: relative_priority) to the queue.

API_AVAILABLE(macos(10.10), ios(8.0))
DISPATCH_EXPORT DISPATCH_WARN_RESULT DISPATCH_PURE DISPATCH_NOTHROW
dispatch_queue_attr_t
dispatch_queue_attr_make_with_qos_class(dispatch_queue_attr_t _Nullable attr,
        dispatch_qos_class_t qos_class, int relative_priority);

If specified this way, the QOS class and relative priority take precedence over the priority inherited from the destination queue (if any) of the scheduling queue, as long as it does not result in a lower QOS class and relative priority.

Global queue priorities are mapped to the following QOS classes:

  • DISPATCH_QUEUE_PRIORITY_HIGH: QOS_CLASS_USER_INITIATED
  • DISPATCH_QUEUE_PRIORITY_DEFAULT: QOS_CLASS_DEFAULT
  • DISPATCH_QUEUE_PRIORITY_LOW: QOS_CLASS_UTILITY
  • DISPATCH_QUEUE_PRIORITY_BACKGROUND: QOS_CLASS_BACKGROUND

Such as:

dispatch_queue_t queue;
dispatch_queue_attr_t attr;

// The arguments are DISPATCH_QUEUE_SERIAL and QOS_CLASS_UTILITY
attr = dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL, QOS_CLASS_UTILITY, 0);

queue = dispatch_queue_create("com.example.myqueue", attr);

QOS classes and relative priorities set on queues in this way have no effect on blocks submitted to queues synchronously (via dispatch_sync, dispatch_barrier_sync).

Attr: indicates the queue attribute value to be combined with the QOS class, or NULL.

Qos_class: QOS class value. Only existing enumerated values can be passed. Passing any other values results in NULL being returned.

Relative_priority: relative priority of the QOS class. This value is the negative offset of a given category from the maximum supported scheduler priority, and passing a value greater than zero or less than QOS_MIN_RELATIVE_PRIORITY (-15) will result in NULL being returned.

Return: Returns an attribute value that can be supplied to dispatch_queue_create and dispatch_queue_create_with_target; NULL is returned if an invalid QOS class is requested. The new value combines the attributes specified by the attr parameter, the new QOS class (parameter: qos_class), and the relative priority (parameter: relative_priority).

DISPATCH_TARGET_QUEUE_DEFAULT

DISPATCH_TARGET_QUEUE_DEFAULT is passed to the dispatch_queue_create_with_target, dispatch_set_target_queue, and dispatch_source_create functions to indicate that the default target queue of the relevant object type should be used.

#define DISPATCH_TARGET_QUEUE_DEFAULT NULL

dispatch_queue_create_with_target

dispatch_queue_create_with_target creates a new dispatch queue with a specified target queue.

API_AVAILABLE(macos(10.12), ios(10.0), tvos(10.0), watchos(3.0))
DISPATCH_EXPORT DISPATCH_MALLOC DISPATCH_RETURNS_RETAINED DISPATCH_WARN_RESULT
DISPATCH_NOTHROW
dispatch_queue_t
dispatch_queue_create_with_target(const char *_Nullable label,
        dispatch_queue_attr_t _Nullable attr, dispatch_queue_t _Nullable target)
        DISPATCH_ALIAS_V2(dispatch_queue_create_with_target);

Dispatch queues created with the DISPATCH_QUEUE_SERIAL or NULL attribute invoke blocks in FIFO order. (Serial queue)

Dispatch queues created with the DISPATCH_QUEUE_CONCURRENT attribute can invoke blocks concurrently (similar to the global concurrent queues obtained with the dispatch_get_global_queue function, but with potentially more overhead), and also support barrier blocks submitted via the dispatch_barrier_async API, which can be used, for example, to implement an efficient reader-writer scheme (multiple-reader, single-writer model).

When the dispatch queue is no longer needed, it should be released using dispatch_release. Note that any pending blocks submitted asynchronously to the queue will hold a reference to that queue. Therefore, the queue is not released until pending blocks have been completed.

A queue created with dispatch_queue_create_with_target cannot have its target queue changed, unless it was created inactive (see dispatch_queue_attr_make_initially_inactive), in which case the target queue can be changed until the newly created queue is activated with dispatch_activate.

Label: String label attached to a queue. This parameter is optional and can be NULL.

Attr: predefined attributes, such as DISPATCH_QUEUE_SERIAL, DISPATCH_QUEUE_CONCURRENT, or the result of calling one of the dispatch_queue_attr_make_with_* functions.

Target: the target queue of the newly created queue. The target queue is retained. If DISPATCH_TARGET_QUEUE_DEFAULT is passed, the queue's target is set to the default target queue for the given queue type.
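
As a rough sketch of how this might be used (the queue labels and work below are invented for illustration), a serial queue can be created that targets a custom concurrent queue, so its blocks are funneled through that queue and run at its QOS:

dispatch_queue_attr_t utilityAttr =
        dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT, QOS_CLASS_UTILITY, 0);
dispatch_queue_t targetQueue = dispatch_queue_create("com.example.target", utilityAttr);

// The serial queue executes its blocks one at a time, on targetQueue's behalf.
dispatch_queue_t serialQueue =
        dispatch_queue_create_with_target("com.example.serial", DISPATCH_QUEUE_SERIAL, targetQueue);

dispatch_async(serialQueue, ^{
    NSLog(@"runs serially, at targetQueue's QOS");
});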

dispatch_queue_create

dispatch_queue_create creates a new dispatch queue to which blocks can be submitted.

API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_MALLOC DISPATCH_RETURNS_RETAINED DISPATCH_WARN_RESULT
DISPATCH_NOTHROW
dispatch_queue_t
dispatch_queue_create(const char *_Nullable label,
        dispatch_queue_attr_t _Nullable attr);

Dispatch queues created with the DISPATCH_QUEUE_SERIAL or NULL attribute invoke blocks in FIFO order.

Dispatch queues created with the DISPATCH_QUEUE_CONCURRENT attribute may invoke blocks concurrently (similar to the global concurrent queues obtained via dispatch_get_global_queue, but with potentially more overhead). They also support barrier blocks submitted via the dispatch_barrier_async API, which can be used, for example, to implement an efficient reader/writer scheme (multiple readers, single writer).

When the dispatch queue is no longer needed, it should be released using dispatch_release. Note that any pending blocks submitted asynchronously to the queue will hold a reference to that queue. Therefore, the queue is not released until pending blocks have been completed.

By passing the result of the dispatch_queue_attr_make_with_qos_class function to the attr parameter of this function, you can specify a quality-of-service class (dispatch_qos_class_t qos_class) and a relative priority (int relative_priority) for the newly created queue. A quality-of-service level specified this way takes precedence over that of the newly created dispatch queue's target queue (if any), as long as it does not lower the QOS class and relative priority.

If no QOS class is specified, the target queue of a newly created dispatch queue is the default-priority global concurrent queue.

Label: String label attached to a queue. This parameter is optional and can be NULL.

Attr: predefined attributes, such as DISPATCH_QUEUE_SERIAL, DISPATCH_QUEUE_CONCURRENT, or the result of calling one of the dispatch_queue_attr_make_with_* functions.

Result: The newly created Dispatch queue is returned.
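
A minimal sketch (the labels are invented for illustration) creating both flavors of queue:

// Serial queue: blocks run one at a time, in FIFO order.
dispatch_queue_t serialQueue = dispatch_queue_create("com.example.serial", DISPATCH_QUEUE_SERIAL);

// Concurrent queue with an explicit QOS class and relative priority.
dispatch_queue_attr_t attr =
        dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_CONCURRENT, QOS_CLASS_USER_INITIATED, 0);
dispatch_queue_t concurrentQueue = dispatch_queue_create("com.example.concurrent", attr);

dispatch_async(serialQueue, ^{ NSLog(@"serial work"); });
dispatch_async(concurrentQueue, ^{ NSLog(@"concurrent work"); });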

DISPATCH_CURRENT_QUEUE_LABEL

Constant passed to the dispatch_queue_get_label function to retrieve the label of the current queue.

#define DISPATCH_CURRENT_QUEUE_LABEL NULL

dispatch_queue_get_label

dispatch_queue_get_label returns the label specified for the given queue when it was created, or an empty string if a NULL label was specified.

API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_PURE DISPATCH_WARN_RESULT DISPATCH_NOTHROW
const char *
dispatch_queue_get_label(dispatch_queue_t _Nullable queue);

Passing DISPATCH_CURRENT_QUEUE_LABEL returns the label of the current queue.

Queue: the queue to query, or DISPATCH_CURRENT_QUEUE_LABEL.
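
A small sketch (label invented) showing both ways of querying a label:

dispatch_queue_t queue = dispatch_queue_create("com.example.worker", DISPATCH_QUEUE_SERIAL);

NSLog(@"%s", dispatch_queue_get_label(queue)); // com.example.worker

dispatch_async(queue, ^{
    // Inside a block, query the label of whichever queue is running it.
    NSLog(@"%s", dispatch_queue_get_label(DISPATCH_CURRENT_QUEUE_LABEL)); // com.example.worker
});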

dispatch_queue_get_qos_class

dispatch_queue_get_qos_class returns the QOS class and relative priority of the given queue (pass an int pointer as relative_priority_ptr to receive the relative priority; the QOS class is returned as a value of type dispatch_qos_class_t).

API_AVAILABLE(macos(10.10), ios(8.0))
DISPATCH_EXPORT DISPATCH_WARN_RESULT DISPATCH_NONNULL1 DISPATCH_NOTHROW
dispatch_qos_class_t
dispatch_queue_get_qos_class(dispatch_queue_t queue,
        int *_Nullable relative_priority_ptr);

If the given queue was created with an attribute value returned from dispatch_queue_attr_make_with_qos_class, this function returns the QOS class and relative priority specified at that time; for any other attribute value, it returns a QOS class of QOS_CLASS_UNSPECIFIED and a relative priority of 0.

If the given queue is one of the global queues, this function returns its assigned QOS class value as documented under dispatch_get_global_queue and a relative priority of 0; for the main queue, it returns the QOS value provided by qos_class_main and a relative priority of 0.

Queue: indicates the queue to be queried.

Relative_priority_ptr: pointer to an int variable to be filled with the relative priority offset within the QOS class, or NULL.
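
For example (the values read back follow from the attribute used at creation; label invented):

dispatch_queue_attr_t attr =
        dispatch_queue_attr_make_with_qos_class(DISPATCH_QUEUE_SERIAL, QOS_CLASS_UTILITY, -1);
dispatch_queue_t queue = dispatch_queue_create("com.example.qos", attr);

int relativePriority = 0;
dispatch_qos_class_t qos = dispatch_queue_get_qos_class(queue, &relativePriority);
// qos == QOS_CLASS_UTILITY, relativePriority == -1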

dispatch_set_target_queue

dispatch_set_target_queue sets the target queue of the given object.

API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_NOTHROW
void
dispatch_set_target_queue(dispatch_object_t object,
        dispatch_queue_t _Nullable queue);

The target queue of the object is responsible for processing the object.

If a quality-of-service class and relative priority are not specified for a dispatch queue at creation time, the queue's quality of service is inherited from its target queue. The dispatch_get_global_queue function can be used to obtain a target queue of a particular quality-of-service class, but it is recommended to use dispatch_queue_attr_make_with_qos_class instead.

Blocks submitted to a serial queue whose target queue is another serial queue will not be invoked concurrently with blocks submitted to the target queue or to any other queue with that same target queue.

The result of introducing cycles into the target-queue hierarchy is undefined.

The target queue of a dispatch source specifies where its event handler and cancellation handler blocks are submitted.

The target queue of a dispatch I/O channel specifies where its I/O operations are executed. If the priority of the channel's target queue is set to DISPATCH_QUEUE_PRIORITY_BACKGROUND, then I/O operations performed by dispatch_io_read or dispatch_io_write on that queue are throttled when there is I/O contention.

For all other dispatch object types, the only function of the target queue is to determine where the object's finalizer function is invoked.

Typically, changing the target queue of an object is an asynchronous operation that does not take effect immediately and does not affect blocks that are already associated with the specified object.

However, if an object is inactive when dispatch_set_target_queue is called, the target queue change takes effect immediately and affects blocks already associated with the object. Once an initially inactive object has been activated, calling dispatch_set_target_queue on it results in an assertion and the process being terminated.

If a dispatch queue is active and targeted by other dispatch objects, changing its target queue results in undefined behavior.

Object: indicates the object to be modified. The result of passing NULL in this parameter is indeterminate.

Queue: the new target queue of the object. The queue is retained, and the previous target queue, if any, is released. If queue is DISPATCH_TARGET_QUEUE_DEFAULT, the object's target queue is set to the default target queue for the given object type.
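
As a sketch of the "funnel" pattern described above (queue labels invented): two serial queues that target the same serial queue never run their blocks concurrently with each other.

dispatch_queue_t funnel = dispatch_queue_create("com.example.funnel", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t queueA = dispatch_queue_create("com.example.a", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t queueB = dispatch_queue_create("com.example.b", DISPATCH_QUEUE_SERIAL);

// After this, blocks from queueA and queueB are serialized through funnel.
dispatch_set_target_queue(queueA, funnel);
dispatch_set_target_queue(queueB, funnel);

dispatch_async(queueA, ^{ NSLog(@"A"); });
dispatch_async(queueB, ^{ NSLog(@"B"); });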

dispatch_main

dispatch_main executes blocks submitted to the main queue.

API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_NOTHROW DISPATCH_NORETURN
void
dispatch_main(void);

This function “parks” the main thread, waits for blocks to be submitted to the main queue, and never returns.

Applications that call NSApplicationMain or CFRunLoopRun on the main thread do not need to call dispatch_main.
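
A sketch of a plain command-line tool (no NSApplicationMain or CFRunLoopRun) that uses dispatch_main to keep the main queue serviced:

#import <Foundation/Foundation.h>

int main(int argc, const char *argv[]) {
    dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
        // Background work...
        dispatch_async(dispatch_get_main_queue(), ^{
            NSLog(@"back on the main queue");
        });
    });
    dispatch_main(); // never returns; blocks submitted to the main queue run here
}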

dispatch_time_t

dispatch_time_t is an abstract representation of time, where zero means “now” and DISPATCH_TIME_FOREVER means “infinity”; every value in between is an opaque encoding.

typedef uint64_t dispatch_time_t;

dispatch_after

dispatch_after schedules a block to be executed on a given queue after a specified time.

#ifdef __BLOCKS__
API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_NONNULL2 DISPATCH_NONNULL3 DISPATCH_NOTHROW
void
dispatch_after(dispatch_time_t when, dispatch_queue_t queue,
        dispatch_block_t block);
#endif

// Example: execute on serialQueue after 2 seconds
dispatch_after(dispatch_time(DISPATCH_TIME_NOW, 2 * NSEC_PER_SEC), serialQueue, ^{
    NSLog(@"🚥🚥 %@", [NSThread currentThread]);
});

Passing DISPATCH_TIME_NOW as the “when” parameter is supported, but is not as optimal as calling dispatch_async. Passing DISPATCH_TIME_FOREVER is undefined.

When: time milestone returned from dispatch_time or dispatch_walltime.

Queue: The queue to which a given block will be committed at a specified time. The result of passing NULL in this parameter is indeterminate.

Block: a block of code to execute. The result of passing NULL in this parameter is indeterminate.

dispatch_after_f

dispatch_after_f schedules a function to be executed on a given queue after a specified time, with context as its argument.

API_AVAILABLE(macos(10.6), ios(4.0))
DISPATCH_EXPORT DISPATCH_NONNULL2 DISPATCH_NONNULL4 DISPATCH_NOTHROW
void
dispatch_after_f(dispatch_time_t when, dispatch_queue_t queue,
        void *_Nullable context, dispatch_function_t work);
Copy the code

Refer to the dispatch_after function, but replace the block with a function.
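
A sketch with a plain C work function (the function name and message are invented for illustration); the context pointer is handed straight to it:

static void timer_fired(void *context) {
    NSLog(@"%s", (const char *)context);
}

// ... somewhere in the program:
dispatch_after_f(dispatch_time(DISPATCH_TIME_NOW, 2 * NSEC_PER_SEC),
        dispatch_get_main_queue(), (void *)"two seconds later", timer_fired);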

Dispatch Barrier API

The Dispatch Barrier API is a mechanism for submitting barrier blocks to a dispatch queue, analogous to the dispatch_async/dispatch_sync API; it can be used to implement an efficient reader/writer scheme. Barrier blocks only exhibit special behavior when submitted to queues created with the DISPATCH_QUEUE_CONCURRENT attribute. On such a queue, a barrier block does not run until all blocks submitted to the queue earlier have completed, and any blocks submitted after the barrier block do not run until the barrier block has completed. When submitted to a global queue or to a queue not created with the DISPATCH_QUEUE_CONCURRENT attribute, barrier blocks behave the same as blocks submitted with dispatch_async/dispatch_sync. (If dispatch_async and dispatch_barrier_async both target a queue obtained from dispatch_get_global_queue, the blocks execute concurrently and the barrier has no effect.)
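
A sketch of the reader/writer pattern on a custom concurrent queue (queue label and keys invented): reads go through dispatch_sync and may overlap with each other, while the write goes through dispatch_barrier_async and runs exclusively.

dispatch_queue_t rwQueue = dispatch_queue_create("com.example.rw", DISPATCH_QUEUE_CONCURRENT);
NSMutableDictionary *store = [NSMutableDictionary dictionary];

// Writer: waits for in-flight readers, runs alone, then later readers proceed.
dispatch_barrier_async(rwQueue, ^{
    store[@"key"] = @"newValue";
});

// Reader: may run concurrently with other readers.
__block id value;
dispatch_sync(rwQueue, ^{
    value = store[@"key"];
});
NSLog(@"read %@", value);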

dispatch_barrier_async

dispatch_barrier_async submits a barrier block for asynchronous execution on the dispatch queue. (Like dispatch_async, it does not block the current thread and returns immediately so subsequent statements can execute, but blocks submitted to the queue after the barrier are not executed until the barrier block has completed.)

#ifdef __BLOCKS__
API_AVAILABLE(macos(10.7), ios(4.3))
DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
dispatch_barrier_async(dispatch_queue_t queue, dispatch_block_t block);
#endif

Submits a block to a dispatch queue like dispatch_async does, but marks the block as a barrier (only relevant on DISPATCH_QUEUE_CONCURRENT queues).

Queue: The scheduled queue to which the block is submitted. The system keeps the reference on the target queue until the block is complete. The result of passing NULL in this parameter is indeterminate.

Block: block submitted to the destination scheduling queue. This function performs Block_copy and Block_release on behalf of the caller. The result of passing NULL in this parameter is indeterminate.

dispatch_barrier_async_f

dispatch_barrier_async_f is the same as above, with the block replaced by a function.

API_AVAILABLE(macos(10.7), ios(4.3))
DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL3 DISPATCH_NOTHROW
void
dispatch_barrier_async_f(dispatch_queue_t queue,
        void *_Nullable context, dispatch_function_t work);

dispatch_barrier_sync

dispatch_barrier_sync submits a barrier block for synchronous execution on the dispatch queue (blocking the current thread until the barrier block has completed).

#ifdef __BLOCKS__
API_AVAILABLE(macos(10.7), ios(4.3))
DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
dispatch_barrier_sync(dispatch_queue_t queue,
        DISPATCH_NOESCAPE dispatch_block_t block);
#endif

Submits a block to a dispatch queue like dispatch_sync does (blocking the current thread until the block completes), but marks the block as a barrier (only relevant on DISPATCH_QUEUE_CONCURRENT queues).

Queue: The scheduled queue to which the block is submitted. The result of passing NULL in this parameter is indeterminate.

Block: Block to be called on the destination scheduling queue. The result of passing NULL in this parameter is indeterminate.

dispatch_barrier_sync_f

As with dispatch_barrier_sync, just replace blocks with functions.

API_AVAILABLE(macos(10.7), ios(4.3))
DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL3 DISPATCH_NOTHROW
void
dispatch_barrier_sync_f(dispatch_queue_t queue,
        void *_Nullable context, dispatch_function_t work);

dispatch_barrier_async_and_wait

dispatch_barrier_async_and_wait submits a barrier block for synchronous execution on the dispatch queue (it returns only after the barrier block has completed).

#ifdef __BLOCKS__
API_AVAILABLE(macos(10.14), ios(12.0), tvos(12.0), watchos(5.0))
DISPATCH_EXPORT DISPATCH_NONNULL_ALL DISPATCH_NOTHROW
void
dispatch_barrier_async_and_wait(dispatch_queue_t queue,
        DISPATCH_NOESCAPE dispatch_block_t block);
#endif

Submits a block to a dispatch queue like dispatch_async_and_wait does, but marks the block as a barrier (only relevant on DISPATCH_QUEUE_CONCURRENT queues).

dispatch_barrier_async_and_wait_f

Same as dispatch_barrier_async_and_wait, with the block replaced by a function.

API_AVAILABLE(macos(10.14), ios(12.0), tvos(12.0), watchos(5.0))
DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NONNULL3 DISPATCH_NOTHROW
void
dispatch_barrier_async_and_wait_f(dispatch_queue_t queue,
        void *_Nullable context, dispatch_function_t work);

Dispatch queue-specific contexts

This API allows different subsystems to associate contexts with shared queues without risk of collisions, and to retrieve contexts from blocks executed on that queue or any of its sub-queues in the target queue hierarchy.

dispatch_queue_set_specific

dispatch_queue_set_specific associates a subsystem-specific context with a dispatch queue under a subsystem-specific key (the key parameter is of type const void *: the pointer itself can be changed, but what it points to cannot be modified through this pointer).

API_AVAILABLE(macos(10.7), ios(5.0))
DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_NOTHROW
void
dispatch_queue_set_specific(dispatch_queue_t queue, const void *key,
        void *_Nullable context, dispatch_function_t _Nullable destructor);

When a new context is set for the same key, or after all references to the queue are released, the specified destructor is invoked using the context on the default priority global concurrent queue.

Queue: the dispatch queue to modify. The result of passing NULL in this parameter is indeterminate.

Key: The key for which the context is to be set, usually a pointer to a subsystem-specific static variable. The key is only compared as a pointer and is never dereferenced. Passing string constants directly is not recommended. The NULL key is reserved, and attempts to set a context for it are ignored.

Context: The object’s new subsystem specific context. This may be NULL.

Destructor: Pointer to the destructor. This can be NULL, and if the context is NULL, it is ignored.
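
A sketch (key and strings invented, and it assumes <string.h>/<stdlib.h> or Foundation are available): the address of a static variable is a convenient collision-free key, and free serves as the destructor for a heap-allocated C string context.

static int kSubsystemKey; // its address is the key; the value itself is never read

dispatch_queue_t queue = dispatch_queue_create("com.example.subsystem", DISPATCH_QUEUE_SERIAL);

// free() is invoked (on a global queue) when the context is replaced or the queue goes away.
dispatch_queue_set_specific(queue, &kSubsystemKey, strdup("subsystem context"), free);

const char *context = dispatch_queue_get_specific(queue, &kSubsystemKey);
NSLog(@"%s", context); // subsystem context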

dispatch_queue_get_specific

dispatch_queue_get_specific returns the subsystem-specific context associated with the dispatch queue for the subsystem-unique key.

API_AVAILABLE(macos(10.7), ios(5.0))
DISPATCH_EXPORT DISPATCH_NONNULL1 DISPATCH_PURE DISPATCH_WARN_RESULT
DISPATCH_NOTHROW
void *_Nullable
dispatch_queue_get_specific(dispatch_queue_t queue, const void *key);

Returns the context of the specified key if it has been set on the specified queue.

Key: The key to get the context, usually a pointer to a subsystem-specific static variable. The key is only compared as a pointer, not dereferenced, and passing string constants directly is not recommended.

Result: Specifies the context of the key; NULL if no context is found.

dispatch_get_specific

dispatch_get_specific returns the current subsystem-specific context for the subsystem-unique key.

API_AVAILABLE(macos(10.7), ios(5.0))
DISPATCH_EXPORT DISPATCH_PURE DISPATCH_WARN_RESULT DISPATCH_NOTHROW
void *_Nullable
dispatch_get_specific(const void *key);

When invoked from a block executing on a queue, this returns the context for the specified key if it has been set on that queue; otherwise it returns the result of dispatch_get_specific executed on the queue's target queue; NULL is returned if the current queue is a global concurrent queue.

Key: The key to get the context, usually a pointer to a subsystem-specific static variable. The key is only compared as a pointer, not dereferenced, and passing string constants directly is not recommended.

Result: Specifies the context of the key; NULL if no context is found.
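
A common use of dispatch_get_specific (sketched here with invented names) is to detect whether the caller is already running on a given queue, or on a queue that targets it, for example to avoid deadlocking on dispatch_sync:

static int kWorkQueueKey;

dispatch_queue_t workQueue = dispatch_queue_create("com.example.work", DISPATCH_QUEUE_SERIAL);
dispatch_queue_set_specific(workQueue, &kWorkQueueKey, (void *)1, NULL);

void (^runOnWorkQueue)(dispatch_block_t) = ^(dispatch_block_t block) {
    if (dispatch_get_specific(&kWorkQueueKey) != NULL) {
        block(); // already in workQueue's context
    } else {
        dispatch_sync(workQueue, block);
    }
};

runOnWorkQueue(^{ NSLog(@"safe either way"); });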

Dispatch assertion API

The Dispatch assertion API asserts at runtime that code is (or is not) executing in the context of a given queue. It can be used to check that blocks accessing a resource do so from the queue that protects the resource, and to verify that a block that would deadlock if run on a given queue is never executed on that queue.

dispatch_assert_queue

dispatch_assert_queue verifies that the current block is executing on the given dispatch queue.

API_AVAILABLE(macos(10.12), ios(10.0), tvos(10.0), watchos(3.0))
DISPATCH_EXPORT DISPATCH_NONNULL1
void
dispatch_assert_queue(dispatch_queue_t queue)
        DISPATCH_ALIAS_V2(dispatch_assert_queue);

This function checks that code expected to run on a particular dispatch queue is in fact running in that queue's context.

This function returns if the currently executing block has been submitted to the specified queue or to any queue targeting it (see dispatch_set_target_queue).

If the currently executing block was submitted using a synchronous API (dispatch_sync, dispatch_barrier_sync, etc.), the context of the submitting block is also evaluated (recursively). This function returns if it finds that the synchronously submitting block itself was submitted to the specified queue or to any queue targeting it.

Otherwise, this function asserts: an explanation is logged to the system log and the application is terminated.

Passing the result of dispatch_get_main_queue to this function verifies that the current block was submitted to the main queue, or to a queue targeting it, or is running on the main thread (in any context).

This function also asserts and terminates the application when dispatch_assert_queue is called outside the context of a submitted block (for example, from the context of a thread created manually with pthread_create).

Queue: The scheduling queue on which the current block should run. The result of passing NULL in this parameter is indeterminate.
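
A sketch (the function name is invented) guarding main-queue-only work:

static void update_ui(void) {
    // Asserts (and terminates the process) unless running in the main queue's
    // context or on the main thread.
    dispatch_assert_queue(dispatch_get_main_queue());
    // ... UI work ...
}

dispatch_async(dispatch_get_main_queue(), ^{
    update_ui(); // fine
});

dispatch_async(dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0), ^{
    update_ui(); // logs an explanation and terminates the application
});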

dispatch_assert_queue_barrier

dispatch_assert_queue_barrier verifies that the current block is executing on a given dispatch queue and acting as a barrier on that queue.

API_AVAILABLE(macos(10.12), ios(10.0), tvos(10.0), watchos(3.0))
DISPATCH_EXPORT DISPATCH_NONNULL1
void
dispatch_assert_queue_barrier(dispatch_queue_t queue);

The behavior is exactly the same as dispatch_assert_queue, with the additional check that the current block acts as a barrier on the specified queue, which is always true if the specified queue is serial (see DISPATCH_BLOCK_BARRIER or dispatch_barrier_async).

Queue: Scheduling queue in which the current block should run as a barrier. The result of passing NULL in this parameter is indeterminate.

dispatch_assert_queue_not

dispatch_assert_queue_not verifies that the current block is not executing on the given dispatch queue.

API_AVAILABLE(macos(10.12), ios(10.0), tvos(10.0), watchos(3.0))
DISPATCH_EXPORT DISPATCH_NONNULL1
void
dispatch_assert_queue_not(dispatch_queue_t queue)
        DISPATCH_ALIAS_V2(dispatch_assert_queue_not);

Equivalent to dispatch_assert_queue, but with the test inverted: it terminates the application in the cases where dispatch_assert_queue would return, and vice versa.

Queue: Scheduling queue on which the current block should not be running. The result of passing NULL in this parameter is indeterminate.

dispatch_assert_queue_debug, dispatch_assert_queue_barrier_debug, and dispatch_assert_queue_not_debug are available only in DEBUG mode.

The <dispatch/queue.h> file contains a lot of content; working through it takes patience. ⛽️⛽️

Reference links 🔗

  • swift-corelibs-libdispatch-main
  • Dispatch official document
  • iOS libdispatch analyses
  • GCD – Baidu Encyclopedia entry
  • iOS multithreading: an exhaustive summary of “GCD”
  • iOS basic learning – multi-threaded GCD exploration
  • Types in GCD
  • iOS Objective-C GCD queue
  • Perverted libDispatch structure analysis – object structure