The GCD definition

Grand Central Dispatch (GCD) is one of the technologies for executing tasks asynchronously. Thread management, which applications would otherwise have to implement in code, is implemented at the system level. All the developer needs to do is define the tasks to execute and append them to the appropriate Dispatch Queue; GCD generates the necessary threads and schedules the execution of the tasks. Because thread management is implemented as part of the system, threads can be managed uniformly and tasks executed efficiently, more so than with previous threading APIs.

In other words, GCD implements extremely tedious multithreaded programming with a very concise description.

dispatch_async(queue, ^{
    /*
     * Long processing,
     * such as database access, AR image recognition, etc.
     */

    /*
     * When the long processing ends, the main thread uses its result.
     */
    dispatch_async(dispatch_get_main_queue(), ^{
        /*
         * Processing that can only be performed on the main thread,
         * such as updating the user interface.
         */
    });
});

dispatch_async(queue, ^{ ... }) specifies that the processing is performed on a background thread.

dispatch_async(dispatch_get_main_queue(), ^{ ... }) specifies that the processing is performed on the main thread.

Before GCD was introduced, the Cocoa framework provided simple multithreaded programming techniques through NSObject instance methods such as performSelectorInBackground:withObject: and performSelectorOnMainThread:withObject:waitUntilDone:.

For example, the previous GCD code can be implemented without GCD, using the performSelector family of methods:

/*
 * Perform processing on a background thread with
 * NSObject's performSelectorInBackground:withObject: method.
 */
- (void)launchThreadByNSObject_performSelectorInBackground_withObject
{
    [self performSelectorInBackground:@selector(doWork) withObject:nil];
}

- (void)doWork
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    /*
     * Long processing,
     * such as database access, AR image recognition, etc.
     */

    /*
     * When the long processing ends, the main thread uses its result.
     */
    [self performSelectorOnMainThread:@selector(doneWork)
                           withObject:nil
                        waitUntilDone:NO];
    [pool drain];
}

/*
 * Processing on the main thread.
 */
- (void)doneWork
{
    /*
     * Processing that can only be performed on the main thread,
     * such as updating the user interface.
     */
}

Multithreaded programming

A piece of source code is converted by the compiler into CPU instructions (binary code). These instructions and the associated data are assembled into an application, which is installed on a Mac or iPhone. When the user launches the application, OS X or iOS first loads the CPU instructions contained in the application into memory. The CPU then executes the instructions one by one, starting at the application's entry address.

With control statements and function calls, such as Objective-C's if and for statements, execution may jump to an instruction address far from the current location. Even so, because one CPU can execute only one instruction at a time, it cannot execute two separated instructions in parallel: the sequence of instructions executed by a CPU is like a road that never forks, and its execution never diverges.

This "unbranching path of CPU instructions executed by one CPU" is what we call a thread.

A single physical CPU chip today may actually contain many CPU cores (for example, 64 cores), and one CPU core may even work virtually as two. This is what it means for a computer to use more than one CPU core. Nevertheless, the concept of a thread remains the same.

When there is more than one of these unbranching paths, that is multithreading. In multithreading, one CPU core switches among and executes the instructions of multiple paths.

Although CPU technology continues to advance, one CPU core can always execute one CPU command at a time.

The XNU kernel, the core of OS X and iOS, switches execution paths when operating system events occur, for example each time a system call is invoked. The state of the current path, such as the CPU register values, is saved to a memory block dedicated to that path; then the CPU register values of the target path are restored from its dedicated memory block, and execution continues along that path's instructions. This is called a "context switch".

Because a multithreaded application switches context from one thread to another many times, it looks as if one CPU core can execute multiple threads in parallel. On a machine with multiple CPU cores it does not merely look that way: multiple threads really do execute in parallel across the cores. Programming that uses this multithreading technique is called multithreaded programming.

However, multithreaded programming is a technique prone to all kinds of problems. For example, multiple threads updating the same resource can lead to data inconsistency (data races); threads stopping to wait on one another can cause them to wait forever (deadlock); and using too many threads consumes a lot of memory.

Even though it is prone to these problems, multithreaded programming should be used, because it keeps an application responsive.

When an application starts, the thread that executes first, the "main thread", draws the user interface and handles events such as screen touches. If long processing, such as AR image recognition or database access, is performed on the main thread, that processing blocks it. In OS X and iOS applications, this prevents the main loop, called the RunLoop, from running on the main thread, so the user interface cannot be updated and the application's screen freezes for a long time.

This is why long processing is not executed in the main thread but in other threads.

By using multithreaded programming, the responsiveness of the user interface can be maintained even during long processing.


1.Dispatch Queue

Apple's official description of GCD says that all a developer has to do is define the tasks to execute and append them to the appropriate Dispatch Queue.

The code is described as follows:

dispatch_async(queue, ^{
    /*
     * The processing you want to execute.
     */
});

The source code describes "the processing you want to execute" in Block syntax and "appends" it, via the dispatch_async function, to the Dispatch Queue held in the variable queue. This alone is enough to have the Block executed on another thread.

A Dispatch Queue is a queue that holds processing waiting to be executed. Through APIs such as the dispatch_async function, the desired processing is described in Block syntax and appended to a Dispatch Queue. The Dispatch Queue executes the processing in the order it was appended (First In First Out, FIFO).

In addition, there are two kinds of Dispatch Queue with respect to execution: the Serial Dispatch Queue, which waits for the currently executing processing to finish, and the Concurrent Dispatch Queue, which does not.

Type of Dispatch Queue / Description
Serial Dispatch Queue / Waits for the ongoing processing to finish
Concurrent Dispatch Queue / Does not wait for the ongoing processing to finish

Serial Dispatch Queue – The thread pool provides only one thread to execute a task, so the latter task must wait until the previous task finishes.

Concurrent Dispatch Queue — The thread pool provides multiple threads to execute tasks, so multiple tasks can be started in sequence for Concurrent execution.
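The difference can be sketched as follows (the queue names here are illustrative, not from the original text):

```objectivec
dispatch_queue_t serialQueue =
    dispatch_queue_create("com.example.gcd.serial", NULL);
dispatch_queue_t concurrentQueue =
    dispatch_queue_create("com.example.gcd.concurrent", DISPATCH_QUEUE_CONCURRENT);

// On the serial queue, blk0..blk2 always execute one at a time, in order.
dispatch_async(serialQueue, ^{ NSLog(@"serial blk0"); });
dispatch_async(serialQueue, ^{ NSLog(@"serial blk1"); });
dispatch_async(serialQueue, ^{ NSLog(@"serial blk2"); });

// On the concurrent queue, blk0..blk2 start in order but may run in
// parallel, so their completion order is not guaranteed.
dispatch_async(concurrentQueue, ^{ NSLog(@"concurrent blk0"); });
dispatch_async(concurrentQueue, ^{ NSLog(@"concurrent blk1"); });
dispatch_async(concurrentQueue, ^{ NSLog(@"concurrent blk2"); });
```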

This allows multiple processes to be executed in parallel without waiting for processing to end, but the number of parallel processes depends on the current state of the system. That is, iOS and OS X determine the number of Concurrent processes in the Concurrent Dispatch Queue based on the current system status such as the number of processes in the Dispatch Queue, number of CPU cores, and CPU load.

The XNU kernel, the core of iOS and OS X, determines how many threads should be used and generates only as many as needed to perform processing. In addition, the XNU kernel terminates threads that are no longer needed when processing ends and the number of processes that should be executed is reduced. The XNU kernel perfectly manages and executes multiple processing threads using only a Concurrent Dispatch Queue.

For example, suppose a Concurrent Dispatch Queue uses four threads to execute processing in parallel.

Thread 0 Thread 1 Thread 2 Thread 3
blk0 blk1 blk2 blk3
blk4 blk6 blk5

When processing is executed in a Concurrent Dispatch Queue like this, the execution order changes depending on the processing content and the state of the system. This differs from a Serial Dispatch Queue, which always executes in a fixed order. Use a Serial Dispatch Queue when the order of execution must not change or when you do not want multiple processes to execute in parallel.


2.dispatch_queue_create

A Dispatch Queue is generated with GCD's dispatch_queue_create function.

The Serial Dispatch Queue generates the following code:

dispatch_queue_t mySerialDispatchQueue = dispatch_queue_create("com.example.gcd.MySerialDispatchQueue",NULL);

Precautions for the number of Serial Dispatch queues:

Whereas a Concurrent Dispatch Queue executes multiple appended processes in parallel, a Serial Dispatch Queue executes only one appended process at a time. Although both kinds are limited by system resources, any number of Dispatch Queues can be generated with the dispatch_queue_create function.

When multiple Serial Dispatch Queues are generated, the queues themselves execute in parallel with one another. Only one process can execute at a time within one Serial Dispatch Queue, but if one process is appended to each of four Serial Dispatch Queues, four processes execute simultaneously.
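That behavior can be sketched like this (the queue labels are illustrative):

```objectivec
// Each serial queue executes one block at a time, but the four
// queues themselves run in parallel with one another, so up to
// four blocks can execute simultaneously.
for (int i = 0; i < 4; i++) {
    NSString *label =
        [NSString stringWithFormat:@"com.example.gcd.serial%d", i];
    dispatch_queue_t queue =
        dispatch_queue_create([label UTF8String], NULL);
    dispatch_async(queue, ^{
        NSLog(@"block on serial queue %d", i);
    });
}
```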

If too many threads are used, a lot of memory is consumed and frequent context switching greatly reduces the responsiveness of the system. So use a Serial Dispatch Queue only to avoid one of the problems of multithreaded programming: data races caused by multiple threads updating the same resource.

Use a Concurrent Dispatch Queue when you want to execute processing in parallel and problems such as data races cannot occur. With a Concurrent Dispatch Queue, no matter how many are generated, the problem of generating many Serial Dispatch Queues does not occur, because the XNU kernel uses only efficiently managed threads.

The first argument of the dispatch_queue_create function specifies the name of the Dispatch Queue. A reverse-order FQDN (fully qualified domain name), like an application's bundle identifier, is recommended. This name appears as the Dispatch Queue name in the Xcode and Instruments debuggers, and also in the CrashLog generated if the application crashes.

When generating a Serial Dispatch Queue, specify NULL as the second argument; when generating a Concurrent Dispatch Queue, specify DISPATCH_QUEUE_CONCURRENT.

The return value of dispatch_queue_create is dispatch_queue_t for Dispatch Queue.

dispatch_queue_t  myConcurrentDispatchQueue = dispatch_queue_create("com.example.gcd.MyConcurrentDispatchQueue",DISPATCH_QUEUE_CONCURRENT);

dispatch_async(myConcurrentDispatchQueue, ^{
    NSLog(@"block on myConcurrentDispatchQueue");
});

The above code executes the specified Block on myConcurrentDispatchQueue.

Also note that, unlike a Block, a Dispatch Queue is not handled as an Objective-C object, so a queue generated by dispatch_queue_create must be released manually when no longer needed:

dispatch_release(mySerialDispatchQueue);

Correspondingly, there is also a dispatch_retain function:

dispatch_retain(myConcurrentDispatchQueue);

Like reference counting memory management, the Dispatch Queue is managed by reference counting of dispatch_retain and dispatch_release functions.

In the above example, a Block is appended to the Concurrent Dispatch Queue via dispatch_async. At that moment the Block holds the Dispatch Queue via dispatch_retain; this is true whether the queue is a Serial Dispatch Queue or a Concurrent Dispatch Queue. Once the Block finishes executing, it releases the Dispatch Queue it holds via dispatch_release.

In other words, once a Block has been appended to a Dispatch Queue with dispatch_async, even if the Dispatch Queue is released immediately, it is not destroyed, because the Block holds it and can still execute. When the Block finishes, it releases the Dispatch Queue; at that point no one holds the queue, so it is destroyed.

In addition, for GCD APIs whose names contain "create", the generated object must be released with dispatch_release when it is no longer needed, and can be held with dispatch_retain where needed.

3.Main Dispatch Queue/Global Dispatch Queue

Dispatch Queue provided by system standards: Main Dispatch Queue/Global Dispatch Queue

The Main Dispatch Queue is the Dispatch Queue executed by the Main thread. The Main Dispatch Queue is a Serial Dispatch Queue because there is only one Main thread.

Processing appended to the Main Dispatch Queue is executed in the main thread's RunLoop. Because it executes on the main thread, processing that must be performed there, such as updating the user interface, is appended to the Main Dispatch Queue.

The other Global Dispatch Queue is a Concurrent Dispatch Queue that can be used by all applications. It is not necessary to generate Concurrent Dispatch queues one by one through the dispatch_queue_create function. Just get the Global Dispatch Queue and use it.

The Global Dispatch Queue has four execution priorities: high priority, default priority, low priority, and background priority. Threads for the Global Dispatch Queue managed through the XNU kernel use the execution priority of their respective Global Dispatch Queue as the execution priority of the thread. When you append to the Global Dispatch Queue, you should select the Global Dispatch Queue with the execution priority corresponding to the content to be processed.

However, the threads the XNU kernel uses for the Global Dispatch Queues do not guarantee real-time behavior, so the execution priority is only a rough guide. For example, use the background-priority Global Dispatch Queue for processing whose execution timing hardly matters.

Types of Dispatch Queue:

Name / Type of Dispatch Queue / Description
Main Dispatch Queue / Serial Dispatch Queue / Executed on the main thread
Global Dispatch Queue (High Priority) / Concurrent Dispatch Queue / Execution priority: high
Global Dispatch Queue (Default Priority) / Concurrent Dispatch Queue / Execution priority: default
Global Dispatch Queue (Low Priority) / Concurrent Dispatch Queue / Execution priority: low
Global Dispatch Queue (Background Priority) / Concurrent Dispatch Queue / Execution priority: background

The methods for obtaining various Dispatch queues are as follows:

// How to get the Main Dispatch Queue
dispatch_queue_t mainDispatchQueue = dispatch_get_main_queue();

// How to get the Global Dispatch Queue (High Priority)
dispatch_queue_t globalDispatchQueueHigh =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);

// How to get the Global Dispatch Queue (Default Priority)
dispatch_queue_t globalDispatchQueueDefault =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// How to get the Global Dispatch Queue (Low Priority)
dispatch_queue_t globalDispatchQueueLow =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);

// How to get the Global Dispatch Queue (Background Priority)
dispatch_queue_t globalDispatchQueueBackground =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);

In addition, executing the dispatch_retain and dispatch_release functions on the Main Dispatch Queue or a Global Dispatch Queue causes no change and no problems.

/*
 * Execute a Block on the Global Dispatch Queue (Default Priority).
 */
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    /*
     * Processing executed in parallel.
     */

    /*
     * Execute a Block on the Main Dispatch Queue.
     */
    dispatch_async(dispatch_get_main_queue(), ^{
        /*
         * Processing that can only be performed on the main thread.
         */
    });
});


4.dispatch_set_target_queue

Serial and Concurrent Dispatch Queues generated by the dispatch_queue_create function use threads with the same execution priority as the default-priority Global Dispatch Queue. To change the execution priority of a generated Dispatch Queue, use the dispatch_set_target_queue function.

dispatch_queue_t mySerialDispatchQueue =
    dispatch_queue_create("com.example.gcd.MySerialDispatchQueue", NULL);

// How to get the Global Dispatch Queue (Background Priority)
dispatch_queue_t globalDispatchQueueBackground =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0);

dispatch_set_target_queue(mySerialDispatchQueue, globalDispatchQueueBackground);

Specify the Dispatch Queue whose execution priority you want to change as the first parameter of the dispatch_set_target_queue function, and the Global Dispatch Queue whose priority is the same as the execution priority you want to use as the second parameter (target).

Specifying a Dispatch Queue as the target of the dispatch_set_target_queue function not only changes its execution priority but also builds an execution hierarchy of Dispatch Queues. If dispatch_set_target_queue is executed on multiple Serial Dispatch Queues with a single Serial Dispatch Queue as the target, then those queues, which would otherwise run in parallel with one another, can execute only one process at a time on the target Serial Dispatch Queue.

In cases where non-parallel processing must be appended to multiple Serial Dispatch queues, you can prevent parallel processing by specifying the destination to a particular Serial Dispatch Queue using the dispatch_set_target_queue function.
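A sketch of that technique (the queue names are illustrative):

```objectivec
// Several serial queues whose blocks must never run in parallel
// with one another: funnel them into one target serial queue.
dispatch_queue_t targetQueue =
    dispatch_queue_create("com.example.gcd.target", NULL);

for (int i = 0; i < 3; i++) {
    NSString *label =
        [NSString stringWithFormat:@"com.example.gcd.serial%d", i];
    dispatch_queue_t queue =
        dispatch_queue_create([label UTF8String], NULL);
    dispatch_set_target_queue(queue, targetQueue);
    dispatch_async(queue, ^{
        // Only one of these blocks executes at a time, because
        // they all ultimately run on targetQueue.
        NSLog(@"block on serial queue %d", i);
    });
}
```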


5.dispatch_after

To perform processing after a specified time, use the dispatch_after function.

dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 3ull*NSEC_PER_SEC);

dispatch_after(time, dispatch_get_main_queue(), ^{
    NSLog(@"waited at least three seconds");
});

Note that the dispatch_after function does not execute the processing after the specified time; it merely appends the processing to the Dispatch Queue after the specified time.

Because the Main Dispatch Queue is executed in the main thread's RunLoop, which might iterate, say, every 1/60 of a second, the Block executes at the earliest 3 seconds later and at the latest about 3 + 1/60 seconds later. It will take even longer if a large amount of processing is appended to the Main Dispatch Queue or if the main thread itself is delayed.

The first argument to this function is a value of type dispatch_time_t, which can be made with either the dispatch_time or the dispatch_walltime function.

The first argument of the dispatch_time function is often DISPATCH_TIME_NOW, which represents the present time. The second argument is in nanoseconds; multiplying a value by NSEC_PER_SEC produces that many seconds. The "ull" suffix is a C numeric literal indicating the type unsigned long long. With NSEC_PER_MSEC, times can be specified in milliseconds instead.

The dispatch_walltime function gets a value of type dispatch_time_t from the struct timespec time used in POSIX. It is used to calculate absolute time. This can be used as a crude alarm clock function.

A struct timespec value can easily be made from an NSDate object.

dispatch_time_t getDispatchTimeByDate(NSDate *date)
{
    NSTimeInterval interval;
    double second, subsecond;
    struct timespec time;
    dispatch_time_t milestone;

    interval = [date timeIntervalSince1970];
    subsecond = modf(interval, &second);
    time.tv_sec = second;
    time.tv_nsec = subsecond * NSEC_PER_SEC;
    milestone = dispatch_walltime(&time, 0);
    return milestone;
}
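Combined with dispatch_after, the helper above can schedule a Block at an absolute wall-clock time; a sketch (the 10-second offset is only an illustration):

```objectivec
NSDate *date = [NSDate dateWithTimeIntervalSinceNow:10.0];
dispatch_time_t milestone = getDispatchTimeByDate(date);

dispatch_after(milestone, dispatch_get_main_queue(), ^{
    // Appended to the Main Dispatch Queue at (or shortly after)
    // the wall-clock time represented by date.
    NSLog(@"alarm fired");
});
```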

6.Dispatch Group

It is not uncommon to want to finish processing after multiple processes appended to the Dispatch Queue have all finished. When only one Serial Dispatch Queue is used, you simply append all the processing you want to execute to the Serial Dispatch Queue and append the processing at the end. However, things get complicated when using a Concurrent Dispatch Queue or multiple Dispatch queues at the same time, and Dispatch groups are used.

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group  = dispatch_group_create();
dispatch_group_async(group, queue, ^{ NSLog(@"blk0"); });
dispatch_group_async(group, queue, ^{ NSLog(@"blk1"); });
dispatch_group_async(group, queue, ^{ NSLog(@"blk2"); });

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"done");
});

dispatch_release(group);

Because processing appended to a Global Dispatch Queue (a Concurrent Dispatch Queue) is executed in parallel by multiple threads, the order in which the appended Blocks execute is indeterminate and changes from run to run. However, "done" is guaranteed to be output last.

Whatever Dispatch Queue the processing is appended to, a Dispatch Group can monitor the completion of that processing. When all of it is detected to have finished, the finishing processing can be appended to a Dispatch Queue. This is why Dispatch Groups are used.

The dispatch_group_create function generates a Dispatch Group of type dispatch_group_t. As with dispatch_queue_create, it must be released with dispatch_release when no longer needed.

The dispatch_group_async function appends blocks to a specified Dispatch Queue. Unlike the dispatch_async function, the generated Dispatch Group is specified as the first argument and the specified Block belongs to the specified Dispatch Group.

Also, just as when a Block is appended to a Dispatch Queue, the Block holds the Dispatch Group via the dispatch_retain function, making the Block belong to the Dispatch Group. When the Block finishes executing, it releases the Dispatch Group via dispatch_release. Once you are done with the Dispatch Group, you can release it with dispatch_release without worrying about the Blocks that belong to it.

In the source code, the dispatch_group_notify function specifies as its first argument the Dispatch Group to monitor. When all processing appended to that Dispatch Group has finished, the Block given as the third argument is appended to the Dispatch Queue given as the second argument.

Alternatively, you can use the dispatch_group_wait function within the Dispatch Group to simply wait for all processing execution to complete.

dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW, 1ull * NSEC_PER_SEC);
long result = dispatch_group_wait(group, time);
if (result == 0) {
    // All processing belonging to the Dispatch Group finished.
} else {
    // Some processing belonging to the Dispatch Group is still executing.
}

If the dispatch_group_wait function returns a nonzero value, some processing belonging to the Dispatch Group is still executing even though the specified time has elapsed. If it returns 0, all processing has finished. When the wait time is DISPATCH_TIME_FOREVER, all processing belonging to the Dispatch Group is guaranteed to have finished by the time dispatch_group_wait returns, so the return value is always 0.

Here, "wait" means that once the dispatch_group_wait function is called, it does not return right away. That is, the current thread stops: the thread executing dispatch_group_wait stops until either the specified time elapses or all processing belonging to the specified Dispatch Group has finished.

If you specify DISPATCH_TIME_NOW, you can determine whether a process that belongs to the Dispatch Group is complete without waiting.

long result = dispatch_group_wait(group, DISPATCH_TIME_NOW);

In each RunLoop iteration of the main thread, you could check this way whether execution has finished, without wasting wait time. In general, however, it is recommended to use dispatch_group_notify to append the finishing processing to the Main Dispatch Queue instead, because the code is simpler.


7.dispatch_barrier_async

The dispatch_barrier_async function waits until all processing already appended to a Concurrent Dispatch Queue has finished, and then appends the specified processing to that queue. Once the processing appended by dispatch_barrier_async finishes, the Concurrent Dispatch Queue returns to normal operation, and the processing appended after it starts executing.


Efficient database and file access can be achieved using the Concurrent Dispatch Queue and dispatch_barrier_async functions.
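For example, a reader/writer pattern on a shared resource can be sketched like this (the queue name and the doRead/doWrite placeholders are illustrative):

```objectivec
dispatch_queue_t queue =
    dispatch_queue_create("com.example.gcd.ForBarrier", DISPATCH_QUEUE_CONCURRENT);

// Reads may run in parallel with one another.
dispatch_async(queue, ^{ /* doRead(); */ });
dispatch_async(queue, ^{ /* doRead(); */ });

// The write waits for all preceding reads to finish and runs alone;
// reads appended afterwards wait until it has finished.
dispatch_barrier_async(queue, ^{ /* doWrite(); */ });

dispatch_async(queue, ^{ /* doRead(); */ });
dispatch_async(queue, ^{ /* doRead(); */ });
```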


8.dispatch_sync

The dispatch_async function appends the specified Block to the specified Dispatch Queue asynchronously; "async" means the function does not wait for the appended Block to execute.

The dispatch_sync function, in contrast, appends the specified Block to the specified Dispatch Queue synchronously: dispatch_sync waits until the appended Block has finished executing.

Because a call to dispatch_sync does not return until the specified processing is complete, careless use can cause deadlocks.
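A sketch of the classic deadlock (do not actually run this on the main thread):

```objectivec
// Executed on the main thread, this deadlocks: dispatch_sync waits
// for the Block to finish, but the Block can only run on the main
// thread, which is itself blocked inside dispatch_sync.
dispatch_sync(dispatch_get_main_queue(), ^{
    NSLog(@"never reached");
});
```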


9.dispatch_apply

The dispatch_apply function is an API related to dispatch_sync and Dispatch Queues. It appends the specified Block to the specified Dispatch Queue a specified number of times and waits for all of that processing to finish.

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(10, queue, ^(size_t index) {
    NSLog(@"%zu", index);
});
NSLog(@"done");

The execution result is as follows:

ImageOrientation[14610:4954147] 4
2017-12-16 20:05:39.811031+0800 ImageOrientation[14610:4954149] 2
2017-12-16 20:05:39.811060+0800 ImageOrientation[14610:4954148] 1
2017-12-16 ImageOrientation[14610:495445] 0
2017-12-16 20:05:39.812820+0800 ImageOrientation[14610:4954149] 4
2017-12-16 20:05:39.812878+0800 ImageOrientation[14610:4954147] 5
2017-12-16 20:05:39.812891+0800 ImageOrientation[14610:4954148] 6
2017-12-16 20:05:39.813257+0800 ImageOrientation[14610:4954149] 7
2017-12-16 ImageOrientation[14610:4954147] 8
2017-12-16 20:05:39.813312+0800 ImageOrientation[14610:4954148] 9
2017-12-16 20:05:39.814222+0800 ImageOrientation[14610:4954045] done

Copy the code

Because the processing is executed in a Global Dispatch Queue, the execution time of each process is indeterminate. However, "done" is always the last line of output, because the dispatch_apply function waits for all processing to finish.

The first argument is the repeat count, the second is the target Dispatch Queue, and the third is the processing to append. The Block of the third argument takes an argument so that the Blocks, which are appended repeatedly according to the first argument, can be distinguished from one another.

Such as:

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply([array count], queue, ^(size_t index) {
    NSLog(@"%zu:%@", index, [array objectAtIndex:index]);
});

Since the dispatch_apply function, like dispatch_sync, waits for all processing to finish, it is recommended to execute dispatch_apply inside a dispatch_async call so the current thread is not blocked:

dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// Execute asynchronously on a Global Dispatch Queue.
dispatch_async(queue, ^{
    // Wait here until all dispatch_apply processing finishes.
    dispatch_apply([array count], queue, ^(size_t index) {
        // Process all elements of the NSArray object in parallel.
        NSLog(@"%zu:%@", index, [array objectAtIndex:index]);
    });

    // All processing in dispatch_apply has finished.
    dispatch_async(dispatch_get_main_queue(), ^{
        // Executed on the Main Dispatch Queue, e.g. to update the UI.
        NSLog(@"done");
    });
});


10.dispatch_suspend/dispatch_resume

The dispatch_suspend function suspends the specified Dispatch Queue:

dispatch_suspend(queue);

The dispatch_resume function resumes the specified Dispatch Queue:

dispatch_resume(queue);

These functions have no effect on processing that has already begun executing. After suspension, processing that has been appended to the Dispatch Queue but has not yet begun stops executing; resuming allows that processing to continue.
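A small sketch (the queue name is illustrative):

```objectivec
dispatch_queue_t queue =
    dispatch_queue_create("com.example.gcd.suspendable", NULL);

dispatch_suspend(queue);

// Appended while suspended: the block is queued but does not
// start executing.
dispatch_async(queue, ^{
    NSLog(@"runs only after dispatch_resume");
});

// ... later ...
dispatch_resume(queue);   // The pending block now executes.
```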

11.Dispatch Semaphore

A Dispatch Semaphore is a semaphore that holds a count; it is the counting-semaphore technique from multithreaded programming. A semaphore is like the flag used at a crossing: when the flag is up you may pass, and when it is down you must wait. A Dispatch Semaphore realizes this with a count: when the count is 0, wait; when the count is 1 or more, decrement it by 1 and proceed without waiting.

dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);

The argument represents the initial value of the count. This example initializes the count to 1.


The dispatch_semaphore_wait function waits until the count of the Dispatch Semaphore is greater than or equal to 1. When the count is greater than or equal to 1, or becomes greater than or equal to 1 while waiting, it decrements the count by 1 and returns. The second argument is the wait time, of type dispatch_time_t.

dispatch_time_t time = dispatch_time(DISPATCH_TIME_NOW,1ull*NSEC_PER_SEC);

long result = dispatch_semaphore_wait(semaphore, time);

if (result == 0) {
    // Because the count of the Dispatch Semaphore was greater than
    // or equal to 1 (possibly after waiting within the specified
    // time), the count has been decremented by 1.
    // Processing that needs exclusive control can execute here.
} else {
    // Because the count of the Dispatch Semaphore stayed at 0,
    // we waited until the specified time elapsed.
}
dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

/*
 * Generate a Dispatch Semaphore with an initial count of 1,
 * guaranteeing that only one thread at a time can access the
 * NSMutableArray object.
 */
dispatch_semaphore_t semaphore = dispatch_semaphore_create(1);

NSMutableArray *array = [[NSMutableArray alloc] init];

for (int i = 0; i < 100000; i++) {
    dispatch_async(queue, ^{
        /*
         * Wait until the count of the Dispatch Semaphore is
         * greater than or equal to 1.
         */
        dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER);

        /*
         * The count has been decremented to 0, so only one thread
         * is here at a time: accessing the NSMutableArray object
         * is safe.
         */
        [array addObject:[NSNumber numberWithInt:i]];

        /*
         * Increment the count by 1. If threads are waiting in
         * dispatch_semaphore_wait, the one that waited first
         * proceeds.
         */
        dispatch_semaphore_signal(semaphore);
    });
}

// When finished with the semaphore, release it.
dispatch_release(semaphore);


12.dispatch_once

The dispatch_once function is an API that guarantees the specified processing is executed only once during the application's lifetime. Common initialization code like the following:

static int initialized = NO;
if (initialized == NO) {
    /*
     * Initialization.
     */
    initialized = YES;
}

Can be written as:

static dispatch_once_t pred;
dispatch_once(&pred, ^{
    /*
     * Initialization.
     */
});
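This is why dispatch_once is commonly used to generate singletons safely even in multithreaded code; a sketch (the MyManager class name is illustrative):

```objectivec
+ (instancetype)sharedManager
{
    static MyManager *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // Executed exactly once, even if sharedManager is called
        // simultaneously from multiple threads.
        shared = [[MyManager alloc] init];
    });
    return shared;
}
```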

13.Dispatch I/O

Dispatch I/O is often used to split a large file into chunks and read them in parallel using Global Dispatch Queues, improving read speed.
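A minimal sketch of reading a file with Dispatch I/O (the path and chunk size are illustrative; error handling is omitted):

```objectivec
dispatch_queue_t queue =
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

// Create a random-access Dispatch I/O channel for the file.
dispatch_io_t io = dispatch_io_create_with_path(
    DISPATCH_IO_RANDOM, "/var/log/system.log", O_RDONLY, 0, queue,
    ^(int error) { /* cleanup when the channel is closed */ });

// Deliver data in chunks of at most 1 MB at a time.
dispatch_io_set_low_water(io, 1024 * 1024);

dispatch_io_read(io, 0, SIZE_MAX, queue,
    ^(bool done, dispatch_data_t data, int error) {
        if (error == 0 && data != NULL) {
            // Each invocation receives one chunk of the file,
            // possibly delivered out of order.
        }
        if (done) {
            dispatch_io_close(io, 0);
        }
    });
```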