What is GCD?

Grand Central Dispatch, or GCD, is a set of low-level APIs that provide a new way to do concurrent programming. In terms of basic functionality, GCD is a bit like NSOperationQueue: both allow a program to split its work into individual tasks and submit them to queues for concurrent or serial execution. GCD is lower-level and more efficient than NSOperationQueue, and it is not part of the Cocoa frameworks.

In addition to the parallel execution of code, GCD provides a tightly integrated event-handling system. Handlers can be set up to respond to file descriptors, Mach ports (Mach ports are used for interprocess communication on OS X), processes, timers, signals, and user-generated events. These handlers are executed concurrently via GCD.

GCD’s API is largely based on blocks, although GCD can also be used without blocks, for example by using the traditional C mechanism of providing a function pointer and a context pointer. In practice, GCD is easiest to use, and works at its best, when used together with blocks.
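As an illustration of the non-block style, GCD’s function-pointer variants carry an _f suffix. Here is a minimal sketch using dispatch_async_f, where my_work and the context string are hypothetical names invented for the example:

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

// The context pointer passed to dispatch_async_f is handed
// through unchanged to the work function.
static void my_work(void *context)
{
    printf("working on: %s\n", (const char *)context);
}

// Submit my_work to a global queue without using a block.
dispatch_async_f(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
                 (void *)"some context", my_work);
```

The block-based dispatch_async shown later in this article is usually more convenient, since a block captures its context automatically.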

You can type “man dispatch” on your Mac to get the GCD documentation.

Why?

GCD offers many advantages over traditional multithreaded programming:

Easy to use: GCD is easier to use than threads. Because GCD is based on units of work rather than units of computation like threads, GCD can take care of tasks such as waiting for work to finish, monitoring file descriptors, executing code periodically, and suspending work. The block-based API makes it extremely easy to pass context between different code scopes.

Efficiency: GCD is implemented so lightly and elegantly that in many places it is more practical and faster than creating dedicated, resource-consuming threads. This ties into ease of use: part of what makes GCD easy to use is that you can just use it without worrying too much about efficiency.

Performance: GCD automatically increases or decreases the number of threads based on system load, which reduces context switching and increases computational efficiency.

Dispatch Objects

Although GCD is pure C, it is built in an object-oriented style. A GCD object is called a dispatch object. Dispatch objects, like Cocoa objects, are reference-counted: use the dispatch_release and dispatch_retain functions to manipulate a dispatch object’s reference count for memory management. But unlike Cocoa objects, dispatch objects do not participate in the garbage collection system, so even if GC is enabled, you have to manage the memory of GCD objects manually.

Dispatch queues and dispatch sources (described below) can be suspended and resumed, can have an associated arbitrary context pointer, and can have an associated finalizer function that is triggered on destruction. See “man dispatch_object” for more information on these features.
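The context pointer and finalizer can be sketched roughly as follows, assuming a hypothetical MyContextCreate/MyContextFree pair for the context data:

```objc
// Attach an arbitrary context pointer to a queue, plus a finalizer
// that GCD calls with that pointer once the queue's reference count
// drops to zero.
dispatch_queue_t queue = dispatch_queue_create("com.example.ctx", NULL);

MyContext *ctx = MyContextCreate();   // hypothetical context object
dispatch_set_context(queue, ctx);
dispatch_set_finalizer_f(queue, (dispatch_function_t)MyContextFree);

// ... later, inside work submitted to the queue ...
// MyContext *ctx = dispatch_get_context(queue);

dispatch_release(queue);  // finalizer runs after all references are gone
```

This gives dispatch objects a lightweight way to carry per-queue state with automatic cleanup, without any Objective-C object wrapping.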

Dispatch Queues

The fundamental concept in GCD is the dispatch queue: an object that accepts tasks and executes them in first-in, first-out order. Dispatch queues can be concurrent or serial. A concurrent queue runs its tasks with a degree of concurrency appropriate to the system load, much like NSOperationQueue, while a serial queue executes only a single task at a time.

There are three queue types in GCD:

The main queue: serves the same role as the main thread. In fact, tasks submitted to the main queue are executed on the main thread. The main queue is obtained by calling dispatch_get_main_queue(). Since the main queue is tied to the main thread, it is a serial queue.

Global queues: concurrent queues shared by the entire process. The process has three of them: a high-, a default-, and a low-priority queue. You access them by passing the desired priority to the dispatch_get_global_queue function.

User queues: queues created with the function dispatch_queue_create (GCD doesn’t call them that, but it has no specific name for them, so we’ll call them user queues). These queues are serial. Because of this, they can be used as a synchronization mechanism, somewhat like a mutex in traditional threaded code.

Create a queue

To use a user queue, we first have to create one by calling the function dispatch_queue_create. The first argument is a label, which exists purely for debugging. Apple suggests using reverse-DNS naming for queues, such as “com.dreamingwish.subsystem.task”. These names show up in crash logs and can be queried from the debugger, which can be useful when debugging. The second argument is not currently supported, so just pass NULL.
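Putting that together, creating and eventually releasing a user queue looks like this (the label is the example name from above; remember that dispatch objects are reference-counted):

```objc
// Create a serial user queue; the reverse-DNS label shows up in
// crash logs and can be queried from the debugger.
dispatch_queue_t queue =
    dispatch_queue_create("com.dreamingwish.subsystem.task", NULL);

// ... submit work to the queue ...

// Balance the create with a release when you are done with it.
dispatch_release(queue);
```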

Submit the Job

Submitting a job to a queue is simple: call the dispatch_async function, passing in a queue and a block. The queue will execute the block’s code when the block’s turn comes. Here is an example that performs a very long task in the background:

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [self goDoSomethingLongAndInvolved];
    NSLog(@"Done doing something long and involved");
});

The dispatch_async function returns immediately and the block is executed asynchronously in the background.

Of course, simply logging a message when the task is done usually isn’t very useful. In a typical Cocoa application, you will most likely want to update the interface when the task completes, which means executing code on the main thread. You can accomplish this simply with nested dispatches: perform the background work in the outer block, and dispatch the interface update to the main queue in the inner block:

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [self goDoSomethingLongAndInvolved];
    dispatch_async(dispatch_get_main_queue(), ^{
        [textField setStringValue:@"Done doing something long and involved"];
    });
});

There is also a function called dispatch_sync, which does the same thing as dispatch_async but waits for the code in the block to finish before returning. Combined with the __block type modifier, it can be used to get a value out of the executing block. For example, code executing in the background might need to fetch a value from an interface control. You can do this simply with dispatch_sync:

__block NSString *stringValue;
dispatch_sync(dispatch_get_main_queue(), ^{
    // __block variables aren't automatically retained,
    // so we'd better make sure we have a reference we can keep
    stringValue = [[textField stringValue] copy];
});
[stringValue autorelease];
// use stringValue in the background now

There’s a better way to do this, in a more “asynchronous” style. Instead of blocking the background thread while fetching a value from the interface layer, you can use nested blocks: let the background block end, fetch the value on the main thread, and then submit the follow-up processing back to the background queue:

dispatch_queue_t bgQueue = myQueue;
dispatch_async(dispatch_get_main_queue(), ^{
    NSString *stringValue = [[[textField stringValue] copy] autorelease];
    dispatch_async(bgQueue, ^{
        // use stringValue in the background now
    });
});

Depending on your needs, myQueue can be a user queue or a global queue.

No more locks

User queues can be used in place of locks as a synchronization mechanism. In traditional multithreaded programming, you might have an object used by multiple threads, protected by a lock:

NSLock *lock;

The access code would look like this:

- (id)something
{
    id localSomething;
    [lock lock];
    localSomething = [[something retain] autorelease];
    [lock unlock];
    return localSomething;
}

- (void)setSomething:(id)newSomething
{
    [lock lock];
    if(newSomething != something)
    {
        [something release];
        something = [newSomething retain];
        [self updateSomethingCaches];
    }
    [lock unlock];
}

To use GCD instead, replace the lock with a queue:

dispatch_queue_t queue;

To be used for synchronization, the queue must be a user queue, not a global queue, so initialize one with dispatch_queue_create. You can then wrap the code that accesses the shared data in dispatch_async or dispatch_sync:

- (id)something
{
    __block id localSomething;
    dispatch_sync(queue, ^{
        localSomething = [something retain];
    });
    return [localSomething autorelease];
}

- (void)setSomething:(id)newSomething
{
    dispatch_async(queue, ^{
        if(newSomething != something)
        {
            [something release];
            something = [newSomething retain];
            [self updateSomethingCaches];
        }
    });
}

It’s worth noting that dispatch queues are very lightweight, so you can use them as liberally as you used to use locks.

Now you might be asking, “That’s all well and good, but what’s the point? I just changed some code to do the same thing.”

Actually, there are several benefits to using the GCD approach:

Parallel computation: Notice how -setSomething: uses dispatch_async in the second version of the code. Calling -setSomething: returns immediately, and all of the work is done in the background. This is great if -updateSomethingCaches is time-consuming and the caller is about to perform processor-heavy work of its own.

Safety: With GCD, it is impossible to accidentally write code with unbalanced locks. In ordinary lock-based code, we might accidentally return before unlocking; with GCD, the queue simply keeps running and control is always given back.

Control: With GCD we can suspend and resume dispatch queues in a way that lock-based code cannot. We can also target a user queue at another dispatch queue, so that the user queue inherits that queue’s properties. With this approach, the queue’s priority can be adjusted by targeting it at a different global queue, and it can even be used to execute code on the main thread, if necessary.

Integration: GCD’s event system is integrated with dispatch queues. Any events or timers an object needs can be targeted at that object’s queue, so the handlers automatically execute on that queue and are therefore automatically synchronized with the object.
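The queue-targeting ability mentioned above can be sketched with dispatch_set_target_queue; the queue label here is a hypothetical example:

```objc
// Retarget a user queue at the low-priority global queue. Blocks
// submitted to it still run one at a time (it stays serial), but
// they now execute at low priority.
dispatch_queue_t queue =
    dispatch_queue_create("com.example.background", NULL);
dispatch_set_target_queue(queue,
    dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0));
```

Targeting the queue at dispatch_get_main_queue() instead would make its blocks execute on the main thread, which is the trick alluded to above.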

Conclusion

Now you know the basics of GCD, how to create a Dispatch queue, how to submit jobs to the Dispatch Queue, and how to use queues for thread synchronization. Next I’ll show you how to use GCD to write parallel execution code to take full advantage of multi-core systems. I’ll also discuss deeper aspects of GCD, including the event system and queue targeting.
