Newer technologies such as operation objects and Grand Central Dispatch (GCD) provide a more modern and efficient infrastructure for implementing concurrency.

One technique for starting tasks asynchronously is Grand Central Dispatch (GCD). GCD takes the thread-management code you would normally write in your own applications and pushes it down to the system level. All you have to do is define the tasks you want to execute and add them to an appropriate dispatch queue. GCD takes care of creating the needed threads and of scheduling your tasks to run on those threads. Because thread management is now part of the system, GCD provides a holistic approach to task management and execution that is more efficient than traditional threads.

Operation queues are Objective-C objects that behave much like dispatch queues. You define the tasks you want to execute and then add them to an operation queue, which handles the scheduling and execution of those tasks. As with GCD, operation queues handle all of the thread management for you, ensuring that tasks are executed as quickly and efficiently as possible on the system.

Dispatch Queues

Dispatch queues are a C-based mechanism for executing custom tasks. A dispatch queue executes tasks either serially or concurrently, but always in first-in, first-out order. (In other words, a dispatch queue always dequeues and starts tasks in the same order in which they were added to the queue.) A serial dispatch queue runs only one task at a time, waiting until that task is complete before dequeuing and starting a new one. By contrast, a concurrent dispatch queue starts as many tasks as it can without waiting for already started tasks to finish. Dispatch queues have other benefits:

  • They provide a straightforward programming interface.
  • They provide automatic global thread pool management.
  • They are more memory efficient (because thread stacks do not linger in application memory).
  • Asynchronously dispatching tasks to a dispatch queue cannot deadlock the queue.
  • Serial dispatch queues offer a more efficient alternative to locks and other synchronization primitives.

The tasks you submit to a dispatch queue must be encapsulated inside either a function or a block object (see Blocks Programming Topics).

Dispatch Sources

Dispatch sources are a C-based mechanism for processing specific types of system events asynchronously. A dispatch source encapsulates information about a particular type of system event and submits a specific block object or function to a dispatch queue whenever that event occurs. You can use dispatch sources to monitor the following types of system events:

  • Timers
  • Signal handlers
  • Descriptor-related events
  • Process-related events
  • Mach port events
  • Custom events that you trigger

Operation Queues

An operation queue is the Cocoa equivalent of a concurrent dispatch queue and is implemented by the NSOperationQueue class. Whereas dispatch queues always execute tasks in first-in, first-out order, operation queues take other factors into account when determining the execution order of tasks. Primary among these factors is whether a given task depends on the completion of other tasks. You configure dependencies when defining your tasks, and you can use them to create complex execution-order graphs for your tasks.
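
NSOperationQueue and its dependency API are Objective-C; as a loose C analogue of the same "run this only after those finish" idea, a dispatch group can gate one task on the completion of several others (a sketch assuming libdispatch is available; it is not the NSOperation mechanism itself):

```c
#include <dispatch/dispatch.h>
#include <stdio.h>

static int part1_done, part2_done;
static dispatch_semaphore_t done;

static void load_part1(void *ctx) { part1_done = 1; }
static void load_part2(void *ctx) { part2_done = 1; }

/* Runs only after every task in the group has finished —
   analogous to an operation that depends on two others. */
static void merge(void *ctx) {
    printf("merged: part1=%d part2=%d\n", part1_done, part2_done);
    dispatch_semaphore_signal(done);
}

int main(void) {
    done = dispatch_semaphore_create(0);
    dispatch_queue_t q =
        dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

    dispatch_group_t g = dispatch_group_create();
    dispatch_group_async_f(g, q, NULL, load_part1);
    dispatch_group_async_f(g, q, NULL, load_part2);
    /* merge "depends on" both loads completing. */
    dispatch_group_notify_f(g, q, NULL, merge);

    dispatch_semaphore_wait(done, DISPATCH_TIME_FOREVER);
    dispatch_release(g);
    return 0;
}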

The tasks you submit to an operation queue must be instances of the NSOperation class. An operation object is an Objective-C object that encapsulates the work you want to perform along with any data needed to perform it. Because the NSOperation class is essentially an abstract base class, you typically define custom subclasses to perform your tasks. However, the Foundation framework does include some concrete subclasses that you can create and use as-is to perform tasks.

Operation objects generate key-value observing (KVO) notifications, which can be a useful way of monitoring the progress of your tasks. Although operation queues always execute operations concurrently, you can use dependencies to ensure they are executed serially when needed.

Asynchronous Design Techniques

Before you even consider redesigning your code to support concurrency, you should ask yourself whether doing so is necessary. Concurrency can improve the responsiveness of your code by ensuring that your main thread is free to respond to user events. It can even improve the efficiency of your code by using more CPU cores to do more work in the same amount of time. However, it also adds overhead and overall complexity to your code, making it harder to write and debug. It is therefore worth taking some time at the beginning of your design cycle to set some goals and to think about the approach you need to take.

Efficiency tips

In addition to simply breaking code up into smaller tasks and adding them to queues, there are other ways to use queues to improve the overall efficiency of your code:

  • If memory usage is a factor, consider computing values directly within your task. If your application is already memory constrained, computing values directly may now be faster than loading cached values from main memory. Computing values directly uses the registers and caches of the given processor core, which are much faster than main memory. Of course, do this only if testing indicates it is a performance win.
  • Identify your serial tasks early, and do what you can to make them more concurrent. If a task must be executed serially because it relies on some shared resource, consider changing your architecture to remove that shared resource. You might make copies of the resource for each client that needs one, or eliminate the resource altogether.
  • Avoid using locks. The support provided by dispatch queues and operation queues makes locks unnecessary in most situations. Instead of using locks to protect some shared resource, designate a serial queue (or use operation object dependencies) to execute tasks in the correct order.
  • Rely on the system frameworks wherever possible. The best way to achieve concurrency is to take advantage of the built-in concurrency provided by the system frameworks. Many frameworks use threads and other technologies internally to implement concurrent behaviors. When defining your tasks, look to see if an existing framework defines a function or method that does exactly what you want and does so concurrently. Using that API saves you effort and is more likely to give you the maximum concurrency possible.

Performance impact

Operation queues, dispatch queues, and dispatch sources are provided to make it easier for you to execute more code concurrently. However, these technologies do not guarantee improvements to the efficiency or responsiveness of your application. It is still your responsibility to use queues in a manner that meets your needs without overburdening your application’s other resources. For example, although you could create 10,000 operation objects and submit them to an operation queue, doing so would cause your application to allocate a potentially large amount of memory, which could lead to paging and decreased performance.

Before introducing any amount of concurrency into your code (whether using queues or threads), you should always gather a baseline set of metrics that reflect your application’s current performance. After introducing your changes, you should gather additional metrics and compare them to your baseline to see whether your application’s overall efficiency has improved. If the introduction of concurrency makes your application less efficient or responsive, you should use the available performance tools to examine the underlying causes.

Translated from Apple’s Concurrency Programming Guide.