This post was originally published on my blog.

GCD stands for Grand Central Dispatch and is a multi-core programming solution developed by Apple. It was first introduced in Mac OS X 10.6 Snow Leopard and later brought to iOS 4.0. GCD is an alternative to NSThread, NSOperationQueue, and similar APIs.

Basic GCD usage

// 1. Queue
dispatch_queue_t queue = dispatch_queue_create("Typeco", DISPATCH_QUEUE_CONCURRENT);

// 2. Task: no parameters, no return value
void(^block)(void) = ^{
    NSLog(@"This is a task to be executed.");
};

// 3. Function
dispatch_async(queue, block);

To distinguish tasks from functions, I write the block out separately, which makes the GCD call easier to read. To summarize:

GCD simply adds the specified block (the task) to the specified queue and then executes it through a dispatch function. Tasks are dequeued according to the queue's FIFO principle: first in, first out.

Queues

  • Serial queue

    Tasks in the queue are executed one at a time; each task starts only after the previous one finishes. A custom serial queue is created as follows:

    dispatch_queue_t queue = dispatch_queue_create("Typeco", DISPATCH_QUEUE_SERIAL);


    The first parameter is a label string that identifies the queue; the second identifies the queue type. Here we create a serial queue with DISPATCH_QUEUE_SERIAL.

    #define DISPATCH_QUEUE_SERIAL NULL

    PS: Since DISPATCH_QUEUE_SERIAL is defined as NULL, passing NULL as the second parameter has exactly the same effect.

  • Concurrent queue (Concurrent)

    Multiple tasks are allowed to execute concurrently, in no fixed order; the actual order depends on CPU scheduling, as discussed later in this section.

    dispatch_queue_t queue = dispatch_queue_create("Typeco", DISPATCH_QUEUE_CONCURRENT);


    The call is the same; we just pass DISPATCH_QUEUE_CONCURRENT as the second parameter.

  • System queues

    • dispatch_get_main_queue(): the main queue, a special serial queue that is created automatically when the application starts (before main() is called) and is bound to the main thread.
    • dispatch_get_global_queue(0, 0): the global concurrent queues. We usually pass 0 for both parameters unless we have special requirements. Global concurrent queues can only be fetched, never created.
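    A quick sketch of fetching both system queues (QOS_CLASS_DEFAULT is the documented QoS constant that the shorthand 0 maps to):

    // The main queue: fetched, never created
    dispatch_queue_t mainQueue = dispatch_get_main_queue();

    // A global concurrent queue; (0, 0) means default QoS and no flags
    dispatch_queue_t globalQueue = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);

    dispatch_async(globalQueue, ^{
        // Background work here...
        dispatch_async(mainQueue, ^{
            // ...then hop back to the main thread, e.g. to update UI
        });
    });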

Now that we know what queues are and how tasks added to them are dequeued, let's look at how the dispatch functions affect task execution.

Functions

  • Synchronous function dispatch_sync()

    • Must wait for the current statement to complete before executing the next statement
    • Does not start a new thread; the block is executed on the current thread
    • Execution still follows the FIFO principle of the target queue
  • Asynchronous function dispatch_async()

    • Can proceed to the next statement without waiting for the current one to complete
    • Starts a thread to execute the block's task
    • Asynchronous dispatch is what usually gives you multithreading in GCD
  • Thread and queue relationship

    You can download the Demo to test each combination yourself.

    • Synchronous function (dispatch_sync)

      • Serial queue: no new thread is started; tasks execute on the current thread, one after another, and the current thread is blocked while each task runs.
      • Concurrent queue: no new thread is started; tasks still execute on the current thread, one after another.
      • Main queue: deadlock when called from the main thread; the task is stuck and can never execute.
    • Asynchronous function (dispatch_async)

      • Serial queue: a new thread is started; tasks execute one after another.
      • Concurrent queue: threads are started; tasks execute asynchronously, in no fixed order, depending on CPU scheduling.
      • Main queue: no new thread is started; tasks still execute on the main thread, one after another.

    Thus, threads and queues are not directly related.
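    A minimal sketch to verify the table above (the queue label is arbitrary); logging the current thread shows where each block runs:

    - (void)testFunctions {
        dispatch_queue_t serial = dispatch_queue_create("Typeco.serial", DISPATCH_QUEUE_SERIAL);

        // Synchronous: blocks here, runs on the current (e.g. main) thread
        dispatch_sync(serial, ^{
            NSLog(@"sync on %@", [NSThread currentThread]);
        });

        // Asynchronous: returns immediately, runs on a newly started thread
        dispatch_async(serial, ^{
            NSLog(@"async on %@", [NSThread currentThread]);
        });

        NSLog(@"reached before the async block finishes");
    }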

Low-level implementation of queues and functions

First of all, you can download the GCD (libdispatch) source code and browse it alongside this section.

Queue creation

The first parameter of dispatch_queue_create is a string that identifies the queue, and the second specifies serial or concurrent, so we'll focus on the second parameter:

    /*
     The public initializer in turn calls the core code below.
     */
    dispatch_queue_create("sync_serial", DISPATCH_QUEUE_SERIAL);

    /*
     The function to look at is the following; dqa here is DISPATCH_QUEUE_SERIAL.
     */
    _dispatch_lane_create_with_target(const char *label, dispatch_queue_attr_t dqa,
                                      dispatch_queue_t tq, bool legacy)

    // ... a lot of code omitted here ...

    // Allocate memory and generate the corresponding queue object
    dispatch_lane_t dq = _dispatch_object_alloc(vtable,
                                                sizeof(struct dispatch_lane_s));

    /*
     Constructor; the key point: if dqai_concurrent is set, the queue width is
     DISPATCH_QUEUE_WIDTH_MAX, otherwise 1. In other words, a serial queue has
     width 1, while a concurrent queue's width is effectively unlimited.
     */
    _dispatch_queue_init(dq, dqf, dqai.dqai_concurrent ?
                         DISPATCH_QUEUE_WIDTH_MAX : 1, DISPATCH_QUEUE_ROLE_INNER |
                         (dqai.dqai_inactive ? DISPATCH_QUEUE_INACTIVE : 0));

    // Label
    dq->dq_label = label;
    // Priority
    dq->dq_priority = _dispatch_priority_make((dispatch_qos_t)dqai.dqai_qos,
                                              dqai.dqai_relpri);

    /*
     Retain the target queue. As the API documentation notes, outside of ARC
     you must call dispatch_release on the created queue when you're done with it.
     */
    _dispatch_retain(tq);

    return dq;

This is only an overview of how dispatch_queue_create creates queues and how serial and concurrent queues are distinguished. Please download the source code for the details.
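To see that width difference in practice, here is a minimal sketch (queue labels are arbitrary): the serial queue (width 1) always logs 0, 1, 2 in order, while the concurrent queue may interleave:

    dispatch_queue_t serial = dispatch_queue_create("demo.serial", DISPATCH_QUEUE_SERIAL);
    dispatch_queue_t concurrent = dispatch_queue_create("demo.concurrent", DISPATCH_QUEUE_CONCURRENT);

    for (int i = 0; i < 3; i++) {
        // Width 1: at most one task runs at a time, so order is preserved
        dispatch_async(serial, ^{ NSLog(@"serial %d", i); });
        // Width DISPATCH_QUEUE_WIDTH_MAX: tasks may run in parallel, in any order
        dispatch_async(concurrent, ^{ NSLog(@"concurrent %d", i); });
    }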

Semaphores

Studying semaphores comes down to three methods:

dispatch_semaphore_create

dispatch_semaphore_t
dispatch_semaphore_create(long value)
{
    dispatch_semaphore_t dsema;

    // If the internal value is negative, then the absolute of the value is
    // equal to the number of waiting threads. Therefore it is bogus to
    // initialize the semaphore with a negative value.
    if (value < 0) {
        return DISPATCH_BAD_INPUT;
    }

    dsema = _dispatch_object_alloc(DISPATCH_VTABLE(semaphore),
            sizeof(struct dispatch_semaphore_s));
    dsema->do_next = DISPATCH_OBJECT_LISTLESS;
    dsema->do_targetq = _dispatch_get_default_queue(false);
    dsema->dsema_value = value;
    _dispatch_sema4_init(&dsema->dsema_sema, _DSEMA4_POLICY_FIFO);
    dsema->dsema_orig = value;
    return dsema;
}

Here we initialize the dispatch_semaphore_t and assign the initial value to dsema_value; remember this field, we will need it later in the analysis.

dispatch_semaphore_signal

long
dispatch_semaphore_signal(dispatch_semaphore_t dsema)
{
    long value = os_atomic_inc2o(dsema, dsema_value, release);
    if (likely(value > 0)) {
        return 0;
    }
    if (unlikely(value == LONG_MIN)) {
        DISPATCH_CLIENT_CRASH(value,
                "Unbalanced call to dispatch_semaphore_signal()");
    }
    return _dispatch_semaphore_signal_slow(dsema);
}

os_atomic_inc2o -> os_atomic_add2o(p, f, 1, m) -> os_atomic_add(&(p)->f, (v), m) -> _os_atomic_c11_op((p), (v), m, add, +); that is, os_atomic_inc2o simply performs dsema_value + 1 atomically.
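The macro chain bottoms out in a C11 atomic add. A rough standalone equivalent (the struct and function names here are illustrative, not the real libdispatch definitions):

    #include <stdatomic.h>

    typedef struct { atomic_long dsema_value; } my_sema_s; // illustrative stand-in

    // Roughly what os_atomic_inc2o(dsema, dsema_value, release) expands to:
    static long my_sema_inc(my_sema_s *dsema) {
        // atomic_fetch_add_explicit returns the old value, so add 1 to get the new one
        return atomic_fetch_add_explicit(&dsema->dsema_value, 1,
                                         memory_order_release) + 1;
    }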

dispatch_semaphore_wait

long
dispatch_semaphore_wait(dispatch_semaphore_t dsema, dispatch_time_t timeout)
{
    // value--
    long value = os_atomic_dec2o(dsema, dsema_value, acquire);
    if (likely(value >= 0)) {
        return 0;
    }
    return _dispatch_semaphore_wait_slow(dsema, timeout);
}

os_atomic_dec2o is the mirror image of os_atomic_inc2o: it atomically decrements the semaphore by 1, and the wait returns 0 immediately if the resulting value is greater than or equal to 0.

So all we see in these two methods is the addition and subtraction of a number (the semaphore value). How does that affect our threads at the bottom?

do {
    _dispatch_trace_runtime_event(worker_unpark, dq, 0);
    _dispatch_root_queue_drain(dq, pri, DISPATCH_INVOKE_REDIRECTING_DRAIN);
    _dispatch_reset_priority_and_voucher(pp, NULL);
    _dispatch_trace_runtime_event(worker_park, NULL, 0);
} while (dispatch_semaphore_wait(&pqc->dpq_thread_mediator,
        dispatch_time(0, timeout)) == 0);

I found the above code in queue.c; it is a do...while loop. The worker thread keeps draining the queue (FIFO) as long as dispatch_semaphore_wait returns 0, i.e. as long as a signal's +1 on the semaphore arrives before the timeout; otherwise the wait times out and the loop is broken.

To sum up, we usually use semaphores like this:

- (void)demo {
    dispatch_semaphore_t sema = dispatch_semaphore_create(1);
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        // Time-consuming asynchronous operation
        sleep(5);
        dispatch_semaphore_signal(sema);
    });
}

If thread 1 has performed a wait and brought the semaphore to 0, another thread 2 that also calls wait will block in that do...while loop until thread 1 executes signal and performs the +1 on the semaphore; at that point the loop is broken and thread 2 is allowed to access the resource.
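A minimal sketch of this mutual-exclusion use, with a shared counter as the contended resource (names are illustrative):

    - (void)lockDemo {
        dispatch_semaphore_t lock = dispatch_semaphore_create(1);
        __block NSInteger count = 0; // shared resource
        for (int i = 0; i < 1000; i++) {
            dispatch_async(dispatch_get_global_queue(0, 0), ^{
                dispatch_semaphore_wait(lock, DISPATCH_TIME_FOREVER); // -1; blocks while another thread holds it
                count += 1;                                           // critical section
                dispatch_semaphore_signal(lock);                      // +1; wakes one waiting thread
            });
        }
    }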

Besides protecting a shared resource from simultaneous access by multiple threads, a semaphore can also be used to order tasks, making one asynchronous task wait for another, similar to what the barrier function achieves:

- (void)demo {
    dispatch_semaphore_t sema = dispatch_semaphore_create(0);
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        // Execute method 1
        NSLog(@"Execute method 1");
        sleep(5);
        dispatch_semaphore_signal(sema);
    });
    dispatch_semaphore_wait(sema, DISPATCH_TIME_FOREVER);
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        // Execute method 2
        NSLog(@"Execute method 2");
    });
}

The only difference is that the semaphore is initialized to 0, so the wait blocks immediately; only after the asynchronous block calls signal does execution proceed past the wait to the subsequent task.

Dispatch groups

The following is based on an analysis of the libdispatch source code. The relevant code is quite extensive, so please download it for the details; here I paste as little code as possible and use a flow chart to show the general process:

dispatch_group is a semaphore-based synchronization mechanism. Its core functions are:

  • dispatch_group_enter
  • dispatch_group_leave
  • dispatch_group_wait
  • dispatch_group_async
  • dispatch_group_notify

There are four control flows in the figure:

  • At the top are two parallel dispatch_group_asyncs, which execute asynchronously and implicitly call enter and leave internally
  • In the upper right is a plain enter and leave pair
  • In the lower left corner is wait, which blocks the calling thread until the group's state is balanced
  • In the lower right corner notify has two branches: if the condition is not yet met, the notify block is stored on the queue associated with the group, and its execution is triggered once the state is satisfied

The key question is what wakes up notify. Both the async path and the plain path go through enter and leave, so the wake-up is driven by the pairing of enter and leave. As the API documentation says, enter and leave must appear in pairs; dispatch_group_async pairs them for you internally:

void
dispatch_group_async(dispatch_group_t group, dispatch_queue_t queue, dispatch_block_t block)
{
    dispatch_retain(group);
    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        block();
        dispatch_group_leave(group);
        dispatch_release(group);
    });
}

Therefore, each time leave executes, the control flow checks whether the group's state is balanced; if it is, notify is executed, otherwise the group keeps waiting. Since the execution of leave is what triggers notify, let's focus on the implementation of leave:

void
dispatch_group_leave(dispatch_group_t dg)
{
    // The value is incremented on a 64bits wide atomic so that the carry for
    // the -1 -> 0 transition increments the generation atomically.
    uint64_t new_state, old_state = os_atomic_add_orig2o(dg, dg_state,
            DISPATCH_GROUP_VALUE_INTERVAL, release);
    uint32_t old_value = (uint32_t)(old_state & DISPATCH_GROUP_VALUE_MASK);

    if (unlikely(old_value == DISPATCH_GROUP_VALUE_1)) {
        old_state += DISPATCH_GROUP_VALUE_INTERVAL;
        do {
            new_state = old_state;
            if ((old_state & DISPATCH_GROUP_VALUE_MASK) == 0) {
                new_state &= ~DISPATCH_GROUP_HAS_WAITERS;
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            } else {
                // If the group was entered again since the atomic_add above,
                // we can't clear the waiters bit anymore as we don't know for
                // which generation the waiters are for
                new_state &= ~DISPATCH_GROUP_HAS_NOTIFS;
            }
            if (old_state == new_state) break;
        } while (unlikely(!os_atomic_cmpxchgv2o(dg, dg_state,
                old_state, new_state, &old_state, relaxed)));
        return _dispatch_group_wake(dg, old_state, true);
    }

    if (unlikely(old_value == 0)) {
        DISPATCH_CLIENT_CRASH((uintptr_t)old_value,
                "Unbalanced call to dispatch_group_leave()");
    }
}

Inside is a do...while loop: each iteration computes the desired new state and retries the atomic compare-and-swap until the states match; once the state is balanced, _dispatch_group_wake is called to wake the group.
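That retry structure is the standard compare-and-swap loop. A simplified standalone version (illustrative names, plain C11 atomics instead of libdispatch's macros):

    #include <stdatomic.h>

    // Simplified CAS retry loop mirroring the structure of dispatch_group_leave
    static void clear_flags(atomic_ulong *state, unsigned long flags) {
        unsigned long old_state = atomic_load_explicit(state, memory_order_relaxed);
        unsigned long new_state;
        do {
            new_state = old_state & ~flags; // compute the desired new state
            // On failure, old_state is reloaded with the current value and we retry
        } while (!atomic_compare_exchange_weak_explicit(state, &old_state, new_state,
                                                        memory_order_relaxed,
                                                        memory_order_relaxed));
    }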

Dispatch group usage

  • (Asynchronous request 1 + asynchronous request 2) ==> asynchronous request 3, where request 3 depends on the results of requests 1 and 2

    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
        NSLog(@"Asynchronous request 1");
    });
    dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
        sleep(3);
        NSLog(@"Asynchronous request 2");
    });

    dispatch_group_notify(group, dispatch_get_global_queue(0, 0), ^{
        NSLog(@"1 and 2 have completed, executing asynchronous request 3");
    });

    dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
        NSLog(@"Asynchronous request 4");
    });

    Output:

    GCD_Demo[19428:6087378] Asynchronous request 1
    GCD_Demo[19428:6087375] Asynchronous request 4
    GCD_Demo[19428:6087377] Asynchronous request 2
    GCD_Demo[19428:6087377] 1 and 2 have completed, executing asynchronous request 3

    I purposely added request 4 after the notify; as the log shows, execution does not follow code order.

  • enter + leave

    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);

    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        NSLog(@"Asynchronous request 1");
        dispatch_group_leave(group);
    });

    dispatch_group_enter(group);
    dispatch_async(queue, ^{
        sleep(3);
        NSLog(@"Asynchronous request 2");
        dispatch_group_leave(group);
    });

    dispatch_group_notify(group, dispatch_get_global_queue(0, 0), ^{
        NSLog(@"1 and 2 have completed, executing asynchronous request 3");
    });

    The effect is the same; we simply pair enter and leave ourselves instead of using dispatch_group_async.
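dispatch_group_wait, the remaining core function listed above, blocks the current thread instead of scheduling a block; a minimal sketch:

    dispatch_group_t group = dispatch_group_create();
    dispatch_group_async(group, dispatch_get_global_queue(0, 0), ^{
        sleep(2);
        NSLog(@"Asynchronous request 1");
    });

    // Blocks the calling thread until the group empties or the timeout fires
    long result = dispatch_group_wait(group,
            dispatch_time(DISPATCH_TIME_NOW, 3 * NSEC_PER_SEC));
    if (result == 0) {
        NSLog(@"All group tasks finished within the timeout");
    } else {
        NSLog(@"Timed out waiting for the group");
    }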

This concludes the GCD analysis; comments and exchanges are welcome.