Deadlock

  • Dispatch an asynchronous task to a serial queue, then add a synchronous task to the same queue inside it: a deadlock crash results
- (void)textDemo1 {
    dispatch_queue_t queue = dispatch_queue_create("cooci", NULL);
    NSLog(@"1");
    dispatch_async(queue, ^{
        NSLog(@"2");
        dispatch_sync(queue, ^{
            NSLog(@"3");
        });
        NSLog(@"4");
    });
    NSLog(@"5");
}

The result: 1, 5, 2 print normally, then the deadlock crash is raised.

Deadlock analysis:

1. Tasks in a serial queue execute first in, first out.
2. dispatch_async adds the block containing 2, the dispatch_sync call, and 4; dispatch_sync then enqueues the block containing 3 behind it. The order of tasks in the queue becomes 1, 5, 2, the synchronous block (3), then 4.
3. After 2 finishes, dispatch_sync needs 3 to execute before 4 can run. But 3 was enqueued after the async block that contains 4, and tasks execute first in, first out: 4 waits for 3 to return, while 3 waits for the async block (which ends in 4) to finish. The two tasks wait on each other, causing the deadlock.
  • Error stack information was reported

  • Stepping into the _dispatch_sync_f_slow function

  • Find the error function

The crash message: "dispatch_sync called on queue already owned by current thread"

  • The deadlock condition: _dq_state_drain_locked_by(dq_state, dsc->dsc_waiter) checks whether the queue's current drain owner (encoded in dq_state) is the same thread as dsc->dsc_waiter. When the thread making the call is itself the waiting thread, the deadlock crash below is raised.
if (unlikely(_dq_state_drain_locked_by(dq_state, dsc->dsc_waiter))) {
    DISPATCH_CLIENT_CRASH((uintptr_t)dq_state,
            "dispatch_sync called on queue "
            "already owned by current thread");
}

Singleton principle analysis

  • The val argument is a global static variable, the block argument encapsulates the task, and l acts as the gate for this global static variable

  • dispatch_once_f

  • Thread lock, thread safety

  • return _dispatch_once_callout(l, ctxt, func); executes the task in the block
  • After the task completes, the gate is closed

  • Assignment handling of two macros

  • Marking it as done

  • return _dispatch_once_wait(l); If the current state is not complete and the done bit is not marked, wait is entered

Barrier function

Its most direct purpose: controlling task execution order and synchronization

  • dispatch_barrier_async does not execute until all tasks submitted before it have completed

  • dispatch_barrier_sync has the same effect, but it also blocks the current thread, affecting the code after it

  • The barrier function can only control the one concurrent queue it is submitted to, not all concurrent queues.

  • Underlying analysis

  • _dispatch_barrier_sync_f

  • _dispatch_barrier_sync_f_inline

It enters either _dispatch_sync_f_slow or _dispatch_sync_recurse

  • _dispatch_sync_recurse

There is a do-while recursion that cannot proceed until the current queue has been drained; it then calls _dispatch_sync_invoke_and_complete_recurse

  • _dispatch_sync_invoke_and_complete_recurse

  • _dispatch_sync_complete_recurse

The do-while loop checks whether a barrier exists; dx_wakeup wakes the tasks ahead of the barrier, and after they complete it enters _dispatch_lane_non_barrier_complete, indicating the barrier has finished and the queue is free

  • _dispatch_lane_non_barrier_complete

It performs state repair; otherwise dx_wakeup would stay in an infinite loop

  • dx_wakeup is a macro for dq_wakeup,

  • It assigns different functions to different queue types, makes different function calls,
    • Global concurrent queues call _dispatch_root_queue_wakeup
    • Normal concurrent queues call _dispatch_lane_wakeup
    • Comparing the two functions explains why the global concurrent queue cannot perform the barrier function.
  • _dispatch_root_queue_wakeup

  • _dispatch_lane_wakeup
    • If a barrier exists in the current queue, it enters _dispatch_lane_barrier_complete, the barrier-completion function,

    • If it is a serial queue (dq_width = 1), it starts to wait
    • If it is a concurrent queue, it enters _dispatch_lane_drain_non_barriers

    A do-while loop ensures the tasks above have completed, then it enters _dispatch_lane_non_barrier_complete_finish to remove the barrier from the queue

  • Finally, it goes to _dispatch_lane_class_barrier_complete to execute the task inside the barrier function.

It clears the barrier marker in the queue and completes the task normally. Because the global concurrent queue is also used by the system to process its own background tasks, adding a blocking barrier there would prevent those background tasks from executing and interfere with the system's processing; that is why barriers are not supported on it.

  • A drawback of the barrier: business network requests usually go through AFNetworking, whose queues are created inside the framework, so we cannot get hold of the request queue; that makes the barrier function cumbersome to use in practice.

Semaphore dispatch_semaphore_t

  • dispatch_semaphore_create creates a semaphore
  • dispatch_semaphore_wait waits on the semaphore
  • dispatch_semaphore_signal signals (releases) the semaphore
  • Synchronization -> acts as a lock and controls GCD's maximum concurrency, i.e. how many tasks may run at a time
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_semaphore_t sem = dispatch_semaphore_create(1);
    dispatch_queue_t queue1 = dispatch_queue_create("cooci", NULL);

    // Task 1
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // wait

        NSLog(@"Task 1");
        NSLog(@"Task 1 completed");
        dispatch_semaphore_signal(sem); // signal
    });

    // Task 2
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER); // wait

        sleep(2);

        NSLog(@"Task 2");
        NSLog(@"Task 2 completed");
        dispatch_semaphore_signal(sem); // signal
    });

    // Task 3
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
        sleep(2);

        NSLog(@"Task 3");
        NSLog(@"Task 3 completed");
        dispatch_semaphore_signal(sem);
    });

    // Task 4
    dispatch_async(queue, ^{
        dispatch_semaphore_wait(sem, DISPATCH_TIME_FOREVER);
        sleep(2);

        NSLog(@"Task 4");
        NSLog(@"Task 4 completed");
        dispatch_semaphore_signal(sem);
    });

Analysis

  • A semaphore is only valid when created with a value greater than or equal to 0; with a negative value the creation fails

  • dispatch_semaphore_wait

os_atomic_dec2o performs the -1. If the decremented value is still >= 0, the function returns immediately; if it drops below 0 (for example when the semaphore was created with 0), it enters the _dispatch_semaphore_wait_slow function

  • _dispatch_semaphore_wait_slow

It enters a switch that judges the timeout: a value that does not conform to the rules breaks out, DISPATCH_TIME_NOW does the timeout handling, and DISPATCH_TIME_FOREVER goes to _dispatch_sema4_wait

  • _dispatch_sema4_wait

It is a do-while loop that keeps waiting

  • dispatch_semaphore_signal

os_atomic_inc2o performs the +1; if the value is greater than 0 afterwards, execution continues normally. If it is still not positive, dispatch_semaphore_wait has been called more times than signal (there are waiters), and _dispatch_semaphore_signal_slow is entered; an unbalanced call raises an exception

  • _dispatch_semaphore_signal_slow

It signals the underlying kernel semaphore (+1) to wake a waiter,

Conclusion: the semaphore's principle is value++ and value--. When value drops below 0, the task enters the do-while loop and waits; when another task completes and signals, value is incremented, and once the semaphore is no longer negative, the next waiting task is woken and proceeds.

Scheduling group

Function: controls the order of task execution.

  • dispatch_group_create creates a group
  • dispatch_group_async submits a task to the group
  • dispatch_group_notify runs when the group's tasks have completed
  • dispatch_group_enter enters the group
  • dispatch_group_leave leaves the group

  • Simple use of scheduling groups
- (void)viewDidLoad {
    [super viewDidLoad];

    self.imageView = [[UIImageView alloc] initWithFrame:CGRectMake(20, 300, 300, 200)];
    self.imageView.image = [UIImage imageNamed:@"backImage"];
    [self.view addSubview:self.imageView];
    
    [self groupDemo];
}


/** Scheduling group test */
- (void)groupDemo{
    
// dispatch_group_enter(group);
// dispatch_group_leave(group);
    
    dispatch_group_t group = dispatch_group_create();
    dispatch_queue_t queue = dispatch_get_global_queue(0, 0);
    dispatch_group_async(group, queue, ^{
        // Create a scheduling group
        NSString *logoStr1 = @"https://p9-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/09f14cef6f3d4859a85610da67ed38de~tplv-k3u1fbpfcp-watermark.image?";
        NSData *data1 = [NSData dataWithContentsOfURL:[NSURL URLWithString:logoStr1]];
        UIImage *image1 = [UIImage imageWithData:data1];
        [self.mArray addObject:image1];
    });


// dispatch_group_async(group, queue, ^{
//     // Create a scheduling group
//     NSString *logoStr2 = @"https://f12.baidu.com/it/u=3172787957,1000491180&fm=72";
//     NSData *data2 = [NSData dataWithContentsOfURL:[NSURL URLWithString:logoStr2]];
//     UIImage *image2 = [UIImage imageWithData:data2];
//     [self.mArray addObject:image2];
// });
    
// enter and leave must be used in pairs
// dispatch_group_leave(group);
    dispatch_group_enter(group);

    dispatch_async(queue, ^{
        // Create a scheduling group
       NSString *logoStr2 = @"https://p9-juejin.byteimg.com/tos-cn-i-k3u1fbpfcp/09f14cef6f3d4859a85610da67ed38de~tplv-k3u1fbpfcp-watermark.image?";
        NSData *data2 = [NSData dataWithContentsOfURL:[NSURL URLWithString:logoStr2]];
        UIImage *image2 = [UIImage imageWithData:data2];
        [self.mArray addObject:image2];
// dispatch_group_enter(group);
        dispatch_group_leave(group);

    });
    
// long time = dispatch_group_wait(group, 1);
//
// if (time == 0) {
//
// }
    
    
    dispatch_group_notify(group, dispatch_get_main_queue(), ^{
        UIImage *newImage = nil;
       NSLog(@"Array count: %ld", self.mArray.count);
       for (int i = 0; i<self.mArray.count; i++) {
           UIImage *waterImage = self.mArray[i];
           newImage = [KC_ImageTool kc_WaterImageWithWaterImage:waterImage backImage:newImage waterImageRect:CGRectMake(20, 100*(i+1), 100, 40)];
       }
        self.imageView.image = newImage;
    });

}
  • Realize the image watermark effect

  • Question 1: how does the scheduling group control the flow?
  • Question 2: how do entering and leaving the group pair up?
  • Question 3: does dispatch_group_async = dispatch_group_enter + dispatch_group_leave?

Underlying analysis

  • dispatch_group_create

  • dispatch_group_enter(dispatch_group_t dg)

The dg passed in starts at 0, and enter performs a -1 operation on its state

  • dispatch_group_leave

What happens here is the -1 is changed back to 0; os_atomic_add_orig2o is the +1 operation,

  • 1. If dg is -1 (enter was called), leave computes -1 + 1 = 0: balanced
  • 2. If dg is already 0 (no matching enter), leave computes 0 + 1 = 1: unbalanced

  • When the result is 0, the do-while loop goes to _dispatch_group_wake to wake the dispatch_group_notify operation. A result of 1 instead falls into the judgment below and triggers the error handling

  • _dispatch_group_notify

When old_state == 0, _dispatch_group_wake is called. The entire flow can be summarized simply: dropping from 0 to -1 blocks, and the +1 back to 0 wakes the next step

  • dispatch_group_async

  • _dispatch_continuation_group_async

Inside there is a group operation that changes the original state from 0 to -1 (the implicit enter)

  • Process analysis

  • Assuming we are on a global concurrent queue, the following traces how a block gets executed

  • If the continuation carries a group flag

  • _dispatch_continuation_with_group_invoke

Here it leaves the group (the implicit leave),

  • Perform block

Conclusion: _dispatch_continuation_group_async automatically performs group entry and group exit operations internally

In everyday applications, we often use NSTimer. NSTimer needs to be added to NSRunloop and is affected by mode. If mode is not set correctly, NSTimer will also be affected when scrollView slides. If the Runloop is running continuously, the timer may be delayed.

GCD provides a solution with dispatch_source. Dispatch_source has the following features:

  • More accurate timing, low CPU load, fewer resources occupied
  • It can run on a child thread, avoiding the stuck-UI problem of a timer on the main thread
  • It can be suspended and resumed without having to be recreated, unlike NSTimer

Key methods of dispatch_source:

  • dispatch_source_create creates the source
  • dispatch_source_set_event_handler sets the source's event callback
  • dispatch_source_merge_data merges data into the source event
  • dispatch_source_get_data gets the source event's data
  • dispatch_resume resumes
  • dispatch_suspend suspends

Two important parameters:

  • dispatch_source_type_t is the type of source to create
  • dispatch_queue_t is the dispatch queue to which the event handler block will be submitted

Event source type:

  • DISPATCH_SOURCE_TYPE_DATA_ADD merges data by addition
  • DISPATCH_SOURCE_TYPE_DATA_OR merges data with bitwise OR
  • DISPATCH_SOURCE_TYPE_DATA_REPLACE replaces the existing data value with the newly obtained one
  • DISPATCH_SOURCE_TYPE_MACH_SEND monitors a Mach port with a send right (not receive)
  • DISPATCH_SOURCE_TYPE_MACH_RECV monitors pending messages on a Mach port
  • DISPATCH_SOURCE_TYPE_MEMORYPRESSURE monitors changes in the system's memory pressure condition
  • DISPATCH_SOURCE_TYPE_PROC monitors events from an external process
  • DISPATCH_SOURCE_TYPE_READ monitors a file descriptor for bytes available to read
  • DISPATCH_SOURCE_TYPE_SIGNAL monitors signals delivered to the current process
  • DISPATCH_SOURCE_TYPE_TIMER is a timer-based scheduling source
  • DISPATCH_SOURCE_TYPE_VNODE monitors a file descriptor for file-system events
  • DISPATCH_SOURCE_TYPE_WRITE monitors a file descriptor for available write buffer space