This is the sixth article in the Objective-C series and the last in the Effective Objective-C 2.0 series.

  • Objective-C (1) Getting to know Objective-C
  • Objective-C (2) Objects, messages, and the runtime
  • Objective-C (3) Interface and API design
  • Objective-C (4) Protocols and categories
  • Objective-C (5) System frameworks
  • Objective-C (6) Blocks and GCD

1. Best practices

  • Use handler Blocks to reduce code fragmentation
    • When you create an object, you can use an inline handler Block to declare the relevant business logic together with it;
    • When there are multiple instances to monitor, the delegate pattern often requires switching on the incoming object, whereas a handler Block can be stored directly alongside the object it relates to;
    • When you design an API that takes a handler Block, consider adding a parameter that lets the caller decide on which queue the Block should be scheduled.
  • Avoid the retain cycles that Blocks can introduce
  • Use queues more and synchronization locks less
    • Dispatch queues can express synchronization semantics and are simpler than using @synchronized or NSLock objects;
    • Combining synchronous and asynchronous dispatch gives the same synchronization as ordinary locking without blocking the thread that performs the asynchronous dispatch;
    • Using concurrent queues with barrier blocks can make synchronization more efficient.
  • Use GCD more, performSelector less
    • Because the selector to be executed cannot be determined at compile time, the ARC compiler cannot insert the appropriate memory-management calls for performSelector;
    • The performSelector family of methods is too limited: the return types and the number of parameters that can be passed are restricted;
    • For delayed execution, it is better not to use performSelector; wrap the task in a Block and use GCD instead.
  • Know when to use GCD and when to use operation queues
    • Cancel an operation;
    • Specify dependencies between operations;
    • Monitor NSOperation objects through key-value observing; many NSOperation properties, such as isCancelled and isFinished, are well suited to KVO;
    • Specify the priority of an operation.
  • Use dispatch_once to execute thread-safe code that only needs to be run once
  • Don’t use dispatch_get_current_queue
  • Use the dispatch group mechanism to execute tasks based on the status of system resources
    • A series of tasks can be grouped into one dispatch_group, and the developer can be notified when the whole group has finished;
    • Through a dispatch_group, multiple tasks can be dispatched to a concurrent queue and run at the same time.

2. Practical explanation

2.1 Understand the concept of Block

A Block is denoted by “^” (the caret symbol):

^{
    // Block implementation here
}

A Block is a value that has its own type. Like an int, float, or Objective-C object, you can assign a Block to a variable, similar to a function pointer. The complete syntax of a Block is as follows:

return_type (^block_name)(parameters)

Look at an example:

int (^addBlock)(int a, int b) = ^(int a, int b) {
    return a + b;
};

Call:

int add = addBlock(2, 5); // add = 7

Here’s how blocks are written in various cases:

// Property
@property (copy, nonatomic) void (^callBack)(NSString *);

// Function parameter
- (void)callbackAsAParameter:(void (^)(NSString *print))callBack {
    callBack(@"i am alone");
}

[self callbackAsAParameter:^(NSString *print) {
    NSLog(@"here is %@", print);
}];

// typedef
typedef void (^CallBlock)(NSString *status);
CallBlock block = ^(NSString *str) {
    NSLog(@"%@", str);
};

The power of a Block is that it captures all the variables that are in scope at the point where it is declared, so those variables can be used inside the Block.

int additional = 5;
int (^addBlock)(int a, int b) = ^(int a, int b) {
    return a + b + additional;
};
int add = addBlock(2, 5); // add = 12

If you need to modify a variable captured by a Block, you must declare it with the __block qualifier.
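
For example, a minimal sketch (the variable and Block names are illustrative): without __block the captured variable is read-only inside the Block; with __block it can be written.

__block int count = 0;
void (^increment)(void) = ^{
    count++;                 // allowed only because count is declared __block
};
increment();
NSLog(@"%d", count);         // prints 1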

2.1.1 Function Pointers

To better illustrate Blocks, here is a brief review of function pointers.

A function pointer is a pointer variable that points to a function. It is first of all an ordinary pointer variable; it just points to a function rather than to an integer, a character, or an array. When a C program is compiled, each function gets an entry address, and that address is what a function pointer points to. Once you have a pointer variable that points to a function, you can call the function through that pointer, just as you can reference any other kind of variable through a pointer. Function pointers have two main uses: calling a function indirectly, and passing a function as an argument to another function. Here is an example:

#include <stdio.h>

int max(int x, int y) { return (x > y ? x : y); }

int main()
{
    int (*ptr)(int, int);
    int a, b, c;
    ptr = max;          // equivalent to ptr = &max;
    scanf("%d%d", &a, &b);
    c = (*ptr)(a, b);
    printf("a=%d, b=%d, max=%d", a, b, c);
    return 0;
}

2.1.2 Internal structure of Block

A Block is itself an object. The first word of the memory that holds a Block is a pointer to its Class object, called isa; the rest of the memory contains all the information the object needs to function properly:

  • impl is a structure whose FuncPtr member points to the Block’s implementation code, and the Block itself is passed to that function as a parameter. In this way a Block turns the “opaque void pointer” style of state passing needed in standard C into something transparent and easy to use.

  • descriptor is a pointer to a structure that every Block contains. It records the size of the Block and declares function pointers for the copy and dispose helper functions, which run when a Block object is copied or discarded.

    • size: the size of the Block;
    • copy: a helper function that retains the captured objects;
    • dispose: a helper function that releases the captured objects.
  • A Block also copies the variables it captures into the memory that follows descriptor. Note that it does not copy the captured objects themselves; it copies the pointer variables that point to them.

    Why does the invoke function take the Block itself as a parameter? Because the captured variables are read from that memory when the Block executes.
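
For reference, here is a simplified sketch of that layout, following the open-source Block ABI (libclosure). The field names are the ones used by the runtime, but the listing is illustrative rather than authoritative.

// Simplified sketch of a Block object's in-memory layout (Block ABI).
struct Block_descriptor {
    unsigned long reserved;
    unsigned long size;                        // total size of the Block
    void (*copy)(void *dst, const void *src);  // helper: retain captured objects
    void (*dispose)(const void *src);          // helper: release captured objects
};

struct Block_layout {
    void *isa;                        // class pointer: stack, heap, or global Block
    int flags;
    int reserved;
    void (*invoke)(void *, ...);      // pointer to the Block's implementation code
    struct Block_descriptor *descriptor;
    // captured variables are copied here, after the descriptor pointer
};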

2.1.3 Global Block, stack Block, and heap Block

When a Block is defined, the memory it occupies is allocated on the stack; that is, the Block is valid only within the scope in which it is defined. For example:

void (^block)(void);
if (/* some condition */) {
    block = ^{
        NSLog(@"Block A");
    };
} else {
    block = ^{
        NSLog(@"Block B");
    };
}

The two Blocks defined inside the if/else branches live on the stack, and that stack memory may be reused once execution leaves the scope in which they were defined. At run time the code might work correctly, or it might crash, depending on whether the compiler happens to overwrite the Block’s memory.

You never need to worry about releasing a Block object that lives in stack memory, because stack memory is managed by the system and is reclaimed automatically.

To solve the problem above, send the Block object a copy message; this copies it from stack memory to heap memory.

A Block object in heap memory is reference-counted like any ordinary object: copying it increments the reference count, and under ARC it is released automatically once the count drops to zero, with no manual release needed.

void (^block)(void);
if (/* some condition */) {
    block = [^{
        NSLog(@"Block A");
    } copy];
} else {
    block = [^{
        NSLog(@"Block B");
    } copy];
}

In addition to the “stack blocks” and “heap blocks” above, there is another class called “global blocks”. Global blocks have the following characteristics:

  • They capture no state (such as surrounding variables), so no runtime state is needed to run them;

  • Everything a global Block uses is fully determined at compile time, so it can be placed in global memory instead of being created on the stack each time it is used;

  • Copying a global Block is a no-op, because a global Block is never deallocated by the system;

  • In effect, a global Block is a singleton.

    Here is a global Block:

void (^block)(void) = ^{
    NSLog(@"This is a block");
};

Because all the information needed to run this Block is determined at compile time, it can be made a global Block. This is purely an optimization: treating such a simple Block as if it were a complex one would mean doing unnecessary work every time it is copied or discarded.

2.2 Use handler Blocks to reduce code fragmentation

The delegate pattern can handle asynchronous callbacks, but it tends to fragment code badly: the setup and the callback logic end up in different places.

Using handler Blocks to keep the related code together is a good alternative.

// Style 1:
HONetworkFetcher *fetcher = [[HONetworkFetcher alloc] initWithURL:url];
[fetcher startWithCompletionHandler:^(NSData *data) {
    // handle success
} failureHandler:^(NSError *error) {
    // handle failure
}];

// Style 2:
HONetworkFetcher *fetcher = [[HONetworkFetcher alloc] initWithURL:url];
[fetcher startWithCompletionHandler:^(NSData *data, NSError *error) {
    if (error) {
        // handle failure
    } else {
        // handle success
    }
}];

Style 1: the code is easy to read, the success and failure logic are written separately, and either case can simply be ignored if it is not needed.

Style 2:

  • Disadvantage: the caller has to check for errors, and all the logic lives in one place, which can make the Block long and complex.
  • Advantage: it is more flexible. For example, if the network fails partway through a download, the partial data can be passed to the Block along with the related error. Another advantage is that the calling code may discover an error while processing an otherwise successful response; with a single Block, that error can be routed through the same error-handling path without duplicating code. If success and failure were handled separately, you would end up with two copies of the same error-handling code, and factoring that out into a shared method would defeat the goal of keeping the code together.

Conclusion:

  • When you create an object, you can use an inline handler Block to declare the relevant business logic together with it;

  • When there are multiple instances to monitor, the delegate pattern often requires switching on the incoming object, whereas a handler Block can be stored directly alongside the object it relates to;

  • When you design an API that takes a handler Block, consider adding a parameter that lets the caller decide on which queue the Block should be scheduled, as NSNotificationCenter does (a usage sketch follows the declaration):

    - (id <NSObject>)addObserverForName:(nullable NSNotificationName)name object:(nullable id)obj queue:(nullable NSOperationQueue *)queue usingBlock:(void (^)(NSNotification *note))block NS_AVAILABLE(10_6, 4_0);
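
    A minimal usage sketch (assuming an iOS app where UIKit is imported, so the notification name comes from UIKit): the queue parameter lets the caller choose where the Block runs, here the main queue.

    id token = [[NSNotificationCenter defaultCenter]
        addObserverForName:UIApplicationDidEnterBackgroundNotification
                    object:nil
                     queue:[NSOperationQueue mainQueue]
                usingBlock:^(NSNotification *note) {
            NSLog(@"Received %@ on the main queue", note.name);
        }];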

2.3 Avoid retain cycles introduced by Blocks

Consider the following code:

typedef void (^HONetworkFetcherCompletionHandler)(NSData *data); // assumed typedef, needed for the code to compile

@interface HONetworkFetcher ()

@property (nonatomic, strong, readwrite) NSURL *url;
@property (nonatomic, copy) HONetworkFetcherCompletionHandler completionHandler;
@property (nonatomic, strong) NSData *downloadedData;

@end
@implementation HONetworkFetcher

- (instancetype)initWithURL:(NSURL *)url {
    if (self = [super init]) {
        _url = url;
    }
    return self;
}

- (void)startWithCompletionHandler:(HONetworkFetcherCompletionHandler)completion
{
    self.completionHandler = completion;
    // start the request
    // the request sets the downloadedData property
    // when the request finishes, p_requestCompleted is called
}

- (void)p_requestCompleted
{
    if (_completionHandler) {
        _completionHandler(_downloadedData);
    }
}

@end

Another class then makes the following call:

@interface HOClass : NSObject

@end

@interface HOClass ()
{
    HONetworkFetcher *_networkFetcher;
    NSData *_fetchData;
}

@end
@implementation HOClass

- (void)downloadData
{
    NSURL *url = [NSURL URLWithString:@"www.com"];
    _networkFetcher = [[HONetworkFetcher alloc] initWithURL:url];
    [_networkFetcher startWithCompletionHandler:^(NSData *data) {
        _fetchData = data;
    }];
}
@end

Analyze the scenario:

The HOClass instance’s _networkFetcher instance variable retains the fetcher; the fetcher retains the completionHandler Block; and the Block captures _fetchData, which implicitly retains self, the HOClass instance. The result is a retain cycle.

To break the cycle, break one side of the triangle: either make _networkFetcher stop referencing the fetcher, or make the fetcher stop holding the completionHandler.

Here’s one solution:

- (void)downloadData
{
    NSURL *url = [NSURL URLWithString:@"www.com"];
    _networkFetcher = [[HONetworkFetcher alloc] initWithURL:url];
    [_networkFetcher startWithCompletionHandler:^(NSData *data) {
        _fetchData = data;
        _networkFetcher = nil; // break the retain cycle once the Block has run
    }];
}

Another kind of retain cycle occurs when the object referenced by the completion handler ends up, directly or indirectly, retaining the Block itself. Here the fetcher holds the completion handler, and the handler references the fetcher’s url property:

- (void)downloadData
{
    NSURL *url = [NSURL URLWithString:@"www.com"];
    HONetworkFetcher *networkFetcher = [[HONetworkFetcher alloc] initWithURL:url];
    [networkFetcher startWithCompletionHandler:^(NSData *data) {
        NSLog(@"Request URL %@ finished", networkFetcher.url);
        _fetchData = data;
    }];
}

This retain cycle is also easy to break:

- (void)p_requestCompleted
{
    if (_completionHandler) {
        _completionHandler(_downloadedData);
    }
    self.completionHandler = nil; // release the Block once it has run, breaking the cycle
}

  • If an object captured by a Block directly or indirectly retains the Block itself, you have to watch out for retain cycles;
  • It is important to break the retain cycle at an appropriate, predictable moment, rather than pushing that responsibility onto the caller of the API.

2.4 Use more queues and less synchronization locks

In Objective-C, when multiple threads execute the same code, locks are used to provide synchronization. Before GCD there were two common approaches:

One is the synchronization block:

- (void)synchronizedMethod
{
    // @synchronized(self) automatically creates a lock on the given object,
    // waits until the code inside the braces has finished, then releases the lock.
    // If self is locked frequently, unrelated code elsewhere that also locks self
    // must finish before the current code can continue.
    @synchronized(self) {
        // safe
        // do whatever
    }
}

Another way is:


_lock = [[NSLock alloc] init];

- (void)synchronizedMethod
{
    [_lock lock];
    // safe
    [_lock unlock];
}

Both approaches have drawbacks: in extreme cases they can lead to deadlock, and neither is particularly efficient.

The alternative: GCD.

_syncQueue = dispatch_queue_create("sync.queue", DISPATCH_QUEUE_SERIAL);

- (NSString *)someString {
    __block NSString *localSomeString;
    dispatch_sync(_syncQueue, ^{
        localSomeString = _someString;
    });
    return localSomeString;
}

- (void)setSomeString:(NSString *)someString {
    dispatch_sync(_syncQueue, ^{
        _someString = someString;
    });
}

This ensures that all reads and writes go through the same queue. It is more efficient than the locking mechanisms above (GCD is built on low-level optimizations) and much cleaner: all the synchronization is handled inside GCD.

One further optimization is the setter: the caller does not need to wait for the write to finish, so the write can be dispatched asynchronously to the serial queue instead of synchronously. As follows:

- (void)setSomeString:(NSString *)someString {
    dispatch_async(_syncQueue, ^{
        _someString = someString;
    });
}

This is an optimization, but it carries a trap: asynchronous dispatch has to copy the Block. If copying the Block takes longer than the work the Block performs, the change is a “pseudo-optimization” and is actually slower. Because this example is so simple, it may well be slower after the change.

Multiple getter calls can safely run concurrently, but a getter must not run at the same time as a setter. This can be achieved with a concurrent queue and a barrier block:

// Barrier blocks only act as barriers on a concurrent queue you create yourself;
// on a global concurrent queue they behave like ordinary blocks.
_syncQueue = dispatch_queue_create("sync.queue", DISPATCH_QUEUE_CONCURRENT);

- (NSString *)someString {
    __block NSString *localSomeString;
    dispatch_sync(_syncQueue, ^{
        localSomeString = _someString;
    });
    return localSomeString;
}

- (void)setSomeString:(NSString *)someString {
    dispatch_barrier_async(_syncQueue, ^{
        _someString = someString;
    });
}

Here is how the barrier works:

When a concurrent queue finds that the next block to dequeue is a barrier block, it waits until all currently executing blocks have finished, then runs the barrier block by itself; once the barrier block completes, the queue resumes normal concurrent processing.

If you test the performance, this version is generally faster than the serial-queue version above.

Note that the setter could also be implemented with a synchronous barrier block, which might be more efficient because asynchronous dispatch has to copy the Block.
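
A sketch of that variant, under the same assumptions as the example above:

- (void)setSomeString:(NSString *)someString {
    // dispatch_barrier_sync avoids copying the Block to the heap,
    // but blocks the calling thread until the write has finished.
    dispatch_barrier_sync(_syncQueue, ^{
        _someString = someString;
    });
}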

When choosing between them, it is best to measure the actual performance.

  • Dispatch queues can express synchronization semantics and are simpler than using @synchronized or NSLock objects;
  • Combining synchronous and asynchronous dispatch gives the same synchronization as ordinary locking without blocking the thread that performs the asynchronous dispatch;
  • Using concurrent queues with barrier blocks can make synchronization more efficient.

2.5 Use GCD more and performSelector less

NSObject lets you invoke any method dynamically; the simplest form is:

- (id)performSelector:(SEL)selector

The power of this approach shows when the selector is determined at run time; it is, in effect, dynamic binding layered on top of dynamic binding:

SEL selector;
if (/* some condition */) {
    selector = @selector(bar);
} else if (/* some other condition */) {
    selector = @selector(foo);
} else {
    selector = @selector(baz);
}
[object performSelector:selector];

The trade-off with this feature is that if you compile this code under ARC, the compiler will issue the following warning:

​ warning:performSelector may cause a leak because its selector is unknown [-Warc-performSelector-leaks]

Because the selector cannot be determined at compile time, ARC cannot apply its memory-management rules to decide whether the return value needs to be released. ARC therefore takes the cautious approach and does not add a release. That, however, can lead to a memory leak. Here is an example:

SEL selector;
if (/* some condition */) {
    selector = @selector(newObject);
} else if (/* some other condition */) {
    selector = @selector(copy);
} else {
    selector = @selector(someProperty);
}
id ret = [object performSelector:selector];

When the first or second selector is executed, the ret object needs to be released; with the third it does not. This is easy to overlook, and even the static analyzer cannot detect the resulting leak.

Editor’s note: under Apple’s naming conventions, methods whose names begin with new or copy (as the first two selectors do) return objects that the caller owns, so those return values must be released.

Second, performSelector returns only id, so it can handle only object (or void) return types, not scalar types such as integers or floats.

In addition, there are several performSelector methods as follows:

- (id)performSelector:(SEL)aSelector withObject:(id)object;
- (id)performSelector:(SEL)aSelector withObject:(id)object1 withObject:(id)object2;
@interface NSObject (NSThreadPerformAdditions)

- (void)performSelectorOnMainThread:(SEL)aSelector withObject:(nullable id)arg waitUntilDone:(BOOL)wait modes:(nullable NSArray<NSString *> *)array;
- (void)performSelectorOnMainThread:(SEL)aSelector withObject:(nullable id)arg waitUntilDone:(BOOL)wait;
	// equivalent to the first method with kCFRunLoopCommonModes

- (void)performSelector:(SEL)aSelector onThread:(NSThread *)thr withObject:(nullable id)arg waitUntilDone:(BOOL)wait modes:(nullable NSArray<NSString *> *)array NS_AVAILABLE(10_5, 2_0);
- (void)performSelector:(SEL)aSelector onThread:(NSThread *)thr withObject:(nullable id)arg waitUntilDone:(BOOL)wait NS_AVAILABLE(10_5, 2_0);
	// equivalent to the first method with kCFRunLoopCommonModes
- (void)performSelectorInBackground:(SEL)aSelector withObject:(nullable id)arg NS_AVAILABLE(10_5, 2_0);

@end

However, all of these can be replaced with GCD: delayed execution via performSelector:withObject:afterDelay: maps to dispatch_after, and the thread-targeting variants above map to dispatch_sync or dispatch_async on the appropriate queue:

dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(3.0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        //todo
 });
  • Because the selector to be executed cannot be determined at compile time, the ARC compiler cannot insert the appropriate memory-management calls for performSelector;
  • The performSelector family of methods is too limited: the return types and the number of parameters that can be passed are restricted;
  • For delayed execution, it is better not to use performSelector; wrap the task in a Block and use GCD instead.

2.6 Know when to use GCD and when to use operation queues

Advantages of using NSOperation and NSOperationQueue (a sketch follows the list):

  • Cancel an operation;
  • Specify dependencies between operations;
  • Monitor NSOperation objects through key-value observing; many NSOperation properties, such as isCancelled and isFinished, are well suited to KVO;
  • Specify the priority of an operation.
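
A minimal sketch of those features (the operations and the work they do are hypothetical):

NSOperationQueue *queue = [[NSOperationQueue alloc] init];

NSBlockOperation *download = [NSBlockOperation blockOperationWithBlock:^{
    // download the data
}];
NSBlockOperation *parse = [NSBlockOperation blockOperationWithBlock:^{
    // parse the downloaded data
}];

[parse addDependency:download];                     // specify a dependency between operations
parse.queuePriority = NSOperationQueuePriorityHigh; // specify the operation's priority

[queue addOperations:@[download, parse] waitUntilFinished:NO];

// cancel everything that has not finished yet
[queue cancelAllOperations];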

2.7 Use dispatch_once to execute thread-safe code that only needs to be run once

+ (instancetype)sharedManager {
    static HOClass *shared = nil;
    @synchronized (self) {
        if (!shared) {
            shared = [[self alloc] init];
        }
    }
    return shared;
}

A better implementation:

+ (instancetype)sharedManager {
    static HOClass *shared = nil;
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        shared = [[self alloc] init];
    });
    return shared;
}

dispatch_once simplifies the code and is completely thread-safe. It is also more efficient because it avoids heavyweight synchronization mechanisms such as locks.

2.8 Do not use dispatch_get_current_queue

  • The dispatch_get_current_queue function often behaves differently from what developers expect. It is deprecated and should only be used for debugging;
  • Because dispatch queues are organized hierarchically (a queue may target another queue), the notion of a “current queue” cannot be described by a single queue object;
  • dispatch_get_current_queue is sometimes used to work around deadlocks caused by non-reentrant code, but the problems it appears to solve are usually better solved with queue-specific data, as sketched below.
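
A sketch of the queue-specific-data approach (the key and queue names are illustrative): instead of asking which queue is current, tag the queue with data and test for it before dispatching synchronously.

static void *kSyncQueueKey = &kSyncQueueKey;   // any unique address works as the key

dispatch_queue_t syncQueue = dispatch_queue_create("sync.queue", DISPATCH_QUEUE_SERIAL);
dispatch_queue_set_specific(syncQueue, kSyncQueueKey, (void *)1, NULL);

void (^work)(void) = ^{
    // protected work
};

if (dispatch_get_specific(kSyncQueueKey)) {
    // Already running on syncQueue: call directly, because dispatch_sync
    // onto the same serial queue would deadlock.
    work();
} else {
    dispatch_sync(syncQueue, work);
}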

2.9 Perform tasks based on system resources using the Dispatch Group mechanism

A dispatch group lets you group a set of tasks together. The caller can either block until the whole group finishes, or supply a callback Block to be notified when all the tasks in the group are complete.

  • A series of tasks can be grouped into one dispatch_group, and the developer can be notified when the whole group has finished;
  • Through a dispatch_group, multiple tasks can be dispatched to a concurrent queue and run at the same time; GCD then schedules these concurrent tasks according to the state of system resources. Implementing this by hand would require a lot of code. A sketch follows.
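
A minimal sketch (the queue, the array of work items, and the per-item work are illustrative):

NSArray *objects = @[@"a", @"b", @"c"];          // illustrative work items
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();

for (id object in objects) {
    dispatch_group_async(group, queue, ^{
        // perform the task for this object
        NSLog(@"processing %@", object);
    });
}

// Either block the current thread until every task in the group has finished...
dispatch_group_wait(group, DISPATCH_TIME_FOREVER);

// ...or, instead of waiting, be notified on a chosen queue when the group finishes
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"all tasks complete");
});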