This blog post is a reading note for the second part of the book High Performance iOS Application Development, "Core Optimization," which covers three areas: memory management, power consumption, and concurrent programming.

1. Memory management

iPhone and iPad devices have very limited memory resources. If an application exceeds the memory limit for a single process, it is terminated by the operating system. For this reason, sound memory management plays a central role in iOS application development.

Unlike the garbage-collected Java runtime, the Objective-C and Swift runtimes on iOS use reference counting. The downside of reference counting is that it can lead to over-releases (double frees) and circular references (retain cycles) if developers are not careful.

Therefore, it is important to understand iOS memory management.

1.1 Memory Consumption

Memory consumption refers to the amount of RAM consumed by an application.

iOS's virtual memory model does not include swap memory; unlike desktop applications, the disk is not used for paging memory. The end result is that applications can only use a limited amount of RAM, which is shared not only by applications running in the foreground, but also by operating system services and background tasks performed by other applications.

Memory consumption in an application is divided into two parts: stack size and heap size.

1.1.1 stack size

Each newly created thread in the application has a dedicated stack space, which is made up of reserved memory and initially committed memory. The stack can be used freely for as long as the thread exists. The maximum stack space of a thread is small, which determines the following limitations.

  • The maximum number of methods that can be called recursively. Each method call has its own stack frame and consumes part of the stack space.
  • Maximum number of variables that can be used in a method. All variables are loaded into the method’s stack frame and consume some stack space.
  • The maximum depth of views that can be nested in the view hierarchy. Rendering a composite view recursively invokes layoutSubviews and drawRect: throughout the view hierarchy tree. If the hierarchy is too deep, a stack overflow may result.

1.1.2 heap size

All threads of each process share the same heap. The heap size an application can use is usually much smaller than the RAM value of the device.

The application has no control over the heap allocated to it. Only the operating system can manage the heap.

Using NSString, loading images, creating or using JSON/XML data, using views, and so on all consume a lot of heap memory. If your app uses a lot of images (like Flickr and Instagram apps), then you need to pay extra attention to minimizing average and peak memory usage.

It is a good idea to always keep your application's memory requirements low relative to the available RAM. Although it is not mandatory, it is strongly recommended that you use no more than 80% to 85% of the memory limit imposed by the operating system, leaving enough memory for core services. And do not ignore the didReceiveMemoryWarning callback.

1.2 Memory management model

The memory management model is based on the concept of holding relationships. When an object is created inside a method, that method holds the object. If an object is being held, its memory cannot be reclaimed.

Once all the tasks associated with an object have been completed, the holding relationship is relinquished. Acquiring and relinquishing ownership does not transfer it; it simply increases or decreases the number of holders. When the number of holders drops to zero, the object is released.

This counting of holding relationships is formally referred to as reference counting.

1.3 Autorelease Pool Blocks

Autorelease pool blocks are a tool that allows you to relinquish ownership of an object without having it reclaimed immediately. This is useful when returning objects from methods.

They also ensure that objects created within the block are released when the block completes. This is useful in scenarios where many temporary objects are created. Local autorelease pool blocks can be used to release objects as early as possible, keeping memory usage low (a loop sketch appears after the main() example below).

#import <UIKit/UIKit.h>
#import "AppDelegate.h"

int main(int argc, char * argv[]) {
    NSString * appDelegateClassName;
    @autoreleasepool {
        // Setup code that might create autoreleased objects goes here.
        appDelegateClassName = NSStringFromClass([AppDelegate class]);
    }
    return UIApplicationMain(argc, argv, nil, appDelegateClassName);
}

This code is the @autoreleasepool block in main.m. All objects in the block that received an autorelease message receive a release message at the end of the @autoreleasepool block. More precisely, each autorelease call results in one release message, which means that if an object received autorelease messages more than once, it will receive release messages more than once. This ensures that the reference count of the object drops back to the value it had before the @autoreleasepool block; if that count is 0, the object is reclaimed, keeping memory usage low.

If you look at the code for the main method, you can see that the entire application runs inside an @autoreleasepool block, which means that all autoreleased objects are eventually reclaimed without causing a memory leak.
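
As an illustration of the local pools mentioned above, here is a minimal sketch, assuming a hypothetical list of file URLs and a hypothetical processing method, of draining temporary objects on every pass through a memory-heavy loop:

for (NSURL *fileURL in largeListOfFileURLs) {   // largeListOfFileURLs is illustrative
    @autoreleasepool {
        NSError *error = nil;
        NSString *contents = [NSString stringWithContentsOfURL:fileURL
                                                      encoding:NSUTF8StringEncoding
                                                         error:&error];
        [self processContents:contents];        // hypothetical processing method
        // Autoreleased temporaries created in this iteration are released here,
        // instead of accumulating until the outer pool drains.
    }
}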

1.4 Automatic Reference Counting

ARC is a compiler feature. It evaluates the life cycle of the object in the code and automatically injects the appropriate memory management calls at compile time. The compiler also generates the appropriate dealloc methods. This means that the biggest challenges related to tracking memory usage, such as ensuring that objects are recycled in a timely manner, are solved.

  • The rules of ARC
    • The retain, release, autorelease, and retainCount methods cannot be implemented or called. This restriction applies not only to message sends but also to selectors, so [obj release] and @selector(retain) are both compile-time errors.
    • The dealloc method can be implemented, but it cannot be called. You cannot call dealloc on another object, and you cannot call it on the superclass: [super dealloc] is a compile-time error. You can, however, still call CFRetain, CFRelease, and related functions on Core Foundation objects.
    • The NSAllocateObject and NSDeallocateObject functions cannot be called. The alloc method should be used to create objects, and the runtime is responsible for deallocating them.
    • Object pointers cannot be used in C structures.
    • There is no automatic conversion between id and void * types. If a conversion is necessary, you must perform an explicit (bridged) cast.
    • Instead of NSAutoreleasePool, use @autoreleasepool blocks.
    • NSZone memory zones cannot be used.
    • Property accessor names cannot start with new, to ensure interoperability with MRC.

1.5 Reference Types

ARC introduces a new reference type: weak references. Understanding these reference types in depth is important for memory management. The following two types are supported.

  • Strong references. Strong references are the default reference type. Memory pointed to by a strong reference is not freed while the reference exists. A strong reference increases the reference count by one, extending the lifetime of the object.
  • Weak references. A weak reference is a special type of reference that does not increase the reference count and therefore does not extend the lifetime of the object. Weak references are especially important in Objective-C programming with ARC enabled.

1.5.1 Variable qualifiers

ARC provides four life cycle qualifiers for variables.

  • __strong. This is the default qualifier and does not need to be written explicitly. An object stays in memory as long as there is a strong reference to it. __strong can be understood as the ARC version of a retain call.
  • __weak. This indicates that the reference does not keep the referenced object alive. A weak reference is set to nil when no strong references to the object remain. __weak can be thought of as the ARC version of the assign qualifier, with the added safety that when the object is deallocated, __weak pointers are automatically set to nil.
  • __unsafe_unretained. Similar to __weak, except that when no strong references to the object remain, an __unsafe_unretained pointer is not set to nil. Think of it as the ARC version of the assign qualifier without the safety.
  • __autoreleasing. Used for parameters passed by reference using id *. It expects the autorelease method to be called on the value inside the method that receives the parameter.

1.5.2 Property qualifiers

The property declaration has two new holding relationship qualifiers: strong and weak. In addition, the semantics of the assign qualifier have been updated. In short, there are now six qualifiers.

  • strong. The default qualifier; it specifies a __strong relationship.
  • weak. Specifies a __weak relationship.
  • assign. This is not a new qualifier, but its meaning has changed. Prior to ARC, assign was the default holding qualifier. With ARC enabled, assign represents an __unsafe_unretained relationship.
  • copy. Implies a __strong relationship. In addition, it implies the usual copy semantics in the setter.
  • retain. Specifies a __strong relationship.
  • unsafe_unretained. Specifies an __unsafe_unretained relationship.

1.6 Zombie Objects

Zombie objects are a debugging feature used to catch memory errors.

Normally, an object is deallocated immediately when its reference count drops to zero, but this makes debugging difficult. If zombie objects are enabled, the object's memory is not freed immediately; instead, the object is marked as a zombie. Any further attempts to access it are logged, so you can track where an object is used in your code throughout its lifetime.

NSZombieEnabled is an environment variable that controls whether the Core Foundation runtime uses zombie objects. It should not be left enabled for long, because by default no object is ever actually deallocated, which makes the application use a lot of memory. In particular, NSZombieEnabled must be disabled in release builds.

To set the NSZombieEnabled environment variable, go to Product → Scheme → Edit Scheme, select Run on the left, then the Diagnostics tab on the right, and select the Zombie Objects option.

1.7 Circular Reference

The biggest pitfall of reference counting is that it cannot handle circular references (retain cycles) on its own.

1.7.1 Rules for Avoiding Circular References

  • An object should not hold (retain) its parent; it should point to its parent with a weak reference.
  • As a corollary, child objects in a hierarchy should not retain ancestor objects.
  • Connection objects should not hold their target objects; the target object plays the role of holder. Connection objects include the following types.
    • Objects that use delegates. The delegate should be treated as the target object, that is, the holder, and should not be retained.
    • Objects that have a target and an action, following from the previous rule. For example, UIButton calls an action method on its target object; the button should not retain its target.
    • The observed object in the observer pattern. The observer is the holder and watches what happens to the observed object.
  • Break circular references with a dedicated teardown method. Circular references exist in doubly linked lists and in circular linked lists. In such cases, once it is clear that the object will no longer be used (for example, when the list head goes out of scope), write code that breaks the links of the list: create a method that disconnects a node from the next node, and apply it recursively (for example, via the visitor pattern), taking care to avoid infinite recursion.

1.7.2 Common Scenarios of Circular Reference

A number of common scenarios lead to circular references. For example, using threads, timers, blocks, or delegates can all introduce circular references. We'll explore these scenarios one by one and show how to avoid the cycles.

1. Delegates

Delegates are probably the most common place to introduce circular references. When an application starts, it is common to get the latest data from the server and update the UI. Similar refresh logic is triggered when the user clicks the refresh button.

The solution is to establish strong references to operations in the delegate and weak references to the delegate in the operation.

2. Blocks

Similar to the problems caused by improper use of delegates, capturing external variables (such as self) can also cause circular references when using blocks.

The common fix is to capture a weak reference instead, for example __weak typeof(self) weakSelf = self;, and use weakSelf inside the block, as sketched below.
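
As a rough sketch of that pattern (the block property and method names are illustrative, not from the book):

__weak typeof(self) weakSelf = self;                 // do not capture self strongly
self.completionHandler = ^{                          // completionHandler is an illustrative block property
    __strong typeof(weakSelf) strongSelf = weakSelf; // promote to strong for the duration of the block
    if (strongSelf == nil) {
        return;                                      // self was already deallocated
    }
    [strongSelf refreshUI];                          // hypothetical method on self
};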

3. Threads and timers

Improper use of NSThread and NSTimer objects can also result in circular references. The typical steps for running an asynchronous operation are as follows.

  • Use dispatch_async on a global queue when no more elaborate code is needed to manage custom queues.
  • Start asynchronous execution with NSThreads when and where needed.
  • Periodically execute a piece of code with NSTimer.

Solution: NSTimer does not cause a lasting circular reference on the main thread, but it can on a child thread, so the problem usually lies with the child thread. The invalidate method must be called when the timer is no longer needed; it releases the timer's references to self and its block and removes the timer from the run loop. Note that a timer can only be removed from the run loop of the thread it was scheduled on, so invalidate must be called on that same thread.
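
A minimal sketch of scheduling and tearing down a repeating timer; the property and selector names are illustrative:

// Schedule: the timer holds a strong reference to its target (self).
self.refreshTimer = [NSTimer scheduledTimerWithTimeInterval:30.0
                                                     target:self
                                                   selector:@selector(refresh)
                                                   userInfo:nil
                                                    repeats:YES];

// Teardown: call invalidate on the thread the timer was scheduled on,
// e.g. in viewWillDisappear: or when the periodic work is no longer needed.
[self.refreshTimer invalidate];
self.refreshTimer = nil;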

1.7.3 Observers

1. Key-value observation

Objective-C allows any NSObject subclass to add an observer with the addObserver:forKeyPath:options:context: method. Observers are notified through the observeValueForKeyPath:ofObject:change:context: method, and the removeObserver:forKeyPath:context: method is used to unregister (remove) the observer. This is known as key-value observing (KVO).
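
A minimal KVO sketch, assuming a hypothetical observer class with a player property and its status key path:

// Inside the observer class's implementation file.
static void *MyObservationContext = &MyObservationContext;

- (void)startObserving {
    [self.player addObserver:self
                  forKeyPath:@"status"
                     options:NSKeyValueObservingOptionNew
                     context:MyObservationContext];
}

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context {
    if (context == MyObservationContext) {
        // React to the change, e.g. update the UI.
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}

- (void)stopObserving {
    // Unregister before either object goes away.
    [self.player removeObserver:self forKeyPath:@"status" context:MyObservationContext];
}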

This is an extremely useful feature, especially for debugging, when tracing objects shared by multiple parts of an application, such as the user interface, business logic, persistence, and networking.

Key-value observing is also useful for two-way data binding. Views can use delegates to respond to user interactions that result in model updates, and key-value observing can be used for the reverse binding, updating the UI when the model changes.

This means that the observer needs a long enough lifetime to be able to continuously monitor changes. You need to pay extra attention to the observer's lifetime for as long as the observed object is alive.

When you add a key-value observer to a target object, the lifetime of the target object should be at least as long as the observer, because only then is it possible to remove the observer from the target object. This can lead to the target object having a longer lifetime than expected and is something you need to be extra careful about.

2. Notification center

An object can register as an observer with the notification center (an NSNotificationCenter object) and receive NSNotification objects. Like key-value observing, the notification center does not hold a strong reference to the observer. This frees developers from worrying about the observer being destroyed too early or too late.

1.8 Object life and leakage

The longer an object stays in memory, the greater the chance that its memory is never cleaned up, so long-lived objects should be avoided as much as possible. Of course, you still need to keep references to key objects in your code so that you don't waste time recreating them every time. Try to release the references to these objects as soon as you are done using them.

A common form of long-lived objects is singletons. A logger is a classic example – created once, never destroyed.

Another option is to use global variables. Global variables are a terrible thing in program development.

To use global variables properly, the following conditions must be met:

  • Not held by another object;
  • Not constant;
  • There is only one for the entire application, not one per component.

If a variable does not meet these requirements, it should not be used as a global variable.

Complex object graphs reduce the chance of memory reclamation and increase the risk of an application crashing due to memory exhaustion. If the main thread is always forced to wait for child threads to perform operations (such as network or database access), the response performance of the application will be poor.

1.9 Singletons

The singleton pattern is a design pattern that restricts a class to initializing only one object. In practice, initialization is usually performed shortly after the application is started, and these objects are not destroyed.

It's not a good idea to have an object whose lifetime equals the application's. If this object is the source of other objects (such as a service locator), memory risks may arise if the locator is implemented incorrectly.

There is no doubt that singletons are necessary. But the implementation of singletons has an important impact on how they are used.

Before we fully discuss the issue of singleton introduction, let’s get a better understanding of singletons and why they are really needed.

Singletons are extremely useful, especially when a system determines that only one object instance is needed. Singletons should be used in the following situations:

  • Queuing operations (such as logging and analytics events)
  • Accessing shared resources (such as caches)
  • Resource pools (such as thread pools or connection pools)

Once created, a singleton survives until the application is closed. Loggers, analytics services, and caches are all reasonable use cases for singletons.

More importantly, singletons are typically initialized at application startup, and components intended to use singletons need to wait until they are ready. This increases the startup time of the application.

You can use the following guidelines.

  • Avoid singletons whenever possible.
  • Identify the parts that need memory, such as in-memory buffers for analytics events (used before the data is synced to the server). Look for ways to reduce that memory. Note that you need to balance the memory reduction against other costs: smaller buffers mean more server traffic.
  • Avoid object-level attributes because they live and die with the object. Use local variables whenever possible.

1.10 Best Practices

By following these best practices, you will largely avoid many headaches such as memory leaks, circular references, and large memory consumption.

  • Avoid large numbers of singletons. Specifically, do not create God objects (objects with too many responsibilities or too much state). This is an anti-pattern: a common solution that quickly backfires. Auxiliary singletons such as loggers, analytics services, and task queues are fine, but global state objects are not.
  • Use __strong for child objects.
  • Use __weak for the parent object.
  • Use __weak for objects that close the reference graph, such as delegates.
  • For numeric attributes (NSInteger, SEL, CGFloat, and so on), use the assign qualifier.
  • For block attributes, use the copy qualifier.
  • When declaring a method that takes an NSError ** parameter, use __autoreleasing and mind the correct syntax: NSError * __autoreleasing * (see the sketch after this list).
  • Avoid directly referencing external variables within a block. Weakify them outside the block and strongify them inside the block. See the libextobjc library for @weakify and @strongify.
  • Follow the following guidelines for necessary cleaning:
    • Destroy timer
    • Remove the observer (specifically, remove registration of notifications)
    • Unhook the callback (specifically, set the strongly referenced delegate to nil)
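
As a sketch of the NSError ** guideline above (the method name and parsing step are illustrative):

// Note the qualifier sits between the two stars: NSError * __autoreleasing *
- (BOOL)loadConfigurationAtPath:(NSString *)path error:(NSError * __autoreleasing *)error {
    NSData *data = [NSData dataWithContentsOfFile:path options:0 error:error];
    if (data == nil) {
        return NO;      // *error has been filled in by the failing call
    }
    // ... parse the data ...
    return YES;
}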

2. Power consumption

Every hardware module in the device consumes power. The biggest consumer of power is the CPU, but that’s only one aspect of the system. A well-written application requires careful use of power. Users tend to delete apps that use a lot of power.

In addition to the CPU, power-hungry hardware modules of interest include: network hardware, Bluetooth, GPS, the microphone, the accelerometer, the camera, the speaker, and the screen.

2.1 the CPU

The CPU is the primary hardware used by the application, whether the user is directly using it or not. Applications still consume CPU resources while operating in the background and processing push notifications.

The more the app calculates, the more power it consumes. Older devices consume more power to perform the same basic tasks. The cost of computation depends on different factors.

  • Processing of data (for example, formatting text).
  • Data size to process – A larger display allows software to present more information in a single view, but it also means processing more data.
  • Algorithms and data structures for processing data.
  • The number of times an update is performed, especially after a data update, triggering an update to the app’s state or UI. (Push notifications your app receives can also cause data updates, and you need to update the UI if the user is using your app at that time.)

There is no single rule that can reduce the number of executions in a device. Many of the rules depend on the nature of the operation. Here are some best practices that can be put to use in your application.

  • Choose the optimal algorithm for each situation. For example, when sorting, insertion sort beats merge sort for lists with fewer than about 43 elements, but quicksort should be used when the list has more than roughly 286 elements. Prefer dual-pivot quicksort over traditional single-pivot quicksort.
  • If the application receives data from the server, minimize the amount of processing that needs to be done on the client side. For example, if a text needs to be rendered on the client side, clean up the data on the server as much as possible.
  • Optimize how much processing is done ahead of time (AOT). The disadvantage of just-in-time (JIT) processing is that it forces the user to wait for the operation to complete, but overly aggressive AOT processing wastes computing resources. The right amount of AOT processing has to be chosen per application and per device.
  • Analyze power consumption. Measure power consumption on all devices of the target user. Identify high energy consumption areas and find ways to reduce energy consumption.

2.2 the network

Smart network access management makes applications more responsive and helps extend battery life. When the network is inaccessible, subsequent network requests should be deferred until the network connection is restored.

In addition, high-bandwidth operations such as video streaming should be avoided when WiFi is not available. It is well known that cellular radios (LTE, 4G, 3G, etc.) consume far more power than WiFi. One reason is that LTE devices use multiple-input, multiple-output (MIMO) technology, maintaining multiple concurrent signals to keep the LTE link up at both ends. In addition, all cellular data connections periodically scan for stronger signals.

Therefore, we need to:

  • Before performing any network operation, check that an appropriate network connection is available (a rough reachability sketch follows this list).
  • Continuously monitor network availability and provide appropriate feedback when connection status changes.
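
A rough reachability check before issuing a request might look like the following sketch, using the SystemConfiguration framework; the helper name is illustrative, and production code would normally use a notification-based reachability wrapper rather than a one-off synchronous check:

#import <Foundation/Foundation.h>
#import <SystemConfiguration/SystemConfiguration.h>

static BOOL MyAppIsHostReachable(NSString *hostName) {
    SCNetworkReachabilityRef reachability =
        SCNetworkReachabilityCreateWithName(kCFAllocatorDefault, hostName.UTF8String);
    if (reachability == NULL) {
        return NO;
    }
    SCNetworkReachabilityFlags flags = 0;
    BOOL gotFlags = SCNetworkReachabilityGetFlags(reachability, &flags);
    CFRelease(reachability);
    if (!gotFlags) {
        return NO;
    }
    BOOL reachable = (flags & kSCNetworkReachabilityFlagsReachable) != 0;
    BOOL needsConnection = (flags & kSCNetworkReachabilityFlagsConnectionRequired) != 0;
    return reachable && !needsConnection;
}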

2.3 Location manager and GPS

It's important to understand that location services use the GPS (or GLONASS) and WiFi hardware, and that they consume a lot of battery power.

Using GPS to calculate coordinates requires determining two pieces of information.

  • Time to lock
    • Each GPS satellite broadcasts a unique 1023-bit pseudorandom code every millisecond, so the data rate is about 1.023 Mbit/s. The GPS receiver chip must align itself correctly with the satellite's time slots.
  • Frequency lock
    • GPS receivers must compensate for the signal error caused by the Doppler shift that results from the receiver's motion relative to the satellite.

Computing coordinates continuously uses CPU and GPS hardware resources, so it can quickly drain the battery.

2.3.1 Optimal initialization

Common operations and properties for CLLocationManager

// Start updating the user's location
- (void)startUpdatingLocation;

// Stop updating the user's location
- (void)stopUpdatingLocation;

Note: when the startUpdatingLocation method is called, the user's location is tracked continuously, and the following delegate method is called frequently along the way:

- (void)locationManager:(CLLocationManager *)manager didUpdateLocations:(NSArray *)locations;

Two parameters play a very important role when calling the startUpdatingLocation method (a configuration sketch follows the list below).

  • distanceFilter
    • As long as the device moves more than this minimum distance, the location manager notifies its delegate via locationManager:didUpdateLocations:. The distance is measured in meters. This does not reduce the use of the GPS receiver itself, but it does reduce how often the application has to process updates, which directly reduces CPU usage.
  • desiredAccuracy
    • The accuracy parameter directly affects how many antennas are used and therefore how much battery is consumed. The appropriate accuracy level depends on the purpose of the application. In descending order of precision, accuracy is defined by the following constants.
      • kCLLocationAccuracyBestForNavigation: the best accuracy level, intended for navigation.
      • kCLLocationAccuracyBest: the best accuracy the device can provide.
      • kCLLocationAccuracyNearestTenMeters: accuracy to within roughly 10 meters. Use this value if you are not interested in every meter the user walks (for example, when measuring longer distances).
      • kCLLocationAccuracyHundredMeters: accuracy to within roughly 100 meters.
      • kCLLocationAccuracyKilometer: accuracy within a kilometer. This is useful for rough measurements between points of interest hundreds of kilometers apart.
      • kCLLocationAccuracyThreeKilometers: accuracy within 3 kilometers. Use this value when the distances involved are very large.
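
A minimal configuration sketch; the chosen accuracy and filter values are only examples, and self is assumed to adopt CLLocationManagerDelegate:

#import <CoreLocation/CoreLocation.h>

CLLocationManager *manager = [[CLLocationManager alloc] init];
manager.delegate = self;
manager.desiredAccuracy = kCLLocationAccuracyHundredMeters;  // city-block accuracy is often enough
manager.distanceFilter = 100.0;                              // only notify after moving ~100 meters
[manager requestWhenInUseAuthorization];   // requires the usage-description key in Info.plist
[manager startUpdatingLocation];
// ... later, as soon as a fix is no longer needed:
[manager stopUpdatingLocation];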

2.3.2 Turn off irrelevant features

Determine when you need to track changes in position. The startUpdatingLocation method is called when tracing is required and the stopUpdatingLocation method is called when tracing is not required.

Suppose the user needs a messaging app to share location with friends. If the application just sends the name of the city, you only need to get the location information once, and then you can turn off location tracking by calling stopUpdatingLocation.

2.3.3 Use the network only when necessary

To maximize battery efficiency, iOS keeps wireless networks off as much as possible. When an application needs to establish a network connection, iOS uses this opportunity to share a network session with the background application so that low-priority events, such as push notifications and receiving emails, can be processed.

The key point is that every time an application sets up a network connection, the network hardware stays active for a few more seconds after the transfer completes. Every burst of network communication therefore consumes a significant amount of power.

To mitigate this problem, your software needs to use the network sparingly. Instead of a continuous stream of activity, network usage should be batched into short, periodic bursts. Only then does the network hardware have a chance to shut down.

2.3.4 Background Location Service

CLLocationManager provides an alternative method for listening for location updates: startMonitoringSignificantLocationChanges helps you track movement over larger distances. The exact threshold is determined internally and is independent of distanceFilter.

In this mode, you can continue tracking movement after the application goes into the background (unless the app is a navigation app and you want detailed updates even on the lock screen). Typically, startMonitoringSignificantLocationChanges is called when the application enters the background and startUpdatingLocation is called when it returns to the foreground.

2.3.5 NSTimer, NSThread, and Location Service

Any timer or thread is suspended while the application is in the background. But if your app requests location updates in the background, it is briefly woken up every time an update arrives. During this time, threads and timers are woken up as well.

The scary part is that if you do any network operations during this time, all relevant antennas (such as WiFi and LTE/4G/3G) will be activated.

Controlling this can be tricky. The best option is to use the NSURLSession class.

2.4 screen

Screens consume a lot of power. The bigger the screen, the more power it takes. Of course, if your app is running in the foreground and interacting with users, you’re going to use the screen and consume battery.

However, there are still some options for optimizing screen usage.

2.4.1 Animation

You can follow a simple rule: run animations while the app is in the foreground and pause them as soon as the app goes into the background. In general, you can listen for the UIApplicationWillResignActiveNotification or UIApplicationDidEnterBackgroundNotification notifications to pause or stop animations, and listen for UIApplicationDidBecomeActiveNotification to resume them.
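
For example, a sketch that wires these notifications to hypothetical pauseAnimations and resumeAnimations methods:

[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(pauseAnimations)   // hypothetical method
                                             name:UIApplicationDidEnterBackgroundNotification
                                           object:nil];
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(resumeAnimations)  // hypothetical method
                                             name:UIApplicationDidBecomeActiveNotification
                                           object:nil];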

2.4.2 Video Playback

It is best to keep the screen on during video playback. You can do this using the idleTimerDisabled property of the UIApplication object: once set to YES, it prevents the screen from sleeping. Similar to animations, you can set and clear this flag in response to your app's lifecycle notifications.
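
A minimal sketch of toggling the idle timer around playback:

// When playback starts: keep the screen awake.
[UIApplication sharedApplication].idleTimerDisabled = YES;

// When playback stops (or the app resigns active): let the screen sleep again.
[UIApplication sharedApplication].idleTimerDisabled = NO;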

2.5 Other Hardware

When the application goes into the background, it should release the lock on these hardware:

  • Bluetooth
  • The camera
  • The speakers, unless it is an audio (music) application
  • The microphone

We won’t discuss the features of these hardware here, but the basic rules are the same — only interact with these hardware when applications are in the foreground, and stop interacting when applications are in the background.

Speakers and Bluetooth may be exceptions. If you're developing a music, radio, or other audio application, you'll need to continue using the speakers after the application goes into the background; don't keep the screen lit just for audio playback. Similarly, if your application has an unfinished data transfer, such as transferring a file to another device, you may need to keep using Bluetooth while the application is in the background.

2.6 Battery power and code perception

An intelligent application takes the battery level and its own state into account when deciding whether to perform a truly resource-intensive operation. Another valuable signal is whether the device is currently charging.

The battery level and charging state can be read from the UIDevice instance via its batteryLevel and batteryState properties, as sketched below.
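
A small sketch of checking these properties before starting heavy work; the 20% threshold is only an example:

UIDevice *device = [UIDevice currentDevice];
device.batteryMonitoringEnabled = YES;      // must be enabled before battery info is valid
float level = device.batteryLevel;          // 0.0–1.0, or -1.0 if unknown
BOOL charging = (device.batteryState == UIDeviceBatteryStateCharging ||
                 device.batteryState == UIDeviceBatteryStateFull);
if (level >= 0 && level < 0.2f && !charging) {
    // Low battery and not charging: ask the user before running power-intensive work.
}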

Alert the user when the remaining battery is low and request authorization to perform power-intensive operations — only with the user’s consent, of course. Always use an indicator to show the progress of long-running tasks, including upcoming calculations on the device or just downloading some content. Provide users with estimates of progress to help them decide if they need to charge their devices.

2.7 Analyze electricity usage

Use Xcode Instruments’ Energy Log.

  • Open the phone's Settings, tap Developer, and select Logging.
  • Under Instruments, select Energy and tap Start Recording. Then open your app and use it for about five minutes (depending on your needs); afterwards, return to Settings and tap Stop Recording.
  • Next, connect your iOS device to Xcode and open the Energy Log in Instruments (Xcode → Open Developer Tool → Instruments → Energy Log). Click Import Logged Data from Device on the toolbar to import the recorded energy data.
  • You can then see your app's power consumption in Instruments.

2.8 Best Practices

The following best practices ensure careful use of power. The application can achieve efficient use of electricity by following the following guidelines.

  • Minimize hardware usage. In other words, keep working with the hardware as late as possible, and stop using it as soon as you finish the task.
  • Before undertaking intensive tasks, check the battery level and charging status.
  • When the battery is low, prompt the user before performing the task, and perform it only after the user agrees.
  • Alternatively, provide a setting that lets the user define a battery threshold below which the app asks for confirmation before performing intensive operations.

3. Concurrent programming

3.1 Threads

A thread is a sequence of instructions executed at runtime.

Each process contains at least one thread. In iOS, the thread that starts the process is called the main thread. All UI elements must be created and managed on the main thread. All interrupts related to user interaction are ultimately dispatched to the main thread, where the handling code executes; IBAction methods, for example, run on the main thread.

Cocoa does not allow UI elements to be updated from other threads. This means that whenever an application performs a time-consuming operation on a background thread, such as networking or other processing, the code must switch context back to the main thread to update the UI, for example to advance a progress bar showing the progress of a task or to display the result of the processing, as in the sketch below.
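
A common sketch of this pattern with GCD; the URL and label are illustrative:

dispatch_async(dispatch_get_global_queue(QOS_CLASS_UTILITY, 0), ^{
    // Time-consuming work off the main thread.
    NSData *data = [NSData dataWithContentsOfURL:[NSURL URLWithString:@"https://example.com/feed.json"]];
    dispatch_async(dispatch_get_main_queue(), ^{
        // Back on the main thread for UI updates.
        self.statusLabel.text = data ? @"Loaded" : @"Failed";   // statusLabel is a hypothetical UILabel
    });
});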

3.2 Thread Overhead

While having multiple threads in your application may look great, each thread has overhead that can affect the application's performance. Threads not only take time to create, they also consume kernel memory as well as the application's own memory space.

3.2.1 Kernel data structure

Each thread consumes approximately 1KB of kernel memory space. This block of memory is used to store thread-specific data structures and attributes. This piece of memory is wired memory and cannot be paged.

3.2.2 stack space

The stack size of the main thread is 1 MB and cannot be changed. Secondary threads are allocated 512 KB of stack space by default. Note that the full stack is not created immediately; the actual stack grows with use. So even though the main thread has 1 MB of stack space, the stack actually in use at any point in time is likely to be much smaller.

The stack size can be changed before the thread starts. The minimum stack size is 16 KB, and the value must be a multiple of 4 KB, as in the sketch below.
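
A small sketch of setting the stack size before starting a thread; the selector is hypothetical:

NSThread *worker = [[NSThread alloc] initWithTarget:self
                                           selector:@selector(runDeepRecursion)  // hypothetical method
                                             object:nil];
worker.stackSize = 1024 * 1024;   // must be a multiple of 4 KB; 1 MB here
[worker start];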

3.2.3 Creation Time

The time it takes to start a thread after it is created ranges from 5 to 100 milliseconds, with an average of 29 milliseconds. This is a significant time cost, especially if multiple threads are started at application startup.

Threads take so long to start because of the overhead of multiple context switches.

3.3 GCD

GCD provides the following features; a short dispatch group sketch follows the list.

  • Dispatch queues, which allow tasks to execute on the main thread, concurrently, or serially.
  • Dispatch groups, which make it possible to track the completion of a set of tasks regardless of which queues the tasks run on.
  • Semaphores.
  • Barriers, which allow synchronization points to be created in concurrent dispatch queues.
  • Dispatch objects and dispatch sources, for lower-level management and monitoring.
  • Asynchronous I/O, using file descriptors or pipes.
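
As an illustration of dispatch groups, a minimal sketch that runs two independent tasks in parallel and is notified when both finish:

dispatch_group_t group = dispatch_group_create();
dispatch_queue_t queue = dispatch_get_global_queue(QOS_CLASS_DEFAULT, 0);

dispatch_group_async(group, queue, ^{ /* task A */ });
dispatch_group_async(group, queue, ^{ /* task B */ });

dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    // Both tasks have finished; safe to update the UI here.
});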

GCD also addresses thread creation and management. It helps us keep track of the total number of threads in our application without causing any leakage.

Most of the time, applications work fine using GCD alone, but there are specific cases where you should consider NSThread or NSOperationQueue instead. When your application has multiple long-running tasks that need to execute in parallel, it is best to control thread creation yourself. If blocks take too long to execute, it is possible to reach GCD's thread-pool limit of 64 threads. Wasteful use of dispatch_async and dispatch_sync should be avoided, as this can cause the application to crash. While 64 threads is a reasonably high limit for a mobile application, uncontrolled usage will exceed it sooner or later.

For GCD thread pool limits, please refer to this discussion: stackoverflow.com: number-of-threads-created-by-gcd.

3.4 Operations and Queues

Operations and operation queues are another important task-management concept in iOS programming.

NSOperation encapsulates a task and its associated data and code, while NSOperationQueue controls the execution of one or more such tasks in a first-in, first-out order.

Both NSOperation and NSOperationQueue provide ways to control concurrency. The maxConcurrentOperationCount property controls how many operations a queue executes at the same time, as in the sketch below.
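
A minimal sketch of limiting concurrency on an operation queue:

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
queue.maxConcurrentOperationCount = 2;    // at most two operations run at the same time
[queue addOperationWithBlock:^{
    // Background work for the first task.
}];
[queue addOperationWithBlock:^{
    // Background work for the second task.
}];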

Here's a quick comparison of the NSThread, NSOperationQueue, and GCD APIs.

  • GCD

    • The highest degree of abstraction.
    • There are two types of queues available out of the box: main and global.
    • More queues can be created (using dispatch_queue_create).
    • Exclusive access can be requested (using dispatch_barrier_sync and dispatch_barrier_async).
    • Thread-based management.
    • Hard limit of 64 threads.
  • NSOperationQueue

    • There is no default queue.
    • Applications manage queues created by themselves.
    • Queues are priority queues.
    • Operations can have different priorities (using the queuePriority attribute).
    • An operation can be cancelled by sending it the cancel message. Note that cancel only sets a flag; if the operation has already started, it may run to completion.
    • You can wait for an operation to complete (using the waitUntilFinished message).
  • NSThread

    • Low level construction, maximum control.
    • The application creates and manages threads.
    • The application creates and manages thread pools.
    • Apply the start thread.
    • Threads can have a priority, and the operating system schedules their execution according to that priority.
    • There is no direct API for waiting for a thread to complete. Mutex (such as NSLock) and custom code are required.

3.5 Thread-safe code

3.5.1 Atomic properties

Atomic properties are a good starting point for thread-safe application state. If a property is atomic, reads and writes of that property are atomic operations.

This is important because it prevents two threads from updating a value at the same time, which could lead to an incorrect state. The thread that is modifying a property must finish processing before any other thread can begin processing.

All properties are atomic by default. As a best practice, write atomic explicitly where it is needed; otherwise, mark the property nonatomic.

Because atomic properties have overhead, it is unwise to overuse them. For example, if you can guarantee that a property will never be accessed by more than one thread at a time, it is best to mark it nonatomic, as shown below.
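
For example (illustrative property names):

@property (strong) NSString *sharedName;            // atomic by default; safer for access from multiple threads
@property (nonatomic, strong) NSString *localName;  // cheaper; use when single-threaded access is guaranteed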

3.5.2 Locks

Locks are the basic building block for guarding access to a critical section. Atomic properties and @synchronized blocks are convenient, higher-level abstractions built on the same idea.

Here are three commonly used lock classes; a minimal NSLock sketch follows the list.

  • NSLock

    • This is a low-level lock. Once a thread acquires the lock, it enters the critical section, and no other thread is allowed to execute that section in parallel. Releasing the lock marks the end of the critical section.
    • NSLock must be unlocked from the same thread that locked it.
  • NSRecursiveLock

    • NSRecursiveLock allows the same thread to lock it multiple times before unlocking. The lock is only considered released, and becomes available to other threads, once the number of unlocks matches the number of locks.
  • NSCondition

    • There are cases where you need to coordinate execution between threads; for example, one thread may need to wait for another thread to produce a result. NSCondition can atomically release the lock and put the thread into a wait state, so that other threads can acquire the lock while the original thread waits. Another thread then signals the condition variable, which wakes the waiting thread.
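
A minimal NSLock sketch; the shared counter is illustrative:

NSLock *lock = [[NSLock alloc] init];

[lock lock];
// Critical section: only one thread at a time gets here.
self.sharedCounter += 1;    // sharedCounter is a hypothetical property
[lock unlock];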

3.5.3 Applying read/write Locks to Concurrent Read/write operations

Consider the case where multiple threads only want to read a property: synchronized code still allows just a single thread to access it at a time, so using the atomic properties described above can slow the application down.

Read/write locks allow parallel access to read-only operations, whereas write operations require mutually exclusive access. This means that multiple threads can read data in parallel, but a mutex is required to modify the data.

A GCD barrier allows a synchronization point to be created on a concurrent dispatch queue. When the barrier is encountered, GCD delays execution of the block submitted with the barrier until all blocks submitted to the queue before the barrier have finished executing. The block submitted through the barrier, called the barrier block, then executes by itself. When it completes, the queue resumes its normal behavior.

To implement this behavior, follow these steps (a sketch follows the list).

  • Create a concurrent queue.
  • Perform all read operations on this queue using dispatch_sync.
  • Perform all write operations on the same queue using dispatch_barrier_sync.
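
A minimal sketch of this reader/writer pattern; the class, queue label, and storage are illustrative:

#import <Foundation/Foundation.h>

@interface MYCache : NSObject
- (id)objectForKey:(NSString *)key;
- (void)setObject:(id)object forKey:(NSString *)key;
@end

@implementation MYCache {
    dispatch_queue_t _isolationQueue;
    NSMutableDictionary *_storage;
}

- (instancetype)init {
    if ((self = [super init])) {
        _isolationQueue = dispatch_queue_create("com.example.cache.isolation",
                                                DISPATCH_QUEUE_CONCURRENT);
        _storage = [NSMutableDictionary dictionary];
    }
    return self;
}

// Reads run in parallel with other reads.
- (id)objectForKey:(NSString *)key {
    __block id value;
    dispatch_sync(_isolationQueue, ^{
        value = self->_storage[key];
    });
    return value;
}

// Writes wait for in-flight reads and then run exclusively.
- (void)setObject:(id)object forKey:(NSString *)key {
    dispatch_barrier_sync(_isolationQueue, ^{
        self->_storage[key] = object;
    });
}
@end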

3.5.4 Using immutable Entities

What if you need to access state that is being modified? For example, what if the cache has just been cleared, but a user interaction requires part of that state immediately? Is there a more efficient mechanism for managing state than multiple components trying to update it at the same time?

Your team should follow these best practices.

  • Use immutable entities.
  • Perform updates through a dedicated updating subsystem.
  • Allows the observer to receive notifications about data changes.

3.5.5 Asynchronous is Better than Synchronous

For thread-safe, lock-free, and easy-to-maintain code, the asynchronous style is strongly recommended. If something can be done asynchronously, do it asynchronously.


GitHub: High Performance iOS App Development – Core Optimization

Related articles: High performance iOS App Development – iOS Performance