Latency is worth thinking about. – Kevin Passat

In Chapter 13, “Efficient Drawing,” we looked at the performance issues associated with Core Graphics drawing and how to fix them. Closely related to drawing performance is image performance. In this chapter, we’ll look at how to optimize the loading and display of images from flash storage or over the network.

# Loading and Latency

The time actually spent drawing an image is not usually the performance bottleneck. Images consume a large amount of memory, and it is usually impractical to keep every image the app might need to display in memory at once, so images have to be loaded and unloaded periodically while the application runs.

Image file loading speed is limited by both the CPU and I/O (input/output). Flash storage in iOS devices is much faster than a traditional hard drive, but it is still roughly 200 times slower than RAM, so loading needs to be managed carefully to avoid noticeable delays.

Whenever possible, try to load images at points in the application lifecycle where the delay won’t be noticed, such as at launch or during a screen transition. The maximum acceptable delay between pressing a button and seeing a response is around 200ms, which is far more forgiving than the 16ms you get per frame during an animation. You can load images up front when the app launches, but launch must complete within about 20 seconds or the iOS watchdog timer will kill your app (and users will complain if launch takes longer than 2 or 3 seconds anyway).

Sometimes it’s not wise to load everything in advance. Take an image carousel containing thousands of images: users want to be able to flip through the images smoothly and quickly, but preloading them all is out of the question; it would consume far too much time and memory.

Sometimes images also need to be downloaded over a remote network connection, which takes even longer than loading from disk and may fail altogether after several seconds of trying if the connection is poor. You can’t afford to block the main thread while waiting for a load, so the work has to happen on a background thread. In the contact list example in Chapter 12, “Performance Tuning,” the images were small enough to load synchronously on the main thread. For larger images this is a bad idea, because loading takes a long time and scrolling starts to stutter. Scrolling animations are updated in the main thread’s run loop, so they are more vulnerable to CPU-related performance problems than animations that run in the render server process.

Listing 14.1 shows the code for a basic image carousel implemented with UICollectionView. The images are loaded synchronously on the main thread in the -collectionView:cellForItemAtIndexPath: method (see Figure 14.1).

Listing 14.1 An image carousel implemented with UICollectionView

#import "ViewController.h"
@interface ViewController() <UICollectionViewDataSource> 
@property (nonatomic, copy) NSArray *imagePaths;
@property (nonatomic, weak) IBOutlet UICollectionView *collectionView; 
@end
@implementation ViewController
- (void)viewDidLoad {
  [super viewDidLoad];
  //set up data
  self.imagePaths = [[NSBundle mainBundle] pathsForResourcesOfType:@"png" inDirectory:@"Vacation Photos"];
  //register cell class
  [self.collectionView registerClass:[UICollectionViewCell class] forCellWithReuseIdentifier:@"Cell"];
}
- (NSInteger)collectionView:(UICollectionView *)collectionView numberOfItemsInSection:(NSInteger)section
{
  return [self.imagePaths count];
}
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
  //dequeue cell
  UICollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"Cell" forIndexPath:indexPath];
  //add image view
  const NSInteger imageTag = 99;
  UIImageView *imageView = (UIImageView *)[cell viewWithTag:imageTag]; 
  if (!imageView) {
    imageView = [[UIImageView alloc] initWithFrame:cell.contentView.bounds];
    imageView.tag = imageTag;
    [cell.contentView addSubview:imageView];
  }
  //set image
  NSString *imagePath = self.imagePaths[indexPath.row]; 
  imageView.image = [UIImage imageWithContentsOfFile:imagePath]; 
  return cell;
}
@end


In our example the images are loaded on the main thread using UIImage’s +imageWithContentsOfFile: method. To load images on a background thread instead, we can either build a custom threaded loading solution with GCD or NSOperationQueue, or use CATiledLayer. To load images from a remote network we could use an asynchronous networking API, but that doesn’t help much for locally stored images. GCD (Grand Central Dispatch) and NSOperationQueue are similar in that both let us queue up blocks to be executed on background threads in a controlled order. NSOperationQueue has an Objective-C interface (rather than GCD’s global C functions) and provides finer-grained control over operation priorities and dependencies, but it requires more setup code.
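As a rough sketch (not part of this chapter’s examples), the same loading pattern could be expressed with NSOperationQueue; the queue, its concurrency limit, and the loadImageAtPath:completion: method name are illustrative assumptions:

- (void)loadImageAtPath:(NSString *)imagePath completion:(void (^)(UIImage *image))completion
{
  //create a shared background queue on first use (limit chosen arbitrarily)
  static NSOperationQueue *imageQueue;
  if (!imageQueue) {
    imageQueue = [[NSOperationQueue alloc] init];
    imageQueue.maxConcurrentOperationCount = 4;
  }
  [imageQueue addOperationWithBlock:^{
    //load the image off the main thread
    UIImage *image = [UIImage imageWithContentsOfFile:imagePath];
    //hand the result back on the main queue, where it is safe to touch UIKit
    [[NSOperationQueue mainQueue] addOperationWithBlock:^{
      completion(image);
    }];
  }];
}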

Listing 14.2 shows an updated -collectionView:cellForItemAtIndexPath: method that uses GCD to load images on a low-priority background queue instead of the main thread, and then switches back to the main thread to set the image on the view, since accessing views from a background thread is unsafe.

Since cells in a UICollectionView are recycled, we can’t be sure that the cell we started loading an image for hasn’t been reused for a different index by the time the load completes. To avoid setting the image on the wrong view, we tag the cell with its index before loading, and check that the tag still matches when we set the image.

Listing 14.2 Loading carousel images with GCD

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
  //dequeue cell
  UICollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"Cell" forIndexPath:indexPath];
  //add image view
  const NSInteger imageTag = 99;
  UIImageView *imageView = (UIImageView *)[cell viewWithTag:imageTag]; 
  if (!imageView) {
    imageView = [[UIImageView alloc] initWithFrame:cell.contentView.bounds];
    imageView.tag = imageTag;
    [cell.contentView addSubview:imageView];
  }
  //tag cell with index and clear current image
  cell.tag = indexPath.row;
  imageView.image = nil;
  //switch to background thread
  dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    //load image
    NSInteger index = indexPath.row;
    NSString *imagePath = self.imagePaths[index];
    UIImage *image = [UIImage imageWithContentsOfFile:imagePath];
    //set image on main thread, but only if index still matches up
    dispatch_async(dispatch_get_main_queue(), ^{
      if (index == cell.tag) {
        imageView.image = image;
      }
    });
  });
  return cell;
 }

When we run this updated version, performance is much better than the non-threaded version, but still not perfect (Figure 14.3).

We can see that the +imageWithContentsOfFile: method is no longer at the top of the CPU time trace, so we have indeed fixed the main-thread loading problem. The trouble is that we assumed the carousel’s bottleneck was loading the image files, and that isn’t quite true. Loading the image data into memory is only the first part of the problem.

The CPU time spent loading versus decoding depends on the image format. PNG files tend to take longer to load than JPEGs because the files may be larger, but they decode relatively quickly, and Xcode additionally optimizes any PNG images included in the project. JPEG files are smaller and load faster, but the decompression step takes longer, because the JPEG algorithm is more complex than the zlib-based compression used by PNG.

When an image is loaded, iOS usually defers decompressing it until the moment it is needed for drawing. This can hurt performance at the point where the image is about to be displayed, because the decompression has to happen right before drawing, and it is often the slow part.

The simplest way to avoid deferred decompression is to load the image with UIImage’s +imageNamed: method. Unlike +imageWithContentsOfFile: (and the other UIImage loading methods), this method decompresses the image immediately after loading it (and it also caches the decompressed image, as we’ll see later in this chapter). The problem with +imageNamed: is that it only works for images in the application bundle, so it is no use for user-generated content or downloaded images.

Another way to force immediate decompression is to assign the image to a layer’s contents, or to the image property of a UIImageView. Unfortunately, this has to be done on the main thread, so it doesn’t help our performance problem.

The third way is to bypass UIKit and use the ImageIO framework like this:

NSInteger index = indexPath.row;
NSURL *imageURL = [NSURL fileURLWithPath:self.imagePaths[index]]; 
NSDictionary *options = @{(__bridge id)kCGImageSourceShouldCache: @YES}; 
CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)options);
UIImage *image = [UIImage imageWithCGImage:imageRef]; 
CGImageRelease(imageRef);
CFRelease(source);

Here the kCGImageSourceShouldCache option is used when creating the image, forcing it to decompress immediately and to keep the decompressed version around for the lifetime of the image.
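For instance, the same ImageIO call could be wrapped in a background loader. This is a sketch under my own assumptions (the loadDecompressedImage function name and the queue priority are arbitrary), not code from the chapter:

#import <UIKit/UIKit.h>
#import <ImageIO/ImageIO.h>

//Sketch: load and force-decompress an image on a background queue,
//then hand it back on the main queue.
static void loadDecompressedImage(NSString *path, void (^completion)(UIImage *image))
{
  dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    UIImage *image = nil;
    NSURL *imageURL = [NSURL fileURLWithPath:path];
    NSDictionary *options = @{(__bridge id)kCGImageSourceShouldCache: @YES};
    CGImageSourceRef source = CGImageSourceCreateWithURL((__bridge CFURLRef)imageURL, NULL);
    if (source) {
      CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, (__bridge CFDictionaryRef)options);
      if (imageRef) {
        image = [UIImage imageWithCGImage:imageRef];
        CGImageRelease(imageRef);
      }
      CFRelease(source);
    }
    dispatch_async(dispatch_get_main_queue(), ^{
      completion(image);
    });
  });
}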

The final approach is to load the image with UIKit as usual, but then immediately draw it into a CGContext. An image must be decompressed before it can be drawn, so this forces the decompression to happen right away. The advantage is that the drawing (like the loading itself) can be done on a background thread, so it won’t block the UI.

There are two ways to render images ahead of time for forced decompression:

  • Draw a single pixel of the image into a one-pixel-sized CGContext. This still decompresses the whole image, but the drawing itself takes almost no time. The downside is that the resulting image is not optimized for drawing on the particular device, and iOS may discard the decompressed version again at any time to save memory.
  • Draw the entire image into a CGContext, discard the original, and replace it with a new image created from the context (see the sketch below). This involves more drawing work than the single-pixel approach, but the resulting image is optimized for drawing, and because the original compressed image has been discarded, iOS cannot quietly throw away the decompressed version to save memory.
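As a sketch of the second technique (my own helper function, not code from the book), the redraw can be wrapped in a small function that is safe to call from a background thread:

//Sketch: force decompression by redrawing the image into a new bitmap context.
//The opaque flag (YES) is an assumption that the image has no alpha channel.
UIImage *decompressedImage(UIImage *image)
{
  UIGraphicsBeginImageContextWithOptions(image.size, YES, 0);
  [image drawAtPoint:CGPointZero];
  UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
  UIGraphicsEndImageContext();
  return result ?: image;
}

Listing 14.3 applies the same idea directly inside the cell-loading code.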

Note that Apple specifically recommends not using these tricks to bypass the standard image decompression logic (which is presumably why deferred decompression is the default behavior), but if your app works with lots of large images, you sometimes have to game the system to get acceptable performance.

If you can’t use +imageNamed:, then drawing the entire image into a CGContext is probably the best option. Although you might think the redundant drawing makes this slower than other decompression techniques, the newly created image (which is optimized for the specific device) may actually draw faster than the original.

Also, if you want to display an image in a container smaller than its natural size, it is more efficient to redraw it at the target size once on a background thread than to have it rescaled every time it is displayed (although in our example the images are already the right size, so this doesn’t apply).

If we revise the -collectionView:cellForItemAtIndexPath: method to redraw the image (Listing 14.3), scrolling becomes noticeably smoother.

Listing 14.3 Force images to be decompressed for display

- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
  //dequeue cell
  UICollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"Cell" forIndexPath:indexPath];
  ...
  //switch to background thread
  dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    //load image
    NSInteger index = indexPath.row;
    NSString *imagePath = self.imagePaths[index];
    UIImage *image = [UIImage imageWithContentsOfFile:imagePath];
    //redraw image using device context
    UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, YES, 0);
    [image drawInRect:imageView.bounds];
    image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    //set image on main thread, but only if index still matches up
    dispatch_async(dispatch_get_main_queue(), ^{
      if (index == imageView.tag) {
        imageView.image = image;
      }
    });
  });
  return cell;
}

# CATiledLayer

As shown in the example in Chapter 6, “Dedicated Layers,” CATiledLayer can be used to asynchronously load and display large images without blocking user input. But we can also use CATiledLayer in our UICollectionView to load the carousel images, creating a separate CATiledLayer instance for each cell and using just a single tile per layer.

There are several potential drawbacks to using CATiledLayer this way:

  • CATiledLayer’s queuing and caching algorithms are not exposed, so we can only hope they are a good fit for our needs.
  • CATiledLayer always requires us to redraw the image into a CGContext, even though it has already been decompressed and is the same size as our cell (so it could otherwise have been used directly as the layer contents without redrawing).

Let’s see whether these drawbacks matter in practice: Listing 14.4 shows the image carousel reimplemented using CATiledLayer.

Listing 14.4 An image carousel using CATiledLayer

#import "ViewController.h"
#import <QuartzCore/QuartzCore.h>
@interface ViewController() <UICollectionViewDataSource>
@property (nonatomic, copy) NSArray *imagePaths;
@property (nonatomic, weak) IBOutlet UICollectionView *collectionView;
@end
@implementation ViewController
- (void)viewDidLoad {
  [super viewDidLoad];
  //set up data
  self.imagePaths = [[NSBundle mainBundle] pathsForResourcesOfType:@"jpg" inDirectory:@"Vacation Photos"]; 
  [self.collectionView registerClass:[UICollectionViewCell class] forCellWithReuseIdentifier:@"Cell"];
}
- (NSInteger)collectionView:(UICollectionView *)collectionView numberOfItemsInSection:(NSInteger)section
{
  return [self.imagePaths count];
}
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
  //dequeue cell
  UICollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"Cell" forIndexPath:indexPath];
  //add the tiled layer
  CATiledLayer *tileLayer = [cell.contentView.layer.sublayers lastObject];
  if (!tileLayer) {
    tileLayer = [CATiledLayer layer];
    tileLayer.frame = cell.bounds;
    tileLayer.contentsScale = [UIScreen mainScreen].scale;
    tileLayer.tileSize = CGSizeMake(cell.bounds.size.width * [UIScreen mainScreen].scale, cell.bounds.size.height * [UIScreen mainScreen].scale);
    tileLayer.delegate = self;
    [tileLayer setValue:@(indexPath.row) forKey:@"index"];
    [cell.contentView.layer addSublayer:tileLayer];
  }
  //tag the layer with the correct index and reload
  tileLayer.contents = nil;
  [tileLayer setValue:@(indexPath.row) forKey:@"index"]; 
  [tileLayer setNeedsDisplay];
  return cell;
}
- (void)drawLayer:(CATiledLayer *)layer inContext:(CGContextRef)ctx {
  //get image index
  NSInteger index = [[layer valueForKey:@"index"] integerValue];
  //load tile image
  NSString *imagePath = self.imagePaths[index];
  UIImage *tileImage = [UIImage imageWithContentsOfFile:imagePath];
  //calculate image rect
  CGFloat aspectRatio = tileImage.size.height / tileImage.size.width; 
  CGRect imageRect = CGRectZero;
  imageRect.size.width = layer.bounds.size.width;
  imageRect.size.height = layer.bounds.size.height * aspectRatio; 
  imageRect.origin.y = (layer.bounds.size.height - imageRect.size.height)/2;
  //draw tile
  UIGraphicsPushContext(ctx); 
  [tileImage drawInRect:imageRect]; 
  UIGraphicsPopContext();
}
@end

A few points need to be explained:

  • CATiledLayer’s tileSize property is measured in pixels, not points, so to keep the tile size consistent with the cell size we have to multiply by the screen scale factor.
  • In the -drawLayer:inContext: method, we need to know which indexPath the layer corresponds to so that we can load the correct image. Here we take advantage of CALayer’s KVC support for storing and retrieving arbitrary keyed values, using it to tag each layer with its index.

As a result, CATiledLayer works well: the performance problem is solved, and the amount of code is similar to the GCD implementation. The only problem is that the images visibly fade in as they arrive on screen (Figure 14.4).

This fade-in is controlled by CATiledLayer’s fadeDuration, a class method that a subclass can override to shorten or remove the fade.
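If the fade is unwanted, one option (an assumption on my part rather than something shown in this chapter) is to subclass CATiledLayer and override +fadeDuration:

#import <QuartzCore/QuartzCore.h>

//Sketch: a CATiledLayer subclass with no tile fade-in. The carousel would create a
//NoFadeTiledLayer (hypothetical name) instead of a plain CATiledLayer.
@interface NoFadeTiledLayer : CATiledLayer
@end

@implementation NoFadeTiledLayer
+ (CFTimeInterval)fadeDuration
{
  return 0.0; //disable the default crossfade for newly rendered tiles
}
@end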

Even with all the image loading and caching techniques discussed above, there are still times when loading large images on the fly is simply too expensive. As mentioned in Chapter 13, a full-screen Retina image on the iPad is 2048×1536 pixels, which at 4 bytes per pixel comes to roughly 12MB of RAM uncompressed (2048 × 1536 × 4 ≈ 12.6 million bytes). The hardware of the third-generation iPad cannot load, decompress, and display such an image within the 1/60 of a second available for a single frame. Even loading on a background thread to avoid stalling the animation doesn’t solve the problem.

We could display a placeholder image while loading, but that’s a workaround, not a fix. We can do better.

# Resolution Swap

Retina resolution (as defined by Apple’s marketing) is the pixel density at which the human eye can no longer distinguish individual pixels at a normal viewing distance. But that only applies to static images. When an image is moving, your eye becomes much less sensitive to detail, and a lower-resolution image becomes indistinguishable from a Retina-quality one.

If you need to load and display large images quickly, a simple way to fool the eye is to show a small (low-resolution) version of each image while the carousel is moving, and then swap in the full-resolution version once it stops. This means storing two copies of each image at different resolutions, but fortunately that’s common practice anyway, because of the need to support both Retina and non-Retina devices.

If no lower-resolution version of the image is available (for example, for images downloaded from a remote source or taken from the user’s photo library), you can generate one dynamically by drawing the large image into a smaller CGContext and storing the result somewhere for reuse.

To know when to switch, we can use a couple of methods from the UIScrollViewDelegate protocol (which also works for scroll-view-derived controls such as UITableView and UICollectionView):

- (void)scrollViewDidEndDragging:(UIScrollView *)scrollView willDecelerate:(BOOL)decelerate;
- (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView;

You can use these methods to detect when the carousel has stopped scrolling, and then load the high-resolution version of each visible image. As long as the low-resolution and high-resolution versions match in size and coloring, the swap should be hard to notice (make sure both versions are generated on the same machine, using the same graphics application or script).
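As a rough sketch of how this might be wired up (the -loadSmallImagesForOnscreenCells and -loadLargeImagesForOnscreenCells methods are hypothetical placeholders, not part of the chapter’s code):

//Sketch: resolution swap driven by scroll view delegate callbacks.
- (void)scrollViewWillBeginDragging:(UIScrollView *)scrollView
{
  [self loadSmallImagesForOnscreenCells]; //drop to low resolution while moving
}
- (void)scrollViewDidEndDragging:(UIScrollView *)scrollView willDecelerate:(BOOL)decelerate
{
  if (!decelerate) {
    [self loadLargeImagesForOnscreenCells]; //stopped immediately: swap in full size
  }
}
- (void)scrollViewDidEndDecelerating:(UIScrollView *)scrollView
{
  [self loadLargeImagesForOnscreenCells]; //finished decelerating: swap in full size
}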

If you have many images to display, it’s best not to load them all in advance, but to discard them as they move off screen. With selective caching, you can then avoid repeatedly reloading images as the user scrolls back and forth.

Caching is simple in principle: you store the results of expensive computations (or files loaded from flash storage or the network) in memory so they can be accessed more quickly next time. The problem is that caching is inherently a tradeoff: it consumes memory to improve performance, and since memory is a finite resource, you can’t cache everything.

It isn’t always obvious what to cache, when, and for how long. Fortunately, in most cases iOS does a good job of caching images for us.

We mentioned earlier that using [UIImage imageNamed:] to load an image has the advantage of decompressing the image immediately instead of waiting until it is drawn. But [UIImage imageNamed:] has another significant benefit: it automatically caches the decompressed image in memory, even if you don’t keep any references to it yourself.

Using [UIImage imageNamed:] is the easiest and most efficient way to load an app’s main interface images (icons, buttons, background images, and so on). The same mechanism is used for images referenced in nib files, so you often benefit from it implicitly.

But [UIImage imageNamed:] isn’t appropriate in every case. It’s optimized for user interface images, not for every type of image your application needs to display. Sometimes you need to implement your own caching mechanism, for the following reasons:

  • [UIImage imageNamed:] only works for images in the application resource bundle, but most applications also display images fetched from the network or taken with the user’s camera, so for those it’s no help at all.
  • The [UIImage imageNamed:] cache is intended for the application’s interface images (buttons, backgrounds, and so on). If you use it for large images such as photos, iOS is very likely to evict them to save memory, and performance suffers when you navigate back because the images have to be reloaded. Using a separate cache for the carousel images decouples their lifecycle from that of the rest of the app’s imagery.
  • The caching mechanism behind [UIImage imageNamed:] is not public, so you have little control over it. For example, there is no way to check whether an image is already cached before loading it, no way to set the cache size, and no way to evict an image when it is no longer needed.

Building a well-behaved cache turns out to be surprisingly hard. Phil Karlton once said, “There are only two hard things in computer science: cache invalidation and naming things.”

If you want to write your own image cache, how do you do that? Let’s take a look at what’s involved:

  • Select an appropriate cache key – The cache key uniquely identifies each cached image. If images are created at runtime, you need some scheme for telling them apart. For our image carousel this is easy: we can use the image’s file name or its index in the collection.
  • Cache ahead – If data is expensive to generate or load, you may want to load and cache it before it is first needed. Preloading logic is application-specific, but in our case it’s easy to implement, because for a given position and scroll direction we know exactly which images are about to appear.
  • Cache invalidation – If an image file changes, how does the cache find out that its copy is stale? This is the genuinely hard problem Phil Karlton was talking about, but fortunately it doesn’t arise when loading static images from the application resources. For user-supplied images (which may be modified or overwritten), a good approach is to store a timestamp with the cached image and compare it against the file’s modification date.
  • Cache reclamation – When memory runs out, how do you decide which cached items to discard first? You may need to write your own algorithm for this, but fortunately Apple provides a convenient general-purpose solution to cache reclamation: NSCache.

# NSCache

NSCache behaves very much like NSDictionary. You insert and retrieve objects using the -setObject:forKey: and -objectForKey: methods, respectively. The difference is that, unlike a dictionary, NSCache automatically discards its stored objects when the system runs low on memory.

The algorithm NSCache uses to decide when to discard an object is not documented, but you can use the -setCountLimit: method to set a maximum number of cached objects, and -setObject:forKey:cost: to assign a cost to each stored object as a hint.

The cost can be used to indicate how expensive an object would be to recreate. If you assign a high cost to large images, the cache knows that they are more expensive to store and will only discard them when doing so makes a real difference. You can also cap the total cost of the cache with the -setTotalCostLimit: method.
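For example, a cache for decompressed carousel images might be configured like this (the limits and the bytes-per-pixel cost estimate are arbitrary illustrative choices, and imagePath is a placeholder):

NSCache *imageCache = [[NSCache alloc] init];
imageCache.countLimit = 50;                   //keep at most ~50 images
imageCache.totalCostLimit = 50 * 1024 * 1024; //and/or cap the total cost at ~50MB

UIImage *image = [UIImage imageWithContentsOfFile:imagePath];
//rough estimate of the decompressed size, used as the cost hint
NSUInteger cost = (NSUInteger)(image.size.width * image.size.height * 4);
[imageCache setObject:image forKey:imagePath cost:cost];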

NSCache is a general-purpose caching solution, and we could certainly build a custom cache class better suited to our carousel (for example, one that decides which images to release first based on their distance from the currently displayed index). But NSCache is sufficient for our current needs; there’s no point optimizing prematurely.

Let’s extend the carousel example with image caching and preloading, and see whether it performs better (see Listing 14.5).

Listing 14.5 Adding a cache

#import "ViewController.h"
@interface ViewController() <UICollectionViewDataSource>
@property (nonatomic, copy) NSArray *imagePaths;
@property (nonatomic, weak) IBOutlet UICollectionView *collectionView;
@end
@implementation ViewController
- (void)viewDidLoad {
  [super viewDidLoad];
  //set up data
  self.imagePaths = [[NSBundle mainBundle] pathsForResourcesOfType:@"png" inDirectory:@"Vacation Photos"];
  [self.collectionView registerClass:[UICollectionViewCell class] forCellWithReuseIdentifier:@"Cell"];
}
- (NSInteger)collectionView:(UICollectionView *)collectionView numberOfItemsInSection:(NSInteger)section {
  return [self.imagePaths count]; 
}
- (UIImage *)loadImageAtIndex:(NSUInteger)index {
  //set up cache
  static NSCache *cache = nil;
  if (!cache) {
    cache = [[NSCache alloc] init];
  }
  //if already cached, return immediately
  UIImage *image = [cache objectForKey:@(index)]; 
  if (image)
  {
    return [image isKindOfClass:[NSNull class]] ? nil : image;
  }
  //set placeholder to avoid reloading image multiple times
  [cache setObject:[NSNull null] forKey:@(index)];
  //switch to background thread
  dispatch_async( dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
    //load image
    NSString *imagePath = self.imagePaths[index];
    UIImage *image = [UIImage imageWithContentsOfFile:imagePath];
    //redraw image using device context
    UIGraphicsBeginImageContextWithOptions(image.size, YES, 0); 
    [image drawAtPoint:CGPointZero];
    image = UIGraphicsGetImageFromCurrentImageContext(); 
    UIGraphicsEndImageContext();
    //set image for correct image view
    dispatch_async(dispatch_get_main_queue(), ^{ //cache the image
      [cache setObject:image forKey:@(index)];
      //display the image
      NSIndexPath *indexPath = [NSIndexPath indexPathForItem: index inSection:0]; 
      UICollectionViewCell *cell = [self.collectionView cellForItemAtIndexPath:indexPath]; 
      UIImageView *imageView = [cell.contentView.subviews lastObject]; 
      imageView.image = image;
    });
  });
  //not loaded yet
  return nil; 
}
- (UICollectionViewCell *)collectionView:(UICollectionView *)collectionView cellForItemAtIndexPath:(NSIndexPath *)indexPath
{
  //dequeue cell
  UICollectionViewCell *cell = [collectionView dequeueReusableCellWithReuseIdentifier:@"Cell" forIndexPath:indexPath];
  //add image view
  UIImageView *imageView = [cell.contentView.subviews lastObject]; 
  if (!imageView) {
    imageView = [[UIImageView alloc] initWithFrame:cell.contentView.bounds];
    imageView.contentMode = UIViewContentModeScaleAspectFit;
    [cell.contentView addSubview:imageView];
  }
  //set or load image for this index
  imageView.image = [self loadImageAtIndex:indexPath.item];
  //preload image for previous and next index
  if (indexPath.item < [self.imagePaths count] - 1) {
    [self loadImageAtIndex:indexPath.item + 1]; 
  }
  if (indexPath.item > 0) {
    [self loadImageAtIndex:indexPath.item - 1];
  }
  return cell; 
}
@end

That works much better! There is still the occasional pause when an image first appears during scrolling, but it’s rare. Caching means that we load each image less often. The preloading logic here is quite crude (it could take scrolling speed and direction into account), but it’s a big improvement over the uncached version.

Image loading performance depends on a tradeoff between the time spent loading a larger file and the time spent decompressing a smaller one. Many of Apple’s documents state that PNG is the best format for all images on iOS, but this is outdated and rather misleading advice.

The lossless compression algorithm used by PNG can be decompressed faster than JPEG, but because of flash access times there is little practical difference in the loading step itself.

Listing 14.6 shows the code for a simple benchmark app that measures how long it takes to load and display images of different sizes. To keep the test fair, we measure the combined load and draw time of each image, so that decompression is included. In addition, each image is loaded and drawn repeatedly for one second and the time is averaged, which makes the results more accurate.

Listing 14.6 An image-loading benchmark

#import "ViewController.h"
static NSString *const ImageFolder = @"Coast Photos"; 
@interface ViewController () <UITableViewDataSource>
@property (nonatomic, copy) NSArray *items;
@property (nonatomic, weak) IBOutlet UITableView *tableView;
@end
@implementation ViewController
- (void)viewDidLoad {
  [super viewDidLoad];
  //set up image names
  self.items = @[@"2048x1536"The @"1024x768"The @"512x384"The @"256x192"The @"128x96"The @"64x48"The @"32x24"];
}
- (CFTimeInterval)loadImageForOneSec:(NSString *)path {
  //create drawing context to use for decompression
  UIGraphicsBeginImageContext(CGSizeMake(1, 1));
  //start timing
  NSInteger imagesLoaded = 0;
  CFTimeInterval endTime = 0;
  CFTimeInterval startTime = CFAbsoluteTimeGetCurrent(); 
  while (endTime - startTime < 1)
  {
    //load image
    UIImage *image = [UIImage imageWithContentsOfFile:path]; 
    //decompress image by drawing it
    [image drawAtPoint:CGPointZero];
    //update totals
    imagesLoaded ++;
    endTime = CFAbsoluteTimeGetCurrent(); 
  }
  //close context
  UIGraphicsEndImageContext();
  //calculate time per image
  return (endTime - startTime) / imagesLoaded; 
}
- (void)loadImageAtIndex:(NSUInteger)index {
  //load on background thread so as not to
  //prevent the UI from updating between runs
  dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0), ^{
    //setup
    NSString *fileName = self.items[index];
    NSString *pngPath = [[NSBundle mainBundle] pathForResource:fileName ofType:@"png" inDirectory:ImageFolder];
    NSString *jpgPath = [[NSBundle mainBundle] pathForResource:fileName ofType:@"jpg" inDirectory:ImageFolder];
    NSInteger pngTime = [self loadImageForOneSec:pngPath] * 1000;
    NSInteger jpgTime = [self loadImageForOneSec:jpgPath] * 1000; 
    //updated UI on main thread
    dispatch_async(dispatch_get_main_queue(), ^{
      //find table cell and update
      NSIndexPath *indexPath = [NSIndexPath indexPathForRow:index inSection:0];
      UITableViewCell *cell = [self.tableView cellForRowAtIndexPath:indexPath];
      cell.detailTextLabel.text =[NSString stringWithFormat:@"PNG: %03ims JPG: %03ims",pngTime, jpgTime];
    }); 
  });
}
- (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section
{
  return [self.items count];
}
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
  //dequeue cell
  UITableViewCell *cell = [self.tableView dequeueReusableCellWithIdentifier:@"Cell"];
  if (!cell) {
    cell = [[UITableViewCell alloc] initWithStyle:UITableViewCellStyleValue1 reuseIdentifier:@"Cell"];
  }
  //set up cell
  NSString *imageName = self.items[indexPath.row]; 
  cell.textLabel.text = imageName;   
  cell.detailTextLabel.text = @"Loading...";
  //load image
  [self loadImageAtIndex:indexPath.row]; 
  return cell;
}
@end

The PNG and JPEG compression algorithms are suited to different kinds of image: JPEG works well for noisy, photographic content, while PNG is better for flat colors, sharp lines, and simple gradients. To make the benchmark fair, we ran it with a couple of different images: a photograph and a rainbow gradient. The JPEG images were encoded with Photoshop’s default 60% “high quality” setting. The results are shown in Figure 14.5.

So for our earlier image carousel application, JPEG would have been a good choice. With JPEG, some of the threading and caching strategies might not have been necessary at all.

But JPEG isn’t right for every situation. If an image needs transparency, or loses too much detail when compressed, you have to consider other formats. Apple has specifically optimized PNG and JPEG handling on iOS, so in most cases one of these two is the right choice and other formats should be reserved for special cases. For images that need transparency, one useful trick is to combine a PNG that carries the compressed alpha channel with a JPEG that carries the compressed RGB data. This gives quality, file size, and loading performance comparable to plain PNG or JPEG while still supporting transparency. Listing 14.7 shows the code for loading the color and mask images separately and compositing them at runtime.

Listing 14.7 Composite image created from PNG mask and JPEG

#import "ViewController.h"
@interface ViewController ()
@property (nonatomic, weak) IBOutlet UIImageView *imageView; 
@end
@implementation ViewController
- (void)viewDidLoad {
  [super viewDidLoad]; 
  //load color image
  UIImage *image = [UIImage imageNamed:@"Snowman.jpg"]; 
  //load mask image
  UIImage *mask = [UIImage imageNamed:@"SnowmanMask.png"];
  //convert mask to correct format
  CGColorSpaceRef graySpace = CGColorSpaceCreateDeviceGray(); 
  CGImageRef maskRef = CGImageCreateCopyWithColorSpace(mask.CGImage, graySpace);   
  CGColorSpaceRelease(graySpace);
  //combine images
  CGImageRef resultRef = CGImageCreateWithMask(image.CGImage, maskRef); 
  UIImage *result = [UIImage imageWithCGImage:resultRef]; 
  CGImageRelease(resultRef);
  CGImageRelease(maskRef);
  //display result
  self.imageView.image = result; 
}
@end

Using two separate files for each image is a little cumbersome. The JPNG library (https://github.com/nicklockwood/JPNG) provides an open source, reusable implementation of this technique, and adds direct support for loading such images via +imageNamed: and +imageWithContentsOfFile:. iOS also supports other formats besides JPEG and PNG, such as TIFF and GIF, but their quality and performance are considerably worse, so there is rarely any reason to use them.

At some point, however, Apple quietly added support for the JPEG 2000 image format to iOS, so most people don’t know about it. It isn’t even well supported by Xcode: JPEG 2000 images aren’t displayed in Interface Builder.

JPEG 2000 images do work, though, both on devices and in the simulator. They offer better quality than JPEG and have good support for alpha channels. But JPEG 2000 images load and display significantly more slowly than PNG or JPEG, so the format is only a good choice when file size matters more to you than performance.

Keep an eye on JPEG 2000, though, as it may well receive performance improvements in later iOS versions; for now, the hybrid image approach gives better quality for a comparable file size.

Every iOS device on the market today uses an Imagination Technologies PowerVR graphics chip as its GPU. The PowerVR chip supports an image compression format called PVRTC (PowerVR Texture Compression).

Unlike most image formats available on iOS, PVRTC images can be drawn directly to the screen without being decompressed first. This means there is no decompression step after loading, and the image occupies far less memory than it would in other formats (as little as one sixteenth of the size of the equivalent uncompressed bitmap, depending on the compression settings).

However, PVRTC still has some disadvantages:

  • Despite consuming less RAM once loaded, PVRTC files are larger than the equivalent JPEG files, and sometimes even PNG files (depending on the content), because the compression algorithm is optimized for decoding performance, not file size.
  • PVRTC images must be square, with sides that are a power of two. If the source image doesn’t meet these requirements, it has to be stretched or padded with empty space when converting to PVRTC.
  • The quality isn’t great, especially for images with transparency; the result usually looks like a heavily compressed JPEG.
  • PVRTC images cannot be drawn with Core Graphics, displayed in an ordinary UIImageView, or used directly as layer contents. You have to load them as OpenGL textures and map them onto a pair of triangles for display in a CAEAGLLayer or a GLKView.
  • Setting up an OpenGL scene to draw PVRTC images carries significant overhead. Unless you plan to draw all your images into the same context, this can wipe out the benefits of PVRTC.
  • PVRTC uses an asymmetric compression algorithm. It decompresses almost instantly, but compression takes a long time; on a fast modern desktop Mac, a single large PVRTC image can take a minute or more to generate, so you can’t realistically generate them on the fly on an iOS device.

If you are willing to use OpenGL and can live with pre-generating your images, PVRTC offers extremely efficient loading performance compared to the other available formats. For example, a 2048×2048 PVRTC image can be loaded and displayed on the main thread in under 1/60 of a second (and that’s big enough to fill a Retina iPad screen), avoiding many of the technical complications of threading and caching.

Xcode includes command-line tools such as texturetool for generating PVRTC images, but they are awkward to use (they are buried inside the Xcode application bundle) and rather limited. A better option is Imagination Technologies’ PVRTexTool, which can be downloaded for free from http://www.imgtec.com/powervr/insider/sdkdownloads.

Once the PVRTexTool is installed, you can convert an appropriately sized PNG image into a PVRTC file on your terminal using the following command:

`/Applications/Imagination/PowerVR/GraphicsSDK/PVRTexTool/CL/OSX_x86/PVRTexToolCL -i {input_file_name}.png -o {output_file_name}.pvr -legacypvr -p -f PVRTC1_4 -q pvrtcbest`

The code in Listing 14.8 shows the steps needed to load and display a PVRTC image (adapted from the CAEAGLLayer example code in Chapter 6).

Listing 14.8 loads and displays PVRTC images


#import "ViewController.h" 
#import <QuartzCore/QuartzCore.h> 
#import <GLKit/GLKit.h>
@interface ViewController ()
@property (nonatomic, weak) IBOutlet UIView *glView; 
@property (nonatomic, strong) EAGLContext *glContext; 
@property (nonatomic, strong) CAEAGLLayer *glLayer; 
@property (nonatomic, assign) GLuint framebuffer; 
@property (nonatomic, assign) GLuint colorRenderbuffer; 
@property (nonatomic, assign) GLint framebufferWidth; 
@property (nonatomic, assign) GLint framebufferHeight; 
@property (nonatomic, strong) GLKBaseEffect *effect; 
@property (nonatomic, strong) GLKTextureInfo *textureInfo;
@end
@implementation ViewController
- (void)setUpBuffers {
  //set up frame buffer
  glGenFramebuffers(1, &_framebuffer); 
  glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer);
  //set up color render buffer
  glGenRenderbuffers(1, &_colorRenderbuffer); 
  glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderbuffer); 
  glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, _colorRenderbuffer); 
  [self.glContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:self.glLayer]; 
  glGetRenderbufferParameteriv( GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &_framebufferWidth); 
  glGetRenderbufferParameteriv( GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &_framebufferHeight);
  //check success
  if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Failed to make complete framebuffer object: %i", glCheckFramebufferStatus(GL_FRAMEBUFFER));
  } 
}
- (void)tearDownBuffers {
  if (_framebuffer) {
    //delete framebuffer
    glDeleteFramebuffers(1, &_framebuffer);
    _framebuffer = 0; 
  }
  if (_colorRenderbuffer) {
    //delete color render buffer
    glDeleteRenderbuffers(1, &_colorRenderbuffer);
    _colorRenderbuffer = 0; 
  }
}
- (void)drawFrame {
  //bind framebuffer & set viewport
  glBindFramebuffer(GL_FRAMEBUFFER, _framebuffer); 
  glViewport(0, 0, _framebufferWidth, _framebufferHeight);
  //bind shader program
  [self.effect prepareToDraw];
  //clear the screen
  glClearColor(0.0, 0.0, 0.0, 0.0);
  glClear(GL_COLOR_BUFFER_BIT);
  //set up vertices
  GLfloat vertices[] = {-1.0f, -1.0f, -1.0f, 1.0f, 1.0f, 1.0f, 1.0f, -1.0f};
  //set up texture coordinates
  GLfloat texCoords[] = {0.0f, 1.0f, 0.0f, 0.0f, 1.0f, 0.0f, 1.0f, 1.0f};
  //draw triangle
  glEnableVertexAttribArray(GLKVertexAttribPosition);
  glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
  glVertexAttribPointer(GLKVertexAttribPosition, 2, GL_FLOAT, GL_FALSE, 0, vertices);
  glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, texCoords);
  glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
  //present render buffer
  glBindRenderbuffer(GL_RENDERBUFFER, _colorRenderbuffer);
  [self.glContext presentRenderbuffer:GL_RENDERBUFFER];
}
- (void)viewDidLoad {
  [super viewDidLoad];
  //set up context
  self.glContext =[[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
  [EAGLContext setCurrentContext:self.glContext];
  //set up layer
  self.glLayer = [CAEAGLLayer layer]; 
  self.glLayer.frame = self.glView.bounds; 
  self.glLayer.opaque = NO;
  [self.glView.layer addSublayer:self.glLayer]; 
  self.glLayer.drawableProperties = @{kEAGLDrawablePropertyRetainedBacking: @NO, kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8};
  //load texture
  glActiveTexture(GL_TEXTURE0);
  NSString *imageFile = [[NSBundle mainBundle] pathForResource:@"Snowman" ofType:@"pvr"];
  self.textureInfo = [GLKTextureLoader textureWithContentsOfFile:imageFile options:nil error:NULL];
  //create texture
  GLKEffectPropertyTexture *texture = [[GLKEffectPropertyTexture alloc] init];
  texture.enabled = YES;
  texture.envMode = GLKTextureEnvModeDecal;
  texture.name = self.textureInfo.name;
  //set up base effect
  self.effect = [[GLKBaseEffect alloc] init];
  self.effect.texture2d0.name = texture.name;
  //set up buffers
  [self setUpBuffers];
  //draw frame
  [self drawFrame]; 
}
- (void)viewDidUnload {
  [self tearDownBuffers];
  [super viewDidUnload]; 
}
- (void)dealloc {
  [self tearDownBuffers];
  [EAGLContext setCurrentContext:nil]; 
}
@end

As you can see, this is far from trivial. If you are interested in using PVRTC images in a regular application (as opposed to an OpenGL-based game), take a look at the GLView library (https://github.com/nicklockwood/GLView). It provides a simple GLImageView class that reimplements most of the functionality of UIImageView but can display PVRTC images without requiring you to write any OpenGL code.

In this chapter, we examined the performance issues associated with loading and decompressing images, and explored a range of solutions. In Chapter 15, “Layer Performance,” we will discuss performance issues related to layer rendering and compositing.