Preface

In client development, downloading images consumes a lot of bandwidth, and loading them consumes a lot of CPU and memory, so using images correctly is particularly important. This article introduces the principles of image display, then offers some optimization ideas along with a partial analysis of third-party image frameworks.

Types of images

All digital images can be classified as raster or vector. Raster images are commonly called bitmaps: they are composed of pixels, and each pixel carries color data such as an RGB value and transparency. Common formats include PNG, JPEG, and WebP, which are simply compressed bitmap formats. Different compression formats have different compression ratios and applicable scenarios, and their decoding performance also differs.

Vector graphics are images described by points, lines, and other geometric elements based on mathematical equations. Common formats include SVG and AI; their distinguishing feature is that they lose no quality when scaled up to any size.

Text is one of the most common examples of vector graphics.

A bitmap tells the computer: “This pixel should be light yellow, the next one dark purple, the next one pink,” and so on. A vector image instead says: “Draw a 100 by 100 square and fill it with green.” Both are eventually rendered into bitmap data for display; the difference is that a compressed bitmap is decoded into the original bitmap data according to its compression format, while a vector image obtains the original bitmap data through calculation.

Differences between bitmaps and vector images

|  | Bitmap | Vector image |
| --- | --- | --- |
| Principle | Pixels | Anchor points and lines |
| Usage scenarios | Photos; complex colors, textures, shadows, etc. | Fonts, maps, etc. |
| Scaling | Limited by resolution | Scales to any size without quality loss |
| File size | Larger, but compressible | Small |
| Performance | Good: decompress, then render directly by pixel | Poor: must be parsed and computed, then rasterized into pixels for rendering |
| Common formats | PNG, JPEG, WebP, etc. | SVG, AI, etc. |

The file sizes in the table are not absolute; they depend on the scene. A JPEG landscape photo, for example, would be far too large and impractical as a vector image.

Comparison of application scenarios of different bitmap formats

The bitmap formats commonly used in daily development compare as follows:

| Format | Advantages | Disadvantages | Applicable scenarios |
| --- | --- | --- | --- |
| GIF | Small files; supports animation and transparency; widely compatible | Only 256 colors | Simple-colored logos, icons, animated images |
| JPEG | Rich colors; high compression ratio | Lossy compression degrades image quality | Colorful pictures, photographs, etc. |
| PNG | Lossless compression; supports transparency; small files for simple images | No animation; large files for colorful images | Logos, icons, transparent images |
| WebP | Small files; supports lossy and lossless compression, animation, and transparency | Poor browser compatibility; poor codec performance | Apps and WebViews that support WebP |

At the same visual quality, WebP is generally smaller than JPEG and PNG, although PNG is usually smaller for small icon images. WebP also takes longer to decode, up to 4.4 times longer than PNG in some tests. However, since decoding generally happens in the background and results are cached, WebP's longer decode time does not cause a performance bottleneck at actual run time. On the contrary, because WebP files are smaller and take less time to download, WebP saves bandwidth and achieves faster display. For details, see: The Road of WebP Exploration.

Beyond the formats above, there are newer formats such as HEIC and AVIF, which offer better size or decoding performance but have poor system compatibility. They are not covered here; interested readers can look up the relevant material.

How are images displayed on the screen?

As we know, the screen is made up of pixels; each pixel shows a different color, and together they form the picture we see. In iOS, the system updates the screen by reading data from the Frame Buffer at a frequency of 60–120 Hz. The question is: how is the data in the Frame Buffer generated?

Views in iOS are built on UIKit. Each app has a root view, UIWindow, to which different types of views are added layer by layer to form the app's whole view hierarchy. These views are eventually processed into primitives, which the GPU then processes and composites into pixel data that is placed into the Frame Buffer, to be read when the screen refreshes. So how are images processed and finally displayed on the screen?

At WWDC 2018, Apple presented the session Image and Graphics Best Practices, which explains this topic.

Image loading is divided into three steps: reading the image data, decoding the image, and rendering the image.

1. Read image data

This refers to retrieving the image data (PNG, JPEG, etc.) from disk or the network and caching it in memory. (Note that this data is usually compressed; it must be decoded to obtain the original bitmap.)

2. Decode the image

  • Bitmap decoding: Third-party frameworks such as SDWebImage do much of this work for us, so it is often overlooked by developers. In this step, compressed formats such as PNG and JPEG are decoded into the original bitmap. The size of the original bitmap has nothing to do with the compression format, only with the image's dimensions. Take a 1024×1024 JPEG as an example: the file is only 380 KB, but each RGBA pixel occupies four bytes, so its decoded size is 1024 * 1024 * 4 bytes, i.e. 4 MB, 10.8 times the size before decoding!

iOS has a memory compression mechanism, so actual memory consumption may differ from this figure. Third-party image frameworks must calculate an image's size when caching it in memory; you can refer to that part of their code to estimate memory consumption, roughly as sketched below.
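
As a rough sketch (not copied from any particular framework), the decoded in-memory size can be estimated from the backing CGImage, where bytesPerRow already accounts for the per-pixel byte count plus any row padding:

import UIKit

// Estimate the decoded, in-memory byte count of an image.
// bytesPerRow covers the per-pixel size (e.g. 4 bytes for RGBA) plus any padding.
func estimatedDecodedByteCount(of image: UIImage) -> Int {
    guard let cgImage = image.cgImage else { return 0 }
    return cgImage.bytesPerRow * cgImage.height
}

// For the 1024×1024 RGBA example: 1024 * 4 bytes per row × 1024 rows ≈ 4 MB.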

  • Vector graphic parsing: Vector graphics are parsed through a vector parsing library, which creates a CALayer and then draws the information from the vector format into that CALayer. Although this drawing can be very complex, there is no essential difference from drawing a custom view with CALayer yourself. Note that this process consumes CPU performance and memory: in live-streaming apps, when SVGA plays complex animations, you can clearly see a large increase in CPU and memory consumption.

3. Render the image

Most of this step is GPU work, carried out through the Render Server and the subsequent compositing and rendering of views. This part has little to do with the subject of images, so I won't go into detail here; for more, I recommend the blog post iOS Rendering Principle Analysis.

How to optimize image performance

From the above, we have a general idea of the nature of images and how they appear on the screen. Based on this, we can optimize from the following angles.

Image data optimization

The most common measure is compressing images, using tools such as TinyPNG or ImageOptim to compress a project's static resources. TinyPNG is a website but provides an API, and ImageOptim is an app; scripts can be written around both to simplify and standardize this step.

The second is to choose the right image size and quality. If an image does not match its view's size, it wastes performance and bandwidth. Cloud storage platforms now generally support outputting images at a specified size: the configuration can be delivered from the backend, and the client processes the URL returned by the API based on that configuration and the view size to load an image of appropriate size and quality (see the sketch after this list). Going further, the following optimization points can also be considered:

  • Whether the screen is 2x or 3x
  • Degrading very large images (on iPad, or when the image is too large)
  • Downgrading on low-end devices
  • Reducing image quality
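
As a purely hypothetical sketch of the client-side URL processing described above: the query parameter names ("w" and "q") are invented for illustration, so substitute whatever your image CDN actually supports:

import UIKit

// Build a right-sized image URL from the view that will display it.
// The "w" (pixel width) and "q" (quality) parameters are hypothetical.
func resizedImageURL(from original: URL, for view: UIView) -> URL {
    let scale = min(UIScreen.main.scale, 2) // e.g. degrade 3x screens to 2x
    let pixelWidth = Int(view.bounds.width * scale)
    guard var components = URLComponents(url: original, resolvingAgainstBaseURL: false) else {
        return original
    }
    components.queryItems = (components.queryItems ?? []) + [
        URLQueryItem(name: "w", value: String(pixelWidth)),
        URLQueryItem(name: "q", value: "80")
    ]
    return components.url ?? original
}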

The third is to choose an appropriate image format. Converting JPEG or PNG to WebP is now common and yields a substantial reduction in image size; you can see the test data here.

Deleting unused images from the project outright is also an optimization; FengNiao is recommended for this.

Image decoding optimization

Image decoding consumes a lot of CPU performance and memory. Common optimization measures include downsampling, background decoding, and caching.

DownSampling

When an image is larger than its view, displaying the original image consumes extra CPU and memory. Imagine a photo-browsing app that reads every photo and decodes it at full size before display: decoding would be a huge drain on CPU and memory, yet the views displaying the images don't need bitmaps anywhere near that big.

This situation can be handled with downsampling, which generates a thumbnail at the size actually needed. Sample code for the process above is as follows:

import UIKit
import ImageIO

func downsample(imageAt imageURL: URL, to pointSize: CGSize, scale: CGFloat) -> UIImage {
	// Load the image data without decoding it yet
	let imageSourceOptions = [kCGImageSourceShouldCache: false] as CFDictionary
	let imageSource = CGImageSourceCreateWithURL(imageURL as CFURL, imageSourceOptions)!
	let maxDimensionInPixels = max(pointSize.width, pointSize.height) * scale
	
	// kCGImageSourceShouldCacheImmediately: true decodes the image as soon as the thumbnail is created
	let downsampleOptions = [kCGImageSourceCreateThumbnailFromImageAlways: true,
	                         kCGImageSourceShouldCacheImmediately: true,
	                         kCGImageSourceCreateThumbnailWithTransform: true,
	                         kCGImageSourceThumbnailMaxPixelSize: maxDimensionInPixels] as CFDictionary
	let downsampledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, downsampleOptions)!
	return UIImage(cgImage: downsampledImage)
}
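
For example, a hypothetical call that downsamples a local file to the size of an image view (fileURL and imageView are assumed to exist):

imageView.image = downsample(imageAt: fileURL,
                             to: imageView.bounds.size,
                             scale: UIScreen.main.scale)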

Background decoding

I analyzed three common image libraries: SDWebImage, YYWebImage, and Kingfisher, all of which support background decoding. The underlying logic is roughly the same: when an image is obtained from the network or disk, it is decoded on a background queue. The implementation details differ slightly:

| Image library | Decode queue type | Key API | Decodes by default |
| --- | --- | --- | --- |
| SDWebImage | Serial | CGBitmapContextCreate | Yes |
| YYWebImage | Serial | CGBitmapContextCreate | Yes |
| Kingfisher | Concurrent | UIGraphicsBeginImageContextWithOptions | No |

Through Debug assembly analysis, UIGraphicsBeginImageContextWithOptions also eventually calls CGBitmapContextCreate.
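
A minimal sketch of the shared pattern (not taken from any of the three libraries): decode off the main thread by force-drawing the image into a bitmap context, then hand the decoded bitmap back for display. A serial queue is used here, matching SDWebImage and YYWebImage; Kingfisher uses a concurrent one.

import UIKit

// Serial decode queue, as in SDWebImage / YYWebImage.
let decodeQueue = DispatchQueue(label: "com.example.image.decode")

func decodeInBackground(_ image: UIImage, completion: @escaping (UIImage) -> Void) {
    decodeQueue.async {
        // Drawing into a bitmap context forces decompression of the image data.
        UIGraphicsBeginImageContextWithOptions(image.size, false, image.scale)
        image.draw(at: .zero)
        let decoded = UIGraphicsGetImageFromCurrentImageContext() ?? image
        UIGraphicsEndImageContext()
        DispatchQueue.main.async { completion(decoded) }
    }
}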

In addition, during the analysis I found that the same JPEG network image, loaded and displayed in a UIImageView with different frameworks, occupies different amounts of memory, with SDWebImage occupying the most. YYWebImage and Kingfisher (with background decoding enabled) have basically the same memory usage, both lower than SDWebImage. When Kingfisher does not enable background decoding, memory usage after display is the largest. Due to limited time I have not analyzed or tested this part in more detail, so this conclusion may not be correct; readers who are interested in and familiar with this area are welcome to leave a comment. The decoding code of the different frameworks is posted below.

  • SDWebImage decoding key code:
// SDImageCoderHelper, line 228
+ (CGImageRef)CGImageCreateDecoded:(CGImageRef)cgImage orientation:(CGImagePropertyOrientation)orientation {
    // ... omitted code
    BOOL hasAlpha = [self CGImageContainsAlpha:cgImage];
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host;
    bitmapInfo |= hasAlpha ? kCGImageAlphaPremultipliedFirst : kCGImageAlphaNoneSkipFirst;
    CGContextRef context = CGBitmapContextCreate(NULL, newWidth, newHeight, 8, 0, [self colorSpaceGetDeviceRGB], bitmapInfo);
    if (!context) {
        return NULL;
    }
    
    // Apply transform
    CGAffineTransform transform = SDCGContextTransformFromOrientation(orientation, CGSizeMake(newWidth, newHeight));
    CGContextConcatCTM(context, transform);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage); // The rect is the bounding box of the CGImage, don't swap width & height
    CGImageRef newImageRef = CGBitmapContextCreateImage(context);
    CGContextRelease(context);
    
    return newImageRef;
}
  • YYWebImage decoding key code:
// YYImageCoder, line 868
CGImageRef YYCGImageCreateDecodedCopy(CGImageRef imageRef, BOOL decodeForDisplay) {
    // ... omitted code
    if (decodeForDisplay) { // decode with redraw (may lose some precision)
        CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef) & kCGBitmapAlphaInfoMask;
        BOOL hasAlpha = NO;
        if (alphaInfo == kCGImageAlphaPremultipliedLast ||
            alphaInfo == kCGImageAlphaPremultipliedFirst ||
            alphaInfo == kCGImageAlphaLast ||
            alphaInfo == kCGImageAlphaFirst) {
            hasAlpha = YES;
        }
        // BGRA8888 (premultiplied) or BGRX8888
        // same as UIGraphicsBeginImageContext() and -[UIView drawRect:]
        CGBitmapInfo bitmapInfo = kCGBitmapByteOrder32Host;
        bitmapInfo |= hasAlpha ? kCGImageAlphaPremultipliedFirst : kCGImageAlphaNoneSkipFirst;
        CGContextRef context = CGBitmapContextCreate(NULL, width, height, 8, 0, YYCGColorSpaceGetDeviceRGB(), bitmapInfo);
        if (!context) return NULL;
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef); // decode
        CGImageRef newImage = CGBitmapContextCreateImage(context);
        CFRelease(context);
        return newImage;
        
    } else {
        // ... omitted code
    }
}
  • Kingfisher (version 4.10.1) decoding key code:
// Image.swift, beginning at line 727
public func decoded(scale: CGFloat) -> Image {
    // Kingfisher splits the decoding across several methods; the key code is
    // extracted and combined here. See the source for details.
    guard let imageRef = self.cgImage else { return self }
    let size = CGSize(width: imageRef.width, height: imageRef.height)
    // beginContext(size:scale:) wraps the following call:
    UIGraphicsBeginImageContextWithOptions(size, false, scale)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return self }
    context.scaleBy(x: 1.0, y: -1.0)
    context.translateBy(x: 0, y: -size.height)
    let rect = CGRect(x: 0, y: 0, width: CGFloat(imageRef.width), height: CGFloat(imageRef.height))
    context.draw(imageRef, in: rect)
    guard let decompressedImageRef = context.makeImage() else { return self }
    // return Kingfisher.image(cgImage: decompressedImageRef, scale: scale, refImage: base)
    return Image(cgImage: decompressedImageRef, scale: scale, orientation: .up)
}


In addition, the downsampling mentioned above is also supported by the third-party frameworks. In SDWebImage, you can adjust downsampling by setting the SDWebImageContextImageThumbnailPixelSize context option:

let thumbnailPixelSize = CGSize(width: 100, height: 100)
self.sdImageView.sd_setImage(
    with: URL(string: urlStr),
    placeholderImage: nil,
    options: [],
    context: [.imageThumbnailPixelSize: thumbnailPixelSize])

In Kingfisher, the same is achieved by passing a corresponding processor in the options parameter. Of course, these third-party frameworks can do much more than that, such as Gaussian blur and rounded corners.
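
For example, with DownsamplingImageProcessor (a sketch assuming a newer Kingfisher release; this processor appeared in Kingfisher 5, after the 4.10.1 version analyzed above):

import UIKit
import Kingfisher

// Downsample to the image view's size while still caching the original data.
imageView.kf.setImage(
    with: url,
    options: [
        .processor(DownsamplingImageProcessor(size: imageView.bounds.size)),
        .scaleFactor(UIScreen.main.scale),
        .cacheOriginalImage
    ])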

Caching

Image downloading consumes a lot of bandwidth and decoding consumes a lot of CPU, so the general strategy is to store downloaded images on disk and cache decoded images in memory. The third-party frameworks encapsulate these functions well; their implementation details are not covered here.
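
For intuition only, a minimal sketch of that two-level strategy might look like the following; real frameworks add eviction policies, dedicated queues, and error handling on top:

import UIKit

// Decoded images live in an in-memory NSCache; the raw (compressed)
// downloaded data is persisted on disk.
final class TinyImageCache {
    private let memory = NSCache<NSURL, UIImage>()
    private let diskDirectory: URL

    init(diskDirectory: URL) {
        self.diskDirectory = diskDirectory
    }

    func image(for url: URL) -> UIImage? {
        if let cached = memory.object(forKey: url as NSURL) { return cached }
        let file = diskDirectory.appendingPathComponent(url.lastPathComponent)
        guard let data = try? Data(contentsOf: file),
              let image = UIImage(data: data) else { return nil }
        memory.setObject(image, forKey: url as NSURL) // promote the decoded image to memory
        return image
    }

    func store(data: Data, image: UIImage, for url: URL) {
        memory.setObject(image, forKey: url as NSURL)
        let file = diskDirectory.appendingPathComponent(url.lastPathComponent)
        try? data.write(to: file) // persist the compressed data, not the decoded bitmap
    }
}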

References

  • Image and Graphics Best Practices: official video, recommended viewing
  • Advanced Graphics and Animation for iOS Apps
  • Summary of iOS Graphics Best Practices: a very comprehensive and detailed write-up of the Image and Graphics Best Practices video
  • iOS Rendering Principle Analysis: very comprehensive and in-depth, recommended reading
  • Pixels per inch
  • Talk about some common web image formats: GIF, JPG, PNG, WebP
  • Optimize the image library to the end and improve performance by 50%! How did Kyotoki App do it?
  • In-depth Understanding of the iOS Rendering Process
  • The Road of WebP Exploration
  • What is going on with the pre-decoding used by the major image loading libraries
  • Mobile image format research

By: Starry sky