Preface:

App performance optimization has always been a topic that deserves serious attention: the more complex the project, the more important the points that need optimizing become. And when it comes to performance optimization, image and graphics processing cannot be avoided.

For the code practice sections, see:

  • Image encoding and decoding
  • Image processing practice

First, the process of an image going from disk to the screen

1. Image loading process:

  • Load the image from disk with the +imageWithContentsOfFile: method or with -[UIImage imageNamed:@"xx.jpg"]; neither decodes the image at this point. (The two methods differ in caching: imageNamed: caches the image in the system cache, while imageWithContentsOfFile: does not.)
  • The initialized UIImage is assigned to a UIImageView;
    • Then an implicit CATransaction captures the UIImageView layer tree changes;
    • When the next runloop of the main thread arrives, Core Animation commits this implicit transaction, which may copy the image depending on whether the image data is byte-aligned, etc. This copy operation may involve some or all of the following steps:
      • Allocate memory buffers to manage file IO and decompression operations;
      • Read file data from disk to memory;
      • Decoding compressed image data into uncompressed bitmap form is a very time-consuming CPU operation;
      • Finally, CALayer in Core Animation renders the UIImageView layer using uncompressed bitmap data.
      • The CPU calculates the frame of the image and decompresses it, then hands it over to the GPU to do the rendering.

      As you can see from the above steps, image decompression is a very time-consuming CPU operation, and by default it is performed on the main thread. When there are many images to load, this can seriously impact our application's responsiveness, especially on fast-scrolling lists.
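      One common mitigation is to force the decompression on a background queue by redrawing the image into a bitmap context, and only assign the result on the main thread. A minimal sketch, assuming a hypothetical helper named decodedImage and existing `path`/`imageView` variables:

          // Force decoding by drawing the image into a bitmap context off the
          // main thread. decodedImage is a hypothetical helper, not a system API.
          static UIImage *decodedImage(UIImage *image) {
              UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
              [image drawInRect:(CGRect){CGPointZero, image.size}];
              UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
              UIGraphicsEndImageContext();
              return result;
          }

          // Usage: decode on a background queue, assign on the main thread.
          dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
              UIImage *raw = [UIImage imageWithContentsOfFile:path];
              UIImage *decoded = decodedImage(raw);
              dispatch_async(dispatch_get_main_queue(), ^{
                  imageView.image = decoded;
              });
          });

      Image libraries such as SDWebImage apply essentially the same trick internally.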

2. Render the image to the screen

  • In fact, iOS devices present visual feedback to the user through the QuartzCore framework. To put it bluntly, the interface the user finally sees is the result of compositing layers, and a layer is a CALayer in QuartzCore.
  • The view we normally use in development is UIView, which is not displayed on screen directly; you can think of it as a container whose display layer is a CALayer. When we create a view object, the system automatically creates a CALayer for it; we can also add new CALayer sublayers to the view ourselves. At display time, the hardware takes all the layers and composites the final image along the Z-axis (a small sketch follows this list).
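For illustration, a minimal sketch (inside a hypothetical view controller) showing the automatically created backing layer plus a manually added sublayer:

    - (void)viewDidLoad {
        [super viewDidLoad];

        // self.view's backing CALayer was created automatically with the view.
        self.view.layer.backgroundColor = [UIColor whiteColor].CGColor;

        // We can also attach extra CALayers ourselves; at display time they are
        // composited with all the other layers along the Z-axis.
        CALayer *badge = [CALayer layer];
        badge.frame = CGRectMake(10, 10, 40, 40);
        badge.backgroundColor = [UIColor redColor].CGColor;
        badge.zPosition = 1; // drawn above sibling layers
        [self.view.layer addSublayer:badge];
    }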

3. Image rendering process

  • After the VSync signal arrives, the main thread starts doing calculations on the CPU.
  • The CPU calculates the display content: view creation, layout calculation, image decoding, text drawing, and so on.
  • GPU rendering: the CPU submits the calculated content to the GPU for transformation, compositing, and rendering.
  • The GPU submits the rendering result to the frame buffer, where it waits for the next VSync signal to be displayed on the screen. (A small sketch for observing this VSync-driven cycle follows.)
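As an aside, CADisplayLink is driven by the same VSync signal, so it offers one way to observe the frame cycle described above. A minimal sketch (onFrame: is a selector name chosen here for illustration):

    #import <QuartzCore/QuartzCore.h>

    // Fires once per VSync on the run loop it is added to.
    CADisplayLink *link = [CADisplayLink displayLinkWithTarget:self
                                                      selector:@selector(onFrame:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];

    - (void)onFrame:(CADisplayLink *)displayLink {
        // displayLink.timestamp is the time of the last VSync; if the gap
        // between two callbacks exceeds displayLink.duration, a frame was dropped.
    }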

There are more knowledge points about rendering, such as off-screen rendering; since they take too long to cover here, that part will be left to the later app performance section.

Second, graphics processing related frameworks

Having studied the theory of image loading and display above, we need to continue with the theory of graphics processing. After all, we cannot simply read every image from disk or memory with UIImage during development; performance will not allow it. At the same time, some interface displays also rely, more or less, on a graphics processing framework.

1. Summary of iOS frameworks related to graphics and image processing:

  • Interface graphics framework — UIKit
  • Core Animation framework
  • Apple packaged Graphics framework — Core Graphics & Quartz 2D
  • Traditional cross-platform graphics framework — OpenGL ES
  • Apple's newest graphics framework — Metal
  • Apple's filter framework for images — Core Image
  • Third-party filter solution suitable for video — GPUImage (not part of the system; listed here for learning)
  • Game engines — SceneKit (3D) and SpriteKit (2D)
  • Computer vision for iOS — OpenCV for iOS

There is no doubt that the frameworks most commonly used by developers are UIKit, Core Animation, Core Graphics, and Core Image. Here is a brief introduction to these frameworks, plus GPUImage:

2. Interface graphics framework — UIKit (interspersed with other graphics processing frameworks)

  • UIKit is a set of Objective-C APIs that provides Objective-C wrappers for line drawing, Quartz images, and color manipulation, as well as 2D drawing, image manipulation, and user-interface-level animation.
  • UIKit includes classes such as UIBezierPath (drawing lines, arcs, ellipses, and other shapes), UIImage (displaying images), UIColor (color manipulation), and UIFont and UIScreen (providing font and screen information). It also provides bitmap and PDF graphics contexts for drawing and related operations, along with support for standard views and for printing.
  • UIKit and Core Graphics:

    In UIKit, the UIView class automatically creates a graphics context when drawing (a CGContext from the Core Graphics layer) and sets it as the current graphics drawing environment. While drawing, you can call the UIGraphicsGetCurrentContext function to obtain the current graphics context.

    For example:

    // This code lives in a UIView subclass: it calls UIGraphicsGetCurrentContext
    // to obtain the current graphics context, adds a path to that context, and
    // finally strokes (draws) the path.
    - (void)drawRect:(CGRect)rect {
        // 1. Get the context
        CGContextRef contextRef = UIGraphicsGetCurrentContext();

        // 2. Build a path from (10, 10) to (100, 100)
        UIBezierPath *path = [UIBezierPath bezierPath];
        [path moveToPoint:CGPointMake(10, 10)];
        [path addLineToPoint:CGPointMake(100, 100)];

        // Set the stroke color
        [[UIColor whiteColor] setStroke];

        // 3. Add the path to the context
        CGContextAddPath(contextRef, path.CGPath);

        // 4. Display (stroke) the path
        CGContextStrokePath(contextRef);
    }

3. Core Animation framework

  • Core Animation is one of the most commonly used frameworks. It sits at a much lower level than UIKit and AppKit. As we know, UIView wraps a tree of CALayer objects; the Core Animation layer is the real rendering layer, and the content we see on screen is actually rendered by Core Animation.
  • Core Animation is an Objective-C API that implements a high-performance compositing engine and provides an easy-to-use programming interface to add smooth motion and dynamic feedback to the user interface.
  • Core Animation is the foundation on which UIKit implements animations and transforms, and it is responsible for view compositing. Core Animation enables custom animations and fine-grained animation control, creating complex layered 2D views that support animation and transforms.
  • OpenGL ES content can also be integrated with Core Animation content.
  • To implement animations using Core Animation, you modify a layer's property values to trigger the execution of an action object; different action objects implement different animations. Core Animation provides a set of base classes and subclasses that support different animation types (a small sketch follows this list):
    • CAAnimation is an abstract common base class. CAAnimation uses the CAMediaTiming and CAAction protocols to provide animation timing (such as period, speed, and repeat count) and action behavior (start, stop, etc.).
    • CAPropertyAnimation is an abstract subclass of CAAnimation that provides support for animating a layer property specified by a key path.
    • CABasicAnimation is a concrete subclass of CAPropertyAnimation that provides simple interpolation for a layer property.
    • CAKeyframeAnimation is also a concrete subclass of CAPropertyAnimation that provides keyframe animation support.
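    For instance, a minimal CABasicAnimation sketch (`view` is an assumption, standing in for any UIView):

        // Interpolates the layer's opacity property from 1 to 0.
        CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
        fade.fromValue = @1.0;
        fade.toValue = @0.0;
        fade.duration = 0.5;   // timing comes from the CAMediaTiming protocol
        fade.repeatCount = 2;  // repeat the animation twice
        [view.layer addAnimation:fade forKey:@"fade"];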

4. Apple packaged Graphics framework — Core Graphics & Quartz 2D

  • Core Graphics (using Quartz 2D engine)
    • Core Graphics is a C-based API that supports drawing vector graphics, lines, shapes, patterns, paths, gradients, bitmap images, and PDF content.
    • Core Graphics is also one of the commonly used frameworks. It is used to draw images at run time: developers can use Core Graphics to draw paths and fill them with colors. When you need to create an image at run time, you can draw it with Core Graphics, computing and drawing a series of image frames in real time to animate it. This is in contrast to images created before running (such as UIImage instances already loaded from disk or memory).
  • Quartz 2D
    • Quartz 2D is the 2D rendering engine inside Core Graphics. Quartz is resource- and device-independent, providing path rendering, anti-aliased rendering, gradient fills, patterns, images, transparency rendering and transparency layers, shading and shadows, color management, coordinate transformations, font rendering, offscreen rendering, and PDF document creation, display, and parsing.
    • Quartz 2D works together with all graphics and animation technologies, such as Core Animation, OpenGL ES, and UIKit. Quartz 2D draws using the painter's model.
    • The main classes offered by Quartz 2D include (a small sketch using several of these classes follows this list):
      • CGContext: represents a graphical environment;
      • CGPath: Uses vector graphics to create paths and is able to fill and stroke;
      • CGImage: used to represent a bitmap;
      • CGLayer: represents a drawing layer that can be used for repeat and offscreen drawing;
      • CGPattern: Used to represent Pattern, used for repeated drawing;
      • CGShading and CGGradient: used for rendering gradients;
      • CGColor and CGColorSpace: used for color and color space management;
      • CGFont: used for drawing text;
      • CGPDFContentStream, CGPDFScanner, CGPDFPage, CGPDFObject, CGPDFStream, CGPDFString, etc.: used to create, parse, and display PDF files.
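    As an illustration of several of these classes working together, a minimal sketch (again inside a UIView subclass's drawRect:) that draws a linear white-to-black gradient:

        - (void)drawRect:(CGRect)rect {
            CGContextRef ctx = UIGraphicsGetCurrentContext();

            // CGColorSpace: color management
            CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();

            // CGGradient: two RGBA stops, white at location 0.0 and black at 1.0
            CGFloat components[8] = {1, 1, 1, 1,  0, 0, 0, 1};
            CGFloat locations[2] = {0.0, 1.0};
            CGGradientRef gradient =
                CGGradientCreateWithColorComponents(space, components, locations, 2);

            // CGContext: draw the gradient from top to bottom
            CGContextDrawLinearGradient(ctx, gradient, CGPointZero,
                                        CGPointMake(0, rect.size.height), 0);

            CGGradientRelease(gradient);
            CGColorSpaceRelease(space);
        }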

5. Apple's filter framework for images — Core Image

  • Core Image is the counterpart of Core Graphics: Core Graphics is used to create images at run time, while Core Image is used to process images that have already been created. The Core Image framework has a series of ready-made image filters to efficiently process existing images.

  • Core Image is an image processing framework added to the iOS platform in iOS 5. It provides powerful and efficient image processing functions for operating on and analyzing pixel-based images, and it has many powerful built-in filters (currently more than 180). These filters provide a wide variety of effects and can be stacked together into filter chains to create powerful custom effects.

    • A filter is an object that has many inputs and outputs and performs some transformations. For example, a blur filter might require an input image and a blur radius to produce a properly blurred output image.
    • A filter chain is a network of filters linked together so that the output of one filter can be the input of another. In this way, elaborate effects can be achieved.
    • iOS 8 added support for custom CIFilters, so complex effects can be customized to meet business requirements.
  • The beauty of Core Image is that it is very efficient. Most of the time it does its work on the GPU, but if the GPU is busy it falls back to the CPU for processing. If the device supports Metal, Metal is used. All of this happens at the lower levels; Apple's engineers have already taken care of it for developers.

    • For example, the developer can choose CPU- or GPU-based processing as required:
      // Create a CPU-based CIContext (the default is GPU-based; the CPU
      // renderer requires an extra option). `context` is assumed to be a
      // CIContext variable declared elsewhere.
      context = [CIContext contextWithOptions:
          [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES]
                                      forKey:kCIContextUseSoftwareRenderer]];

      // Create a GPU-based CIContext.
      context = [CIContext contextWithOptions:nil];

      // Create a GPU-based CIContext from an OpenGL ES context.
      EAGLContext *eaglctx = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
      context = [CIContext contextWithEAGLContext:eaglctx];
  • Core Image's APIs fall into three categories (a small sketch tying them together follows this list):

    • CIImage: a class that holds image data. It can be created from a UIImage, from an image file, or from pixel data, including unprocessed pixel data.
    • CIFilter: represents the filter being applied, a class in this framework that describes image attributes in detail. It operates on all pixels, using key-value settings to determine the specifics of the operation.
    • CIContext: represents the context. Just as the contexts in Core Graphics and Core Data are used for rendering and for handling managed objects, the Core Image context is the concrete object through which image processing happens, and the resulting image information can be obtained from it.
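    A minimal sketch tying the three classes together (`sourceImage` is an assumption, standing in for any UIImage), blurring an image with the built-in CIGaussianBlur filter:

        // CIImage: wrap the existing image data.
        CIImage *input = [CIImage imageWithCGImage:sourceImage.CGImage];

        // CIFilter: configure the built-in Gaussian blur via key-value settings.
        CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
        [blur setValue:input forKey:kCIInputImageKey];
        [blur setValue:@8.0 forKey:kCIInputRadiusKey];

        // CIContext: render the output (GPU-based by default).
        CIContext *context = [CIContext contextWithOptions:nil];
        CGImageRef cgImage = [context createCGImage:blur.outputImage
                                           fromRect:input.extent];
        UIImage *output = [UIImage imageWithCGImage:cgImage];
        CGImageRelease(cgImage);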

6. Third-party filter solution suitable for video — GPUImage

  • GPUImage is an open-source image processing library based on OpenGL ES 2.0.
    • It requires only iOS 4.0 as a minimum, and custom filters are supported from iOS 5.0 onward. On low-end devices, GPUImage performs better.
    • GPUImage performs particularly well for video processing.
    • GPUImage's code is open source, so you can customize more complex pipeline operations for your own business needs; it is highly customizable (a small sketch follows this list).
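To give a feel for the pipeline style, a minimal still-image sketch using GPUImage's classic Objective-C API (`inputImage` is an assumption, standing in for any UIImage):

    #import "GPUImage.h"

    // Source -> filter -> capture: targets are chained into a pipeline.
    GPUImagePicture *source = [[GPUImagePicture alloc] initWithImage:inputImage];
    GPUImageSepiaFilter *sepia = [[GPUImageSepiaFilter alloc] init];
    [source addTarget:sepia];

    [sepia useNextFrameForImageCapture];
    [source processImage];
    UIImage *filtered = [sepia imageFromCurrentFramebuffer];

For video, the same addTarget: chaining applies to camera or movie sources.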
