UIKit

UIKit is the most commonly used framework in iOS development. You build an interface by configuring the layout and related properties of UIKit components. UIKit itself, however, cannot actually draw content onto the screen. It is mainly responsible for responding to user interaction events (UIView inherits from UIResponder), and events are generally passed through the view tree layer by layer along the responder chain.
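
As a hedged sketch of this event path, a UIView subclass (the class name TapView is hypothetical) can override the UIResponder touch methods; calling super lets unhandled events continue along the responder chain:

```
#import <UIKit/UIKit.h>

// Hypothetical view that logs touches; UIView inherits the
// touch-handling methods from UIResponder.
@interface TapView : UIView
@end

@implementation TapView

- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(UIEvent *)event {
    CGPoint p = [touches.anyObject locationInView:self];
    NSLog(@"Touch began at %@", NSStringFromCGPoint(p));
    // Pass the event up the responder chain (superview, view controller, ...).
    [super touchesBegan:touches withEvent:event];
}

@end
```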

Core Animation

Core Animation evolved from Layer Kit, and animation is just the tip of the Core Animation iceberg. Core Animation is a compositing engine whose job is to combine the different pieces of visual content on the screen as quickly as possible. This visual content is broken down into separate layers (CALayer), which are stored in a hierarchy called the layer tree. Essentially, the CALayer is the basis for everything the user can see on the screen.
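
For illustration, a minimal sketch of adding a standalone CALayer to the layer tree (the geometry, color, and host view someView are placeholders):

```
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Build a layer and insert it into an existing view's layer tree.
CALayer *badge = [CALayer layer];
badge.frame = CGRectMake(20, 20, 44, 44);
badge.cornerRadius = 22;
badge.backgroundColor = [UIColor redColor].CGColor;

// someView is an assumed existing UIView; its backing layer becomes
// the new layer's parent in the layer tree.
[someView.layer addSublayer:badge];
```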

Core Graphics

Core Graphics is based on the Quartz advanced drawing engine and is primarily used to draw images at run time. Developers can use this framework to handle path-based drawing, transformations, color management, off-screen rendering, patterns, gradients and shadows, image data management, image creation, image masking, and PDF document creation, display, and parsing.

When developers need to create images at run time, they can use Core Graphics to draw them. The alternative is to create images before run time, for example producing them in advance with Photoshop and importing them directly into the application. When an interface instead requires a series of image frames to be computed and drawn in real time, such as in an animation, Core Graphics is needed.
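
A minimal sketch of such run-time drawing with Core Graphics (CircleView and the shape drawn are arbitrary choices for illustration):

```
#import <UIKit/UIKit.h>

@interface CircleView : UIView
@end

@implementation CircleView

- (void)drawRect:(CGRect)rect {
    // The bitmap is computed at run time instead of shipping a
    // pre-made image asset.
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextFillEllipseInRect(ctx, CGRectInset(self.bounds, 8, 8));
}

@end
```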

Core Image

Core Image is the opposite of Core Graphics: while Core Graphics is used to create images at run time, Core Image is used to process images created before run time. The Core Image framework ships with a series of ready-made image filters that can process existing images efficiently. For the most part, Core Image does its work on the GPU; if the GPU is busy, it falls back to the CPU.
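
As a hedged sketch of filter use (the input CIImage and the blur radius are placeholders):

```
#import <CoreImage/CoreImage.h>

// inputImage is an assumed existing CIImage.
CIFilter *blur = [CIFilter filterWithName:@"CIGaussianBlur"];
[blur setValue:inputImage forKey:kCIInputImageKey];
[blur setValue:@8.0 forKey:kCIInputRadiusKey];

// A CIContext performs the actual rendering, on the GPU when available.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef result = [context createCGImage:blur.outputImage
                                  fromRect:inputImage.extent];
// Caller owns the CGImage and must release it with CGImageRelease.
```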

OpenGL ES

OpenGL ES is a subset of OpenGL. As mentioned in Graphics Rendering Principles, OpenGL is a third-party standard whose functions are implemented internally by the corresponding GPU manufacturers. For an introduction to OpenGL/OpenGL ES, refer to the article series OpenGL/OpenGL ES Entry: Graphics API and Terminology.

The relationship between UIView and CALayer

The CALayer is the basis for virtually everything the user sees on the screen. The reason views in UIKit can render visual content is that every UI view in UIKit actually has an associated CALayer inside it, known as the backing layer. Because of this one-to-one relationship, the view hierarchy forms a view tree, and the corresponding CALayer hierarchy forms a layer tree.

The view’s job is to create and manage layers, so that when a subview is added to or removed from the view hierarchy, its associated layer does the same in the layer tree, ensuring that the view tree and the layer tree stay structurally consistent.
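
A small sketch illustrating this synchronization (no manual layer management is involved):

```
#import <UIKit/UIKit.h>

UIView *parent = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 200, 200)];
UIView *child = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 50, 50)];

// Adding the subview also inserts its backing layer into the layer tree.
[parent addSubview:child];
NSLog(@"%d", [parent.layer.sublayers containsObject:child.layer]); // logs 1
```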

Why does iOS provide two parallel hierarchies based on UIView and CALayer?

The reason for this is separation of responsibilities, which also avoids a lot of duplicated code. Events and user interaction differ in many ways between iOS and Mac OS X: a multi-touch user interface is fundamentally different from mouse-and-keyboard interaction. That is why iOS has UIKit and UIView, while Mac OS X has AppKit and NSView. They are similar in functionality but differ significantly in implementation.

In fact, there are not two hierarchies but four, each playing a different role: in addition to the view tree and the layer tree, there are also the presentation tree and the render tree.

So why can a CALayer render visual content? Because a CALayer is essentially a texture, and textures are a fundamental basis of GPU image rendering.

As mentioned in Graphics Rendering Principles, a texture is essentially an image, so the CALayer also has a contents property that points to a cache called the backing store, which can hold a bitmap. In iOS, the image stored in this cache is called the backing image.

The graphics rendering pipeline supports both drawing from vertices (which are processed in the pipeline to produce a texture) and rendering directly with an existing texture (image). Accordingly, in actual development there are two ways to draw an interface: manual drawing, or using images.

There are two ways to do this in iOS:

  • Use an image: Contents Image
  • Manual drawing: Custom Drawing

Contents Image

Contents Image means configuring the image through the contents property of the CALayer. However, the contents property is declared as type id, so you can assign it any value and the app will still compile. In practice, though, if the value assigned to contents is not a CGImage, the resulting layer will be blank.

Why, then, is contents declared as id rather than CGImage? Because on Mac OS X this property works for both CGImage and NSImage values, while on iOS it works only for CGImage.

Essentially, the contents property points to an area of cache, called the backing store, that holds the bitmap data.
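
A minimal sketch of supplying the backing image through contents ("badge.png" is a placeholder asset name; note the bridge cast, since contents is declared as id):

```
#import <UIKit/UIKit.h>

UIImage *image = [UIImage imageNamed:@"badge.png"];

CALayer *layer = [CALayer layer];
layer.frame = CGRectMake(0, 0, image.size.width, image.size.height);
// contents is typed id, but on iOS only a CGImage is actually displayed.
layer.contents = (__bridge id)image.CGImage;
```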

Custom Drawing

Custom Drawing refers to using Core Graphics to draw the backing image directly. In actual development, custom drawing is commonly done by subclassing UIView and implementing the -drawRect: method.

Although -drawRect: is a UIView method, it is actually the underlying CALayer that performs the redrawing and stores the resulting image. The fundamentals of how -drawRect: custom drawing produces the backing image are as follows:

  • A UIView has an associated layer, namely a CALayer.
  • A CALayer has an optional delegate property that conforms to the CALayerDelegate protocol. The UIView, as the CALayer's delegate, implements the CALayerDelegate protocol.
  • When a redraw is needed, e.g., via -drawRect:, the CALayer asks its delegate for a backing image to display.
  • The CALayer first tries to call the -displayLayer: method, in which the delegate can set the contents property directly.
```
- (void)displayLayer:(CALayer *)layer;
```
  • If the delegate does not implement -displayLayer:, the CALayer will attempt to call -drawLayer:inContext:. Before calling this method, the CALayer creates an empty backing image (its size determined by bounds and contentsScale) and a Core Graphics drawing context in preparation for drawing the backing image; the context is passed in as the ctx parameter.
```
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx;
```
  • Finally, the image produced by the Core Graphics drawing is stored in the backing store.
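
Putting these steps together, a hedged sketch of a standalone layer drawn through its delegate (DrawingDelegate is a hypothetical class; a UIView should only ever act as the delegate of its own backing layer):

```
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

@interface DrawingDelegate : NSObject <CALayerDelegate>
@end

@implementation DrawingDelegate

// Called because -displayLayer: is not implemented; ctx is the
// Core Graphics context the CALayer prepared for the backing image.
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    CGContextSetLineWidth(ctx, 4);
    CGContextSetStrokeColorWithColor(ctx, [UIColor redColor].CGColor);
    CGContextStrokeEllipseInRect(ctx, CGRectInset(layer.bounds, 4, 4));
}

@end

// Usage: the layer never redraws itself automatically;
// -setNeedsDisplay triggers the delegate callback above.
DrawingDelegate *drawer = [[DrawingDelegate alloc] init]; // keep a strong reference
CALayer *layer = [CALayer layer];
layer.frame = CGRectMake(0, 0, 100, 100);
layer.delegate = drawer; // delegate is weak, hence the strong reference above
[layer setNeedsDisplay];
```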

Core Animation Pipeline

Here’s how the Core Animation pipeline works:

Render Server

The app submits the rendering task and related data to the Render Server through IPC. After the Render Server processes the data, it passes the data to the GPU. Finally, the GPU drives the iOS display hardware to show the image.

The detailed process of Core Animation pipeline is as follows:

  • First, the app handles events (Handle Events), such as a user tap. During this process, the app may need to update the view tree, and correspondingly, the layer tree is updated.
  • Second, the app uses the CPU to compute the display content, for example: view creation, layout calculation, image decoding, and text drawing. After computing the content, the app packages the layers and, on the next RunLoop, sends them to the Render Server; this completes one Commit Transaction operation.
  • The Render Server mainly executes OpenGL and Core Graphics related programs and invokes the GPU.
  • The GPU completes the physical rendering of the image.
  • Finally, the GPU displays the image on the screen via the frame buffer, video controller, and other related components.

Executed serially, the steps above would take well over 16.67 ms, so to support the screen's 60 FPS refresh rate, these steps have to be broken down and executed in parallel in a pipelined manner.

Commit Transaction

In the Core Animation pipeline, the final step before the app hands off to the Render Server, Commit Transaction, can be broken down into four sub-steps:

  • Layout
  • Display
  • Prepare
  • Commit

Layout

The Layout phase is mainly about view construction, including overriding the layoutSubviews method, populating subviews with addSubview:, and so on.
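
A minimal sketch of Layout-phase work inside a UIView subclass (iconView is a hypothetical subview):

```
// Inside a hypothetical UIView subclass that owns an iconView subview.
- (void)layoutSubviews {
    [super layoutSubviews];
    // Layout calculations run on the CPU during the Layout phase.
    self.iconView.frame = CGRectInset(self.bounds, 8, 8);
}
```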

Display

The Display phase is mainly about drawing the view; at this point only the primitive data for the image is set. Overriding a view's drawRect: method allows you to customize the display of the UIView by drawing a backing image inside drawRect:; this process uses the CPU and memory.

Prepare

The Prepare phase is an additional step that generally handles operations such as image decoding and format conversion.
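
Decoding is normally deferred until display time; a common technique, sketched below under the assumption that it runs on a background queue, forces decoding early by drawing the image into an offscreen bitmap context:

```
#import <UIKit/UIKit.h>

// Force-decode a UIImage by rendering it offscreen, trading memory
// for avoiding decode work when the image is first displayed.
UIImage *ForceDecodedImage(UIImage *image) {
    UIGraphicsBeginImageContextWithOptions(image.size, NO, image.scale);
    [image drawAtPoint:CGPointZero];
    UIImage *decoded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return decoded;
}
```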

Commit

The Commit phase mainly packages the layers and sends them to the Render Server. This process is performed recursively, because layers and views are organized as trees.

Principles of Animation Rendering

iOS animation rendering is also based on the Core Animation pipeline described above. Here we focus on the execution flow between the app and the Render Server.

In daily development, when an animation is not especially complex, UIView Animation is generally used. iOS divides its processing into the following three stages:

  • Step 1: call the animateWithDuration:animations: method (see the sketch after this list).
  • Step 2: perform the Layout, Display, Prepare, and Commit steps inside the animation block.
  • Step 3: the Render Server renders the animation frame by frame.
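
A minimal sketch of Steps 1 and 2 (someView and the target values are placeholders); the Render Server then interpolates the intermediate frames in Step 3:

```
[UIView animateWithDuration:0.25 animations:^{
    // The block's changes go through Layout/Display/Prepare/Commit once;
    // per-frame interpolation is handled by the Render Server.
    someView.alpha = 0.5;
    someView.frame = CGRectOffset(someView.frame, 0, 100);
}];
```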

See the blog post iOS Graphics Rendering Principles.