
Preface

Today, we’re going to talk about rendering and how it works natively on iOS.

How is iOS native rendering different from Flutter? To answer that, we first need to understand how native rendering itself works.

Rendering principle

First of all, let’s talk about how rendering works. The phone interface we see is drawn by the CPU and GPU working together. The two were designed with different responsibilities: the GPU is a processor that specializes in graphics operations and has much stronger parallel computing capability, and it puts the computed graphics results onto the screen. Rendering is the process of computing and converting graphics data in memory until it is finally displayed on the screen.

In the rendering process, the CPU handles the computation of the content to be rendered, such as view creation, layout calculation, and image decoding. Once the content has been computed, it is handed to the GPU for rendering.
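One of those CPU-side costs, image decoding, is commonly moved off the main thread. Below is a minimal sketch, not from the original article, that assumes a local file URL and forces decompression on a background queue by drawing the image once:

```swift
import UIKit

// A minimal sketch: forcing image decoding on a background queue so the
// CPU-side decode cost does not block the main thread during rendering.
// `imageURL` is a hypothetical local file URL.
func loadDecodedImage(from imageURL: URL,
                      completion: @escaping (UIImage?) -> Void) {
    DispatchQueue.global(qos: .userInitiated).async {
        guard let data = try? Data(contentsOf: imageURL),
              let image = UIImage(data: data) else {
            DispatchQueue.main.async { completion(nil) }
            return
        }
        // Drawing the image once forces decompression here, off the main
        // thread, instead of lazily at display time.
        let renderer = UIGraphicsImageRenderer(size: image.size)
        let decoded = renderer.image { _ in image.draw(at: .zero) }
        DispatchQueue.main.async { completion(decoded) }
    }
}
```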

The GPU’s basic job is to convert 3D coordinates into 2D coordinates and then into actual pixels. Concretely, this can be divided into six stages: the vertex shader (processes the vertices that determine the shape), shape assembly (assembles vertices into lines and primitives), the geometry shader (can emit additional primitives, such as more triangles), rasterization (determines which screen pixels are covered), the fragment shader (colors each pixel), and tests and blending (checks depth and blends by transparency).

Native rendering

The native rendering stack can be divided into the following levels: UIKit → Core Animation → OpenGL ES / Core Graphics → Graphics Hardware.

  • UIKit: the UI foundation library that our upper-layer development is built on.
  • Core Animation: responsible for graphics rendering and the animation infrastructure; Core Animation hands most of the actual drawing work off to the graphics hardware.
  • OpenGL ES: a subset of OpenGL, a set of APIs dedicated to GPU drawing.
  • Core Graphics: based on the Quartz advanced graphics engine. It provides low-level, lightweight 2D rendering with unmatched output fidelity. You can use this framework to handle path-based drawing, transformations, color management, off-screen rendering, patterns, gradients and shadows, image data management, image creation and image masking, as well as PDF document creation, display, and parsing (see the sketch after this list).
  • Graphics Hardware: the GPU hardware that ultimately executes the drawing commands.
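As a small illustration of the Core Graphics level, here is a hedged sketch of path-based drawing with a gradient in a custom view’s draw(_:); the view class, colors, and metrics are arbitrary choices:

```swift
import UIKit

// A sketch of path-based Core Graphics drawing: clip to a rounded-rect path,
// then fill it with a linear gradient. All values are arbitrary examples.
final class BadgeView: UIView {
    override func draw(_ rect: CGRect) {
        guard let ctx = UIGraphicsGetCurrentContext() else { return }

        // Path-based drawing: build a rounded rectangle and clip to it.
        let path = UIBezierPath(roundedRect: rect.insetBy(dx: 4, dy: 4),
                                cornerRadius: 12)
        ctx.addPath(path.cgPath)
        ctx.clip()

        // A simple two-stop linear gradient filling the clipped region.
        let colors = [UIColor.systemBlue.cgColor, UIColor.systemTeal.cgColor]
        let locations: [CGFloat] = [0, 1]
        if let gradient = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(),
                                     colors: colors as CFArray,
                                     locations: locations) {
            ctx.drawLinearGradient(gradient,
                                   start: CGPoint(x: rect.midX, y: rect.minY),
                                   end: CGPoint(x: rect.midX, y: rect.maxY),
                                   options: [])
        }
    }
}
```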

Updating and rendering the native interface can be divided into the following steps.

The first step is to update the view tree, and with it the layer tree. When you change a view’s frame, update the UIView/CALayer hierarchy, or manually call setNeedsLayout/setNeedsDisplay on a UIView/CALayer, the app marks the affected views as needing an update, and the layer tree is updated accordingly; a minimal sketch of these triggers follows.
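The sketch below, assuming a hypothetical `avatarView`, shows the three triggers named above:

```swift
import UIKit

// A minimal sketch of the update triggers; `avatarView` is hypothetical.
func nudgeAvatar(_ avatarView: UIView) {
    avatarView.frame.origin.y += 20   // changing the frame updates the layer tree
    avatarView.setNeedsLayout()       // marks the view dirty; layoutSubviews runs
                                      // once on the next runloop turn (calls coalesce)
    avatarView.setNeedsDisplay()      // marks the layer contents dirty; draw(_:)
                                      // runs once on the next turn
}
```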

In the second step, the CPU calculates what to display, including view creation (setting Layer properties), layout calculation, view rendering (creating the Layer’s backing image), and image decoding and conversion. When the runloop enters the BeforeWaiting or Exit state, the registered observers are notified, the layers are packaged, and the packaged data is sent to the Render Server, a separate process responsible for rendering.
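To make that runloop timing concrete, the hedged sketch below registers an observer for the same BeforeWaiting and Exit activities; the observer order is an assumed value, and the actual commit is performed by Core Animation’s own observer, not by this code:

```swift
import UIKit

// A sketch: observing the runloop activities at which Core Animation flushes
// pending layer changes. The order value 0xFFFFFF is an assumed low priority
// so the observer fires after other registered observers.
func installCommitObserver() {
    let observer = CFRunLoopObserverCreateWithHandler(
        kCFAllocatorDefault,
        CFRunLoopActivity.beforeWaiting.rawValue | CFRunLoopActivity.exit.rawValue,
        true,       // repeat on every runloop turn
        0xFFFFFF    // assumed ordering value
    ) { _, activity in
        // By this point, the UI changes made during this turn have been
        // recorded; Core Animation packages the layers and sends them to
        // the Render Server.
        print("Runloop activity: \(activity.rawValue)")
    }
    CFRunLoopAddObserver(CFRunLoopGetMain(), observer, .commonModes)
}
```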

In the third step, once the data reaches the Render Server it is deserialized to obtain the layer tree. Based on the layer order, RGBA values, and layer frames in the layer tree, occluded layers are filtered out, the layer tree is converted into a render tree, and the render tree’s information is passed to OpenGL ES/Metal. These steps are collectively called the Commit Transaction.
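From user code, the transaction boundary can also be made explicit. A minimal sketch, assuming a hypothetical CALayer, showing how Core Animation batches layer changes into one commit:

```swift
import QuartzCore

// A sketch: grouping layer changes in an explicit transaction. Core Animation
// batches these property changes and commits them to the Render Server when
// the transaction ends. `layer` is a hypothetical CALayer.
func fade(_ layer: CALayer) {
    CATransaction.begin()
    CATransaction.setAnimationDuration(0.25)
    layer.opacity = 0.5      // recorded in the layer tree
    layer.cornerRadius = 8   // batched into the same commit
    CATransaction.commit()
}
```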

In the fourth step, the Render Server invokes the GPU. The GPU performs the six stages mentioned above: vertex shader, shape assembly, geometry shader, rasterization, fragment shader, and tests and blending. Once these six stages are complete, the data computed by the CPU and GPU is displayed on every pixel of the screen. The entire rendering process is shown below:

As shown in the figure above, after the CPU has done its work, the rendering content is passed to the Render Server; once the layer tree has been converted into the render tree, it is handed to the GPU through the OpenGL interface, and after GPU processing the result is displayed on the screen. During the Commit Transaction, layout calculation is where an overridden layoutSubviews method takes effect and where addSubview calls add views to the hierarchy, while view drawing is where an overridden drawRect method takes effect. These are all everyday techniques in iOS development. Moving a view, removing a view, hiding or showing a view, or calling the setNeedsDisplay or setNeedsDisplayInRect methods triggers an interface update and kicks off the rendering process, as in the sketch below.
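A minimal sketch of those hooks in a hypothetical custom view:

```swift
import UIKit

// A sketch of the common hooks above: layoutSubviews for layout calculation,
// draw(_:) for view drawing, addSubview for hierarchy updates.
final class CardView: UIView {
    private let titleLabel = UILabel()

    override init(frame: CGRect) {
        super.init(frame: frame)
        addSubview(titleLabel)   // updates the view (and layer) hierarchy
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func layoutSubviews() {
        super.layoutSubviews()
        // Layout calculation runs here during the commit.
        titleLabel.frame = bounds.insetBy(dx: 16, dy: 16)
    }

    override func draw(_ rect: CGRect) {
        // View drawing renders into the layer's backing image.
        UIColor.systemGray5.setFill()
        UIBezierPath(rect: rect).fill()
    }
}
```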

Render Server

The rendering pipeline driven by the Render Server has the following six stages (a Metal sketch follows the list):

  1. Vertex Shader. The input to this stage is vertex data, for example an array of three 3D coordinates representing a triangle. The main purpose of the vertex shader is to transform 3D coordinates into different 3D coordinates, and it can also do some basic processing on vertex attributes.

  2. Shape Assembly. This stage takes all the vertices output by the vertex shader as input and assembles them into the specified primitive shape; in this example, a triangle. A primitive tells the GPU how to interpret the vertex data: as points, lines, or triangles.

  3. Geometry Shader. This stage takes a set of vertices in primitive form as input and can construct new (or other) primitives by emitting new vertices. In this example, it generates a second triangle.

  4. Rasterization. This stage maps primitives to the corresponding pixels on the final screen, producing fragments. A fragment is all the data needed to render a single pixel.

  5. Fragment Shader. Before this stage runs, clipping discards all fragments that fall outside the view to improve execution efficiency; the fragment shader then computes the final color of each remaining fragment.

  6. Tests and Blending. This stage checks the fragment’s depth value (its z coordinate) to determine whether the pixel is in front of or behind other objects and whether it should be discarded. It also checks the alpha value (which defines an object’s transparency) and blends objects accordingly. So even though a pixel’s output color has been computed in the fragment shader, the final pixel color may be completely different when multiple triangles are rendered.
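To tie these stages to a concrete API, here is a hedged Metal sketch (Metal has replaced OpenGL ES on modern iOS). The shader function names vertexMain and fragmentMain and the pixel format are assumptions; shape assembly and rasterization are fixed-function and configured implicitly:

```swift
import Metal

// A sketch mapping the six stages onto a Metal render pipeline. The shader
// function names "vertexMain" / "fragmentMain" are hypothetical.
func makePipelineState(device: MTLDevice) throws -> MTLRenderPipelineState {
    guard let library = device.makeDefaultLibrary() else {
        fatalError("No default Metal library in this bundle (sketch assumption)")
    }
    let descriptor = MTLRenderPipelineDescriptor()

    // Stage 1: the vertex shader transforms incoming vertex data.
    descriptor.vertexFunction = library.makeFunction(name: "vertexMain")
    // Stage 5: the fragment shader colors each rasterized fragment.
    descriptor.fragmentFunction = library.makeFunction(name: "fragmentMain")

    // Stage 6 (blending): blend by alpha against what is already in the target.
    let attachment = descriptor.colorAttachments[0]!
    attachment.pixelFormat = .bgra8Unorm
    attachment.isBlendingEnabled = true
    attachment.rgbBlendOperation = .add
    attachment.sourceRGBBlendFactor = .sourceAlpha
    attachment.destinationRGBBlendFactor = .oneMinusSourceAlpha

    // Stages 2-4 (assembly, geometry, rasterization) are fixed-function here;
    // the primitive type is chosen at draw time, e.g.
    // encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3).
    return try device.makeRenderPipelineState(descriptor: descriptor)
}
```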

In closing

This article has focused on the principles of rendering and the iOS native rendering process. In the next article we’ll talk about the currently popular Flutter rendering pipeline, how it differs from native rendering, and some other front-end rendering approaches. Stay tuned!