Related terms

Graphics Processing Unit (GPU)

A processor specialized for graphics computation; it executes the highly parallel stages of the rendering pipeline described below.


GLSL (OpenGL Shading Language)

GLSL is the language for writing shaders in OpenGL: short custom programs, written by developers, that run on the graphics card's GPU in place of a fixed part of the rendering pipeline, making certain stages of the pipeline programmable (for example, the view transformation and the projection transformation). GLSL shader code is divided into two parts: the Vertex Shader and the Fragment Shader.


Vertex Shader

Generally used to transform each vertex (rotation, translation, projection, etc.)

The vertex shader is an OpenGL program that computes vertex attributes. It is a per-vertex program: it executes once for each vertex, the executions run in parallel, and one vertex shader invocation cannot access data belonging to other vertices.

Typical per-vertex work includes coordinate transformation and per-vertex lighting. This is where vertex coordinates are transformed from the object's own coordinate system into the normalized (device) coordinate system.
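
As an illustration, here is a minimal OpenGL ES 2.0 vertex shader, written as a C string the way iOS OpenGL ES code commonly embeds shader source. The attribute and uniform names (a_Position, u_MVPMatrix, and so on) are illustrative, not fixed by OpenGL:

    static const char *kVertexShaderSource =
        "attribute vec4 a_Position;  // this vertex's position in its own coordinate system \n"
        "attribute vec2 a_TexCoord;  // this vertex's texture coordinate                    \n"
        "uniform   mat4 u_MVPMatrix; // model-view-projection matrix                        \n"
        "varying   vec2 v_TexCoord;  // handed on to the fragment shader                    \n"
        "void main() {                                                                      \n"
        "    // Runs once per vertex, in parallel: transform into clip space.               \n"
        "    gl_Position = u_MVPMatrix * a_Position;                                        \n"
        "    v_TexCoord  = a_TexCoord;                                                      \n"
        "}                                                                                  \n";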


Fragment Shader

Generally used to compute and fill the color of each pixel in the graphic

A fragment shader is an OpenGL program that calculates the color of a fragment (pixel). It is a per-pixel program: it executes once for each pixel, and, like the vertex shader, the invocations run in parallel.
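
A matching minimal fragment shader, again as an embedded C string; it simply samples the bound texture at the coordinate interpolated by the rasterizer (u_Texture and v_TexCoord are illustrative names):

    static const char *kFragmentShaderSource =
        "precision mediump float;                                    \n"
        "uniform sampler2D u_Texture; // the bound texture image     \n"
        "varying vec2 v_TexCoord;     // interpolated per fragment   \n"
        "void main() {                                               \n"
        "    // Runs once per pixel, in parallel: output its color.  \n"
        "    gl_FragColor = texture2D(u_Texture, v_TexCoord);        \n"
        "}                                                           \n";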


Vertex & vertex data

Vertices are the position data of a graphic's points when we draw it. This data can be stored directly in an array in memory or cached in GPU (video) memory.

Drawing usually means outlining the skeleton of the image first and then filling that skeleton with color. The vertex data is that skeleton; in OpenGL, images are built out of primitives. In OpenGL ES there are three types of primitives: points, lines, and triangles.
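
For example, a hypothetical vertex array describing a single triangle primitive, one x, y, z position per vertex in the object's own coordinate system:

    #import <OpenGLES/ES2/gl.h>

    static const GLfloat kTriangleVertices[] = {
         0.0f,  0.5f, 0.0f,   // top vertex
        -0.5f, -0.5f, 0.0f,   // bottom-left vertex
         0.5f, -0.5f, 0.0f,   // bottom-right vertex
    };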


VertexArray & VertexBuffer

The developer can choose to pass a pointer to the drawing function, so that when the draw call is made the vertex data is read directly from CPU memory; data kept in memory this way is called a vertex array. It is better to allocate a block of video memory in advance and copy the vertex data into it ahead of time; that block of video memory is called a vertex buffer.
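
A sketch of the vertex-buffer path, assuming the kTriangleVertices array above and an attribute location positionAttrib obtained earlier from glGetAttribLocation:

    #import <OpenGLES/ES2/gl.h>

    static void DrawWithVertexBuffer(GLuint positionAttrib) {
        GLuint vbo;
        glGenBuffers(1, &vbo);                            // allocate a buffer object in video memory
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(kTriangleVertices),
                     kTriangleVertices, GL_STATIC_DRAW);  // copy the vertex data in ahead of time
        glEnableVertexAttribArray(positionAttrib);
        glVertexAttribPointer(positionAttrib, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glDrawArrays(GL_TRIANGLES, 0, 3);                 // the draw now reads from video memory
    }

In practice the buffer would be created once during setup and reused across draw calls; it is created inline here only to keep the sketch short.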


Rasterization

The process of converting vertex data into fragments, turning a graphic into a raster image: the mathematical description of an object and its associated color information are converted into pixels at the corresponding screen positions and the colors used to fill those pixels.

Rasterization involves two parts of work. First, determine which squares of the integer grid in window coordinates are occupied by the primitive; second, assign a color value and a depth value to each of those squares.


Texture

It can be understood as a picture: an image uploaded to GPU memory that the fragment shader samples when coloring fragments.
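
A sketch of turning decoded bitmap data into a texture object that the fragment shader can sample; the RGBA pixel layout here is an assumption:

    #import <OpenGLES/ES2/gl.h>

    static GLuint CreateTexture(GLsizei width, GLsizei height, const void *pixels) {
        GLuint texture;
        glGenTextures(1, &texture);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);   // upload the bitmap to GPU memory
        return texture;
    }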


Image loading workflow

UIImage *image = [UIImage imageWithContentsOfFile:@"filePath"];

This loads the image from disk without decompressing it;

imageView.image = image;

The generated UIImage is then assigned to the UIImageView; this step confirms that the image will be displayed, and the CPU begins decoding it.

An implicit CATransaction then captures the change to the UIImageView's layer tree;

When the next run loop on the main thread arrives, Core Animation commits this implicit transaction, which may involve copying the image.

Depending on whether the image data is byte-aligned, this copy operation may involve some or all of the following steps:

  1. Allocate memory buffers to manage the file I/O and decompression operations.
  2. Read the file data from disk into memory.
  3. Decode the compressed image data into its uncompressed bitmap form.
  4. Core Animation then uses the uncompressed bitmap data to render the UIImageView's CALayer.
  5. The CPU calculates the image's frame and decompresses the image, then hands it over to the GPU to render.

Decompression

  1. Decompression is performed on the main thread by default.

  2. Decompressing an image is a time-consuming CPU operation, so the CPU decompresses an image only once it is certain the image will be displayed.

  3. Decompressed images are cached and are not decompressed repeatedly.

  4. An image can be forcibly decompressed ahead of time on a background thread. Forced decompression works by redrawing the image to obtain a new, already-decompressed bitmap; the core function used is CGBitmapContextCreate. A sketch follows this list.
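
A minimal sketch of that forced decompression; the function name decodedImage and the BGRA pixel format are assumptions, not the only valid choices:

    #import <UIKit/UIKit.h>

    // Redraw the image into a bitmap context so the returned UIImage is
    // backed by an already-decoded bitmap.
    static UIImage *decodedImage(UIImage *image) {
        CGImageRef cgImage = image.CGImage;
        size_t width  = CGImageGetWidth(cgImage);
        size_t height = CGImageGetHeight(cgImage);
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(
            NULL, width, height, 8, 0, colorSpace,
            kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
        CGColorSpaceRelease(colorSpace);
        if (!context) return image;                        // fall back to the undecoded image
        CGContextDrawImage(context, CGRectMake(0, 0, width, height), cgImage);
        CGImageRef decoded = CGBitmapContextCreateImage(context);
        CGContextRelease(context);
        UIImage *result = [UIImage imageWithCGImage:decoded
                                              scale:image.scale
                                        orientation:image.imageOrientation];
        CGImageRelease(decoded);
        return result;
    }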


Specific division of labor during CPU/GPU rendering

CPU: calculates the view frame, decodes the image, and hands the decoded texture bitmap to the GPU over the data bus.

GPU: texture blending, vertex transformation and calculation, pixel-fill calculation, and rendering into the frame buffer.

Clock signals: the vertical synchronization signal (V-Sync) and the horizontal synchronization signal (H-Sync).

Double buffering on iOS devices: the display system usually uses two frame buffers, which the CPU and GPU fill and display in cooperation.


Rendering process

  1. The GPU gets the coordinates of the image
  2. The coordinates are handed to the vertex shader (vertex calculation)
  3. The image is rasterized (the on-screen pixels covered by the image are determined)
  4. Fragment shader calculation (the final displayed color value is computed for each pixel)
  5. Render from the frame buffer to the screen
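
Tying the earlier snippets together, a sketch of how this pipeline is driven from OpenGL ES 2.0 (CompileShader and DrawFrame are hypothetical helpers; the shader sources are the ones sketched above, and error checking is omitted):

    #import <OpenGLES/ES2/gl.h>

    // Compile one programmable stage from its GLSL source.
    static GLuint CompileShader(GLenum type, const char *source) {
        GLuint shader = glCreateShader(type);
        glShaderSource(shader, 1, &source, NULL);
        glCompileShader(shader);
        return shader;
    }

    // Link both stages and issue the draw that sends the vertices through:
    // vertex shader -> rasterization -> fragment shader -> frame buffer.
    static void DrawFrame(void) {
        GLuint program = glCreateProgram();
        glAttachShader(program, CompileShader(GL_VERTEX_SHADER, kVertexShaderSource));
        glAttachShader(program, CompileShader(GL_FRAGMENT_SHADER, kFragmentShaderSource));
        glLinkProgram(program);
        glUseProgram(program);
        glDrawArrays(GL_TRIANGLES, 0, 3);
    }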


Conclusion

The process of rendering an image to the screen:

  1. Read the file
  2. Calculate the frame
  3. Decode the image
  4. Pass the decoded texture bitmap to the GPU over the data bus
  5. The GPU gets the image's frame (coordinates)
  6. Vertex transformation calculation
  7. Rasterization
  8. Get the color value of each pixel from the texture coordinates
  9. Render into the frame buffer
  10. Render to the screen