How does the page we develop actually get rendered to the screen?

Step 1: The CPU does its work: object creation and destruction, object property adjustment, layout calculation, text calculation and typesetting, image format conversion and decoding, and image drawing (CoreGraphics).

This involves three frameworks: CoreAnimation, CoreGraphics, and CoreImage.
CoreAnimation: a compositing engine whose job is to combine the different visual elements on screen as quickly as possible. These visual elements are broken down into separate CALayers; essentially, a CALayer is the basis of everything we see.
CoreGraphics: mainly used to draw images at run time.
CoreImage: mainly used to create images before run time.
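To make the division of labor concrete, here is a small Swift sketch (the function names and filter values are only illustrative, not from the article): CoreGraphics draws an image on the CPU at run time, CoreImage describes image processing with filters, and the result ends up as the contents of a CALayer, which is what CoreAnimation actually composites.

```swift
import UIKit
import CoreImage

// A minimal sketch contrasting the three frameworks above.
// The function names and filter values are illustrative, not from the article.

// CoreGraphics: draw an image at run time on the CPU.
func drawCircleImage(size: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: size)
    return renderer.image { ctx in
        ctx.cgContext.setFillColor(UIColor.systemBlue.cgColor)
        ctx.cgContext.fillEllipse(in: CGRect(origin: .zero, size: size))
    }
}

// CoreImage: describe image processing as filters; the work can run on the GPU.
func blurred(_ image: UIImage, radius: Double) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIGaussianBlur") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(radius, forKey: kCIInputRadiusKey)
    guard let output = filter.outputImage,
          let cgImage = CIContext().createCGImage(output, from: input.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}

// CoreAnimation: whatever we draw ends up as the contents of a CALayer,
// and CALayers are what the compositor puts on screen.
func makeBadgeLayer() -> CALayer {
    let layer = CALayer()
    layer.frame = CGRect(x: 0, y: 0, width: 100, height: 100)
    layer.contents = drawCircleImage(size: layer.bounds.size).cgImage
    return layer
}
```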

The following figure shows the CoreAnimation pipeline. The first stage, Application, mainly consists of steps 1.1 and 1.2.

1.1 Handle Events: this stage mainly handles events, such as user tap events.
1.2 Commit Transaction: this step is divided into four stages:
1.2.1 Layout: includes layoutSubviews and addSubview.
1.2.2 Display: mainly for view drawing, the drawRect method.
1.2.3 Prepare: mainly for decoding and converting images.
1.2.4 Commit: mainly for packaging the layers and sending them to the Render Server.
1.3 Render Server: this step mainly uses OpenGL ES or Metal to schedule the GPU for rendering.
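A rough Swift sketch of where the Commit Transaction stages touch app code, assuming a plain UIView subclass (BadgeView and update(badge:) are hypothetical names): layoutSubviews corresponds to the Layout stage, an overridden draw(_:) triggers the Display stage, and the pending work is sent to the Render Server when the transaction commits.

```swift
import UIKit

// A rough sketch of where the Commit Transaction stages touch app code.
// BadgeView and update(badge:) are hypothetical names used only for illustration.
final class BadgeView: UIView {
    private let label = UILabel()

    override init(frame: CGRect) {
        super.init(frame: frame)
        addSubview(label)              // addSubview feeds the Layout stage (1.2.1)
        label.text = "42"
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // 1.2.1 Layout: runs when the layer needs layout (frame changes, setNeedsLayout).
    override func layoutSubviews() {
        super.layoutSubviews()
        label.frame = bounds.insetBy(dx: 8, dy: 8)
    }

    // 1.2.2 Display: overriding draw(_:) backs the layer with a CPU-drawn bitmap.
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.setFillColor(UIColor.systemRed.cgColor)
        context.fillEllipse(in: rect)
    }
}

// 1.2.4 Commit: pending layout/display work is packaged and sent to the Render Server
// when the transaction commits; normally the implicit transaction commits at the end
// of the current run loop, the explicit one here only makes the boundary visible.
func update(badge: BadgeView) {
    CATransaction.begin()
    badge.setNeedsLayout()     // schedules the Layout stage for the next commit
    badge.setNeedsDisplay()    // schedules the Display stage for the next commit
    CATransaction.commit()
}
```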

Step 2: GPU (texture rendering). The data produced in step 1 cannot be turned into an image directly, so the GPU has to convert it (vertex data – vertex shader – primitive assembly – rasterization – fragment shader – render buffer). The converted frame data is stored in the frame buffer, commonly referred to as video memory. Apple uses double buffering (a front frame buffer and a back frame buffer).
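Below is a minimal Metal sketch in Swift of that conversion, assuming a shader library containing vertex and fragment functions named basic_vertex and basic_fragment (those names are hypothetical): vertex data is fed into the pipeline, the GPU rasterizes the primitives, runs the fragment shader per pixel, and the finished frame is presented into the drawable's buffer.

```swift
import MetalKit

// A minimal sketch (not a full app) of vertex data flowing through a Metal render
// pipeline: vertex shader -> rasterization -> fragment shader -> the drawable's buffer.
// "basic_vertex" and "basic_fragment" are assumed to exist in the app's .metal source.
final class TriangleRenderer: NSObject, MTKViewDelegate {
    private let commandQueue: MTLCommandQueue
    private let pipelineState: MTLRenderPipelineState
    private let vertexBuffer: MTLBuffer

    init?(view: MTKView) {
        guard let device = MTLCreateSystemDefaultDevice(),
              let queue = device.makeCommandQueue(),
              let library = device.makeDefaultLibrary() else { return nil }
        commandQueue = queue

        // Vertex data: three 2D positions for a single triangle.
        let vertices: [Float] = [ 0.0,  0.5,
                                 -0.5, -0.5,
                                  0.5, -0.5]
        guard let buffer = device.makeBuffer(bytes: vertices,
                                             length: vertices.count * MemoryLayout<Float>.size,
                                             options: []) else { return nil }
        vertexBuffer = buffer

        // The pipeline descriptor wires the vertex and fragment shaders together.
        let descriptor = MTLRenderPipelineDescriptor()
        descriptor.vertexFunction = library.makeFunction(name: "basic_vertex")
        descriptor.fragmentFunction = library.makeFunction(name: "basic_fragment")
        descriptor.colorAttachments[0].pixelFormat = view.colorPixelFormat
        guard let state = try? device.makeRenderPipelineState(descriptor: descriptor) else { return nil }
        pipelineState = state

        view.device = device
        super.init()
    }

    func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {}

    func draw(in view: MTKView) {
        guard let drawable = view.currentDrawable,
              let passDescriptor = view.currentRenderPassDescriptor,
              let commandBuffer = commandQueue.makeCommandBuffer(),
              let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: passDescriptor) else { return }

        encoder.setRenderPipelineState(pipelineState)
        encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
        // The GPU rasterizes the triangle and runs the fragment shader for each covered pixel.
        encoder.drawPrimitives(type: .triangle, vertexStart: 0, vertexCount: 3)
        encoder.endEncoding()

        // Presenting writes the finished frame into the drawable's buffer for display.
        commandBuffer.present(drawable)
        commandBuffer.commit()
    }
}
```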

Step 3: The video controller reads the data from the frame buffer and the image is displayed on the screen. For Apple, displaying one frame (one full screen) of data requires two signals: horizontal sync (HSync) and vertical sync (VSync). An HSync is emitted and the pixels of one row are rendered one by one; when that row is done, another HSync is emitted for the next row. When all the pixels of the screen have been rendered, a VSync is issued and the next frame of data is fetched from the frame buffer for display.
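We cannot observe HSync from app code, but CADisplayLink fires in step with the display's VSync, so a small sketch like the following (VSyncObserver is just an illustrative name) lets us watch each frame boundary:

```swift
import UIKit

// A minimal sketch: CADisplayLink fires in step with the display's VSync,
// so each callback marks a frame boundary. VSyncObserver is an illustrative name.
final class VSyncObserver {
    private var displayLink: CADisplayLink?

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    @objc private func tick(_ link: CADisplayLink) {
        // timestamp: when the last frame was shown; targetTimestamp: when the next VSync is due.
        let budget = (link.targetTimestamp - link.timestamp) * 1000
        print(String(format: "time until next VSync: %.2f ms", budget))
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }
}
```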

So how does our screen end up stuttering? And why does Apple use double buffering plus the VSync signal?

Apple uses the double-buffering mechanism plus the vertical sync signal to avoid screen tearing. Tearing happens when half of the screen shows the old buffer and the other half shows the new one. Because Apple does not fetch new data from the frame buffer until a full frame has been displayed (that is, until the VSync is issued), tearing is avoided completely. But this causes a different effect: dropped frames, also known as screen stutter. A stutter occurs when a VSync arrives but the CPU and GPU have not yet put new data into the frame buffer; the display controller then keeps showing the previous frame, which is perceived as a visual lag.
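Building on the same idea, here is a rough sketch of how dropped frames could be detected from app code, assuming CADisplayLink's duration reflects one refresh interval (roughly 16.7 ms at 60 Hz): if the gap between two callbacks spans more than one interval, the display kept showing the previous frame for the missed VSync(s). FrameDropMonitor is a hypothetical name.

```swift
import UIKit

// A rough sketch of detecting dropped frames with CADisplayLink, assuming
// link.duration is one refresh interval (~16.7 ms at 60 Hz). If the gap between
// two callbacks spans more than one interval, the display controller kept showing
// the previous frame for the missed VSync(s). FrameDropMonitor is an illustrative name.
final class FrameDropMonitor {
    private var displayLink: CADisplayLink?
    private var lastTimestamp: CFTimeInterval = 0

    func start() {
        let link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    @objc private func tick(_ link: CADisplayLink) {
        defer { lastTimestamp = link.timestamp }
        guard lastTimestamp > 0 else { return }

        let elapsed = link.timestamp - lastTimestamp
        let missed = Int((elapsed / link.duration).rounded()) - 1
        if missed > 0 {
            print("dropped \(missed) frame(s): no new data was ready at the VSync")
        }
    }

    func stop() {
        displayLink?.invalidate()
        displayLink = nil
    }
}
```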