I first encountered CALayer when an interviewer asked me, "What is the relationship between a layer and a view?" I gradually picked up bits of layer knowledge during development, but I lacked a comprehensive understanding of it, so I decided to dig into it thoroughly.

Relationship between layer and view

You start out working with views, and for a long time you may only ever deal with views, meeting the layer only in certain corners: rounded corners, Core Animation, drawing custom content. So the first question about the layer has to be: what exactly does it have to do with the view?

Here is a document:

Layers provide infrastructure for your views. Specifically, layers make it easier and more efficient to draw and animate the contents of views and maintain high frame rates while doing so. However, there are many things that layers do not do. Layers do not handle events, draw content, participate in the responder chain, or do many other things

  • Layers provide the infrastructure for views, making it easier and cheaper to draw content and render animations efficiently
  • Layers do not participate in the view's event handling or in the responder chain

Think about what a view does in the system: it accepts user interaction and renders content. The layer is responsible for rendering the content, but not for handling user events.

That's simple and easy to remember, and it sharpens my understanding of views too.

Content rendering

Now that you know what Layer does, the next question is: How is the content delivered? What is supported? How is it presented?

Looking at CALayer’s API, the most relevant ones for content presentation are:

  • `display`, `setNeedsDisplay`, and `displayIfNeeded`
  • `drawInContext:` and the delegate's `drawLayer:inContext:`, etc.
  • The `contents` property

Update mechanism

The first set of three methods mirrors the ones on UIView; the logic is similar. When something on a layer changes, say its color, the graphics system needs to re-render for the change to become visible. Now imagine many layers refreshing within a short window, such as when a complex viewController opens or a tableView is swiped quickly. If every layer update forced its own system refresh, the frame rate would be erratic: sometimes overloaded, sometimes idle.

So the mechanism is inverted. The system refreshes at a basically stable frequency, and when a layer's content changes it is merely marked as needing refresh; that is `setNeedsDisplay`. On each refresh, all the layers marked since the last refresh are submitted to the graphics system in one batch. This is where `CATransaction` comes in.

The actual refresh is `display`, but we don't call it ourselves; we let the system call it at whatever timing it judges best. We just call `setNeedsDisplay` to mark the layer. If you really need an immediate refresh, call `displayIfNeeded`, which refreshes the layers currently marked as needing display right away.
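A minimal sketch of this marking pattern (the layer here is just a view's backing layer; nothing else is assumed):

```objectivec
// Mark the layer dirty; the system will call -display on the next
// screen refresh, batching it with any other dirty layers.
[self.layer setNeedsDisplay];

// Batch several property changes into one explicit transaction.
[CATransaction begin];
[CATransaction setDisableActions:YES]; // suppress implicit animations
self.layer.backgroundColor = [UIColor redColor].CGColor;
self.layer.cornerRadius = 8.0;
[CATransaction commit];

// Only if the content is needed right now, force the refresh early:
[self.layer displayIfNeeded];
```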

A stable, harmonious general mechanism that still accommodates the occasional special need. Nice.

Content delivery method

The actual content is drawn via the second set of methods. Based on testing, the content is provided through the following mechanisms, checked in order:

  • `display`
  • The delegate's `displayLayer:`
  • `drawInContext:`
  • The delegate's `drawLayer:inContext:`

As soon as one of these four provides the content, the later ones are skipped. For the delegate methods, the layer checks whether a delegate exists and implements the corresponding method.
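As a sketch of the delegate path: if the delegate implements `displayLayer:`, it is asked to supply the layer's contents directly and none of the drawing methods run. The class name and image name below are hypothetical.

```objectivec
// A standalone object acting as a layer's delegate.
@interface ContentProvider : NSObject <CALayerDelegate>
@end

@implementation ContentProvider
- (void)displayLayer:(CALayer *)layer {
    // Provide a ready-made bitmap; no Core Graphics drawing happens,
    // and drawInContext: / drawLayer:inContext: are never consulted.
    layer.contents = (id)[UIImage imageNamed:@"xxx"].CGImage;
}
@end
```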

The first two methods provide the content directly: a bitmap (a `CGImageRef`) is assigned to `contents`, with no drawing into a backing store:

```objectivec
layer.contents = (id)[UIImage imageNamed:@"xxx"].CGImage;
```

The latter two methods give the layer a chunk of memory (the backing store) to hold the drawn content, and in both cases you can use the Core Graphics API to draw whatever you need.
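A sketch of the drawing path, as a CALayer subclass (the class name is hypothetical):

```objectivec
// A CALayer subclass that draws into its backing store with Core Graphics.
@interface CircleLayer : CALayer
@end

@implementation CircleLayer
- (void)drawInContext:(CGContextRef)ctx {
    // ctx targets the layer's backing store; whatever is drawn here
    // becomes the layer's cached bitmap content.
    CGContextSetFillColorWithColor(ctx, [UIColor orangeColor].CGColor);
    CGContextFillEllipseInRect(ctx, CGRectInset(self.bounds, 4, 4));
}
@end
```

Calling `[layer setNeedsDisplay]` on an instance marks it dirty, and the system later invokes `drawInContext:` for you.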

The role of the delegate

The delegate controls the layer's content. That's why a UIView is assigned as its own layer's delegate by default, and it's also why, most of the time, we change view properties directly (color, position, transparency, etc.) and the layer's rendering updates automatically.

Relationship between layer and animation

When using Core Animation, you attach the animations you create to a layer. For simple animations you mostly use `[UIView animateWithDuration:...]`. Is the latter essentially driving an animation on the layer internally? Yes: the carrier of animation is the layer, and that's their basic relationship. But to make animation efficient, there are more details.

If you've ever animated a displacement and tried to log the view's position during the animation, you might be surprised to find that the view's frame is already at the end position as soon as the animation starts!
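You can reproduce this in a couple of lines (`self.box` is a hypothetical subview):

```objectivec
self.box.frame = CGRectMake(0, 0, 100, 100);
[UIView animateWithDuration:2.0 animations:^{
    self.box.frame = CGRectMake(200, 0, 100, 100);
}];
// Logs the END frame (origin x = 200) immediately, even though the
// view is still visually near its starting position.
NSLog(@"%@", NSStringFromCGRect(self.box.frame));
```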

Common sense says the view's position should change over time, and this gap between intuition and reality is a good window into the animation internals.

At least one conclusion follows from this phenomenon: what you see on screen need not match the data in the system, because the animation may be a trick.

Look at the document:

Instead, a layer captures the content your app provides and caches it in a bitmap, which is sometimes referred to as the backing store. … When a change triggers an animation, Core Animation passes the layer’s bitmap and state information to the graphics hardware, which does the work of rendering the bitmap using the new information. Manipulating the bitmap in hardware yields much faster animations than could be done in software.

The layer's content is rendered into a bitmap. When an animation is triggered, the bitmap and the state information are handed to the graphics hardware, which constructs the animation from those two pieces of data. Manipulating bitmaps is something graphics hardware does very fast.

Simulating the process: animating a very complex view means compositing its layer content into a single picture, and rotating it means rotating that picture on screen. In the graphics pipeline, rotation, scaling, translation and so on just require applying a matrix (corresponding to `transform`), which is among the most basic operations for a graphics system and extremely cheap.
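That matrix is exposed directly on the layer (`self.box` is again a hypothetical subview):

```objectivec
// Rotation is just a matrix applied to the layer's cached bitmap at
// composite time; the layer's content is not redrawn.
self.box.layer.transform = CATransform3DMakeRotation(M_PI_4, 0, 0, 1);
```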

So the presentation of the animation is separated from the view's own data, which is why the view's data already holds the end-of-animation values while the animation plays.

If you follow common sense to implement animation, how do you do it?

To move a view, you would update its position in every display-refresh callback, hand the new data to the graphics system each time, and redraw. For a view with complex subviews, that means redrawing the entire subview tree every frame.

Comparing the two, what does the layer-based "deceptive" animation save?

  • You don’t have to constantly update the view data
  • You don’t have to constantly interact with the graphics hardware
  • For complex views, there is no need to redraw the entire layer tree
  • Bitmaps are the kind of data graphics hardware handles best

I think the essential reason this works is that the animations we need are stylized, built from templates and routines. Even slightly complicated animations can be simplified with keyframes, eventually becoming discrete, independent data presented along a predetermined route. You couldn't do this if the animation had to be computed in real time, say a thrown ball bouncing off the ground, where the result depends on the material's weight, the slope of the ground, and so on.
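A keyframe animation makes this "discrete, independent data" concrete; the values below are purely illustrative, and `self.box` is hypothetical:

```objectivec
// Even a "complicated" bounce reduces to a fixed list of keyframe
// values that the graphics system replays along a predetermined route.
CAKeyframeAnimation *drop =
    [CAKeyframeAnimation animationWithKeyPath:@"position.y"];
drop.values   = @[@0, @300, @220, @300, @280, @300]; // fall, damped bounces
drop.keyTimes = @[@0, @0.4, @0.6, @0.8, @0.9, @1];
drop.duration = 1.2;
[self.box.layer addAnimation:drop forKey:@"bounce"];
```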

The layer tree

The animation system above gives rise to three different layer trees:

  • The model layer tree stores the animation's end values
  • The presentation tree contains the in-flight values of a running animation
  • The render tree holds the data used for the actual rendering of the animation; presumably this is graphics-system data, such as the bitmaps handed to the GPU

If you want the view's data during an animation, read it from the presentation tree.
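For example, during the displacement animation above (`self.box` hypothetical):

```objectivec
// The model layer already holds the end value; the presentation layer
// holds the value currently on screen.
CALayer *presented = self.box.layer.presentationLayer;
NSLog(@"model:     %@", NSStringFromCGRect(self.box.layer.frame));
NSLog(@"on screen: %@", NSStringFromCGRect(presented.frame));
```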

Performance issues

Basically, it’s all about off-screen rendering

1. Rounded corners

Since iOS 9 the system has optimized this case, so I no longer worry about it. I think overlaying a layer is the best solution anyway. The essence of the rounded-corner problem is the mask; see the mask section below.

2. Shadows. Solution: set `shadowPath` instead of relying on `shadowOffset` alone

Why does `shadowPath` solve this? I haven't found other articles about it, and the system documentation only hints at it, but here is a reasonable conjecture pieced together from various sources.

A label's shadow follows the text, whereas if the label has a background color, the shadow follows its border. With an imageView whose background is clear and whose image has hollowed-out (transparent) regions, you can see the shadow follow only the opaque parts of the image.

So I infer that the shadow is generated from the layer's alpha channel. Simulating the process: allocate a shadow layer of the same size, fill it with `shadowColor` wherever the original layer's alpha is nonzero, just as in reality only the opaque parts cast a shadow, then offset it from the original layer by `shadowOffset`.

The alpha here is not just the current layer's own content but the composite of the layer and all its sublayers. If more sublayers sit on top, they are composited first and the combined alpha is what gets checked. You can verify this by stacking several imageViews with a staggered offset.

The shadow layer is computed from the content in real time and triggers an off-screen render, so it's expensive.

With `shadowPath`, the shape of the shadow layer is fixed, which is similar to just adding a sublayer and does not trigger an off-screen render.
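A typical fixed-shape shadow looks like this (`self.box` is hypothetical):

```objectivec
// With an explicit path, the outline no longer has to be derived from
// the composited alpha channel in an off-screen pass.
self.box.layer.shadowColor   = [UIColor blackColor].CGColor;
self.box.layer.shadowOpacity = 0.4;
self.box.layer.shadowOffset  = CGSizeMake(0, 3);
self.box.layer.shadowPath =
    [UIBezierPath bezierPathWithRoundedRect:self.box.bounds
                               cornerRadius:8].CGPath;
```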

From the `shadowPath` documentation:

If you specify a value for this property, the layer creates its shadow using the specified path instead of the layer’s composited alpha channel

"Composited" means the result of blending the current layer with all of its sublayers. Given the explanation above, this sentence should now make sense.

**Note:** this will still lag on an iPhone 6, but it's smooth on the 8 and X.

3. Masks

Using CALayer's `mask` property directly causes an off-screen render. See the documentation comment:

A layer whose alpha channel is used as a mask to select between the layer’s background and the result of compositing the layer’s contents with its filtered background

Here too, the mask applies not only to the current layer's content but to the composite of the layer and all its sublayers. You can verify this by setting a mask on viewA's layer: no matter how many subviews you add to viewA, the mask still applies to them.

The workaround is to add a layer on top to imitate the mask. A mask's effect is: where its alpha > 0, the content shows through; where it is 0, the content is completely hidden.

So you can add a `maskLayer2` on top with the opposite alpha. By normal compositing, the content shows wherever maskLayer2's alpha is 0, which corresponds exactly to where the original mask's alpha > 0, i.e. the places the mask would let content through.

The only trouble is that when you add a new subview, its content will sit above maskLayer2, and the mask effect will not apply to it.

This is one of the solutions for rounded corners: the old rounded-corner approach is essentially adding a mask, which causes off-screen rendering.
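A sketch of that cover-layer trick for rounded corners, using an even-odd path so the rounded rect stays hollow (`self.box` and the white fill color are assumptions; the fill should match the actual background):

```objectivec
// Build a path covering the whole bounds, with the rounded rect
// appended; even-odd filling paints everything EXCEPT the rounded rect.
UIBezierPath *cover = [UIBezierPath bezierPathWithRect:self.box.bounds];
[cover appendPath:[UIBezierPath bezierPathWithRoundedRect:self.box.bounds
                                             cornerRadius:8]];

CAShapeLayer *coverLayer = [CAShapeLayer layer];
coverLayer.path      = cover.CGPath;
coverLayer.fillRule  = kCAFillRuleEvenOdd;
coverLayer.fillColor = [UIColor whiteColor].CGColor; // match the background
[self.box.layer addSublayer:coverLayer]; // no mask, no off-screen pass
```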

4. shouldRasterize

As the previous problems show, the performance cost mainly comes down to two things: 1. the context switching caused by off-screen rendering; 2. recomputing the composition of complex layer trees every time.

Rasterization targets the latter. Say 10 views are stacked on top of each other; normally their overlay has to be recomputed on every pass. With this turned on, the computed result is baked into a bitmap that the render engine caches and reuses, with no recomputation.

An analogy: the former is like building a model from scratch every time you want to show someone what a thing looks like; the latter is like building it once, taking a photo, and showing the photo every time after that.

The downside: if the appearance keeps changing, reuse drops, and storing the bitmaps costs extra memory.
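Turning the cache on is two lines (`cell` is a hypothetical table view cell):

```objectivec
// Cache the composited result as a bitmap and reuse it across frames.
cell.layer.shouldRasterize = YES;
// Without this, the cache is built at 1x and looks blurry on Retina.
cell.layer.rasterizationScale = [UIScreen mainScreen].scale;
```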

An actual test: add a text shadow to a tableView cell, with the text changing randomly. Shadows cause off-screen rendering, and a text shadow can't be described with a `shadowPath`, so scrolling stutters.

  • Turning on `shouldRasterize` had a remarkable effect.
  • Whether the text changed or not made no difference; perhaps the "reuse" in `shouldRasterize` and "content changing" are not the same notion. New cells coming on screen are not reused, and the test tool shows them in red.
  • If the view also turns on `masksToBounds`, it doesn't work well: new cells are still not reused. Let's just say the performance cost of masks is too high.

Speculation about off-screen rendering

Looking at the attributes above that trigger off-screen rendering, they share a common feature: they all need the composited result of the layer and its sublayer tree. That's true for masks, for shadows, and for shouldRasterize.

Say the normal content is A, which gets rendered as a graphic; then you want to add content B on top, producing a blend of A and B.

But what if B's content depends on A? You have to render A before you can generate B, so where does A go while B is being generated? The system creates a new frame buffer and outputs A's result there instead of directly to the screen. Then, in that new context, it composites A and B, switches back to the original context, and outputs to the screen.

That’s my guess on the off-screen rendering process and why.

Update 1:

Here's an animation based on CAShapeLayer whose animated properties are `strokeStart` and `strokeEnd`: the path draws only a specified portion, and the animation keeps modifying that portion. This conflicts with the earlier model of baking the layer into a bitmap and shipping it to the graphics system, because the path data would be lost once the bitmap is formed, and the animation couldn't be produced from an image plus a bit of extra state. Most likely, CAShapeLayer computes vertex data from its path and its `strokeStart`/`strokeEnd` properties and passes that to the graphics system to draw, over and over: modify the properties, draw again. This is probably special handling inside CAShapeLayer itself, which is why it doesn't match the general CALayer story.
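The kind of animation described above looks like this (`self.view` is assumed to be a view controller's view):

```objectivec
CAShapeLayer *ring = [CAShapeLayer layer];
ring.path = [UIBezierPath bezierPathWithOvalInRect:
                 CGRectMake(0, 0, 80, 80)].CGPath;
ring.fillColor   = nil;
ring.strokeColor = [UIColor blueColor].CGColor;
ring.lineWidth   = 3;
[self.view.layer addSublayer:ring];

// Animate how much of the path is stroked; the shape must be
// re-rendered from the path data each step, not replayed from a
// cached bitmap.
CABasicAnimation *draw =
    [CABasicAnimation animationWithKeyPath:@"strokeEnd"];
draw.fromValue = @0;
draw.toValue   = @1;
draw.duration  = 1.0;
[ring addAnimation:draw forKey:@"draw"];
```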