Foreword

High-performance web animation is a well-worn topic, but there are still new and useful things to share. After reading this article, you should have a solid grasp of how animation rendering works and what it takes to hit 60FPS, so that animation problems can be solved at the source.

The main text

What is high-performance animation?

Frame rate is the yardstick: animation generally looks smooth at 60fps.

60fps

This translates to a budget of 16.7ms per frame (1000ms / 60 ≈ 16.7ms). Our first priority, therefore, is to cut unnecessary performance cost: the more work each frame demands, the more the browser has to squeeze into that budget, and dropped frames appear, the stumbling block on the way to 60fps.

If an animation cannot be rendered within 16.7ms, consider rendering at the slightly lower rate of 30fps.

How to achieve silky smoothness

There are two main determinants:

Frame Timing: When a new Frame is ready

Frame Budget: How long it takes to render a new Frame

The time to start drawing

In general we might use setTimeout(callback, 1000 / 60) to render the next frame after about 16.7ms. However, setTimeout is not actually accurate. First, setTimeout relies on the update frequency of the browser's built-in clock. For example, in IE8 and earlier the clock updates every 15.6ms, so a 16.7ms setTimeout has to wait for two 15.6ms ticks before it fires, an unexplained delay of 15.6 × 2 - 16.7 = 14.5ms.


Second, even if 16.7ms were achievable, there is still the problem of the asynchronous queue. Because setTimeout is asynchronous, its callback is not executed immediately; it has to wait in the queue. If a synchronous script starts running while we wait for the delay to elapse, that script will not line up behind the timer's callback: it executes immediately, and the callback keeps waiting.

function runForSeconds(s) {
    // Busy-wait: block the main thread for s seconds.
    var start = +new Date();
    while (start + s * 1000 > +new Date()) {}
}

document.body.addEventListener("click", function () {
    runForSeconds(10);
}, false);

setTimeout(function () {
    console.log("Done!");
}, 1000 * 3);

In the example above, if someone clicks on the body while the 3-second delay is still pending, will the callback still fire exactly at the 3s mark? In practice it waits the full 10 seconds: synchronous functions always take precedence over asynchronous callbacks.

These problems point to another solution: requestAnimationFrame(callback).

The window.requestAnimationFrame() method tells the browser that you wish to perform an animation and requests that the browser call a specified function to update the animation before the next repaint. The method takes as an argument a callback to be invoked before the repaint. (MDN)

When we call this function, we tell it to do two things:

  1. We need a new frame;
  2. When the new frame is rendered, execute the callback we passed in.

The biggest advantage of rAF(requestAnimationFrame) over setTimeout is that it is up to the system to decide when to execute the callback function.

To be specific, the system will actively call the callback function in rAF before each drawing. If the system drawing rate is 60Hz, the callback function will be executed every 16.7ms; if the drawing frequency is 75Hz, the interval will become 1000/75=13.3ms.
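You can verify this on your own machine with a minimal sketch that logs the interval between consecutive rAF callbacks; on a 60Hz display the deltas hover around 16.7ms, on 75Hz around 13.3ms:

// Log the gap between consecutive rAF timestamps.
var last = null;
function probe(timestamp) {
    if (last !== null) {
        console.log('frame interval: ' + (timestamp - last).toFixed(1) + 'ms');
    }
    last = timestamp;
    requestAnimationFrame(probe);
}
requestAnimationFrame(probe);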

In other words, rAF execution follows the system's drawing frequency. It guarantees that the callback runs at most once per screen-refresh interval (this is function throttling, which is beyond the scope of this article; look it up if you are interested), so it causes neither dropped frames nor stuttering.

In addition, it can adjust its frequency automatically. If the callback has too much work to finish within one frame, the rate automatically drops to 30fps. That is lower, but still better than dropping frames unpredictably.

Meanwhile, when the page is hidden or minimized, setTimeout keeps running the animation task in the background. Since the page is invisible at that point, refreshing the animation is pointless and wastes CPU. rAF is completely different: when the page is not active, the system suspends the page's drawing, so rAF, which follows the system's pace, stops rendering as well. When the page becomes active again, the animation picks up where it left off, effectively saving CPU overhead.
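One practical consequence: if you drive your animation state from the rAF timestamp, clamp the per-frame delta so that a long background pause does not make the animation leap forward on resume. A minimal sketch:

var prev = null;
function step(now) {
    // Clamp the delta so a long pause (hidden tab) reads as one normal frame.
    var delta = prev === null ? 16.7 : Math.min(now - prev, 33);
    prev = now;
    // ...advance the animation state by `delta` milliseconds here...
    requestAnimationFrame(step);
}
requestAnimationFrame(step);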


In fact, rAF compatibility has well-established solutions; here is a relatively simple one:

window.requestAnimFrame = (function () {
    return window.requestAnimationFrame       ||
           window.webkitRequestAnimationFrame ||
           window.mozRequestAnimationFrame    ||
           window.oRequestAnimationFrame      ||
           window.msRequestAnimationFrame     ||
           function (callback) {
               window.setTimeout(callback, 1000 / 60);
           };
})();

This version does not cover cancelAnimationFrame, and not every device's draw interval is 1000/60 ms.

This is a nice polyfill.
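For reference, here is a sketch along the lines of the widely circulated Erik Möller / Paul Irish polyfill. It also covers cancelAnimationFrame, and its fallback waits only for the remainder of the current ~16ms slot instead of a fixed 1000/60:

(function () {
    var lastTime = 0;
    var vendors = ['webkit', 'moz'];
    for (var i = 0; i < vendors.length && !window.requestAnimationFrame; i++) {
        window.requestAnimationFrame = window[vendors[i] + 'RequestAnimationFrame'];
        window.cancelAnimationFrame = window[vendors[i] + 'CancelAnimationFrame'] ||
                                      window[vendors[i] + 'CancelRequestAnimationFrame'];
    }
    if (!window.requestAnimationFrame) {
        window.requestAnimationFrame = function (callback) {
            var currTime = new Date().getTime();
            // Wait only for the remainder of the current ~16ms slot.
            var timeToCall = Math.max(0, 16 - (currTime - lastTime));
            var id = window.setTimeout(function () {
                callback(currTime + timeToCall);
            }, timeToCall);
            lastTime = currTime + timeToCall;
            return id;
        };
    }
    if (!window.cancelAnimationFrame) {
        window.cancelAnimationFrame = function (id) {
            clearTimeout(id);
        };
    }
})();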

Time to draw a frame

Generally speaking, rAF solves the first problem (frame timing); for the second (frame budget) it can do little more than automatically lower the frequency.

Here we need to optimize on the browser's rendering side. The pipeline runs JavaScript → Style → Layout → Paint → Composite; the sections below walk through the expensive stages.

Rendering

When the page first loads, the browser downloads and parses the HTML, turning the HTML elements into a "content tree" of DOM nodes. Styles are parsed as well and combined with the DOM to produce the "render tree". To improve performance, the rendering engine does these jobs separately; the render tree may even be generated faster than the DOM tree.

At this stage, the most important factor affecting drawing time is Layout.

// animation loop (rAF is the requestAnimFrame alias defined above)
function update(timestamp) {
    for (var m = 0; m < movers.length; m++) {
        // DEMO version (slow): reading offsetTop forces a layout on every iteration
        // movers[m].style.left = ((Math.sin(movers[m].offsetTop + timestamp / 1000) + 1) * 500) + 'px';

        // FIXED version
        movers[m].style.left = ((Math.sin(m + timestamp / 1000) + 1) * 500) + 'px';
    }
    rAF(update);
}
rAF(update);

The DEMO version in the example above is very slow. The reason: every time an object's left value is changed, its offsetTop is read first, forcing a reflow (layout), which is a very expensive operation.

We often inadvertently write a lot of frequent layout code, such as:

var h1 = element1.clientHeight;
element1.style.height = (h1 * 2) + 'px';

var h2 = element2.clientHeight; 
element2.style.height = (h2 * 2) + 'px';

var h3 = element3.clientHeight;
element3.style.height = (h3 * 2) + 'px';

Alternating reads and writes to the DOM causes "forced synchronous layout", which over time earned the more vivid name layout thrashing (see the literature on layout thrashing for details). The browser tracks "dirty elements" and saves up their changes to apply when convenient; but reading a layout-dependent property forces the browser to compute layout immediately, so interleaved reads and writes trigger reflow after reflow. The optimization is to batch: read first, then write. The code above can be rewritten as:

// Read
var h1 = element1.clientHeight;
var h2 = element2.clientHeight;
var h3 = element3.clientHeight;

// Write
element1.style.height = (h1 * 2) + 'px';
element2.style.height = (h2 * 2) + 'px';
element3.style.height = (h3 * 2) + 'px';

Of course, this only works in simple cases. Where the code is split across modules, or more complex reads and writes are nested inside one another, a more mature solution such as fastdom.js is worth using. Another good trick is to use rAF to defer all write operations to the next frame.
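As a minimal sketch of that rAF trick (scheduleWrite and writeQueue are illustrative names, not a library API), writes are queued and flushed together on the next frame:

var writeQueue = [];

function scheduleWrite(fn) {
    // On the first write of a frame, schedule a flush for the next frame.
    if (writeQueue.length === 0) {
        requestAnimationFrame(function () {
            var tasks = writeQueue;
            writeQueue = [];
            tasks.forEach(function (task) { task(); });
        });
    }
    writeQueue.push(fn);
}

// Reads run synchronously; writes are deferred, so layout is computed only once.
var h1 = element1.clientHeight;
scheduleWrite(function () {
    element1.style.height = (h1 * 2) + 'px';
});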

Paint

After layout, the browser paints the page to the screen. As in the previous step, the browser tracks dirty elements and merges them into one large rectangle; within each frame, a single repaint covers that dirty region.

The main performance cost of this phase is repaint.

Reduce unnecessary drawing

For example, a GIF can trigger paint even when it is not visible. If a GIF is not needed, set its display property to none in areas that are painted frequently. Also avoid styles that are expensive to paint:

color, border-style, visibility, background, text-decoration, background-image, background-position, background-repeat,
outline-color, outline, outline-style,
border-radius, outline-width, box-shadow,
background-size

Please refer to csstriggers.com/
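As one illustration of working around an expensive property: instead of animating box-shadow itself, pre-render the shadow on a second element and animate only that element's opacity, a compositor-friendly property. A sketch, where .card and .card-shadow are hypothetical elements:

var card = document.querySelector('.card');          // hypothetical markup
var shadow = document.querySelector('.card-shadow'); // same size, already holds the shadow

shadow.style.transition = 'opacity 0.3s';
card.addEventListener('mouseenter', function () {
    shadow.style.opacity = '1';  // fade the pre-painted shadow in
});
card.addEventListener('mouseleave', function () {
    shadow.style.opacity = '0';  // and out, without repainting the shadow itself
});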

Reduce the area to draw

Promote elements that trigger large paint areas into their own Layer, narrowing the region that has to be painted.

Take a look at the demo site; the green flashes mark the areas being repainted:

Composite

This stage composites all the painted elements. By default, everything is drawn on a single layer. If elements are separated onto different compositing layers, updating one of them is cheap, and elements on other layers are left untouched.

In this stage the CPU paints the layers and the GPU composites them. Changes confined to a GPU-composited layer cost the least. The optimization here, then, is to push expensive changes onto the GPU, which is what people usually mean by enabling hardware acceleration. It is essentially all benefit and no harm, provided the device can afford it.

The main limitations are the bandwidth between the CPU and GPU, and the GPU's own limits.


Here we need to distinguish who does what. The main thread is responsible for:


  1. JavaScript computation and execution
  2. CSS style calculation
  3. Layout calculation
  4. Painting page elements into bitmaps, also known as rasterization
  5. Handing the bitmaps to the compositor thread

The compositor thread is responsible for:

  1. Uploading the bitmaps to the GPU as textures
  2. Computing the visible and soon-to-be-visible portions of the page (scrolling)
  3. Handling CSS animations (CSS animation fares better here because it does not depend on the main thread)
  4. Instructing the GPU to draw the bitmaps to the screen

The GPU only needs to draw layers, so hardware acceleration is definitely better.


You can enable hardware acceleration in the following ways:

  1. Changing the value of opacity or transform
  2. Using transform's 3D properties (for example, translateZ(0))
  3. Using will-change to explicitly tell the browser to optimize the rendering of one or more aspects of an element
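A minimal sketch of all three, with a hypothetical .box element (the three approaches are alternatives, not steps):

var box = document.querySelector('.box'); // hypothetical element

// 1. Animate only compositor-friendly properties: transform and opacity.
box.style.transition = 'transform 0.3s, opacity 0.3s';
box.style.transform = 'translateX(200px)';
box.style.opacity = '0.5';

// 2. The classic hack: a no-op 3D transform forces layer promotion.
// box.style.transform = 'translateZ(0)';

// 3. Tell the browser ahead of time which properties will change.
box.style.willChange = 'transform, opacity';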

Once hardware acceleration kicks in, the browser creates a separate "layer" for the element. With its own layer, the element's repaints only need to update itself, leaving everything else alone; think of it as a partial update. That is why animations with hardware acceleration enabled run much more smoothly.

By default, changes to CSS properties such as transform and opacity are handed by the CPU straight to the GPU, because the GPU can apply offset, scaling, rotation, and opacity changes directly to the texture (a bitmap transferred from the CPU to the GPU) without going through layout and paint on the main thread. In effect, hardware acceleration is already on for these properties.

will-change is a newer property that explicitly tells the browser, ahead of time, to optimize the rendering of one or more aspects of an element. It accepts a variety of values, such as one or more CSS property names (transform, opacity), contents, or scroll-position. The most commonly used value, though, is probably auto, which leaves the browser to its default optimizations.


GPUs are good at processing images, but they have bottlenecks of their own.

The bandwidth between the CPU and GPU is limited; update too many layers at once and you quickly hit the GPU's ceiling, hurting the smoothness of the animation. So we need to control both the number of layers and how often they are painted.

Controlling the number of layers is easy to understand: creating and updating layers consumes memory. Controlling how often layers are painted means reducing bitmap updates, because every bitmap update requires the compositor thread to submit a new bitmap to the GPU, and frequent uploads also drag down GPU efficiency.

There is a lot of talk about "too many compositing layers hindering rendering". Browsers already optimize everything they can, and will-change's optimizations are themselves resource-hungry: if will-change remains applied to an element, the browser keeps optimizing it indefinitely, draining the page's performance.
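Hence the common advice: apply will-change just before the element animates and remove it as soon as the animation is done. A minimal sketch (the element and events are illustrative):

var box = document.querySelector('.box'); // hypothetical element

// Add the hint only when the animation is imminent...
box.addEventListener('mouseenter', function () {
    box.style.willChange = 'transform';
});

// ...and drop it afterwards so the browser can release the layer.
box.addEventListener('transitionend', function () {
    box.style.willChange = 'auto';
});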

Too many composite layers degrade page performance on mobile.


Avoid accidentally generated layers

An element with a higher z-index that sits above a composited Layer will itself be promoted to a separate layer.

Demo and description page

Summary

There are two main determinants for achieving silky smoothness:

Frame Timing:

  • rAF

Frame Budget:

  • Avoid layout: Read before you write

  • Paint as little as possible: Pay attention to the use of styles

  • Appropriate hardware acceleration