This article is based on the talk 9 Performance Secrets Revealed, given by Vue.js core team member Guillaume Chau at VueConf US 2019, in which he shared nine Vue.js performance optimization techniques.

After going through his slides, I also read the related project source code. Once I understood the optimization principles in depth, I applied some of these techniques to my daily work and achieved quite good results.

The talk is very practical, but not many people seem to know about it or pay attention to it; so far, the project has only a few hundred stars. Although two years have passed since the talk, the optimization techniques in it are not outdated. To help more people understand and learn these practical skills, I decided to rework his material, elaborating on the optimization principles and extending them where appropriate.

This article focuses on Vue.js 2.x; after all, Vue.js 2.x will remain the mainstream version in our work for some time to come.

I recommend that you pull the project source code and run it locally while reading this article, so you can see the difference before and after each optimization.

Functional components

The first tip is functional components; you can check out this online example.

The component code before optimization is as follows:

<template>
  <div class="cell">
    <div v-if="value" class="on"></div>
    <section v-else class="off"></section>
  </div>
</template>

<script>
export default {
  props: ['value'],
}
</script>

The optimized component code is as follows:

<template functional>
  <div class="cell">
    <div v-if="props.value" class="on"></div>
    <section v-else class="off"></section>
  </div>
</template>

Then, in a parent component, we render 800 of these components (before and after optimization respectively) and trigger component updates by modifying data on each frame. We open Chrome's Performance panel, record their performance, and get the following results.

Before optimization:

After optimization:

Comparing the two figures, we can see that script execution takes noticeably longer before optimization. We know the JS engine runs on a single thread and the JS thread blocks the UI thread, so when scripts run too long, rendering is blocked and the page feels janky. With the optimization, script execution time is shorter, so performance is better.

So why do functional components reduce JS execution time? It starts with how functional components are implemented: you can think of one as a function that, given the context data you pass in, renders a piece of DOM.

Unlike ordinary object-based components, a functional component is not treated as a real component. We know that during the patch process, if a vnode is a component vnode, the initialization of the child component is performed recursively. The render of a functional component, by contrast, produces ordinary vnodes without any recursive child-component initialization, so its rendering overhead is much lower.

As a result, functional components have no state, no reactive data, and no lifecycle hooks. You can think of them as stripping part of an ordinary component's template out and rendering it with a function, a kind of reuse at the DOM level.
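A functional component can also be written directly in JavaScript as a plain options object whose render function receives the context as its second argument. Here is a minimal sketch of the cell above in that form (the object name and shape of the fake `h` used for testing are illustrative, not from the demo):

```javascript
// A functional component is just a stateless render function:
// no component instance is created, so patching it skips the
// whole child-component initialization. There is no `this`;
// props arrive on the render context instead.
const Cell = {
  functional: true,
  props: ['value'],
  render (h, context) {
    return h('div', { class: 'cell' }, [
      context.props.value
        ? h('div', { class: 'on' })
        : h('section', { class: 'off' })
    ])
  }
}
```

Because there is no instance, Vue can call this render function directly with a context object, which is exactly why the script cost per component drops.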

Child component splitting

The second tip is child component splitting; you can check out this online example. The component code before optimization is as follows:

<template>
  <div :style="{ opacity: number / 300 }">
    <div>{{ heavy() }}</div>
  </div>
</template>

<script>
export default {
  props: ['number'],
  methods: {
    heavy () {
      const n = 100000
      let result = 0
      for (let i = 0; i < n; i++) {
        result += Math.sqrt(Math.cos(Math.sin(42)))
      }
      return result
    }
  }
}
</script>

The optimized component code is as follows:

<template>
  <div :style="{ opacity: number / 300 }">
    <ChildComp/>
  </div>
</template>

<script>
export default {
  components: {
    ChildComp: {
      methods: {
        heavy () {
          const n = 100000
          let result = 0
          for (let i = 0; i < n; i++) {
            result += Math.sqrt(Math.cos(Math.sin(42)))
          }
          return result
        },
      },
      render (h) {
        return h('div', this.heavy())
      }
    }
  },
  props: ['number']
}
</script>

Then, in a parent component, we render 300 of these components before and after optimization, modify data on each frame to trigger component updates, open Chrome's Performance panel to record their performance, and get the following results.

Before optimization:

After optimization:

Comparing these two figures, we can see that the script execution time after optimization is significantly less than that before optimization, so the performance experience is better.

The example simulates an expensive task with the heavy method, which executes on every render, so each render of the component spends a long time executing JavaScript.

The optimization moves the heavy function's execution into the child component ChildComp. Since Vue updates at component granularity, even though the parent re-renders on every frame as its data changes, ChildComp does not re-render, because no reactive data inside it changes. So the optimized component no longer runs the expensive task on every render, and JavaScript execution time naturally drops.

However, I raised a different view on this optimization; for details, see this issue. I think computed properties are a better fit than child component splitting in this scenario: thanks to the caching of computed properties, the expensive logic only executes on the first render, and there is no extra overhead of rendering a child component.

In real work, there are many scenarios where computed properties can be used to optimize performance; after all, they also embody the idea of trading space for time.
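As a sketch of that alternative (my variant, not the talk's code), the unoptimized component could cache the expensive work in a computed property instead of a method: Vue caches a computed value and only recomputes it when one of its reactive dependencies changes, and heavy here depends on nothing reactive, so it runs exactly once.

```javascript
// Same expensive calculation as the example, but as a computed
// property: it is evaluated on first access and cached afterwards,
// so re-renders of the parent do not re-run the loop.
const HeavyComp = {
  props: ['number'],
  computed: {
    heavy () {
      const n = 100000
      let result = 0
      for (let i = 0; i < n; i++) {
        result += Math.sqrt(Math.cos(Math.sin(42)))
      }
      return result
    }
  },
  template: '<div :style="{ opacity: number / 300 }"><div>{{ heavy }}</div></div>'
}
```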

Local variables

The third tip is local variables; you can check out this online example.

The component code before optimization is as follows:

<template>
  <div :style="{ opacity: start / 300 }">{{ result }}</div>
</template>

<script>
export default {
  props: ['start'],
  computed: {
    base () {
      return 42
    },
    result () {
      let result = this.start
      for (let i = 0; i < 1000; i++) {
        result += Math.sqrt(Math.cos(Math.sin(this.base))) + this.base * this.base + this.base + this.base * 2 + this.base * 3
      }
      return result
    },
  },
}
</script>

The optimized component code is as follows:

<template>
  <div :style="{ opacity: start / 300 }">{{ result }}</div>
</template>

<script>
export default {
  props: ['start'],
  computed: {
    base () {
      return 42
    },
    result ({ base, start }) {
      let result = start
      for (let i = 0; i < 1000; i++) {
        result += Math.sqrt(Math.cos(Math.sin(base))) + base * base + base + base * 2 + base * 3
      }
      return result
    },
  },
}
</script>

Then, in a parent component, we render 300 of these components before and after optimization, modify data on each frame to trigger component updates, open Chrome's Performance panel to record their performance, and get the following results.

Before optimization:

After optimization:

Comparing these two figures, we can see that the script execution time after optimization is significantly less than that before optimization, so the performance experience is better.

The main difference lies in how the computed property result is implemented before and after optimization. The unoptimized component accesses this.base many times during the computation, while the optimized one first caches this.base in the local variable base (via destructuring the argument Vue passes to the computed getter) and then reads that local directly.

This difference matters because every access to this.base, a reactive property, fires its getter, which runs the dependency collection logic. When that logic runs too often, as in the example where every update of hundreds of components re-triggers the computed getter and each getter run loops hundreds of times, performance naturally degrades.

After optimization, this.base performs dependency collection only once: its getter runs a single time and the returned value is stored in the local variable base. Accessing base afterwards neither triggers the getter nor runs the dependency collection logic again.

This is a very practical performance optimization technique, because many Vue.js developers habitually write this.xxx whenever they read a value, without noticing what happens behind that property access. When the number of accesses is small, the performance cost is negligible; when it is large, such as many accesses inside a big loop as in the example, performance problems show up.

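The effect is easy to reproduce outside Vue with a plain getter standing in for a reactive property (this is an illustration of the principle, not Vue's actual internals):

```javascript
// The counter stands in for the dependency-collection work that a
// reactive getter performs on every access.
let getterCalls = 0
const obj = {}
Object.defineProperty(obj, 'base', {
  get () {
    getterCalls++        // stand-in for dep collection
    return 42
  }
})

function withoutLocal () {
  let result = 0
  for (let i = 0; i < 1000; i++) {
    result += obj.base * 2   // getter fires on every iteration
  }
  return result
}

function withLocal () {
  const base = obj.base      // getter fires exactly once
  let result = 0
  for (let i = 0; i < 1000; i++) {
    result += base * 2
  }
  return result
}
```

Both functions return the same value, but the first pays the getter cost 1000 times and the second only once.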
When I optimized ZoomUI's Table component, I used this local-variable technique while rendering the table body, and wrote a benchmark for comparison: rendering a 1000 × 10 table and then updating its data, ZoomUI's Table re-renders nearly twice as fast as ElementUI's Table.

Reuse DOM with v-show

The fourth tip is reusing DOM with v-show; you can check out this online example.

The component code before optimization is as follows:

<template functional>
  <div class="cell">
    <div v-if="props.value" class="on">
      <Heavy :n="10000"/>
    </div>
    <section v-else class="off">
      <Heavy :n="10000"/>
    </section>
  </div>
</template>

The optimized component code is as follows:

<template functional>
  <div class="cell">
    <div v-show="props.value" class="on">
      <Heavy :n="10000"/>
    </div>
    <section v-show="!props.value" class="off">
      <Heavy :n="10000"/>
    </section>
  </div>
</template>

Then, in a parent component, we render 200 of these components before and after optimization, trigger component updates by modifying data on each frame, open Chrome's Performance panel to record their performance, and get the following results.

Before optimization:

After optimization:

Comparing these two figures, we can see that the script execution time after optimization is significantly less than that before optimization, so the performance experience is better.

The main difference before and after optimization is that the v-show directive replaces v-if to toggle the components' visibility. Although v-show and v-if achieve a similar result in controlling visibility, their internal implementations differ quite a bit.

At compile time, the v-if directive is compiled into a ternary expression to implement conditional rendering. For example, the component template before optimization compiles into the following render function:

function render() {
  with(this) {
    return _c('div', {
      staticClass: "cell"
    }, [(props.value) ? _c('div', {
      staticClass: "on"
    }, [_c('Heavy', {
      attrs: {
        "n": 10000
      }
    })], 1) : _c('section', {
      staticClass: "off"
    }, [_c('Heavy', {
      attrs: {
        "n": 10000
      }
    })], 1)])
  }
}

When the value of the condition props.value changes, the corresponding component update is triggered. For nodes rendered with v-if, the core diff algorithm finds that the old and new vnodes are inconsistent, so during comparison it removes the old vnode and creates a new one. That means a new Heavy component is created and goes through its own initialization, render, patch, and so on.

As a result, every update of the v-if version creates a new Heavy child component, which naturally creates performance pressure when many components update.

When we use the v-show directive instead, the optimized component template compiles into the following render function:

function render() {
  with(this) {
    return _c('div', {
      staticClass: "cell"
    }, [_c('div', {
      directives: [{
        name: "show",
        rawName: "v-show",
        value: (props.value),
        expression: "props.value"
      }],
      staticClass: "on"
    }, [_c('Heavy', {
      attrs: {
        "n": 10000
      }
    })], 1), _c('section', {
      directives: [{
        name: "show",
        rawName: "v-show",
        value: (!props.value),
        expression: "!props.value"
      }],
      staticClass: "off"
    }, [_c('Heavy', {
      attrs: {
        "n": 10000
      }
    })], 1)])
  }
}

When the value of the condition props.value changes, the corresponding component update is triggered. For nodes rendered with v-show, the old and new vnodes are consistent, so only patchVnode is needed.

During patchVnode, the update hook of the v-show directive is executed, and it sets the style.display value of the element's DOM node according to the directive's bound value, controlling visibility.

So compared with v-if, which removes and creates DOM nodes, v-show merely toggles the display of DOM that already exists. The overhead of v-show is therefore much lower than v-if's, and the more complex the inner DOM structure, the bigger the performance gap.

However, v-show's performance advantage over v-if only applies to the update phase of a component. In the initialization phase, v-if performs better, because it renders only one branch, whereas v-show renders both branches and uses style.display to control which one is visible.

With v-show, all components inside both branches are rendered and their lifecycle hooks execute; with v-if, the components inside the branch that is not hit are not rendered and their lifecycle hooks do not execute.

So you need to understand how the two directives work and how they differ, in order to choose the appropriate one for each scenario.

KeepAlive

The fifth tip is caching DOM with the KeepAlive component; you can check out this online example.

The component code before optimization is as follows:

<template>
  <div id="app">
    <router-view/>
  </div>
</template>

The optimized component code is as follows:

<template>
  <div id="app">
    <keep-alive>
      <router-view/>
    </keep-alive>
  </div>
</template>

We click a button to switch between a Simple page and a Heavy page, which render different views; the Heavy page is very time-consuming to render. We open Chrome's Performance panel to record performance, perform the switch before and after optimization, and get the following results.

Before optimization:

After optimization:

Comparing these two figures, we can see that the script execution time after optimization is significantly less than that before optimization, so the performance experience is better.

In the unoptimized scenario, every time we click the button to switch the routed view, the component is re-rendered, going through component initialization, render, patch, and so on. If the component is complex or deeply nested, the whole rendering takes a long time.

With KeepAlive, after the first render, the vnode and DOM of the component wrapped by KeepAlive are cached. On subsequent renders of that component, the cached vnode and DOM are fetched directly and rendered, without going through the whole series of component initialization, render, patch, and so on, which reduces script execution time and improves performance.

However, the KeepAlive component is not free: it consumes more memory for its cache, a typical space-for-time trade-off.
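When many routes pass through the cache, its memory use can be bounded. KeepAlive accepts include/exclude filters and a max prop that caps how many component instances are kept, evicting the least recently accessed one. A hedged sketch, with made-up component names:

```javascript
// App shell that caches only the named route components, keeping at
// most 10 cached instances. The names in `include` are illustrative.
const App = {
  template: `
    <div id="app">
      <keep-alive :include="['Simple', 'Heavy']" :max="10">
        <router-view/>
      </keep-alive>
    </div>
  `
}
```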

Deferred features

The sixth tip is deferred rendering, i.e. rendering a component in batches with a Defer mixin; you can check out this online example.

The component code before optimization is as follows:

<template>
  <div class="deferred-off">
    <VueIcon icon="fitness_center" class="gigantic"/>

    <h2>I'm an heavy page</h2>

    <Heavy v-for="n in 8" :key="n"/>

    <Heavy class="super-heavy" :n="9999999"/>
  </div>
</template>

The optimized component code is as follows:

<template>
  <div class="deferred-on">
    <VueIcon icon="fitness_center" class="gigantic"/>

    <h2>I'm an heavy page</h2>

    <template v-if="defer(2)">
      <Heavy v-for="n in 8" :key="n"/>
    </template>

    <Heavy v-if="defer(3)" class="super-heavy" :n="9999999"/>
  </div>
</template>

<script>
import Defer from '@/mixins/Defer'

export default {
  mixins: [
    Defer(),
  ],
}
</script>

We click a button to switch between a Simple page and a Heavy page, which render different views; the Heavy page is very time-consuming to render. We open Chrome's Performance panel to record performance, perform the switch before and after optimization, and get the following results.

Before optimization:

After optimization:

Comparing the two recordings, we find that before optimization, after switching from the Simple page to the Heavy page, the screen keeps showing the Simple page until one long render finishes, which feels like a freeze. After optimization, the Heavy page starts to appear in the first render and is then rendered progressively.

The difference before and after optimization is mainly due to the latter’s use of the Defer mixin, so let’s take a look at how it works:

export default function (count = 10) {
  return {
    data () {
      return {
        displayPriority: 0
      }
    },

    mounted () {
      this.runDisplayPriority()
    },

    methods: {
      runDisplayPriority () {
        const step = () => {
          requestAnimationFrame(() => {
            this.displayPriority++
            if (this.displayPriority < count) {
              step()
            }
          })
        }
        step()
      },

      defer (priority) {
        return this.displayPriority >= priority
      }
    }
  }
}

The main idea of Defer is to split one render of a component into multiple renders. It maintains a displayPriority variable internally and increments it on every frame via requestAnimationFrame, up to count. A component using the Defer mixin can then decide, via v-if="defer(xxx)", to render certain blocks only once displayPriority has reached xxx.

When you have components whose rendering is expensive, rendering them progressively in batches is a good idea: it prevents a single render from hanging the page because its JS execution takes too long.

Time slicing

The seventh tip is time slicing; you can check out this online example.

The code before optimization is as follows:

fetchItems ({ commit }, { items }) {
  commit('clearItems')
  commit('addItems', items)
}

The optimized code looks like this:

async fetchItems ({ commit }, { items, splitCount }) {
  commit('clearItems')
  const queue = new JobQueue()
  splitArray(items, splitCount).forEach(
    chunk => queue.addJob(done => {
      // commit the data chunk by chunk inside a requestAnimationFrame callback
      requestAnimationFrame(() => {
        commit('addItems', chunk)
        done()
      })
    })
  )
  await queue.start()
}
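The snippet relies on splitArray and JobQueue helpers that are not shown in the talk's slide; a minimal sketch of what they might look like follows (the names and shapes are assumptions for illustration, not the demo's actual code):

```javascript
// Split an array into chunks of at most chunkSize items.
function splitArray (items, chunkSize) {
  const chunks = []
  for (let i = 0; i < items.length; i += chunkSize) {
    chunks.push(items.slice(i, i + chunkSize))
  }
  return chunks
}

// Run queued jobs one after another; each job is a function that
// calls done() when it has finished (it becomes a Promise executor's
// resolve), so start() resolves only after the last chunk is committed.
class JobQueue {
  constructor () {
    this.jobs = []
  }
  addJob (job) {
    this.jobs.push(job)
  }
  start () {
    return this.jobs.reduce(
      (promise, job) => promise.then(() => new Promise(job)),
      Promise.resolve()
    )
  }
}
```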

We first create 10,000 fake items by clicking the Generate items button, then commit the data by clicking the Commit items button with time slicing turned off and on, respectively. We open Chrome's Performance panel to record performance and get the following results.

Before optimization:

After optimization:

Comparing the two figures, we find that total script execution time before optimization is actually less than after optimization. In terms of actual feel, however, clicking the commit button before optimization freezes the page for about 1.2 seconds, whereas after optimization the page no longer freezes completely, though some rendering jank remains.

So why did the page freeze before optimization? Because too much data was committed at once, internal JS execution took too long, blocking the UI thread and freezing the page.

After optimization the page still janks because we split the data into chunks of 1,000, at which granularity re-rendering still strains the browser: we observe only around ten FPS, which feels janky. A page generally feels smooth at 60 FPS. If we shrink the chunks to 100 items, the FPS can climb above 50 and rendering feels much smoother, although committing all 10,000 items then takes longer overall.

We use time slicing to keep the page from freezing. For time-consuming tasks like this, we usually also add a loading effect; in this example, we can enable a loading animation and then commit the data. Comparing the two cases: before optimization, committing everything at once keeps JS running so long that the loading animation never gets a chance to display; after optimization, because the data is committed across multiple time slices, each individual JS run is shorter and the loading animation has a chance to show.

One thing to note: although we use the requestAnimationFrame API to split time slices, requestAnimationFrame by itself cannot guarantee a full frame rate. It only guarantees that the browser executes the callback after each repaint; to maintain full frames, JS must not run for more than about 17ms within a single tick.

Non-reactive data

The eighth tip is using non-reactive data; you can check out this online example.

The code before optimization is as follows:

const data = items.map(
  item => ({
    id: uid++,
    data: item,
    vote: 0
  })
)

The optimized code is as follows:

const data = items.map(
  item => optimizeItem(item)
)

function optimizeItem (item) {
  const itemData = {
    id: uid++,
    vote: 0
  }
  Object.defineProperty(itemData, 'data', {
    // Mark as non-reactive
    configurable: false,
    value: item
  })
  return itemData
}

Again, we create 10,000 fake items by clicking the Generate items button, then click the Commit items button with partial reactivity enabled and disabled, respectively. We open Chrome's Performance panel to record performance and get the following results.

Before optimization:

After optimization:

Comparing these two figures, we can see that the script execution time after optimization is significantly less than that before optimization, so the performance experience is better.

The reason for the difference is that data committed to the store is made reactive by default: if a property's value is an object, that object is recursively made reactive too. So when a lot of data is committed, the process becomes time-consuming.

After optimization, the data property of each newly created item is defined with Object.defineProperty and configurable: false (and since enumerable is not specified, it defaults to false). As a result, the internal walk that iterates the property array from Object.keys(obj) skips data, so data is never passed through defineReactive. Because data points at an object, this cuts out the recursive reactive conversion and its performance cost. The larger the data set, the more obvious the effect of this optimization.

There are other ways to apply this idea. For example, some data we define in a component does not have to live in data at all: it is not used in the template and we do not need to observe its changes; we only want to share it within the component's context. In that case we can simply attach it to the component instance this, for example:

export default {
  created() {
    this.scroll = null
  },
  mounted() {
    this.scroll = new BScroll(this.$el)
  }
}

This lets us share the scroll object in the component's context, even though it is not a reactive object.
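Another way to opt whole objects out of reactivity (a common companion technique I'm adding here, not from the talk) is Object.freeze: Vue's observer skips objects that are not extensible, and freezing an object makes it non-extensible, with all its properties also becoming configurable: false. A sketch with made-up data:

```javascript
// Freeze a large read-only list so Vue never walks it. Only do this
// for data that genuinely never changes after creation.
let uid = 0
const rawItems = ['a', 'b', 'c']
const frozenData = Object.freeze(
  rawItems.map(item => ({ id: uid++, data: item }))
)
```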

Virtual scrolling

The ninth tip is using a virtual scrolling component; you can check out this online example.

The code for the component before optimization is as follows:

<div class="items no-v">
  <FetchItemViewFunctional
    v-for="item of items"
    :key="item.id"
    :item="item"
    @vote="voteItem(item)"
  />
</div>

The optimized code is as follows:

<recycle-scroller
  class="items"
  :items="items"
  :item-size="24"
>
  <template v-slot="{ item }">
    <FetchItemView
      :item="item"
      @vote="voteItem(item)"
    />
  </template>
</recycle-scroller>

Again, we open the View list and click the Generate items button to create 10,000 fake items. (Note that the online example caps creation at 1,000 items, and 1,000 items cannot really show the effect of the optimization, so I modified that constraint in the source code and ran it locally to create 10,000 items.) We then click the Commit items button in the Unoptimized and RecycleScroller cases respectively, submit the data, scroll the page, open Chrome's Performance panel to record performance, and get the following results.

Before optimization:

After optimization:

Comparing the two recordings, we find that without optimization, with 10,000 items the FPS is in the single digits while scrolling and only in the teens when idle: the unoptimized scene renders far too much DOM, and rendering itself is under great pressure. After optimization, even with 10,000 items, the FPS stays above 30 while scrolling and reaches the full 60 when idle.

The reason for such a big difference is that virtual scrolling renders only the DOM within the viewport, so the total amount of DOM rendered is small and performance is naturally much better.

The virtual scrolling component was also written by Guillaume Chau; if you are interested, you can study its source code. Its basic idea is to listen for scroll events, dynamically calculate which items should be displayed, and compute their offsets within the view.
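For a fixed item height, the core calculation can be sketched in a few lines (a simplified illustration of the idea; vue-virtual-scroller itself handles far more, such as variable item sizes and DOM recycling):

```javascript
// Derive the visible window from scrollTop: render only items in
// [start, end), offset the rendered slice with translateY(offset),
// and give the scroll container a phantom height of totalHeight so
// the scrollbar behaves as if everything were rendered.
function visibleRange (scrollTop, viewportHeight, itemSize, total, buffer = 2) {
  const start = Math.max(0, Math.floor(scrollTop / itemSize) - buffer)
  const end = Math.min(total, Math.ceil((scrollTop + viewportHeight) / itemSize) + buffer)
  return {
    start,
    end,
    offset: start * itemSize,       // translateY for the rendered slice
    totalHeight: total * itemSize   // height of the phantom container
  }
}
```

On each scroll event, recompute the range and re-render only items[start..end); the DOM node count stays bounded by the viewport size instead of the list size.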

The virtual scrolling component is not free either: it has to compute continuously while scrolling, so there is some script execution cost. If the list is not particularly large, plain scrolling is good enough.

Conclusion

Through this article, I hope you have learned these nine Vue.js performance optimization techniques and can apply them to real development projects. Beyond the tips above, there are other common performance optimization techniques such as lazy-loading images, lazy-loading components, and async components.

Before optimizing, we need to analyze where the performance bottleneck actually is, and then tailor the remedy to the situation. Performance optimization also needs data support: collect performance data before you optimize, so that you can verify the effect afterwards by comparison.

Hopefully, in your development work, you will not stop at merely implementing requirements, but will think about the potential performance impact of every line of code you write.

This article is reprinted from Huang Yi's blog; thanks to him!

Original link: juejin.cn/post/692264…

Original author PPT link: slides.com/akryum/vuec…