This blog records technical points I encounter at work, along with some of my own thoughts on life, published on Wednesdays and Fridays.


Is front-end performance optimization hard? It is. Really hard! But perhaps not as hard as it seems.

There are plenty of detailed articles about front-end performance optimization: reduce DOM manipulation, compress code files, compress images, use a CDN, and so on. These are common recommendations because, fundamentally, front-end performance is affected by three factors: network bandwidth, API response time, and page rendering speed.

Of these three, the first two fall into one category; the one that is truly in the front end's hands is page rendering speed. So what does page rendering speed depend on?

In general, we think of DOM elements: the more DOM elements, the longer the page takes to render. We all know this, but we often miss an important point: most front-end development today uses Vue or React. Both frameworks are data-driven, so when we render a list or a table, a major factor in rendering efficiency is the time spent computing the data.

So what affects the data computation time? Obviously, the amount of data and the efficiency of JS execution. The smaller the data set and the more efficient the JS, the faster the results come back and the shorter the rendering time. The larger the data set and the less efficient the JS, the longer the page takes to render.
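To make this concrete, here is a minimal sketch (not from the original project; `buildRows` and `transform` are illustrative names) showing how both row count and property count drive the pre-render computation time:

```javascript
// Illustrative only: processing time grows with both the number of rows
// and the number of properties per row.
function buildRows(rowCount, propCount) {
  const rows = []
  for (let i = 0; i < rowCount; i++) {
    const row = {}
    for (let p = 0; p < propCount; p++) row[`field${p}`] = i + p
    rows.push(row)
  }
  return rows
}

function transform(rows) {
  // A typical pre-render pass: touch every property of every row.
  return rows.map((row) => {
    const out = {}
    for (const key in row) out[key] = row[key]
    return out
  })
}

const small = buildRows(1000, 10)   // 10,000 property reads
const large = buildRows(30000, 100) // 3,000,000 property reads

let t0 = performance.now()
transform(small)
const smallMs = performance.now() - t0

t0 = performance.now()
transform(large)
const largeMs = performance.now() - t0

console.log(`small: ${smallMs.toFixed(1)}ms, large: ${largeMs.toFixed(1)}ms`)
```

The exact timings depend on the runtime, but the large set does 300x the work of the small one before a single DOM node is created.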

JS execution efficiency is generally understood as the efficiency of JS loops, which can be treated as roughly fixed. The following code takes approximately 5 seconds to execute:

```javascript
function test() {
  let start = performance.now()
  for (let i = 0; i < 30000; i++) {
    console.log('i')
  }
  let end = performance.now()
  console.log(end - start) // ~5000ms
}
```

It's just a simple loop over 30,000 values with nothing but a log statement in the body. But once the loop body contains more complex operations, such as conditionals, object assignments, or one or more nested inner loops, the time it takes grows rapidly, quadratically or worse for nested loops.
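As a rough illustration (hypothetical functions, not from this project), compare a flat loop with a nested one: the nested version does n × n iterations, so doubling n quadruples the work:

```javascript
// A flat loop does n iterations of work.
function flatScan(n) {
  let sum = 0
  for (let i = 0; i < n; i++) sum += i
  return sum
}

// Nesting a second loop inside turns O(n) work into O(n^2).
function nestedScan(n) {
  let count = 0
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) count++
  }
  return count
}

console.log(flatScan(10))   // 45: n iterations
console.log(nestedScan(10)) // 100: n * n iterations
```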

This is especially true when the properties of each row are not fixed. Typically, a table row has ten or twenty properties, but sometimes there are more than a hundred!

Yes, you read that right: more than a hundred! At that point, if you're iterating over an array 28,000 items long where each item has 100 properties, you just have to wait. There's nothing else you can do.

Here’s a real-world example:

A project I worked on recently had a table with multi-level column headers. The number of headers wasn't fixed, and neither was the number of header groups: up to 720 header columns, with up to 8 columns per header group. The grouped headers had to be computed dynamically, and the table data also had to be computed on the front end. All told, a single row could have up to 720 × 8 × n fields, where n is the number of required fields. On top of that, there was no back-end paging: 30,000 rows were returned at once.

How do you optimize the front end in a case like this? You'll find that once the data volume exceeds a certain threshold, there's a 90% chance the page simply crashes; even when it doesn't, rendering takes more than ten minutes.

Some might say: this is a case for slicing, so group the data. Data grouping is essentially time-slicing. For example, where you used to generate 10,000 rows at once, you now generate 500 rows per second. The effect isn't dramatic in the scenario above, but there's no particularly better option.

My time-slicing function looks like this:

```javascript
// Assumes `data`, `coreIndexListObj`, and `tableDataSource` exist in the
// surrounding scope (component state in the original project).
const genTableDataSource = (arr) => {
  let obj = {}
  if (arr.length) {
    arr.forEach((item) => {
      data.xAxis.forEach((v) => {
        if (item.dimension === v.dimension) {
          obj.store_name = item.store_name
          // Flatten each core index field under a per-column key.
          for (let k in coreIndexListObj) {
            obj[`${v.title}${coreIndexListObj[k].field}`] =
              item[coreIndexListObj[k].field]
          }
        }
      })
    })
  }
  tableDataSource.push(obj)
}
```
      
```javascript
// Processes `arr` in batches of `count` (default 2), one batch every 500ms.
const timeChunk = (arr, fn, count) => {
  let keys = Object.keys(arr)
  let timer
  const start = () => {
    for (let i = 0; i < Math.min(count || 2, keys.length); i++) {
      fn(arr[keys.shift()])
    }
  }
  return function () {
    timer = setInterval(() => {
      if (keys.length === 0) {
        clearInterval(timer)
        return
      }
      start()
    }, 500)
  }
}

let genFinal = timeChunk(rebuildList, genTableDataSource)
genFinal()
```
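As a variation on the same time-slicing idea, here is a hedged sketch (the function name and defaults are my own, not from the project above) that processes one chunk per timer tick and resolves a Promise when everything is done:

```javascript
// Sketch: process `items` in chunks, yielding to the event loop between
// chunks so the UI stays responsive. chunkSize and delayMs are illustrative.
function processInChunks(items, handleItem, chunkSize = 500, delayMs = 0) {
  return new Promise((resolve) => {
    let index = 0
    const step = () => {
      const end = Math.min(index + chunkSize, items.length)
      for (; index < end; index++) handleItem(items[index])
      if (index < items.length) {
        setTimeout(step, delayMs) // yield between chunks
      } else {
        resolve()
      }
    }
    step()
  })
}
```

Used with the functions above, it would look something like `processInChunks(rebuildList, genTableDataSource, 500).then(() => console.log('done'))`.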

In my opinion, front-end performance optimization still needs case-by-case analysis. In a typical project, are there really that many scenarios to optimize? Not necessarily, but we need to have our own solutions ready for certain specific scenarios.

Technical options include paging, image compression, web caching, Nginx caching, CDNs, and so on. You can also combine these with non-technical solutions: rework the interaction design and reduce business complexity.

In short, performance optimization needs to be analyzed on a case-by-case basis.


Thank you for your likes, shares, and follows.