The Engineered Systems column is always posted first on GitHub, where readers can star and follow it, usually more than a week before it is published on any major platform.

Source code and video for this article:

  • The source code
  • The video: production takes more time than writing, so it will be finished within the week. You can keep an eye out for it first

Preface

Readers often ask me: what is front-end engineering? How do you get started with it?

After chatting with them, I came to some conclusions: these readers generally work at small and medium-sized companies with single-digit front-end headcounts, busy day-to-day development, almost no infrastructure within the team, and an ad hoc toolchain. Engineering is so new to them that they either have little idea what it is or think Webpack is all there is to front-end engineering.

I currently work in the infrastructure group of a large company, building foundational services for a front-end team of a hundred or so, and have some modest experience in this field. So front-end engineering will be a series I work on this year, with each installment delivered as an article plus source code plus video.

The outputs of this series are applicable to the following groups:

  • Front-end developers at small and medium-sized companies, where infrastructure is ad hoc and business work keeps them busy, who don't know what to do outside the business to improve their competitiveness and enrich their resumes

  • Teams whose company has no infrastructure plan for now and can only spare time for some low-cost, high-yield products

  • Want to learn about front-end engineering

It should be noted that each output will only be a minimum viable product built at low cost; you can use it to add features, borrow ideas from, or simply learn a little.

What is front-end engineering?

Since this is the first article in this series, I’ll give you an overview of what front-end engineering is.

My understanding is that front-end engineering essentially means improvement engineering, starting with how we write code. For example, writing code in an IDE is a very different experience, in both comfort and efficiency, from writing it in Notepad. Tools such as Webpack likewise help us improve development and build efficiency; I won't list the rest one by one, you get the idea.

Of course, today's topic, performance testing, is also part of engineering. After all, we need a tool that identifies where an app's performance falls short, which helps developers locate problems far more quickly than waiting for users to complain about sluggishness in production.

Why do you need performance checks?

Performance optimization is a topic few front-end developers can avoid: it comes up constantly in interviews, never mind whether a given project actually needs it.

However, doing the optimization alone is not enough. In the end, we need a before-and-after comparison of the data to demonstrate the value of the optimization. After all, quantified data matters a great deal in the workplace: the boss doesn't know exactly what you do, and much of your work is judged by the numbers. The better the numbers, the more convincing your work looks.

To capture the change in the data before and after optimization, we must use tools to measure performance.
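As a minimal sketch of the before-and-after idea, the User Timing API can quantify a single piece of work; record the duration before and after your optimization to show the delta. (`expensiveWork` is a hypothetical stand-in for the code being optimized.)

```typescript
// A hypothetical stand-in for the code being optimized
function expensiveWork(): number {
  let sum = 0
  for (let i = 0; i < 1e6; i++) sum += i
  return sum
}

performance.mark('work-start')
expensiveWork()
performance.mark('work-end')
performance.measure('work', 'work-start', 'work-end')

const [workMeasure] = performance.getEntriesByName('work')
// Record this number before and after the optimization to show the delta
console.log(`work took ${workMeasure.duration.toFixed(2)}ms`)
```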

How should performance be detected?

Performance testing can be done in a number of ways:

  • Chrome developer tools: Performance

  • Lighthouse Open Source Tool

  • Native Performance API

  • Various official libraries, plug-ins

Each of these methods has its advantages. The first two are quick to use and visualize all kinds of metrics, but it's hard to collect real user data with them, because you can't expect users to run these tools. They also have a few minor gaps, for example around redirect counts.

Official libraries and plug-ins cover far less than the first two, providing only a few core metrics.

The native Performance API has compatibility issues, but it covers both development and production, and it also covers the metrics shown in the built-in Performance developer tool. During development we can see the project's performance metrics, and we can also collect data from users' clients, which helps us make better-informed optimization plans. In addition, it gives us a fairly complete range of metrics, so this is the choice for our product.

Of course, this is not an all-or-nothing choice. The Performance panel and the APIs should be used together during development, as they complement each other.

Hands-on practice

Here is the source code for the product: address.

Some performance indicators

Before we get into it, let's take a look at some performance metrics, since many older performance-optimization articles have become a bit outdated. Google is constantly updating its performance metrics, and I wrote a post about the latest ones; if you are interested, you can read it in detail.

Of course, if that's too long to read, the following mind map gives a brief overview:

Of course, beyond these metrics, we also need rich metric content covering the network, file transfer, the DOM, and more.

Using Performance

The Performance interface retrieves performance-related information from the current page and provides high-resolution timestamps, more precise than Date.now(). First, let's look at the API's compatibility:

The compatibility is actually very good: all the major browsers support it.
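A quick illustration of why these timestamps are preferable: performance.now() is monotonic and high-resolution, measured from the time origin, whereas Date.now() is wall-clock time that can jump backwards when the system clock changes.

```typescript
// performance.now() never goes backwards, so deltas are always meaningful
const t1 = performance.now()
for (let i = 0; i < 1e5; i++) {} // burn a little time
const t2 = performance.now()
console.log(t2 >= t1) // true: the clock is monotonic
```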

There is no need to explain the Performance API in detail in this article. If you are interested, you can read the MDN documentation. Here, I will only cover a few important APIs that will be used later.

getEntriesByType

This API lets us get some information by passing in type:

  • frame: timing data for one frame of the event loop
  • resource: detailed network timing data for loading the application's resources
  • mark: information about performance.mark() calls
  • measure: information about performance.measure() calls
  • longtask: information about tasks that take longer than 50ms. This type is deprecated (not marked so in the docs, but Chrome reports it as deprecated when used) and the data can be obtained in other ways
  • navigation: timing properties of browser document events, used as metrics
  • paint: the FP and FCP metrics

The last two types are the key ones for obtaining metrics in performance detection. Of course, if you also want to analyze resource loading, add the resource type.
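A small sketch of getEntriesByType for the two key types, meant to run in a page context. (In environments without these entry types, such as Node, the calls simply return empty arrays.)

```typescript
// Paint entries: one for 'first-paint', one for 'first-contentful-paint'
const paints = performance.getEntriesByType('paint')
paints.forEach((entry) => {
  console.log(entry.name, entry.startTime)
})

// The navigation entry carries the document-level timing fields used later
const [nav] = performance.getEntriesByType('navigation')
if (nav) {
  console.log('total duration:', nav.duration)
}
```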

PerformanceObserver

PerformanceObserver is another API for obtaining performance metrics, used as follows:

const perfObserver = new PerformanceObserver((entryList) => {
  // Process the entries
})
// Pass in the desired type
perfObserver.observe({ type: 'longtask', buffered: true })

By combining getEntriesByType and PerformanceObserver, we can get all the metrics we need.

The code!

Since the source code has been posted, I won't paste large blocks of it here; instead I'll walk through the process from zero to one.

First of all, we must design how users call the SDK (here referring to the performance detection library): what parameters need to be passed, and how are performance metrics obtained and reported?

Generally speaking, calling an SDK means constructing an instance, so we will write it as a class. For now it takes a tracker function for receiving the various metrics and a log variable that determines whether to print metric information. The signature is as follows:

export interface IPerProps {
  tracker?: (type: IPerDataType, data: any, allData: any) => void
  log?: boolean
}

export type IPerDataType =
  | 'navigationTime'
  | 'networkInfo'
  | 'paintTime'
  | 'lcp'
  | 'cls'
  | 'fid'
  | 'tbt'

The Performance API has compatibility issues, so we need to check whether the browser supports Performance before invoking it:

export default class Per {
  constructor(args: IPerProps) {
    // Store parameters
    config.tracker = args.tracker
    if (typeof args.log === 'boolean') config.log = args.log
    // Check compatibility
    if (!isSupportPerformance()) {
      log(`This browser doesn't support Performance API`)
      return
    }
  }
}

export const isSupportPerformance = () => {
  const performance = window.performance
  return (
    performance &&
    !!performance.getEntriesByType &&
    !!performance.now &&
    !!performance.mark
  )
}

Once you’ve done that, you can start writing the code to get the performance metric data.

First, we obtain the document-event metrics via performance.getEntriesByType('navigation').

This API returns timestamps for a great many events; if you want to know what each one means, read the documentation. I won't copy it all here.

Seeing so many fields, some readers may feel dizzy: how do I compute metrics from all this? Don't worry; just look at the following image and combine it with the documentation above:

We don't need every field obtained above; we only expose the important metric information. Translating the diagram and documentation into code:

export const getNavigationTime = () => {
  const navigation = window.performance.getEntriesByType('navigation')
  if (navigation.length > 0) {
    const timing = navigation[0] as PerformanceNavigationTiming
    if (timing) {
      // The destructured field list is too long to post in full
      const { ... } = timing
      return {
        redirect: {
          count: redirectCount,
          time: redirectEnd - redirectStart,
        },
        appCache: domainLookupStart - fetchStart,
        // DNS lookup time
        dnsTime: domainLookupEnd - domainLookupStart,
        // TCP handshake time (connect end - connect start)
        TCP: connectEnd - connectStart,
        // HTTP header size
        headSize: transferSize - encodedBodySize || 0,
        responseTime: responseEnd - responseStart,
        // Time to First Byte
        TTFB: responseStart - requestStart,
        // Fetch resource time
        fetchTime: responseEnd - fetchStart,
        // Service worker response time
        workerTime: workerStart > 0 ? responseEnd - workerStart : 0,
        domReady: domContentLoadedEventEnd - fetchStart,
        // DOMContentLoaded time
        DCL: domContentLoadedEventEnd - domContentLoadedEventStart,
      }
    }
  }
  return {}
}

We can see that many of the metrics above are network-related, so we also need the network environment to analyze them against. Getting network-environment information is very convenient; here is the code:

export const getNetworkInfo = () => {
  if ('connection' in window.navigator) {
    const connection = window.navigator['connection'] || {}
    const { effectiveType, downlink, rtt, saveData } = connection
    return {
      // Network type: 4g, 3g, etc.
      effectiveType,
      // Downlink speed
      downlink,
      // Round-trip time between sending and receiving data
      rtt,
      // Whether data-saver mode is enabled
      saveData,
    }
  }
  return {}
}

With those metrics out of the way, we use PerformanceObserver to get the core experience metrics, such as FP, FCP, and FID, which appeared in the mind map above:

One caveat before we start: a page may load in the background, and metrics captured in that situation are inaccurate. So we need to rule this case out: store a variable, as in the code below, and compare timestamps against it when capturing a metric to determine whether the page was in the background:

document.addEventListener(
  'visibilitychange',
  (event) => {
    // @ts-ignore
    hiddenTime = Math.min(hiddenTime, event.timeStamp)
  },
  { once: true }
)
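hiddenTime itself is assumed to start at Infinity while the page is visible (and 0 if the page loaded hidden), so every metric timestamp compares as "before hidden" until a visibilitychange fires. A pure sketch of that rule (initialHiddenTime is a hypothetical helper for illustration):

```typescript
// Infinity while visible; 0 if the page loaded in the background
const initialHiddenTime = (visibilityState: string): number =>
  visibilityState === 'hidden' ? 0 : Infinity

let hiddenTime = initialHiddenTime('visible') // in the browser: document.visibilityState
console.log(1200 < hiddenTime) // true: a metric at t=1200ms still counts
```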

The following code obtains the metrics. Since they are all fetched the same way, we first encapsulate a helper:

// Encapsulate PerformanceObserver for subsequent calls
export const getObserver = (type: string, cb: IPerCallback) => {
  const perfObserver = new PerformanceObserver((entryList) => {
    cb(entryList.getEntries())
  })
  perfObserver.observe({ type, buffered: true })
}

Let’s first obtain FP and FCP indicators:

export const getPaintTime = () => {
  const data: { [key: string]: number } = {}
  getObserver('paint', (entries) => {
    entries.forEach((entry) => {
      data[entry.name] = entry.startTime
      if (entry.name === 'first-contentful-paint') {
        getLongTask(entry.startTime)
      }
    })
  })
  return data
}

The data structure looks like this:

Note that we must start collecting long tasks only once the FCP metric is in hand, because the TBT metric computed later is based on long tasks that occur after FCP.

export const getLongTask = (fcp: number) => {
  getObserver('longtask', (entries) => {
    entries.forEach((entry) => {
      // Only count long tasks in the fcp -> tti window
      if (entry.name !== 'self' || entry.startTime < fcp) {
        return
      }
      // A long task is one over 50ms; the excess is blocking time
      const blockingTime = entry.duration - 50
      if (blockingTime > 0) tbt += blockingTime
    })
  })
}
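The TBT arithmetic above can be sanity-checked in isolation, as a pure function: each long task contributes its duration beyond the 50ms threshold. (computeTBT is an illustrative helper, not part of the SDK.)

```typescript
// Sum the blocking time (duration beyond 50ms) of each task
const computeTBT = (durations: number[]): number =>
  durations.reduce((tbt, d) => tbt + Math.max(0, d - 50), 0)

console.log(computeTBT([80, 120, 40])) // 100: 30ms + 70ms, and the 40ms task is not long
```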

Next, let’s take the FID indicator. Here’s the code:

export const getFID = () => {
  getObserver('first-input', (entries) => {
    entries.forEach((entry) => {
      if (entry.startTime < hiddenTime) {
        logIndicator('FID', entry.processingStart - entry.startTime)
        // TBT is counted in fcp -> tti
        // This value may be inaccurate, because fid >= tti
        logIndicator('TBT', tbt)
      }
    })
  })
}

The FID metric data looks like this; note that it requires user interaction to trigger:

Along with the FID metric we also report TBT, but the value is not necessarily accurate: TBT is defined as the sum of long-task blocking time between FCP and TTI, but there seems to be no good way to obtain TTI, so FID is used as the endpoint for now.

Finally, the CLS and LCP metrics, shown together:

export const getLCP = () => {
  getObserver('largest-contentful-paint', (entries) => {
    entries.forEach((entry) => {
      if (entry.startTime < hiddenTime) {
        const { startTime, renderTime, size } = entry
        logIndicator('LCP Update', {
          // renderTime can be 0 (e.g. for cross-origin images), fall back to startTime
          time: renderTime || startTime,
          size,
        })
      }
    })
  })
}

export const getCLS = () => {
  getObserver('layout-shift', (entries) => {
    let cls = 0
    entries.forEach((entry) => {
      // Shifts caused by recent user input are excluded
      if (!entry.hadRecentInput) {
        cls += entry.value
      }
    })
    logIndicator('CLS Update', cls)
  })
}

The data structure looks like this:

These two metrics differ from the others in that they are not fixed: they keep updating as new qualifying data arrives.
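The CLS accumulation rule above can also be checked in isolation as a pure function: shifts that follow recent user input are excluded and the rest are summed. (computeCLS and ShiftEntry are illustrative names, not part of the SDK.)

```typescript
// A minimal shape for layout-shift entries
interface ShiftEntry {
  value: number
  hadRecentInput: boolean
}

// Sum the shift values that were not caused by recent user input
const computeCLS = (entries: ShiftEntry[]): number =>
  entries.filter((e) => !e.hadRecentInput).reduce((cls, e) => cls + e.value, 0)

const cls = computeCLS([
  { value: 0.05, hadRecentInput: false },
  { value: 0.2, hadRecentInput: true }, // excluded: caused by user input
  { value: 0.03, hadRecentInput: false },
])
console.log(cls) // ≈ 0.08
```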

Those are all the performance metrics we need to capture. Of course, capturing them is not enough; we also have to expose each piece of data to the user. To unify this, we encapsulate a utility function:

// Print the data
export const logIndicator = (type: string, data: IPerData) => {
  tracker(type, data)
  if (!config.log) return
  // Make the log look nice
  console.log(
    `%cPer%c${type}`,
    'background: #606060; color: white; padding: 1px 10px; border-top-left-radius: 3px; border-bottom-left-radius: 3px;',
    'background: #1475b2; color: white; padding: 1px 10px; border-top-right-radius: 3px; border-bottom-right-radius: 3px;',
    data
  )
}

export default (type: string, data: IPerData) => {
  const currentType = typeMap[type]
  allData[currentType] = data
  // If the user passed a callback, expose the info each time a new metric arrives
  config.tracker && config.tracker(currentType, data, allData)
}

Once the function is wrapped, we can call it like this:

logIndicator('FID', entry.processingStart - entry.startTime)

At this point the outline of our SDK is complete, and we can add minor features as needed, such as metric scoring.

The score thresholds are official recommendations; you can see the numbers defined on the official blog or in my article.

The code is not complex; let's take scoring the FCP metric as an example:

export const scores: Record<string, number[]> = {
  fcp: [2000, 4000],
  lcp: [2500, 4500],
  fid: [100, 300],
  tbt: [300, 600],
  cls: [0.1, 0.25],
}

export const scoreLevel = ['good', 'needsImprovement', 'poor']

export const getScore = (type: string, data: number) => {
  const score = scores[type]
  for (let i = 0; i < score.length; i++) {
    if (data <= score[i]) return scoreLevel[i]
  }
  return scoreLevel[2]
}
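To make the threshold rule concrete: at or below the first bound is 'good', at or below the second is 'needsImprovement', and above both is 'poor'. A self-contained check (the thresholds are repeated here, with hypothetical names, so the snippet runs on its own):

```typescript
// The good / needsImprovement boundaries, copied from the table above
const thresholds: Record<string, number[]> = {
  fcp: [2000, 4000],
  lcp: [2500, 4500],
}
const levels = ['good', 'needsImprovement', 'poor']

// Return the first level whose bound the value does not exceed
const scoreOf = (type: string, value: number): string => {
  const bounds = thresholds[type]
  for (let i = 0; i < bounds.length; i++) {
    if (value <= bounds[i]) return levels[i]
  }
  return levels[2]
}

console.log(scoreOf('fcp', 1500)) // 'good'
console.log(scoreOf('fcp', 3000)) // 'needsImprovement'
console.log(scoreOf('lcp', 5000)) // 'poor'
```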

The above are the score-related utilities, which simply follow the official recommendations. Then we just add a line of code where we capture the metric:

export const getPaintTime = () => {
  getObserver('paint', (entries) => {
    entries.forEach((entry) => {
      const time = entry.startTime
      const name = entry.name
      if (name === 'first-contentful-paint') {
        getLongTask(time)
        logIndicator('FCP', {
          time,
          score: getScore('fcp', time),
        })
      } else {
        logIndicator('FP', {
          time,
        })
      }
    })
  })
}

That's the end. If you're interested, you can come read the source code here; it isn't many lines anyway.

Finally

This article was written over a weekend, so it is a little hasty. If there are any mistakes, please point them out; you are also welcome to discuss them with me.

For more posts, follow me on GitHub or join the group to talk about front-end engineering.