Preface

Performance optimization is something we always need to pay attention to, especially for consumer-facing (C-end) products. There are already plenty of articles and tutorials on how to optimize; what we also need, once the optimization is done, is intuitive data on how much performance actually improved, so that we can work more efficiently and quantify the results we have achieved.

Demo:

Source code: PMAT

Existing tools

There are many ways to measure performance today:

  • Chrome DevTools: Performance panel
  • Lighthouse open-source tool
  • Native Performance API
  • Various official libraries and plug-ins

These tools each have their own advantages, but also certain limitations. For example, Lighthouse visualizes all kinds of indicators nicely, but it cannot satisfy some specific requirements, such as projects where you need to log in before you can reach a particular page…

What we need to do is combine some of their characteristics into something that fits our needs. And for those who are not yet familiar with the native Performance API, this is also a good opportunity to learn and practice.

Prerequisite knowledge

Before you start working on the tool, you need to know about the native Performance API.

Here is a guide to collecting web performance information you should know, written in 2018; although it has been a while, it still holds up, so I will not go into the details here.
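As a quick refresher, the two pieces of the API that matter most here are performance.getEntriesByType() for navigation timing and PerformanceObserver for paint and other metrics. A minimal sketch you can paste into the browser console (not part of the tool itself):

// Read the navigation timing the browser has already recorded
const [nav] = performance.getEntriesByType('navigation');
console.log('DNS lookup (ms):', nav.domainLookupEnd - nav.domainLookupStart);

// Subscribe to paint entries as they are produced
const paintObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(entry.name, entry.startTime);
  }
});
paintObserver.observe({ type: 'paint', buffered: true });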

What kind of tool do we want

To be clear, my idea is a combination of per-moniteur and hiper, two tools written by community experts:

  • per-moniteur: injects JS into the project and uses PerformanceObserver to monitor page performance, printing the results to the console; it mainly monitors performance indicators such as FCP and LCP
  • hiper: a command-line tool that uses puppeteer to launch a headless browser several times for the given URL and returns the averaged monitoring data, but it only calculates the (deprecated) performance.timing data

My idea, therefore, is to combine the two: you enter a URL and the number of requests, and the tool reports the page's performance indicators (both basic timing data and performance indicator data); you can also configure puppeteer options such as cache settings and so on.
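To make the inputs concrete, here is a rough sketch of what the options object could look like, inferred from the fields the code below destructures (url, count, cache, javascript, online, useragent, tti); the actual ICliOptions in the repository may differ:

// Hypothetical shape of the CLI options, inferred from how they are used below
interface ICliOptions {
  url: string;         // address to test
  count: number;       // how many times to open the page (defaults to 3)
  cache: boolean;      // whether requests may use the browser cache
  javascript: boolean; // whether JS is enabled
  online: boolean;     // false switches puppeteer to offline mode
  useragent?: string;  // optional custom User-Agent
  tti: boolean;        // whether to measure TTI via tti-polyfill
}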

Getting started

The tool is developed in TypeScript and compiled to CommonJS before release. There's nothing special about that.

○ The entry point

class Pmat {
  // Command entry, parses parameters
  public cli: Cli;
  // Puppeteer starts the headless browser
  public puppeteer: Puppeteer;
  // Performance detection object
  public observer: Observer;

  constructor() {
    this.cli = new Cli();
    this.puppeteer = new Puppeteer();
    this.observer = new Observer();
  }

  async run() {
    // Get the parameters returned by the command line
    const options = await this.cli.monitor();
    // Initialize the headless browser
    const puppeteer = await this.puppeteer.init(options);

    const { count, url } = options;
    const { page, browser } = puppeteer;

    // listr2 creates the task list
    const task = new Listr([
      {
        title: 'start executing',
        task: async () => {
          // Open the page as many times as the input requests
          for (let i = 0; i < count; i += 1) {
            // Run the beforeStart lifecycle hook
            await this.observer.beforeStart();
            await page.goto(url, { waitUntil: 'load' });
            // Start measuring
            await this.observer.start();
          }
        },
      },
      {
        title: 'start calculating',
        task: async () => {
          // Average the values collected across runs
          await this.observer.calculate();
        },
      },
    ]);

    // ...
  }
}

The code above is the entry point: it performs the corresponding operations based on the parameters passed in on the command line.

The important thing to note here is that count defaults to 3: detecting TTI requires injecting an external JS library, which takes a lot of time, so try not to set count too high, or simply choose not to detect the TTI metric at all.

○ Creating a headless browser

import puppeteer from 'puppeteer';

import type { IPuppeteerOutput } from './interface';
import type { ICliOptions } from '../cli/interface';

class Puppeteer {
  async init(options?: ICliOptions): Promise<IPuppeteerOutput> {
    const browser = await puppeteer.launch({
      product: 'chrome',
    });
    const page = await browser.newPage();

    // Get command-line arguments
    const { cache, javascript, online, useragent, tti } = options;

    // Whether each request uses the cache
    await page.setCacheEnabled(cache);
    // Whether to enable JS
    await page.setJavaScriptEnabled(javascript);
    // Whether to enable offline mode
    await page.setOfflineMode(!online);

    if (useragent) {
      await page.setUserAgent(useragent);
    }

    return { page, browser, tti };
  }
}

export default Puppeteer;

This simply uses puppeteer to create a headless browser and set browser properties based on the command-line input. See the official documentation for more API usage.
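As a rough usage sketch (it simply mirrors the run() method shown earlier; error handling and the per-run loop are omitted):

const options = await new Cli().monitor();
const { page, browser } = await new Puppeteer().init(options);

await page.goto(options.url, { waitUntil: 'load' });
// ... collect metrics via the observer ...

// presumably the browser is closed once all runs have finished
await browser.close();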

Computing performance

○ Calculate navigation data

Navigation timing refers to the metrics about browser document events that are obtained through the Performance interface. Concretely, performance.getEntriesByType('navigation') returns a PerformanceNavigationTiming entry that records a timestamp for each stage of the navigation.

With these timestamps we can easily compute the data we want; the calculations are as follows:

Duration: duration
Redirect: redirectEnd - redirectStart
AppCache: domainLookupStart - fetchStart
DNS: domainLookupEnd - domainLookupStart
TCP: connectEnd - connectStart
First byte (TTFB): responseStart - requestStart
Download: responseEnd - responseStart
White screen time: domInteractive - fetchStart
DOMReady: domContentLoadedEventEnd - fetchStart
Load: domContentLoadedEventEnd - domContentLoadedEventStart
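As a sketch of how these formulas translate into code (in the tool this would run inside page.evaluate(); the actual implementation may differ slightly):

function getNavigationMetrics() {
  // There is exactly one navigation entry per document
  const [t] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];

  return {
    duration: t.duration,
    redirect: t.redirectEnd - t.redirectStart,
    appCache: t.domainLookupStart - t.fetchStart,
    dns: t.domainLookupEnd - t.domainLookupStart,
    tcp: t.connectEnd - t.connectStart,
    firstByte: t.responseStart - t.requestStart,
    download: t.responseEnd - t.responseStart,
    whiteScreen: t.domInteractive - t.fetchStart,
    domReady: t.domContentLoadedEventEnd - t.fetchStart,
    load: t.domContentLoadedEventEnd - t.domContentLoadedEventStart,
  };
}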

Note: because the tool requests the address multiple times, it needs to average the data collected across all runs.
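A minimal averaging helper, as a sketch of what that calculation step might look like (the actual Observer.calculate() is not shown in this article):

// Average every metric across all runs
function average(runs: Record<string, number>[]): Record<string, number> {
  const result: Record<string, number> = {};

  for (const run of runs) {
    for (const [key, value] of Object.entries(run)) {
      result[key] = (result[key] ?? 0) + value / runs.length;
    }
  }

  return result;
}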

○ Calculate performance indicators

In order to quantify performance optimization in more detail, we need to collect the series of performance indicators proposed by Google. Google keeps updating these indicators, so some of the calculations below may not be entirely accurate and are for reference only. For an overview of the current indicators, see "Are you still reading those same old performance tuning articles? Take a look at these latest performance indicators".

PerformanceObserver is used as the listener to collect the desired performance metrics; the general pattern looks like this:

const perfObserver = new PerformanceObserver((entryList) => {
  // Process the collected entries
});

// Pass in the desired entry types
perfObserver.observe({ entryTypes: ['paint'] });

Note: these performance metrics are fairly simple to calculate in the browser, but collecting them through puppeteer involves some pitfalls. For details, see Web Performance Recipes With Puppeteer.

FP & FCP

  • FP (First Paint): records the time when the page first paints any pixel.

  • FCP (First Contentful Paint): records the time when the page first paints text, an image, a non-blank canvas, or SVG.

page.evaluateOnNewDocument(getPaint);

function getPaint() {
  window.FP = 0;
  window.FCP = 0;

  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      const { startTime, name } = entry;
      if (name === 'first-contentful-paint') {
        window.FCP = startTime;
      } else {
        window.FP = startTime;
      }
    }
  });

  observer.observe({ entryTypes: ['paint'] });
}

LCP

  • LCP (Largest Contentful Paint): records the render time of the largest content element visible in the viewport.
await page.evaluateOnNewDocument(calcLCP);
await page.goto(url, { waitUntil: 'load', timeout: 60000 });

let lcp = await page.evaluate(() => {
  return window.largestContentfulPaint;
});

function calcLCP() {
  window.largestContentfulPaint = 0;

  const observer = new PerformanceObserver((entryList) => {
    const entries = entryList.getEntries();
    const lastEntry = entries[entries.length - 1];
    window.largestContentfulPaint = lastEntry.renderTime || lastEntry.loadTime;
  });

  observer.observe({ type: 'largest-contentful-paint', buffered: true });

  document.addEventListener('visibilitychange', () => {
    if (document.visibilityState === 'hidden') {
      observer.takeRecords();
      observer.disconnect();
      console.log('LCP:', window.largestContentfulPaint);
    }
  });
}

CLS

  • CLS (Cumulative Layout Shift): records the unexpected layout shifts that occur on the page.
window.CLS = 0;

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    if (!entry.hadRecentInput) {
      window.CLS += entry.value;
    }
  }
});
observer.observe({ entryTypes: ['layout-shift'] });

TTI

  • TTI (Time to Interactive): the time until the page can first respond to interaction. Roughly speaking, it is the time from FCP until the page can reliably react to user input, and it is a very important metric that broadly reflects the page's performance. Its calculation must satisfy the following conditions:
  1. Counting starts from FCP
  2. There is a 5-second quiet window with no long task (a task running longer than 50 ms) and no more than two in-flight GET requests
  3. Search backwards from that quiet window to the end of the last long task

The TTI calculation uses tti-polyfill:

window.__tti = { e: [] };

const observer = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  window.__tti.e = window.__tti.e.concat(entries);
});
observer.observe({ entryTypes: ['longtask'] });

// ...

await page.addScriptTag({ path: './node_modules/tti-polyfill/tti-polyfill.js' });

// Time to Interactive
TTI = await page.evaluate(() =>
  window.ttiPolyfill ? window.ttiPolyfill.getFirstConsistentlyInteractive() : -1,
);

Because computing TTI takes an extremely long time, it is recommended to turn it off, or at least to be careful not to set count (default 3) too high.

FID

  • FID (First Input Delay): records the input delay of the user's first interaction with the page, which happens somewhere between FCP and TTI.
window.FID = 0;

const observer = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    window.FID = entry.processingStart - entry.startTime;
  }
});
observer.observe({ type: 'first-input', buffered: true });

TBT

  • TBT (Total Blocking Time): the total blocking time of all long tasks between FCP and TTI.
window.TBT = 0;

const observer = new PerformanceObserver((list) => {
  const fcp = performance.getEntriesByName('first-contentful-paint')[0].startTime;

  for (const entry of list.getEntries()) {
    // Only count self tasks that happen after FCP
    if (entry.name !== 'self' || entry.startTime < fcp) {
      continue;
    }
    // A long task is anything over 50 ms; the excess is blocking time
    const blockingTime = entry.duration - 50;
    if (blockingTime > 0) window.TBT += blockingTime;
  }
});
observer.observe({ entryTypes: ['longtask'] });

Finally

I wrote this tool in my spare time: partly to learn about performance testing, and partly so that when I optimize a project's performance I can see the amount of improvement more intuitively. Any questions or corrections are welcome. Thank you.