
Contents


  • ## Contents
  • ## Introduction
    • 1. What is hexo?
    • 2. What is optimization?
  • ## Performance optimization indicators
    • 1. Overall operating performance
    • 2. Website accessibility
    • 3. Does the site apply best practice strategies
      • > 1) target="_blank" <a> links without a declared rel="noopener noreferrer" pose security risks.
      • > 2) Check whether there are alarms and error messages on the console of the browser, and locate and rectify the fault according to the instructions.
      • > 3) HTTP and HTTPS addresses are not used together
      • > 4) Avoid AppCache
      • > 5) Avoid using document.write()
      • > 6) Avoid using mutation events
      • > 7) Avoid Web SQL
      • > 8) Avoid loading too large a DOM tree
      • > 9) Allow users to paste passwords
    • 4. Website search engine optimization SEO
  • ## Performance testing tool Lighthouse
  • ## Website optimization based on hexo framework
    • 1. Optimize the load time of resources
      • ➣ Asynchronously load script resources using defer/async
      • ➣ Load external resources asynchronously using an async function
      • ➣ Use the browser onfocus event to delay external resource loading
      • ➣ Use preload/prefetch/preconnect preload optimization
      • ➣ Use hexo plugins to compress code files and image files
      • ➣ Write the hexo-img-lazyload plugin to add lazy image loading
      • ➣ Use the IntersectionObserver API to lazily load other resources
      • ➣ Load external dependency scripts using CDN
      • ➣ Use APlayer instead of an iframe to load NetEase Cloud Music
    • 2. Optimize interface performance
      • ➣ Optimize page reflow and repaint
      • ➣ Optimize scroll event listening with throttling and debouncing
      • ➣ IntersectionObserver API polyfill compatibility strategy
      • ➣ Use IntersectionObserver to replace native onscroll event listeners
    • 3. Website best practices
      • ➣ Use the hexo-abbrlink plugin to generate article links
      • ➣ Use hexo-filter-nofollow to avoid security risks
      • ➣ Use hexo-deployer-git and GitHub Workflow for automated deployment
    • 4. Website SEO optimization
      • ➣ Automatically generate a sitemap using the hexo-generator-sitemap plugin
      • ➣ Submit your sitemap to Google Search Console
  • ## References
  • ## Epilogue

Introduction


1. What is hexo?

Hexo is a static site generator: it lets a website query and display data in the interface without relying on a back end. For example, I previously built a blog site with Node.js as the back end, which meant dealing with database storage, front-end/back-end interaction, and back-end server deployment. The whole process is fairly involved, although in the early days it is a reasonable way for front-end developers to learn personal website construction. Hexo simplifies this by compiling both data storage and data retrieval into static front-end resources at build time. Take pagination as an example: a blog needs paging to fetch lists of posts. Under the traditional development model, the "next page" operation is initiated by a front-end script and processed and answered by the back-end server. With Hexo, that whole exchange happens locally, because Hexo builds local indexes for all static resources ahead of time.

With Hexo, writers typically focus only on writing Markdown articles, leaving the rest of the compiling, building, and publishing process to the framework, which packages all content into static HTML/CSS/JS files. Hexo supports custom plugins and has a plugin community where writers with front-end skills can publish their own plugins as open source.

2. What is optimization?

The "performance" we usually talk about tends to focus on how fast a site runs, including how quickly static resources and scripts are fetched and how smoothly the site's UI runs. Optimization in the broad sense, though, should include: website performance optimization, accessibility optimization, SEO optimization, best practices, and so on.

Performance optimization indicators


1. Overall operating performance

  • FCP (First Contentful Paint): the time from when the user initiates a web request to when the browser first renders page content. This first render covers the page's text, images, and HTML DOM structure, but not content inside iframes. The metric is usually used to measure the speed of the first network exchange between client and server.

  • TTI (Time to Interactive): the time from when the user navigates to the site until the page becomes fully interactive. Full interactivity means the site shows actually usable content, events on visible elements have been successfully bound (click, drag, etc.), and the page responds to user interaction within 50 ms.

  • SI (Speed Index): measures how quickly content is visually displayed while the page loads; in plain terms, the speed at which the site's interface elements are drawn and rendered. Lighthouse captures multiple frames of the page loading in the browser and computes the visual rendering progress between those frames.

  • TBT (Total Blocking Time): measures how long, between first render (FCP) and full interactivity (TTI), the page was unable to respond to input. When we visit a website, there is usually a short period after the overall layout appears during which mouse clicks and key presses get no reaction, because the browser is still loading and executing scripts and styles.

  • LCP (Largest Contentful Paint): measures how long it takes to draw the largest content element in the viewport to the screen, usually covering the entire download, parse, and render of that element.

  • CLS (Cumulative Layout Shift): a numeric indicator of how much the overall layout shifts while the site loads. If the interface jitters and flickers several times during loading, users may feel mild discomfort, so reflows and repaints during load should be minimized.

2. Website accessibility

  • The contrast between the site's background color and text foreground should not be too low, otherwise text becomes hard to read.
  • Page link tags <a> should preferably include a description of the link target, for example: <a href="https://github.com/nojsja">nojsja - GitHub personal homepage</a>.
  • The html element declares a lang attribute specifying the current locale.
  • Proper semantic HTML tags let keyboards and screen readers work correctly. Typically, the structure of a web page can be described with semantic tags as:
<html lang="en">
  <head>
    <title>Document title</title>
    <meta charset="utf-8">
  </head>
  <body>
    <a class="skip-link" href="#maincontent">Skip to main</a>
    <h1>Page title</h1>
    <nav>
      <ul>
        <li>
          <a href="https://google.com">Nav link</a>
        </li>
      </ul>
    </nav>
    <header>header</header>
    <main id="maincontent">
      <section>
        <h2>Section heading</h2>
	      <p>text info</p>
        <h3>Sub-section heading</h3>
          <p>text info</p>
      </section>
    </main>
    <footer>footer</footer>
  </body>
</html>
  • IDs of interface elements should be unique.
  • img tags declare an alt attribute. It specifies alternative text shown in place of the image when the image cannot be displayed or the user has disabled images.
  • Form elements declare an internal label tag so that screen readers work correctly.
  • iframe elements declare a title attribute describing their contents so that screen readers can work.
  • For the use of ARIA accessibility attributes and tags, see >> Aria Reference
  • input[type=image] and object tags add an alt attribute declaration:
<input type="image" alt="Sign in" src="./sign-in-button.png">
<object alt="report.pdf" type="application/pdf" data="/report.pdf">
Annual report.
</object>
  • Elements that need TAB-focus behavior can declare a tabindex attribute, and focus moves through them in turn as TAB is pressed. In keyboard navigation order, elements whose tabindex is 0, invalid, or absent should come after elements with a positive tabindex value:
1) tabindex = negative (usually tabindex="-1"): the element is focusable but cannot be reached by keyboard navigation, which is useful when using JS to implement a page widget's internal keyboard navigation.
2) tabindex="0": the element is focusable and reachable by keyboard navigation; its relative order follows the current DOM structure.
3) tabindex = positive: the element is focusable and reachable by keyboard navigation; its relative order follows the tabindex value, with larger values focusing later. If multiple elements share the same tabindex, their relative order follows their order in the current DOM.
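As a small illustration of the three cases (elements and text here are arbitrary):

```html
<div tabindex="1">Focused first: positive values come earliest in TAB order</div>
<button>Native focusable element (implicit tabindex of 0)</button>
<div tabindex="0">Focusable and reachable by TAB, in DOM order</div>
<div tabindex="-1">Focusable by script only; skipped when tabbing</div>
```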
  • In table elements, use th and scope correctly so that row and column headers map one-to-one to their data fields:
<table>
  <caption>My marathon training log</caption>
  <thead>
    <tr>
      <th scope="col">Week</th>
      <th scope="col">Total miles</th>
      <th scope="col">Longest run</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <th scope="row">1</th>
      <td>14</td>
      <td>5</td>
    </tr>
    <tr>
      <th scope="row">2</th>
      <td>16</td>
      <td>6</td>
    </tr>
  </tbody>
</table>
  • video elements specify track captioning resources for hearing-impaired users (caption resource files required):
<video width="300" height="200">
    <source src="marathonFinishLine.mp4" type="video/mp4">
    <track src="captions_en.vtt" kind="captions" srclang="en" label="english_captions">
    <track src="audio_desc_en.vtt" kind="descriptions" srclang="en" label="english_description">
    <track src="captions_es.vtt" kind="captions" srclang="es" label="spanish_captions">
    <track src="audio_desc_es.vtt" kind="descriptions" srclang="es" label="spanish_description">
</video>
  • li list tags are placed inside ul/ol container elements.
  • Heading tags are declared strictly in ascending order and paired with section or other elements (such as the p tag) that correctly reflect the content structure of the interface:
<h1>Page title</h1>
<section>
  <h2>Section Heading</h2><h3>Sub-section Heading</h3>
</section>
  • Use <meta charset="UTF-8"> to specify the site's character-set encoding.
  • The aspect ratio of the image resource referenced by an img element should match the ratio at which the img displays it, otherwise the image may be distorted.
  • Add <!DOCTYPE html> to prevent browser rendering anomalies.

3. Does the site apply best practice strategies

> 1) target="_blank" <a> links without a declared rel="noopener noreferrer" pose security risks

When a page links to another page using target="_blank", the new page runs in the same process as the old one. If the new page executes expensive JavaScript, the old page's performance may suffer. Moreover, the new page can access the old page's window object through window.opener; for example, it can navigate the old page to a different URL with window.opener.location = url.
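A safe external link therefore declares both values (the URL is a placeholder):

```html
<a href="https://example.com" target="_blank" rel="noopener noreferrer">External link</a>
```

rel="noopener" detaches window.opener in the new page, and noreferrer additionally suppresses the Referer header.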

> 2) Check the browser console for warnings and error messages, and locate and fix problems according to those messages.

> 3) HTTP and HTTPS addresses are not used together

Browsers are gradually prohibiting mixed-protocol resources, for example a page served over HTTPS loading resources whose addresses start with http://. The following situations may therefore occur:

  • The mixed content is loaded, but warnings appear;
  • The mixed content is not loaded, and blank content is displayed instead;
  • Before loading the mixed content, the browser prompts whether to "display" or "block" it because of the security risk.

Consider the following ways to optimize:

  • Move the external resources mixed into partially insecure pages onto your own server's hosting;
  • For sites deployed over HTTPS, declare <meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests"> on the page to upgrade HTTP requests to HTTPS;
  • For sites deployed over HTTPS, adding the response header header("Content-Security-Policy: upgrade-insecure-requests") on the server side achieves the same effect;
  • For sites that support both HTTP and HTTPS access, consider a protocol-relative URL instead of naming the protocol in plaintext: <script src="//path/to/js"></script>; the browser switches the request protocol dynamically based on the current page's protocol.
  • Similar to protocol-relative URLs, relative URLs also do the trick, but add coupling between resources: <script src="./path/to/js"></script>
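As a rough sketch of the protocol-relative approach (toProtocolRelative is a name made up for illustration, not part of hexo or any library), a build step could rewrite absolute http:// references in the generated HTML:

```javascript
// Hypothetical helper: rewrite absolute http:// values in src/href attributes
// to protocol-relative form, so the browser follows the current page's protocol.
function toProtocolRelative(html) {
  // only touches http:// (not https://) immediately after the opening quote
  return html.replace(/(src|href)=(["'])http:\/\//gi, '$1=$2//');
}
```

For example, `toProtocolRelative('<script src="http://cdn.example.com/a.js"></script>')` yields a `src="//cdn.example.com/a.js"` reference, while https:// URLs are left untouched.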

> 4) Avoid AppCache

Consider using the service worker’s Cache API.

> 5) Avoid using document.write()

For users on slow networks (2G, 3G, or slow WLAN), dynamically injecting external scripts through document.write() can delay the display of page content by tens of seconds.

> 6) Avoid using mutation events

The following mutation events impair performance and have been deprecated in the DOM event specification:

  • DOMAttrModified
  • DOMAttributeNameChanged
  • DOMCharacterDataModified
  • DOMElementNameChanged
  • DOMNodeInserted
  • DOMNodeInsertedIntoDocument
  • DOMNodeRemoved
  • DOMNodeRemovedFromDocument
  • DOMSubtreeModified

MutationObserver is recommended instead
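A minimal sketch of the replacement (observeChildChanges is an illustrative name): a single MutationObserver covers what DOMNodeInserted, DOMNodeRemoved, and DOMSubtreeModified used to report:

```javascript
// Watch a subtree for added/removed children using MutationObserver,
// the recommended replacement for the deprecated mutation events.
function observeChildChanges(target, onChange) {
  var observer = new MutationObserver(function (mutations) {
    mutations.forEach(function (m) {
      if (m.type === 'childList') {
        onChange(m.addedNodes, m.removedNodes);
      }
    });
  });
  observer.observe(target, { childList: true, subtree: true });
  return observer; // call observer.disconnect() when done
}
```

Unlike the old events, which fired synchronously on every single change, MutationObserver batches mutations and delivers them asynchronously, so it does not stall DOM operations.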

> 7) Avoid Web SQL

You are advised to replace it with IndexedDB

> 8) Avoid loading too large a DOM tree

Large DOM trees can degrade page performance in a number of ways:

  • Network efficiency and load performance: a server that ships a large DOM tree may be transferring many unnecessary bytes. It can also slow page load, since the browser may parse many nodes that are never displayed on screen.
  • Runtime performance: as users and scripts interact with the page, the browser must repeatedly recompute node positions and styles. A large DOM tree combined with complex style rules can slow rendering significantly.
  • Memory performance: broad query selectors such as document.querySelectorAll('li') may inadvertently store references to a huge number of nodes, which can overwhelm the memory of users' devices.

An optimal DOM tree:

  • has fewer than 1,500 nodes in total;
  • has a maximum depth of 32 nodes;
  • has no parent node with more than 60 children.
  • In general, create DOM nodes only when you need them and destroy them when you no longer do.
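The thresholds above can be checked with a short walk over the tree. This sketch (auditTree is a hypothetical helper, not a Lighthouse API) uses plain {children: [...]} objects so the logic is easy to follow; in the browser you would start from document.body and read element.children:

```javascript
// Audit a node tree against the size thresholds listed above.
function auditTree(root) {
  var stats = { nodes: 0, maxDepth: 0, maxChildren: 0 };
  (function walk(node, depth) {            // depth counted in nodes, root = 1
    stats.nodes += 1;
    stats.maxDepth = Math.max(stats.maxDepth, depth);
    var children = node.children || [];
    stats.maxChildren = Math.max(stats.maxChildren, children.length);
    children.forEach(function (child) { walk(child, depth + 1); });
  })(root, 1);
  return {
    stats: stats,
    ok: stats.nodes < 1500 && stats.maxDepth <= 32 && stats.maxChildren <= 60
  };
}
```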

> 9) Allow users to paste passwords

Allowing password pasting improves security because it lets users rely on password managers. Password managers typically generate strong passwords, store them securely, and paste them into the password field automatically when the user logs in. Remove any code that prevents users from pasting into password fields. Setting a breakpoint on the Clipboard paste event (in the browser's event-listener breakpoints) quickly locates such code. For example, code like the following prevents pasting passwords:

var input = document.querySelector('input');
input.addEventListener('paste', (e) => {
  e.preventDefault();
});

4. Website search engine optimization SEO

  • Add a viewport declaration <meta name="viewport"> specifying width=device-width to optimize mobile display.
  • Add a title tag to the document so that screen readers and search engines correctly identify the site's content.
  • Add a meta description tag describing the site's content: <meta name="description" content="Put your description here.">.
  • Add descriptive text to link tags, e.g. <a href="videos.html">basketball videos</a>, to clearly communicate what the hyperlink points to.
  • Use a real href address so that search engines can track the actual URL. A counterexample: <a onclick="goto('https://example.com')">
  • Don't use meta tags to block search engine crawlers from indexing your pages. A counterexample: <meta name="robots" content="noindex"/>; specific crawlers can also be excluded individually: <meta name="AdsBot-Google" content="noindex"/>.
  • Use alt text with clear intent and meaning on image elements, and avoid non-specific referential words such as "chart", "image", "diagram".
  • Don't use plug-ins to display your content, i.e. avoid importing resources via embed, object, applet and similar tags.
  • Place robots.txt in the site's root directory; it describes which site content should not be retrieved by search engines and which may be. Background files (such as CSS and JS) and user-privacy files usually need not be crawled, and some unimportant files that spiders fetch frequently can also be shielded with robots.txt.

An example:

User-agent: *
Allow: /
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /wp-content/plugins/
Disallow: /?r=*
  • Submit sitemap.xml to search engines. The XML text is designed for computers to read, and sitemap.xml is a specification that search engines use to allow web owners to create a directory of all the pages on their site for search engine crawlers to read. It’s like a map that lets search engines know what pages are on the site.


      
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  
  <url>
    <loc>https://nojsja.github.io/blogs/2019/10/26/63567fa4.html/</loc>
    
    <lastmod>2021-04-29T10:02:04.853Z</lastmod>
    
  </url>
  
  <url>
    <loc>https://nojsja.github.io/blogs/2020/03/26/7507699.html/</loc>
    
    <lastmod>2021-04-29T10:01:30.661Z</lastmod>
    
  </url>

</urlset>


Performance testing tool Lighthouse


The performance metrics mentioned above can be analyzed and reported automatically with Lighthouse. Newer versions of Chrome, and the Chromium-based Edge browser, already ship with it built in; on older Chrome versions it can be installed as an extension from the store. Once available, press F12 to open the console and switch to the Lighthouse tab to use it directly:

As shown in the figure, you can choose which audit categories to run and the test platform (desktop/mobile). Finally, click Generate report to start the automated test; a page of analysis results opens once the test completes:

Result interface:

Performance, Accessibility, Best Practices, and SEO in the figure above correspond to the overall website Performance, website Accessibility, website Best Practices, search engine SEO optimization and other indicators mentioned above. We can click on each specific project to see the optimization recommendations given by the testing tool:

The test results depend heavily on the loading speed of your static resource server. For example, for users in China, hosting static resources on GitHub Pages without a proxy is slightly slower than the domestic Gitee Pages service, and some resources may fail to load, so domestic users can substitute Gitee Pages for GitHub Pages. Note, however, that non-paying Gitee users do not get automatic build and deployment; they need the GitHub Actions workflow mentioned below to log in automatically and trigger build and deployment.

Note: extensions installed in the browser can affect test results, because they may issue requests and load extra scripts. In that case, install Lighthouse globally with the npm package manager (npm install -g lighthouse) and start the test from the command line: lighthouse https://nojsja.gitee.io/blogs --view --preset=desktop --output-path='/home/nojsja/Desktop/lighthouse.html'. This generates a test-result web page for the specified address, which can be opened directly for a performance check.

Website optimization based on hexo framework


I wrote an article “Front-end performance optimization techniques in detail (1)”, describing some aspects of front-end performance optimization in detail. This article will not list every point, but will only illustrate the practical application of optimization means in the blog, which is roughly divided into these aspects:

  1. Optimize resource load time
  2. Optimized interface performance
  3. Web best Practices
  4. Website SEO optimization

1. Optimize the load time of resources

Common load time optimization methods include the following:

  • Improve the parallel loading capability of web resources
  • Lazy loading of unnecessary external resources
  • Reduce fragmentation of external resource requests, small files do merge
  • Increase site bandwidth

➣ Asynchronously load script resources using defer/async

When HTML is parsed, an ordinarily declared <script> blocks parsing and rendering until it has been downloaded and executed. The defer and async attributes change this behavior:

  • Defer script: a <script> declared with the defer attribute does not block parsing and rendering of the HTML while it downloads, and starts executing after HTML rendering is complete and actionable (before the DOMContentLoaded event fires). Deferred scripts parse and execute in the order of their declarations, and the page fires DOMContentLoaded after they finish.
  • Async script: a <script> declared with the async attribute does not block HTML parsing and rendering while it downloads; each script's download and execution are completely independent. Execution starts as soon as the download completes, so execution order is not fixed and bears no relation to the DOMContentLoaded event.

My blog site has styles and JS scripts that depend on external Bootstrap resources, and those dependencies must be declared first, because once loading becomes asynchronous, the execution order of the other scripts that rely on them is no longer guaranteed.

Typically async/defer is used to optimize the loading of independent sub-component scripts, such as the script for navigation-bar jumps within a blog post, whose execution order is completely unconstrained by the other parts and can therefore be optimized independently with the async attribute. You do, however, need to ensure that the navigation-bar DOM element the script uses is declared before the script is included, so the element is already rendered when the script executes and no script error occurs.
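A sketch of this split (the file paths are placeholders, not the blog's actual ones): the Bootstrap dependency keeps its ordering guarantee with defer, while the self-contained navigation script can be fully async:

```html
<!-- defer: downloads in parallel with parsing, executes in declaration
     order before DOMContentLoaded, so dependent scripts can rely on it -->
<script defer src="/js/bootstrap.min.js"></script>
<script defer src="/js/main.js"></script>

<!-- async: independent widget script, runs whenever its download finishes -->
<script async src="/js/article-toc.js"></script>
```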

➣ Load external resources asynchronously using an async function

The async function below creates a link or script tag from the incoming URL and appends it to the document to load the external resource dynamically; if a callback function is supplied, it listens for the resource to load and invokes the callback once loading completes. Notably, unlike declaring the dependency directly with a script tag, this approach does not block the interface: scripts are downloaded and executed asynchronously.

Blogs are often used to dynamically load external libraries programmatically in a particular case and initialize the external libraries within a callback function. For example, my blog uses a music player component. When scrolling to the uninitialized DOM element containing this component in the visual area of the web page, async is used to request the JS script resources of the server. After loading, the music player is initialized in the callback function.

/**
 * async [asynchronous resource loading]
 * @author nojsja
 * @param  {String} u [resource address]
 * @param  {Function} c [callback function]
 * @param  {String} tag [type of resource to load: link | script]
 * @param  {Boolean|String} async [whether an async/defer attribute should be declared when loading script resources]
 */
function async(u, c, tag, async) {
    var head = document.head ||
      document.getElementsByTagName('head')[0] ||
      document.documentElement;
    var d = document, t = tag || 'script',
      o = d.createElement(t),
      s = d.getElementsByTagName(t)[0];
    // normalize: accept the strings 'async' / 'defer', otherwise coerce to boolean
    async = ['async', 'defer'].includes(async) ? async : !!async;
    
    switch(t) {
      case 'script':
        o.src = u;
        if (async) o[typeof async === 'string' ? async : 'async'] = true;
        break;
      case 'link':
        o.type = "text/css";
        o.href = u;
        o.rel = "stylesheet";
        break;
      default:
        o.src = u;
        break;
    }

    /* callback */

    if (c) {
      if (o.readyState) { // IE
        o.onreadystatechange = function (e) {
          if (o.readyState == "loaded"
            || o.readyState == "complete") {
            o.onreadystatechange = null;
            c(null, e);
          }
        };
      } else { // other browsers
        o.onload = function (e) {
          c(null, e);
        };
      }
    }

    s.parentNode.insertBefore(o, s); // insert next to an existing tag of the same type
  }

➣ Use the browser onfocus event to delay external resource loading

It is also an optimization of lazy loading to trigger the loading of external resources after some events are triggered by the interaction between the user and the interface and the conditions for loading external resources are met.

My blog, for example, has a search box in the right navigation area for searching blog posts. Search works by looking up a static XML file, a resource index pre-generated at build time. With many articles and much content this XML file becomes large, and downloading it on first page load would certainly waste bandwidth and a network request. Instead, the asynchronous XML download can be triggered when the user clicks into the search box to focus it. To avoid hurting the user experience, a loading indicator can be shown while the download is in progress so the user knows to wait.
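A sketch of the idea: wrap the index download in a once() helper so repeated focus events trigger at most one request (loadSearchIndex stands in for whatever function actually fetches the search XML; the element id is also an assumption):

```javascript
// Run fn at most once, caching and returning its first result thereafter.
function once(fn) {
  var called = false, result;
  return function () {
    if (!called) {
      called = true;
      result = fn.apply(this, arguments);
    }
    return result;
  };
}

// usage in the browser:
// document.getElementById('search-input')
//   .addEventListener('focus', once(loadSearchIndex));
```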

➣ use preload/prefetch/preconnect preload optimization

  • Preload is used to preload resources needed by the current page, such as images, CSS, JavaScript, and font files. It has a higher loading priority than prefetch, and note that it does not block the window's onload event. The blog uses it to preload fonts referenced in CSS: <link href="/blogs/fonts/fontawesome-webfont.woff2?v=4.3.0" rel="preload" as="font">; each resource type needs the appropriate as value, and cross-origin loads additionally declare a crossorigin attribute.
  • Prefetch is a low-priority resource hint: once the page has loaded, the browser starts downloading the prefetched resources and stores them in its cache. Prefetch comes in three kinds: link, DNS, and prerendering. A link prefetch example: <link rel="prefetch" href="/path/to/pic.png"> lets the browser fetch the resource and cache it. DNS prefetch lets the browser resolve DNS in the background while the user browses the page. Prerendering is very similar to prefetch, both optimizing future requests for the next page's resources; the difference is that prerender renders the entire page in the background, so it should be used with caution, as network bandwidth may be wasted.
  • Preconnect lets the browser perform several actions before an HTTP request is formally sent to the server, including DNS resolution, TLS negotiation, and the TCP handshake, which eliminates round-trip latency and saves the user time. For example, the blog pre-connects to the busuanzi visitor-statistics service: <link href="http://busuanzi.ibruce.info" rel="preconnect" crossorigin>.

➣ Use hexo plugins to compress code files and image files

Compression of static resources is also a way to save network bandwidth and improve the response speed of requests. Configuration is usually done in an engineered manner rather than manually compressing each image. My blog uses hexo’s Hexo-All-Minifier compression plugin to optimize blog access speed by compressing HTML, CSS, JS, and images.

Installation:

npm install hexo-all-minifier --save

Enable in the config.yml configuration file:

# ---- code and resource compression
html_minifier:
  enable: true
  exclude:

css_minifier:
  enable: true
  exclude: 
    - '*.min.css'

js_minifier:
  enable: true
  mangle: true
  compress: true
  exclude: 
    - '*.min.js'

image_minifier:
  enable: true
  interlaced: false
  multipass: false
  optimizationLevel: 2 # Compression level
  pngquant: false
  progressive: true # whether to enable progressive image compression

Resource compression costs performance and time, so consider not enabling these plug-ins in your development environment to speed up development environment startup. For example, you can specify a separate _config.dev.yml and turn off all of the above plugins. See package.json for script field declarations:

{
  ...
  "scripts": {
    "prestart": "hexo clean --config config.dev.yml; hexo generate --config config.dev.yml",
    "prestart:prod": "hexo clean; hexo generate",
    "predeploy": "hexo clean; hexo generate",
    "start": "hexo generate --config config.dev.yml; hexo server --config config.dev.yml",
    "start:prod": "hexo generate --config config.dev.yml; hexo server",
    "performance:prod": "lighthouse https://nojsja.gitee.io/blogs --view --preset=desktop --output-path='/home/nojsja/Desktop/lighthouse.html'",
    "performance": "lighthouse http://localhost:4000/blogs --view --preset=desktop --output-path='/home/nojsja/Desktop/lighthouse.html'",
    "deploy": "hexo deploy"
  }
}

➣ Write the hexo-img-lazyload plugin to add lazy image loading features

In order to learn Hexo's own plugin system while optimizing the blog, I wrote hexo-img-lazyload, an image lazy-loading plugin based on the IntersectionObserver API. It can be installed with the npm command: npm i hexo-img-lazyload.

Effect preview:

<img src="path/to/loading" data-src="path/to/xx.jpg">

function lazyProcessor(content, replacement) {
    // match <img ... src="..." ...> declarations and rewrite them for lazy loading
    return content.replace(/<img([^>]*?)src="(.*?)"([^>]*?)>/gi, function (str, p1, p2, p3) {
        // skip images that are already processed or explicitly opted out
        if (/data-loaded/gi.test(str)) {
            return str;
        }
        if (/no-lazy/gi.test(str)) {
            return str;
        }
        return `<img ${p1} src="${emptyStr}" lazyload data-loading="${replacement}" data-src="${p2}" ${p3}> `;
    });
}

Besides the replacement, we also need Hexo's code-injection capability to inject our own script into every built page.

Hexo code injection:

/* registry scroll listener */
hexo.extend.injector.register('body_end', function() {
  const script = `
    <script>
      ${observerStr}
    </script>`;

  return script;
}, injectionType)

The injected code watches whether images awaiting load enter the visible area, then loads them dynamically. It uses the IntersectionObserver API instead of the window.onscroll event, which performs better: the browser itself tracks element position changes and dispatches the results:


(function() {
  /* avoid garbage collection */
  window.hexoLoadingImages = window.hexoLoadingImages || {};

  function query(selector) {
    return document.querySelectorAll(selector);
  }
  
  /* registry listener */
  if (window.IntersectionObserver) {
  
    var observer = new IntersectionObserver(function (entries) {

      entries.forEach(function (entry) {
        // in view port
        if (entry.isIntersecting) {
          observer.unobserve(entry.target);

          // proxy image
          var img = new Image();
          var imgId = "_img_" + Math.random();
          window.hexoLoadingImages[imgId] = img;

          img.onload = function() {
            entry.target.src = entry.target.getAttribute('data-src');
            window.hexoLoadingImages[imgId] = null;
          };
          img.onerror = function() {
            window.hexoLoadingImages[imgId] = null;
          };

          entry.target.src = entry.target.getAttribute('data-loading');
          img.src = entry.target.getAttribute('data-src');
        }
      });
    });

    query('img[lazyload]').forEach(function (item) {
      observer.observe(item);
    });

  } else {
    /* fallback: load all images immediately */
    query('img[lazyload]').forEach(function (img) {
      img.src = img.getAttribute('data-src');
    });

  }
})();

➣ uses IntersectionObserver API to lazily load other resources

Because of its better performance, the IntersectionObserver API has become the main lazy-loading mechanism in my blog. I also use it to optimize loading of Valine, the blog's comment component. A comment section sits at the bottom of an article, so there is no need to load the component when the page itself loads. Instead, IntersectionObserver watches for the comment container entering the viewport; only then is an async script downloaded, with a callback that initializes the comment system.

APlayer, the music player at the bottom of each article, uses a similar loading strategy, so the optimization pays off across the board!
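The viewport-triggered initialization used for these components can be sketched as a small helper. The helper name and the init callback below are mine for illustration (Valine's actual bootstrap call differs), and the observer constructor is injected so the logic can be exercised outside a browser:

```javascript
// Sketch: run a heavy initializer (e.g. downloading and booting the
// comment component) only once its container enters the viewport.
// ObserverCtor is injected so this can also run outside a browser.
function initWhenVisible(ObserverCtor, target, init) {
  var observer = new ObserverCtor(function (entries) {
    entries.forEach(function (entry) {
      if (entry.isIntersecting) {
        observer.unobserve(entry.target); // fire at most once per element
        init(entry.target);
      }
    });
  });
  observer.observe(target);
  return observer;
}

// In the browser it would be called roughly as:
// initWhenVisible(window.IntersectionObserver,
//                 document.getElementById('vcomments'), loadValine);
```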

➣ Load external dependency scripts using CDN

CDN stands for Content Delivery Network. A CDN service caches static resources on high-performance edge nodes distributed across the country. When a user requests a resource, the CDN system directs the request to the nearest available service node in real time, based on network traffic, per-node load, and the distance and response time to the user. Content therefore travels faster and more reliably, improving the site's ability to respond to first requests.

Public libraries used in the blog, such as Bootstrap and jQuery, are referenced through external CDN addresses. On the one hand this reduces the bandwidth the host site itself consumes; on the other, the CDN accelerates the download of those resources.
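One practical concern with CDN references is availability. Below is a minimal sketch of an ordered-fallback loader, CDN first and local copy second. The load function is injected for testability; in a real page it would append a script element and invoke the callback from its onload/onerror handlers (all URLs here are illustrative):

```javascript
// Sketch: try script URLs in order (CDN first, local copy last).
// `load(url, cb)` is injected; in a browser it would create a <script>
// element and call cb(err) from the onload/onerror handlers.
function loadWithFallback(urls, load, done) {
  var i = 0;
  (function next(err) {
    if (!err && i > 0) return done(null, urls[i - 1]); // previous source succeeded
    if (i >= urls.length) return done(err || new Error('all sources failed'));
    load(urls[i++], next);
  })();
}

// Example call shape (URLs are illustrative):
// loadWithFallback(['https://cdn.example.com/jquery.min.js', '/js/jquery.min.js'],
//                  loadScript, onReady);
```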

➣ Use Aplayer instead of Iframe to load netease Cloud Music

Previous versions of the blog embedded a NetEase Cloud Music player at the bottom of the article page, which is actually an iframe like this:

<iframe frameborder="no" border="0" marginwidth="0" marginheight="0" width=330 height=86 src="//music.163.com/outchain/player?type=2&id=781246&auto=1&height=66"></iframe>

An iframe loads an entire page's worth of resources. Even though it can be lazy-loaded, iframes still have several drawbacks:

  • Iframe blocks the onLoad event on the main page
  • Iframe and the home page share the HTTP connection pool, and browsers have restrictions on connections to the same domain, so parallel page loading can be affected
  • Iframe is bad for web layout
  • Iframe is not mobile-friendly
  • Repeated reloads of iframe can cause memory leaks in some browsers
  • Data transfer in IFrame is complex
  • Iframe is bad for SEO

The new version replaces the iframe player with APlayer and hosts a playlist of favorite songs in another Gitee Pages repository as static files. The custom playlist is then loaded at the bottom of the blog like this:

var ap = new APlayer({
  container: document.getElementById('aplayer'),
  theme: '#e9e9e9',
  audio: [
    {
      name: 'Presence signal',
      artist: 'AcuticNotes',
      url: 'https://nojsja.gitee.io/static-resources/audio/life-signal.mp3',
      cover: 'https://nojsja.gitee.io/static-resources/audio/life-signal.jpg'
    },
    {
      name: 'Relic In full sensuality',
      artist: 'Okabe Keiichi',
      url: 'https://nojsja.gitee.io/static-resources/audio/%E6%96%9C%E5%85%89.mp3',
      cover: 'https://nojsja.gitee.io/static-resources/audio/%E6%96%9C%E5%85%89.jpg'
    }
  ]
});

Preview:

2. Optimize interface performance

➣ optimize page reflow and repaint

1) concept

Reflow and repaint are essential steps in how browsers render web pages. A reflow re-renders part of the DOM tree because an element's position or size in the layout has changed; a repaint is caused by changes to a node's style properties that do not affect spatial layout. In terms of performance, reflow is far more expensive and prone to cascading effects: in normal flow layout, once an element reflows, every element after it in the flow must also reflow and render again. Repaint, by contrast, is comparatively cheap.

2) How to effectively judge the backflow and redraw of the interface?

Chromium-based browsers ship with DevTools, but many front-end developers only use it for simple log debugging, network-request tracking, and style tweaking. It can also visualize reflow and repaint: press F12 to open DevTools, click the three-dot menu in the upper-right corner, then More Tools -> Rendering, and check the first two options, Paint Flashing (highlights repainted regions) and Layout Shift Regions (highlights reflowed regions). Now go back to the page and interact with it: regions that reflow flash blue and regions that repaint flash green. The highlights do not last long, so watch closely.

Repaint:

Reflow:

Besides visualizing reflow and repaint, DevTools has other useful features. The Coverage tool, for example, analyzes how much of each external CSS/JS file the page actually uses. For external resources with low usage, consider inlining the needed parts or hand-writing them instead, which improves the cost-effectiveness of every external resource you introduce.

➣ optimize scroll event listening using throttling and debouncing ideas

When a function fires in high-frequency bursts and its calls need to be controlled, people often ask when to throttle and when to debounce. A simple way to tell them apart: if you only care about the final result after a short burst of high-frequency triggers, use debouncing; if you need to cap how often the function runs while still guaranteeing it keeps firing at a steady interval throughout the burst, and you do not care specially about the last call, use throttling.

For example, an Echarts chart often needs to re-render with the same data after the window is resized, but listening to the resize event directly makes the render function fire many times in a short period. Using the debouncing idea, we wrap the pre-initialized render call in a debounced function, which guarantees that the actual Echarts re-render is triggered once, after the resize has finished.

Here is a simple implementation of the debounce and throttle functions:

  /**
   * fnDebounce [debounce function]
   * @author nojsja
   * @param  {Function} fn [raw function to wrap]
   * @param  {Number} timeout [delay time]
   * @return {Function} [higher-order function]
   */
  var fnDebounce = function(fn, timeout) {
    var time = null;

    return function() {
      if (!time) return (time = Date.now());
      if (Date.now() - time >= timeout) {
        time = null;
        return fn.apply(this, [].slice.call(arguments));
      } else {
        time = Date.now();
      }
    };
  };

  /**
   * fnThrottle [throttle function]
   * @author nojsja
   * @param  {Function} fn [raw function to wrap]
   * @param  {Number} timeout [delay time]
   * @return {Function} [higher-order function]
   */
  var fnThrottle = function(fn, timeout) {
    var time = null;

    return function() {
      if (!time) return (time = Date.now());
      if (Date.now() - time >= timeout) {
        time = null;
        return fn.apply(this, [].slice.call(arguments));
      }
    };
  };

In the blog, the table-of-contents bar to the right of an article switches automatically between fixed layout and normal flow layout according to the scroll position, so the navigation stays usable while reading instead of disappearing off the top:

  /* trigger a scroll detection at most once per 150ms */
  $(window).on('scroll', fnThrottle(function() {
    var rectbase = $$tocBar.getBoundingClientRect();
    if (rectbase.top <= 0) {
      $toc.css('left', left);
      (!$toc.hasClass('toc-fixed')) && $toc.addClass('toc-fixed');
      $toc.hasClass('toc-normal') && $toc.removeClass('toc-normal');
    } else {
      $toc.css('left', '');
      $toc.hasClass('toc-fixed') && $toc.removeClass('toc-fixed');
      (!$toc.hasClass('toc-normal')) && $toc.addClass('toc-normal');
      ($$toc.scrollTop > 0) && ($$toc.scrollTop = 0);
    }
  }, 150));

➣ IntersectionObserver API polyfill compatibility strategy

As mentioned earlier, the blog uses the IntersectionObserver API for all of its interface lazy loading, since it performs better and is more capable. In web development, however, we usually have to consider each API's compatibility, which can be checked on the Can I Use website. As the figure below shows, support for this API is acceptable, and many older desktop browsers already provide it:

To handle the remaining older browsers, one rather heavy-handed approach is to unconditionally load xxx-polyfill.js (where xxx is the API in question) to patch in the missing functionality. The downside is that browsers that already support the API still have to download the polyfill, wasting a request and bandwidth. Since most browsers support IntersectionObserver by now, I instead feature-detect and fetch the polyfill synchronously only when it is actually missing:

<!-- this script is placed somewhere near the top of the page -->
<script>
  if ('IntersectionObserver' in window &&
    'IntersectionObserverEntry' in window &&
    'intersectionRatio' in window.IntersectionObserverEntry.prototype) {

    if (!('isIntersecting' in window.IntersectionObserverEntry.prototype)) {
      Object.defineProperty(window.IntersectionObserverEntry.prototype,
        'isIntersecting', {
          get: function () {
            return this.intersectionRatio > 0;
          }
        });
    }
  } else {
    /* load the polyfill synchronously; sendXMLHttpRequest is a helper defined elsewhere */
    sendXMLHttpRequest({
      url: '/blogs/js/intersection-observer.js',
      async: false,
      method: 'get',
      callback: function(txt) {
        eval(txt);
      }
    });
  }
</script>

➣ uses IntersectionObserver to replace the native OnScroll event listener

IntersectionObserver is usually used for some intersection detection scenarios in the interface:

  • Image lazy loading – images are loaded only when they are scrolled into view
  • Content infinite scrolling – More data is loaded directly when the user scrolls near the bottom of the scroll container, without the user having to turn the page, giving the user the illusion that the web page can scroll indefinitely
  • Detect AD exposure – In order to calculate AD revenue, you need to know the exposure of AD elements
  • Perform tasks and play videos when the user sees an area

Take infinite scrolling as an example. The old-school intersection-detection approach listens to the container's scroll event and, inside the listener, reads the scrolled element's geometry to decide whether it has reached the bottom. As we know, reading or writing properties such as scrollTop forces a reflow, and if the page binds several scroll listeners that all do similar work, performance degrades badly:

  /* scroll listener */
  onScroll = () => {
    const {
      scrollTop, scrollHeight, clientHeight
    } = document.querySelector('#target');

    /* scrolled to the bottom */
    // scrollTop (distance scrolled); clientHeight (visible height of the container); scrollHeight (total content height of the container)
    if (scrollTop + clientHeight === scrollHeight) {
      /* do something ... */
    }
  };
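The bottom-detection arithmetic itself can be factored into a pure function, which also makes it easy to add a pixel tolerance (real scrollTop values may be fractional). This helper is my own illustration, not code from the blog:

```javascript
// Pure bottom-detection: true when the visible window has reached the end
// of the scrollable content, within `threshold` pixels of tolerance.
function isAtBottom(metrics, threshold) {
  threshold = threshold || 0;
  return metrics.scrollTop + metrics.clientHeight >= metrics.scrollHeight - threshold;
}
```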

Here is a simple image lazy-loading function to introduce its usage; for details, see the blog post "Front-end Performance Optimization Techniques in Detail (1)".

(function lazyload() {

  var imagesToLoad = document.querySelectorAll('img[data-src]');

  function loadImage(image) {
    image.src = image.getAttribute('data-src');
    image.addEventListener('load', function() {
      image.removeAttribute('data-src');
    });
  }

  var intersectionObserver = new IntersectionObserver(function(items, observer) {
    items.forEach(function(item) {
      /* All entry attributes:
         item.boundingClientRect - geometry of the target element
         item.intersectionRatio  - intersectionRect / boundingClientRect
         item.intersectionRect   - the intersection area of root and target
         item.isIntersecting     - true when the intersection begins
         item.rootBounds         - geometry of the root element
         item.target             - the target element
         item.time               - time from origin (when the page finished loading) to when the intersection was triggered */
      if (item.isIntersecting) {
        loadImage(item.target);
        observer.unobserve(item.target);
      }
    });
  });

  imagesToLoad.forEach(function(image) {
    intersectionObserver.observe(image);
  });
})();

3. Website best practices

➣ uses the hexo-Abbrlink plug-in to generate article links

The default permalink generated by the Hexo framework is :year/:month/:day/:title. When a post title is in Chinese, the generated URL contains Chinese characters as well; such paths are not SEO-friendly, and copied links end up percent-encoded, making them unreadable and unwieldy.

Install it with npm install hexo-abbrlink --save, then add configuration in _config.yml:

permalink: :year/:month/:day/:abbrlink.html/
permalink_defaults:
abbrlink:
  alg: crc32  # crc16 (default) or crc32
  rep: hex    # dec (default) or hex

After regeneration, a post link looks like https://post.zz173.com/posts/8ddf18fb.html/; even if the post title is updated later, the link does not change.
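To make the slug generation concrete, here is a standalone CRC32 sketch showing how a short, stable hex slug can be derived from a title. This is the standard algorithm (polynomial 0xEDB88320) for ASCII input, not necessarily the plugin's exact implementation; real implementations encode the title as UTF-8 bytes first:

```javascript
// Standard bitwise CRC32 (polynomial 0xEDB88320), returned in hex, similar
// to what the `alg: crc32` + `rep: hex` configuration produces.
// ASCII-only sketch: non-ASCII titles would need UTF-8 encoding first.
function crc32Hex(str) {
  var crc = 0xFFFFFFFF;
  for (var i = 0; i < str.length; i++) {
    crc ^= str.charCodeAt(i) & 0xFF;
    for (var j = 0; j < 8; j++) {
      crc = (crc >>> 1) ^ (0xEDB88320 & -(crc & 1));
    }
  }
  return ((crc ^ 0xFFFFFFFF) >>> 0).toString(16);
}
```

Because the slug depends only on the input string, regenerating the site never changes existing links as long as the title stays the same.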

➣ Use hexo-filter-nofollow to avoid security risks

The hexo-filter-nofollow plugin adds the attribute rel="noopener external nofollow noreferrer" to all external links.

A large number of outbound links on a site dilutes its ranking weight, which is bad for SEO.

  • nofollow: an attribute jointly proposed by Google, Yahoo, and Microsoft years ago; links carrying it pass on no ranking weight. It tells crawlers they need not follow the target page. To combat blog spam, Google recommends nofollow so that search-engine crawlers skip the target page and the current page's PageRank is not passed on to it. If the target page is submitted directly via a sitemap, however, crawlers will still index it: nofollow only expresses the current page's attitude toward the target page, not that of other pages.
  • noreferrer / noopener: when an <a> tag with target="_blank" links to another page, the new page runs in the same process as yours; if it executes expensive JavaScript, your page's performance may suffer. Worse, the new page can obtain the old page's window object through window.opener and perform arbitrary operations on it, a serious security risk. With noopener (plus noreferrer as a compatibility fallback), the newly opened page can no longer reach the old page's window object.
  • external: tells search engines that the link points to a different site, reducing the SEO-weight impact of the outbound link.
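The plugin's effect can be approximated with a small transform. This sketch only covers the simple case of external links that carry no rel attribute yet; the real plugin also merges existing rel values and its exclusion rules are configurable (siteHost here is an illustrative parameter):

```javascript
// Sketch: append rel="noopener external nofollow noreferrer" to external
// <a> tags that have no rel attribute yet. `siteHost` is illustrative.
function addNofollow(html, siteHost) {
  return html.replace(/<a\s+href="(https?:\/\/[^"]+)"/gi, function (tag, href) {
    if (href.indexOf('//' + siteHost) !== -1) return tag; // internal link: leave as-is
    return tag + ' rel="noopener external nofollow noreferrer"';
  });
}
```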

➣ use hexo-Deployer-git and Github Workflow for automated deployment

Once the static resource bundle is generated, it has to be committed to the corresponding GitHub Pages or Gitee Pages repository, which is very inefficient to do by hand when several repositories need deploying. The hexo-deployer-git plugin automates this: declare the target repository information as follows, or declare multiple deploy fields if you have multiple repositories:

# Deployment
deploy:
  type: git
  repository: https://github.com/nojsja/blogs
  branch: master
  ignore_hidden:
    public: false
  message: update

It is worth noting that the free tier of Gitee Pages does not support automatic deployment after a push. My workaround is to declare only a deploy repository pointing at the GitHub Pages repository, and then use the GitHub workflow service together with the gitee-pages-action script to automate the Gitee deployment. The workflow first synchronizes the GitHub repository to the Gitee repository, then reads the configured Gitee account credentials, logs in automatically, and calls the web page's manual-deployment interface, completing the whole deployment pipeline automatically.

4. Website SEO optimization

➣ automatically generates a sitemap using the hexo-generator-sitemap plug-in

What a site map is:

  • A sitemap describes the structure of a website. It can be a standalone document used as a planning tool during site design, or a web page that lists all the pages of a site, usually in hierarchical form. It helps both visitors and search-engine bots find pages on the site.
  • Some developers consider a site index a more appropriate way to organize pages, but a site index is usually an A-Z index that only provides access to specific content, whereas a sitemap offers a general top-down view of the entire site.
  • A sitemap makes every page discoverable by search engines, which strengthens search-engine optimization.
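A sitemap is plain XML, so generating a minimal one is straightforward. This sketch follows the sitemaps.org protocol structure; the URLs are examples:

```javascript
// Minimal sitemap.xml generator following the sitemaps.org protocol shape.
function buildSitemap(urls) {
  var entries = urls.map(function (u) {
    return '  <url>\n    <loc>' + u + '</loc>\n  </url>';
  }).join('\n');
  return '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    entries + '\n</urlset>';
}

var xml = buildSitemap([
  'https://nojsja.github.io/blogs/',
  'https://nojsja.github.io/blogs/posts/8ddf18fb.html'
]);
```

The hexo-generator-sitemap plugin produces the same kind of document automatically from the site's post list.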

Installation:

$ npm install hexo-generator-sitemap --save

Add related fields to configuration file _config.yml:

# sitemap
sitemap:
  path: sitemap.xml
# page url
url: https://nojsja.github.io/blogs 

After running hexo generate, sitemap.xml is produced automatically. You then need to register your site in Google Search Console and submit the generated sitemap file.

➣ Submit your site map to the Google Search Console

  • Log in to the Google Search Console
  • Add your own site information

  • Ownership can be verified in several ways

  • Submit the plugin-generated sitemap.xml

Google Search Console also gives you a handy view of your site’s hits, keyword index, and more.

References


  1. Lighthouse and Google’s mobile best practices
  2. Google web.dev

Conclusion


Studying the many facets of front-end performance optimization both tests our grasp of core fundamentals and supplies ideas for the practical problems we run into; it is a necessary step on every front-end developer's path to advancement.

That’s all for this article and will be updated as necessary…