
Contents


  • ## Preface

    • 1. What is Hexo?
    • 2. What is optimization?
  • ## Performance optimization metrics

    • 1. Overall operating performance
    • 2. Website accessibility
    • 3. Does the site apply best practice policies

      • > 1) <a> links with target="_blank" that do not declare rel="noopener noreferrer" pose a security risk.
      • > 2) Check the browser-side console for alarms and errors, locate the problem with instructions and resolve it.
      • 3) HTTP and HTTPS protocol addresses are not mixed
      • > 4) Avoid AppCache
      • > 5) Avoid using document.write()
      • > 6) Avoid using mutation events
      • > 7) Avoid using Web SQL
      • > 8) Avoid loading too large DOM trees
      • > 9) Allow the user to paste the password
    • 4. Website search engine optimization SEO
  • ## Performance test tool Lighthouse
  • ## Website optimization based on Hexo framework

    • 1. Optimize resource load times

      • ➣ Download script resources asynchronously using defer/async
      • ➣ Load external resources asynchronously using the async function
      • ➣ Use browser onfocus events to delay external resource loading
      • ➣ Use preload/prefetch/preconnect preload optimization
      • ➣ Compress code files and image files using the Hexo plug-in
      • ➣ Write a hexo-img-lazyload plugin to add lazy image loading
      • ➣ Lazily load additional resources using the IntersectionObserver API
      • ➣ Load external dependency scripts from a CDN
      • ➣ Load NetEase Cloud Music with APlayer instead of an iframe
    • 2. Optimize interface performance

      • ➣ Optimize reflow and repaint of pages
      • ➣ Optimize scrolling event listening with throttling and debouncing
      • ➣ Polyfill compatibility policy for the IntersectionObserver API
      • ➣ Replace native onscroll event listening with IntersectionObserver
    • 3. Site best practices

      • ➣ Use the hexo-abbrlink plug-in to generate article links
      • ➣ Use hexo-filter-nofollow to avoid security risks
      • ➣ Automated deployment using hexo-deployer-git and GitHub Workflow
    • 4. Website SEO optimization

      • ➣ Automatically generate sitemap using the hexo-generator-sitemap plug-in
      • ➣ Submit a personal sitemap to the Google Search Console
  • ## Reference
  • ## Epilogue

Preface


1. What is Hexo?

Hexo is a static site generator: it lets a website present its data without relying on a back end at request time. For comparison, I once built a blog using Node.js as the back end; implementing the server logic myself meant dealing with database storage, front-end interaction, back-end server deployment, and so on. The whole process is fairly complex, though in the early stages it is a reasonable way for a front-end developer to learn by building a personal website. Hexo simplifies this by moving both data storage and data retrieval into the compile step, baking everything into front-end static resources. For example, a blog usually needs pagination to fetch posts: in the traditional development model, fetching the next page is initiated by a front-end script and handled by a back-end server that processes the request and returns the data. With Hexo, this whole process happens locally at build time, because Hexo builds local indexes for all static resources.

With Hexo, a writer is usually only concerned with writing articles in Markdown format. The rest of the process of compiling, building, and publishing the website is handled by the framework, and all the content is packaged into static HTML/CSS/JS files. Hexo supports custom plugins and has a community of plugins, and writers who also have front-end capabilities can publish their plugins to the community for open source sharing.

2. What is optimization?

When we talk about performance, we might focus on how fast the website is running, including how fast static resources and scripts are fetched, and whether the website’s UI is running smoothly. In fact, optimization in a broad sense should include: website performance optimization, website accessibility optimization, website SEO optimization, website best practices, etc.

Performance optimization metrics


1. Overall operating performance

  • FCP (First Contentful Paint): The time elapsed between the user initiating the site request and the browser first rendering site content. The content counted for first render includes text, images, and the HTML DOM structure, but excludes content inside iframes. This metric is often used to measure how quickly communication between the client and the server is first established.
  • TTI (Time To Interactive): The time it takes, from the user navigating to the site, for the page to become fully interactive. Interactivity means that the site displays genuinely usable content, that events for visible interface elements have been successfully bound (click, drag, etc.), and that the page responds to user interaction within 50 ms.
  • SI (Speed Index): A measure of the speed at which content is visually populated during page load — in effect, how quickly the site's interface elements are drawn and rendered. The Lighthouse measurement tool captures multiple frames of the page in the browser while it loads, then computes the visual rendering progress between those frames.
  • TBT (Total Blocking Time): A measure of the time between first render (FCP) and actual interactivity (TTI). Usually, after a site has visually appeared, there is a short period during which we cannot interact with the interface via mouse clicks, keyboard keys, and so on; during this period the browser is loading and executing scripts and styles.
  • LCP (Largest Contentful Paint): Measures the time it takes for the largest content element in the viewport to be drawn to the screen. This typically includes the entire process of downloading, parsing, and rendering the element.
  • CLS (Cumulative Layout Shift): A numeric indicator that measures overall layout jitter while a site loads. If a site's user interface shakes and flickers several times during loading, it causes mild discomfort for users, so try to minimize the number of layout shifts and repaints on the site.
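Most of these metrics can also be read programmatically from within the page via the standard PerformanceObserver API. A minimal browser-only sketch (what you do with the entries — here just logging — is illustrative):

```html
<script>
  // Log Largest Contentful Paint candidates as they are painted.
  new PerformanceObserver(function (list) {
    list.getEntries().forEach(function (entry) {
      console.log('LCP candidate at', entry.startTime, 'ms');
    });
  }).observe({ type: 'largest-contentful-paint', buffered: true });

  // Accumulate layout-shift scores into a CLS value.
  var cls = 0;
  new PerformanceObserver(function (list) {
    list.getEntries().forEach(function (entry) {
      // Shifts right after user input are excluded from CLS by definition.
      if (!entry.hadRecentInput) cls += entry.value;
    });
  }).observe({ type: 'layout-shift', buffered: true });
</script>
```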

2. Website accessibility

  • The contrast between the background color of the page and the text in the foreground of the site should not be too low, otherwise it will affect the user’s reading.
  • The link tag <a> should contain text describing the link, e.g.: <a href="https://github.com/nojsja">nojsja's GitHub profile</a>.
  • The html element should declare the lang attribute to specify the current locale.
  • Correct HTML semantic tags let keyboards and screen readers work properly. In general, the structure of a web page can be described with semantic tags as:

    <html lang="en">
      <head>
        <title>Document title</title>
        <meta charset="utf-8">
      </head>
      <body>
        <a class="skip-link" href="#maincontent">Skip to main</a>
        <h1>Page title</h1>
        <nav>
          <ul>
            <li><a href="https://google.com">Nav link</a></li>
          </ul>
        </nav>
        <header>header</header>
        <main id="maincontent">
          <section>
            <h2>Section heading</h2>
            <p>text info</p>
            <h3>Sub-section heading</h3>
            <p>text info</p>
          </section>
        </main>
        <footer>footer</footer>
      </body>
    </html>
  • Give interface elements unique id attributes.
  • The img tag should declare the alt attribute. It specifies alternative text shown in place of the image when the image fails to display or the user has disabled images.
  • Declare a label tag inside form elements so that screen readers work correctly.
  • The iframe element should declare a title attribute describing its content so that screen readers can announce it.
  • For the use of ARIA accessibility attributes and tags, refer to >> ARIA Reference
  • input[type=image] and object tags should add an alt attribute declaration:

    <input type="image" alt="Sign in" src="./sign-in-button.png">
    <object alt="report.pdf" type="application/pdf" data="/report.pdf">
    Annual report.
    </object>
  • Elements that need TAB-focus behavior can declare the tabindex attribute; focus moves between them when the TAB key is pressed. Elements with a tabindex of 0, an invalid value, or no tabindex should come after elements with a positive tabindex in the keyboard navigation order:

    1) tabindex = negative (usually tabindex="-1"): the element is focusable but cannot be reached via keyboard navigation. This is very useful when implementing keyboard navigation inside page widgets with JS.
    2) tabindex="0": the element is focusable and can be reached via keyboard navigation; its relative order is determined by its position in the DOM.
    3) tabindex = positive: the element is focusable and can be reached via keyboard navigation; elements are focused in ascending order of their tabindex values. If multiple elements share the same tabindex, their relative order follows their order in the current DOM.
  • Correct use of th and scope in the table element associates row and column headers with their data fields:

    <table>
    <caption>My marathon training log</caption>
    <thead>
      <tr>
        <th scope="col">Week</th>
        <th scope="col">Total miles</th>
        <th scope="col">Longest run</th>
      </tr>
    </thead>
    <tbody>
      <tr>
        <th scope="row">1</th>
        <td>14</td>
        <td>5</td>
      </tr>
      <tr>
        <th scope="row">2</th>
        <td>16</td>
        <td>6</td>
      </tr>
    </tbody>
    </table>
  • The video element should specify track subtitle resources for hearing-impaired users (a subtitle resource file is required):

    <video width="300" height="200">
      <source src="marathonFinishLine.mp4" type="video/mp4">
      <track src="captions_en.vtt" kind="captions" srclang="en" label="english_captions">
      <track src="audio_desc_en.vtt" kind="descriptions" srclang="en" label="english_description">
      <track src="captions_es.vtt" kind="captions" srclang="es" label="spanish_captions">
      <track src="audio_desc_es.vtt" kind="descriptions" srclang="es" label="spanish_description">
    </video>
  • li list tags should be placed inside a ul or ol container element.
  • Heading tags should be declared in strictly ascending order, with sections or other elements (e.g. p tags) used to correctly reflect the structure of the interface content:

    <h1>Page title</h1>
    <section>
      <h2>Section heading</h2>
      ...
      <h3>Sub-section heading</h3>
    </section>
  • Use <meta charset="UTF-8"> to specify the character-set encoding of the site.
  • The image resource referenced by the img element should have the same aspect ratio as the one currently used by img, otherwise the image may be distorted.
  • Add <!DOCTYPE html> to prevent abnormal browser rendering.
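As an illustration of the tabindex rules listed above (the ids and labels are arbitrary):

```html
<!-- TAB order: #first (tabindex="1"), then #second (tabindex="2"),
     then #third (tabindex="0", DOM order). Positive values come before
     tabindex="0" regardless of DOM position; #hidden-target (tabindex="-1")
     is reachable only programmatically via element.focus(). -->
<button tabindex="1" id="first">First</button>
<button tabindex="0" id="third">Third</button>
<button tabindex="2" id="second">Second</button>
<div tabindex="-1" id="hidden-target">Reached only from JS</div>
```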

3. Does the site apply best practice policies

> 1) <a> links with target="_blank" that do not declare rel="noopener noreferrer" pose a security risk.

When a page links to another page using target="_blank", the new page runs in the same process as the old one. If the new page executes expensive JavaScript, the performance of the old page may suffer. The new page can also access the old page's window object through window.opener; for example, it can use window.opener.location = url to navigate the old page to a different URL.
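The fix is simply to declare the rel attribute on such links:

```html
<!-- Without rel="noopener noreferrer", the newly opened page could reach
     back into this page via window.opener. -->
<a href="https://example.com" target="_blank" rel="noopener noreferrer">
  External link
</a>
```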

> 2) Check the browser console for warnings and errors, locate the problem using the messages, and resolve it.

3) HTTP and HTTPS protocol addresses are not mixed

Browsers have gradually begun to block mixed content, i.e. resources loaded over a protocol different from the page's own. For example, when a page served over HTTPS loads a resource over plain HTTP, the following may happen:

  • A warning appears when the mixed content is loaded.
  • The mixed content is not loaded, and blank content is displayed instead.
  • Before the mixed content is loaded, a prompt appears asking whether to “show” or “block” it because of the security risk.

Optimization should be considered in the following ways:

  • Host the affected external resources on your own server instead, alongside your other resources;
  • For sites with HTTPS deployed, declare <meta http-equiv="Content-Security-Policy" content="upgrade-insecure-requests"> on the page to upgrade HTTP requests to HTTPS;
  • For sites with HTTPS deployed, adding the response header on the server side — header("Content-Security-Policy: upgrade-insecure-requests") — achieves the same thing;
  • For sites that support both HTTP and HTTPS access, consider using a protocol-relative URL rather than specifying the protocol in clear text: <script src="//path/to/js"></script>; when the browser sends the request, it switches dynamically according to the current page protocol.
  • Similar to protocol-relative URLs, relative URLs serve the purpose but increase the coupling between resources: <script src="./path/to/js"></script>

> 4) Avoid AppCache

AppCache has been deprecated in favor of using the service worker’s Cache API.

> 5) Avoid using document.write()

For users on slow networks (2G, 3G, or slow Wi-Fi), injecting external scripts dynamically via document.write() can delay the display of page content by tens of seconds.

> 6) Avoid using mutation events

The following mutation events impair performance and have been deprecated in the DOM event specification:

  • DOMAttrModified
  • DOMAttributeNameChanged
  • DOMCharacterDataModified
  • DOMElementNameChanged
  • DOMNodeInserted
  • DOMNodeInsertedIntoDocument
  • DOMNodeRemoved
  • DOMNodeRemovedFromDocument
  • DOMSubtreeModified

It is recommended to use MutationObserver instead.
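A minimal replacement sketch (the observed target and options here are illustrative):

```html
<script>
  // Instead of deprecated events like DOMNodeInserted / DOMSubtreeModified,
  // observe DOM changes in batches with MutationObserver:
  var observer = new MutationObserver(function (mutations) {
    mutations.forEach(function (m) {
      console.log(m.type + ': ' + m.addedNodes.length + ' node(s) added');
    });
  });
  observer.observe(document.body, { childList: true, subtree: true });
</script>
```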

> 7) Avoid using Web SQL

IndexedDB is recommended instead.

> 8) Avoid loading too large DOM trees

Large DOM trees can degrade page performance in a number of ways:

  • Network efficiency and load performance. If your server sends a large DOM tree, you may be shipping many unnecessary bytes. It can also slow down page load, since the browser may parse many nodes that are never displayed on screen.
  • Runtime performance. As users and scripts interact with the page, the browser must constantly recompute the position and style of nodes. A large DOM tree combined with complex style rules can seriously slow down rendering.
  • Memory performance. If you use a general query selector such as document.querySelectorAll('li'), you may inadvertently store references to a very large number of nodes, which can overwhelm the memory of the user's device.

An optimal DOM tree:

  • Fewer than 1500 nodes in total.
  • The maximum depth is 32 nodes.
  • There is no parent node with more than 60 children.
  • In general, create a DOM node only when you need it, and destroy it when you no longer need it.

> 9) Allow the user to paste the password

Allowing password paste improves security because it lets users use password managers. Password managers typically generate strong passwords, store them securely, and automatically paste them into the password field when the user needs to log in. Remove any code that prevents users from pasting into password fields. Setting a breakpoint on the Clipboard > paste event breakpoint in DevTools quickly locates code that blocks pasting. For example, the following code prevents pasting passwords:

var input = document.querySelector('input');
input.addEventListener('paste', (e) => {
  e.preventDefault();
});

4. Website search engine optimization SEO

  • Add a viewport declaration <meta name="viewport"> and specify width and device-width to optimize the mobile display.
  • Add a title element to the document so screen readers and search engines can correctly identify the site content.
  • Add a meta description tag describing the content of the site: <meta name="description" content="Put your description here.">.
  • Add descriptive text to link labels, e.g. <a href="videos.html">basketball videos</a>, to clearly communicate what the hyperlink points to.
  • Use real href link addresses so that search engines can track the actual URL. Here is a counter-example: <a onclick="goto('https://example.com')">
  • Don't use meta tags to block all search-engine crawlers from your page — a counter-example: <meta name="robots" content="noindex"/>. You can instead exclude specific crawlers: <meta name="AdsBot-Google" content="noindex"/>.
  • Use alt text with clear intent and meaning to describe image elements. Avoid non-specific words such as "chart", "image", "diagram".
  • Do not use plug-ins to display your content, i.e. avoid introducing resources with embed, object, applet and similar tags.
  • Place robots.txt correctly in the root directory of the site. It describes which content on the site should and should not be fetched by search engines. Usually back-end files (such as CSS and JS) and user-privacy files should not be crawled; unimportant files that spiders crawl frequently can also be blocked with robots.txt. An example:

    User-agent: *
    Allow: /
    Disallow: /wp-admin/
    Disallow: /wp-includes/
    Disallow: /wp-content/plugins/
    Disallow: /?r=*
  • Submit sitemap.xml to search engines. This XML-format text is written specifically for machines to read. The sitemap.xml specification lets site owners create a directory of all the pages on their site for search engines to read — like a map that tells search engines what pages exist on the site.

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url>
        <loc>https://nojsja.github.io/blogs/2019/10/26/63567fa4.html/</loc>
        <lastmod>2021-04-29T10:02:04.853Z</lastmod>
      </url>
      <url>
        <loc>https://nojsja.github.io/blogs/2020/03/26/7507699.html/</loc>
        <lastmod>2021-04-29T10:01:30.661Z</lastmod>
      </url>
    </urlset>

Performance testing tool Lighthouse


The performance metrics mentioned above can be analyzed and reported automatically by the Lighthouse performance tool, which is built into newer versions of Chrome and the Chromium-based Edge browser. Older versions of Chrome can install it as an extension from the store. After installation, press F12 to open DevTools and switch to the Lighthouse tab to use it directly:

As shown in the figure, you can choose the test items and the test platform (desktop/mobile), then click “Generate Report” to start the automated test. After the test completes, an analysis results page opens:

Result interface:

The Performance, Accessibility, Best Practices, and SEO scores in the figure above correspond in turn to the overall performance, website accessibility, website best practices, and search-engine SEO metrics we covered earlier. We can click each specific item to see the optimization recommendations given by the testing tool:

The results will largely depend on the loading speed of your HTTP static-resource server. For example, without a proxy, hosting static resources on GitHub Pages is, for users in China, slightly slower than the domestic Gitee Pages service, and some resources may fail to load. Users in China can therefore use Gitee Pages in place of GitHub Pages. However, non-paying Gitee users do not get automatic code build and deployment; they need the GitHub Actions approach mentioned below to automate login and trigger build and deployment.

Note: plugins installed in some browsers may affect the test results, as plugins may request and load extra scripts. In that case, you can install Lighthouse globally with npm install -g lighthouse and start the test from the command line: lighthouse https://nojsja.gitee.io/blogs --view --preset=desktop --output-path='/home/nojsja/Desktop/lighthouse.html'. The result is a browser-friendly web file, lighthouse.html, at the specified path that can be opened directly for performance checking.

Website optimization based on the Hexo framework


The article I wrote before, “Detailed Exposition of Frontend Performance Optimization Skills (1)”, describes some aspects of front-end performance optimization in detail. This article will not list every point, but will only explain the optimization means applied in the blog, which can be roughly divided into these aspects:

  1. Optimize resource load times
  2. Optimize interface performance
  3. Web Best Practices
  4. Website SEO Optimization

1. Optimize resource load times

Common ways to optimize load times include the following:

  • Improve the parallel loading ability of web resources
  • Lazy loading of unnecessary external resources
  • Reduce fragmentation of external resource requests by merging small files
  • Increase website bandwidth

➣ Download script resources asynchronously using defer/async


  • Defer script: an external <script> declared with the defer attribute does not block HTML parsing and rendering while it downloads, and executes once HTML parsing is complete (before the DOMContentLoaded event fires). Deferred scripts are parsed and executed in the order they are declared, and the page fires the DOMContentLoaded event when they finish.
  • Async script: an external <script> declared with the async attribute also downloads without blocking HTML parsing and rendering, but each script executes independently, immediately after its own download finishes, so the execution order is not fixed and has no relationship to the DOMContentLoaded event.

My blog site uses external Bootstrap styles and JS scripts, and I need to make sure these are declared and executed first, because the execution order of other scripts depends on these libraries.

Usually async/defer is used to optimize the loading of separate subcomponent-dependent scripts, such as the one used for a navigation bar jump in a blog post. It is executed in an order that is completely independent of the rest of the script and can therefore be tuned separately using the async property. However, you need to ensure that the navigation bar DOM element on which the script is based is declared before the script is introduced to prevent a script error from occurring when the DOM element is not rendered at the time the script is executed.
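For example (the script file names are illustrative):

```html
<!-- Library scripts: defer preserves declaration order, so jQuery runs
     before the Bootstrap script that depends on it. -->
<script defer src="/js/jquery.min.js"></script>
<script defer src="/js/bootstrap.min.js"></script>

<!-- Independent widget script (e.g. the navigation-bar jump): async,
     because its execution order relative to other scripts does not matter. -->
<script async src="/js/nav.js"></script>
```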

➣ Load external resources asynchronously using the async function

The following async function creates a link/script tag from the URL passed in and appends it to the document to dynamically load the external resource. If a callback function is provided, it monitors the resource loading and executes the callback after the resource has loaded. Note that this approach differs from declaring dependent resources directly with a script tag: it does not block the interface, and the script is downloaded and executed asynchronously.

It is often used in blogs to dynamically load external dependent libraries programmatically in a special case and initialize the external library in the callback function. For example, my blog uses a music player component. When I scroll to the uninitialized DOM element containing this component in the visual area of the web page, I use async to request the server’s JS script resource. After loading, I initialize the music player in the callback function.

/**
 * async [load external resources asynchronously]
 * @author nojsja
 * @param {String} u [resource address]
 * @param {Function} c [callback function]
 * @param {String} tag [type of resource to load: link | script]
 * @param {String|Boolean} async [whether the script resource should declare the async/defer attribute]
 */
function async(u, c, tag, async) {
  var head = document.head || document.getElementsByTagName('head')[0] || document.documentElement;
  var d = document,
      t = tag || 'script',
      o = d.createElement(t),
      s = d.getElementsByTagName(t)[0];

  async = ['async', 'defer'].includes(async) ? async : !!async;

  switch (t) {
    case 'script':
      o.src = u;
      // set either the async or the defer attribute when requested
      if (async) o[async === true ? 'async' : async] = true;
      break;
    case 'link':
      o.type = "text/css";
      o.href = u;
      o.rel = "stylesheet";
      break;
    default:
      o.src = u;
      break;
  }

  /* callback */
  if (c) {
    if (o.readyState) { // IE
      o.onreadystatechange = function (e) {
        if (o.readyState == "loaded" || o.readyState == "complete") {
          o.onreadystatechange = null;
          c(null, e);
        }
      };
    } else { // others
      o.onload = function (e) { c(null, e); };
    }
  }
  s.parentNode.insertBefore(o, head.firstChild);
}

➣ Use browser onfocus events to delay external resource loading

After some events are triggered by the interaction between the user and the interface, the conditions of external resource loading are met, and then the external resource loading is triggered, which is also an optimization of lazy loading.

For example, the right navigation bar of my blog has a search box for finding blog posts. The search works by looking up a locally generated static XML resource index. This XML file can become quite large; downloading it when the page first loads would consume bandwidth and an extra network request. So consider triggering an asynchronous download of the XML when the user clicks into (focuses) the search box, and show a loading indicator during the download so the slight delay does not hurt the experience.
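A minimal sketch of the idea — the element id, index path, and the showLoadingIndicator/hideLoadingIndicator helpers are illustrative, not the blog's actual code:

```html
<input id="search-box" type="text" placeholder="Search posts...">
<script>
  var indexLoaded = false;
  document.getElementById('search-box').addEventListener('focus', function () {
    if (indexLoaded) return;   // download the index only once
    indexLoaded = true;
    showLoadingIndicator();    // assumed helper: show a spinner near the box
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/blogs/search.xml');
    xhr.onload = function () {
      hideLoadingIndicator();  // assumed helper
      // parse xhr.responseText and build the local search index here
    };
    xhr.send();
  });
</script>
```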

➣ Use preload/prefetch/preconnect preload optimization

  • Preload is used to preload resources needed by the current page, such as images, CSS, JavaScript, and font files. It has a higher loading priority than prefetch, and note that preload does not block the window's onload event. The blog uses it to preload fonts referenced in CSS: <link href="/blogs/fonts/fontawesome-webfont.woff2?v=4.3.0" rel="preload" as="font">. Different resource types need different as values, and cross-origin loads need a crossorigin attribute declaration.
  • Prefetch is a low-priority resource hint. Once a page has loaded, the browser starts downloading prefetched resources and stores them in its cache. Prefetch covers three types of preloaded requests: link, DNS, and prerendering. Link prefetch, e.g. <link rel="prefetch" href="/path/to/pic.png">, lets the browser fetch a resource and store it in the cache. DNS prefetch lets the browser resolve domain names in the background while the user browses the page. Prerendering is similar in that it also optimizes for a future navigation, but it renders the entire next page in the background — use it with care, as it may waste network bandwidth.
  • Preconnect lets the browser perform several actions before an HTTP request is formally sent to the server, including DNS resolution, TLS negotiation, and the TCP handshake, which eliminates round-trip latency and saves the user time. For example, the blog pre-connects to the page-view statistics service: <link href="http://busuanzi.ibruce.info" rel="preconnect" crossorigin>.
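Collected in one place (the paths and hostnames are illustrative):

```html
<!-- preload: high priority, for resources the current page will certainly use -->
<link rel="preload" as="font" crossorigin href="/fonts/webfont.woff2">

<!-- prefetch: low priority, for resources a future page may use -->
<link rel="prefetch" href="/img/next-page-banner.png">

<!-- dns-prefetch / preconnect: warm up connections to third-party origins -->
<link rel="dns-prefetch" href="//cdn.example.com">
<link rel="preconnect" href="https://cdn.example.com" crossorigin>
```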

➣ Compress code files and image files using the Hexo plug-in

Compression of static resources is also a way to save network bandwidth and improve the speed of request response. It is usually configured in an engineering way rather than manually compressing each image. My blog uses a compression plugin from Hexo, Hexo-All-Minifier, which compresses HTML, CSS, JS, and images to optimize the speed of your blog.

Installation:

npm install hexo-all-minifier --save

Enable it in the _config.yml configuration file:

# ---- compression (hexo-all-minifier) ----
html_minifier:
  enable: true
  exclude:
    - '*.min.css'
js_minifier:
  enable: true
  mangle: true
  compress: true
  exclude:
    - '*.min.js'
image_minifier:
  enable: true
  interlaced: false
  multipass: false
  optimizationLevel: 2
  pngquant: false
  progressive: true  # whether to enable progressive image compression

Resource compression costs performance and time, so consider not enabling these plug-ins in the development environment to speed up the development environment startup. For example, specify a single _config.dev.yml and turn off all of the above plugins, refer to the script field declaration in package.json:

{
...
"scripts": {
    "prestart": "hexo clean --config config.dev.yml; hexo generate --config config.dev.yml",
    "prestart:prod": "hexo clean; hexo generate",
    "predeploy": "hexo clean; hexo generate",
    "start": "hexo generate --config config.dev.yml; hexo server --config config.dev.yml",
    "start:prod": "hexo generate --config config.dev.yml; hexo server",
    "performance:prod": "lighthouse https://nojsja.gitee.io/blogs --view --preset=desktop --output-path='/home/nojsja/Desktop/lighthouse.html'",
    "performance": "lighthouse http://localhost:4000/blogs --view --preset=desktop --output-path='/home/nojsja/Desktop/lighthouse.html'",
    "deploy": "hexo deploy"
  }
}

➣ Write a hexo-img-lazyload plugin to add lazy image loading feature

To learn about Hexo's plugin system, I used the IntersectionObserver API to write an image lazy-loading plugin: hexo-img-lazyload, which can be installed with the npm command: npm i hexo-img-lazyload.

Effect preview:

The plugin's main principle is to hook into the blog build process: in the generated code strings, native image declarations such as <img src="path/to/xx.jpg"> are matched with a regular expression and replaced with something like: <img src="" lazyload data-loading="path/to/loading.gif" data-src="path/to/xx.jpg">.

function lazyProcessor(content, replacement) {
  return content.replace(/<img(.*?)src="(.*?)"(.*?)>/gi, function (str, p1, p2, p3) {
    if (/data-loaded/gi.test(str)) {
      return str;
    }
    if (/no-lazy/gi.test(str)) {
      return str;
    }
    return `<img ${p1} src="${emptyStr}" lazyload data-loading="${replacement}" data-src="${p2}" ${p3}>`;
  });
}
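A self-contained sketch of the same replacement step, runnable outside the plugin (the placeholder path and the emptyStr value — a 1×1 transparent GIF — are illustrative; the real plugin takes them from its options):

```javascript
// Stand-alone version of the plugin's replacement step.
const emptyStr =
  'data:image/gif;base64,R0lGODlhAQABAAAAACH5BAEKAAEALAAAAAABAAEAAAICTAEAOw==';

function lazyProcessor(content, replacement) {
  return content.replace(/<img(.*?)src="(.*?)"(.*?)>/gi, (str, p1, p2, p3) => {
    // skip images already processed or explicitly opted out
    if (/data-loaded/gi.test(str) || /no-lazy/gi.test(str)) return str;
    return `<img ${p1} src="${emptyStr}" lazyload data-loading="${replacement}" data-src="${p2}" ${p3}>`;
  });
}

const html = '<p>hi</p><img class="pic" src="path/to/xx.jpg" alt="x">';
const out = lazyProcessor(html, 'path/to/loading.gif');
console.log(out);
```

Running this moves the real URL into data-src and the placeholder into data-loading, leaving images marked no-lazy untouched.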

After the replacement, we need to use Hexo’s code injection capability to inject our own code into each built interface.

Hexo code injection:

/* registry scroll listener */
hexo.extend.injector.register('body_end', function() {
  const script = `
    <script>
      ${observerStr}
    </script>`;

  return script;
}, injectionType)

Part of the injected code monitors whether image elements waiting to load have entered the visible area, and loads them dynamically. It uses the IntersectionObserver API instead of the window.onscroll event, which performs better: the browser listens for all element position changes in one place and dispatches the intersection results:

(function() {
  /* avoid garbage collection */
  window.hexoLoadingImages = window.hexoLoadingImages || {};

  function query(selector) {
    return document.querySelectorAll(selector);
  }

  /* registry listener */
  if (window.IntersectionObserver) {
    var observer = new IntersectionObserver(function (entries) {
      entries.forEach(function (entry) {
        // in view port
        if (entry.isIntersecting) {
          observer.unobserve(entry.target);
          // proxy image
          var img = new Image();
          var imgId = "_img_" + Math.random();
          window.hexoLoadingImages[imgId] = img;
          img.onload = function() {
            entry.target.src = entry.target.getAttribute('data-src');
            window.hexoLoadingImages[imgId] = null;
          };
          img.onerror = function() {
            window.hexoLoadingImages[imgId] = null;
          };
          entry.target.src = entry.target.getAttribute('data-loading');
          img.src = entry.target.getAttribute('data-src');
        }
      });
    });
    query('img[lazyload]').forEach(function (item) {
      observer.observe(item);
    });
  } else {
    /* fallback */
    query('img[lazyload]').forEach(function (img) {
      img.src = img.getAttribute('data-src');
    });
  }
}).bind(window)();

➣ lazily load additional resources using the IntersectionObserver API

Because of its better performance, the IntersectionObserver API handles most of the lazy loading in my blog, and I have also used it to optimize loading of the blog comment component Valine. The comment component usually sits at the bottom of a post, so there is no need to load its resources when the page first loads. Instead, IntersectionObserver can watch for the comment container entering the viewport, and only then download the script asynchronously and initialize the comment system in its callback.
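A minimal sketch of this idea follows; note that the `#vcomments` selector, the script URL, and the Valine options are illustrative placeholders, not the blog's actual code:

```javascript
/* Hypothetical sketch: defer the Valine comment component until its
 * container scrolls into view. '#vcomments' and the Valine options
 * are placeholders, not the blog's actual configuration. */
function loadScriptAsync(src, onload) {
  var script = document.createElement('script');
  script.async = true;            // download without blocking parsing
  script.src = src;
  script.onload = onload;
  document.body.appendChild(script);
}

if (typeof document !== 'undefined' && window.IntersectionObserver) {
  var container = document.querySelector('#vcomments');
  var observer = new IntersectionObserver(function (entries) {
    if (!entries[0].isIntersecting) return;
    observer.disconnect();        // initialize only once
    loadScriptAsync('//unpkg.com/valine/dist/Valine.min.js', function () {
      new Valine({ el: '#vcomments', appId: 'xxx', appKey: 'xxx' });
    });
  });
  if (container) observer.observe(container);
}
```

The `disconnect()` call makes the initialization one-shot: once the comments have loaded there is nothing left to observe.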

The music player APlayer at the bottom of every article uses the same lazy-loading strategy.

➣ Load the external dependency script using the CDN

A CDN is a content delivery network. A CDN service caches static resources on high-performance edge nodes distributed across the country. When a user requests a resource, the CDN system directs the request to the nearest service node in real time, based on network traffic, the load on each node, the distance to the user, response time, and other information. This makes content transfer faster and more stable, and improves the responsiveness of the first request.

Some public external libraries used in the blog, such as Bootstrap and jQuery, are referenced through external CDN addresses. On one hand, this reduces the bandwidth consumed by the main site; on the other hand, the CDN can serve these resources faster.
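One practical detail when relying on a CDN: if the CDN node is unreachable, the page should fall back to a self-hosted copy. A minimal sketch, where the CDN URL and local path are examples rather than the blog's actual configuration:

```javascript
/* Hypothetical sketch: load a library from a public CDN and fall back to
 * a self-hosted copy on error. The URL and path are illustrative. */
function loadScriptWithFallback(cdnSrc, fallbackSrc) {
  var script = document.createElement('script');
  script.src = cdnSrc;
  script.onerror = function () {
    // CDN failed: retry with the copy served by the main site
    var fallback = document.createElement('script');
    fallback.src = fallbackSrc;
    document.head.appendChild(fallback);
  };
  document.head.appendChild(script);
}

if (typeof document !== 'undefined') {
  loadScriptWithFallback(
    'https://cdn.jsdelivr.net/npm/jquery@3/dist/jquery.min.js',
    '/blogs/js/jquery.min.js'
  );
}
```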

➣ load netease cloud music with Aplayer instead of iframe

The previous version of the blog embedded a NetEase Cloud Music player at the bottom of each post page. This player is essentially an iframe like this:

<iframe frameborder="no" border="0" marginwidth="0" marginheight="0" width=330 height=86 src="//music.163.com/outchain/player?type=2&id=781246&auto=1&height=66"></iframe>

When an iframe is loaded it pulls in a lot of extra content, and although it can be lazy loaded, iframes have a number of disadvantages:

  • The iframe blocks the onload event on the main page
  • The iframe shares the HTTP connection pool with the main page; since browsers limit concurrent connections to the same domain, this hurts parallel loading of the page
  • Iframes complicate web layout
  • Iframes are not mobile-friendly
  • Repeatedly reloading an iframe can cause memory leaks in some browsers
  • Data transfer between the iframe and the main page is complex
  • Iframes are bad for SEO

The new version replaces the iframe player with APlayer and hosts a list of favorite songs in a separate Gitee Pages repository as static files. A custom song list can then be loaded at the bottom of the blog like this:

var ap = new APlayer({
  container: document.getElementById('aplayer'),
  theme: '#e9e9e9',
  audio: [{
    name: 'existence',
    artist: 'AcuticNotes',
    url: 'https://nojsja.gitee.io/static-resources/audio/life-signal.mp3',
    cover: 'https://nojsja.gitee.io/static-resources/audio/life-signal.jpg'
  }, {
    name: '遺サレタ場所/斜光',
    artist: 'Okabe Keiichi',
    url: 'https://nojsja.gitee.io/static-resources/audio/%E6%96%9C%E5%85%89.mp3',
    cover: 'https://nojsja.gitee.io/static-resources/audio/%E6%96%9C%E5%85%89.jpg'
  }]
});

Preview:

2. Optimize interface performance

➣ Optimizes reflow and redraw of pages

1) Concept

Reflow and repaint are essential steps in how browsers render web pages. Reflow is a re-computation of the DOM tree layout caused by changes in the spatial position or size of elements; repaint is caused by changes to nodes' style properties that do not affect spatial layout. Performance-wise, reflow is much more expensive and easily causes a cascade effect: in a normal flow layout, when one element reflows, every element after it must also reflow and re-render. Repaint consumes far less.
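The cascade is easiest to see in code. Here is an illustrative sketch (not from the original post): interleaving layout reads and style writes forces one synchronous reflow per iteration, while batching all reads before all writes invalidates layout only once.

```javascript
/* Illustrative sketch: layout thrashing vs. batched reads and writes. */
function resizeAllBad(items) {
  items.forEach(function (item) {
    // read (forces layout) then write, repeatedly: one reflow per item
    var width = item.parentNode.offsetWidth;
    item.style.width = width / 2 + 'px';
  });
}

function resizeAllGood(items) {
  // phase 1: batch all layout reads
  var widths = items.map(function (item) {
    return item.parentNode.offsetWidth;
  });
  // phase 2: batch all writes; layout is invalidated only once
  items.forEach(function (item, i) {
    item.style.width = widths[i] / 2 + 'px';
  });
}
```

Both functions produce the same final styles; only the number of forced synchronous reflows differs.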

2) How to effectively observe reflow and repaint on a page?

Browsers based on the Chromium architecture ship with the DevTools web development tool, but most front-end developers never explore it beyond basics such as log debugging, request tracking, and style inspection. It can also visualize reflow and repaint: press F12 to open DevTools, click the three-dot menu in the upper right corner -> More Tools -> Rendering, then check the first two options, Paint Flashing and Layout Shift Regions. Go back to the page and interact with it: areas where reflow occurs are briefly highlighted in blue, and areas being repainted are briefly highlighted in green.

Repaint:

Reflow:

Besides reflow/repaint visualization, DevTools has several other useful features. For example, the Coverage tool analyzes how much of each external CSS/JS file loaded by the page is actually used. For external resources with low usage, we can consider inlining the needed parts or writing the implementation directly, to get better value out of each external request.

➣ Optimize scrolling event listening with throttling and debouncing

When a function is triggered at high frequency and its calls need to be controlled, when should you throttle and when should you debounce? A simple way to distinguish them: if you only care about the final result after a short burst of high-frequency triggers, use debouncing; if you need to limit how often a function is called while still guaranteeing it runs at a sustained interval during the burst, and you don't care about catching the very last trigger, use throttling.

For example, an ECharts chart usually needs to re-render after the window is resized, but listening to the resize event directly causes the render function to fire many times in a short period. Using the debouncing idea, we can ensure that one resize gesture triggers only a single actual ECharts re-render: the listener simply forwards its arguments to a pre-initialized debounced function.
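A minimal sketch of that resize pattern, using a common timer-based debounce; `myChart` (an initialized ECharts instance) and the 200 ms delay are assumptions for illustration:

```javascript
/* Hypothetical sketch: re-render an ECharts instance once after resizing
 * settles. `myChart` and the 200 ms delay are illustrative assumptions. */
function debounce(fn, wait) {
  var timer = null;
  return function () {
    var ctx = this, args = arguments;
    clearTimeout(timer);               // cancel the pending call
    timer = setTimeout(function () {
      fn.apply(ctx, args);             // only the last call in the burst survives
    }, wait);
  };
}

if (typeof window !== 'undefined') {
  window.addEventListener('resize', debounce(function () {
    myChart.resize();                  // one actual re-render per resize gesture
  }, 200));
}
```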

Here is a simple implementation of the throttling and debouncing functions:

/**
 * fnDebounce [debounce]
 * @author nojsja
 * @param {Function} fn
 * @param {Number} timeout [delay time]
 * @return {Function}
 */
var fnDebounce = function (fn, timeout) {
  var time = null;
  return function () {
    if (!time) return time = Date.now();
    if (Date.now() - time >= timeout) {
      time = null;
      return fn.apply(this, [].slice.call(arguments));
    } else {
      time = Date.now();
    }
  };
};

/**
 * fnThrottle [throttle]
 * @author nojsja
 * @param {Function} fn
 * @param {Number} timeout [minimum call interval]
 * @return {Function}
 */
var fnThrottle = function (fn, timeout) {
  var time = null;
  return function () {
    if (!time) return time = Date.now();
    if (Date.now() - time >= timeout) {
      time = null;
      return fn.apply(this, [].slice.call(arguments));
    }
  };
};
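A short usage sketch for the timestamp-based fnThrottle above (re-declared here so the snippet runs on its own): the first call only primes the internal timestamp, and a later call executes the wrapped function only once the interval has elapsed.

```javascript
/* Standalone usage sketch of the timestamp-based throttle shown above. */
var fnThrottle = function (fn, timeout) {
  var time = null;
  return function () {
    if (!time) return time = Date.now();   // first call: prime the timestamp
    if (Date.now() - time >= timeout) {
      time = null;
      return fn.apply(this, [].slice.call(arguments));
    }
  };
};

var hits = 0;
var throttled = fnThrottle(function () { hits += 1; }, 100);

throttled();   // primes the timestamp, fn is NOT invoked yet
throttled();   // called too soon: ignored
// once >= 100 ms have elapsed, the next trigger invokes fn and resets the timer
```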

The table-of-contents navigation bar on the right side of each article switches between a fixed layout and the normal flow layout depending on the scroll position, so that it stays visible while reading instead of being scrolled off the top:

/* throttle: detect scrolling at most once every 150 ms */
$(window).on('scroll', fnThrottle(function () {
  var rectbase = $$tocBar.getBoundingClientRect();
  if (rectbase.top <= 0) {
    $toc.css('left', left);
    (!$toc.hasClass('toc-fixed')) && $toc.addClass('toc-fixed');
    $toc.hasClass('toc-normal') && $toc.removeClass('toc-normal');
  } else {
    $toc.css('left', '');
    $toc.hasClass('toc-fixed') && $toc.removeClass('toc-fixed');
    (!$toc.hasClass('toc-normal')) && $toc.addClass('toc-normal');
    ($$toc.scrollTop > 0) && ($$toc.scrollTop = 0);
  }
}, 150));

➣ Polyfill compatibility policy for the IntersectionObserver API

As mentioned above, the IntersectionObserver API is used for lazy loading of various interface components in the blog, offering better performance and more complete functionality. In web development, however, we usually have to consider API compatibility, which can be checked on the Can I Use website. The figure below shows that compatibility for this API is acceptable: most recent desktop browsers already support it:

To support older browsers, the usual approach is to include an external [XXX].polyfill.js (where XXX is the API in question) that adds the missing feature. But browsers that already support the API would then download the polyfill for nothing, wasting requests and bandwidth. Since most browsers already support IntersectionObserver, the polyfill is not loaded by default; instead, the page feature-detects the API and synchronously loads the polyfill only when it is actually missing:

<!-- this script is placed near the top of the page -->
<script>
if ('IntersectionObserver' in window &&
    'IntersectionObserverEntry' in window &&
    'intersectionRatio' in window.IntersectionObserverEntry.prototype) {
  if (!('isIntersecting' in window.IntersectionObserverEntry.prototype)) {
    Object.defineProperty(window.IntersectionObserverEntry.prototype, 'isIntersecting', {
      get: function () { return this.intersectionRatio > 0; }
    });
  }
} else {
  /* load polyfill synchronously */
  sendXMLHttpRequest({
    url: '/blogs/js/intersection-observer.js',
    async: false,
    method: 'get',
    callback: function (txt) { eval(txt); }
  });
}
</script>

➣ Replace native onscroll event listening with IntersectionObserver

IntersectionObserver is usually used for some intersection detection scenarios in the interface:

  • Lazy image loading: load an image only when it is scrolled into view
  • Infinite scrolling of content: load more data as the user scrolls near the bottom of the container, without explicit paging, giving the illusion that the page scrolls indefinitely
  • Ad exposure monitoring: to calculate ad revenue, you need to know how visible the ad elements are
  • Performing a task or playing a video when the user sees a certain area
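The infinite-scrolling case above can be sketched by observing an empty "sentinel" element placed after the list; the `#sentinel` selector and the loadMore callback here are illustrative assumptions:

```javascript
/* Hypothetical sketch of infinite scrolling with IntersectionObserver.
 * '#sentinel' and the loadMore callback are illustrative assumptions. */
function makeInfiniteScrollHandler(loadMore) {
  return function (entries) {
    entries.forEach(function (entry) {
      if (entry.isIntersecting) loadMore();  // the sentinel became visible
    });
  };
}

if (typeof document !== 'undefined' && window.IntersectionObserver) {
  // an empty marker element placed right after the list
  var sentinel = document.querySelector('#sentinel');
  var observer = new IntersectionObserver(
    makeInfiniteScrollHandler(function () {
      /* fetch and append the next page of items here; the sentinel stays
         at the bottom, so it will intersect again for the following page */
    }),
    { rootMargin: '200px' }                  // start loading before the bottom
  );
  if (sentinel) observer.observe(sentinel);
}
```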

Taking infinite scrolling as an example, the old-school intersection detection approach is to listen to the container's scroll event and read the geometry of the scrolled elements inside the listener to decide whether the bottom has been reached. Reading and setting properties such as scrollTop forces a page reflow, and page performance suffers if the interface binds many such listeners to scroll events:

const onScroll = () => {
  const { scrollTop, scrollHeight, clientHeight } = document.querySelector('#target');
  /* scrollTop: distance already scrolled
     clientHeight: visible height of the container
     scrollHeight: total content height of the container */
  if (scrollTop + clientHeight === scrollHeight) {
    /* reached the bottom: do something... */
  }
};

Here is a simple image lazy-loading function that illustrates its use; for details, see the post "Front-end Performance Optimization Tips (1)".

(function lazyload() {
  var imagesToLoad = document.querySelectorAll('img[data-src]');

  function loadImage(image) {
    image.src = image.getAttribute('data-src');
    image.addEventListener('load', function () {
      image.removeAttribute('data-src');
    });
  }

  var intersectionObserver = new IntersectionObserver(function (items, observer) {
    items.forEach(function (item) {
      /*
       * item.boundingClientRect - geometry of the target element
       * item.intersectionRatio  - intersection ratio: intersectionRect / boundingClientRect
       * item.intersectionRect   - intersection area of the root and target elements
       * item.isIntersecting     - true (intersection starts) / false (intersection ends)
       * item.rootBounds         - geometry of the root element
       * item.target             - the target element
       * item.time               - timestamp from the time origin (page load) to when the intersection fired
       */
      if (item.isIntersecting) {
        loadImage(item.target);
        observer.unobserve(item.target);
      }
    });
  });

  imagesToLoad.forEach(function (image) {
    intersectionObserver.observe(image);
  });
})();

3. Site best practices

➣ Use the hexo-abbrlink plug-in to generate article links

By default, the Hexo framework generates blog addresses in the format :year/:month/:day/:title, i.e. /year/month/day/title. When the post title is in Chinese, the generated URL contains Chinese characters, and Chinese paths are not SEO-friendly. Copied links end up URL-encoded, unreadable, and far from terse.

To solve this problem, use hexo-abbrlink: npm install hexo-abbrlink --save, then add the configuration to _config.yml:

permalink: posts/:abbrlink.html
permalink_defaults:
abbrlink:
  alg: crc32  # algorithm: crc16 (default) and crc32
  rep: hex    # representation: dec (default) and hex

The generated post URL then becomes something like https://post.zz173.com/posts/8ddf18fb.html, and even if the post title is updated later, the link does not change.

➣ Use hexo-filter-nofollow to avoid security risks

The hexo-filter-nofollow plug-in adds the attribute rel=”noopener external nofollow noreferrer” for all links.

A large number of external links on a site dilutes the site's ranking weight, which is bad for SEO.

  • nofollow: an attribute value proposed by Google, Yahoo and Microsoft some years ago. Links carrying it pass no ranking weight; nofollow tells crawlers not to follow the target page. Google recommends nofollow to combat blog spam: it tells the search engine not to crawl the target page and not to pass the current page's PageRank to it. However, if the target page is submitted directly via a sitemap, the crawler will still reach it. Nofollow only expresses the current page's attitude toward the target page, not the attitude of other pages.
  • noopener noreferrer: when an <a> tag with target="_blank" links to another page, the new page may run in the same process as yours; if it executes expensive JavaScript, your page's performance can suffer. The new page can also obtain the old page's window object through window.opener and perform arbitrary operations on it, a serious security risk. With noopener (plus noreferrer for compatibility), the newly opened page can no longer access the old page's window object.
  • external: tells search engines that this is not an internal link of the site; its function is equivalent to target="_blank", and it reduces the SEO weight passed to external links.
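Conceptually, the plugin post-processes the rendered HTML. A simplified illustrative sketch of that transformation (not the plugin's actual source):

```javascript
/* Illustrative sketch (NOT hexo-filter-nofollow's actual source): add
 * rel="noopener external nofollow noreferrer" to external links only. */
function addNofollow(html, siteHost) {
  return html.replace(
    /<a([^>]*?)href="(https?:\/\/[^"]+)"([^>]*)>/gi,
    function (str, pre, href, post) {
      if (href.indexOf(siteHost) !== -1) return str;  // internal link: keep as-is
      if (/\brel=/i.test(str)) return str;            // already has a rel attribute
      return '<a' + pre + 'href="' + href +
             '" rel="noopener external nofollow noreferrer"' + post + '>';
    }
  );
}
```

So an external `<a href="https://example.com">` gains the rel attribute, while links pointing back at the blog's own host are left untouched.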

➣ Automated deployment using hexo-deployer-git and GitHub Workflow

After the static resources are generated, they need to be committed to the corresponding GitHub Pages or Gitee Pages repository, which is very inefficient when multiple repositories must be deployed by hand. The hexo-deployer-git plug-in automates this. You can declare the repository information to deploy as follows; if you have multiple repositories, you can declare multiple deploy fields:

# Deployment
deploy:
  type: git
  repository: https://github.com/nojsja/blogs
  branch: master
  ignore_hidden:
    public: false
  message: update

It's worth noting that the Gitee Pages repository of non-paying users does not support automatic deployment after a commit. My workaround is to declare only a deploy repository pointing to the GitHub Pages repository, and then use the GitHub repository's own GitHub Workflow service together with the gitee-pages-action script to automate the Gitee deployment. The workflow synchronizes the GitHub repository's code to the Gitee repository, then logs in to the Gitee account using the configured credentials and calls the web page's manual deployment interface, completing the whole automated deployment process.

4. Website SEO optimization

➣ Automatically generate sitemap using the hexo-generator-sitemap plug-in

What is a sitemap:

  • A sitemap describes the structure of a website. It can be a document of any form used as a web-design tool, or a page that lists all the pages of a site, usually organized hierarchically. It helps both visitors and search engine bots find pages in the site.
  • Some developers consider a site index a more appropriate way to organize web pages, but a site index is usually an A-Z index that only provides access to specific content, while a sitemap provides a general top-down view of the entire site.
  • A sitemap makes every page discoverable, which enhances search engine optimization.

Installation:

$: npm install hexo-generator-sitemap --save

Add relevant fields to configuration file _config.yml:

# sitemap
sitemap:
  path: sitemap.xml
# page url
url: https://nojsja.github.io/blogs 

After you run hexo generate, sitemap.xml is generated automatically; you then need to register your site in the Google Search Console and submit the generated sitemap file.

➣ Submit a personal sitemap to the Google Search Console

  • Log on to the Google Search Console
  • Add your own website information
  • Ownership can be verified in several ways
  • Submit the plug-in-generated sitemap.xml

The Google Search Console is also handy for seeing clicks on your site, keyword indexing, and more.

References


  1. Lighthouse and Google’s mobile best practices
  2. Google web.dev

Conclusion


Studying front-end performance optimization both tests our grasp of core fundamentals and gives us concrete approaches to practical problems; it is a path every front-end developer has to take.

That’s the end of this article. We will update it if necessary.