Author: Shiki

segmentfault.com/a/1190000015052545

Performance optimization is a broad area; this article focuses on several front-end aspects: the optimization process, common techniques, and tooling.

When front-end performance optimization comes up, Yahoo's 35 performance rules come to mind first. This article combines those rules with my own understanding to summarize and organize the topic 😜.

Yahoo's 35 Rules

First, let's take a look at Yahoo's 35 performance rules:

  1. Minimize the number of HTTP requests – trade-offs

  2. Using CDN (Content Delivery Network)

  3. Specify Expires or cache-control for the file header to make the content cacheable.

  4. Avoid empty src and href

  5. Use gzip to compress content

  6. Put CSS at the top

  7. Put JS at the bottom

  8. Avoid using CSS expressions

  9. Put CSS and JS in external files

  10. Reduce DNS lookup times

  11. Minify CSS and JS

  12. Avoid redirects

  13. Eliminate duplicate JS and CSS

  14. Configuration ETags

  15. Make AJAX cacheable

  16. Flush the output buffer as early as possible

  17. Use GET to complete AJAX requests

  18. Lazy loading

  19. preload

  20. Reduce the number of DOM elements

  21. Divide the page content by domain name

  22. Minimize the number of IFrames

  23. Avoid 404

  24. Reduce the size of cookies

  25. Use a cookie-free domain

  26. Reduce DOM access

  27. Use smart event handlers

  28. Use <link> instead of @import

  29. Avoid using filters

  30. Optimize images

  31. Optimize CSS sprites

  32. Don't scale images in HTML – there are trade-offs

  33. Favicon.ico should be small and cacheable

  34. Keep individual content smaller than 25K

  35. Package components into a multipart document

If any of these rules are unclear, search for "Yahoo's 35 rules for performance optimization" to read the details.

Compression and merging

For front-end performance, what matters first is the load speed of the first screen, and much of that time is spent on network requests. So how do we reduce network request time?

  • Reduce the number of network requests

  • Reduce file size

  • Use CDN acceleration

Compressing and merging files is one solution; you can do this with gulp, webpack, Grunt, etc.

JS and CSS compression and merging

For example, gulp code for compressing and merging JS and CSS looks like this 👇 :


     
```javascript
// Plugin requires are assumed; the original snippet omitted them
var gulp = require('gulp');
var concat = require('gulp-concat');
var minify = require('gulp-minify');

// Compress and merge JS
gulp.task('scripts', function () {
    return gulp.src([
        './public/lib/fastclick/lib/fastclick.min.js',
        './public/lib/jquery_lazyload/jquery.lazyload.js',
        './public/lib/velocity/velocity.min.js',
        './public/lib/velocity/velocity.ui.min.js',
        './public/lib/fancybox/source/jquery.fancybox.pack.js',
        './public/js/src/utils.js',
        './public/js/src/motion.js',
        './public/js/src/scrollspy.js',
        './public/js/src/post-details.js',
        './public/js/src/bootstrap.js',
        './public/js/src/push.js',
        './public/live2dw/js/perTips.js',
        './public/live2dw/lib/L2Dwidget.min.js',
        './public/js/src/love.js',
        './public/js/src/busuanzi.pure.mini.js',
        './public/js/src/activate-power-mode.js'
    ]).pipe(concat('all.js')).pipe(minify()).pipe(gulp.dest('./public/dist/'));
});

// Compress and merge CSS
gulp.task('css', function () {
    return gulp.src([
        './public/lib/font-awesome/css/font-awesome.min.css',
        './public/lib/fancybox/source/jquery.fancybox.css',
        './public/css/main.css',
        './public/css/lib.css',
        './public/live2dw/css/perTips.css'
    ]).pipe(concat('all.css')).pipe(minify()).pipe(gulp.dest('./public/dist/'));
});
```

Then, put the compressed and merged JS and CSS on a CDN, and 👀 see the effect:

This is the home page request profile of lishaoy.net after clearing the cache.

As you can see, the request time is 4.59s, with 51 total requests: 8 for JS and 3 for CSS (actually there is only one all.css; the other two are loaded by Chrome itself). Before this optimization, the request time was over 10 seconds, with more than 70 total requests, over 20 of them for JS, so request time alone improved by more than a factor of two.

The picture below shows the home page loaded from cache:

It basically opens within a second 😝.

Tips: after compression and merging, keep a single file within 25–30 KB, and ideally serve no more than 5 resources from the same domain.

Image compression and merging

For example, gulp image compression code looks like this 👇 :


     
```javascript
// Plugin requires are assumed; the original snippet omitted them
var gulp = require('gulp');
var imagemin = require('gulp-imagemin');

// Compress images
gulp.task('imagemin', function () {
    gulp.src('./public/**/*.{png,jpg,gif,ico,jpeg}')
        .pipe(imagemin())
        .pipe(gulp.dest('./public'));
});
```

Images can be merged using CSS sprites: combine several small images into one image in Photoshop, then use CSS background positioning to display the part of the image you need.


     
```css
.top_right .phone {
    background: url(../images/top_right.png) no-repeat 7px -17px;
    padding: 0 38px;
}
.top_right .help {
    background: url(../images/top_right.png) no-repeat 0 -47px;
    padding: 0 38px;
}
```

Then, put the compressed images on the CDN and 👀 see how it looks:

As you can see, the request time is now 1.70s with 50 total requests, 15 of them images (the large home page images were only compressed, not merged). The effect is still very good 😀: load time dropped from 4.59s to 1.70s, more than doubling performance again.

Take a look at the cache situation 😏 :

With the cache in place, the request time is 1.05s.

Tips: for large images on different devices, serve different resolutions rather than scaling with percentages.

With compression and merging of JS, CSS, and images, plus the CDN, request time fell from over 10 seconds to 1.70s, a more than 5x improvement, so this work is clearly worthwhile.

Caching

A cache stores a copy of a response (pages, images, files). When the next request for the same URL arrives, the cache answers it with the local copy instead of sending the request to the origin server again. This improves performance in two ways:

  • Reduce latency and improve response time

  • Reduces network bandwidth consumption and saves traffic

Let’s take a look at the browser caching mechanism with two diagrams.

1. The first request from the browser

2. The browser requests again

From the above two figures, you can clearly understand the browser cache process:

  • When a URL is accessed for the first time, there is no cache yet, but the server's response headers (Expires, Cache-Control, Last-Modified, ETag, etc.) record whether and how the next request should be cached.

  • When the URL is accessed again, the browser decides whether, and how, to use the cache based on the headers returned on the first visit.

Let's focus on the second diagram, which actually splits into two paths, as follows 👇.

First path: when the browser accesses a URL again, it first checks the resource's cached header information and determines whether the strong cache (Cache-Control and Expires) is still valid. If so, it takes the resource, response headers included, straight from the cache without contacting the server at all. This is the strong cache, as shown below:

Second path: if the strong cache misses, the browser sends a request to the server carrying the validators returned by the first response (Last-Modified/If-Modified-Since and ETag/If-None-Match). The server uses them to decide whether the negotiated cache hits. On a hit, it returns a new response header that updates the cached copy's headers, but no resource body, telling the browser to take the resource from cache; otherwise it returns the latest resource content. This is the negotiated cache.

Now that we know that browser caching is divided into strong caching and negotiated caching, let’s take a look at the differences 👇 :

Strong caching involves two header fields:

1. Expires

Expires is part of the HTTP/1.0 specification. Its value is an absolute GMT time string, such as Mon, 10 Jun 2015 21:31:12 GMT. If a request is sent before the Expires time, the local cache is considered valid; otherwise a request is sent to the server to fetch the resource.

2. Cache-Control

Cache-Control: max-age=<seconds> is the HTTP/1.1 counterpart; its value is a relative time in seconds. If the time of the first request plus max-age is later than the current request time, the cache is hit; otherwise it is not. There are a few more common settings:

  • no-cache: do not use the local copy without revalidation; the browser must check with the server whether the cached response is still valid (cache negotiation). If the previous response carried an ETag, the request is validated against the server, and re-downloading is avoided when the resource is unchanged.

  • no-store: forbids the browser from caching the data at all; every request goes to the server and downloads the complete resource.

  • public: may be cached by everyone, including end users and intermediate proxy servers such as CDNs.

  • private: may be cached only by the end user's browser, not by intermediate caches such as CDNs.

If Cache-Control and Expires are both present, Cache-Control takes precedence over Expires.
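The strong-cache decision above can be sketched as a small function. `isStrongCacheFresh` is a hypothetical helper for illustration only, not a browser API; all times are milliseconds since the epoch.

```javascript
// Sketch of the strong-cache freshness check described above:
// Cache-Control: max-age takes precedence over Expires.
// isStrongCacheFresh is a hypothetical helper, not a browser API.
function isStrongCacheFresh(headers, requestTime, now) {
  var cc = headers['cache-control'] || '';
  if (/no-store/.test(cc) || /no-cache/.test(cc)) return false; // must not cache / must revalidate
  var m = cc.match(/max-age=(\d+)/);
  if (m) {
    // max-age is relative: fresh while first-request time + max-age is still in the future
    return requestTime + Number(m[1]) * 1000 > now;
  }
  if (headers['expires']) {
    // Expires is absolute: fresh while "now" is before the given GMT time
    return now < Date.parse(headers['expires']);
  }
  return false; // no strong-cache headers: fall through to a normal request
}
```

For instance, with `max-age=600`, a check 5 minutes after the first request hits, while a check 11 minutes later misses, regardless of any Expires value.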

Negotiated caching

With a negotiated cache, the browser and the server negotiate whether the cached copy can still be used. The header fields come in pairs: the first response carries Last-Modified and/or ETag, and subsequent requests carry the matching If-Modified-Since and/or If-None-Match. If the first response had neither field, later requests carry neither.

1. Last-Modified / If-Modified-Since

Both values are time strings in GMT format.

  • The first time the browser requests a resource, the server returns it with a Last-Modified field in the response header, indicating when the resource was last modified on the server.

  • When the browser requests the resource again, it adds If-Modified-Since to the request header, set to the Last-Modified value from the previous response.

  • When the server receives the request, it compares If-Modified-Since with the resource's last modification time. If the resource is unchanged, it returns 304 Not Modified without the resource body; if it has changed, it returns the resource content as normal. A 304 response does not include a Last-Modified header, since the resource has not changed and neither has that value.

  • When the browser receives a 304 response, it loads the resource from the cache.

  • If the negotiated cache misses and the browser loads the resource directly from the server, the Last-Modified header is updated on reload, and the next request's If-Modified-Since uses the new value.

2. ETag / If-None-Match

These two values are unique identifier strings the server generates for each resource; they change whenever the resource changes. The flow mirrors Last-Modified and If-Modified-Since. Unlike Last-Modified, when the server returns 304 Not Modified, the response header still includes the ETag, even though it is the same ETag as before.

Last-Modified and ETag can be used together: the server validates the ETag first, and if it matches, goes on to compare Last-Modified before deciding whether to return 304.
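The negotiation above can be sketched as a small server-side helper. `negotiate()` is hypothetical and illustrates only the validator comparison: ETag is checked first, then Last-Modified, and 304 is returned only when every validator the client sent still matches.

```javascript
// Sketch of the server-side negotiated-cache decision (hypothetical helper):
// returns 304 when the cached copy is still valid, 200 when the full
// resource content must be sent again.
function negotiate(requestHeaders, resource) {
  var inm = requestHeaders['if-none-match'];
  var ims = requestHeaders['if-modified-since'];
  var hasValidator = inm !== undefined || ims !== undefined;
  // ETag check: a sent If-None-Match must equal the current ETag
  var etagOk = inm === undefined || inm === resource.etag;
  // Last-Modified check: the resource must not be newer than If-Modified-Since
  var dateOk = ims === undefined || Date.parse(ims) >= Date.parse(resource.lastModified);
  return (hasValidator && etagOk && dateOk) ? 304 : 200;
}
```

A request with no validators always gets a full 200 response; a mismatched ETag forces a 200 even if the Last-Modified date still matches.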

Service Worker

1. What is a service worker

A service worker essentially acts as a proxy server between the web application and the browser, and, when the network is available, between the browser and the network. Among other things, it is designed to enable an effective offline experience: it intercepts network requests and acts on whether the network is available and whether updated resources reside on the server. It also gives access to push notification and background sync APIs.

Service workers solve the offline-application problem and can do much more. A service worker lets your application serve locally cached resources first, so it can still provide basic functionality while offline, before fetching more data over the network (commonly called Offline First). This is what native apps have always done, and it is one main reason native apps were preferred over web apps.

Let's see what a service worker can do 👀:

  • Background messaging

  • Network proxy, forwarding requests, forging responses

  • Offline caching

  • Receiving push messages

  • …

This article uses lishaoy.net as an example to explain how a service worker works.

2. Life cycle

The life cycle of a service worker's initial installation is shown in 🌠 :

As the figure at 👆 shows, the service worker's workflow is:

  1. Installation: the page fetches and registers the service worker via ServiceWorkerContainer.register(url).

  2. Activation: when installation finishes, the service worker receives an activate event. The main purpose of onactivate is to clean up resources used by previous versions of the service worker script.

  3. Monitoring: two states

  • terminated, to save memory;

  • listening for fetch and message events.

  4. Destruction: the browser decides whether to destroy a service worker; one that has gone unused for a long time, or a machine with limited memory, may cause the worker to be destroyed.

  Tips: Once activated, in Chrome you can visit chrome://inspect/#service-workers and chrome://serviceworker-internals/ to view the current service workers, as shown in 👇 :

    Now, let’s write a simple example 🌰.

    3. Register service worker

    To install a service worker, you need to register it on your page. This step tells the browser where your service worker script is.

    
         
```javascript
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then(function(registration) {
    // Registration was successful
    console.log('ServiceWorker registration successful with scope: ', registration.scope);
  }).catch(function(err) {
    // Registration failed :(
    console.log('ServiceWorker registration failed: ', err);
  });
}
```

The code above checks whether the service worker API is available; if so, the service worker /sw.js is registered. If it is already registered, the browser simply ignores the code.

4. Activate the service worker

    After your service worker is registered, the browser tries to install and activate it for your page or site.

The install event fires during installation and is typically used to populate the browser's offline cache: define a callback for install and decide which files you want cached.

    
         
```javascript
// The files we want to cache
var CACHE_NAME = 'my-site-cache-v1';
var urlsToCache = [
  '/',
  '/css/main.css',
  '/js/main.js'
];

self.addEventListener('install', function(event) {
  // Perform install steps
  event.waitUntil(
    caches.open(CACHE_NAME)
      .then(function(cache) {
        console.log('Opened cache');
        return cache.addAll(urlsToCache);
      })
  );
});
```

In our install callback, we need to perform the following steps:

• Open a cache

• Cache our files

• Confirm whether all required resources are cached

In the code above, we open the cache we named with caches.open, then call cache.addAll with our array of files. This is a chain of promises (caches.open and cache.addAll); event.waitUntil takes a promise and uses it to know how long installation takes and whether it succeeded.

    5. Listen to the service worker

Now that your site's resources are cached, you need to tell the service worker to do something with them. The fetch event makes this easy.

The fetch event fires every time any resource controlled by the service worker is requested. We can add a fetch event listener to the service worker, then call event.respondWith() on the event to hijack the HTTP response and answer it our own way.

    
         
```javascript
// Note: the original snippet had a stray semicolon inside respondWith(),
// which is a syntax error; it is removed here.
self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request)
  );
});
```

caches.match(event.request) checks whether the requested resource has a corresponding entry in the cache. Matching is done on the URL and the Vary header, just as for a normal HTTP request.

So how do we turn the match into a response? Here's an example 👇 🌰 :

    
         
```javascript
self.addEventListener('fetch', function(event) {
  event.respondWith(
    caches.match(event.request)
      .then(function(response) {
        // Cache hit - return the cached response
        if (response) {
          return response;
        }
        // Cache miss - fetch from the network
        return fetch(event.request);
      })
  );
});
```

In the code above we handle the fetch event: inside event.respondWith we pass a promise, and caches.match looks the request up in the service worker's caches.

If there is a hit, we return the cached response; otherwise we return the result of a real fetch from the network.
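The same cache-first control flow can be simulated in plain JavaScript, with a Map standing in for the Cache API and an injected fetch function standing in for the real fetch, so the logic can run outside a service worker. `cacheFirst` is a hypothetical helper for illustration.

```javascript
// Simulation of the cache-first strategy above (hypothetical helper):
// answer from the cache when possible, otherwise go to the network
// and store the result for next time.
async function cacheFirst(cache, url, fetchFn) {
  if (cache.has(url)) {
    return cache.get(url);           // cache hit: serve the local copy
  }
  var response = await fetchFn(url); // cache miss: go to the network
  cache.set(url, response);          // store the copy for next time
  return response;
}
```

The second request for the same URL never touches the network, which is exactly the behavior the fetch handler above gives cached site resources.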

6. sw-toolbox

Of course, you can also use a third-party library; lishaoy.net uses sw-toolbox.

Using sw-toolbox is very simple. Here is the example from lishaoy.net 👇 🌰 :

                                                                
         
```javascript
"serviceWorker" in navigator ? navigator.serviceWorker.register('/sw.js').then(function () {
  navigator.serviceWorker.controller
    ? console.log("Assets cached by the controlling service worker.")
    : console.log("Please reload this page to allow the service worker to handle network operations.");
}).catch(function (e) {
  console.log("ERROR: " + e);
}) : console.log("Service workers are not supported in the current browser.");
```

This registers a service worker.

    
         
```javascript
"use strict";
(function () {
    var cacheVersion = "20180527";
    var staticImageCacheName = "image" + cacheVersion;
    var staticAssetsCacheName = "assets" + cacheVersion;
    var contentCacheName = "content" + cacheVersion;
    var vendorCacheName = "vendor" + cacheVersion;
    var maxEntries = 100;
    self.importScripts("/lib/sw-toolbox/sw-toolbox.js");
    self.toolbox.options.debug = false;
    self.toolbox.options.networkTimeoutSeconds = 3;
    self.toolbox.router.get("/images/(.*)", self.toolbox.cacheFirst, {
        cache: {
            name: staticImageCacheName,
            maxEntries: maxEntries
        }
    });
    self.toolbox.router.get('/js/(.*)', self.toolbox.cacheFirst, {
        cache: {
            name: staticAssetsCacheName,
            maxEntries: maxEntries
        }
    });
    self.toolbox.router.get('/css/(.*)', self.toolbox.cacheFirst, {
        cache: {
            name: staticAssetsCacheName,
            maxEntries: maxEntries
        }
    });
    // ... further routes elided in the original ...
    self.addEventListener("install", function (event) {
        return event.waitUntil(self.skipWaiting());
    });
    self.addEventListener("activate", function (event) {
        return event.waitUntil(self.clients.claim());
    });
})();
```

And that's it 🍉 (see https://googlechromelabs.github.io/sw-toolbox/api.html#main for detailed usage).

Some readers ask, "If service workers are this useful, how much cache space do they get?" You can check in Chrome, as shown below:

As you can see, it's about 30 GB here; my site uses only 183 MB, which is more than enough 🍓.

    Finally, two images:

Due to the length of this article, optimization at the architecture level will be covered in a follow-up, for example:

    • Bigpipe blocks output

    • Bigrender blocks render

    • .

And rendering optimization, for example:

    • requestAnimationFrame

    • will-change

    • GPU hardware acceleration

    • .

And performance testing tools, such as:

    • PageSpeed

    • Audits