The State of the Web: A guide to impactful performance improvements

The Internet is growing rapidly, and so is the Web platform we create. Often we ignore issues like connectivity, but users don't. A glance at the World Wide Web reveals that we haven't been building it with empathy, flexibility, and performance in mind.

So, what is the state of the Web today?

Of the 7.4 billion people on the planet, only 46 percent have Internet access, and the average connection speed tops out at 7Mb/s. What's more, 93 percent of Internet users go online through mobile devices, and failing to serve mobile is inexcusable. Data is often more expensive than we assume: buying a 500MB data package can cost anywhere from 1 to 13 hours of work (Germany vs. Brazil; for more interesting statistics, see Ben Schwarz's Beyond the Bubble: The Real World Performance).

Our websites aren't in great shape either: the average site is the size of the original Doom game (about 3MB). (For statistical accuracy the median should be used; as Ilya Grigorik's excellent "average page" is a myth points out, the median site currently weighs 1.4MB.) Images easily take up 1.7MB of that, and the average JavaScript payload is 400KB. Nor is this just a Web platform problem; native applications can be worse. Remember when you had to download a 200MB installation package just to get a bug fix?

Technologists often find themselves in a position of privilege. With the latest high-end laptops, phones, and fast wired Internet connections, it's easy to forget that not everyone has these (in fact, relatively few people do).

If we build the Web from a position of privilege and without empathy, we end up with exclusive, harmful experiences.

So how can we do better when designing and developing with performance in mind?


Optimize all resources

Understanding how browsers analyze and process resources is one of the most powerful but underutilized ways to significantly improve performance. It turns out browsers are pretty good at sniffing out resources, parsing them, and prioritizing them. This is where critical requests come in.

A request is critical if it contains resources necessary to render content in the user’s viewport.

For most sites, that means the HTML, the necessary CSS, the logo, web fonts, and possibly images. In many cases, dozens of unrelated resources (JavaScript, tracking code, advertising, and so on) compete with the critical requests. Fortunately, we can control this behavior by carefully selecting and prioritizing the important resources.

With <link rel="preload"> we can manually force resources to be prioritized, ensuring the desired content is delivered on time. This technique can significantly improve the time-to-interactive metric, enabling the best possible user experience.
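A minimal sketch of what that can look like in the document head (the file names here are placeholders):

    <!-- Preload critical assets early, without blocking rendering. -->
    <link rel="preload" href="/css/critical.css" as="style">
    <link rel="preload" href="/fonts/brand.woff2" as="font" type="font/woff2" crossorigin>
    <link rel="preload" href="/js/app.js" as="script">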

Critical requests still seem like a black box to many, and the lack of shareable data doesn't help. Fortunately, Ben Schwarz has published a very comprehensive and approachable article on the subject, The Critical Request. Also see Addy Osmani's article Preload, Prefetch and Priorities in Chrome.

[Enable priority in Chrome Developer Tools]

To keep track of how well you're prioritizing requests, use Lighthouse's performance audit and its critical request chains audit, or check request priorities under the Network tab in Chrome Developer Tools.

General performance checklist

  1. Enable caching
  2. Enable compression
  3. Prioritize critical resources
  4. Use CDNs (content delivery networks)

Image optimization

Images typically account for most of a web page's payload, so image optimization offers the greatest performance gains. There are many strategies and tools to help remove extra bytes, but the first question to ask is: "Is this image critical to the message and effect I want to convey?" If you can eliminate it, you save not only bandwidth but requests as well.

In some cases, similar results can be achieved with different techniques. CSS, for example, has a set of artistic capabilities, such as shadows, gradients, animations, and shapes, that let us build appropriately styled DOM elements instead.
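For instance, a decorative gradient that might otherwise ship as a background image can be a few lines of CSS (a hypothetical sketch; the class name is illustrative):

    <style>
      /* A gradient with a soft shadow: no image request needed. */
      .hero {
        background: linear-gradient(135deg, #4a90d9, #7b5ea7);
        border-radius: 8px;
        box-shadow: 0 4px 12px rgba(0, 0, 0, 0.3);
      }
    </style>
    <div class="hero">Welcome!</div>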

Choose the right format

If you can’t get rid of images, it’s important to determine which format works best. The first choice is between vector and raster graphics:

  • Vector graphics: resolution-independent and usually smaller. Great for logos, icons, and simple graphics made of basic shapes (lines, polygons, circles, and points).
  • Raster graphics: render more detailed information and are better suited for photographs.

Once you've made that first decision, you can choose among JPEG, GIF, PNG-8, PNG-24, or the newer WebP and JPEG-XR formats. With so many options, how do we make sure we pick the right one? Here are the basics for finding the best format:

  • JPEG: images with many colors (such as photographs)
  • PNG-8: images with few colors
  • PNG-24: images with partial transparency
  • GIF: animated images

Photoshop can optimize each of these formats through various settings, such as reducing quality, noise, or the number of colors. Make sure your designers understand the performance practices described above and can optimize images in the right format. If you want to learn more about handling images, read Lara Hogan's great book Designing for Performance.

Try a new format

There are several newer players among image formats, namely WebP, JPEG 2000, and JPEG-XR. All were developed by browser vendors: WebP by Google, JPEG 2000 by Apple, and JPEG-XR by Microsoft.

WebP is the most popular contender, supporting both lossless and lossy compression, which makes it very flexible. Lossless WebP is 26% smaller than PNG, and lossy WebP is 25-34% smaller than JPEG. With 74% browser support it can be served with a safe fallback, saving up to a third of the transferred bytes. JPEGs and PNGs can be converted to WebP in Photoshop and other image-processing applications, as well as on the command line (brew install webp).
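A safe fallback can be as simple as offering WebP first and letting unsupported browsers use the JPEG (file names are illustrative):

    <picture>
      <!-- Browsers that understand WebP pick the smaller file... -->
      <source srcset="photo.webp" type="image/webp">
      <!-- ...everyone else falls back to the JPEG. -->
      <img src="photo.jpg" alt="Description of the photo">
    </picture>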

If you want to explore the visual differences between formats, I recommend this awesome demo on GitHub.

Optimize with tools and algorithms

Even with an efficient image format, post-processing optimization is an important step that shouldn't be skipped.

If you opted for SVG, those files can be optimized too. SVGO is a command-line tool that quickly optimizes SVGs by stripping unnecessary metadata. Alternatively, if you prefer a web interface or are limited by your operating system, use Jake Archibald's SVGOMG. And because SVG is XML-based, it can also be gzip-compressed on the server side.

For most other image types, ImageOptim is the best choice. It bundles a collection of great tools (pngcrush, pngquant, MozJPEG, Google Zopfli, and more) into one comprehensive open source package. ImageOptim fits easily into existing workflows as a macOS application, a command-line interface, and a Sketch plugin. On Linux or Windows, most of the CLI tools that ImageOptim wraps are available for your platform.

If you're inclined to try emerging encoders, earlier this year Google released Guetzli, an open source algorithm born from the learnings of WebP and Zopfli. Guetzli can produce JPEGs up to 35% smaller than any other available compression method. The only downside: slow processing times (a minute of CPU per megapixel).

When selecting tools, make sure they produce the results you want and fit into your team’s workflow. Ideally, the optimization process is automated so that there are no omissions.

Responsive images

Ten years ago we could serve a single resolution to everyone, but times change quickly, and today's responsive Web is a different story. That's why we must take great care to optimize our visual assets for all kinds of viewports and devices. Fortunately, thanks to the Responsive Images Community Group, we can use the picture element and the srcset attribute (both with 85%+ browser support).

The srcset attribute

srcset works best in resolution-switching scenarios, that is, when we need to serve images according to the user's screen density and size. Based on the srcset and sizes attributes, the browser selects the best image and serves it for the given viewport. This technique can yield significant bandwidth and request savings, especially for mobile users.

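A sketch of resolution switching with srcset and sizes (file names, widths, and the breakpoint are illustrative):

    <img src="photo-800.jpg"
         srcset="photo-400.jpg 400w,
                 photo-800.jpg 800w,
                 photo-1600.jpg 1600w"
         sizes="(min-width: 64em) 50vw, 100vw"
         alt="Description of the photo">
    <!-- The browser picks the smallest file that still looks sharp
         at the current viewport width and pixel density. -->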

The picture element

The picture element, together with media attributes, was designed to make art direction easy. By providing different images for different conditions (tested with media queries), we can always keep the most important part of the image in focus, regardless of resolution.

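An art direction sketch with the picture element (the breakpoint and file names are illustrative):

    <picture>
      <!-- A tight crop keeps the subject legible on narrow screens... -->
      <source media="(max-width: 40em)" srcset="hero-crop.jpg">
      <!-- ...while wider screens get the full scene. -->
      <img src="hero-full.jpg" alt="Description of the scene">
    </picture>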

Be sure to read Jason Grigsby's Responsive Images 101 guide for a thorough explanation of both approaches.

Deliver through image CDNs

The final step of image optimization is delivery. All resources benefit from content delivery networks, but there are also tools built specifically for images, such as Cloudinary and imgix. The benefits of these services go far beyond reduced server traffic; they also significantly lower response latency.

Image CDNs take on the complexity that image-heavy websites face, including responsive serving and image optimization. Offerings differ (and so do prices), but most solutions resize and crop, and determine which format is best for the user depending on device and browser. Even more, they can compress, detect pixel density, watermark, recognize faces, and allow post-processing. With these powerful features and the ability to append parameters to URLs, user-centric image serving becomes easy.
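For instance, with an imgix-style URL, resizing and format negotiation come down to query parameters (the domain is a placeholder; w and auto are documented imgix parameters for width and automatic format/compression):

    <!-- Resize to 800px wide and let the CDN choose the best
         format and compression for the requesting browser. -->
    <img src="https://example.imgix.net/photo.jpg?w=800&auto=format,compress"
         alt="Description of the photo">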

Image performance checklist

  1. Choose the right image format
  2. Use vector graphics whenever possible
  3. Reduce quality if the difference isn't noticeable
  4. Try new image formats
  5. Optimize with tools and algorithms
  6. Learn srcset and picture
  7. Use image CDNs

Web font optimization

Custom fonts are a very powerful design tool. But with great power comes great responsibility: with 68% of all websites using web fonts, this resource type is one of the biggest performance offenders (easily averaging around 100KB, depending on the number of variants and fonts).

Even if weight isn't the biggest problem, the flash of invisible text (FOIT) is. FOIT occurs while a web font is loading, or when it fails to load, leaving the page blank and the content inaccessible. It's worth first checking carefully whether we need web fonts at all. If we do, there are strategies that help mitigate their negative performance impact.

Choose the right format

There are four web font formats: EOT, TTF, WOFF, and the more recent WOFF2. TTF and WOFF are widely used, with over 90% browser support. Depending on the support you need, it is safest to serve WOFF2 and fall back to WOFF for older browsers. WOFF2's advantages are a set of custom preprocessing and compression algorithms (such as Brotli), which deliver roughly 30% smaller files and faster parsing.

When defining the sources of a web font in @font-face, use the format() hint to specify which format should be used.
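A sketch of such a rule, serving WOFF2 first with a WOFF fallback (the font name and paths are placeholders):

    <style>
      @font-face {
        font-family: "Brand Serif";
        /* The browser downloads the first source whose format it supports. */
        src: url("/fonts/brand.woff2") format("woff2"),
             url("/fonts/brand.woff") format("woff");
        font-weight: 400;
        font-style: normal;
      }
    </style>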

If you serve fonts through Google Fonts or Typekit, both have implemented policies to optimize performance. Typekit now serves all kits asynchronously, preventing FOIT, and allows its JavaScript kit code to be cached for 10 days instead of the default 10 minutes. Google Fonts automatically serves the smallest file based on the user's device.

Review your font selection

Whether self-hosted or not, the number of fonts, weights, and styles can significantly affect your performance budget.

Ideally, all we need is a regular and a bold variant. If you're unsure how to pare down your font selection, refer to Lara Hogan's Weighing Aesthetics and Performance.

Use unicode-range subsetting

Unicode-range subsetting allows a large font to be split into smaller sets. It is a relatively advanced strategy that can yield significant savings, especially with Asian languages (did you know that an average Chinese font contains 20,000 glyphs?). The first step is to limit the font to the necessary language sets, such as Latin, Greek, or Cyrillic. If a web font is used only for a logotype, you can use the unicode-range descriptor to select just the specific characters needed.
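A sketch limiting a font to the Basic Latin range, so the browser only downloads it when matching characters actually appear on the page (the font name and path are placeholders):

    <style>
      @font-face {
        font-family: "Brand Serif";
        src: url("/fonts/brand-latin.woff2") format("woff2");
        /* Fetched only if the page contains characters in this range. */
        unicode-range: U+0000-00FF;
      }
    </style>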

Filament Group has an open source command-line tool called glyphhanger that generates the list of necessary glyphs based on files or URLs. Alternatively, the web-based Font Squirrel Webfont Generator offers advanced subsetting and optimization options. If you use Google Fonts or Typekit, selecting language subsets is easier still, as it's built into the font picker interface.

Set a font loading strategy

Fonts are render-blocking: because the browser has to build the DOM and CSSOM first, web fonts aren't downloaded until they're used in a CSS selector that matches an existing node. This behavior significantly delays text rendering, often resulting in the aforementioned flash of invisible text (FOIT). FOIT is even more noticeable on slow networks and mobile devices.

Put a font loading strategy in place so users aren't locked out of your content. Often, opting for the flash of unstyled text (FOUT) is the simplest and most effective solution.

font-display is a new CSS property that offers a solution with no JavaScript dependency. Unfortunately, it is only partially supported (Chrome and Opera) and is currently in development in Firefox and WebKit. Nevertheless, it can and should be used in combination with other font loading mechanisms.

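A sketch of font-display in practice: swap renders text in a fallback font immediately and swaps the web font in once it loads, trading FOIT for FOUT (the font name and path are placeholders):

    <style>
      @font-face {
        font-family: "Brand Serif";
        src: url("/fonts/brand.woff2") format("woff2");
        /* Show fallback text right away instead of hiding it. */
        font-display: swap;
      }
    </style>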

Fortunately, Typekit's Web Font Loader and Bram Stein's Font Face Observer can help manage font loading behavior. In addition, web font performance expert Zach Leatherman has published a comprehensive guide to font loading strategies that will help you pick the right approach for your project.

Font performance checklist

  1. Choose the right format
  2. Review your font selection
  3. Use unicode-range subsetting
  4. Set a font loading strategy

JavaScript optimization

At 446KB on average, JavaScript is currently the second-largest resource type by size (images being the first).

What we may not realize is that our beloved JavaScript hides even bigger performance bottlenecks.

Monitor JavaScript costs

Optimizing delivery is only the first step in addressing web bloat. Once downloaded, JavaScript must be parsed, compiled, and run by the browser. A quick look at some popular websites shows that gzipped JavaScript is at least three times larger after decompression. In reality, we're shipping a ton of code.



[Parse times for 1MB of JavaScript across devices. Image from Addy Osmani's JavaScript Start-up Performance article.]

Analyzing parse and compile times is critical to understanding whether your application is ready for interaction. These times vary with the hardware capabilities of the user's device; parsing and compiling can easily take 2-5 times longer on low-end phones. Addy's study found that an app could take 16 seconds to become interactive on an average phone versus 8 seconds on desktop. Analyzing these metrics is crucial, and fortunately you can do so with Chrome DevTools.



[Viewing parse and compile times in Chrome Developer Tools]

Be sure to read Addy Osmani's detailed JavaScript Start-up Performance article.

Get rid of unnecessary dependencies

The way modern package managers work makes it easy to lose track of the number and size of your dependencies. webpack-bundle-analyzer and Bundle Buddy are great visualization tools that help identify code duplication, the biggest offenders, and outdated or unnecessary dependencies.

[webpack-bundle-analyzer in practice]

With the Import Cost extension for VS Code and Atom, we can make the cost of importing dependencies visible as we write code.

[The Import Cost extension in VS Code]

Implement code splitting

Whenever possible, we should serve only the resources necessary for the experience at hand. Sending users a complete bundle.js, including interaction code they may never reach (imagine downloading the JavaScript for the entire application just to view the landing page), is far from ideal. Similarly, we shouldn't universally ship code that targets a specific browser or user agent.

webpack, one of the most popular module bundlers, has built-in support for code splitting. The simplest split is per page (home.js for the landing page, contact.js for the contact page, and so on), and webpack also offers advanced strategies, such as dynamic imports and lazy loading, that are worth a look.
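A minimal sketch of lazy loading through a dynamic import(); webpack emits the imported module as a separate chunk that is fetched only when needed (the module path and element ID are illustrative):

    <script type="module">
      // The contact form lives in its own chunk and is downloaded
      // only when the user actually navigates to it.
      document.querySelector('#contact-link').addEventListener('click', async () => {
        const { renderContactForm } = await import('./contact.js');
        renderContactForm();
      });
    </script>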

Consider your framework choice

JavaScript front-end frameworks change fast. According to the State of JavaScript 2016 survey, React is the most popular choice. A closer look at your architecture may reveal that a more lightweight alternative would do, such as Preact (note that Preact is not a complete reimplementation of React, but a high-performance, lighter virtual DOM library). Similarly, we can swap larger libraries for smaller equivalents: moment.js for date-fns (or, in certain cases, simply remove unused locales from moment.js).

Before starting a new project, determine what functionality you actually need and choose the most performant framework for your needs and goals. Sometimes that may mean writing more vanilla JavaScript.

JavaScript performance checklist

  1. Monitor JavaScript costs
  2. Get rid of unnecessary dependencies
  3. Implement code splitting
  4. Consider your framework choice

Tracking performance going forward

We've discussed strategies that will, in most cases, make a positive difference to the user experience of the products we build. Performance can be a tricky matter, so it's necessary to track the results of our tweaks over time.

User-centric performance metrics

Great performance metrics aim to portray the user experience as closely as possible. The onLoad, onContentLoaded, or SpeedIndex measures of the past say very little about how quickly a user can interact with a page. Focusing only on transferred resources makes perceived performance hard to quantify. Fortunately, there are timings that fully describe the visibility and interactivity of content.

These metrics are First Paint, First Meaningful Paint, Visually Complete, and Time to Interactive.

  • First Paint: the browser goes from a blank screen to the first visual change.
  • First Meaningful Paint: text, images, and primary content are visible.
  • Visually Complete: everything in the viewport is visible.
  • Time to Interactive: everything in the viewport is visible and can be interacted with (the JavaScript main thread is idle).

These timings correspond directly to what users actually experience, and are therefore the ones worth tracking most closely. If possible, record them all; otherwise pick one or two to monitor performance well. Other metrics deserve attention too, especially the number of bytes we ship (optimized and decompressed).
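In supporting browsers, paint timings can be recorded in the field with the standard PerformanceObserver API. A minimal sketch (where you send the numbers afterwards is up to you):

    <script>
      // Log paint timings as the browser reports them. Register the
      // observer as early as possible so no entries are missed.
      const observer = new PerformanceObserver((list) => {
        for (const entry of list.getEntries()) {
          // entry.name is 'first-paint' or 'first-contentful-paint'.
          console.log(entry.name, Math.round(entry.startTime) + 'ms');
        }
      });
      observer.observe({ entryTypes: ['paint'] });
    </script>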

Set a performance budget

All of these reported numbers can quickly become confusing and hard to grasp. Without actionable goals and objectives, it's easy to lose sight of our original purpose. A few years ago, Tim Kadlec wrote about the concept of a performance budget.

Unfortunately, there is no magic formula. A performance budget often boils down to competitive analysis and product goals, which differ for every business.

When setting a budget, aim for a noticeable difference, usually at least a 20% improvement. Practice and iterate on your budget, using Lara Hogan's approach to new designs with a performance budget as a reference.

Try the Performance Budget Calculator or the Browser Calories Chrome extension to help build your budget.

Continuous monitoring

Monitoring performance shouldn't be a manual task. There are many powerful tools that provide comprehensive reporting.

Google Lighthouse is an open source project that audits performance, accessibility, progressive web apps, and more. You can use Lighthouse from the command line or directly in Chrome Developer Tools.



[Running a one-off performance audit with Lighthouse]

For continuous tracking, choose Calibre: it offers performance budgets, device emulation, distributed monitoring, and many other capabilities, without requiring us to build our own performance suite.



[A Calibre report]

Whatever you’re tracking, make sure the data is transparently accessible to the entire team or organization.

Performance is a shared responsibility that goes far beyond the development team; we are all accountable for the user experience we create, regardless of role or seniority.

It's important to advocate for speed and to establish collaborative processes that surface potential bottlenecks early, at the product decision and design stages.

Build performance awareness and empathy

Caring about performance isn't just a business goal (though if you need to make the case with numbers, PWA statistics can help). It's about basic empathy and putting the user experience first.

As technologists, it's our responsibility not to waste users' time and attention on pages stuck loading, time that could be happily spent elsewhere. Our goal is to build tools that are conscious of people's time and attention.

Promoting performance awareness should be everyone's goal. Let's build a better, more meaningful future for everyone, with empathy and performance in mind.