Essential Image Optimization Guide

By Addy Osmani

Translator: Bobby King Kong

The translator's skills are limited; please point out any mistakes.


Preface

Automatic compression of images

Image optimization should be automated. It's easy to forget, best practices change, and content that never goes through the build pipeline can easily slip by unoptimized. To automate, you can build on Imagemin or libvips, but these are not the only options; plenty of alternatives exist.

Most CDNs (such as Akamai) and third-party solutions (Cloudinary, imgix, Fastly's Image Optimizer, Instart Logic's SmartVision, the ImageOptim API) provide comprehensive, professional automatic image optimization.

It doesn't hurt to talk about money: you'll spend more on reading blog posts and reconfiguring your project than on the monthly fee for a third-party service (Cloudinary has a free plan). If you don't want to outsource the work because of cost or latency, the open-source projects mentioned above are a solid choice, and projects such as Imageflow and Thumbor offer self-hosted alternatives.

Efficient compression of images

You should at least use ImageOptim. It can significantly reduce image size while maintaining visual quality. Windows and Linux alternatives are also available.

More specifically: run JPEGs through MozJPEG (q=80 or lower for web content) and consider progressive JPEG support; run PNGs through pngquant and SVGs through SVGO. Explicitly strip metadata (--strip for pngquant) to reduce file size further. Instead of bulky animated GIFs, serve H.264 video (or WebM, which Chrome, Firefox, and Opera support); if you can't, at least run them through Giflossy. If you can spare the extra CPU cycles and tolerate very slow encoding times, try Guetzli in scenarios where above-average quality is required.

Some browsers advertise their support for image formats through the Accept request header, which can be used to serve different formats to different browsers: for example, lossy WebP images to Blink-based browsers and JPEG/PNG to the rest.
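As an illustration (not code from the original guide), that negotiation boils down to a small helper; the function name and the JPEG fallback are assumptions for the sketch:

```javascript
// Minimal sketch: pick an image format from the Accept request header.
// 'image/webp' in Accept signals WebP support (e.g. Blink-based browsers);
// everything else falls back to JPEG. The function name is illustrative.
function pickImageFormat(acceptHeader) {
  const accept = (acceptHeader || '').toLowerCase();
  if (accept.includes('image/webp')) return 'webp';
  return 'jpeg'; // safe fallback for browsers without WebP support
}
```

On the server, you would branch on the returned value to serve, say, photo.webp or photo.jpg for the same logical image.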

You can optimize further. Many tools exist for generating srcset breakpoints, and Client Hints enable automatic resource selection in Blink-based browsers. With Save-Data, you can send less data to users who have opted in to "data savings" mode.

Note: Client Hints and Save-Data can be combined with a Service Worker to implement image optimization processing.

In fact, a Service Worker is a client-side proxy running in the browser. It intercepts all outgoing requests and lets you examine, rewrite, cache, and even synthesize responses. Images are no exception: with Client Hints enabled, a Service Worker can recognize image requests, check the supplied client hints, and apply its own processing logic.
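A rough sketch of the idea, assuming the server publishes a .webp variant next to each JPEG/PNG (the helper name and the URL convention are assumptions, not from the original):

```javascript
// Rewrites an image URL to a hypothetical .webp variant; a Service Worker
// 'fetch' handler could call this for requests whose Accept header
// includes 'image/webp'. URLs without a matching extension pass through.
function toWebPUrl(url) {
  return url.replace(/\.(jpe?g|png)(?=$|\?)/i, '.webp');
}

// Inside a Service Worker (sw.js), roughly:
// self.addEventListener('fetch', (event) => {
//   const req = event.request;
//   const accept = req.headers.get('accept') || '';
//   if (req.destination === 'image' && accept.includes('image/webp')) {
//     event.respondWith(fetch(toWebPUrl(req.url)));
//   }
// });
```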

The smaller the image file size, the better the web experience — especially on mobile devices. In this article, we will explore ways to reduce image size with modern compression techniques while minimizing the impact on image quality.


Main Text

Introduction

Images are still the number one performance killer

Image files are usually large, so they consume a lot of bandwidth. According to HTTP Archive data, 60% of the data transferred to fetch a web page is JPEG, PNG, and GIF images. As of July 2017, images accounted for 1.7MB of the average 3.0MB of web content.

According to Tammy Everts, adding images to a page or making existing images bigger can improve conversion rates. Images are an essential part of web pages, so it is important to develop an effective compression strategy.

Conversion rate refers to the percentage of site visitors who convert to active customers

Reducing file size is just one aspect of image optimization; the right approach ultimately depends on the visual fidelity actually required.

Image optimization: Select the right format, carefully compress and prioritize key images over those that can be loaded lazily.

Common image optimizations include compression, responding gracefully to screen size with <picture> / <img srcset>, and resizing images to reduce decoding costs.
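For reference, a minimal srcset/sizes markup fragment (the file names and widths here are illustrative, not from the original):

```html
<!-- The browser picks the best candidate for the viewport and device pixel ratio. -->
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 800px"
     alt="A descriptive alternative text">
```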

According to HTTP Archive data, at the 95th percentile, 30KB could be saved per image.

So, we have plenty of room to optimize the image.

ImageOptim is free and reduces image size by stripping unnecessary EXIF metadata with modern compression techniques.

So, if you're a designer, install the ImageOptim plugin for Sketch, which optimizes image size on export and saves a lot of time.

How do I know if my image needs to be optimized?

Several performance testing tools can highlight opportunities for image optimization:

WebPageTest's report lists images that can be compressed more effectively, along with an estimate of the savings.

Lighthouse is a cool tool that provides a variety of best practices for performance optimizations, including optimizations for images: suggestions for which images can be further compressed or which off-screen images can be lazy-loaded.

Starting with Chrome 60, Lighthouse is available in the Audits panel of Chrome DevTools:

You may also be familiar with other performance auditing tools, such as PageSpeed Insights or Cloudinary's Website Speed Test.

How do I choose an image format?

As Ilya Grigorik points out in his guide to image optimization, the right "image format" is a combination of visual and functional requirements. Are you working with vector images or raster images?

Raster images encode the value of each pixel within a rectangular grid. They are not resolution- or zoom-independent: scale a raster image up and it appears jagged and blurry. Therefore, you may need to save multiple versions at different resolutions to give your users the best experience. For "photo-realistic" scenes, use a raster format such as GIF, PNG, JPEG, or one of the newer formats, JPEG XR and WebP.

Vector images use lines, points, and polygons to represent an image. They are best suited to images made of simple geometric shapes (such as logos, text, icons, etc.) and stay sharp at any resolution and zoom level.

The humble JPEG

JPEG is probably the most widely used image format in the world: 45% of the images on sites crawled by the aforementioned HTTP Archive are JPEGs. Your phone, your DSLR, your camera: they all support the format. It is an old format, first proposed in 1992, and it has been refined and improved over time.

JPEG is a standard method of lossy compression that does its best to discard information to save space while maintaining visual fidelity.

JPEG does not support animation or transparency, but it uses 24 bits per pixel (i.e., it can represent up to 2^24, about 16.7 million, colors), so it performs well on visual fidelity and is often used for banners and carousels.

Choose a picture quality you can live with

The JPEG format is best for images with many regions of color. Be aware that excessive compression leads to artifacts such as halos and blockiness.

Therefore, when compressing, set your plan according to the actual functional and visual requirements. Choose a higher-quality setting when image quality matters more than performance and bandwidth.

JPEG compression mode

JPEG has several compression modes; the three most popular are baseline (sequential, referred to as "linear" below), progressive (PJPEG), and lossless.

The difference between progressive and linear

Linear JPEGs (the default in most image editing and optimization tools) are encoded and decoded in a relatively simple way: top to bottom. So when the image loads slowly or the connection is unstable, the user sees it fill in gradually from top to bottom.

Progressive JPEG splits an image into multiple “scans,” with the first “scan” loading a blurry, low-quality image and subsequent “scans” improving the image. The process you can imagine is to gradually improve the image, adding more detail, and finally producing a full-quality image.

Lossless JPEG optimization is achieved by removing EXIF data added by digital cameras or editors, optimizing the image's Huffman tables, or rescanning the image. Tools such as jpegtran achieve lossless compression by rearranging the compressed data without degrading image quality. jpegrescan, jpegoptim, and MozJPEG also support lossless JPEG compression.
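To make the "remove EXIF data" step concrete, here is a toy sketch (not how jpegtran actually works internally) that walks a JPEG's marker segments and drops the APP1 segment, where EXIF metadata lives:

```javascript
// Toy lossless metadata stripper: copy every JPEG marker segment except
// APP1 (0xFFE1, which carries EXIF). Real tools (jpegtran, jpegoptim)
// are far more robust; this only illustrates the principle.
function stripExif(buf) {
  const out = [buf.subarray(0, 2)]; // keep the SOI marker (FF D8)
  let i = 2;
  while (i + 4 <= buf.length && buf[i] === 0xff) {
    const marker = buf[i + 1];
    if (marker === 0xda) {          // SOS: entropy-coded data follows;
      out.push(buf.subarray(i));    // copy the rest verbatim and stop
      break;
    }
    const len = buf.readUInt16BE(i + 2);    // length includes its own 2 bytes
    const segment = buf.subarray(i, i + 2 + len);
    if (marker !== 0xe1) out.push(segment); // drop APP1, keep everything else
    i += 2 + len;
  }
  return Buffer.concat(out);
}
```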

Advantages of progressive JPEG

The low-resolution "preview" that PJPEG provides while loading improves perceived performance: the page feels a little faster. On slow 3G connections the user can see a rough version of the image and decide what to do next, which is a better experience than watching it load from top to bottom.

For images over 10KB, PJPEG is 2-10% smaller than linear JPEG. Its higher compression ratio comes from the ability to give each scan its own dedicated, optional Huffman table. Modern JPEG encoders (e.g., libjpeg-turbo, MozJPEG) take advantage of PJPEG's flexibility to pack data better.

Note: Why does PJPEG compress better? Linear JPEG encodes one block at a time. With PJPEG, similar DCT coefficients that span multiple blocks can be encoded together for better compression.

Who uses PJPEG in production?

1. Twitter.com ships Progressive JPEGs

2. Facebook ships Progressive JPEGs for their iOS app

3. Yelp switched to Progressive JPEGs

Progressive JPEG is used by many image-dense sites, such as Pinterest.

Disadvantages of progressive JPEG

Progressive JPEG decoding is slower than linear JPEG: sometimes up to three times slower. That may not matter much on a PC with a powerful CPU, but it can be a problem on mobile devices with limited power. Displaying incomplete layers requires multiple decode passes, and those passes consume CPU cycles.

Progressive JPEGs are not always smaller. For very small images, such as thumbnails, a progressive JPEG can be larger than its linear counterpart, and for such small thumbnails progressive rendering provides little real value anyway.

Performance optimization is about balancing trade-offs. Using progressive JPEG is itself a balance between image size, CPU, network conditions, and other requirements.

Note: PJPEG (and all JPEGs) can sometimes be hardware-decoded on mobile devices. That doesn't improve the RAM impact, but it relieves some CPU pressure. Not all Android devices support hardware acceleration, but high-end devices do, and so do all iOS devices.

How to create progressive JPEGs

ImageMagick, libjpeg, jpegtran, jpeg-recompress, and imagemin all support exporting progressive JPEGs.

Easy to incorporate into our automated build:

const gulp = require('gulp')
const imagemin = require('gulp-imagemin')

gulp.task('images', function () {
  return gulp.src('images/*.jpg')
    .pipe(imagemin({ progressive: true }))
    .pipe(gulp.dest('dist'))
})

You can also generate progressive JPEGs using image editing tools.

Chroma (or color) sampling

The human eye is more forgiving of lost color detail than of lost brightness (luminance) detail. Chroma subsampling exploits this: reducing the resolution of the color channels can cut file size effectively, in some cases by 15-17%, with little effect on perceived quality. It can be applied to JPEG and also reduces image memory usage.

Since contrast forms the shapes we see in an image, luminance, which defines contrast, matters most. An old or filtered black-and-white photo may contain no color at all, yet it can be as detailed as its color counterpart, because luminance carries the detail. Chroma (color), it turns out, has comparatively little influence on visual perception.

The figure above shows common subsampling schemes: 4:4:4, 4:2:2, and 4:2:0.

1. 4:4:4 has no chroma compression: color and luminance are transmitted in full.
2. 4:2:2 samples chroma at half resolution horizontally and full resolution vertically.
3. 4:2:0 samples half of the pixels in the first row and ignores the second row.

By reducing the number of pixels in the chroma components, you can significantly reduce their size and, ultimately, the byte size of the image.
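The arithmetic behind that saving can be sketched directly (the per-scheme chroma factors follow the standard J:a:b interpretation; the function itself is an illustration, not from the original):

```javascript
// Counts stored samples for a width x height image: one full-resolution
// luma (Y) plane plus two chroma (Cb, Cr) planes downsampled per scheme.
function sampleCount(width, height, scheme) {
  const chromaFactor = { '4:4:4': 1, '4:2:2': 1 / 2, '4:2:0': 1 / 4 }[scheme];
  const luma = width * height;
  return luma + 2 * luma * chromaFactor;
}
```

For an 8x8 block, 4:4:4 stores 192 samples while 4:2:0 stores 96: half the raw data before entropy coding even begins.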

As the comparison shows, at quality 80 the chroma subsampling strategy reduces file size with no visible degradation.

Chroma subsampling is not suitable for every scenario. In medical imaging, for example, chroma can be as important as luminance. Images containing text are also affected (see below), since poor subsampling of text reduces its legibility. Sharp edges are generally harder to compress with JPEG, which was designed to handle photographic scenes with softer transitions.

The JPEG specification does not mandate an exact scheme for chroma subsampling, so different decoders handle it differently.

For example, Photoshop performs chroma subsampling automatically when you use "Save for Web". With image quality set between 51 and 100, no subsampling is used (4:4:4); below that, 4:2:0 subsampling kicks in. This is one reason you can observe a much larger drop in file size when lowering the quality from 51 to 50.

The status quo

What we often do now is provide different formats for different browsers!

Here's a list of the newer formats:

JPEG 2000 (2000): an improved version of JPEG. Browser support: Safari desktop + iOS.

JPEG XR (2009): an alternative to JPEG and JPEG 2000 supporting HDR and wide color gamuts. Produces smaller files than JPEG, at slightly slower encode/decode speeds. Browser support: Edge + IE.

WebP (2010): a block-prediction-based image format from Google that supports lossy and lossless compression. Offers byte savings relative to JPEG and supports transparency, but lacks chroma subsampling configuration and progressive loading, and decodes more slowly than JPEG. Browser support: Chrome + Opera, with experimental support in Safari and Firefox.

FLIF (2015): claims to outperform all of the formats mentioned above. Browser support: none (an in-browser JS decoder exists).

BPG (2015): unlikely to gain wide traction due to licensing issues. Browser support: none (an in-browser JS decoder exists).

HEIF (2016): Apple announced at WWDC that iOS would explore switching from JPEG to HEIF, citing up to 2x smaller files. Browser support: none at the time of writing; Safari desktop and iOS 11 have since added support.

These different image formats are mentioned because it has been a common optimization approach to provide images in the best format supported by different browsers.

Next, let’s talk about options when you can’t conditionally offer different image formats: optimized JPEG encoders.

Optimized JPEG encoder

Modern JPEG encoders attempt to generate smaller, higher-fidelity JPEG files while maintaining compatibility with existing browsers and image processing applications. They achieve compression gains without introducing new image formats or ecosystem changes. Two such encoders are MozJPEG and Guetzli.

How to choose the right encoder:

1. Use MozJPEG for general web assets.
2. Choose Guetzli if quality is important and encoding time is not.

Other options, if you need more configurability:

  • jpeg-recompress
  • JPEGmini: similar to Guetzli in that it automatically selects the best quality. It is not as technically sophisticated as Guetzli, but it is faster and targets a quality range better suited to the web.
  • The ImageOptim API: unique in its color handling!

About MozJPEG

MozJPEG is Mozilla's own JPEG encoder and claims to reduce JPEG file sizes by 10%. Features such as progressive scan optimization and trellis quantization can be used to create high-DPI images.

const gulp = require('gulp');
const imagemin = require('gulp-imagemin');
const imageminMozjpeg = require('imagemin-mozjpeg');

gulp.task('mozjpeg', () =>
    gulp.src('src/*.jpg')
    .pipe(imagemin([imageminMozjpeg({
        quality: 85
    })]))
    .pipe(gulp.dest('dist')));

As you can see, it still effectively reduces the file size.

SSIM (structural similarity) is a method for measuring the similarity between two images; the SSIM score indicates the quality of one image under the assumption that the image it is compared against is "perfect".
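As a rough illustration of the metric (real SSIM is computed over local windows with Gaussian weighting; this global, single-window variant only shows the formula's shape, using the usual K1=0.01, K2=0.03, L=255 constants):

```javascript
// Simplified, global SSIM over two equal-length grayscale sample arrays.
function ssim(a, b) {
  const n = a.length;
  const mean = (x) => x.reduce((s, v) => s + v, 0) / n;
  const ma = mean(a), mb = mean(b);
  let va = 0, vb = 0, cov = 0; // variances and covariance
  for (let i = 0; i < n; i++) {
    const da = a[i] - ma, db = b[i] - mb;
    va += da * da; vb += db * db; cov += da * db;
  }
  va /= n; vb /= n; cov /= n;
  const C1 = (0.01 * 255) ** 2, C2 = (0.03 * 255) ** 2; // stabilizers
  return ((2 * ma * mb + C1) * (2 * cov + C2)) /
         ((ma * ma + mb * mb + C1) * (va + vb + C2));
}
```

Identical images score 1; heavy distortion drives the score toward 0.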

In my experience, MozJPEG is a good choice for compressing web images with high visual quality at reduced file sizes. For small to medium-sized images, I've found that MozJPEG (quality = 80-85) saves 30-40% of the file size while maintaining an acceptable SSIM. Its encoding is slower than baseline JPEG, but that's no reason to reject it.

About Guetzli

const gulp = require('gulp');
const imagemin = require('gulp-imagemin');
const imageminGuetzli = require('imagemin-guetzli');

gulp.task('guetzli', () =>
    gulp.src('src/*.jpg')
    .pipe(imagemin([
        imageminGuetzli({
            quality: 85
        })
    ]))
    .pipe(gulp.dest('dist')));

Guetzli, too, delivers significant size reductions.

More references

1. About srcset

2. HTML 5.1: using responsive images

3. Client Hints Introduction

4. New idea of mobile terminal image optimization in PWA era

5. 2018 Front-end Performance Checklist

6. Client Hints Demo

7. Responsive images