• Parallel Streaming of Progressive Images
  • Andrew Galloni, Kornel Lesiński
  • The Nuggets Translation Project
  • Permanent link to this article: github.com/xitu/gold-m…
  • Translator: twang1727
  • Proofreader: Suhanyujie

Progressive image rendering and HTTP/2 multiplexing have been around for some time, but now we’re combining them in a new way to make them more powerful. With Cloudflare’s progressive streaming, images can load in half the time, and browsers can start rendering pages faster.

  • Video: crates.rs/cf-test/com…

A server on an HTTP/1.1 connection has no control over the order in which resources are sent to the client: each response is delivered as an indivisible whole, in exactly the order the browser requested it. HTTP/2 improves on this with multiplexing and prioritization, which let the server decide which data to send and when. We leverage these new HTTP/2 capabilities to improve the perceived loading speed of progressive images by sending the most important fragments of image data first.

This feature is compatible with all major browsers and doesn’t require any changes to the page markup, making it easy to use. Sign up for the Beta to make this feature available on your site!

What is progressive image rendering?

Normal images are loaded strictly from top to bottom. If the browser receives only half of the image file, it will only display the top half of the image. The content of progressive images is not arranged from top to bottom, but from lowest to highest level of detail. Even if the browser receives only a portion of the data, it can display the entire image, albeit with a loss of sharpness. As more data is received, the picture becomes clearer.

This is especially useful for JPEG, because only 10-15% of the data is enough to show a preview of the image, and once about 50% of the data has loaded, the image looks almost as sharp as when the whole file has arrived. Progressive JPEGs contain the same data as regular JPEGs, just ordered in a more useful way, so progressive rendering doesn't increase the file size. This is possible because JPEG doesn't store the image as pixels; it represents the image as frequency coefficients, like a set of predefined patterns that can be blended in any order to recreate the original image. JPEG internals are quite interesting, and you can learn more about them from my performance.now() conference talk.
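
Producing progressive JPEGs doesn't require anything special; most image tools can reorder the scans for you. As a rough sketch (not part of Cloudflare's pipeline), this is how you could re-encode an image as a progressive JPEG in Node.js with the sharp library; the file names and quality setting are placeholders:

// Re-encode an image as a progressive JPEG (npm install sharp).
const sharp = require("sharp");

sharp("input.jpg")
  .jpeg({ quality: 82, progressive: true }) // progressive scan order, same pixels
  .toFile("progressive.jpg")
  .then(info => console.log(`wrote ${info.size} bytes`))
  .catch(err => console.error(err));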

The end result is an image that looks nearly complete in half the loading time, at no extra cost. The page looks finished and becomes usable sooner. The rest of the image data arrives shortly afterwards and upgrades the image to full sharpness before the viewer notices any flaws.

HTTP/2 progressive streaming

But there's a problem. A site will have more than one image (sometimes hundreds of them). Progressive rendering doesn't help much when the server transfers images one after another in the naive way, because overall the images still load sequentially:

Receiving all the data from half of the images (but not any data from the remaining half) looks worse than receiving half of the data from all the images.

And there's another problem: when the browser doesn't know an image's dimensions, it lays out the page with placeholders and then redoes the layout as each image loads. This makes the page jump around while loading, which is jarring, distracting, and annoying for the user.

Our new progressive streaming feature greatly improves this situation: we can send all images in parallel at once. This lets the browser learn the dimensions of every image as early as possible and preview all of them without waiting for a large amount of data, and large images no longer hold up the loading of styles, scripts, and other more important resources.

The idea of streaming progressive images in parallel is as old as HTTP/2 itself, but it requires special support in the low-level parts of the web server, so it hasn't seen wide use so far.

When we improved HTTP/2 prioritization, we realized it could be used to improve this feature as well. Image files as a whole are neither uniformly high priority nor uniformly low priority: the priority varies within each file, and adjusting it dynamically gives us exactly the behavior we need:

  • Image headers with image sizes are a high priority because browsers need to know the size early to lay out the page. The image header is small, and it doesn’t hurt to send it before other data.

  • The minimum amount of image data needed to show a preview is medium priority (we want to fill in the placeholders for not-yet-loaded images as soon as possible, while still leaving enough bandwidth for scripts, fonts, and other resources).

  • The rest of the image data is low priority. The browser can receive it last to refine image quality once there's no longer any rush, because the page is already fully usable.

  • To know exactly how much data to send at each stage you need to understand the structure of the image file, but having the web server parse image responses and hard-code format-specific behavior at the protocol level would be an awkward layering. By treating the problem as a dynamic change in priority, we can cleanly separate the low-level network code from knowledge of image formats. We can use a Worker or offline image-processing tools to analyze the images and instruct our server to change HTTP/2 priorities accordingly (see the sketch right after this list).
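
To make that concrete, here is a minimal sketch (an illustration, not Cloudflare's actual code) of the kind of analysis a Worker or an offline tool could do: walk a JPEG's marker segments to find the byte offset of the first scan (the SOS marker, 0xFFDA). Everything before that offset is header data, including the image dimensions, so it's a natural boundary for the first priority change.

// Find the byte offset of the first scan (SOS, 0xFFDA) in a JPEG.
// `bytes` is a Uint8Array holding the start of the file; returns null if not found.
function findFirstScanOffset(bytes) {
  let i = 2; // skip the SOI marker (0xFFD8) at the start of the file
  while (i + 3 < bytes.length) {
    if (bytes[i] !== 0xff) return null; // not a marker; this simple sketch gives up
    const marker = bytes[i + 1];
    if (marker === 0xda) return i; // SOS: the entropy-coded image data starts here
    // header segments (APPn, DQT, DHT, SOF, ...) carry a 2-byte big-endian length
    const length = (bytes[i + 2] << 8) | bytes[i + 3];
    i += 2 + length;
  }
  return null;
}

An offset found this way could then be used as the first boundary in the cf-priority-change header shown later in this post.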

The advantage of parallel streaming of images is that it adds no overhead. We still send the same data, and the same amount of data; we just send it in a smarter order. The technique builds on existing web standards, so it is compatible with all browsers.

Waterfall charts

The following waterfall charts from WebPageTest compare a regular HTTP/2 response with progressive streaming. In both cases the files are the same, the amount of data transferred is the same, and the total page load time is the same (within measurement error). In the charts, blue segments show data being transferred and green segments show requests that are waiting.

The first chart shows typical server behavior, with images loaded sequentially. The chart looks neat, but the actual loading experience isn't great: the last image doesn't start loading until near the end.

The second chart shows parallel image loading. The vertical blue lines mark the early stage in which the image headers are sent and the later stages of progressive rendering. You can see that the useful portion of every image's data arrives much sooner. You may also notice that one image was sent all at once rather than in stages like the others: at the very start of a TCP/IP connection we don't yet know its true speed, so we have to sacrifice some prioritization to get the connection up to full speed as quickly as possible.

Metrics compared with other techniques

There are other techniques for showing image previews sooner, such as low-quality image placeholders (LQIP), but they have several drawbacks. They add extra data for the placeholders, and they often interfere with the browser's preload scanner and delay loading of the full images, since swapping the preview for the full image depends on JavaScript.

  • Our approach does not add requests, nor does it add more data. The total page load time is not delayed.
  • Our approach does not require JavaScript. It takes advantage of the browser’s native capabilities.
  • Our approach does not require a change in page markup, so site-wide deployment is safe and simple.

The user-experience improvement shows up in performance metrics such as SpeedIndex and time to visually complete. Note that a normal image download progresses roughly linearly, whereas progressive streaming jumps quickly to near-completion:

Make the most of progressive rendering

Avoid letting JavaScript spoil the effect. Scripts that keep images hidden until their load event fires (for example, to fade them in) defeat progressive rendering. Progressive rendering works best with plain <img> elements.
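
For example, a pattern like the following (a hypothetical snippet, not taken from any particular site) keeps every image invisible until it has fully loaded, so the progressive previews are never seen:

// Anti-pattern: hiding images until they are fully loaded defeats progressive rendering.
document.querySelectorAll("img").forEach(img => {
  img.style.opacity = "0";
  img.addEventListener("load", () => {
    img.style.opacity = "1"; // appears only after the entire file has arrived
  });
});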

Does this only work for JPEG?

Our implementation is format-agnostic, but progressive streaming is only useful for certain file types. For example, there's no point applying it to scripts or stylesheets: these resources are either unusable or fully usable, with nothing in between.

We prioritize sending the image headers (which include the image dimensions) for all formats.

The benefits of progressive rendering are unique to JPEG (supported in all browsers) and JPEG 2000 (supported in Safari). GIF and PNG have interlaced modes, but those modes come at the cost of worse compression. WebP doesn't support progressive rendering at all. This creates a dilemma: WebP is typically 20 to 30 percent smaller than a JPEG of equivalent quality, but a progressive JPEG appears to load 50 percent faster. There are next-generation image formats that support progressive rendering better than JPEG and compress better than WebP, but browsers don't support them yet. In the meantime you can choose between the bandwidth savings of WebP and the faster perceived loading of progressive JPEG by changing the Polish setting in your Cloudflare dashboard.

Custom headers for testing

We also support a custom HTTP header for experimenting with optimized streaming of other resources on your site. For example, you could tell our server to send the first frame of an animated GIF first and the rest later. Or you could prioritize resources referenced in an HTML document's <head> ahead of the rest of the document.

The custom header can only be set from a Worker. The syntax is a comma-separated list of byte offsets in the file, each with priority and concurrency options. The priority and concurrency values have the same meaning as in the whole-file cf-priority header mentioned in our previous blog post.

cf-priority-change: <offset in bytes>:<priority>/<concurrency>, ...

For example, this is what we use for progressive JPEGs (a JavaScript snippet from a Worker):

let headers = new Headers(response.headers);
headers.set("cf-priority", "30/0");
headers.set("cf-priority-change", "512:20/1, 15000:10/n");
return new Response(response.body, {headers});

This tells the server to send the first 512 bytes with priority 30, then switch to priority 20 with some concurrency (/1), and finally, once 15000 bytes have been sent, switch to low priority and high concurrency (/n) for the rest of the file.
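
Put together, a complete Worker could look roughly like the sketch below. It is an illustrative example built around the same cf-priority headers described above; the content-type check and the handler name are our own additions:

addEventListener("fetch", event => {
  event.respondWith(handle(event.request));
});

async function handle(request) {
  const response = await fetch(request);
  const type = response.headers.get("content-type") || "";
  // Only reprioritize JPEG responses (ideally progressive ones); pass everything else through.
  if (!type.includes("image/jpeg")) return response;

  let headers = new Headers(response.headers);
  headers.set("cf-priority", "30/0");
  headers.set("cf-priority-change", "512:20/1, 15000:10/n");
  return new Response(response.body, {
    status: response.status,
    statusText: response.statusText,
    headers,
  });
}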

We try to split HTTP/2 frames at the offsets specified in the header, so that priority changes take effect as soon as possible. However, prioritization doesn't guarantee that data from different streams will be interleaved exactly as instructed, because the server only applies priorities when several streams need to send data at the same time. If some responses arrive earlier from the upstream server or the cache, the server may send them immediately rather than wait for other responses.

Give it a try!

You can use our Polish feature to convert your images to progressive JPEGs. Sign up for the beta to enable parallel streaming.

If you find any mistakes in this translation or other areas that could be improved, you are welcome to revise the translation and submit a PR to the Nuggets Translation Project, for which you can earn reward points. The permanent link at the beginning of this article is the MarkDown link to this article on GitHub.


The Nuggets Translation Project is a community that translates high-quality technical articles from across the Internet, sourced from the English articles shared on Nuggets (Juejin). The content covers Android, iOS, front-end, back-end, blockchain, product, design, artificial intelligence, and other fields. If you would like to see more high-quality translations, please follow the Nuggets Translation Project, its official Weibo, and its Zhihu column.