It was my great honor to join Ele.me as a consultant from February to May this year and take part in its PWA work. This article was originally written in English and published on Medium before Google I/O as "Upgrading Eleme to Progressive Web Apps", where it drew some attention, so I decided to rewrite a Chinese version and share it again. I hope you find it helpful ;)

This article was first published in the July issue of The Programmer, and is simultaneously published on the Ele.me Frontend column on Zhihu and on Hux Blog. Please keep the links when reprinting.

The work of upgrading the Ele.me mobile site to a Progressive Web App had been under way since it was first disclosed on the official vue.js Twitter account, and it came to fruition only recently, as Google I/O 2017 drew to a close. We are honored to launch the world's first PWA built specifically for users in China, and even more honored to partner with Google, UC and Tencent to push forward the Web and browser ecosystem in China.

Multi-page applications, Vue, PWA?

For a PWA that aims to deliver a native-app-level experience, the prevailing approach in the community is to organize the entire Web application as a single-page application (SPA). Twitter Lite, Flipkart Lite, Housing Go and Polymer Shop are the best-known PWA examples in the industry.

However, Ele.me, like many e-commerce sites in China, prefers the benefits of the multi-page application (MPA) model, and over the past year or so it rebuilt its mobile site from an Angular.js-based single-page app into the current multi-page architecture. The advantage the team values most is the isolation and decoupling between pages: each page can be treated as an independent "microservice" that is iterated independently, embedded independently into various third-party portals, and even maintained independently by different teams. The whole site is a collection of services rather than one giant monolith.

At the same time, we still rely on Vue.js as our JavaScript framework. Besides being a serious competitor to React/Angular, Vue's light weight and high performance make it a perfect replacement for the jQuery/Zepto/Kissy + template engine stack popular in traditional multi-page development. The component system, declarative rendering and reactivity that Vue provides improve code organization, code sharing, data-flow control, rendering and every other part of development. Vue is also a progressive framework: if the site's complexity keeps growing, modules such as Vuex or vue-router can be introduced incrementally, on demand. And what if one day we have to switch back to a single-page app? (Who knows…)

In 2017, PWA became the new wave of Web applications. We decided to see how far we could go in upgrading to a PWA on top of our existing "Vue + multi-page" architecture.

Implement the “PRPL” pattern

"PRPL" (pronounced "purple") is a Web application architecture pattern proposed by Google engineers. It aims to dramatically improve the performance and experience of the mobile Web by leveraging the new capabilities of the modern Web platform, and it offers a high-level abstraction for organizing and designing high-performance PWAs. We were not going to rebuild our Web application from scratch, but we could make implementing the "PRPL" pattern our migration goal. "PRPL" is actually an acronym for Push/Preload, Render, Pre-cache and Lazy-load, which we expand on below.

1. PUSH/PRELOAD: push/preload the key resources required for the initial URL route.

Whether it is HTTP/2 Server Push or <link rel="preload">, the key idea is to request, ahead of time, resources hidden deep in the application's dependency graph, saving HTTP round-trips, document parsing and script execution time. For example, in an SPA with route-based code splitting, we can use Server Push or <link rel="preload"> to start fetching the code that the initial URL route depends on before the Webpack manifest, the router and the other entry chunks have finished downloading and running. By the time those resources are actually requested, they may already be downloaded and sitting in the cache, which speeds up getting all of the initial route's dependencies ready.
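As a rough illustration of the preload side (not how our pages are actually written; the chunk path is hypothetical), a route's entry chunk can be hinted to the browser either declaratively in the HTML head or, as sketched below, by injecting the equivalent <link> from an early inline script:

```js
// Minimal sketch: ask the browser to fetch a (hypothetical) route chunk early,
// before the parser reaches the <script> tag that will actually execute it.
const hint = document.createElement('link');
hint.rel = 'preload';
hint.as = 'script';                      // lets the browser prioritize it correctly
hint.href = '/assets/js/home.chunk.js';  // hypothetical hashed entry chunk
document.head.appendChild(hint);
```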

In a multi-page application, each route requests only the resources it needs, and the dependency graph is usually flat. Most scripts on the Ele.me mobile site are referenced through plain <script> elements, so the browser's preload scanner can discover and request them early during document parsing, achieving essentially the same effect as an explicit <link rel="preload">.

We also host all key static resources under a single domain (no domain sharding) to take better advantage of the multiplexing built into HTTP/2. At the same time, we are experimenting with Server Push.

2. RENDER the initial route as soon as possible to make the application interactive

Once all of the initial route's dependencies are in place, we can start rendering it as soon as possible, which improves metrics such as first paint time and time to interactive. A multi-page app does not use JavaScript-driven routing but the browser's traditional HTML navigation, so there is nothing extra to do for this step.

3. PRE-CACHE: use the Service Worker to pre-cache the remaining routes

This is where the Service Worker comes in. A Service Worker is a client-side proxy that sits between the browser and the network. It is best known for intercepting, handling and responding to the HTTP requests that flow through it, letting developers serve resources to a Web application from a cache. But a Service Worker can also issue HTTP requests of its own, in the "background", to pre-fetch and pre-cache the resources we will need in the future.
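As a minimal sketch of those two roles (an illustration, not our production Service Worker; the cache name is made up), a fetch handler can answer requests from the cache first and fall back to the network, keeping a copy of the network response for next time:

```js
// sw.js — illustrative only. Serve from cache when possible; otherwise hit the
// network and keep a copy of the response for future navigations.
self.addEventListener('fetch', (event) => {
  if (event.request.method !== 'GET') return; // let non-GET requests pass through
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) return cached;
      return fetch(event.request).then((response) => {
        const copy = response.clone();
        caches.open('runtime-v1').then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});
```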

We already use Webpack in our build process for .vue compilation, file-name hashing and so on, so we wrote a Webpack plugin that collects the dependencies to be cached into a "pre-cache manifest" and uses that manifest to generate a new Service Worker file on every build. When the new Service Worker is activated, the resources in the manifest are requested and cached, which is very similar to how the sw-precache library works.
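The plugin itself might look roughly like the following (a sketch of the idea, not the actual Ele.me plugin; the option names and filename filter are assumptions). On each build it walks the emitted assets, picks out the ones belonging to the routes we care about, and emits them as a manifest that the generated sw.js can pass to cache.addAll() during its install event:

```js
// precache-manifest-plugin.js — illustrative Webpack (1.x/2.x era) plugin.
function PrecacheManifestPlugin(options) {
  this.routes = (options && options.routes) || []; // e.g. ['home', 'shop'] — hypothetical names
}

PrecacheManifestPlugin.prototype.apply = function (compiler) {
  compiler.plugin('emit', (compilation, callback) => {
    // Collect the hashed file names that belong to the "critical routes".
    const files = Object.keys(compilation.assets).filter((name) =>
      this.routes.some((route) => name.indexOf(route) !== -1)
    );
    const source = 'self.__precacheManifest = ' + JSON.stringify(files, null, 2) + ';';

    // Emit the manifest as an extra asset; the generated sw.js imports it and
    // calls caches.open(...).then(cache => cache.addAll(self.__precacheManifest)).
    compilation.assets['precache-manifest.js'] = {
      source: () => source,
      size: () => source.length,
    };
    callback();
  });
};

module.exports = PrecacheManifestPlugin;
```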

In fact, we only collect dependencies for the routes we mark as "critical routes". You can think of these "critical route" dependencies as the "App Shell" or "installation package" of the whole application: once they are cached, i.e. installed, successfully, the Web application can launch straight from the cache whether the user is online or offline. Less important routes are cached incrementally at run time; the sw-toolbox library we use provides an LRU replacement strategy and a TTL expiration mechanism to make sure the application never exceeds the browser's cache quota.
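For those runtime-cached, non-critical routes, the sw-toolbox configuration looks roughly like this (the URL pattern, cache name and limits are illustrative, not our real numbers):

```js
// In sw.js — illustrative sw-toolbox runtime caching for non-critical routes.
importScripts('sw-toolbox.js');

toolbox.router.get('/restaurant/(.*)', toolbox.networkFirst, {
  cache: {
    name: 'runtime-routes',
    maxEntries: 20,        // LRU eviction once the cache grows past 20 entries
    maxAgeSeconds: 86400,  // TTL: entries expire after one day
  },
});
```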

4. LAZY-LOAD: lazily load and instantiate the remaining routes on demand

Lazily loading and lazily instantiating the remaining routes is somewhat cumbersome in an SPA: you need to implement route-based code splitting and asynchronous loading. Fortunately, this is something a multi-page application does not have to worry about, because its routes are naturally separated.

It is worth noting that, whether the app is single-page or multi-page, if the resources of these routes have already been pre-fetched and cached in the previous step, lazy loading is almost instantaneous and we only pay the cost of instantiation.

These four points are the whole of PRPL. Interestingly, we found it even easier to implement PRPL in a multi-page application than in a single-page one. So what were the results?

According to Google's Web performance analysis tool Lighthouse (v1.6), on a simulated 3G network a first visit (with no cache at all) becomes interactive in about 2 seconds, which is pretty good. On repeat visits, since all resources come straight from the Service Worker cache, the page becomes interactive in about 1 second.

But the story is not that simple. In real-world use we found there was still a very noticeable white-screen gap when switching from page to page. Since a PWA runs full-screen, the negative impact of a white screen on user experience is even greater than it was inside the browser. Hadn't we already cached all the resources with the Service Worker? How could this still happen?

Navigating from the home page to the Discover page: a white screen appears during the jump

The pitfall of multi-page applications: restart overhead

Unlike an SPA, navigation in a multi-page application is a native browser navigation: the previous page is discarded entirely, and the browser has to perform every startup step for the next page all over again: re-download resources, re-parse HTML, re-run JavaScript, re-decode images, re-layout the page, re-paint… even though many of these steps could have been shared across routes. All of this work inevitably incurs significant computational overhead, and therefore considerable time cost.

Below is the profile of our entry page (also our heaviest page) under 2x CPU throttling. Even if we keep the time to interactive around one second, users still feel that "just switching a tab" is far too slow.

Huge JavaScript restart overhead

From the profile we found that before the first paint occurred, a large amount of time (900 ms) was spent executing JavaScript (Evaluate Script). Almost all of these scripts are parser-blocking, but since the entire UI is driven by JavaScript/Vue, making them non-blocking would not bring the first paint forward anyway. Of these 900 ms, roughly half goes to evaluating dependencies such as the Vue runtime, components and libraries, and the other half goes to instantiating the business components and letting Vue start up and render. From a software engineering point of view we need these abstractions, so this is not about blaming JavaScript or Vue for the overhead.

In an SPA, however, the startup cost of JavaScript is amortized over the whole lifecycle: each script only needs to be parsed and compiled once, heavier tasks such as generating the virtual DOM can be executed only once, and large objects such as the Vue ViewModels and the virtual DOM stay in memory and are reused. Unfortunately, that is not how multi-page apps work: we pay this huge JavaScript restart cost on every page switch.

Browser caches, can you help?

Yes and no.

V8 offers code caching, which keeps a local copy of the compiled code so that the next time the same script is requested, downloading, parsing and compiling can all be skipped. Better still, for scripts stored in the Cache Storage that accompanies a Service Worker, V8's code cache kicks in after the first execution, which does help with multi-page switching.

Another browser cache you may have heard of is the "back/forward cache", or BFCache for short. Browsers call it different things: Opera calls it Fast History Navigation, WebKit calls it Page Cache. The idea is the same: have the browser keep the previous page alive in memory when navigating away, preserving its JavaScript and DOM state instead of destroying everything. Try any traditional multi-page site in iOS Safari: the browser's back/forward buttons, gestures and (with some variations) hyperlinks will load pages almost instantly.

BFCache is actually a perfect fit for multi-page applications. Unfortunately, Chrome does not support it at the moment because of memory overhead and its multi-process architecture; Chrome currently only uses the regular HTTP disk cache to simplify the loading pipeline a little. So there is no counting on it in an Android ecosystem dominated by the Chromium kernel.

Fighting for the "perceived experience"

Although multi-page apps face these real performance problems, we did not want to give in so quickly. On the one hand, we tried to minimize the amount of code executed before the page becomes interactive, for example by reducing or postponing the execution of some dependency scripts and by rendering fewer DOM nodes initially to save on virtual DOM initialization. On the other hand, we realized that there was still plenty of room to improve the app's perceived experience.

Chrome product manager Owen wrote in Reactive Web Design: The secret to building web apps that feel amazing about two techniques for improving perceived experience: using skeleton screens to achieve "instant loading", and defining element sizes in advance to keep loading stable. That is exactly what we did.

To eliminate the white screen, we introduced a skeleton screen with stable dimensions to give us instant loading and placeholding. Even on devices with weak hardware, the skeleton screen of the destination route is rendered immediately after a tab switch is tapped, keeping the UI stable, continuous and responsive. I recorded two videos and put them on YouTube, but readers in China can simply visit the Ele.me mobile site and experience the effect first-hand ;) The final result is shown below.

With the skeleton screen in place: navigating back to the home page from the Discover page

This may sound easy to pull off, but it actually took quite a bit of work.

Pre-rendering the skeleton screen with Vue at build time

As you might imagine, for the skeleton screen to be cached by the Service Worker, loaded instantly and rendered independently of JavaScript, we need to inline the HTML markup, CSS styles and image resources that make up the skeleton screen into the static *.html file of each route.

However, we were not going to write these skeleton screens by hand. Think about it: if on every iteration of a real component (each route is a Vue component for us) we had to manually mirror every change in the skeleton screen, it would be tedious and hard to maintain. Fortunately, a skeleton screen is nothing more than the blank version of a page before its data has loaded. If we can implement the skeleton screen as a special state of the real component, its "empty state", then in theory we can render the skeleton screen directly from the real component.

This is where Vue's versatility comes in. We can indeed use Vue.js's server-side rendering module for this, only not on a real server: at build time we pre-render the component's empty state into a string and inject it into the HTML template. You do need to tweak your Vue components so they can run in Node; some pages have DOM/BOM dependencies that cannot easily be removed, so for now we write an extra *.shell.vue for those pages to work around it.
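A rough sketch of the build-time step (assuming it runs through the same Webpack/vue-loader pipeline so that requiring a .vue file works; the file names are hypothetical):

```js
// prerender-skeleton.js — build-time only, never shipped to the browser.
const Vue = require('vue');
const renderer = require('vue-server-renderer').createRenderer();

const shellModule = require('./src/pages/home/home.shell.vue'); // hypothetical *.shell.vue
const HomeShell = shellModule.default || shellModule;

const app = new Vue({
  render: (h) => h(HomeShell), // the component's "empty state": no data fetching
});

renderer.renderToString(app, (err, html) => {
  if (err) throw err;
  // `html` is the skeleton-screen markup; the build then inlines it (plus its
  // critical CSS) into the route's static home.html template.
  console.log(html);
});
```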

Browser painting

Having the markup in your HTML file does not mean it gets painted to the screen right away; you have to make sure the page's critical rendering path is optimized for it. Many developers believe that placing script tags at the bottom of the body is enough to guarantee that content is painted before the scripts execute. That may be true for browsers that can render an incomplete DOM tree (for example the streaming rendering common in desktop browsers), but mobile browsers, constrained by slower hardware and power consumption, may well not do so. Moreover, even though you have been told that scripts marked async or defer do not block HTML parsing, that does not mean the browser will necessarily paint before executing them.

First, let me clarify: according to the Scripting section of the HTML specification, an async script runs as soon as its request completes, so it can still block parsing. Only defer (non-inline) scripts, along with the newer type=module, are specified as "must not block parsing". (Though we ran into a little problem with defer; more on that later.)

More importantly, a script that does not block HTML parsing can still block painting. I built a simplified "Minimal Multi-page PWA" (MMPWA) to test this: generate and render 1,000 list items inside an async script (one that really does not block HTML parsing), then check whether the skeleton screen can be painted before that script executes. Below is a profile recorded over USB debugging on my Nexus 5:

Surprised? The first paint is indeed blocked until the script finishes executing. The reason is that if we manipulate the DOM too soon, before the browser has finished its previous paint, our beloved browser has to throw away the pixels it has already drawn and wait until all the work triggered by the DOM manipulation is done before it can paint again. And this is all the more likely to happen on mobile devices with slower CPUs/GPUs.

Dark magic: using setTimeout() to bring the paint forward

It is not hard to see that the skeleton screen's paint and the script's execution are in a race. Probably because Vue is just too fast, our skeleton screen still had a very high chance of not getting painted. So we wondered how we could make the script execute later, or "lazier", and thought of a classic hack: setTimeout(callback, 0). Let's try moving the DOM manipulation in MMPWA (rendering the 1,000 list items) into setTimeout(callback, 0)…

Ta-da! The first paint was instantly brought forward. If you are familiar with the browser's event loop, you will recognize that this hack uses the setTimeout callback to push the DOM manipulation into the event loop's task queue, keeping it out of the current loop so the browser can catch its breath (update the rendering) while the main thread is briefly idle. If you want to try MMPWA yourself, you can visit github.com/Huxpro/mmpw… or huangxuan.me/mmpwa/ for the code and demo. I designed the UI as an A/B test and bumped it up to 5,000 list items to make the effect more dramatic.
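In MMPWA terms, the change is roughly the following (a sketch in the spirit of the demo; the item count and container id are illustrative):

```js
// Defer the heavy DOM work by one macrotask so the skeleton markup already in
// the HTML gets painted before the list is built.
setTimeout(() => {
  const list = document.getElementById('list');      // hypothetical container id
  const fragment = document.createDocumentFragment();
  for (let i = 0; i < 1000; i++) {
    const item = document.createElement('li');
    item.textContent = 'item ' + i;
    fragment.appendChild(item);
  }
  list.appendChild(fragment);
}, 0);
```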

Back on the Ele.me PWA, we also tried putting new Vue() inside setTimeout. Sure enough, the dark magic worked again: the skeleton screen was painted immediately after every navigation. The profile now looks like this:
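Applied to the real pages, the change is as small as it sounds (a sketch; the component and mount-point names are assumptions, not our actual entry code):

```js
import Vue from 'vue';
import App from './App.vue'; // hypothetical route component

// The pre-rendered skeleton already sits inside #app in the static HTML, so
// delaying the mount by one macrotask lets the browser paint it before Vue
// takes over and replaces it with the real component tree.
setTimeout(() => {
  new Vue({
    render: (h) => h(App),
  }).$mount('#app');
}, 0);
```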

Now we trigger the first paint (the skeleton screen) at around 400 ms and finish rendering the real UI, with the page interactive, at around 600 ms. You can scroll back up and compare the profiles before and after the optimization in detail.

The defer bug I "deferred"

You may have noticed that in the profile above we still have quite a few scripts that block HTML parsing. Let me explain: for historical reasons we do keep some blocking scripts, such as the intrusive lib-flexible, which we cannot easily remove. However, most of the blocking scripts in the profile actually had defer set; we assumed they would execute only after the HTML had finished parsing, but the profile slapped us in the face.

I discussed this with Jake Archibald, and it turned out to be a Chrome bug: when a deferred script is fully cached, it does not wait for parsing to finish as the spec requires, but instead blocks parsing and rendering. Jake has filed a crbug; let's go vote for it.

Finally, the post-optimization Lighthouse audit also shows a clear performance improvement. To be fair, many factors affect a Lighthouse score, so I recommend running your own experiments with controlled variables (the device you run on, the network environment, and so on).

Finally, attached is a diagram from Addy Osmani's I/O talk, which describes how the Ele.me PWA combines Vue to implement the PRPL pattern for a multi-page application; it serves as an architectural reference and summary.

Some thoughts

Multi-page apps still have a long way to go

The Web is an incredibly diverse platform. From static blogs to e-commerce sites to desktop-class productivity software, all of them are first-class citizens of the Web family. The ways of organizing Web applications are also growing, not shrinking: multi-page, single-page, Universal JavaScript applications, WebGL, and, foreseeably, WebAssembly. Different technologies are not simply better or worse than one another, but the differences in the scenarios they suit are real.

Jake said at Chrome Dev Summit 2016 that "PWA !== SPA". Yet even with the latest technologies (PRPL, Service Worker, App Shell…), we still face some obstacles we cannot overcome because of the limitations of the multi-page model itself. Multi-page applications may get new specifications such as a "BFCache API" or Navigation Transitions in the future to narrow the gap with SPAs, but we also have to admit that today the limitations of multi-page applications are still very obvious.

PWA will eventually usher in a new era of Web applications

Even if our multi-page app did not shine quite as brightly as the single-page ones on the road to PWA, the ideas and technologies behind PWA genuinely helped us provide a better user experience on the Web.

As the next-generation model for Web applications, PWA tries to solve a fundamental problem of the Web platform itself: the hard dependence on the network and on the browser UI. So any Web application can benefit from it, whether it is multi-page or single-page, desktop or mobile, React or Vue. It may also change what users expect from the mobile Web. After all, who still thinks of the desktop Web as just a place to read documents?

It’s the same old story: let our users love the Web as much as we do.

Finally, I would like to thank Yisi Wang, Guanghui Ren and Ye Ye from Ele.me, Michael Yeung and the DevRel team from Google, the UC Browser team and the Tencent X5 browser team for their collaboration on this project. Thanks also to You Yuxi, Monty Chen and Jake Archibald for their help during the writing of this article.