• How We Made Carousell’s Mobile Web Experience 3x Faster
• Stacey Tay
• The Nuggets Translation Project
• Permanent link to this article: github.com/xitu/gold-m…
• Translator: Noah Gao
• Proofreaders: Kyrieliu, Moonliujk

How did we make Carousell’s mobile Web experience three times faster?

A look back at the six months we spent building a Progressive Web App

Carousell is a mobile classifieds marketplace founded in Singapore that operates in several Southeast Asian countries, including Indonesia, Malaysia, and the Philippines. Earlier this year, we launched a Progressive Web App (PWA) version of our mobile web experience to a group of users.

In this article, we’ll share (1) our motivation to build a faster Web-side experience, (2) how we did it, (3) the impact it had on our users, and (4) what helped us do it faster.

🖼 This PWA is at mobile.carousell.com 🔎

Why a faster Web experience?

Our app was originally built for the Singapore market, where users tend to have above-average phones and high-speed internet. However, as we expanded into more countries across Southeast Asia, such as Indonesia and the Philippines, we faced the challenge of providing an equally enjoyable and fast web experience: in these markets, devices are generally lower-end and internet connections slower and less reliable than what our application was designed for.

We started reading more about performance and took a fresh look at our application with Lighthouse. We realized that if we wanted to grow in these new markets, we needed a faster web experience: a page that takes more than 15 seconds to load on 3G (as ours did) will not acquire or retain users.

🌩 Lighthouse’s performance score was a real wake-up call 🏠

The web is often the entry point for new users to discover and learn about Carousell. We want to give them a delightful experience from the start, because performance is user experience.

To that end, we designed a new, performance-first web experience. When deciding which pages to build first, we chose the product listing page and the home page, because Google Analytics showed these pages received the most organic traffic.


How we did it

Start with a real-world performance budget

One of the first things we did was draft a performance budget, to avoid shipping unchecked bloat (a problem in our previous web application).

Performance budgets keep everyone on the same page. They help create a culture of shared enthusiasm for improving the user experience. Teams with budgets also find it easier to track and graph progress, which helps support executive sponsors, who then have meaningful metrics to point to when justifying ongoing investment.

— Can You Afford It?: Real-world Web Performance Budgets

Because there are multiple moments during the loading process that affect the user’s perception of whether the page is “fast enough,” we base our budget on a combination of metrics.

Loading a web page is like a film strip with three key moments: Is it happening? Is it useful? Is it usable?

— The Cost of JavaScript in 2018

We decided to set a 120 KB limit for critical-path resources, a 2-second First Contentful Paint, and a 5-second Time to Interactive on all pages. These numbers and metrics are based on Alex Russell’s thought-provoking article on real-world web performance budgets and Google’s user-centric performance metrics.

  • Critical-path resources: 120 KB
  • First Contentful Paint: 2 s
  • Time to Interactive: 5 s
  • Lighthouse performance score: > 85

🔼 Our performance budget 🌟

In order to stick to our performance budget, we were very careful in selecting libraries from the beginning, including React, React-Router, Redux, Redux-Saga, and Unfetch.

We also integrated bundlesize into our PR process to enforce our performance budget for critical-path resources, as in the sketch below.

⚠️ Bundlesize prevented a PR 🚫 from going over budget
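
As a rough illustration of how this is wired up (the file path and size here are placeholders, not Carousell’s actual build output), bundlesize reads an entry like this from package.json and fails the CI check on any PR whose critical-path bundle exceeds its ceiling:

{
  "bundlesize": [
    { "path": "./build/static/js/main.*.js", "maxSize": "120 kB" }
  ]
}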

Ideally, we would also automatically check the First Contentful Paint and Time to Interactive metrics. However, we haven’t done that yet because we wanted to ship the initial pages first. Given our small team, we decided we could get by with manually reviewing each weekly release in Lighthouse to make sure our changes stayed within budget.

The next step in our backlog was to build our own performance monitoring framework.

How we made it faster

  1. We adopted part of the PRPL pattern. We send the minimum amount of resources needed for each page request (using route-based code splitting; see the sketch after this list) and use Workbox to pre-cache the rest of the application bundle. We also split out components that aren’t needed up front; for example, if the user is already logged in, the application never loads the login and registration components. We still deviate from the PRPL pattern in a couple of ways. First, since we didn’t have time to redesign the old pages, the application has multiple application shells. Second, we haven’t yet explored generating separate bundles for different browsers.

  2. Inline critical CSS. We use webpack’s mini-css-extract-plugin to extract and inline each page’s critical CSS, which improves First Contentful Paint and gives the user an early sense that something is happening.

  3. Lazy-load images outside the viewport, and load them progressively. We created a scroll-watching component, based on react-lazyload, that listens for scroll events and starts loading an image once it calculates that the image is in the viewport (a sketch follows after this list).

  4. Compress all images to reduce the amount of data transferred over the network. This is handled by our CDN provider’s automatic image compression service. If you don’t use a CDN, or are simply curious about image performance, Addy Osmani has a guide on automating image optimization.

  5. Use a Service Worker to cache network requests. This reduces data usage for APIs whose responses rarely change, and it improves load times on repeat visits. We turned to The Offline Cookbook to help us decide which caching strategy to use. Because we have multiple application shells, Workbox’s default registerNavigationRoute didn’t fit our use case, so we had to create handlers that match navigation requests to the correct application shell.

workbox.navigationPreload.enable();

// From https://hacks.mozilla.org/2016/10/offline-strategies-come-to-the-service-worker-cookbook/.
function fetchWithTimeout(request, timeoutSeconds) {
  return new Promise((resolve, reject) => {
    const timeoutID = setTimeout(reject, timeoutSeconds * 1000);
    fetch(request).then(response => {
      clearTimeout(timeoutID);
      resolve(response);
    }, reject);
  });
}

const networkTimeoutSeconds = 3;
const routes = [
  { name: "collection", path: "/categories/.*/? $" },
  { name: "home", path: "/ $" },
  { name: "listing", path: "/p/.*\\d+/? $" },
  { name: "listingComments", path: "/p/.*\\d+/comments/? $" },
  { name: "listingPhotos", path: "/p/.*\\d+/photos/? $"},];for (const route of routes) {
  workbox.routing.registerRoute(
    new workbox.routing.NavigationRoute(
      ({ event }) => {
        return caches.open("app-shells").then(cache => {
          return cache.match(route.name).then(response => {
            return (response
              ? fetchWithTimeout(event.request, networkTimeoutSeconds)
              : fetch(event.request)
            )
              .then(networkResponse => {
                cache.put(route.name, networkResponse.clone());
                return networkResponse;
              })
              .catch(error => {
                return response;
              });
          });
        });
      },
      {
        whitelist: [new RegExp(route.path)],
      },
    ),
  );
}

⚙️ We adopted a network-first policy with a 3-second timeout for all application shells 🐚
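
To make item 1 above more concrete, here is a minimal sketch of route-based code splitting with React.lazy and React Router. The page components and routes are hypothetical stand-ins, not Carousell’s actual code, and assume a webpack build that emits a separate chunk per dynamic import.

import React, { lazy, Suspense } from "react";
import { BrowserRouter, Route, Switch } from "react-router-dom";

// Each page is only fetched when its route is first visited.
const Home = lazy(() => import("./pages/Home"));
const Listing = lazy(() => import("./pages/Listing"));

export default function App() {
  return (
    <BrowserRouter>
      <Suspense fallback={<div>Loading…</div>}>
        <Switch>
          <Route exact path="/" component={Home} />
          <Route path="/p/:id" component={Listing} />
        </Switch>
      </Suspense>
    </BrowserRouter>
  );
}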
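
Similarly, for item 3, here is a rough sketch of the scroll-watching idea: a component that holds off loading its image until the placeholder is near the viewport. Our real component is built on react-lazyload; this simplified version only illustrates the technique.

import React from "react";

// Renders a placeholder and swaps in the real <img> once the element
// has scrolled to within ~200px of the viewport.
class LazyImage extends React.Component {
  state = { visible: false };
  ref = React.createRef();

  componentDidMount() {
    window.addEventListener("scroll", this.check, { passive: true });
    this.check();
  }

  componentWillUnmount() {
    window.removeEventListener("scroll", this.check);
  }

  check = () => {
    if (this.state.visible || !this.ref.current) return;
    const rect = this.ref.current.getBoundingClientRect();
    if (rect.top < window.innerHeight + 200) {
      this.setState({ visible: true });
      window.removeEventListener("scroll", this.check);
    }
  };

  render() {
    const { src, alt, height } = this.props;
    return (
      <div ref={this.ref} style={{ minHeight: height }}>
        {this.state.visible ? <img src={src} alt={alt} /> : null}
      </div>
    );
  }
}

export default LazyImage;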

While making these changes, we relied heavily on Chrome DevTools’ mid-tier mobile device emulation (with the network throttled to 3G speeds) and ran repeated Lighthouse audits to assess the impact of our work.

Results: how did we do?

🎉 A before-and-after comparison of our mobile web metrics 🎉

Our new PWA listing page loads three times faster than our old listing page. Since launching the new page, our organic traffic from Indonesia has increased by 63% compared with the weeks before the launch. Within three weeks, we also saw a threefold increase in ad click-throughs and a 46% increase in anonymous users initiating chats on our listing pages.

⏮ A before-and-after comparison of our listing pages on a Nexus 5 over a fast 3G network. Update: WebPageTest’s report for this page. ⏭


Iterate quickly and confidently

Consistent Carousell design system

While we were doing this, our design team was also creating a standardized design system. Since the PWA was a new project, we had the opportunity to build a standardized set of UI components and CSS constants based on that design system.

Having a consistent design allows us to iterate quickly. We build each UI component once and reuse it in multiple places. For example, we have a ListingCardList component that displays a feed of listing cards and fires a callback to tell its parent component to load more listings when the user scrolls to the end (sketched below). We use it on the home page, the listing page, the search page, and the profile page.
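
As a rough illustration (the props and markup here are guesses, not our actual implementation), such a component might look like this:

import React from "react";

// Renders a feed of listing cards and asks its parent for more data
// once the user scrolls close to the end of the feed.
function ListingCardList({ listings, renderCard, onLoadMore }) {
  const handleScroll = event => {
    const { scrollTop, scrollHeight, clientHeight } = event.currentTarget;
    if (scrollHeight - scrollTop - clientHeight < 300) {
      onLoadMore();
    }
  };

  return (
    <div onScroll={handleScroll}>
      {listings.map(listing => renderCard(listing))}
    </div>
  );
}

export default ListingCardList;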

We also worked with our designers to identify appropriate performance trade-offs in the application’s design. This allowed us to stay within our performance budget, adapt some of the old designs to the new constraints, and forgo fancy animations when they proved too expensive.

Using Flow

We chose to make Flow type definitions mandatory in all of our files because we wanted to reduce annoying null-value and type errors. (I’m also a big fan of gradual typing, but why we chose Flow over TypeScript is a topic for another time.)

Choosing Flow proved very useful as we wrote more code. It gave us the confidence to add or change code and to refactor core pieces to make them simpler and safer. This let us iterate quickly without breaking things.

In addition, Flow types serve as useful documentation of our API contracts and shared library components.

As an added benefit, having to write out the types of our Redux actions and React components forces us to think carefully about how we design our APIs. It also provides an easy way to start early PR discussions with the team.
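
For instance, a minimal Flow sketch of a typed Redux action creator and typed component props (the names are illustrative, not from our codebase) might look like this:

// @flow

// The reducer and any saga handling this action can rely on its exact shape.
type FetchListingsAction = {
  type: "FETCH_LISTINGS",
  payload: { categoryId: string, page: number },
};

export function fetchListings(categoryId: string, page: number): FetchListingsAction {
  return { type: "FETCH_LISTINGS", payload: { categoryId, page } };
}

// Typed props double as documentation for a shared UI component.
export type ListingCardProps = {
  title: string,
  price: number,
  imageUrl: string,
  onClick: () => void,
};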


Summary

We created a lightweight PWA to serve customers with unreliable internet connections, releasing it page by page and improving our business metrics and user experience along the way.

What helped us stay fast

  • Have and stick to a performance budget
  • Keep the critical rendering path to a minimum
  • Use Lighthouse frequently for auditing

What helped us iterate quickly

  • Have a standardized design system and its corresponding UI component library
  • Have a fully typed code base

Final thoughts

Looking back at the past two quarters, we couldn’t be prouder of our new mobile web experience, and we’re working hard to make it even better. It was our first platform built with speed as a priority and with real thought about how a page loads. The PWA’s improvements to our business and user metrics have helped convince more people within the company of the importance of application performance and load times.

We hope this article inspires you to think about performance when designing and building Web experiences.

Cheers to those who worked on this project: Trong Nhan Bui, Hui Yi Chia, Diona Lin, Yi Jun Tao and Marvin Chin. Thanks also to Google, and especially to Swetha and Minh for their advice on this project.

Thanks to Bui, Danielle Joy, Hui Yi, Jingwen Chen, See Yishu, and Yao Hui Chua for writing advice and proofreading.

