Introduction: In today's era of mobile Internet, as product iteration speeds up, web pages account for an ever larger share of App development. Web development not only enables multi-terminal reuse across iOS, Android and the Web at lower cost, saving manpower, but also reduces the size of the installation package and, more importantly, sidesteps Apple's restrictions on iOS hot updates. On the other hand, there is still an obvious gap in loading speed between mobile web pages and native pages. Narrowing this gap as much as possible and giving users a good interactive experience has become a skill every mobile developer needs to master. Drawing on the recent research and development experience of the Baidu Aifanfan front-end team, this article systematically analyzes the problems faced by mobile web development from the perspectives of experience, performance and security, and presents optimization solutions that let web pages inside the App open within roughly a second and feel as smooth as native pages.

The full text is 5800 words and the expected reading time is 12 minutes.

1. Identifying the problem: slow web pages

Compared with traditional desktop computers, mobile devices still have clear shortcomings. Low bandwidth, slow processors and limited memory are the three most obvious bottlenecks, and these happen to be exactly the resources that web pages depend on most.

The first is network conditions. Although the popularization of 4G and 5G has steadily improved mobile network speeds in recent years, network latency on mobile devices remains unpredictable and is constrained by many conditions. In real life there are still plenty of situations that leave users with a poor connection. This hits web pages particularly hard: loading takes longer, or fails altogether.

Another aspect is processor speed. Web pages now carry more and more information, and interface interaction and business logic keep growing more complex. Heavy computation lengthens the time needed to process a page. Users' hardware varies widely, and the problem is most noticeable on the low-end models that make up the majority of devices.

Device memory size also matters for web pages: more memory means more web content can be supported. Conversely, insufficient memory makes the App less efficient at handling web pages, causing frequent stutters and, most fatally, making OOM (Out of Memory) crashes more likely.

As technology advances and the interactive experience on mobile keeps improving, users' tolerance for slow-loading pages keeps dropping. According to one survey, more than two-thirds of web users consider loading speed the single biggest factor affecting their browsing experience, and when a mobile web page takes more than 3 seconds to load, more than half of users simply leave. A fast loading process is therefore an essential part of a high-quality in-App web experience.


2. Analysis of pain points: loading time


Before discussing how to speed up web page loading, we need to define "slow" numerically and set a baseline for how loading time relates to user experience. Here is the formula:

Web page loading time = time when loading completes − time when loading starts

The moment a page starts loading is easy to determine: from the user's point of view, the page starts loading as soon as they tap something on the parent page to navigate to it.

The key is how to define the moment a page has finished loading. From the client's perspective, whether on iOS or Android, the WebView control hosting the page fires a loadFinish callback to indicate that loading is complete, but this does not really reflect what the user actually perceives.

Let us first sort out the steps needed to load a typical web page on a mobile device: the user taps to open the page, the WebView is initialized (no feedback yet), static resources are downloaded and parsed (blank screen), the business data request is sent while the page shows a loading state, and finally the business data is rendered.

It can be seen that during the whole process of opening a page, the user goes through several stages: no feedback, blank screen, and loading. Even after the WebView fires loadFinish, the page is usually still showing the loading state. "Loading completes" in the formula above should therefore generally be understood as the moment business data rendering finishes, because only then does the user actually see the content they came for.

In other words, "slow loading" in numerical terms means that the gap between the user tapping to open the page and business data rendering completing is too large. Reducing that gap is the problem we urgently need to solve.

3. Offering solutions: optimization in practice


Targeting the three stages of mobile page loading perceived by the user (no feedback, blank screen, and loading), the Aifanfan front-end team started from the entry points mentioned above (network conditions, processing speed and memory footprint) and formulated a series of optimizations in the areas of the caching system, the page rendering mechanism, the browser kernel and network request efficiency.

The first is "independent component package distribution"; building on it, we carried out "on-demand page pre-rendering", "web container pre-initialization", "pre-execution of business requests" and other work.

Our goal is to reduce the loading time of all major web pages in Aifanfan to less than 1s.

3.1 Independent Component Package Distribution

Generally speaking, downloading static resources while loading a web page is time-consuming, and this step is the one most affected by the network environment. To address it, the static resources of a group of related web pages (HTML, JavaScript and CSS) are compressed and packaged into an offline component package, which is downloaded and unpacked onto the phone in advance after the App starts. When the user opens the target page, these resources are loaded directly from local storage.

The pages of an App can be divided by business module into multiple offline component packages, each with its own version number and managed and distributed through a back-end offline-package platform. At designated times the client synchronizes offline-package version information with the platform; when a newer version exists, the updated packages are downloaded and applied silently in the background, so users remain essentially unaware of the process while using the App normally.
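As a rough illustration, the client-side version check and silent update might look like the following Swift sketch. The package model, endpoint payload and file layout here are hypothetical assumptions for the example, not Aifanfan's actual implementation.

```swift
import Foundation

/// Hypothetical description of one offline component package as returned by the platform.
struct OfflinePackageInfo: Codable {
    let name: String       // e.g. "crm-contacts"
    let version: Int       // monotonically increasing version number
    let downloadURL: URL   // zip archive of the package's HTML/JS/CSS resources
}

final class OfflinePackageManager {
    static let shared = OfflinePackageManager()
    private let localRoot = FileManager.default
        .urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("offline_packages")

    /// Called at designated times (e.g. on App start) to sync versions with the platform.
    func syncWithPlatform(packages: [OfflinePackageInfo]) {
        for info in packages where info.version > installedVersion(of: info.name) {
            download(info)   // silent background download and unpack
        }
    }

    private func installedVersion(of name: String) -> Int {
        // Version recorded when the package was last unpacked; 0 means not installed.
        UserDefaults.standard.integer(forKey: "pkg_version_\(name)")
    }

    private func download(_ info: OfflinePackageInfo) {
        URLSession.shared.downloadTask(with: info.downloadURL) { tempURL, _, error in
            guard let tempURL = tempURL, error == nil else { return }
            let dest = self.localRoot.appendingPathComponent(info.name)
            // Unzipping is omitted; a real implementation would unpack the archive
            // into `dest` and only then record the new version.
            try? FileManager.default.createDirectory(at: dest, withIntermediateDirectories: true)
            try? FileManager.default.moveItem(at: tempURL, to: dest.appendingPathComponent("bundle.zip"))
            UserDefaults.standard.set(info.version, forKey: "pkg_version_\(info.name)")
        }.resume()
    }
}
```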

By packaging and distributing independent components, the time-consuming static resource download can be skipped, greatly reducing the blank-screen time during page loading.

3.2 On-Demand Page Pre-Rendering

On-demand page pre-rendering is an optimization scheme designed to address every stage of web page loading at once. It is based on the idea of NSR (Native Side Rendering), as opposed to SSR (Server Side Rendering); in essence, NSR is a distributed form of SSR, with each client rendering its own pages.

SSR means completing page rendering on the server: templating, data filling and page layout are all done server-side, and the complete HTML is returned to the browser. Since all rendering happens on the server, page load time is reduced. However, with this scheme front-end page rendering has to be done on the server, so front-end and back-end responsibilities are not cleanly separated; there is still inevitably a period of blank screen during loading; and the load placed on the server is relatively high.

Therefore we adopted NSR. After the user logs in successfully, a JS runtime is started inside a WebView control. Before the user navigates to the target page, the resources in the local offline component package are loaded in the background and the network requests for business data are sent in advance. The fully built page is then kept in an in-memory MemoryCache, ready to be shown the moment it is tapped. Even in the worst case, when the user taps the page before all of these steps have finished, some of them will already have been executed, which still shaves off part of the perceived time compared with the traditional flow.
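A minimal sketch of the idea on iOS follows; the `PreRenderCache` type, page identifiers and offline-package file URLs are assumptions made for the example, not the actual Aifanfan code.

```swift
import UIKit
import WebKit

/// Keeps fully pre-rendered WebViews in memory, keyed by page identifier.
final class PreRenderCache {
    static let shared = PreRenderCache()
    private var cache: [String: WKWebView] = [:]

    /// Pre-render an important entry page in the background after login succeeds.
    func preRender(pageID: String, localFileURL: URL) {
        let webView = WKWebView(frame: UIScreen.main.bounds)
        // Load the page from the unpacked offline component package; the page's own
        // JS is expected to request business data as soon as it runs.
        webView.loadFileURL(localFileURL,
                            allowingReadAccessTo: localFileURL.deletingLastPathComponent())
        cache[pageID] = webView
    }

    /// When the user taps the entry, attach the already-rendered WebView immediately.
    func take(pageID: String) -> WKWebView? {
        cache.removeValue(forKey: pageID)
    }
}
```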

On the other hand, pre-rendering is a double-edged sword: it essentially trades space for time and takes up a lot of extra memory. Memory is precious on low-end devices, and a high memory footprint causes a series of experience and stability problems. Doing pre-rendering with the lowest possible memory footprint is therefore a careful trade-off. In the end we decided to enable it on demand only for a few important entry-level pages of the App, to avoid occupying too much memory.

According to our statistics, the average loading time of the target pages dropped from 2500ms to 231ms on iOS and from 2803ms to 628ms on Android.

3.3 Web Container Pre-Initialization

The loading process of a web page on mobile is not quite the same as on the desktop. By default, the embedded browser kernel is not initialized when the App starts; it is only initialized when the first WebView acting as the web container is created. We therefore designed a web container pre-initialization optimization around this point.

3.3.1 Container Preloading

Container preloading is the core of web container pre-initialization: the WebView control is initialized, and the relevant resources and frameworks are loaded, before the page is opened, so that the page itself loads faster.

After the offline component packages have been downloaded and updated, Aifanfan initializes a WebView control in the background and loads an intermediate web page from the component package, which pulls in the relevant resources and frameworks ahead of time. Once the intermediate page has loaded, the WebView is placed into a container pool, where it starts listening for a custom JS method and waits.

When the user taps to open the target page, the App first checks the configuration file in the offline component package to see whether container preloading is enabled for that page. If it is, the App takes an already initialized WebView from the container pool and calls the custom JS method to notify the H5 side; the H5 side then redirects to the target page through the Vue Router.

When the container pool hands out a WebView, it automatically initializes a new one and starts loading the intermediate page, ready for the next user action.
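A simplified sketch of such a container pool is shown below. The names (`WebContainerPool`, `notifyRoute`) and the location of the intermediate page are hypothetical, and the custom JS method is assumed to be exposed on `window` by the intermediate page.

```swift
import UIKit
import WebKit

/// Keeps pre-initialized WebViews that have already loaded the intermediate page.
final class WebContainerPool: NSObject, WKNavigationDelegate {
    static let shared = WebContainerPool()
    private var warmingUp: [WKWebView] = []   // initialized, intermediate page still loading
    private var ready: [WKWebView] = []       // intermediate page loaded, waiting for a route
    // Hypothetical location of the intermediate page inside the unpacked offline package.
    private let intermediatePage = URL(fileURLWithPath: "offline_packages/common/blank.html")

    /// Called after offline packages are downloaded or updated.
    func warmUp() {
        let webView = WKWebView(frame: UIScreen.main.bounds)
        webView.navigationDelegate = self
        warmingUp.append(webView)             // retain until loading finishes
        webView.loadFileURL(intermediatePage,
                            allowingReadAccessTo: intermediatePage.deletingLastPathComponent())
    }

    func webView(_ webView: WKWebView, didFinish navigation: WKNavigation!) {
        // Intermediate page loaded: the page now listens for the custom JS method.
        warmingUp.removeAll { $0 === webView }
        ready.append(webView)
    }

    /// Called when the user opens a page whose offline-package config enables preloading.
    func dequeue(for route: String) -> WKWebView? {
        guard !ready.isEmpty else { return nil }
        let webView = ready.removeFirst()
        // Hypothetical custom JS method; the H5 side jumps to `route` via the Vue Router.
        webView.evaluateJavaScript("window.notifyRoute('\(route)')", completionHandler: nil)
        warmUp()                              // immediately prepare a replacement WebView
        return webView
    }
}
```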

Because the WebView taken from the pool has already been initialized, and the common resources and frameworks in the component package have already been loaded, the blank-screen phase the user sees when opening the page is visibly shorter and the loading time drops noticeably. In terms of data, the optimized pages load 200~300ms faster on both iOS and Android.

3.3.2 Micro Front-End Architecture

In the container preloading scheme, the pages in each business's offline component package are isolated from one another and cannot jump to pages in other component packages through the Vue Router. Each business component package therefore needs its own WebView in the container pool to host its intermediate page. As the number of business component packages grows, the number of WebViews in the pool grows with it. This increases the memory footprint significantly and, as mentioned above, a high memory footprint can cause stutters or even crashes, which is particularly fatal on low-end devices.

The micro front-end approach solves this problem well. The idea is to aggregate what used to be multiple business offline component packages into one system, so that scheduling can be done globally within the system and component packages can interact with one another. Aifanfan adopts a Master-Slave design:

Master: the public component package, responsible for loading the other component packages and providing shared resources;

Slave: a business component package, containing the specific business code of one module.

Local data interaction between Master and Slave relies mainly on symbolic links. The Native side creates a symbolic link to the local path of every offline component package, including the public one. The container pool then only needs to provide a single WebView for the public component package. Through the symbolic links, the public component package can resolve the local path of a page in the corresponding business component package, and the jump logic from "container preloading" can then be completed with the Vue Router. In this way, with N component packages, the number of WebViews in the container pool drops from N to 1 and the memory usage drops to 1/N, which effectively reduces the App's stutter rate and crash rate.
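As an illustration only, with a hypothetical directory layout, creating those links on the Native side could look like this:

```swift
import Foundation

/// Create a symbolic link under the Master (public) package pointing at each
/// business (Slave) package, so the public package can address Slave pages locally.
func linkSlavePackages(masterDir: URL, slaveDirs: [URL]) throws {
    let fm = FileManager.default
    let linksDir = masterDir.appendingPathComponent("packages")
    try fm.createDirectory(at: linksDir, withIntermediateDirectories: true)

    for slave in slaveDirs {
        let link = linksDir.appendingPathComponent(slave.lastPathComponent)
        if fm.fileExists(atPath: link.path) {
            try fm.removeItem(at: link)   // refresh the link after a package update
        }
        try fm.createSymbolicLink(at: link, withDestinationURL: slave)
    }
}
```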

The public component package also extracts the framework resources that the business component packages have in common, such as the Vue Router. When a business component package needs them, it can likewise locate the corresponding framework resources in the public component package through a symbolic link. The benefit is unified management of shared resources and a smaller overall volume for the offline component packages.

With the micro front-end architecture, the App's memory usage while displaying web pages drops significantly, avoiding the many problems caused by high memory, and the volume of each business's offline component package shrinks as well.

3.3.3 Preset Offline Packages

Because of the Master-Slave design described in "micro front-end architecture", the public offline component package (Master) contains the common framework resources that the business offline component packages (Slave) depend on. The presence of the public component package is thus a precondition for any business page to run. When a user installs the APP for the first time, the public component package has to be downloaded from the offline-package platform, and if that download is slow, every other business component package is blocked from working normally in the meantime.

To avoid this, we preset the public offline component package into the APP installation package itself, guaranteeing its availability, and it is updated together with APP releases. On the first launch after installation, the download step is normally skipped and the preset package is simply copied from the App bundle into the local sandbox.
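A minimal sketch of that first-launch copy, assuming a hypothetical bundle resource name and sandbox layout:

```swift
import Foundation

/// On first launch, copy the preset public package from the App bundle to the sandbox,
/// skipping the network download that would otherwise block all business packages.
func installPresetPackageIfNeeded() {
    let fm = FileManager.default
    let sandboxDir = fm.urls(for: .documentDirectory, in: .userDomainMask)[0]
        .appendingPathComponent("offline_packages/common")

    guard !fm.fileExists(atPath: sandboxDir.path),
          let bundled = Bundle.main.url(forResource: "common_package", withExtension: "zip")
    else { return }   // already installed, or no preset package in this build

    try? fm.createDirectory(at: sandboxDir, withIntermediateDirectories: true)
    // Unzipping is omitted for brevity; a real implementation would unpack the archive here.
    try? fm.copyItem(at: bundled, to: sandboxDir.appendingPathComponent("common_package.zip"))
}
```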

The preset package scheme is not limited to the public component package; it also applies to business component packages, especially the large ones that take a long time to download, giving users a better experience the first time the APP is installed and launched.

3.4 Pre-Execution of Business Requests

Most web pages rely on business data from the server to drive what they display. From the earlier analysis of loading time, we can see that in the traditional flow the business network request is only sent after the WebView container's loadFinish. We therefore designed an optimization scheme that pre-executes the business request.

3.4.1 Client Request

To support pre-execution of business requests, the network requests in the web page first need to be converted into client requests, i.e. the business data requests are executed by the Native side. Interacting with the server through client requests also solves a series of problems with the original XHR requests, such as cross-domain restrictions, the inability to connect directly to the back end during testing, and inconsistent network-layer configuration logic.

Concretely, when a web page wants to make a network request, the H5 side first assembles the request information (request URL, interface parameters, custom request headers and so on) and passes it to the Native side through the JS-Bridge. The Native side applies some unified network-layer configuration and optimization, such as adding cookies and other required request headers, and sends the request. Finally, when the response arrives, its content is handed back to the H5 side through the JS-Bridge.
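As a rough sketch of the Native side of such a bridge on iOS: the message name, payload shape and JS callback below are hypothetical, and error handling and payload escaping are omitted.

```swift
import WebKit

/// Handles "nativeRequest" messages posted from H5 via
/// window.webkit.messageHandlers.nativeRequest.postMessage({...}).
final class BridgeRequestHandler: NSObject, WKScriptMessageHandler {
    weak var webView: WKWebView?

    func userContentController(_ userContentController: WKUserContentController,
                               didReceive message: WKScriptMessage) {
        guard let body = message.body as? [String: Any],
              let urlString = body["url"] as? String,
              let url = URL(string: urlString),
              let callbackID = body["callbackId"] as? String else { return }

        var request = URLRequest(url: url)
        // Unified network-layer configuration: cookies, required headers, etc.
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")

        URLSession.shared.dataTask(with: request) { data, _, _ in
            let payload = data.flatMap { String(data: $0, encoding: .utf8) } ?? "null"
            DispatchQueue.main.async {
                // Hand the response back to H5 through a (hypothetical) JS callback registry.
                // Escaping of the payload is omitted in this sketch.
                self.webView?.evaluateJavaScript(
                    "window.jsBridge.onResponse('\(callbackID)', \(payload))",
                    completionHandler: nil)
            }
        }.resume()
    }
}
```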

This brings several benefits. First, front-end developers can connect directly to the server during debugging without any proxy or configuration, avoiding the time-consuming package-deployment step of the traditional workflow and greatly improving iteration efficiency. Second, WKWebView on iOS enforces stricter security rules that forbid cross-domain requests from locally loaded pages; routing requests through the client neatly bypasses this restriction. Finally, all of a page's requests can be managed and optimized centrally on the Native side, which is also the prerequisite for the "network preloading" described next.

3.4.2 Network Preloading

Before optimization (Figure 1):

After optimization (Figure 2):

Network preloading is the core of the business-request pre-execution scheme. Generally speaking, the business network request of a page can be sent at the earliest once the page's DOM has been built, i.e. after point B in Figure 1. In Aifanfan, however, most pages' business request parameters depend on page-level input parameters, and those only become available after the local JS script has been injected (this script is a JS file built into the Aifanfan App; it contains much of the logic for interaction between the H5 side and the Native side and is a prerequisite for the page to load normally). Since the local JS script is injected after loadFinish (point B in Figure 1), the earliest a business request can be sent for most of our pages is after point C in Figure 1. On top of that, under the "client request" scheme described above, the H5 side must first assemble the request information and then ask the Native side via the JS-Bridge to actually execute it. As a result, before the network preloading optimization, the business network request was actually sent at point D in Figure 1.

To sum up, the traditional web page loading time can be divided into two parts:

1. Parsing static resources, building the DOM and injecting the local JS script (A-D): we call this Part1; its duration depends mainly on the front end;

2. The actual execution of the business network request (D-F): we call this Part2; its duration depends mainly on the user's network environment and the back end.

Part1 and Part2 run sequentially. To reduce the user's perceived time as much as possible, we can move Part2 forward and run it in parallel with Part1.

Network preloading is implemented on this idea. First, front-end developers write the request information of the requests to be preloaded into the configuration file of the corresponding offline component package. When the user opens a page covered by the offline package, the Native side reads the request information directly from the package's configuration and starts sending the request immediately. This work runs on a background thread and starts at almost the same time as the WebView control is initialized.

Depending on how long Part2 takes, the moment the response arrives (point I in Figure 2) falls into one of two cases, sketched in code after the list below:

1. Before Part1 finishes (i.e. point I is before point J and after point G in Figure 2): the Native side caches the response obtained by the preload and waits. Once Part1 finishes, the Native side receives the normal request for the same data from the H5 side, immediately delivers the cached response to the H5 side through the JS-Bridge, and destroys the cache. If the wait exceeds a certain time limit, the Native side also destroys the cache and treats the preload as having failed. When it succeeds, the gain in this case is the whole of Part2 (D-F);

2. After Part1 finishes (i.e. point I is after point J): here the preload request is still in flight when the Native side receives the normal request from the H5 side. The Native side intercepts that request and keeps waiting for the preload request to return; once it does, the response is delivered to the H5 side through the JS-Bridge. The gain in this case is the time taken by Part1 (A-D).
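A compressed sketch of this response-matching logic on the Native side, with hypothetical names such as `PreloadStore`, expiry and error handling omitted:

```swift
import Foundation

/// Caches preloaded responses and matches them with the page's later request.
final class PreloadStore {
    static let shared = PreloadStore()
    private var cachedResponses: [String: Data] = [:]             // preload finished first
    private var waitingCallbacks: [String: (Data) -> Void] = [:]  // page asked first
    private let queue = DispatchQueue(label: "preload.store")

    /// Started from the offline package's config, in parallel with WebView initialization.
    func preload(key: String, request: URLRequest) {
        URLSession.shared.dataTask(with: request) { data, _, _ in
            guard let data = data else { return }
            self.queue.async {
                if let callback = self.waitingCallbacks.removeValue(forKey: key) {
                    callback(data)                    // case 2: the page is already waiting
                } else {
                    self.cachedResponses[key] = data  // case 1: cache until the page asks
                    // A real implementation would also expire this cache after a time limit.
                }
            }
        }.resume()
    }

    /// Called when the H5 side sends the same request through the JS-Bridge.
    func fulfill(key: String, callback: @escaping (Data) -> Void) {
        queue.async {
            if let data = self.cachedResponses.removeValue(forKey: key) {
                callback(data)                        // case 1: deliver cached response, destroy cache
            } else {
                self.waitingCallbacks[key] = callback // case 2: wait for the in-flight preload
            }
        }
    }
}
```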

That is the main principle of network preloading. With this optimization, the user's perceived time when opening a mobile web page is greatly reduced; in our data, pages using this scheme in Aifanfan load 300~600ms faster.


4. Summarizing the gains: continuing to explore


Beyond what has been described above, the Baidu Aifanfan front-end team has done a lot of other work to reduce web page loading time, such as upgrading to WKWebView on iOS and to the X5 kernel on Android; for reasons of space these are not expanded on here.

Through these optimizations, the loading time of the core pages at every level of the Aifanfan mobile App has converged from an average of 2-3 seconds to under 1 second, essentially reaching our goal of near-instant opening.

Of course, there is still much to do. We will keep pushing the performance optimization of mobile web pages so that users enjoy an experience close to native pages when using Aifanfan.

About the author: a senior front-end engineer at Baidu Aifanfan with many years of development experience, skilled in iOS, Android and Web development.


