Common H5 performance optimizations and the principles behind them

Optimizing static resources

Common image formats:

  • JPEG format:

JPEG compression works roughly as follows: the image is first converted from RGB(A) into a luminance/chroma color space, the chroma channels are resampled to separate low-frequency from high-frequency detail, each block then goes through a DCT (discrete cosine transform), and the resulting coefficients are quantized and entropy-encoded. The output is the compressed JPEG.

The compressed image no longer matches the original data bit for bit. Some information is lost during compression, but the differences are largely invisible to the human eye, so compression does not hurt the browsing experience. For the page, it greatly reduces the size of static image resources and therefore improves load speed.

  • PNG format:

PNG is an image format that supports transparency. It comes in three common variants: PNG8, PNG24, and PNG32, i.e. 8-bit, 24-bit, and 32-bit encodings. The 8-bit variant carries a color palette inside the file.

Take PNG8 as an example: it is a 256-color format with transparency. Its palette holds 256 colors, so each pixel's color can be indexed with 8 bits of data, which means a PNG8 image can only use those 256 colors. It also produces the smallest files among the PNG variants.

PNG24 uses 24 bits per pixel, so it needs three times as much data per pixel as PNG8, and it does not support transparency. PNG32 adds an alpha (transparency) channel on top of PNG24. Which variant to choose depends on the colors in the image.

If the image uses few, relatively uniform colors, consider PNG8; if the image is color-rich, choose PNG24 or PNG32. Each PNG variant has its trade-offs, so in practice you balance file size, image quality, and how important the image is in the current project when deciding which format to use.

Common picture format usage scenarios:

  • JPEG: high compression ratio; well suited to large-area photographic images such as backgrounds and hero/banner images.

  • PNG: supports transparency and is widely compatible; use it for backgrounds or pop-up layers that need transparency, or wherever visual quality matters more than file size.

  • SVG: a vector format. Its biggest advantages are that it scales without distortion, stays smooth at high resolution, is relatively small, and can be inlined in the page. Use it where you can, though in practice that usually means simple graphics such as icons and buttons.

Optimizing how images are loaded:

CSS sprite

At present, sprites are still a commonly used technique: many small images are merged into one large image, and background positioning is used to display the part you need. This reduces the number of page requests and speeds up page loading.

The downside is that many small images now depend on one large composite image: if that image fails to load, icons across the whole page are missing. On today's 4G and Wi-Fi networks, however, this is rarely a problem in practice.
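As an illustration, a helper like the following (assuming a hypothetical single-row sprite sheet `icons.png` of fixed-size icons) computes the `background-position` that selects one icon out of the merged image:

```javascript
// Sketch, assuming a single-row sprite sheet (hypothetical icons.png)
// of fixed-size icons: compute the CSS background-position value that
// shifts the sheet so the n-th icon shows through the element's box.
function spritePosition(index, iconSize = 32) {
  // move the sheet left by one icon width per index
  return `${-index * iconSize}px 0`;
}

// e.g. el.style.background = `url(icons.png) ${spritePosition(2)} no-repeat`;
```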

Image-inline

Embedding images into the page as Base64 data URIs is a good way to reduce HTTP requests, but it is used sparingly in practice: image data embedded directly in HTML is hard to maintain. In my experience, Base64 appears mostly in cases where ordinary image URLs cannot be used.

For example, in real projects image resources are usually served from a different domain. When using canvas to generate images, calling canvas.toDataURL(…) on a canvas that has drawn a cross-origin image "taints" the canvas and raises a cross-origin error, and the backend cannot always generate the image on its own. In that case Base64 is the simplest, crudest solution: embedding the image data directly in the page sidesteps the cross-domain problem.
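To make the idea concrete, here is a minimal sketch (Node-style; in the browser you would get the Base64 string from canvas.toDataURL or FileReader instead) that turns raw image bytes into a data URI usable as an img src:

```javascript
// Sketch: encode raw image bytes as a Base64 data URI that can be
// assigned directly to an <img> src, avoiding a separate HTTP request.
function toDataURI(bytes, mime = 'image/png') {
  const b64 = Buffer.from(bytes).toString('base64');
  return `data:${mime};base64,${b64}`;
}

// e.g. img.src = toDataURI(pngBytes);  // pngBytes is hypothetical
```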

Image compression

Run images through a compression tool (e.g. tinypng or imagemin) to compress them in batches.

How an HTML page loads and renders

The process of rendering web pages

When a web page loads, it arrives as HTML text, i.e. a string. The browser's parser performs lexical analysis on that string, producing a token for each tag; the tokens are then consumed from top to bottom, generating the corresponding DOM nodes step by step.

During lexical analysis, link and script tags are resolved and the corresponding web resources are requested. JavaScript is executed by the engine in the browser kernel (V8 in Chrome). CSS is handled much like HTML: it is parsed into the CSSOM. The DOM and CSSOM are then combined to produce the render tree, which carries the information needed for Layout (computing geometry) and finally Paint.

HTML loading features

Sequential loading and concurrent loading

Sequential loading refers to the lexical analysis mentioned earlier, in which the browser parses the HTML page from top to bottom.

Concurrent loading means that static resources under the same domain are requested at the same time. Of course, browsers cap the number of concurrent requests per domain; in Chrome, for example, the limit is 6. When a page requests a large number of images, we therefore turn to lazy loading or preloading.

Optimize HTML loading and rendering

Avoid CSS and JS blocking

Put CSS in the head as much as possible. Loading CSS blocks rendering of the page, which here is beneficial: it avoids a flash of unstyled content while the page loads. Loading CSS also blocks the execution of JS, though not the downloading of JS.

Put script tags at the bottom of the HTML whenever possible, because JS blocks rendering of the page and usually depends on DOM nodes. So HTML and CSS should load first and JS last. Alternatively, use the defer and async attributes to load JS files that are not needed for the first screen without blocking it. Scripts with defer run after the DOM is ready and execute in order.

With async in particular, pay attention to dependencies between scripts: execution order is not guaranteed, and a missing dependency blocks the logic of subsequent JS, so interdependent scripts must be prioritized or loaded with defer.

Besides defer and async, JS can also be loaded dynamically. This approach is commonly used for components: a component wraps its logic and uses JS to dynamically load its own JS and CSS.
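A minimal sketch of such dynamic loading (the URL and callback names are hypothetical):

```javascript
// Sketch: load a script on demand by injecting a <script> tag;
// the returned Promise resolves once the file has been fetched and run.
function loadScript(src) {
  return new Promise((resolve, reject) => {
    const s = document.createElement('script');
    s.src = src;
    s.async = true;                    // do not block HTML parsing
    s.onload = () => resolve(src);
    s.onerror = () => reject(new Error(`failed to load ${src}`));
    document.head.appendChild(s);
  });
}

// e.g. loadScript('/js/chart-widget.js').then(initChart);  // names hypothetical
```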

Lazyload & Preload

Lazy loading is used when a page contains a large number of images whose loading can be deferred until the user needs them. The goal is to reduce server requests and wasted network traffic while improving the user experience. Some e-commerce pages, for example, load product data as the user scrolls to it rather than listing all of it at once.

Pull-down-to-refresh and pull-up-to-load are also common patterns in H5 pages, though iOS browser quirks require some extra handling.
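A common way to implement lazy loading today is IntersectionObserver (a sketch, assuming images are written as `<img data-src="…">` placeholders):

```javascript
// Sketch: only assign the real URL when the image scrolls into view.
function lazyLoadImages(selector = 'img[data-src]') {
  const io = new IntersectionObserver((entries) => {
    for (const entry of entries) {
      if (!entry.isIntersecting) continue;
      const img = entry.target;
      img.src = img.dataset.src;   // this assignment triggers the request
      img.removeAttribute('data-src');
      io.unobserve(img);           // each image only needs to load once
    }
  });
  document.querySelectorAll(selector).forEach((img) => io.observe(img));
  return io;
}
```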

Preloading suits pages where the user experience and smooth interaction matter most: all data is loaded before the page opens. The most common approach is a loading progress bar: put all the static resources in an array, load them in order while computing the percentage, and move on to the next step when it reaches 100%.
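The progress-bar bookkeeping can be sketched like this (the actual image preloading would use `new Image()` in the browser; only the percentage logic is shown here so the sketch stays self-contained):

```javascript
// Sketch of loading-bar arithmetic: resources are counted as they
// finish, and the bar shows the rounded percentage.
function makeProgress(total) {
  let loaded = 0;
  return function oneLoaded() {
    loaded += 1;
    return Math.round((loaded / total) * 100);  // percent complete
  };
}

// In the browser, each Image onload/onerror handler would call
// oneLoaded() and move to the next step once it returns 100.
```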

Browser repaint and reflow

Let’s first talk about the concept of a frame. Most device screens refresh 60 times per second, i.e. 1000 / 60 ≈ 16.7 ms per frame. For the page to feel smooth, the browser must finish all the rendering work for a frame within roughly 16.7 ms; otherwise frames are dropped, the page stutters, and the user experience suffers.

Suppose the browser renders exactly one frame of an animation. Within that frame it first recalculates styles (CSS/DOM), then performs reflow (Layout), updates the render tree, repaints, and finally composites the layers. As shown in the figure below:

  • Reflow: triggered when the layout or geometry of the current page changes.
  • Repaint: some properties of the render tree are updated without affecting the overall layout; only appearance such as background or color changes.

Optimize page rendering for the browser

The key to front-end rendering performance is to reduce repaints and reflows.

Avoid animating properties that trigger reflow, such as top, height, and other layout-related properties. For example, in a @keyframes animation, move elements with transform: translateX(…) instead of top.

With translate in place of top, the Layout step disappears from the rendering pipeline for each frame, which shortens render time and improves performance.

(Figure: animating top — reflow is triggered)

(Figure: animating translate — reflow is avoided)

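A minimal sketch of the replacement (the element handle is hypothetical): animating with transform so each frame skips Layout and goes straight to compositing:

```javascript
// Sketch: slide an element `distance` px to the right over `duration` ms
// using transform, which avoids reflow on every frame.
function slideRight(el, distance, duration) {
  const start = performance.now();
  function frame(now) {
    const t = Math.min((now - start) / duration, 1);  // 0 → 1 progress
    el.style.transform = `translateX(${t * distance}px)`;
    if (t < 1) requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```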

Principles for optimizing browser page rendering

Put frequently re-rendered elements — those that trigger frequent reflow and repaint — on a separate layer, so that the area the browser must reflow and repaint shrinks, reducing CPU consumption. Browser rendering works like this:

  • First, the DOM is split into multiple layers;
  • each layer is rasterized, i.e. its nodes are drawn into a bitmap;
  • each layer is then uploaded to the GPU as a texture;
  • finally, the layers are composited. If we keep the element being animated on its own layer, its repaints and reflows leave the other layers untouched.

Given this rendering process, the concept of GPU acceleration comes in here: creating a new compositing layer effectively enables GPU acceleration. There are several ways an element gets its own layer:

  • 3D or perspective transforms;
  • video elements using accelerated video decoding;
  • canvas elements with a 3D (WebGL) context or an accelerated 2D context;
  • CSS animations on opacity, or elements using CSS transforms;
  • elements using accelerated CSS filters;
  • an element A that renders above an element B with a lower z-index, where B is a compositing layer (in other words, A is rendered on top of a compositing layer) — A is then promoted to a compositing layer as well.

Take the video case as an example: open a live video stream (say, a League of Legends broadcast) and inspect it:

You can see that the video element sits on its own layer, which matches the explanation above.

The last point (overlapping a compositing layer) is mentioned here because it is a common problem in real projects, especially when doing animations on mobile.

As shown above, element B sits on its own compositing layer, and the final image is composited on the GPU. But element A renders above element B, and no stacking order is specified between them, so the browser is forced to create a new compositing layer for A as well; both A and B end up as separate compositing layers.

Therefore, when using GPU acceleration to speed up an animation, give the animated element a higher z-index. Deliberately controlling the compositing order prevents Chrome from creating unnecessary compositing layers and improves rendering performance.
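A minimal sketch of that advice (the element handle is hypothetical): hint the browser to give the animated element its own layer and pin its stacking order explicitly:

```javascript
// Sketch: promote the animated element to its own compositing layer and
// raise its stacking order so overlapping siblings are not promoted too.
function promoteToLayer(el, zIndex = 10) {
  el.style.willChange = 'transform';  // hint: prepare a compositing layer
  el.style.position = 'relative';     // z-index only applies to positioned elements
  el.style.zIndex = String(zIndex);   // keep it above non-animated siblings
}
```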

A caution about creating layers: the browser must not only upload each render layer's image to the GPU but also keep it in memory for later reuse in the animation. Do not create layers casually; analyze the needs of the current project first. Every new rendering layer means new memory allocation and more complex layer management, an overhead that some Android devices cannot afford.

Browser storage

cookie

Cookies are generally used to store account-verification information or other sensitive user data; for example, when a partner page in a mobile project needs the login state, a transfer page can write the data into a cookie for easy access. In general, cookies serve client–server interaction rather than bulk data storage.

A cookie originates on the server: when the browser receives the response, it writes the Set-Cookie header's value to local storage. Every subsequent HTTP request under the same domain then carries the cookie, which lets the server authenticate the requester. Note: cookies are not accessible across domains.
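For illustration, the cookie string (document.cookie on the client, the Cookie request header on the server) can be parsed into a plain object like this:

```javascript
// Sketch: parse a "k=v; k2=v2" cookie string into a plain object.
function parseCookies(str) {
  return Object.fromEntries(
    str.split('; ').filter(Boolean).map((pair) => {
      const i = pair.indexOf('=');
      return [pair.slice(0, i), decodeURIComponent(pair.slice(i + 1))];
    })
  );
}

// e.g. parseCookies(document.cookie).token  // key name is hypothetical
```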

This is an efficient mechanism, but it has a cost: every request carries the cookie, so with many requests the extra bytes add up, slowing loading and wasting bandwidth. One remedy is to serve static resources from a separate, cookie-free domain — for example via a CDN — splitting the main site from the resource domain. This mostly matters for high-traffic pages; for a page with fewer than around 100,000 PV it is usually negligible on today's networks.

This reminds me of an interview at a small company some time ago. When I asked about their web performance optimization, the technical director said, “If the traffic is under 100,000, it’s fine as long as the interface feels normal — keep it as convenient as possible.” Everyone laughed. But as a developer, from a technical point of view, do the best you can, no matter how big or small the project.

localStorage & sessionStorage

localStorage & sessionStorage: compared with cookies, these two H5 additions are designed specifically for storing data, with a capacity of around 5 MB. The key difference: localStorage persists after the browser closes, while sessionStorage is cleared when the session ends. They work well as temporary storage for things like form state or shopping-cart data. If that capacity limit ever becomes a problem, consider IndexedDB.
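A sketch of the shopping-cart use case (the storage backend is passed in so the same code works with localStorage or sessionStorage; the key name is hypothetical):

```javascript
// Sketch: persist structured data through the Storage API as JSON.
function saveCart(storage, items) {
  storage.setItem('cart', JSON.stringify(items));
}

function loadCart(storage) {
  try {
    return JSON.parse(storage.getItem('cart')) || [];  // null → empty cart
  } catch {
    return [];  // corrupted entry: fall back to an empty cart
  }
}

// In the browser: saveCart(localStorage, [{ id: 1, qty: 2 }]);
```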

IndexedDB

IndexedDB is a browser API — essentially a browser-side database — used only when large amounts of structured data must be stored on the client. It sees little use in practice, because the client rarely stores much data: persistent data lives on the backend, and the front end mostly keeps temporary and test data. IndexedDB is also used to build offline applications.

Web Worker

Workers are used for large, computation-heavy JS. In 3D rendering, for example, the JS files are big and the computation is heavy, while JS executes on a single thread, so one long task forces the next to wait. A worker runs independently of the current page, processing different JS in the background while the main page listens for and aggregates the results.
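A sketch of offloading a function to a worker (the worker body is inlined via a Blob URL purely to keep the example self-contained; in a real project you would point new Worker() at a separate file):

```javascript
// Sketch: run a pure function in a Web Worker so a long computation
// does not freeze the main thread.
function runInWorker(fnSource, input) {
  const code = `onmessage = (e) => postMessage((${fnSource})(e.data));`;
  const blob = new Blob([code], { type: 'text/javascript' });
  const worker = new Worker(URL.createObjectURL(blob));
  return new Promise((resolve) => {
    worker.onmessage = (e) => { resolve(e.data); worker.terminate(); };
    worker.postMessage(input);
  });
}

// e.g. runInWorker('(n) => heavySimulation(n)', 1e6).then(render);  // names hypothetical
```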

progressive web app

PWA refers to a set of new Web capabilities, combined with UI design, that aim for the best possible user experience; it is one direction for the future of Web apps. Put bluntly, a PWA tries to get as close as possible to the experience of a native app. It has three main traits. First, the app can open and work even without a network.

Second, it improves speed for the smoothest possible experience. Third, it can install a clickable icon on the desktop or home screen, just like an ordinary app, opening full-screen and supporting push notifications. PWA was proposed by Google.

Browser cache

A good caching strategy reduces HTTP requests and page latency, avoids loading unnecessary data, and lightens network load, all of which speed up page response and give users a better browsing experience. Note that caching only improves response time from the second visit onward; the first visit is governed by the network environment and device.

The browser cache stores files on the client. On each visit, the browser checks whether its cached copy is still valid; if so, it skips the request to the server and simply fetches and uses the copy from memory. If the copy has expired, the browser makes a request to the server. This eliminates unnecessary requests and speeds up page response.

Cache information travels in HTTP headers: header fields configure the caching policy and determine whether a resource must be re-fetched from the server. These fields appear both in response headers and in request headers, so the client and the server each know about the other's cache.

Cache-Control

Cache-Control is the HTTP header that controls the cache policy. It includes the following directives: max-age, s-maxage, private, public, no-cache, and no-store.

  • max-age

max-age sets the maximum time a resource stays fresh, counted from the time of the request. Within that window, no request is sent to the server at all; the browser serves the file straight from its cache.

Here Cache-Control: max-age=86400 means the resource is fresh for 86,400 seconds (86400 / 3600 = 24 hours): for a day the browser will not request it from the server even if, say, the site logo changes on the server — the response comes "from memory cache".

  • s-maxage

s-maxage is similar to max-age in that no request reaches the origin for the given period, except that s-maxage applies to shared caches such as a CDN: once set, requests within the window are answered by the CDN without going back to the origin. When both appear in one Cache-Control header, s-maxage overrides max-age (and Expires) for shared caches.

  • Private and public

private marks a private cache that only the end user's browser may store; public marks a shared cache that intermediaries, and thus multiple users, may store. s-maxage only takes effect for shared caches, i.e. when the response is cacheable as public.

  • no-cache

no-cache means the client must revalidate with the server on every use to check whether the cache has expired — the opposite of max-age, which skips the server entirely for a period. It is roughly equivalent to setting max-age to 0 together with private: Cache-Control: private, max-age=0, no-cache.

  • no-store

no-store disables caching entirely; every load requires a full resource request.

  • Expires

Expires sets an absolute expiry date for a cached resource. Like max-age, it lets the browser skip the server for as long as the cache is valid, but Cache-Control's max-age takes precedence over Expires when both are present.

Because Expires is a strong cache header, no request goes to the server during the validity window, regardless of whether the file has been updated on the server. Expires is also the older mechanism (HTTP/1.0), so it has an edge in browser compatibility — though by now that is a rather dated concern.

  • Last-Modified and If-Modified-Since

Last-Modified / If-Modified-Since record the time a file was last changed, as part of the client–server cache negotiation mechanism: Last-Modified travels in the response header, If-Modified-Since in the request header.

So the response header carries a Last-Modified time — the time the file was last changed on the server. The browser saves it, and on the next request sends it back as If-Modified-Since, telling the server "my copy was last updated at this time."

If the server's file has changed since then, it returns the new file with status code 200. If the resource has not changed, the server returns 304 and the browser uses its cache.

  • ETag and If-None-Match

The server generates a hash from the file's content to identify the resource's state. On the next request, the browser sends the hash back (If-None-Match) and the server compares it to decide whether the file has changed. What problems does this solve? Last-Modified alone has these defects:

the file's timestamp changes but its content does not; the server cannot obtain the resource's last-modified time precisely; the resource changed within the same second, which Last-Modified (with its one-second granularity) cannot detect.

ETag is content-based: whatever the operation, the hash changes only when the content changes. ETag also has higher priority than Last-Modified. Note that both are only consulted during revalidation — the browser first checks whether the cache has expired (max-age) before using either of them.

Vary: Accept-Encoding matters for gzip-compressed resources cached by a proxy server. If a client does not support compression, it could otherwise receive the wrong (compressed) data, so the proxy may keep two versions of the resource: one compressed and one uncompressed. It also worked around old Internet Explorer versions, which mishandled Vary headers other than Accept-Encoding and User-Agent (IE is now retired).

Summary: a page's optimization plan must be adapted to the needs of the current project to achieve the best real-world experience.


When I first started writing H5, I was a badminton prince who dominated the court. I miss…

@GavinUI