A few months ago, I was traveling abroad and wanted to show a friend my personal (static) website. I tried to navigate to it, but it took far longer than I expected. The site has very little dynamic content: some animation and responsive design, but the content itself stays the same. I was shocked by the results: DOMContentLoaded at 6 seconds and a full page load of 8.2 seconds. The static site made 50 requests for a total of 2.9 MB of data. I'm used to a 1 Gbps connection from Los Angeles to a server in San Francisco, which makes even this monster of a site feel fast. In Italy, on an 8 Mbps connection, it's a different story.

This was my first look at website optimization. Until then, whenever I wanted to add a library or resource, I would just drop it in and point to it with src="...". I paid no attention to performance of any kind, whether caching or lazy loading.

I started looking for people who had been through similar experiences. Unfortunately, much of the literature on static site optimization is out of date: some of it comes from discussions in 2010 or 2011, some of its assumptions no longer hold, and some of it just rehashes advice that is no longer useful.

However, I found two excellent resources: High Performance Browser Networking and Dan Luu's similar experience optimizing static sites. While I didn't go as far as Dan in stripping out styling and content, I still made my pages load about ten times faster. DOMContentLoaded now fires in roughly a fifth of a second, and the full page loads in 355 ms.

Process

The first step was to profile the site as a whole. I wanted to figure out what was taking the most time and where requests could be parallelized. I tested my site from around the world using several tools, including:

  • tools.pingdom.com/
  • www.webpagetest.org/
  • tools.keycdn.com/speed
  • developers.google.com/web/tools/l…
  • developers.google.com/speed/pages…
  • webspeedtest.cloudinary.com/

Some of these tools give helpful advice, but they can only do so much when your static site makes 50 requests, everything from an empty spacer GIF left over from the 90s that was never even used, to six downloaded fonts of which I used only one.

Here is the timeline of my site, tested against the Web Archive. I don't have the original raw numbers, but it looks very much like what I saw a few months ago.

I wanted to improve everything I could control, from the content and JavaScript to the web server (Nginx) and DNS settings.

Optimizations

Minify and consolidate content

The first thing I noticed was that I was making over a dozen requests each for CSS and JS, some of them over HTTPS to entirely different sites, so they couldn't share a keep-alive connection. This adds multiple round trips to different CDNs or servers, and some JS files request further resources, producing the blocking cascade seen above.

I used webpack to consolidate my resources into a single file. Whenever the content changes, it automatically minifies and bundles every dependency into that one file.
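As a rough sketch, the webpack side can be as small as the following; the entry point, output path, and loaders here are illustrative rather than the site's exact configuration, and it assumes webpack 4+ with style-loader and css-loader installed:

    // webpack.config.js - a minimal sketch, not the site's exact configuration.
    // Assumes a single entry point at src/index.js that imports the CSS.
    const path = require('path');

    module.exports = {
      mode: 'production',              // enables minification out of the box
      entry: './src/index.js',
      output: {
        path: path.resolve(__dirname, 'dist'),
        filename: 'bundle.js',         // the single file referenced from the HTML
      },
      module: {
        rules: [
          // Inline imported CSS into the bundle.
          { test: /\.css$/, use: ['style-loader', 'css-loader'] },
        ],
      },
    };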

I experimented with several setups. Originally, the single bundle.js file sat in the header and was blocking. Its final size was 829 KB, including every non-image resource (fonts, CSS, all the libraries and dependencies, and my own JS). The biggest contributor was Font Awesome, which accounted for 724 KB of the 829 KB.

I went through Font Awesome's fonts and styles and kept only the three icons I actually use: fa-github, fa-envelope, and fa-code. I used Fontello to pull in just the icons I need, which brought the size down to 94 KB.

The way the site is currently built, it won't render properly with only the stylesheet loaded, so I kept the blocking bundle.js. It now loads in about 118 ms, an order of magnitude better than before.

There are some additional benefits: I no longer point to third-party resources or CDNs, so the user doesn't need to 1) look up and connect to that CDN, 2) complete an HTTPS handshake with it, or 3) wait for the resource to finish downloading from a third party.

While CDNs and distributed caching are useful for large, distributed sites, they aren't useful for my small static site, and giving them up costs at most a few hundred milliseconds, which is acceptable.

Compressing resources

I was loading an 8 MB headshot image and displaying it at 10% of its width and height. This isn't just a lack of optimization, it's a careless waste of users' bandwidth.

I compressed all of my images with webspeedtest.cloudinary.com/. It also suggested switching to WebP, but I wanted the site to be as compatible as possible, so I stuck with JPG. It would be possible to serve WebP only to browsers that support it, but I wanted to keep things simple, and adding that layer of abstraction didn't seem worth it.

Improving the web server – HTTP/2, TLS and more

The first step was switching to HTTPS. Originally I was running Nginx on port 80, simply serving files from /var/www/html.

I set up HTTPS and redirected all HTTP requests to HTTPS. I got my TLS certificate from Let's Encrypt (a great organization that also issues wildcard certificates).

By simply adding the http2 parameter, Nginx can take advantage of the newest HTTP features. Note that if you want to use HTTP/2 (formerly SPDY), you must use HTTPS. Learn more here.

You can also take advantage of HTTP/2 server push with http2_push images/headshot.jpg;.
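Putting those pieces together, the Nginx side looks roughly like this; the domain and certificate paths are placeholders, and http2_push assumes an Nginx version that supports it:

    # Sketch of the relevant Nginx config. example.com and the certificate
    # paths are placeholders; adjust to your own domain and cert locations.
    server {
        listen 80;
        server_name example.com;
        return 301 https://$host$request_uri;   # force HTTPS
    }

    server {
        listen 443 ssl http2;                   # the http2 parameter enables HTTP/2
        server_name example.com;

        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

        root /var/www/html;

        location / {
            # Push the headshot image alongside the HTML response.
            http2_push /images/headshot.jpg;
        }
    }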

Note: enabling gzip together with TLS can expose you to BREACH. Given that this is a static site and the actual risk of BREACH is low, keeping compression on doesn't worry me.

Leveraging caching and compression

What else can Nginx do? The first things that came to mind were caching and compression.

I was sending raw, uncompressed HTML. By adding a single gzip on; line, the transfer dropped by 50 percent, from 16,000 bytes to 8,000 bytes.
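For reference, a minimal sketch of the relevant directives; the MIME types and minimum length shown are illustrative values, not necessarily what the site uses:

    # Compress text-based responses on the fly.
    gzip on;
    gzip_types text/plain text/css application/javascript application/json image/svg+xml;
    gzip_min_length 256;    # skip tiny responses where compression isn't worth it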

We can go even further: with gzip_static on;, Nginx looks for pre-compressed versions of files before compressing on the fly. This combines with the webpack configuration above, where ZopfliPlugin lets us pre-compress all our files at build time. That saves compute resources on the server and lets us maximize compression without sacrificing speed.
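On the webpack side, a hedged sketch of how that build-time compression can be wired in, assuming the zopfli-webpack-plugin package; the option values shown are illustrative:

    // Addition to the plugins section of webpack.config.js. Emits a .gz file
    // next to each matching asset at build time so gzip_static can serve it.
    const ZopfliPlugin = require('zopfli-webpack-plugin');

    module.exports = {
      // ...entry, output and module rules as in the earlier sketch...
      plugins: [
        new ZopfliPlugin({
          asset: '[path].gz[query]',    // where to write the compressed file
          test: /\.(js|css|html)$/,     // which assets to pre-compress
          threshold: 1024,              // skip files smaller than 1 KB
          minRatio: 0.8,                // only keep results that actually shrink
        }),
      ],
    };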

Not only that, but my site rarely changes, so I want resources cached for as long as possible. This way, users don’t need to re-download everything (especially bundle.js) on future visits.

My updated server configuration is below. Note that it doesn't cover every change I made, such as the TCP settings, the gzip directives, and file caching. If you want to learn more about tuning Nginx, read this article on tuning Nginx.

The corresponding server configuration:
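A rough sketch of the caching-related parts, with a placeholder domain, certificate directives omitted, and illustrative file types and cache lifetimes rather than the site's exact values:

    server {
        listen 443 ssl http2;
        server_name example.com;                 # placeholder domain
        # ssl_certificate / ssl_certificate_key as issued by Let's Encrypt (omitted)

        root /var/www/html;

        # Serve the pre-compressed .gz files produced at build time.
        gzip_static on;

        # Long-lived caching for assets that rarely change (bundle.js, images, fonts).
        location ~* \.(js|css|jpg|jpeg|png|gif|ico|woff2?)$ {
            expires 1y;
            add_header Cache-Control "public, immutable";
        }
    }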

Lazy loading

Finally, one small change to the site gives a modest speed-up. Five of the images aren't visible until you click the relevant tab, yet they were being downloaded along with everything else (because of their img tags).

I wrote a short script to modify the attributes of elements that have the lazyload class.

That way, once the content has finished loading, the script rewrites those tags to point at the real image sources and the images load in the background.
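A minimal sketch of such a script, assuming the hidden images carry a lazyload class and keep their real URL in a data-src attribute (both assumptions about the markup, not necessarily the original implementation):

    // Once the initial document has loaded, swap data-src into src so the
    // images hidden behind the tabs download in the background.
    document.addEventListener('DOMContentLoaded', function () {
      document.querySelectorAll('img.lazyload').forEach(function (img) {
        img.src = img.dataset.src;
        img.removeAttribute('data-src');
      });
    });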

Conclusion

These changes took my first page load from more than 8 seconds down to under 350 ms, with repeat visits around 200 ms. I highly recommend reading High Performance Browser Networking: it's an easy read, it gives an overview of the modern web stack, and it offers good suggestions for optimizing every layer of the current web model.

Did I miss anything? Does anything here violate best practices, or is there a better way to improve performance? Feel free to contact JonLuca De Caro!