Question:

When the company was developing a 3D panoramic feature, the three.js framework was needed for 3D rendering. The 3D files are generally over 40 MB; this time the files to be rendered were around 44 MB, and the download time was 80 seconds.

Previous solutions:

Patiently persuade the product manager: this is normal, a file this large is bound to take that many seconds, don't worry; it's not that it won't load at all, see, it did load in the end, didn't it?

The solution now:

The file is so big, can it be compressed before it is transferred? Nginx gzip can compress it in two ways:

1. Real-time compression: nginx spends CPU compressing the file on the fly and returns the result to the browser. With this approach the ETag in the response header gets a 'W/' prefix (a weak ETag).
2. Static compression (gzip_static): nginx serves a pre-built .gz file sitting next to the original. The gzip_static module is not built into nginx by default, so it has to be added at compile time. In addition, the vue project build needs to be configured to generate the .gz files during packaging.
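Below is a minimal nginx sketch of the two approaches described above; the paths, MIME types and parameter values are illustrative assumptions, not the project's actual configuration.

```nginx
# Approach 1: real-time compression.
# nginx compresses the response on the fly, spending CPU per request;
# with this approach the ETag in the response header becomes weak ("W/...").
http {
    gzip              on;
    gzip_comp_level   6;      # CPU vs. ratio trade-off (assumed value)
    gzip_min_length   1k;     # skip tiny responses
    gzip_vary         on;     # emit "Vary: Accept-Encoding"
    gzip_types        application/javascript text/css application/json
                      application/octet-stream;   # assumed types for JS/CSS/JSON and the model files

    # Approach 2: static compression.
    # nginx serves a pre-built foo.gz that sits next to foo; it never creates
    # the .gz itself, so the files must be generated during the vue build.
    # Requires nginx compiled with --with-http_gzip_static_module.
    server {
        listen 80;
        location /static/ {
            root        /var/www/app;   # assumed deployment path
            gzip_static on;             # prefer the .gz file when the client accepts gzip
        }
    }
}
```

Note that gzip_static only looks for an existing .gz file; it does not create one, which is exactly why the .gz files have to be produced during packaging (or by a separate gzip step on the server).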

For detailed parameter settings, see "Enabling GZIP Compression for a Vue Project: Deployment and Performance Optimization" – GuopengJU – cnblogs.

Effect: a 44 MB IFC file is compressed to 8 MB, and the load time drops from 80 seconds to 12 seconds.

In addition, static resource files such as JS, CSS and JSON are gzip-compressed in the same way.

Any new ideas?

Future solutions:

Please leave better methods in the comments!!


Spin-off: proxy_cache caching

The starting point: after the gzip compression above, the problem of transferring large files is basically solved, so the next goal is to further improve loading speed. After static resources are loaded for the first time, a memory cache and a disk cache are generated, but these are client-side caches; in other words, they do nothing for another user visiting the website. That is where server-side caching with proxy_cache comes in:

1. proxy_cache is very simple to use, and plenty of blogs explain it, for example blog.csdn.net/qq_3172…
2. add_header nginx-cache $upstream_cache_status; the cache attribute added to the response header is customizable, so you can tell from the header whether a request hit the cache.
3. I divide this kind of caching into two situations:
(1) Caching back-end responses: the result of a request can be cached, and the effect is remarkable. The drawback is that caching interfaces with high real-time requirements is not acceptable in practice, so it has to be applied according to the situation.
(2) Caching static resources (you cannot simply add proxy_cache; an extra layer of proxy forwarding is needed): tutorial at www.jb51.net/article/… The effect here is not obvious, but judging from the analysis of other people's CDN resources, they do cache static files, so whether it is really useful remains to be observed.
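As a rough illustration of both situations, here is a minimal sketch; the cache zone name, cache path, upstream address and validity times are assumptions, not values taken from this article.

```nginx
http {
    # Shared cache storage: 10 MB of keys in memory, entries dropped after
    # 7 days without access, at most 1 GB on disk (all values assumed).
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m
                     inactive=7d max_size=1g;

    upstream app_backend {
        server 127.0.0.1:8080;   # assumed back-end / inner static server
    }

    server {
        listen 80;

        # Situation (1): cache back-end responses. Only sensible for endpoints
        # without strict real-time requirements, so keep the validity short.
        location /api/ {
            proxy_pass        http://app_backend;
            proxy_cache       static_cache;
            proxy_cache_valid 200 10s;
            add_header        nginx-cache $upstream_cache_status;  # MISS / HIT / EXPIRED
        }

        # Situation (2): cache static resources. proxy_cache cannot be applied
        # to files served straight from disk, hence the extra proxy hop.
        location /static/ {
            proxy_pass        http://app_backend;
            proxy_cache       static_cache;
            proxy_cache_valid 200 304 12h;
            add_header        nginx-cache $upstream_cache_status;
        }
    }
}
```

Requesting the same static resource twice and checking the nginx-cache header (MISS on the first request, HIT on the second) is a quick way to confirm that the cache is actually being used.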