A bit of self-mockery

Three months. For three whole months I haven't written a thing, and even my wife couldn't help saying: "You don't seem very ambitious this year." I shot back: "Have a little confidence, drop the 'seem'."

April was taken up by my wedding, May was buried under requirements, and June finally slowed down a little, only to collide with the NBA playoffs and the European Championship, so what little study time there was went straight into lying flat.

In fact, ever since I started in this line of work I have kept bumping into Nginx: learning to install it, learning to configure it, learning to use it to solve problems that the front end sort of owns but doesn't really own. The knowledge came in scraps from countless Baidu searches and never stuck, so…

About Nginx

If you have no idea what Nginx is, I recommend skimming the Baidu Baike entry for Nginx first;

One thing to remember: Nginx is a powerful, high-performance web server and reverse proxy with many excellent features;

High Performance Web Services

A web service here really means a static resource service. Now that front end and back end are separated, the front end's deliverable is essentially static resources. What is a static resource? It is what we normally get out of a Webpack build, for example the bundled HTML, JavaScript, CSS, and image files.

To make these files reachable over the Internet, the front end still needs a static resource service; the most common choices are a Node service and an Nginx service.

The most familiar Node service is webpack-dev-server, which we use every day during front-end development; once it starts, we can reach the build output through a URL like http://localhost:8907/bundle.05a01f6e.js. While I'm at it, let me recommend another Node library, serve, which can also spin up a static resource service in seconds.

In a production environment, though, we usually use Nginx instead, not because Node is bad, but because Nginx is good enough. A front-end team usually has many projects, so we put all the build output into one folder on a server (usually with a backup machine) and install Nginx to expose a single static resource service for every front-end resource. For example, with four front-end projects A, B, C, and D under a static folder, we configure Nginx like this:

server {
    listen       80;
    listen       [::]:80;
    server_name  static.closertb.site;

    # Load configuration files for the default server block.
    include /etc/nginx/default.d/*.conf;

    location / {
        alias /home/static/;
        autoindex on;
        if ($request_uri ~ .*\.(css|js|png|jpg)$) {
            # etag on;
            expires 365d;
            # add_header Cache-Control max-age=31536000;
        }
    }
}

Then we can reach project A through http://static.closertb.site/A… and project C through http://static.closertb.site/C…, getting several dishes out of one chicken, an arrangement that is especially common in the HTTPS and HTTP/2 era.

That is a simple use of Nginx as a web service; now let's take a look at the reverse proxy service.

Reverse Proxy Service

As a developer you hear the word proxy a lot, but many people can't tell forward from reverse:

As shown on the left of the figure above, a forward proxy is something the user actively sets up, as when we reach Google by climbing over the wall; the reverse proxy on the right is the behavior of the server we access, something we as users neither control nor need to care about.

Reverse proxying is a very common technique in service deployment, used for things like load balancing, disaster recovery, and caching.

For front-end development, a reverse proxy is mostly used for request forwarding to deal with cross-origin issues. When the front-end static resources are served from one domain (static.closertb.site) that does not match the API server's domain (server.closertb.site), requests become cross-origin. If the back end won't cooperate, the front end can sort it out easily with Nginx. On top of the previous configuration, we add:

server {
        listen       80;
        listen       [::]:80;
        server_name  static.closertb.site;

        location /api/ {
            proxy_pass    http://server.closertb.site;
            proxy_set_header Host      $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

        location / {
            ....
        }
}

Now when the page sends a request to http://static.closertb.site/a…, the request actually lands on http://server.closertb.site/a…. That is a service proxy in its simplest form, and the principle is much the same as webpack-dev-server's proxy option.

Those are the two typical scenarios for Nginx in front-end development: web service and proxy service. Next, let's go through some smaller but still useful details.

Nginx configuration refinement

Source of the knowledge below: http://nginx.org/en/docs/http…

root vs alias

The main difference between root and alias is how Nginx interprets the URI after location, that is, how it maps the request path onto a file on the server. For example:

location ^~ /static {
    root /home/static;
}

When the request is http://static.closertb.site/s…, the server returns the file /home/static/static/a/logo.png, i.e. '/home/static' + '/static/a/logo.png': with root, the whole request URI, matched prefix included, is appended to the directive's path.

For alias:

location ^~ /static {
    alias /home/static;
}

When the request is http://static.closertb.site/s…, the server returns /home/static/a/logo.png, i.e. '/home/static' + '/a/logo.png': with alias, only the part of the URI after the matched prefix is appended to the directive's path.
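A small extra sketch of my own, not one of the article's examples: to get the same file with root instead of alias, drop the duplicated segment from the base path, since root re-appends the matched prefix:

location ^~ /static {
    # root appends the whole URI, so /static/a/logo.png maps to /home/static/a/logo.png
    root /home;
}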

Location with and without a trailing ‘/’

You may have seen configuration A:

location /api {
    proxy_pass http://server.closertb.site;
}

And maybe you have seen configuration B:

location /api/ {
    proxy_pass http://server.closertb.site;
}

What’s the difference?

The two differ a bit like root and alias do, and the nginx documentation describes the trailing-slash case like this:

If a location is defined by a prefix string that ends with the slash character, and requests are processed by one of proxy_pass, fastcgi_pass, uwsgi_pass, scgi_pass, memcached_pass, or grpc_pass, special processing is performed: in response to a request whose URI equals that string but lacks the trailing slash, a permanent redirect with code 301 is returned pointing to the requested URI with the slash appended.

So when a request arrives at http://static.closertb.site/a…, configuration A proxies it to http://server.closertb.site/a…, while configuration B proxies it to http://server.closertb.site/u….

This detail really matters when configuring proxies.
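One closely related knob, added here for completeness rather than taken from the article: whether proxy_pass itself carries a URI also changes what gets forwarded. When it does, the part of the request URI matched by the location prefix is replaced by that URI, which is a common way to strip a local /api/ prefix before handing the request to the back end:

location /api/ {
    # the trailing slash on proxy_pass means /api/user/list is forwarded as /user/list
    proxy_pass http://server.closertb.site/;
}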

Request redirect

We can reach for rewrite when a front-end service has been taken offline, or when a user lands on a page that no longer exists: rather than showing them a 404, we quietly send them somewhere that still works. One casual configuration later, the traffic goes straight to the site home page;

location /404.html {
    rewrite  ^.*$ / redirect;
}
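For this to catch arbitrary missing pages rather than only explicit visits to /404.html, there usually also needs to be an error_page directive routing 404s into that location; a minimal sketch of my own, not from the original configuration, placed in the same server block:

# send any 404 into the /404.html location above, which then rewrites to /
error_page 404 /404.html;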

If the website has HTTPS enabled, we need to redirect all HTTP requests to HTTPS:

server {
        listen       80 default_server;
        server_name  closertb.site;

        include /etc/nginx/default.d/*.conf;
        rewrite ^(.*)$ https://$host$1 permanent;
}

Note the flag at the end of the rewrite rule: redirect issues a temporary 302, while permanent issues a permanent 301, and the two behave quite differently for browsers and search engines.
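As a quick illustration of the difference (my own sketch, with a made-up /old-page path rather than anything from the article), the only change is the flag:

location /old-page {
    # redirect: a temporary 302, the browser asks the server again next time
    rewrite ^.*$ /new-page redirect;
    # permanent: a cached 301, browsers and search engines remember it
    # rewrite ^.*$ /new-page permanent;
}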

Helping Optimize the Web Performance Experience

Web performance optimization is core front-end knowledge: fast static resource loading makes a visible difference to the user experience, and Nginx, as the most commonly used static resource server, offers plenty of ways to help. In short, three things can be configured directly:

  • Enable resource caching: since every resource file name except the .html entry is content-hashed nowadays, we can give everything other than the HTML files a long cache lifetime, which makes those resources load instantly when the user refreshes the page.
if ($request_uri ~ .*\.(css|js|png|jpg|gif)$) {
    expires 365d;
    add_header  Cache-Control  max-age=31536000;
}

Expires and Cache-Control: max-age both tell the browser that a resource expires in one year. See this article for more on HTTP caching.

  • Enable HTTP/2: now that HTTP/2 is mature, its multiplexing can greatly speed up the first-screen load of pages that pull in many resources. Enabling it means enabling HTTPS first and then adding an http2 flag to the listen directive; Nginx needs an SSL configuration to serve HTTPS (a fuller sketch follows after this list);
# listen  443 ssl default_server; 
listen  443 ssl http2 default_server;
  • Enable gzip: HTTP/2 tackles the number of concurrent requests and resource caching speeds up repeat visits, while gzip attacks the problem at its root by shrinking what travels over the network; turn it on directly at the http level;
http {
    gzip on;
    gzip_disable "msie6";
    gzip_min_length 10k;
    gzip_vary on;
    # gzip_proxied any;
    gzip_comp_level 3;
    gzip_buffers 16 8k;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
}
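To tie the pieces above together, here is a minimal sketch of what the HTTPS side of such a server block might look like; the certificate paths are placeholders of my own, not taken from the article:

server {
    listen       443 ssl http2 default_server;
    server_name  static.closertb.site;

    # certificate paths are placeholders; point them at your own files
    ssl_certificate     /etc/nginx/cert/closertb.site.pem;
    ssl_certificate_key /etc/nginx/cert/closertb.site.key;

    location / {
        alias /home/static/;
        autoindex on;
        if ($request_uri ~ .*\.(css|js|png|jpg|gif)$) {
            expires 365d;
            add_header  Cache-Control  max-age=31536000;
        }
    }
}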

A sloppy summary

The article was supposed to be published at the end of June, around the Party's 100th birthday, but procrastination and football got the better of me. A week of writing later, it is finally done; the knowledge here is simple and a bit fragmentary, but I hope it is useful to you.

The great new Italian left-back was injured last night.