The role of nginx in applications

  • Solving cross-domain requests

  • Request filtering

  • Configuring gzip

  • Load balancing

  • Static resource server

Nginx is a high-performance HTTP and reverse proxy server, as well as a general-purpose TCP/UDP proxy server, originally written by the Russian developer Igor Sysoev.

Nginx is now a near-standard component of large websites. In most cases you won’t have to configure it from scratch yourself, but it is still important to understand what role it plays in your application and how it solves the problems it is responsible for.

I’ll explain what Nginx does in an application based on real enterprise use cases.

Nginx is a high-performance reverse proxy server. So what is a reverse proxy?

Forward proxy and reverse proxy

A proxy is an intermediate layer of servers between the client and the server. The proxy receives the client’s request, forwards it to the server, and then forwards the server’s response back to the client.

Both forward and reverse proxies implement the above functions.

Forward proxy

A forward proxy is a server that sits between the client and the origin server. To get content from the origin server, the client sends a request to the proxy and specifies the target (the origin server); the proxy then forwards the request to the origin server and returns the content it obtains to the client.

The forward proxy works on behalf of us, that is, of the client. Through it, the client can access server resources that it could not reach directly.

The forward proxy is transparent to us, but opaque to the server, which does not know whether it is receiving access from the proxy or from the real client.

Reverse proxy

In reverse proxy mode, a proxy server accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the results to the Internet clients that made the requests. In this case the proxy server acts, externally, as a reverse proxy server.

A reverse proxy serves the server. The reverse proxy helps the server receive requests from clients, forward requests, and balance loads.

The reverse proxy is transparent to the server but opaque to us: we do not know that we are talking to a proxy server, while the server knows that the reverse proxy is serving it.

The basic configuration

Configuration structure

Here is the basic structure of an nginx configuration file:

events { }

http {
    server {
        location path {
            ...
        }
        location path {
            ...
        }
    }

    server {
        ...
    }
}

  • main: the global configuration of nginx, affecting the server as a whole.

  • events: configuration that affects network connections between nginx and its clients.

  • http: can nest multiple server blocks; here you configure proxying, caching, log formats, and most other features, including third-party modules.

  • server: configures the parameters of a virtual host; one http block can contain multiple server blocks.

  • location: configures request routing and how individual paths are processed.

  • upstream: configures the addresses of the back-end servers; an essential part of load balancing.
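
To make the layering concrete, here is a minimal annotated sketch that puts these blocks together (the directive values, address, and domain are placeholders, not recommendations):

worker_processes  1;             # main context: global settings for nginx as a whole

events {
    worker_connections  1024;    # events context: connection handling
}

http {
    upstream backend {           # upstream: list of back-end servers
        server 127.0.0.1:8080;   # placeholder address
    }

    server {                     # server: one virtual host
        listen      80;
        server_name example.com; # placeholder domain

        location / {             # location: routing for a request path
            proxy_pass http://backend;
        }
    }
}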

Built-in variables

Here are some of the built-in global variables commonly used in nginx configuration. You can use them anywhere in the configuration.

  • $host: the Host from the request header; if the request carries no Host header, it equals the configured server name
  • $request_method: the client request method, e.g. GET, POST
  • $remote_addr: the client’s IP address
  • $remote_port: the client’s port
  • $args: the arguments (query string) of the request
  • $content_length: the Content-Length field of the request header
  • $http_user_agent: the client’s User-Agent information
  • $http_cookie: the client’s cookie information
  • $server_protocol: the protocol of the request, e.g. HTTP/1.0, HTTP/1.1
  • $server_addr: the server’s address
  • $server_name: the server’s name
  • $server_port: the server’s port number
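
As a quick illustration (a sketch, not from the original article; the format name and log path are made up), these variables can be combined into a custom access-log format:

http {
    # Custom log format built from the built-in variables above
    log_format  vars  '$remote_addr:$remote_port "$request_method $args" '
                      '$server_protocol "$http_user_agent"';

    server {
        listen      80;
        server_name example.com;

        # Write this virtual host's access log using the custom format
        access_log  /var/log/nginx/vars.log  vars;
    }
}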

Solving cross-domain requests

Let’s go back to basics: what is the cross-domain problem all about?

Definition of cross-domain

The same-origin policy restricts how documents or scripts loaded from one origin can interact with resources from another origin. It is an important security mechanism for isolating potentially malicious files. Read operations between different origins are generally not allowed.

Definition of same origin

If two pages have the same protocol, port (if specified), and domain name, then the two pages have the same origin.

How nginx solves the cross-domain problem

For example:

  • The domain name of the front-end server is fe.server.com

  • The domain name of the back-end service is dev.server.com

Now, if a page on fe.server.com makes a request to dev.server.com, it is bound to be a cross-domain request.

All we need to do is start an nginx server, set server_name to fe.server.com, then set up a location to intercept the front-end requests that need to cross domains, and finally proxy those requests to dev.server.com. The configuration is as follows:

server {
    listen 80;
    server_name fe.server.com;
    location / {
        proxy_pass http://dev.server.com;
    }
}

This neatly sidesteps the browser’s same-origin policy: the page served from fe.server.com talks to nginx at fe.server.com, which is a same-origin request, and the request that nginx then forwards to dev.server.com is made server-side, so it never triggers the browser’s same-origin policy.
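
In practice you usually proxy only the API paths instead of everything under /. A sketch of that variant, assuming the front end calls its back end under an /api prefix (the prefix and the forwarded headers are illustrative, not from the original configuration):

server {
    listen      80;
    server_name fe.server.com;

    # Only requests whose path starts with /api are proxied to the back end
    location /api/ {
        proxy_set_header  Host       $host;
        proxy_set_header  X-Real-IP  $remote_addr;
        # proxy_pass has no URI part, so the /api/ prefix is forwarded unchanged
        proxy_pass        http://dev.server.com;
    }
}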

Request filtering

Filter by status code

error_page 500 501 502 503 504 506 /50x.html;
    location = /50x.html {
        # Change the following path to the path where the HTML file is stored.
        root /root/static/html;
    }

Filtering by URL name, with exact matching: all URLs that do not match are redirected to the home page.

location / {
    rewrite  ^.*$ /index.html  redirect;
}

Filtering by request type.

if ($request_method !~ ^(GET|POST|HEAD)$) {
    return 403;
}
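
nginx also offers the limit_except directive (valid only inside a location block) as a more idiomatic alternative to if for method filtering; a sketch, with /api/ as an illustrative path:

location /api/ {
    # Allow only GET and POST (allowing GET also allows HEAD);
    # any other method is denied with 403
    limit_except GET POST {
        deny all;
    }
}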

Configuring gzip

Gzip is one of the three compression formats standardized for HTTP. The vast majority of websites use gzip to transfer HTML, CSS, JavaScript, and other resource files.

For text files the effect of gzip is very noticeable: once it is enabled, the traffic needed for transmission typically drops to roughly 1/4 to 1/3 of the original size.

Not every browser supports gzip, so how do you know whether the client does? The Accept-Encoding field in the request header declares which compression encodings the client accepts.

Enabling gzip requires support on both the client and the server. If the client can decompress gzip, then the server simply needs to be able to return gzip-compressed files, and we can configure that through nginx. A Content-Encoding: gzip response header indicates that gzip is enabled on the server.

    gzip                    on;
    gzip_http_version       1.1;
    gzip_comp_level         5;
    gzip_min_length         1000;
    gzip_types text/csv text/xml text/css text/plain text/javascript application/javascript application/x-javascript application/json application/xml;

gzip

  • Enable or disable the Gzip module

  • The default value is off

  • The value can be on or off

gzip_http_version

  • The lowest HTTP version required to enable GZip

  • The default value is HTTP/1.1

Why isn’t the default version 1.0 here?

HTTP runs on top of TCP connections and therefore inherits TCP’s characteristics, such as the three-way handshake and slow start.

With persistent connections enabled, the server leaves the TCP connection open after responding. Subsequent requests and responses between the same client/server pair can be sent over this connection.

To maximize HTTP performance, it is important to use persistent connections.

HTTP/1.1 supports TCP persistent connections by default, and HTTP/1.0 can also enable them by explicitly sending Connection: keep-alive. For HTTP messages on a persistent TCP connection, the client needs a mechanism to determine exactly where a message ends, and in HTTP/1.0 that mechanism is Content-Length only. The Transfer-Encoding: chunked mechanism introduced in HTTP/1.1 solves this problem nicely.

Nginx has a corresponding directive for chunked transfer, chunked_transfer_encoding, which is enabled by default.

When gzip is enabled, nginx does not wait for the whole file to be compressed before responding; it compresses and streams the response as it goes, which can significantly improve TTFB (Time To First Byte). The only problem is that when nginx starts returning the response, it has no way of knowing how large the transferred file will end up being, so it cannot send the Content-Length response header.

Therefore, under HTTP/1.0, enabling gzip means Content-Length is unavailable, which forces a choice between persistent connections and gzip. That is why gzip_http_version defaults to 1.1.

gzip_comp_level

  • Compression level: the higher the level, the better the compression ratio but the longer the compression takes (smaller transfers, more CPU consumption)

  • The default value is 1

  • The compression level ranges from 1 to 9

gzip_min_length

  • Sets the minimum number of bytes a response must have to be compressed; responses whose Content-Length is smaller than this value will not be compressed

  • Default value: 20

  • If the value is too small, compression may make very short responses larger than the originals; a value above 1000 is recommended

gzip_types

  • The type of file to be gzip compressed (MIME type)

  • Default: text/html (JS and CSS files are not compressed by default)

Load balancing

What is load balancing

Picture a hall with many service windows at the front and a crowd of people who need service. We need a tool or strategy to distribute all these people across the windows, so that resources are fully used and queueing time is kept short.

Think of the service windows as our back-end servers and the crowd as the huge number of clients sending requests. Load balancing helps us distribute the large volume of client requests across the servers sensibly, so that server resources are fully utilized and request latency stays low.

How nginx implements load balancing

upstream specifies the list of back-end server addresses:

upstream balanceServer {
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}

Then, in the server block, intercept the requests that need to be handled and forward them to the server list configured in upstream:

server {
    server_name fe.server.com;
    listen 80;
    location /api {
        proxy_pass http://balanceServer;
    }
}

The configuration above only tells nginx which servers it can forward requests to; it does not specify an allocation policy.

Load balancing policies in nginx

Round-robin (polling) policy

The default policy: client requests are distributed to the servers in turn. It generally works well, but if one of the servers comes under too much pressure and starts responding slowly, every user assigned to that server is affected.

upstream balanceServer {
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}
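
Round robin can also be weighted so that more capable machines receive a proportionally larger share of requests; a minimal sketch (the weights below are illustrative):

upstream balanceServer {
    # weight defaults to 1; a higher weight receives proportionally more requests
    server 10.1.22.33:12345 weight=3;
    server 10.1.22.34:12345 weight=2;
    server 10.1.22.35:12345;
}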

Least-connections policy

Requests are sent first to the servers under less pressure (those with fewer active connections), which evens out the queue lengths and avoids piling more requests onto an already busy server.

upstream balanceServer {
    least_conn;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}

Fastest response time policy

Requests are sent first to the server with the shortest response time. This is not part of the open-source nginx core: the fair directive below comes from a third-party module, while NGINX Plus offers a comparable built-in least_time policy.

upstream balanceServer {
    fair;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}

Client IP binding

Requests from the same IP address are always routed to the same server, which effectively solves the session-sharing problem of dynamic websites.

upstream balanceServer {
    ip_hash;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}

Static resource server

location ~* \.(png|gif|jpg|jpeg)$ {
    root    /root/static/;
    autoindex on;
    access_log  off;
    expires     10h;    # Set the expiration time to 10 hours
}

Requests ending in png, gif, jpg, or jpeg are matched and served from the local path specified by root, i.e. a directory on the nginx host. Some caching can also be configured, as with expires above.
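
As a further sketch of the “set up some caching” idea (the file types and max-age are assumptions, not from the original article), long-lived assets such as versioned JS/CSS bundles can be given an explicit Cache-Control header:

location ~* \.(js|css)$ {
    root        /root/static/;
    access_log  off;
    # Long-lived caching, suitable for fingerprinted build artifacts
    add_header  Cache-Control "public, max-age=31536000, immutable";
}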