Many of the problems we encounter in our work are actually Nginx-related: cross-domain issues, request filtering, gzip configuration, load balancing, and static resource serving.

Although Nginx is typically configured by operations engineers rather than by us directly, it is important to understand the role it plays in our online applications and to know how to troubleshoot problems when they arise.

So what exactly is Nginx?

1. What is Nginx

Nginx is a lightweight web server and reverse proxy server. It is widely used in Internet projects because of its small memory footprint, extremely fast startup, and high concurrency.

The figure above outlines the role Nginx plays in the overall browsing flow: it acts a bit like an entry gateway. Nginx is a high-performance reverse proxy server, so what is a reverse proxy, and what is a forward proxy?

1. Forward proxy

Because of a firewall, we cannot access the target server directly, so we use a VPN to complete the visit. This example shows what a forward proxy is: the proxy stands in for the client. The client knows exactly which server it is accessing, but the target server does not know whether it is receiving a request from a proxy or from the real client.

Here’s the official explanation:

A forward proxy is a server that sits between the client and the origin server. To get content from the origin server, the client sends a request to the proxy and names the destination (the origin server); the proxy then forwards the request to the origin server and returns the content to the client. The client must be explicitly configured to use the forward proxy.

2. Reverse proxy

As the figure shows, when a request comes in from the Internet, it is forwarded and proxied through to the intranet. A reverse proxy therefore stands in for the server, and the process is transparent to the client.

Here’s the official explanation:

A reverse proxy server sits between the user and the target server, but to the user, the reverse proxy server is the target server: the user accesses the reverse proxy directly to obtain the target server's resources.

2. Basic configuration of Nginx

Here is the basic structure of the nginx configuration file:
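A minimal skeleton of that structure (the directive values here are illustrative, not a recommended production setup):

```nginx
# main (global) context
worker_processes auto;

events {
    worker_connections 1024;
}

http {
    upstream backend {
        server 127.0.0.1:8080;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
        }
    }
}
```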

Among them:

  • main: the global configuration of Nginx
  • events: configuration that affects network connections between the Nginx server and its users
  • http: can nest multiple server blocks; configures proxying, caching, log definitions, and third-party modules
  • upstream: specifies the addresses of the back-end servers; used for load balancing
  • server: configures the parameters of a virtual host; one http block can contain multiple server blocks
  • location: configures request routing and the handling of various pages

Built-in variables
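Some commonly used built-in variables (a non-exhaustive selection), shown here inside a hypothetical log_format for illustration:

```nginx
http {
    # $remote_addr     - client IP address
    # $request_method  - HTTP method (GET, POST, ...)
    # $request_uri     - full original request URI, including arguments
    # $args            - query-string arguments
    # $host            - host from the request line or the Host header
    # $http_user_agent - the client's User-Agent header
    # $status          - response status code
    log_format demo '$remote_addr $request_method $request_uri '
                    '$status "$http_user_agent"';
}
```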

3. Cross-domain issues

1. Cross-domain definition

For security, browsers enforce the same-origin policy: if two URLs differ in protocol, domain name, or port, a request between them is cross-domain.
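For example, relative to the (hypothetical) page http://a.example.com/index.html:

```
http://a.example.com/api        same origin  (same protocol, domain, port)
https://a.example.com/api       cross-domain (different protocol)
http://b.example.com/api        cross-domain (different domain)
http://a.example.com:8080/api   cross-domain (different port)
```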

2. How does Nginx solve cross-domain

Suppose (the domains below are hypothetical) the front-end is served from fe.example.com and the back-end API lives at be.example.com.

Without a proxy, a request from fe.example.com to be.example.com is bound to cause cross-domain problems.

If Nginx listens with server_name set to the front-end domain, a location block can intercept the requests that would otherwise cross domains and proxy them on to the back-end domain. The configuration looks like this:

```nginx
server {
    listen 80;
    server_name fe.example.com;  # hypothetical front-end domain

    location / {
        proxy_pass http://be.example.com;  # hypothetical back-end domain
    }
}
```

Accessed this way, requests to Nginx are same-origin; Nginx forwards them to the back-end server, so the browser's same-origin policy is never triggered.

4. Request filtering

1. Filter status codes

```nginx
error_page 500 501 502 503 504 506 /50x.html;

location = /50x.html {
    # change the path below to the directory containing the actual HTML file
    root /root/static/html;
}
```

/50x.html is the page displayed for those error status codes; the root directive gives the directory holding the actual HTML file.

2. Filter by URL name

```nginx
location / {
    rewrite ^.*$ /index.html redirect;
}
```

The rewrite directive has this form:

```nginx
rewrite regex replacement [flag];
```

The regex is matched against the requested URL; if the match succeeds, the URL is replaced with replacement. The final redirect flag means a 302 temporary redirect is returned.

Here the pattern ^.*$ matches every URL, so all requests are redirected to the home page.

3. Request type filtering

```nginx
if ($request_method !~ ^(GET|POST|HEAD)$) {
    return 403;
}
```

5. Configure GZIP

1. What is GZIP

Gzip is short for GNU zip and was first used for file compression on UNIX systems. Gzip encoding over HTTP is a technique for improving the performance of web applications, and both the web server and the client (browser) must support it. Chrome, Firefox, and Internet Explorer all support it, as do common servers such as Apache, Nginx, and IIS.

Gzip typically achieves a compression ratio of 3 to 10 times, which greatly reduces the server's network bandwidth. In practice, not all files are compressed; usually only static text files are.
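As a rough illustration of that ratio, here is a standalone sketch (unrelated to any Nginx setup) that compresses a repetitive text payload with Python's gzip module:

```python
import gzip

# Repetitive markup, similar to HTML/CSS/JS, compresses very well.
original = b"<div class='item'>hello nginx</div>\n" * 1000
compressed = gzip.compress(original, compresslevel=6)

ratio = len(original) / len(compressed)
print(f"{len(original)} -> {len(compressed)} bytes, ratio {ratio:.1f}x")
```

Real-world pages are less repetitive than this, which is why typical ratios land in the 3x-10x range rather than the extreme value this toy example produces.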

2. Enabling gzip

Gzip is negotiated through HTTP headers: the browser advertises support with the Accept-Encoding request header, and the server marks a compressed response with the Content-Encoding: gzip response header.
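A typical (illustrative) header exchange looks like this:

```
# Request header sent by the browser
Accept-Encoding: gzip, deflate, br

# Response headers sent by the server when gzip is applied
Content-Encoding: gzip
Vary: Accept-Encoding
```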

3. Nginx configuration

```nginx
server {
    gzip on;                   # enable the gzip module
    gzip_buffers 32 4K;        # number and size of buffers for the compressed result stream
    gzip_comp_level 6;         # compression level 1-9; higher = better ratio but more CPU time
    gzip_min_length 100;       # minimum response size to compress, taken from the Content-Length header
    gzip_types application/javascript text/css text/xml;  # MIME types to compress
    gzip_disable "MSIE [1-6]\.";  # skip gzip for old IE versions that handle it badly
    gzip_http_version 1.1;     # lowest HTTP version for which compression is applied
    gzip_proxied off;          # whether to compress responses to proxied requests
    gzip_vary on;              # add "Vary: Accept-Encoding" so caches distinguish compressed variants
}
```

6. Load balancing

1. What is load balancing

Load balancing is a key component of high availability network infrastructure and is typically used to distribute workloads across multiple servers to improve the performance and reliability of websites, applications, databases, or other services.

A Web architecture without load balancing looks like this:

In this setup, the user connects directly to the web server; if that server goes down, the site becomes unreachable. And if more users try to access the server at once than it can handle, pages may load slowly or connections may fail entirely.

These failure modes can be mitigated by introducing a load balancer and at least one additional web server on the back end. Typically, all back-end servers serve identical content, so users receive the same response regardless of which server answers.

2. How does nginx implement load balancing

upstream specifies the list of back-end server addresses (the addresses below are hypothetical):

```nginx
upstream balanceServer {
    server 192.168.0.1:8080;
    server 192.168.0.2:8080;
    server 192.168.0.3:8080;
}
```

The server block then intercepts the requests that need balancing and forwards them to the server list configured in upstream:

```nginx
server {
    server_name fe.example.com;  # hypothetical domain
    listen 80;

    location /api {
        proxy_pass http://balanceServer;
    }
}
```

3. Load balancing policies

(1) Polling policy (default)

Client requests are distributed to the servers in turn. This strategy works fine in general, but if one server comes under too much pressure and responses slow down, every user routed to that server is affected. The configuration is the same as above.

(2) Minimum connection number strategy

By prioritizing requests to less stressed servers, it balances the length of each queue and avoids adding more requests to stressed servers.

```nginx
upstream balanceServer {
    least_conn;
    server 192.168.0.1:8080;  # hypothetical back-end addresses
    server 192.168.0.2:8080;
}
```

(3) Weight strategy

A server's weight is positively correlated with its share of traffic: the higher the weight, the more requests it receives. This suits clusters whose machines have different performance.

```nginx
upstream balanceServer {
    server 192.168.0.1:8080 weight=2;  # hypothetical back-end addresses
    server 192.168.0.2:8080 weight=8;
}
```

(4) Bind the client IP address: ip_hash

Requests from the same IP address are always assigned to only one server, which effectively solves the session sharing problem existing in dynamic web pages.

```nginx
upstream balanceServer {
    ip_hash;
    server 192.168.0.1:8080;  # hypothetical back-end addresses
    server 192.168.0.2:8080;
}
```

(5) Fastest response time strategy: fair (third party)

Requests are routed preferentially to the fastest-responding server. This relies on the third-party module nginx-upstream-fair.

```nginx
upstream balanceServer {
    fair;
    server 192.168.0.1:8080;  # hypothetical back-end addresses
    server 192.168.0.2:8080;
}
```

(6) url_hash (third party)

Requests are allocated based on the hash of the URL, so each URL is always directed to the same back-end server. This is more efficient when the back-end servers cache responses.

```nginx
upstream balanceServer {
    hash $request_uri;
    server 192.168.0.1:8080;  # hypothetical back-end addresses
    server 192.168.0.2:8080;
}
```

4. Health checks

Nginx's ngx_http_upstream_module (in its health-check role) essentially performs server heartbeat checks: it periodically sends health-check requests to the servers in the cluster to detect whether any of them has become abnormal.

If a server is detected to be abnormal, requests arriving at the Nginx reverse proxy will no longer be sent to that server (until a later health check finds it normal again).

```nginx
upstream balanceServer {
    server 192.168.0.1:8080 max_fails=1 fail_timeout=40s;  # hypothetical addresses
    server 192.168.0.2:8080 max_fails=1 fail_timeout=40s;
}

server {
    listen 80;
    server_name localhost;

    location / {
        proxy_pass http://balanceServer;
    }
}
```

Two directives are involved:

  • fail_timeout: the period for which the server is considered unavailable after failing, and the window in which failed attempts are counted. The default is 10s.
  • max_fails: the number of failed attempts to communicate with the server before Nginx marks it unavailable.

7. Static resource server

```nginx
location ~* \.(png|gif|jpg|jpeg)$ {
    root /root/static/;
    autoindex on;
    access_log off;
    expires 10h;  # set the cache expiration time to 10 hours
}
```

Requests ending in png, gif, jpg, or jpeg are matched and served from the local directory specified by root. Caching can also be configured, as the expires directive above shows.

8. Access control

Nginx whitelists can be configured to specify which IP addresses can access the server.

```nginx
location / {
    allow 192.168.0.1;  # hypothetical IP address allowed access
    deny all;           # deny everyone else
}
```

Article by Zhun spirit
