The original link: https://www.lishuaishuai.com/server/1437.html

1. Foreword

Nginx is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. Nginx is known for its high performance, stability, rich feature set, simple configuration, and low resource consumption. Under high connection concurrency, Nginx is a good alternative to the Apache server.

Recently, Nginx has also become one of the ways to integrate micro front ends in a mid-platform system. A few friends happened to ask questions about Nginx, which served as a wake-up call, so this is a good opportunity to go over Nginx systematically.

2. Basics

2.1 Installation

2.1.1 Mac

Install Nginx:

brew install nginx

Start Nginx:

sudo nginx

The nginx.conf configuration file:

/usr/local/etc/nginx/nginx.conf

To see the default nginx page, open http://localhost:8080/ in your browser; 8080 is the default port in the Homebrew nginx configuration.

2.1.2 Linux

Install build tools and library files

yum -y install gcc gcc-c++ autoconf pcre pcre-devel make automake
yum -y install wget httpd-tools vim

Create the working directories

cd /opt/
mkdir app download logs work backup

Configure the nginx yum repository

vim /etc/yum.repos.d/nginx.repo

Add the following code

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

Note: centos and 7 in the URL correspond to the operating system and its version number.

Install nginx

yum list | grep nginx
yum install nginx

2.2 Basic Commands

View the version number

nginx -v

Check whether the configuration file has syntax errors

nginx -t

Hot reload: reload the configuration file

nginx -s reload 

Fast shutdown

nginx -s stop

Graceful shutdown: wait for worker processes to finish their current requests, then exit

nginx -s quit

2.3 Basic Configuration

The core of Nginx configuration is defining which URLs to handle and how to respond to requests for those URLs, that is, defining a set of virtual servers that control how requests for particular domain names or IP addresses are processed.

Each virtual server defines a set of location blocks that handle specific sets of URIs. Each location defines how requests mapped to it are processed: it can either return a file or proxy the request, for example:
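
A minimal sketch of such a virtual server (the domain, paths, and backend address are placeholders, not from the original article): files are served for one set of URIs, and the rest is proxied to an application server.

server {
    listen      80;
    server_name example.com;

    # return files from disk for static paths
    location /static/ {
        root /var/www;
    }

    # proxy everything else to an application server
    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}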

Nginx consists of modules that are controlled by directives specified in the configuration file. Directives are divided into simple directives and block directives.

A simple directive consists of a name and parameters separated by spaces and ends with a semicolon (;). A block directive has the same structure, but instead of a semicolon it is followed by braces ({ and }). A block directive that contains other directives inside its braces is called a context (e.g., events, http, server, and location).

Directives placed in the configuration file outside of any context are considered to be in the main context. The events and http directives reside in the main context, server in http, and location in server.

Comments in configuration files begin with #.
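
A small sketch that ties these terms together (the paths and values are only illustrative):

# a comment
worker_processes  1;              # a simple directive: name, parameter, ending semicolon

events {                          # a block directive; because it contains directives, it is a context
    worker_connections  1024;
}

http {
    server {                      # server lives inside http
        location / {              # location lives inside server
            root /var/www;
        }
    }
}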

2.3.1 Default Configuration File

# number of worker processes, usually set to the number of CPU cores
worker_processes  1;
# error log location
error_log  /var/log/nginx/error.log warn;
# process PID file location
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;  # maximum number of concurrent connections per worker process
}


http {
    include       /etc/nginx/mime.types;      # file extension to MIME type mapping table
    default_type  application/octet-stream;   # default MIME type

    # log format
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    # nginx access log location
    access_log  /var/log/nginx/access.log  main;

    # enable efficient file transfer mode
    sendfile        on;
    # tcp_nopush on;  # reduce the number of network packet segments

    keepalive_timeout  65;  # how long to keep a connection alive, i.e. the timeout

    # gzip on;  # enable gzip compression

    # import additional configuration files
    include servers/*;
}

2.3.2 Virtual Host Configuration

# virtual host
server {
     listen       8080;       # listening port
     server_name  localhost;  # domain name used in the browser

     charset utf-8;
     access_log  logs/localhost.access.log  access;

     # routing
     location / {
         root   www;                  # document root directory
         index  index.html index.htm; # default index files
     }

     error_page  404              /404.html;   # 404 page

     # redirect server error pages to the static page /50x.html
     #
     error_page   500 502 503 504  /50x.html;  # pages shown for error status codes; reload nginx after changing

     location = /50x.html {
         root   /usr/share/nginx/html;
     }
}

2.3.3 Built-in Variables

The following built-in global variables are commonly used in Nginx configuration; they can be used anywhere in the configuration file. A small usage example follows the list.

Variable and role:

$host: the Host from the request header; if the request has no Host line, it equals the configured server name
$request_method: the client request method, e.g. GET, POST
$remote_addr: the client's IP address
$args: the arguments (query string) in the request
$content_length: the Content-Length field in the request header
$http_user_agent: the client agent (User-Agent) information
$http_cookie: the client cookie information
$remote_port: the client port
$server_protocol: the protocol used in the request, e.g. HTTP/1.0, HTTP/1.1
$server_addr: the server address
$server_name: the server name
$server_port: the port number of the server
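
As a quick illustration (the /debug path and message format are arbitrary, not part of the original article), a location can echo several of these variables back to the client:

location /debug {
    default_type text/plain;
    # the response body is built from built-in variables
    return 200 "client=$remote_addr:$remote_port method=$request_method host=$host protocol=$server_protocol";
}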

2.3.4 The listen Directive

Syntax: listen address[:port] | port | unix:path. The listen directive tells Nginx what to listen on: an IP:port combination, a port by itself, or a UNIX socket path.

Additional parameters are listed below (see the documentation for details); a short example follows the list:

Parameter and role:

default_server: marks this virtual host as the default one for the listened address:port; if none is marked, the first virtual host is the default
ssl: connections accepted on this port work in SSL mode
http2: the port accepts HTTP/2 connections
spdy: the port accepts SPDY connections
proxy_protocol: the port accepts connections using the PROXY protocol
setfib: sets the associated routing table
fastopen: enables TCP Fast Open on the listening socket; not recommended in most cases
backlog: sets the maximum length of the pending connection queue
rcvbuf: socket receive buffer size
sndbuf: socket send buffer size
accept_filter: sets the accept filter
deferred: uses deferred accept()
bind: makes a separate bind() call for this address:port pair
ipv6only: whether the socket accepts only IPv6 connections
reuseport: creates a separate listening socket for each worker process
so_keepalive: configures TCP keepalive on the listening socket
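
A sketch of a few common listen forms (the addresses, port, and socket path are examples):

server {
    listen 192.168.0.1:8080 default_server;   # IP:port, default server for this address:port
}

server {
    listen 8080 backlog=511;                  # port only, with a tuning parameter
}

server {
    listen unix:/var/run/nginx.sock;          # UNIX domain socket
}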

3. Practice

3.1 Forward Proxy

A forward proxy is a server that sits between the client and the origin server. To get content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy then forwards the request to the origin server and returns the content to the client.

A forward proxy serves the client: through it, the client can reach server resources it could not access directly.

The forward proxy is transparent to the client and opaque to the server, that is, the server does not know whether it is receiving access from the proxy or from the real client.
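
A minimal forward proxy sketch (plain HTTP only; the listening port and DNS resolver address are assumptions, not from the original article):

server {
    listen 8888;
    resolver 8.8.8.8;                         # DNS server used to resolve the requested host
    location / {
        proxy_pass http://$host$request_uri;  # forward to whatever host the client asked for
    }
}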

3.2 Reverse Proxy

In reverse proxy mode, the proxy server accepts connection requests from the Internet, forwards them to servers on the internal network, and returns the results to the Internet clients that made the requests. In this case, the proxy server acts as a reverse proxy.

A reverse proxy serves the server. The reverse proxy helps the server receive requests from clients, forward requests, and balance loads.

The reverse proxy is transparent to the server and opaque to the client; that is, the client does not know it is talking to a proxy, while the server knows the reverse proxy is working on its behalf.
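
A minimal reverse proxy sketch, assuming an internal application server at 192.168.0.10:8080 (the domain, address, and headers are illustrative):

server {
    listen      80;
    server_name example.com;

    location / {
        proxy_pass http://192.168.0.10:8080;       # internal application server
        proxy_set_header Host $host;               # keep the original Host header
        proxy_set_header X-Real-IP $remote_addr;   # pass the real client IP to the backend
    }
}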

3.3 HTTPS Service

Step 1: Obtain an SSL certificate signed by a trusted third party. You can use Let’s Encrypt or get a free SSL certificate from a cloud service provider.

Step 2: Configure the SSL certificate for the service. In the server that requires HTTPS, listen on port 443 and point to the SSL certificate file and private key file. The basic configuration is as follows (see the documentation for more parameters):

server {
   # SSL parameters
   listen              443 ssl http2;
   server_name         example.com;
   # certificate file
   ssl_certificate    /usr/local/nginx/conf/ssl/example.com.crt;
   # private key file
   ssl_certificate_key /usr/local/nginx/conf/ssl/example.com.key;
}
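
Optionally, plain HTTP traffic can be redirected to HTTPS; a small sketch (the domain is an example):

server {
    listen      80;
    server_name example.com;
    return 301 https://$host$request_uri;   # permanent redirect to the HTTPS site
}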

3.4 Load Balancing

Forward and reverse proxies were covered above; load balancing is an application of the reverse proxy.

When an application's traffic surges in a short period, server bandwidth and performance suffer, and the server may become overloaded and crash. To prevent this and provide a better user experience, we can configure Nginx load balancing to spread the load across servers.

When one server goes down, the load balancer directs users to other servers, which greatly improves the stability of the site. Users first reach the load balancer, which then forwards requests to the backend servers, as in the sketch below.
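
A minimal sketch of the overall wiring, assuming two backend servers (the addresses are examples); the policies in the next section only change the upstream block:

upstream balanceServer {
    server 192.168.0.1;
    server 192.168.0.2;
}

server {
    listen 80;
    location / {
        proxy_pass http://balanceServer;   # requests are distributed across the upstream group
    }
}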

3.4.1 Common Load Balancing Policies

Round-robin policy (default)

Client requests are distributed across the servers in turn. This works well in general, but if one server becomes overloaded and slow, every user assigned to that server is affected.

# nginx.conf
upstream balanceServer {
    server 192.168.0.1;
    server 192.168.0.2;
}
Weight policy

Each server's weight is positively correlated with its share of requests: the higher the weight, the more requests it receives. This suits machines with different performance.

# nginx.conf
upstream balanceServer {
    server 192.168.0.1 weight=2;
    server 192.168.0.2 weight=8;
}
Least connections policy (least_conn)

Allocating requests to back-end servers with fewer connections balances the length of each queue and avoids adding more requests to stressed servers.

upstream balanceServer {
    least_conn;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}
Client IP binding policy (ip_hash)

Requests from the same IP address are always assigned to only one server, which effectively solves the session sharing problem existing in dynamic web pages.

upstream balanceServer {
    ip_hash;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}
Fastest response time policy, fair (third party)

Requests are sent preferentially to the server that responds fastest; this relies on the third-party module nginx-upstream-fair.

upstream balanceServer {
    fair;
    server 10.1.22.33:12345;
    server 10.1.22.34:12345;
    server 10.1.22.35:12345;
}
URL hash policy, url_hash (third party)

Requests are allocated based on the hash of the URL so that each URL is directed to the same backend server, which is more efficient when the backend servers cache content.

upstream balanceServer {
    hash $request_uri;
    server 192.168.244.1:8080;
    server 192.168.244.2:8080;
    server 192.168.244.3:8080;
    server 192.168.244.4:8080;
}

3.4.2 Health Check

Nginx’s ngx_http_upstream_module (the health check module) essentially performs server heartbeat checks: it periodically sends health check requests to the servers in the cluster to detect whether any server is abnormal.

If one of the servers is detected to be abnormal, requests arriving through the Nginx reverse proxy will no longer be forwarded to that server (until a later check finds it healthy again).

Example:

upstream balanceServer{
    server 192.168.0.1  max_fails=1 fail_timeout=40s;
    server 192.168.0.2  max_fails=1 fail_timeout=40s;
}

server {
    listen 80;
    server_name localhost; 
    location / {
      proxy_pass http://balanceServer;
    }
}

Two settings are involved. fail_timeout: sets both the duration for which the server is considered unavailable and the time window during which failed attempts are counted; the default is 10s. max_fails: sets the number of failed attempts at communicating with the server before Nginx considers it unavailable.

3.5 Static Resource Server

location ~* \.(png|gif|jpg|jpeg)$ {
    root    /root/static/;
    autoindex on;
    access_log  off;
    expires     10h; # Set expiration time to 10 hours
}

Requests ending in png, gif, jpg, or jpeg are matched and served from the local path given by root, i.e. a directory on the nginx host. You can also set some caching headers here.

3.6 Adapting to PC and Mobile Environments

When a user opens the PC site baidu.com from a mobile device, it automatically redirects to the mobile site m.baidu.com. Essentially, Nginx can read the client's user agent through the built-in variable $http_user_agent, determine whether the current user is on a mobile device or a PC, and then redirect to the H5 site or the PC site accordingly.

server {
    location / {
        # detect mobile vs. PC from the client user agent
        if ($http_user_agent ~* '(Android|webOS|iPhone)') {
            set $mobile_request '1';
        }
        if ($mobile_request = '1') {
            rewrite ^.+ http://m.baidu.com;
        }
    }
}

3.7 Configuring gzip

Gzip is a web page compression technique. After gzip compression, a page can shrink to 30% of its original size or even smaller, so pages load faster and the browsing experience is better. Gzip compression requires support from both the browser and the server.

server {
    gzip on;                     # enable/disable the gzip module
    gzip_buffers 32 4k;          # number and size of buffers used to hold the compressed result
    gzip_comp_level 6;           # compression level 1-9; higher levels compress more but take longer
    gzip_min_length 100;         # minimum response length to compress, taken from the Content-Length header
    gzip_types application/javascript text/css text/xml;  # MIME types to compress
    gzip_disable "MSIE [1-6]\."; # do not compress for old IE versions
    gzip_http_version 1.1;       # lowest HTTP version of a request for which compression is applied
    gzip_proxied off;            # enable/disable compression of responses to proxied requests
    gzip_vary on;                # add "Vary: Accept-Encoding" so proxies can tell whether gzip was used
}

3.8 Resolving Cross-domain Problems

server {
    listen       80;
    server_name  aaa.xxxx.com;
    location / {
        # requests to aaa.xxxx.com are proxied to bbb.xxxx.com, so the browser only sees a same-origin request
        proxy_pass http://bbb.xxxx.com;
    }
}

3.9 Request Filtering

Filter by status code

error_page 500 501 502 503 504 506 /50x.html;
location = /50x.html {
    # change the path below to the directory where the HTML page is stored
    root /root/static/html;
}

Filter by URL name: URLs are matched exactly, and all URLs that do not match are redirected to the home page.

location / {
    rewrite ^.*$ /index.html redirect;
}

Filter by request method

if ( $request_method !~ ^(GET|POST|HEAD)$ ) {
    return 403;
}

3.10 Hotlink Prevention

Prevent resources from being hotlinked by other sites.

location ~* \.(gif|jpg|png)$ {
    # only allow 192.168.0.1 to request these resources
    valid_referers none blocked 192.168.0.1;
    if ($invalid_referer) {
        rewrite ^/ http://$host/logo.png;
    }
}

3.11 Access Permission Control

Nginx whitelists can be configured to specify which IP addresses can access the server.

3.11.1 Simple Configuration

location / {
    allow  192.168.0.1;  # allow access from this IP address
    deny   all;          # deny everyone else
}

3.11.2 Creating a Whitelist

vim /etc/nginx/white_ip.conf

...
192.168.0.1 1;
...

Modify the nginx configuration

geo $remote_addr $ip_whitelist {
    default 0;
    include white_ip.conf;
}

The geo directive maps a new variable from the value of the specified variable; if no variable is given, $remote_addr is used by default.

Whitelisting

server {
    location / {
        if ( $ip_whitelist = 0 ) {
            return 403;  # return 403 for clients not on the whitelist
        }
        index index.html;
        root /tmp;
    }
}
