1. Introduction to Nginx

Engine X (Nginx) is a high-performance web server and reverse proxy server; it can also be used as a mail proxy server. It is written in C.

Nginx uses little memory and has strong concurrent-processing capacity; it is known for high performance and low system resource consumption. Official tests show Nginx handling about 50,000 concurrent requests, and the daily maximum concurrency is about 40,000.

1. Forward proxy

A forward proxy acts like a jump host that accesses external resources on behalf of the user. For example: I cannot reach a certain website directly, but I can reach a proxy server that can reach it, so I connect to the proxy server, tell it which content on the unreachable site I need, and it fetches the content and returns it to me. The proxy acts for the client and hides the real client; the server does not know who the real client is.

2. Reverse proxy

In Reverse Proxy mode, a Proxy server receives connection requests from the Internet, forwards the requests to a server on the internal network, and returns the results obtained from the server to the client requesting connection on the Internet. In this case, the proxy server behaves as a reverse proxy server.

A reverse proxy hides the real servers. For example, when you use Baidu every day, you only know that typing www.baidu.com opens the Baidu search page, but you have no idea which of the thousands of Baidu servers behind it actually serves you. You only know the proxy server, which forwards your request to the server that actually serves you.

To sum up: a forward proxy acts on behalf of the client, while a reverse proxy acts on behalf of the server. On the software side, Nginx is commonly used as a reverse proxy server; it performs very well and is also used for load balancing.
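
For illustration, a minimal reverse proxy configuration might look like the sketch below; the back-end address 127.0.0.1:8080 and the domain name are only placeholders:

server {
    listen 80;
    server_name example.com;
    location / {
        # Forward every request to the real back-end server, hiding it from the client
        proxy_pass http://127.0.0.1:8080;
    }
}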

2. Nginx environment construction

1. Download

The free open-source version can be downloaded from the official website: nginx.org/en/download… ; download the Stable version.

Nginx is available for Windows and Linux, but running it on Linux is recommended. Download the nginx-1.14.2.tar.gz source archive with wget http://nginx.org/download/nginx-1.14.2.tar.gz, or download it on Windows and then upload it to the Linux machine.

2. Install

(1) Install related libraries

Building Nginx requires several libraries to be present on the Linux system; otherwise configuration and compilation errors will occur. The related libraries are:

  • GCC compiler
  • OpenSSL library
  • PCRE library
  • zlib library

Check and install it once. If it has been installed, skip it. If there is an update, update it:

yum install gcc openssl openssl-devel pcre pcre-devel zlib zlib-devel -y

To check whether the libraries are installed, run the following commands one by one, or simply rerun the command above (if nothing gets installed or updated, the libraries already exist):

yum list installed | grep gcc
yum list installed | grep openssl
yum list installed | grep pcre
yum list installed | grep zlib

(2) Install Nginx

  1. Unpack the downloaded archive: tar -zxvf nginx-1.18.0.tar.gz

  2. Switch to the Nginx home directory nginx-1.18.0 and run the following commands there:

    # The parameter after --prefix is the installation path; change it to suit your environment
    ./configure \
    --prefix=/usr/local/nginx \
    --with-http_ssl_module \
    --with-http_v2_module \
    --with-http_stub_status_module \
    --with-http_gzip_static_module
    
    # Detailed configuration (you need to download the corresponding module, after the installation can also be added)
    ./configure \
    --prefix=/usr/local/nginx \
    --sbin-path=/usr/local/nginx/sbin/nginx \
    --conf-path=/usr/local/nginx/conf/nginx.conf \
    --error-log-path=/usr/local/nginx/log/error.log \
    --http-log-path=/usr/local/nginx/log/access.log \
    --pid-path=/usr/local/nginx/log/nginx.pid \
    --lock-path=/var/lock/nginx.lock \
    --user=nginx \
    --group=nginx \
    --with-http_ssl_module \
    --with-http_stub_status_module \
    --with-http_gzip_static_module \
    --with-http_v2_module \
    --http-client-body-temp-path=/var/tmp/nginx/client/ \
    --http-proxy-temp-path=/var/tmp/nginx/proxy/ \
    --http-fastcgi-temp-path=/var/tmp/nginx/fcgi/ \
    --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi \
    --with-pcre=../pcre-8.44 \
    --with-zlib=../zlib-1.2.11 \
    --with-openssl=../openssl-1.1.1g
    # For the three options above, download the corresponding third-party source packages first (see Chapter 2, Section 7)

    More common configure options (see the official documentation: docs.nginx.com/nginx/admin…):

    Run ./configure --help to list all options; the most common Nginx modules are built by default and can be excluded with the corresponding --without options.

  • --prefix=PATH: specifies the Nginx installation directory; the default is /usr/local/nginx.
  • --conf-path=PATH: sets the path of the nginx.conf configuration file. Nginx can also be started with a different configuration file via the -c option on the command line. Default: <prefix>/conf/nginx.conf.
  • --user=NAME: sets the user the Nginx worker processes run as; after installation it can be changed at any time with the user directive in nginx.conf. Default: nobody.
  • --group=NAME: similar to --user, sets the group.
  • --with-pcre: enables PCRE support and locates the library automatically (for example when it was installed with yum); --with-pcre=PATH points to a PCRE source directory instead. Perl regular expressions are used in the location directive and in the ngx_http_rewrite_module module.
  • --with-zlib=PATH: specifies the zlib source directory. zlib is required by ngx_http_gzip_module, which is enabled by default.
  • --with-openssl=PATH: specifies the OpenSSL source directory, needed for HTTPS support.
  • --with-http_ssl_module: builds the HTTPS module; it is not built by default and requires openssl and openssl-devel to be installed.
  • --with-http_stub_status_module: used to monitor the current state of Nginx.
  • --with-http_realip_module: allows replacing the client IP address seen by Nginx with the one carried in a header such as X-Real-IP or X-Forwarded-For.
  • --with-http_v2_module: enables HTTP/2 request support.
  • --add-module=PATH: adds a third-party external module, such as nginx-sticky-module-ng or a cache module. Nginx must be recompiled each time a new module is added (Tengine can add modules without recompiling).

  3. Run the make command to compile the file

  4. Run the make install command

After the installation is successful, you can switch to the /usr/local/nginx directory to view the content

(3) Contents of the installation directory

  • conf: stores configuration files; each configuration file has two copies, and the one ending in .default is the default configuration
  • html: holds the web pages
  • logs: stores the log files
  • sbin: stores the executable file
  • *_temp: after Nginx is started, several directories ending in _temp appear in the installation directory; they hold temporary files

3. Start

  • Default startup:
    • Go to the sbin directory under the Nginx installation directory and run: ./nginx
  • Starting with a configuration file:
    • Use the -c option to specify the configuration file; the path must be absolute
    • Append the -t option at the end of the command to check the configuration file for syntax errors
    ./nginx -c /usr/local/nginx/conf/nginx.conf                       # Start using a relative path to the binary
    /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf   # Start using an absolute path to the binary

    ./nginx -c /usr/local/nginx/conf/nginx.conf -t   # Check the configuration file for syntax errors

4. View the process

Check the processes: ps -ef | grep nginx. The Nginx architecture consists of a master process and its worker processes; after a successful startup there are at least two processes (fewer than two means the startup failed).

  • The master process is the main process: it reads the configuration file and maintains and manages the worker processes, and its parent process PID is 1
  • The worker processes actually handle the requests; there can be one or more of them (>= 1), and their parent PID is the PID of the master

The default port of Nginx is 80. To access Nginx, open http://ip in a browser (for the default port 80 you do not need to include the port in the address bar).

5. Shutdown

(1) Graceful shutdown

  • Find the Nginx master process id: ps -ef | grep nginx
  • Run: kill -QUIT <master pid>

Note:

  • The pid here is the PID of the master (main) process; the other PIDs belong to the worker child processes.
  • This is called a graceful shutdown because in-flight requests are processed before Nginx exits; the same effect can be achieved with the -s option shown below.
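
The same graceful shutdown can also be triggered through the binary with the -s option (run from the sbin directory):

./nginx -s quit   # Graceful shutdown, equivalent to kill -QUIT <master pid>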

(2) Fast shutdown

  • Find the Nginx master process id: ps -ef | grep nginx
  • Run: kill -TERM <master pid>

Note:

  • The pid here is the PID of the master (main) process; the other PIDs belong to the worker child processes.
  • This mode shuts down immediately regardless of whether requests have finished processing; it is more abrupt, hence "fast shutdown".

(3) Force kill

  • Find the Nginx master process id: ps -ef | grep nginx
  • Run: kill -9 <master pid>

6. Restart

After the configuration file is modified, Nginx needs to be reloaded. You do not need to specify the configuration file when reloading; the one used at the last startup is reused.

Note: if you started multiple Nginx instances from the same binary with different configuration files, you cannot simply reload, because Nginx would not know which configuration file to use.

./nginx -s reload

7. Add third-party modules to Nginx

Take installing PCRE as an example (see the official documentation for details; find the module version matching your Nginx):

To install a new module, use a clean Nginx source directory that has not been compiled yet.

$ wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.44.tar.gz
$ tar -zxf pcre-8.44.tar.gz
$ cd pcre-8.44
$ ./configure
$ make
$ sudo make install

8. Other

To view the Nginx version on Linux: /usr/local/nginx/sbin/nginx -V

  • -v: displays the Nginx version
  • -V: displays the Nginx version, the compiler version, and the configure arguments

9. Introduction to Windows environment

Download the latest Nginx for Windows from the official website (nginx.org/en/download…) and unzip the package to a directory.

  • Start method 1: double-click nginx.exe in the unzipped directory to run Nginx.
  • Start method 2: open a DOS window, switch to the Nginx home directory, and run: start nginx
  • Stop method 1: kill the nginx processes in Task Manager (there are two of them).
  • Stop method 2: in a DOS window, switch to the Nginx installation home directory and run: nginx -s stop

3. Introduction to the configuration file

To learn Nginx you need some knowledge of its core configuration file, which is located in the conf directory of the Nginx installation (/usr/local/nginx/conf) and is called nginx.conf.

The Nginx core configuration file consists of three parts: the basic (global) configuration, the events configuration, and the http configuration.
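
As a rough sketch, the three parts are laid out like this (the directive values are just placeholders):

# Basic (global) configuration
worker_processes  1;

# Events configuration
events {
    worker_connections  1024;
}

# HTTP configuration
http {
    server {
        listen       80;
        server_name  localhost;
    }
}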

1. Basic configuration

The basic configuration is written directly at the top level of the file:

# Configure the user the worker processes run as; the user must be a Linux user
user  nobody;
# Configure the number of worker processes; depends on the hardware, usually the number of CPUs or 2x the number of CPUs
worker_processes  1;
# Configure the global error log and its level: debug | info | notice | warn | error | crit; the default is error
error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

# Configure the process PID file
pid  logs/nginx.pid;

2. Events configuration

The events configuration sets the working mode and the number of connections, inside events {}:

events {
    # Set the maximum number of connections per worker process; Nginx supports at most worker_processes * worker_connections connections
    # Note that the maximum value of worker_connections is 65536
    worker_connections  1024;
}

3. HTTP configuration

http {
    # Configure which MIME types Nginx supports; see conf/mime.types for the list
    include       mime.types;
    # Default type (an octet stream, i.e. any type)
    default_type  application/octet-stream;
    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';
    # Access log and its path, using the "main" format defined above
    #access_log  logs/access.log  main;

    sendfile        on;       # Enable efficient file transfer mode
    #tcp_nopush     on;       # Prevents network congestion; enable in production
    #keepalive_timeout  0;
    keepalive_timeout  65;    # Keep-alive timeout, in seconds
    #gzip  on;                # Enable gzip compression; enable in production, keep off during development

    # Configure a virtual host
    server {
        listen       80;            # Listening port
        server_name  localhost;     # Server (domain) name

        #charset koi8-r;
        # Access log for this virtual host
        #access_log  logs/host.access.log  main;

        # The default location matches "/"; requests whose path contains "/" are matched and handled here
        location / {
            root   html;                   # Site root; default is the html directory under the Nginx installation home
            index  index.html index.htm;   # Name(s) of the index (home page) file
        }

        # Configure the 404 error page
        #error_page  404              /404.html;

        # Redirect server error pages to the static page /50x.html ("location =" is an exact match)
        #error_page   500 502 503 504  /50x.html;
        location = /50x.html { root  html; }

        # Proxy PHP scripts to Apache listening on 127.0.0.1:80
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # Pass PHP scripts to a FastCGI server listening on 127.0.0.1:9000
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # Deny access to .htaccess files, if Apache's document root concurs with nginx's one
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # Configure another virtual host, using a mix of IP-, name-, and port-based configuration
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;
    #    location / { root html; index index.html index.htm; }
    #}

    # Configure an HTTPS server: secure transport protocol, encrypted transfer, port 443
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;
    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;
    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;
    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;
    #    location / { root html; index index.html index.htm; }
    #}
}

(1) Basic configuration

The http configuration sets up the HTTP server and uses its reverse proxy capability to provide load balancing; it is all written inside http {}.

Note:

  • Settings that must be enabled in production: gzip on (keep it off during development) and tcp_nopush on; see the sketch below
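
A minimal sketch of those production settings inside the http block:

http {
    sendfile    on;
    tcp_nopush  on;   # Send response headers and the beginning of a file in one packet; enable in production
    gzip        on;   # Compress responses; enable in production, keep off during development
}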

4. Load balancing

1. Introduction

(1) Hardware load balancing

Such as F5, Sangfor, Array, etc.

  • Advantages: supported by professional technical service teams; stable performance
  • Disadvantages: expensive, too costly for smaller web applications

(2) Software load balancing

Nginx, LVS, HAProxy, etc

  • Advantages: Free open source, low cost
  • Disadvantages: Performance is not as high as hardware

So, in practice, a combination of the two is often used, with hardware load balancing in key locations and software load balancing elsewhere

2. Configure load balancing

(1) Add upstream configuration in the HTTP module

  • upstream: the name that follows it can be chosen freely, but it must not be duplicated
  • server: specifies the address of a back-end (for example Tomcat) server
upstream www.myweb.com {
    server  127.0.0.1:9100;
    server  127.0.0.1:9200;
}

upstream is the key module for configuring load balancing between Nginx and the back-end servers. It also checks the health of the back-end servers: if one of them fails, front-end requests are no longer forwarded to the failed server.
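
For illustration, this passive health check can be tuned per server with the max_fails and fail_timeout parameters; the values below are only examples:

upstream www.myweb.com {
    # After 3 failed attempts within 30s, the server is taken out of rotation for 30s
    server 127.0.0.1:9100 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:9200 max_fails=3 fail_timeout=30s;
}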

(2) Add location in server module and configure proxy_pass

  • proxy_pass: http:// followed by the name defined after upstream, for example proxy_pass http://www.myweb.com
location /myweb {
	proxy_pass  http://www.myweb.com;
}

(3) Add configuration in location

When the Nginx reverse proxy is in place, request.getServerName() on the back end cannot obtain the proxied address (or domain name); you need to add the following directive in the location:

location /myweb {
	proxy_pass  http://www.myweb.com;
	proxy_set_header Host $host;  
}

Reference: www.phpfensi.com/php/2013112… Very detailed: blog.csdn.net/t8116189520…

3. Common load balancing policies

In practice, if the back-end servers have identical configurations, polling can be used; if their configurations differ, least connections is a simpler choice, or weights can be used after testing. As for session loss, SpringSession is used to solve it: editor.csdn.net/md?articleI… .

(1) Polling (default)

Polling is the default policy: each request is assigned to a different back-end server in turn, in chronological order. If a back-end server goes down, it is automatically removed.

upstream backserver {
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}

(2) Weight

Each request is distributed to the back-end servers in a certain proportion: the larger the weight, the larger the share of requests, and the access ratio is roughly the same as the weight ratio.

upstream backserver {
    server 192.168.0.14 weight=5;
    server 192.168.0.15 weight=2;
}

(3) the ip_hash

ip_hash is also called IP binding: each request is assigned according to the hash of the client IP address, so each client always reaches the same back-end server, which solves the session-loss problem.

upstream backserver {
    ip_hash;
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}

(4) Minimum connection

Web requests are forwarded to the server with the fewest connections

upstream backserver {
    least_conn;
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}

4. Other configurations

  • backup: the backup machine is requested only when all other non-backup machines are down:

    upstream backserver {
        server 127.0.0.1:9100;
        # The backup machine is requested only when all other non-backup machines are down
        server 127.0.0.1:9200 backup;
    }
  • down: marks the current server as down, so it does not participate in load balancing:

    upstream backserver {
        server 127.0.0.1:9100;
        # down marks the current server as down; it does not participate in load balancing
        server 127.0.0.1:9200 down;
    }

5. Static proxy

Having Nginx rather than Tomcat handle all static resource requests is called static proxying. Nginx is better at handling static resources, with better performance and higher efficiency, so in practice static resources such as images, CSS, HTML, and JS are handled by Nginx instead of Tomcat.

1. Method 1: Match by file suffix

Static resource requests are served from a directory on the Linux server, for example /opt/static:

location ~ .*\.(js|css|htm|html|gif|jpg|jpeg|png|bmp|swf|ico|rar|zip|txt|flv|mid|doc|ppt|pdf|xls|mp3|wma)$ {
    root /opt/static;
}

Description:

  • ~: indicates a regular-expression match, i.e. what follows is matched as a regex
  • The first dot '.' matches any single character
  • *: means the preceding character may be repeated zero or more times
  • \.: the backslash escapes the dot that follows, so it matches a literal '.'
  • |: means 'or'
  • $: marks the end of the string

Overall, the configuration means that requests for files whose suffix is one of those listed in the parentheses are handled by Nginx.

Note: when placing static resources in the directory, pay attention to directory permissions; if they are insufficient, grant them, otherwise a 403 error will occur.

2. Method 2: Match by directory

Static resource requests are served from a directory on the Linux server, for example /opt/static:

location ~ .*/(css|js|img|images) {
	root   /opt/static;
}

Overall, the configuration means that requests whose path contains one of the keywords in the parentheses are handled by Nginx.

Note: Nginx locates static resources on the principle "IP + port corresponds to root". For example, if a web page requests the image http://192.168.235.128/myweb/image/001.jpg, then with either method the request is forwarded to the Nginx server, and following this principle Nginx looks for the image at /opt/static/myweb/image/001.jpg.
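
A small sketch of that lookup, assuming the request above and the suffix-based location from Method 1:

location ~ .*\.(jpg|jpeg|png|gif)$ {
    root /opt/static;
    # The full request path is appended to root:
    # /myweb/image/001.jpg  ->  /opt/static/myweb/image/001.jpg
}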

6. Dynamic and static separation

Combining Nginx load balancing with static proxying gives dynamic and static separation, a common pattern in practice. Dynamic resources, such as JSP, are handled by Tomcat or another web server, while static resources such as images, CSS, and JS are served by Nginx. Each side focuses on what it does best, making full use of its strengths, which yields a more efficient and reasonable architecture.

A very common dynamic-static separation architecture in practice puts Nginx in front: it serves the static resources itself and load-balances the dynamic requests across the back-end servers.
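
A rough sketch of such a setup, assuming two Tomcat instances on ports 8080 and 9090 and static files under /opt/static:

upstream tomcat_servers {
    server 127.0.0.1:8080;
    server 127.0.0.1:9090;
}

server {
    listen 80;
    server_name www.myweb.com;

    # Static resources are served directly by Nginx
    location ~ .*\.(js|css|html|gif|jpg|jpeg|png)$ {
        root /opt/static;
    }

    # Dynamic requests are load-balanced across the Tomcat servers
    location / {
        proxy_pass http://tomcat_servers;
        proxy_set_header Host $host;
    }
}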

7. Virtual hosts

A domain name can only be bound to one machine (one IP). Virtual hosting divides one physical server into multiple "virtual" servers, so that a single physical machine can act as several servers, host multiple websites, and be bound to multiple domain names.

Nginx provides virtual hosting, so multiple websites with different domain names can run without installing Nginx more than once.

In Nginx, each server {} block is one virtual host. Virtual hosts are specified as server nodes in nginx.conf; to set up multiple virtual hosts, simply configure multiple server nodes.

1. Port-based virtual hosts

Port-based virtual hosts are distinguished by port; the browser accesses them with the same domain name plus different ports, or the same IP address plus different ports:

# When accessing nginx-host:8080 -> https://www.baidu.com
server {
    listen 8080;
    server_name www.myweb.com;
    location / {
        proxy_pass https://www.baidu.com;
    }
}

# When accessing nginx-host:9090 -> http://www.p2p.com
server {
    listen 9090;
    server_name www.myweb.com;
    location / {
        proxy_pass http://www.p2p.com;
    }
}

2. Domain-based virtual hosts

Domain-based virtual hosts are the most common type; the server_name directive under http.server is used to match the domain name.

server {
    listen 80;
    server_name www.myweb.com;
    location /myweb {
        proxy_pass http://www.myweb.com;
    }
}

server {
    listen 80;
    server_name www.p2p.com;
    location /myweb {
        proxy_pass http://www.p2p.com;
    }
}

(1) server_name matching rules

The server_name directive configures name-based virtual hosts. When a request arrives, Nginx compares the host name in the request against the server_name of each server block to decide which one handles it, matching in the following order; if nothing matches, the first server block is used by default.

1. Exact name match:

server {
     listen       80;
     server_name  domain.com www.domain.com;
     ...
}

2. Wildcard names beginning with an asterisk (*):

server {
     listen       80;
     server_name  *.domain.com;
     ...
}

3. Wildcard names ending with an asterisk (*):

server {
     listen       80;
     server_name  www.*;
     ...
}

4. Regular-expression match:

server {
     listen       80;
     server_name  ~^(?<subdomain>.+)\.domain\.com$;
     ...
}

3. Forwarding rules

server {
    listen       8080;
    server_name  localhost;

    location /t1 {
            proxy_pass http://10.211.55.2:10000;
    }
    location /t2 {
            proxy_pass http://10.211.55.2:10000/;
    }
    location /t3 {
            proxy_pass http://10.211.55.2:10000/index;
    }
    location /t4 {
            proxy_pass http://10.211.55.2:10000/index/;
    }
    location /t5/index {
            proxy_pass http://10.211.55.2:10000;
    }
    location /t6/index {
            proxy_pass http://10.211.55.2:10000/;
    }
    location /t7/index {
            proxy_pass http://10.211.55.2:10000/index;
    }
    location /t8/index {
            proxy_pass http://10.211.55.2:10000/index/;
    }
}

Note that the proxy_pass settings in t1 and t5 do not specify a URI, while the other proxied addresses do (/, /index, and /index/ respectively).

  • If there is a URI in the proxy server address, this URI replaces the part of the URI that the location matches.

  • If there is no URI in the proxy server address, the full request URL will be forwarded to the proxy server.

In the preceding configuration, the corresponding request address and forwarding address are as follows:

localhost:8080/t1        -->> http://10.211.55.2:10000/t1
localhost:8080/t1/       -->> http://10.211.55.2:10000/t1/
localhost:8080/t1/index  -->> http://10.211.55.2:10000/t1/index
localhost:8080/t2        -->> http://10.211.55.2:10000/
localhost:8080/t2/       -->> http://10.211.55.2:10000//
localhost:8080/t2/index  -->> http://10.211.55.2:10000//index
localhost:8080/t2index   -->> http://10.211.55.2:10000/index
localhost:8080/t3        -->> http://10.211.55.2:10000/index
localhost:8080/t3/       -->> http://10.211.55.2:10000/index/
localhost:8080/t3/index  -->> http://10.211.55.2:10000/index/index
localhost:8080/t4        -->> http://10.211.55.2:10000/index/
localhost:8080/t4/       -->> http://10.211.55.2:10000/index//
localhost:8080/t4/index  -->> http://10.211.55.2:10000/index//index
localhost:8080/t5/index  -->> http://10.211.55.2:10000/t5/index
localhost:8080/t6/index  -->> http://10.211.55.2:10000/
localhost:8080/t7/index  -->> http://10.211.55.2:10000/index
localhost:8080/t8/index  -->> http://10.211.55.2:10000/index/

The root directive of location when configuring a virtual host

server {
    listen       8080;
    server_name  localhost;

    location /health {
        root   html;
        index  index.html index.htm;
    }
}
  • The parameter of location is the URI to match
  • The root parameter inside location means:
    • If its value does not start with /, it is relative to the Nginx installation directory; for example the default html is the html directory under the Nginx installation home
    • If its value starts with /, it is an absolute path from the Linux root (/)

The matched path is appended to the root path. For example, if my Nginx IP is 192.168.0.1, the port is 80, the location is /aa, and root is /opt/web, then for the request http://192.168.0.1:80/aa/index.html Nginx will serve the static file /opt/web/aa/index.html.

# The site is http://www.ip:80/aa; paths beginning with /aa are matched by this location
location /aa {
    # root configures the site root directory; the default is the html directory under the Nginx installation home
    # Files are looked up under /opt/web/aa
    root   /opt/web;
}

Note: for local testing, map the domain names to the Nginx server IP in the hosts file (on Windows: C:\Windows\System32\drivers\etc\hosts), for example: 192.168.208.128 www.myweb.com and 192.168.208.128 www.p2p.com.

8. Splitting the configuration file

1. Use include to bring in a separate configuration file

You can configure multiple server {} blocks inside the http {} block of nginx.conf, or you can put them in a separate file and include that virtual host configuration:

include /usr/local/nginx/conf/vhost/vhost.conf;

Add the include line near the end of the http {} section, alongside the other server blocks.

With the files split out, the configuration is clearer and the main file is not cluttered with so many server blocks.

2. Use include to import every configuration file in a folder

http {
    ......
    # Add the following line inside the http block, then write the virtual host configurations in the conf.d folder
    include /etc/nginx/conf.d/*.conf;
}

Files in the conf.d directory are applied in alphabetical order; within a single file, configuration is applied from top to bottom.
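
For example, a hypothetical /etc/nginx/conf.d/myweb.conf picked up by the include above could contain a single virtual host:

server {
    listen 80;
    server_name www.myweb.com;
    location / {
        proxy_pass http://127.0.0.1:9100;
    }
}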

9. Some configurations encountered in practice

1. A Vue project deployed on Nginx returns 404 when the page is refreshed

Add the following configuration to the nginx configuration file:

location / {
    root   ...
    index  ...
    try_files $uri $uri/ /index.html;   # Fixes the 404-on-refresh problem
}

How it works: when the requested URI cannot be found, Nginx falls back to index.html. Since Vue is a single-page application, the entry point is always index.html, and whatever route is accessed is handled by Vue itself.

2. Some variables in Nginx

When accessing the background, set some request headers:

location /prod-api/ {
    proxy_set_header Host $http_host;                                # host:port
    proxy_set_header X-Real-IP $remote_addr;                         # Client IP address
    proxy_set_header REMOTE-HOST $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;     # Chain of real client IPs of the HTTP request
    proxy_pass http://10.124.128.201:8084/;
}

A collection of variables in the NGINX conf configuration file

3. Configure the HTTPS certificate

Prerequisite: Nginx must have been built with the SSL module (--with-http_ssl_module).

server {
    listen 443 ssl;                       # Newer versions write it this way
    server_name www.domain.com;           # The domain name the certificate is bound to
    ssl_certificate /ssl/ssl.pem;         # Location of the certificate, absolute path (.crt files also work)
    ssl_certificate_key /ssl/ssl.key;     # Absolute path of the private key
    ssl_session_timeout 5m;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;  # Configure the protocols like this
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;  # Configure the cipher suites like this
    ssl_prefer_server_ciphers on;
    location / {
        proxy_pass http://10.124.128.201:8084/;
    }
}

4. Redirect HTTP requests to HTTPS

There is a problem with this configuration that I have not solved: Nginx listens for HTTP on port 8888 and returns 301 to the browser, which then redirects to the specified HTTPS address. For POST requests the redirect is followed as a GET request, so the request body is lost.

server {
        listen       8888;
        server_name  forward;

        location /access {
            # Requesting nginx-host:8888/access/index -> https://10.124.128.198:8081/dataproducer/index
            rewrite ^(.*)$ https://10.124.128.198:8081/dataproducer permanent;
        }
}
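
A possible workaround (a sketch only): return 307 or 308 instead of 301, since those status codes require the client to repeat the request with the same method and body:

server {
        listen       8888;
        server_name  forward;

        location /access {
            # 307/308 preserve the request method and body; adjust the target path mapping as needed
            return 307 https://10.124.128.198:8081/dataproducer$request_uri;
        }
}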

5. Configure the forward proxy

For HTTPS forward proxying, Nginx needs the third-party module ngx_http_proxy_connect_module.

Use NGINX as the HTTPS forward proxy server

  1. Download the module package: get the ZIP directly from GitHub.

  2. Decompress and configure

    > unzip ~/pkg/ngx_http_proxy_connect_module-master.zip

    # The following commands are run in the unpacked Nginx source directory
    # Choose the .patch file that matches your Nginx version; see the module's documentation
    > patch -p1 < /home/admin/pkg/ngx_http_proxy_connect_module-master/patch/proxy_connect_rewrite_1018.patch
    # --add-module points to the location of the unpacked module package
    > ./configure --add-module=/home/admin/pkg/ngx_http_proxy_connect_module-master
    > make && make install
  3. Modify the nginx configuration file

    server {
        listen                         8080;

        # Configure the DNS server addresses
        resolver                       8.8.8.8 114.114.114.114;

        # Forward proxy for CONNECT requests
        proxy_connect;
        proxy_connect_allow            443 563;
        proxy_connect_connect_timeout  10s;
        proxy_connect_read_timeout     10s;
        proxy_connect_send_timeout     10s;

        # Destination of the forward proxy
        location / {
            proxy_pass http://$host;
            proxy_set_header Host $host;
        }
    }

    To view the DNS server:

    1. cat /etc/resolv.conf

      > cat /etc/resolv.conf
      nameserver 202.106.0.20
      nameserver 202.106.196.115
    2. nslookup www.baidu.com

      > nslookup www.baidu.com
      Server:		202.106.0.20
      Address:	202.106.0.20#53

Test: Use the -x parameter to specify the proxy server address, and the -v parameter to print the details

curl https://www.baidu.com -v -x localhost:8080

6. Security-related configuration

(1) Configure the Nginx account login lock policy

You are advised to start the Nginx service as a non-root user (such as nginx or nobody) and to ensure that this user is locked. Run passwd -l <user>, for example passwd -l nginx, to lock the user that starts the Nginx service; run passwd -S <user>, for example passwd -S nginx, to view the user's status. In the configuration file, set the Nginx startup user to nginx or nobody, for example: user nobody;. If you run Nginx in Docker, you can ignore this item (or add a whitelist).

User related operations:

useradd nginx      # Add the user
passwd nginx       # Set / change the password
su nginx           # Switch to the user
passwd -l nginx    # Lock the user
passwd -u nginx    # Unlock the user
passwd -S nginx    # View the user's status

(2) Enable non-root nginx to listen on ports below 1024

A non-root Nginx cannot listen on ports below 1024; run the following command to allow it:

# sudo setcap cap_net_bind_service=+ep <path to the nginx binary>
sudo setcap cap_net_bind_service=+ep /usr/local/nginx/sbin/nginx

(3) Hide specified headers returned by back-end services behind Nginx

1. Open the conf/nginx.conf configuration file. 2. Configure proxy_hide_header in the http block, adding or modifying:

proxy_hide_header X-Powered-By;
proxy_hide_header Server;

(4) Hide the Nginx service banner (version information)

1. Open the conf/nginx.conf configuration file. 2. Add server_tokens off; in the server (or http) block. If your configuration file is located elsewhere (for example /etc/nginx/nginx.conf), adjust the path accordingly.

(5) Nginx SSL protocol uses TLSv1.2

1. Open the conf/nginx.conf configuration file (or the corresponding file included from the main configuration). 2. Configure:

server {
    ...
    ssl_protocols TLSv1.2;
    ...
}

Note: when configuring this, make sure Nginx supports OpenSSL. Run nginx -V; if "built with OpenSSL" appears in the output, OpenSSL is supported. If it does not, or if older clients still need to connect, you may need to configure ssl_protocols TLSv1 TLSv1.1 TLSv1.2 instead.