Nginx version: nginx-1.12.2.tar.gz

This article covers the following topics.

Introduction to Nginx

1. What is Nginx and what can it do?

Nginx is a high-performance HTTP and reverse proxy server capable of handling high concurrency and heavy load. Some reports claim it can support up to 50,000 concurrent connections; in practice, 20,000 to 30,000 is a more realistic figure.

2. Forward proxy

A forward proxy is configured on the client side in order to access a specified website (the proxy acts on behalf of the client).
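As a concrete illustration of client-side configuration (the proxy address 192.168.1.100:3128 here is hypothetical, not from the original article), a Linux client can point its HTTP traffic at a forward proxy through environment variables:

```shell
# Hypothetical forward proxy address; every HTTP(S) request made by tools in
# this shell session is now sent to the proxy, which fetches the target site
# on the client's behalf.
export http_proxy=http://192.168.1.100:3128
export https_proxy=http://192.168.1.100:3128
# curl http://www.example.com   # this request would now go through the proxy
echo "$http_proxy"
```

Tools such as curl and wget honor these variables; the target website only ever sees the proxy server's address, not the client's.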

3. Reverse proxy

A reverse proxy exposes the proxy server's address and hides the real server's IP address (the proxy acts on behalf of the server).

4. Load balancing

Load balancing means increasing the number of servers and distributing incoming requests across them, rather than concentrating all requests on a single server. Spreading the load over multiple servers in this way is what we call load balancing.

5. Static/dynamic separation

Nginx installation

1. Preparation

(1) Start the VM and use a remote connection tool to connect to the Linux OS.

(2) Download the software from the official nginx website: http://nginx.org/

2. Install nginx on the Linux server

(1) Install the PCRE dependency (method one):

Step 1: Download the PCRE archive to your PC, then connect to the remote server and upload it, or download it directly on the server:

wget http://downloads.sourceforge.net/project/pcre/pcre/8.37/pcre-8.37.tar.gz

Step 2: Decompress the archive:

tar -zxvf pcre-8.37.tar.gz

Step 3: Enter the decompressed pcre-8.37 directory and run ./configure. After it completes, run make && make install.

Step 4: Run pcre-config --version to check whether PCRE was installed successfully.

(2) Install the PCRE dependency (method two):

yum install pcre

(3) Install the OpenSSL, zlib, and GCC dependencies:

yum -y install make zlib zlib-devel gcc-c++ libtool openssl openssl-devel

(4) Install nginx

Step 1: Download the nginx archive to your computer from the official nginx website: http://nginx.org/

Step 2: Connect to the remote server, upload the nginx archive to the server, and decompress it.

Execute the command: tar -xvf nginx-1.12.2.tar.gz

Step 3: Enter the decompressed nginx-1.12.2 directory and run ./configure. After it completes, run make && make install to finish the nginx installation.

Step 4: After installing nginx, you need to know which files are on your system and where they were installed. If nginx was installed from an RPM package, you can check with rpm -ql nginx, where rpm is the Linux RPM package management tool, -q stands for query mode, and -l returns the file list, so this shows all nginx installation locations.

Step 5: Go to the nginx sbin directory with cd /usr/local/nginx/sbin, find the nginx startup script there, and run ./nginx to start nginx.

Step 6: Execute ps -ef | grep nginx to check whether an nginx process exists. If it does, nginx started successfully.

Step 7: Use a browser to access nginx: enter 127.0.0.1 in the address bar to view the default welcome page.



If the page cannot be reached, port 80 may not be open on the server. Open it with firewalld:

  • View open port numbers: firewall-cmd --list-all
  • Open a port: firewall-cmd --add-service=http --permanent and firewall-cmd --add-port=80/tcp --permanent
  • Reload the firewall: firewall-cmd --reload

Nginx commands and configuration files

Nginx common commands

To use the nginx command, you must enter the nginx directory

cd /usr/local/nginx/sbin

1. Check the nginx version: ./nginx -v
2. Check nginx process state: ps -ef | grep nginx
3. Start nginx: ./nginx
4. Stop nginx: ./nginx -s stop
5. Stop nginx gracefully: ./nginx -s quit
6. Reload the nginx configuration: ./nginx -s reload
7. Verify that nginx started successfully: ps -ef | grep nginx

Nginx configuration file

Nginx configuration file location

1. Run rpm -ql nginx to query the location of the nginx configuration file


2. Configuration file location: /usr/local/nginx/conf/nginx.conf

The content in the configuration file consists of three parts: the global block, the Events block, and the HTTP block (including the HTTP block itself and the Server block).
The contents of the nginx.conf file are as follows:
user  nginx;                  # user that runs the worker processes (default: nginx)
worker_processes  1;          # number of worker processes

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;     # maximum connections per worker
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;                          # enable gzip

    include /etc/nginx/conf.d/*.conf;   # include sub-configuration files
}
The last line of nginx.conf, include /etc/nginx/conf.d/*.conf;, pulls in sub-configuration files. Let's open the included directory and look at the default.conf file. Its contents are as follows:
server {
    listen       80;
    server_name  localhost;              # configure the domain name

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;     # default files to serve
    }

    #error_page  404              /404.html;    # configure the 404 page

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;    # page shown for these error status codes (restart after changes)
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    # ~ \.php$ is a regular expression matching all files ending in .php
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;   # reverse proxy
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;     # deny means access is denied
    #    allow all;     # allow means access is permitted
    #}
}

Nginx configuration example 1: reverse proxy

Reverse proxy directive: proxy_pass

server {
    listen 80;                            # port the browser connects to
    # server_name can be a domain name or the address the browser accesses
    # server_name 192.168.191.34;

    # location / {
    #     proxy_pass http://123.com;              # reverse proxy target, can be a domain name
    #     proxy_pass http://192.168.23.45:8080;   # reverse proxy target
    # }

    # ~ /edu/ is a case-sensitive regular expression matching paths that contain /edu/
    location ~ /edu/ {
        proxy_pass http://192.168.23.45:8081;     # reverse proxy target
    }

    location ~ /vod/ {
        proxy_pass http://192.168.23.45:8082;     # reverse proxy target
    }
}

There are also some other common reverse proxy directives, which I have listed here:

  • proxy_set_header: modifies the request headers received from the client before the request is sent to the backend server

  • proxy_connect_timeout: the timeout for Nginx to establish a connection with the backend proxied server

  • proxy_read_timeout: the timeout for Nginx to wait for a read from the backend server group

  • proxy_send_timeout: the timeout for Nginx to wait for a write to the backend server group

  • proxy_redirect: modifies the Location and Refresh headers in responses returned by the backend server

  • For more information about proxy directives: www.nginx.cn/doc/mail/ma…
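A minimal sketch of how these directives might sit together inside a location block; the backend address 127.0.0.1:8080 and the timeout values are illustrative assumptions, not taken from the original article:

```nginx
location / {
    proxy_pass            http://127.0.0.1:8080;   # hypothetical backend server
    proxy_set_header      Host $host;              # pass the original Host header to the backend
    proxy_set_header      X-Real-IP $remote_addr;  # pass the real client IP to the backend
    proxy_connect_timeout 10s;                     # max time to establish the backend connection
    proxy_read_timeout    60s;                     # max time waiting to read from the backend
    proxy_send_timeout    60s;                     # max time waiting to write to the backend
    proxy_redirect        default;                 # rewrite Location/Refresh headers from the backend
}
```

Note that proxy_set_header is commonly used to forward the client's real IP, since the backend otherwise only sees the proxy's address.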

Nginx configuration example 2: load balancing


Load balancing is implemented in HTTP blocks and server blocks

  • The http block is configured as follows:

http {
    # ...
    upstream myserver {
        server 115.28.52.63:8080;
        server 115.28.52.63:8081;
    }
}
  • The server block is configured as follows:

server {
    location / {
        # ... other default directives omitted
        proxy_pass http://myserver;    # myserver is the name of the upstream defined above
        proxy_connect_timeout 10;
    }
}
  • Nginx provides several common load balancing strategies.

1. Round robin (default)

Each request is assigned to a different backend server in turn, in chronological order. If a backend server goes down, it is automatically removed.

2. weight

weight sets the server's weight; the default is 1. The higher the weight, the more requests the server is assigned.

upstream server_pool {
    server 192.168.5.21 weight=10;   # higher weight means more requests assigned (default weight is 1)
    server 192.168.5.22 weight=10;
}

3. ip_hash

Each request is assigned according to a hash of the client's IP address, so each visitor always reaches the same backend server. This can solve the session-sharing problem. For example:

upstream server_pool {
    ip_hash;
    server 192.168.5.21:80;
    server 192.168.5.22:80;
}

4. fair (third party)

Requests are assigned based on the backend servers' response times; servers with shorter response times are given priority.

upstream server_pool {
    server 192.168.5.21:80;
    server 192.168.5.22:80;
    fair;
}

Nginx configuration example 3: static/dynamic separation

1. What is static/dynamic separation?

Static/dynamic separation simply means handling dynamic and static requests separately; it should not be understood merely as physically separating dynamic pages from static pages. In practice, Nginx serves the static pages while Tomcat handles the dynamic requests.

In terms of current implementations, static/dynamic separation can be roughly divided into two schemes:

  • One is to put purely static files under a separate domain name on an independent server; this is currently the mainstream recommended scheme

  • The other is to keep dynamic and static files mixed together and publish them separately through Nginx

2. Static/dynamic separation configuration

  • On the Linux system, prepare the static resources to be accessed: create a folder named data


Configure the server block in the nginx configuration file as follows:

server {
    listen       80;
    server_name  192.168.17.129;

    #charset koi8-r;
    #access_log  log/host.access.log  main;

    location /www/ {
        root   /data/;                  # static pages live under /data/www
        index  index.html index.htm;
    }

    location /image/ {
        root   /data/;                  # images live under /data/image
        autoindex on;                   # list the directory contents
    }
}

Nginx high availability cluster

1. What is Nginx high availability?

Even if the master nginx server goes down, requests can still be served successfully.

(1) Two servers with Nginx are required

(2) The Keepalived software is required. Keepalived performs the failover: while the master nginx is alive it is used; if it dies, traffic switches to the backup nginx on the secondary server. Throughout this process, Keepalived provides a virtual IP address that does not belong to any single machine, and clients access the service through this virtual IP.

(3) Virtual IP is required

2. Prepare for configuring high availability

(1) Two servers 192.168.17.129 and 192.168.17.131 are required

(2) Install the Nginx software on both servers

How to install nginx

(3) Install Keepalived software on both servers

(1) Install Keepalived with yum install keepalived (this installs, for example, keepalived.x86_64 0:1.3.5-8.el7_6.55).

(2) Check the installation with rpm -q -a keepalived.

(3) After installation, the keepalived directory is created under /etc, containing the configuration file keepalived.conf.

3. Complete the ha configuration (master/slave configuration, both servers must be configured)

(1) Mainly modify the /etc/keepalived/keepalived.conf configuration file:

global_defs {
    notification_email {
        [email protected]
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 192.168.17.129
    smtp_connect_timeout 30
    router_id LVS_DEVEL        # host name of the server; names can be added in the /etc/hosts file
}

vrrp_script chk_http_port {
    script "/usr/local/src/nginx_check.sh"   # detection script
    interval 2                 # script execution interval, in seconds
    weight 2                   # weight
}

# virtual IP configuration
vrrp_instance VI_1 {
    state BACKUP               # MASTER on the master server; change to BACKUP on the backup server
    interface ens33            # network interface
    virtual_router_id 51       # must be the same on master and backup
    priority 90                # the master gets a higher priority (e.g. 100), the backup a lower one (e.g. 90)
    advert_int 1               # heartbeat interval, in seconds
    authentication {
        auth_type PASS         # authentication type
        auth_pass 1111         # authentication password
    }
    virtual_ipaddress {
        192.168.17.50          # VRRP virtual IP address
    }
}

(2) Add the check script file nginx_check.sh to /usr/local/src

/usr/local/nginx/sbin/nginx is the nginx startup path.
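The article does not reproduce the script itself; the following is a typical nginx_check.sh used with this kind of Keepalived setup, a sketch assuming the source-installed nginx binary at /usr/local/nginx/sbin/nginx (written to /tmp here for illustration; for real use, place it at /usr/local/src/nginx_check.sh):

```shell
# Write the health-check script to a file and make it executable.
cat > /tmp/nginx_check.sh <<'EOF'
#!/bin/bash
# Count running nginx processes.
A=$(ps -C nginx --no-header | wc -l)
if [ "$A" -eq 0 ]; then
    # nginx is down: try to restart it once.
    /usr/local/nginx/sbin/nginx
    sleep 2
    if [ "$(ps -C nginx --no-header | wc -l)" -eq 0 ]; then
        # Restart failed: kill keepalived so the virtual IP fails over to the backup server.
        killall keepalived
    fi
fi
EOF
chmod +x /tmp/nginx_check.sh
```

When the script kills keepalived, the VRRP heartbeat stops and the backup server takes over the virtual IP.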

(3) Start nginx and Keepalived on both servers

Start nginx with ./nginx; start Keepalived with systemctl start keepalived.service

4. Final test

1) Enter the virtual IP address 192.168.17.50 in the address box of the browser

2) Stop nginx and Keepalived on the main server (192.168.17.129), then enter 192.168.17.50 again to see that the backup server still serves the page

A simple analysis of the principle of Nginx

1. How Nginx works

Through a master process and worker processes.

2. How does worker work?

By default, there is one master and multiple workers. When a request arrives, the master receives it first and notifies the workers of the new request; the workers then compete for the request, and the winning worker processes it.

3. Advantages of one master and multiple workers

(2) Each worker is an independent process. If one worker has a problem, the other workers continue to compete for and process requests independently, so service is not interrupted.

4. Setting the appropriate number of workers

It is best if the number of workers equals the number of CPUs on the server.
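In nginx.conf this is controlled by the worker_processes directive; a small sketch (the value 4 is an assumed example for a 4-CPU server):

```nginx
worker_processes  4;       # e.g. on a server with 4 CPUs
# or let nginx detect the number of CPU cores itself:
# worker_processes  auto;
```

The auto value tells nginx to size the worker pool to the detected CPU count.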

5. worker_connections

(1) First question: how many connections does a worker use to handle a request?

Answer: two or four.

For static resources, only the client and nginx are involved, so 2 connections are used; when nginx acts as a reverse proxy (for example, forwarding the request to a backend that queries the database), nginx also opens a connection to the backend, so 4 connections are used.

(2) Second question: nginx has one master and four workers, and each worker supports a maximum of 1024 connections. What is the maximum number of concurrent requests supported?

For static access, the maximum concurrency is worker_connections * worker_processes / 2, that is, the maximum number of connections per worker times the number of workers, divided by 2.

With HTTP as a reverse proxy, the maximum concurrency is worker_connections * worker_processes / 4, that is, the maximum number of connections per worker times the number of workers, divided by 4.
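The two formulas can be checked with a quick calculation, using the example above of four workers with 1024 connections each:

```shell
worker_processes=4
worker_connections=1024

# Static access: each request uses 2 connections (client <-> nginx).
echo $(( worker_connections * worker_processes / 2 ))   # prints 2048

# Reverse proxy: each request uses 4 connections (client <-> nginx <-> backend).
echo $(( worker_connections * worker_processes / 4 ))   # prints 1024
```

So this configuration supports at most 2048 concurrent static requests, or 1024 concurrent proxied requests.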

Further reading: other articles about Nginx

Links: jspang.com/detailed?id…