1. What is Nginx?

Nginx ("Engine X") is a high-performance HTTP and reverse proxy server, characterized by low memory usage and strong concurrent capability. In practice it performs better than other web servers of the same type.

Nginx was developed specifically with performance optimization in mind: performance is its most important consideration and the implementation is very efficient. Nginx can withstand high loads, and reports indicate that it can support up to 50,000 concurrent connections.

2. Nginx basic functionality

  • Serving ==static files==, index files and automatic indexing;
  • ==Reverse proxy== acceleration (no caching), simple load balancing and fault tolerance;
  • FastCGI, simple ==load balancing== and fault tolerance;
  • Modular structure. Filter modules include gzipping, byte ranges, chunked responses and the SSI filter; in the SSI filter, multiple sub-requests to the same proxy or FastCGI backend are processed concurrently (a gzip example is sketched after this list);
  • SSL and TLS SNI support.
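
As an illustration of the filter modules above, the gzip filter can be enabled with a few directives in the http block. This is a minimal sketch with illustrative values, not a configuration taken from the cases later in this article:

gzip            on;                 # enable the gzip filter
gzip_min_length 1k;                 # only compress responses larger than 1 KB
gzip_types      text/plain text/css application/javascript;   # MIME types to compress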

3. Nginx installation

  • Installing pcre

    sudo yum install pcre

    Or download the source package and install it manually:

    ## Take pcre-8.44.tar.gz as an example
    ## After downloading, upload the package to the server and unzip it
    tar -xvf pcre-8.44.tar.gz
    ## Go to the decompressed directory
    cd pcre-8.44
    ## Install CentOS compiler dependencies
    yum -y update
    yum install gcc gcc-c++ kernel-devel make
    ## Run ./configure
    ./configure
    ## Compile and install
    make && make install
  • Install additional dependencies

    sudo yum -y install gcc zlib zlib-devel pcre-devel openssl openssl-devel
  • Install nginx

    ## Download the nginx source package nginx-1.18.0.tar.gz and upload it to the server
    ## Unzip it
    tar -xvf nginx-1.18.0.tar.gz
    ## Go to the decompressed directory
    cd nginx-1.18.0
    ## Run ./configure
    ./configure
    ## Compile and install
    make && make install

    After the installation is successful, the nginx folder is generated under the /usr/local/ directory

    Go to /usr/local/nginx/sbin/ and run ./nginx

    After running successfully, you can access it through the web page.

Note:

If you want to access the nginx server from another client, open the port on the nginx server first; otherwise, other hosts cannot access it.

## Check the open ports
sudo firewall-cmd --list-all

## Open the port
# sudo firewall-cmd --add-service=http --permanent
sudo firewall-cmd --add-port=80/tcp --permanent

## Reload the firewall
sudo firewall-cmd --reload

You can now access it from another host via the server's IP address.

4. Nginx common commands

To use the nginx commands, first go to the nginx sbin directory:
cd /usr/local/nginx/sbin

## Check the nginx version
./nginx -v

## start nginx
sudo ./nginx

## Stop nginx
sudo ./nginx -s stop

## Reload the nginx configuration file
sudo ./nginx -s reload

5. Nginx configuration files

  • Path to the nginx configuration file

    /usr/local/nginx/conf/nginx.conf
  • Nginx configuration file composition

    It is mainly divided into three parts: the global block, the events block, and the http block.

    • Global block

      This part runs from the start of the configuration file to the events block and contains directives that affect the overall operation of the Nginx server. For example:

      worker_processes 1;  ## Key setting for nginx's concurrent processing: the larger the value, the more concurrency can be handled (limited by hardware, software and other constraints).

    • events block

      The directives in this part mainly affect the network connections between the Nginx server and its users. For example:

      worker_connections 1024; ## Maximum number of connections supported by each worker process. This setting has a significant impact on nginx performance and should be tuned in practice.

    • HTTP block

      This is the most heavily configured part of the Nginx server: most functionality, such as proxying, caching and log definitions, as well as third-party module configuration, lives here. It is further divided into the http global block and server blocks.

      • HTTP global block

        Its directives include file includes, MIME-type definitions, log definitions, connection timeouts, the maximum number of requests per connection, and so on.

      • Server block

        Closely related to virtual hosts; it is further divided into a server global block and location blocks.

        The server global block mainly configures the virtual host's listening port and its name or IP address. A server block can contain multiple location blocks; a location block matches the part of the request string other than the virtual host name (the request URI) and processes specific requests. A minimal skeleton of the whole configuration file is sketched below.
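
    The following minimal skeleton (directive values are illustrative defaults, not taken from the cases below) shows how these parts fit together in /usr/local/nginx/conf/nginx.conf:

    worker_processes  1;              # global block

    events {
        worker_connections  1024;     # events block
    }

    http {                            # http block
        include       mime.types;     # http global block: MIME-type definitions
        keepalive_timeout  65;        # connection timeout

        server {                      # server block: one virtual host
            listen       80;          # server global block: listening port
            server_name  localhost;   # virtual host name or IP

            location / {              # location block: matches the request URI
                root   html;
                index  index.html index.htm;
            }
        }
    }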

6. Reverse proxy

Definition

  • Forward proxy

    If a user needs to access the Google website (www.google.com) but cannot reach it directly, the user sets a proxy server in the browser and accesses Google through that proxy server. This is called a forward proxy.

  • Reverse proxy

    With a reverse proxy, the proxy server and the target server appear to the outside as one server: only the proxy server's IP address and port are exposed, while the real server's IP address and port are hidden. The user simply sends requests to the proxy server, which distributes them to the target server; after the target server returns the data, the proxy server returns it to the client.

Case 1

  • Purpose: when the user accesses http://www.123.com, the request goes directly to 127.0.0.1:8080;

  • Steps:

    • Install tomcat

      Download Tomcat and upload the file to the server: apache-tomcat-9.0.41.tar.gz

      ## Unzip the installation file
      tar -xvf apache-tomcat-9.0.41.tar.gz
      ## Go to the Tomcat directory and run Tomcat
      cd apache-tomcat-9.0.41
      cd bin/
      ./startup.sh
      ## Open port 8080
      sudo firewall-cmd --add-port=8080/tcp --permanent
      sudo firewall-cmd --reload

      Test by accessing port 8080 in a browser.

    • Configure the mapping between the domain name www.123.com and the IP 192.168.106.128 on Windows by adding the line 192.168.106.128 www.123.com to the C:\Windows\System32\drivers\etc\hosts file.

      After the configuration is complete, you can access the server using the domain name:

    • Modify the nginx configuration file /usr/local/nginx/conf/nginx.conf; a sketch of the relevant server block follows:
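
      The configuration screenshot is not reproduced here; a minimal sketch of the relevant server block, based on this case's description (listen on port 80 and forward everything to the local Tomcat), would be:

      server {
          listen       80;
          server_name  192.168.106.128;           # the server's IP, resolved from www.123.com

          location / {
              proxy_pass http://127.0.0.1:8080;   # forward requests to Tomcat
          }
      }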

      After the modification is complete, save and exit, then reload the nginx configuration file:

      cd /usr/local/nginx/sbin/
      sudo ./nginx -s reload

      After the reload is complete, you can access port 80 in your browser:

Case 2

  • Objective: requests to http://192.168.106.128:9001/edu/ are forwarded to 127.0.0.1:8080;

    requests to http://192.168.106.128:9001/vod/ are forwarded to 127.0.0.1:8081.

  • Steps:

    • Prepare two Tomcats: one on port 8080 and the other on port 8081

    • Prepare recognizable HTML pages

      ## Go to the webapps directory of the Tomcat on port 8080
      cd /usr/src/tomcat8080/apache-tomcat-9.0.41/webapps/
      ## Create the edu directory
      sudo mkdir edu && cd edu
      ## Create a.html and add a tag such as <h1>8080!!</h1>
      sudo vim a.html

      After the configuration is complete, you can access the following information in the browser:

      ## Go to the webapps directory of the Tomcat on port 8081
      cd /usr/src/tomcat8081/apache-tomcat-9.0.41/webapps/
      ## Create the vod directory
      sudo mkdir vod && cd vod
      ## Create a.html and add a tag such as <h1>8081!!</h1>
      sudo vim a.html

      After the configuration is complete, you can access the following information in the browser:

    • Configure the nginx configuration file /usr/local/nginx/conf/nginx.conf; a sketch of the relevant server block follows.
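
      The configuration screenshot is not reproduced here; a minimal sketch matching this case's description (listen on 9001, route /edu/ and /vod/ to the two Tomcats) would be:

      server {
          listen       9001;
          server_name  192.168.106.128;

          location ~ /edu/ {                      # the ~ (regex) modifier is one option; see the notes below
              proxy_pass http://127.0.0.1:8080;   # Tomcat on port 8080
          }

          location ~ /vod/ {
              proxy_pass http://127.0.0.1:8081;   # Tomcat on port 8081
          }
      }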

      The modifiers that can appear after location mean the following:

      • ~ : the URI contains a regular expression and matching is case sensitive.
      • ~* : the URI contains a regular expression and matching is case insensitive.
      • = : used for URIs without a regular expression; the request string must match the URI exactly.
      • ^~ : used for URIs without a regular expression; Nginx uses the location whose prefix best matches the request string.

      After the modification is complete, reload the nginx configuration file

      cd /usr/local/nginx/sbin
      sudo ./nginx -s reload

      Open ports 9001, 8080, and 8081:

      sudo firewall-cmd --add-port=9001/tcp --permanent
      sudo firewall-cmd --add-port=8080/tcp --permanent
      sudo firewall-cmd --add-port=8081/tcp --permanent
      sudo firewall-cmd --reload

      After reloading, you can test it in a browser:

7. Load balancing

define

When a single server can no longer handle the load, we increase the number of servers and distribute requests across them instead of concentrating them on a single server. Distributing the load across different servers is load balancing.

Load balancing allocates load to different service units to ensure availability and fast responses, giving users a good experience.

Case

  • Objective: requests to http://192.168.106.128:9001/edu/a.html are distributed (load-balanced) to 127.0.0.1:8080 or 127.0.0.1:8081;

  • Steps:

    • Prepare two Tomcats, one on port 8080 and one on port 8081 (==refer to reverse proxy case 2==), and add the edu/a.html path and file under tomcat8081/webapps/. The contents of a.html are as follows:

      <h1>8081!!!!! Hello world</h1>
    • After the configuration is complete, start both Tomcat servers:

      /usr/src/tomcat8080/apache-tomcat-9.0.41/bin/startup.sh
      /usr/src/tomcat8081/apache-tomcat-9.0.41/bin/startup.sh

    • Configure the nginx configuration file (the server block that references the upstream group is sketched after this snippet):

      upstream myserver {
          server 192.168.106.128:8080;
          server 192.168.106.128:8081;
      }
      
      ## The server assigns requests to the target servers according to their weights
      upstream myserver {
          server 192.168.106.128:8080 weight=1;
          server 192.168.106.128:8081 weight=1;
      }
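
      For the upstream group to take effect, a server block has to proxy requests to it by name. A minimal sketch, assuming the listening port from this case's objective (9001):

      server {
          listen       9001;
          server_name  192.168.106.128;

          location / {
              proxy_pass http://myserver;   # forward to the upstream group defined above
          }
      }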

    • Reload the nginx configuration file

      cd /usr/local/nginx/sbin
      sudo ./nginx -s reload
    • After reloading, you can test it in a browser:

      First Visit:

      Second visit:

Load Balancing Allocation Mode (Policy)

  • Round robin (default)

    Each request is assigned to a different back-end server in turn, in chronological order. If a back-end server goes down, it is automatically removed.

  • weight

    weight specifies the server's weight; the default value is 1. The higher the weight, the more requests the server is allocated.

    upstream myserver {
    	server 192.168.106.128:8080 weight=10;
    	server 192.168.106.128:8081 weight=5;
    }
  • ip_hash

    Each request is assigned according to the hash of the client IP address, so each visitor always reaches the same back-end server, which can solve the session problem.

    upstream myserver {
        ip_hash;
    	server 192.168.106.128:8080;
    	server 192.168.106.128:8081;
    }
  • fair (third party)

    Requests are allocated according to the response time of the back-end servers, giving priority to those with short response times.

    upstream myserver {
    	server 192.168.106.128:8080;
    	server 192.168.106.128:8081;
        fair;
    }

8. Dynamic and static separation

define

To speed up website parsing, dynamic pages and static pages can be deployed on different servers and parsed by different servers. This speeds up parsing and reduces the load on a single server. This approach is dynamic and static separation.

Strictly speaking, Nginx dynamic and static separation means separating dynamic requests from static requests; it is not merely the physical separation of dynamic pages and static pages.

Case

  • Objective: a visit to http://192.168.106.128/www/a.html serves the file /data/www/a.html on the server; when a dynamic resource is requested, the request is forwarded to Tomcat, i.e. to http://192.168.106.128:8080.

  • Steps:

    • Create the file a.html under /data/www/

      cd / && sudo mkdir data
      cd data && sudo mkdir www
      cd www && sudo vim a.html
      <h1>
          test html!!
      </h1>
    • Modify the nginx configuration file; a sketch of the relevant server block follows:
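
      The configuration screenshot is not reproduced here; a minimal sketch matching this case's description (static files under /data/, dynamic requests forwarded to Tomcat) would be:

      server {
          listen       80;
          server_name  192.168.106.128;

          # static resources: /www/a.html is served from /data/www/a.html
          location /www/ {
              root   /data/;
              index  index.html index.htm;
          }

          # dynamic requests: everything else is forwarded to Tomcat
          location / {
              proxy_pass http://127.0.0.1:8080;
          }
      }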

    • Reload the nginx configuration file

      cd /usr/local/nginx/sbin
      sudo ./nginx -s reload
    • After reloading, you can test it in a browser:

Directives

location /image/ {
    root /data/;
    autoindex on;  ## When enabled, the contents of the directory are listed when it is accessed
}

9. High availability

Definition

Typically, the client accesses the master proxy server, which distributes requests to the target servers. When the master proxy server fails and cannot be accessed, the client accesses the backup proxy server instead, and the backup proxy server forwards the requests to the target servers. This mode is called high availability.

In this mode the proxy servers expose a single virtual IP address, and each proxy server runs Keepalived. Keepalived controls which proxy server the virtual IP actually points to, and the client accesses the virtual IP address exposed by the proxy servers.

Case

  • Purpose: after the master proxy server is stopped, the website can still be accessed normally.

  • Environment:

    • Two servers: 192.168.106.128 and 192.168.106.129
    • Both have nginx installed
    • Both have keepalived installed
  • Steps:

    • Install keepalived

      sudo yum install -y keepalived

      Once installed, you can find the keepalived.conf file in /etc/keepalived/.

    • Configure keepalived on the master server 192.168.106.128: go to the /etc/keepalived/ directory and modify the keepalived.conf configuration file:

      ! Configuration File for keepalived
      
      # Global configuration
      global_defs {
          # Email notification settings
          notification_email {
              # Recipients
              [email protected]
              [email protected]
              [email protected]
          }
          # Sender
          notification_email_from [email protected]
          # SMTP server address
          smtp_server 192.168.200.1
          # SMTP timeout
          smtp_connect_timeout 30
          # Router id; can also be written as the host name of each host
          router_id LVS_DEVEL
          vrrp_skip_check_adv_addr
          # vrrp_strict
          vrrp_garp_interval 0
          vrrp_gna_interval 0
      }
      
      # vrrp_instance defines a virtual router instance; VI_1 is the instance name
      vrrp_instance VI_1 {
          # Type: MASTER (master server) or BACKUP (backup server)
          state MASTER
          # Network interface (NIC) name
          interface ens33
          # Virtual router ID, in the range 0-255
          virtual_router_id 51
          # Priority: the higher the value, the higher the priority; the master must be higher than the backup
          priority 190
          # Advertisement (heartbeat) interval in seconds
          advert_int 1
          # Communication authentication mechanism
          authentication {
              # Authentication type, PASS or AH
              auth_type PASS
              # Authentication password, at most 8 characters
              auth_pass 1111
          }
          # Virtual IP address(es)
          virtual_ipaddress {
              192.168.106.16
          }
      }

      The virtual IP address must be in the same network segment as the servers' IPs, i.e. 192.168.106.xxx in this example; the stock keepalived.conf sets 192.168.200.16, so it has to be changed here.

    • Save and exit the configuration file, then launch Keepalived:

      sudo systemctl start keepalived.service
    • Start nginx:

      cd /usr/local/nginx/sbin
      sudo ./nginx

      After starting, the master server is configured and can be accessed via a browser:

    • Configure keepalived on the backup server 192.168.106.129: go to the /etc/keepalived/ directory and modify keepalived.conf.

      The details of the configuration file are as follows:

      ! Configuration File for keepalived
      
      # Global configuration
      global_defs {
          # Email notification settings
          notification_email {
              # Recipients
              [email protected]
              [email protected]
              [email protected]
          }
          # Sender
          notification_email_from [email protected]
          # SMTP server address
          smtp_server 192.168.200.1
          # SMTP timeout
          smtp_connect_timeout 30
          # Router id; can also be written as the host name of each host
          router_id LVS_DEVEL
          vrrp_skip_check_adv_addr
          # vrrp_strict
          vrrp_garp_interval 0
          vrrp_gna_interval 0
      }
      
      # vrrp_instance defines a virtual router instance; VI_1 is the instance name
      vrrp_instance VI_1 {
          # Type: MASTER (master server) or BACKUP (backup server)
          state BACKUP
          # Network interface (NIC) name
          interface ens33
          # Virtual router ID, in the range 0-255
          virtual_router_id 51
          # Priority: the higher the value, the higher the priority; the master must be higher than the backup
          priority 90
          # Advertisement (heartbeat) interval in seconds
          advert_int 1
          # Communication authentication mechanism
          authentication {
              # Authentication type, PASS or AH
              auth_type PASS
              # Authentication password, at most 8 characters
              auth_pass 1111
          }
          # Virtual IP address(es)
          virtual_ipaddress {
              192.168.106.16
          }
      }
    • Save and exit the configuration file, then launch Keepalived:

      sudo systemctl start keepalived.service
    • Start nginx:

      cd /usr/local/nginx/sbin
      sudo ./nginx

      After starting, the backup server is also configured; verify it in a browser:

  • Testing:

    • Access the virtual IP address 192.168.106.16 for the first time:

    • Stop keepalived and nginx on the master server 192.168.106.128:

      ## Stop keepalived
      sudo systemctl stop keepalived.service
      ## Stop nginx
      cd /usr/local/nginx/sbin
      sudo ./nginx -s stop

    • Access the virtual IP 192.168.106.16 a second time; it can still be reached:

10. Nginx principle

  • Introduction
    • By default, multi-process working mode is adopted.
    • When Nginx starts, it will run a master process and multiple worker processes.
    • Master process:
      • Acts as the interface between the user and the worker processes;
      • Manages the worker processes: service restart, smooth upgrade, log file rotation, configuration reloading and other functions;
    • Worker processes:
      • Handle basic network events;
      • All workers are equal;
      • They compete with each other for client requests;
  • Overall model

  • Benefits of one master with multiple workers:

    • You can use nginx -s reload for hot deployment, i.e. reload the configuration without stopping the service;
    • Each worker is an independent process; if one worker has a problem, the others are not affected and keep competing for requests, so the service is not interrupted.
  • How many workers should be set:

    • Each worker process can make full use of one CPU, so it is best when ==the number of workers equals the number of CPUs on the server==.
  • The number of connections (worker_connections):

    • Each request occupies two or four connections: two when serving static content (between client and nginx), four when reverse proxying (client to nginx plus nginx to the backend).

    • Maximum number of concurrent requests supported:

      • The maximum number of concurrent static accesses is: worker_connections * worker_processes / 2;

      • The maximum number of concurrent connections for HTTP reverse proxying is: worker_connections * worker_processes / 4;

      • If nginx has one master and 4 workers, and each worker supports at most 1024 connections, then the maximum supported concurrency is:

        1024 * 4 / 2 = 2048   # static
        1024 * 4 / 4 = 1024   # HTTP reverse proxy