This is the 27th day of my participation in the Nuggets' August More Text Challenge.

This article introduces the basic concepts and simple usage of Nginx.

1. Basic concepts of Nginx

(1) What is nginx and what can it do

(2) Reverse proxy

(3) Load balancing

(4) Static-dynamic separation

2. Nginx installation, common commands and configuration files

(1) Install nginx on Linux

(2) Nginx common commands

(3) Nginx configuration file

3. Nginx configuration example 1: reverse proxy

4. Nginx configuration example 2: load balancing

5. Nginx configuration example 3: static-dynamic separation

6. Nginx high-availability cluster configuration

7. Nginx principles


Introduction to Nginx

1. What is NGINX

Nginx ("engine x") is a high-performance HTTP and reverse proxy server with low memory consumption and strong concurrency. Nginx was developed with performance as the most important consideration: the implementation is very efficient and can withstand high loads, with reports of support for up to 50,000 concurrent connections.

2. Reverse proxy

A. Forward proxy: configure a proxy server on the client (browser) and access the Internet through that proxy server.

(Figure: forward proxy)

B. Reverse proxy: the client is unaware of the proxy, because no configuration is needed on the client side. The client simply sends its request to the reverse proxy server, which selects a target server, obtains the data, and returns it to the client. Externally, the reverse proxy server and the target servers appear as one server: the proxy server's address is exposed while the real servers' IP addresses are hidden.

3. Load balancing

When a single server is no longer enough, we increase the number of servers and distribute requests across them instead of concentrating them on one machine. Spreading the load over different servers is what we call load balancing.

4. Static-dynamic separation

To speed up site resolution, dynamic pages and static pages can be served by different servers, which accelerates resolution and reduces the load on a single server.

Nginx installation

This section uses CentOS 7 as an example.

1. Use a remote connection tool to connect to CentOS 7.

2. Install the nginx dependencies:

```
gcc
pcre
openssl
zlib
```

① GCC: nginx is built from source, so the gcc compiler is required. Command:

```shell
$ yum install gcc-c++
```

② PCRE (Perl Compatible Regular Expressions) is a Perl-compatible regular expression library. The HTTP module of nginx uses PCRE to parse regular expressions, so the PCRE library must be installed on Linux. pcre-devel is a secondary development library built on PCRE, which nginx also needs. Command:

```shell
$ yum install -y pcre pcre-devel
```

③ The zlib library provides many compression and decompression methods. nginx uses zlib to gzip the contents of HTTP packages, so the zlib library must be installed on CentOS. Command:

```shell
$ yum install -y zlib zlib-devel
```

④ OpenSSL is a powerful secure socket layer cryptographic library that includes the major cryptographic algorithms, common key and certificate management functions, and the SSL protocol, and provides rich applications for testing and other purposes. nginx supports both HTTP and HTTPS, so the OpenSSL library must be installed on CentOS. Command:

```shell
$ yum install -y openssl openssl-devel
```

3. Install Nginx

① Download nginx in two ways

A. Directly download the.tar.gz installation package from nginx.org/en/download…

B. Run the wget command to download the file (recommended). Make sure wget is installed on your system, if not, run yum install wget to install it.

```shell
$ wget https://nginx.org/download/nginx-1.19.0.tar.gz
```

② Extract the archive and enter the directory:

```shell
$ tar -zxvf nginx-1.19.0.tar.gz
$ cd nginx-1.19.0
```

③ Configure:

For nginx-1.19.0 you don't need to configure anything; the defaults are fine. Of course, you can also configure the directories yourself. 1. Use the default settings:

```shell
$ ./configure
```

2. Custom configuration (not recommended)

```shell
$ ./configure \
  --prefix=/usr/local/nginx \
  --conf-path=/usr/local/nginx/conf/nginx.conf \
  --pid-path=/usr/local/nginx/conf/nginx.pid \
  --lock-path=/var/lock/nginx.lock \
  --error-log-path=/var/log/nginx/error.log \
  --http-log-path=/var/log/nginx/access.log \
  --with-http_gzip_static_module \
  --http-client-body-temp-path=/var/temp/nginx/client \
  --http-proxy-temp-path=/var/temp/nginx/proxy \
  --http-fastcgi-temp-path=/var/temp/nginx/fastcgi \
  --http-uwsgi-temp-path=/var/temp/nginx/uwsgi \
  --http-scgi-temp-path=/var/temp/nginx/scgi
```

Note: if you specify the temporary file directory as /var/temp/nginx, you must create the temp and nginx directories under /var first.
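As a sketch, creating that directory tree takes a single command (path taken from the configure options above):

```shell
mkdir -p /var/temp/nginx   # -p creates /var/temp and /var/temp/nginx in one step
```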

④ Compile and install:

```shell
$ make && make install
```

Check the version number. (Before using the nginx command, you must first change to the nginx directory /usr/local/nginx/sbin.)

```shell
$ ./nginx -v
```

Find the installation path:

```shell
$ whereis nginx
```

⑤ Start and stop nginx

```shell
$ cd /usr/local/nginx/sbin/
$ ./nginx            # start
$ ./nginx -s stop    # stop
$ ./nginx -s quit    # graceful stop
$ ./nginx -s reload  # reload the configuration
```

To query the nginx process:

```shell
$ ps aux | grep nginx
```

After a successful start, you can see the nginx welcome page in the browser.

Nginx configuration example 1: reverse proxy

Modify the nginx configuration file, nginx.conf

Nginx configuration example 2: reverse proxy

1. Effect to achieve

Use the nginx reverse proxy to jump to services on different ports based on the access path. nginx listens on port 9001. Visiting http://127.0.0.1:9001/edu/ jumps directly to 127.0.0.1:8080; visiting http://127.0.0.1:9001/vod/ jumps directly to 127.0.0.1:8081.

2. Preparation

(1) Prepare two Tomcat servers, one with port 8080 and one with port 8081

(2) Create folders and test pages.

3. Nginx configuration

```shell
$ vi /usr/local/nginx/conf/nginx.conf
```

(1) Find the nginx configuration file and configure the reverse proxy.

(2) Port 9001 for external access

(3) Restart the nginx server to make the configuration file take effect
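As a sketch, the server block for this example might look as follows (ports and paths taken from this example; treat it as an illustrative configuration, not the article's exact file):

```nginx
server {
    listen       9001;
    server_name  localhost;

    # paths containing /edu/ go to the tomcat on port 8080
    location ~ /edu/ {
        proxy_pass http://127.0.0.1:8080;
    }

    # paths containing /vod/ go to the tomcat on port 8081
    location ~ /vod/ {
        proxy_pass http://127.0.0.1:8081;
    }
}
```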

4. Final test

5. More on the location directive

Description of the location directive, which is used to match URLs. The syntax is as follows:

```
location [ = | ~ | ~* | ^~ ] uri { ... }
```

1. `=`: used before a URI without regular expressions; the request string must match the URI exactly. If the match succeeds, searching stops and the request is processed immediately.
2. `~`: the URI contains a regular expression, and matching is case sensitive.
3. `~*`: the URI contains a regular expression, and matching is case insensitive.
4. `^~`: used before a URI without regular expressions; nginx finds the location whose URI has the longest match with the request string and uses it immediately, without further matching the request string against the regular-expression URIs in other location blocks.

Note: if a URI contains a regular expression, it must be marked with `~` or `~*`.
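A hypothetical illustration of the four modifiers (the paths are invented for the example):

```nginx
location =  /exact.html { root /data/www; }    # exact string match
location ^~ /static/    { root /data; }        # prefix match; skips regex locations if it wins
location ~  \.png$      { root /data/image; }  # regex, case sensitive
location ~* \.jpg$      { root /data/image; }  # regex, case insensitive
```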

Nginx configuration example: load balancing

1. Effect to achieve

(1) Enter the address http://192.168.xxx.xxx/edu/index.html in the browser's address bar; with the load-balancing effect, requests are distributed evenly between ports 8080 and 8081.

2. Preparation

(1) Prepare two Tomcat servers, one 8080 and one 8081

(2) Create an edu folder in the webapps directories of both Tomcat servers. Create a page index.html in the edu folder for testing.

3. Configure load balancing in the nginx configuration file
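A minimal sketch of the load-balancing configuration described above, assuming both tomcats run on the same host (replace 192.168.17.129 with your server's IP):

```nginx
http {
    upstream myserver {
        server 192.168.17.129:8080;
        server 192.168.17.129:8081;
    }

    server {
        listen       80;
        server_name  192.168.17.129;

        location / {
            proxy_pass http://myserver;   # forward to the upstream group
        }
    }
}
```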

4. Effect

Load allocation strategy

Under Linux, services such as Nginx, LVS, and HAProxy provide load balancing, and Nginx itself offers several allocation methods (strategies):

  • 1. Round robin (polling, the default)

    Each request is allocated to the back-end servers one by one in chronological order; if a back-end server goes down, it is automatically removed.

  • 2. weight: the default weight is 1; the higher the weight, the more requests the server receives. This specifies the polling probability, with weight proportional to the access ratio; it is used when back-end server performance is uneven. For example:
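A sketch of a weighted pool, with invented weights (the first server receives roughly ten times as many requests as the second):

```nginx
upstream server_pool {
    server 192.168.5.21:80 weight=10;
    server 192.168.5.22:80 weight=1;
}
```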

  • 3. ip_hash

    Each request is allocated according to the hash of the accessing IP, so each visitor always accesses the same back-end server, which can solve the session problem. For example:

```nginx
upstream server_pool {
    ip_hash;
    server 192.168.5.21:80;
    server 192.168.5.22:80;
}
```
  • 4. fair (third party): requests are allocated according to the response time of the back-end servers, with priority given to those with short response times. For example:

```nginx
upstream server_pool {
    server 192.168.5.21:80;
    server 192.168.5.22:80;
    fair;
}
```

Nginx configuration example 3: static-dynamic separation

1. Overview

Different request forwarding is implemented by matching different suffixes with location. By setting the expires parameter, you can give the browser a cache expiration time, reducing requests and traffic to the server. Specifically, expires sets an expiration time for a resource: the browser can confirm on its own whether the resource has expired, without going back to the server to verify, so no extra traffic is generated. This approach is well suited to resources that rarely change (if a file is updated frequently, expires caching is not recommended). For example, with expires set to 3d, when the URL is accessed within 3 days, the browser sends a request and compares the file's last modification time on the server: if it is unchanged, the file is not fetched again and status code 304 is returned; if it has changed, the file is downloaded from the server again and status code 200 is returned.
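As a sketch, the 3d expiration described above could be configured like this (the location and root are assumptions based on the directories used later in this example):

```nginx
location /image/ {
    root    /data;
    expires 3d;   # let the browser cache these resources for 3 days
}
```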

2. Preparation

(1) Prepare static resources in the Liunx system for access

/data/image: image folder

/data/www: HTML folder

3. Specific configuration

(1) Configure it in the nginx configuration file
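A minimal sketch of the static-separation configuration (the IP and paths are taken from this example; treat it as illustrative):

```nginx
server {
    listen       80;
    server_name  192.168.1.112;

    location /www/ {
        root   /data/;
        index  index.html index.htm;
    }

    location /image/ {
        root      /data/;
        autoindex on;   # list the directory contents in the browser
    }
}
```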

4. Practical testing

```
http://192.168.1.112/www/index.html
http://192.168.1.112/image/1.jpg
```

The directory listing shown above appears because of the autoindex on setting.

Nginx configure high availability cluster

1. What is Nginx high availability

(1) Two nginx servers are required
(2) keepalived is needed
(3) A virtual IP is needed

2. Prepare for configuring high availability

(1) Two servers are needed: 192.168.17.129 and 192.168.17.131. (2) Install nginx on both servers. (3) Install keepalived on both servers.

3. Install Keepalived on both servers using yum command

```shell
$ yum install keepalived
$ rpm -q -a keepalived   # check that keepalived is installed
```

Default installation path: /etc/keepalived

After installation, the directory keepalived is created under /etc, containing the configuration file keepalived.conf.

4. Complete the HA configuration (master/slave configuration)

(1) Modify the keepalived configuration file keepalived.conf as follows:

```
global_defs {
    notification_email {
        [email protected]
        [email protected]
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 192.168.17.129
    smtp_connect_timeout 30
    router_id LVS_DEVEL          # LVS_DEVEL is a host name defined in /etc/hosts
}

vrrp_script chk_http_port {
    script "/usr/local/src/nginx_check.sh"
    interval 2                   # run the check script every 2 seconds
    weight 2                     # if the check succeeds, the server's weight is increased by 2
}

vrrp_instance VI_1 {
    state MASTER                 # change MASTER to BACKUP on the standby server
    interface ens33              # the network interface
    virtual_router_id 51         # must be the same on master and backup
    priority 100                 # higher value on the master, lower on the backup
    advert_int 1                 # heartbeat interval in seconds
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.17.50            # the VRRP virtual IP address
    }
}
```

(2) Create the check script nginx_check.sh in /usr/local/src/

nginx_check.sh

```shell
#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ]; then
    /usr/local/nginx/sbin/nginx
    sleep 2
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then
        killall keepalived
    fi
fi
```

(3) Start nginx and Keepalived on both servers

```shell
$ systemctl start keepalived.service   # start keepalived
$ ps -ef | grep keepalived             # verify it is running
```

5. Final test

(1) Enter the virtual IP address 192.168.17.50 in the address box of the browser

(2) Stop nginx and keepalived on the primary server (192.168.17.129), then access 192.168.17.50 again.

```shell
$ systemctl stop keepalived.service   # stop keepalived
```

7. Nginx principle analysis

1. Master and worker

2. How does worker work

3. Benefits of the one-master, multiple-workers model

(1) Hot deployment is possible with nginx -s reload.

(2) Each worker is an independent process, so no locking is needed, which saves locking overhead and makes programming and troubleshooting much easier. In addition, independent processes cannot affect each other: if one process exits, the others keep working and the service is not interrupted, and the master process quickly starts a new worker. An abnormal worker exit (which always means a bug in the program) causes the failure of only the requests on that worker, not all requests, so the risk is reduced.

4. Set an appropriate number of workers

Like Redis, nginx uses an I/O multiplexing mechanism. Each worker is an independent process, and each process has only one main thread, which handles requests in an asynchronous, non-blocking way, so even thousands of requests are no problem. Each worker thread can make the most of one CPU, so it is best for the number of workers to equal the number of CPUs on the server. Setting it too low wastes CPU; setting it too high incurs losses from frequent context switching.

```
# bind workers to CPUs (4 workers on 4 CPUs)
worker_cpu_affinity 0001 0010 0100 1000
# on an 8-CPU machine, use 8-bit masks
worker_cpu_affinity 00000001 00000010 00000100 00001000
```

5. The worker_connections setting

This is the maximum number of connections per worker process, so the maximum number of connections an nginx server can establish is worker_connections * worker_processes. For an HTTP/1.1 browser, each visit occupies two connections, so the maximum concurrency for ordinary static access is worker_connections * worker_processes / 2. For HTTP as a reverse proxy, the maximum concurrency is worker_connections * worker_processes / 4, because as a reverse proxy server each concurrent request establishes a connection to the client and a connection to the back-end service, occupying two connections.

Question 1: how many connections does a worker occupy to handle a request? Answer: 2 or 4.

Question 2: nginx has one master and four workers, and each worker supports a maximum of 1024 connections; what is the maximum concurrency supported? Answer: for ordinary static access the maximum concurrency is worker_connections * worker_processes / 2, while for HTTP as a reverse proxy the maximum concurrency is worker_connections * worker_processes / 4.
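The arithmetic in the two questions above can be checked with a quick shell sketch:

```shell
# 1 master + 4 workers, worker_connections = 1024
# ordinary static access: each request occupies 2 connections
echo $((4 * 1024 / 2))   # prints 2048
# reverse proxy: each request occupies 4 connections
echo $((4 * 1024 / 4))   # prints 1024
```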