Install Nginx

  1. Download Nginx (version of your choice)
wget http://nginx.org/download/nginx-1.16.1.tar.gz

2. Unzip

tar -xvf nginx-1.16.1.tar.gz

3. Download and install the dependencies needed for compilation

1. Nginx relies on the GCC compilation environment, so install it first so that Nginx can be compiled:

   yum install -y gcc-c++

2. The Nginx HTTP module uses pcre to parse regular expressions:

   yum install -y pcre pcre-devel

3. Install the zlib compression dependency:

   yum install -y zlib zlib-devel

4. Install the OpenSSL dependency:

   yum install -y openssl openssl-devel

4. With the dependencies installed, configure and compile Nginx

cd nginx-1.16.1
./configure
make && sudo make install

The default installation path is /usr/local/nginx
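After installation, the directory typically contains four subdirectories on a default build:

ls /usr/local/nginx
# conf  html  logs  sbin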

5. Enter /usr/local/nginx/sbin to start nginx

./nginx            # start
./nginx -s reload  # reload
./nginx -s stop    # stop
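Before reloading, you can syntax-check the configuration first:

./nginx -t  # test the configuration file for correct syntax, then exit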
  • Configure nginx to start automatically at boot

vim /etc/rc.d/rc.local
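Append the nginx binary to the end of that file so it is launched at boot; a minimal sketch, assuming the default install path (on CentOS 7, rc.local must also be executable):

# line to append at the end of /etc/rc.d/rc.local
/usr/local/nginx/sbin/nginx

# make rc.local executable so it runs at boot
chmod +x /etc/rc.d/rc.local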

6. Visit the server's IP address to verify the installation

If the welcome page does not appear, the firewall may be blocking port 80 on your server; check with the following command:

firewall-cmd --query-port=80/tcp

If the command prints no, port 80 is not enabled, so let's open port 80 and try again.

The --permanent parameter makes the rule take effect permanently, surviving firewall restarts.

firewall-cmd --add-port=80/tcp --permanent
# restart firewalld so the rule takes effect
systemctl restart firewalld
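You can then re-run the query to confirm, and fetch the page to test (the IP below is the example server used later in this article):

firewall-cmd --query-port=80/tcp  # should now print: yes
curl http://192.168.37.128/       # should return the nginx welcome page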

Nginx configuration

  1. Nginx is one of the most popular web servers in the world.
  • Set up two servers, deploy the project on both, and let Nginx do the load balancing. I set up server 128 (192.168.37.128) and server 129 (192.168.37.129).

2. Configure nginx.conf to implement load balancing

user root;
worker_processes 1;

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    #log_format main '$remote_addr - $remote_user [$time_local] "$request" '
    #                '$status $body_bytes_sent "$http_referer" '
    #                '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log logs/access.log main;

    sendfile on;
    #tcp_nopush on;

    #keepalive_timeout 0;
    keepalive_timeout 65;

    #gzip on;

    server {
        listen 80;
        server_name localhost;

        #charset koi8-r;
        #access_log logs/host.access.log main;

        location / {
            root html;
            index index.html index.htm;
        }

        location /index {
            proxy_pass http://index-server/;
            access_log logs/access.log;
        }

        #error_page 404 /404.html;

        # redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root html;
        #    fastcgi_pass 127.0.0.1:9000;
        #    fastcgi_index index.php;
        #    fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name;
        #    include fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen 8000;
    #    listen somename:8080;
    #    server_name somename alias another.alias;
    #    location / {
    #        root html;
    #        index index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen 443 ssl;
    #    server_name localhost;
    #    ssl_certificate cert.pem;
    #    ssl_certificate_key cert.key;
    #    ssl_session_cache shared:SSL:1m;
    #    ssl_session_timeout 5m;
    #    ssl_ciphers HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers on;
    #    location / {
    #        root html;
    #        index index.html index.htm;
    #    }
    #}

    # load-balanced server group referenced by proxy_pass above
    upstream index-server {
        server 192.168.37.128:8080 weight=2;
        server 192.168.37.129:8081 weight=1;
    }
}
  • Load balancing uses the upstream module to define the list of back-end servers from which one is selected to accept a user request. A basic upstream module looks like this; the server entries inside the module form the server list:
    upstream index-server {
        server 192.168.37.128:8080 weight=1;
        server 192.168.37.129:8081 weight=1;
    }
  • After the upstream module is configured, point the reverse proxy for the chosen path at that server list:
	location /index {
            proxy_pass http://index-server/;
			access_log 	logs/access.log;	
        }

3. Test: In this configuration, the weight is the same, so the two servers will be accessed alternately

  • Tomcat on server 129 runs on port 8081. If the two servers ran Tomcat on the same port, you would configure the same port number for both entries
    upstream index-server {
        server 192.168.37.128:8080 weight=1;
        server 192.168.37.129:8081 weight=1;
    }
  • Access this path: http://192.168.37.128/index/test/index
  • Nginx first decomposes the path; the prefix http://192.168.37.128/index matches the location block below:
	location /index {
            proxy_pass http://index-server/;
			access_log 	logs/access.log;	
        }
  • That location is matched first; proxy_pass resolves http://index-server/ to the upstream group configured above, so after path concatenation the actual paths accessed in alternation are:
http://192.168.37.128:8080/test/index
http://192.168.37.129:8081/test/index
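You can watch the alternation from a shell; a quick sketch, assuming the test page is deployed on both Tomcats:

# request the page several times; responses should alternate between the two backends
for i in 1 2 3 4; do curl -s http://192.168.37.128/index/test/index; done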

This is the most basic example of load balancing, but it is not enough to meet real needs; the upstream module of the Nginx server currently supports six allocation methods:

This section describes each Nginx load balancing policy in detail.

  1. polling

Polling is the most basic configuration method and the default load balancing policy of the upstream module: each request is assigned to a different back-end server in chronological order.

The following server parameters are available (all of them appear in the sketch after this list):

  • fail_timeout: used in combination with max_fails.
  • max_fails: the maximum number of failures allowed within the period set by fail_timeout. If every request to the server fails within that period, the server is considered down.
  • fail_time: how long the server is considered down; the default is 10 seconds.
  • backup: marks this server as a backup server. Requests are sent to it only when the primary servers are down.
  • down: marks the server as permanently down.
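A minimal sketch combining these parameters; the server addresses reuse the dynamic_zuoyu examples shown later:

    upstream dynamic_zuoyu {
        server localhost:8080;                               # tomcat 7.0
        server localhost:8081 max_fails=3 fail_timeout=20s;  # tomcat 8.0
        server localhost:8082 backup;                        # tomcat 8.5
        server localhost:8083 down;                          # tomcat 9.0
    }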

Note:

In polling, if a server goes down, it is automatically removed from rotation. Polling is the default policy; it is suitable for stateless, short-lived services on servers with identical configurations.
  2. weight

Weight mode specifies the polling probability on top of the polling strategy. Example:

    upstream dynamic_zuoyu {
        server localhost:8080 weight=2;                      # tomcat 7.0
        server localhost:8081;                               # tomcat 8.0
        server localhost:8082 backup;                        # tomcat 8.5
        server localhost:8083 max_fails=3 fail_timeout=20s;  # tomcat 9.0
    }

In this example, the weight parameter specifies the polling probability. The default value of weight is 1, and the value is proportional to the access ratio; here, Tomcat 7.0 is accessed twice as often as the other servers.

Note:

The higher the weight, the more requests the server is allocated. This policy can be used in combination with least_conn and ip_hash, and it is suitable when the servers' hardware configurations differ.
  3. ip_hash

Specifies that the load balancer distributes requests by client IP address. This ensures that requests from the same client are always sent to the same server, keeping the session sticky: each visitor is pinned to one back-end server, which solves the problem of sessions not being shared across servers.

    # dynamic server group
    upstream dynamic_zuoyu {
        ip_hash;
        server localhost:8080 weight=2;                      # tomcat 7.0
        server localhost:8081;                               # tomcat 8.0
        server localhost:8082;                               # tomcat 8.5
        server localhost:8083 max_fails=3 fail_timeout=20s;  # tomcat 9.0
    }

Note:

Prior to nginx version 1.3.1, weight could not be used with ip_hash. ip_hash cannot be used together with backup. This strategy is suitable for stateful services, such as those using sessions. If a server needs to be removed, it must be removed manually.
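According to the nginx documentation, a server in an ip_hash group should be taken out of rotation with the down parameter rather than by deleting its line, so the client-IP hash mapping of the remaining servers is preserved; a sketch:

    upstream dynamic_zuoyu {
        ip_hash;
        server localhost:8080 down;  # temporarily out of rotation; hash mapping preserved
        server localhost:8081;
        server localhost:8082;
        server localhost:8083;
    }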
  4. least_conn

Forwards the request to the back-end server with the fewest active connections. The polling algorithm forwards requests evenly so that each back end carries roughly the same load, but some requests take much longer and leave their back end under higher load. In such cases, least_conn achieves better load balancing.

    upstream dynamic_zuoyu {
        least_conn;  # pick the server with the fewest connections
        server localhost:8080 weight=2;                      # tomcat 7.0
        server localhost:8081;                               # tomcat 8.0
        server localhost:8082 backup;                        # tomcat 8.5
        server localhost:8083 max_fails=3 fail_timeout=20s;  # tomcat 9.0
    }

Note:

This load balancing strategy is suitable when varying request processing times leave the servers unevenly loaded.
  5. Third-party policies

Third-party load balancing policies require third-party plug-ins to be installed.

(1) fair

Requests are allocated according to the back-end servers' response times, with priority given to servers that respond fastest.

    upstream dynamic_zuoyu {
        server localhost:8080;  # tomcat 7.0
        server localhost:8081;  # tomcat 8.0
        server localhost:8082;  # tomcat 8.5
        server localhost:8083;  # tomcat 9.0
        fair;  # prefer servers with short response times
    }

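fair is not built into Nginx, so the module has to be compiled in; a minimal sketch, assuming the third-party nginx-upstream-fair module source has been downloaded next to the Nginx source tree:

# from the nginx source directory
./configure --add-module=../nginx-upstream-fair
make && sudo make install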

(2) url_hash

Allocates requests according to a hash of the accessed URL, so that each URL is always directed to the same back-end server; this works best in combination with caching. Otherwise, multiple requests for the same resource may land on different servers, causing unnecessary repeated downloads and a poor cache hit ratio. With url_hash, the same URL (that is, the same resource request) always reaches the same server, and once the resource is cached, subsequent requests can be served from the cache.

    upstream dynamic_zuoyu {
        hash $request_uri;      # hash on the requested URI
        server localhost:8080;  # tomcat 7.0
        server localhost:8081;  # tomcat 8.0
        server localhost:8082;  # tomcat 8.5
        server localhost:8083;  # tomcat 9.0
    }

In addition to the above scheduling policies, there are several third-party scheduling policies that can be integrated into Nginx.

In actual application, different policies need to be selected according to different scenarios. Most policies are used in combination to achieve the required performance.