Load Balancing

Before introducing Nginx's load balancing implementation, let's briefly look at the categories of load balancing. It is mainly divided into hardware load balancing and software load balancing. Hardware load balancing uses dedicated hardware and software; the equipment vendors provide complete, mature solutions such as F5, which are very reliable in terms of stability and data security, but considerably more expensive than software. Software load balancing is a request distribution mechanism implemented mainly in software such as Nginx.

Simply put, load balancing means splitting up a large number of requests and distributing them across different servers. For example, suppose I have 3 servers, A, B and C, and use Nginx for load balancing with the round-robin strategy. If 9 requests are received, they are distributed evenly to servers A, B and C, and each server processes 3 requests. In this way we can take advantage of a multi-machine cluster to reduce the load on any single server.

Load balancing for Nginx

Load Balancing Policy

NGINX open source supports four load balancing methods, and NGINX Plus adds two more.

1.Round Robin: requests are distributed to the servers in turn. This is the default method.

Nginx.conf configuration example:

upstream xuwujing {
   server www.panchengming.com;
   server www.panchengming2.com;
}

Note: The domain names above can also be replaced with IP addresses.
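If one server should handle a larger share of the traffic, a weight can be attached to its entry. Below is a minimal sketch under that assumption (the weight value is only illustrative); with round robin, a server with weight=2 receives roughly twice as many requests as one with the default weight of 1:

upstream xuwujing {
    # weight defaults to 1 when omitted
    server www.panchengming.com weight=2;
    server www.panchengming2.com;
}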

2.Least Connections: requests are sent to the server with the least number of active connections, again with server weights taken into consideration.

Nginx.conf configuration example:

upstream xuwujing {
    least_conn;
    server www.panchengming.com;
    server www.panchengming2.com;
}

3.IP Hash: the server to which a request is sent is determined by the client IP address. In this case, either the first three octets of the IPv4 address or the whole IPv6 address is used to calculate the hash value. This method guarantees that requests from the same address reach the same server, unless that server is unavailable.

upstream xuwujing {
     ip_hash;
     server www.panchengming.com;
     server www.panchengming2.com;
}
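If one of the servers needs to be taken out of rotation temporarily while preserving the current hashing of client addresses, it can be marked with the down parameter; requests that would have gone to it are then passed to the next server in the group. A minimal sketch:

upstream xuwujing {
    ip_hash;
    server www.panchengming.com;
    # temporarily removed from rotation without disturbing the hash mapping of other clients
    server www.panchengming2.com down;
}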

4.Generic Hash: the server to which a request is sent is determined by a user-defined key, which can be a text string, a variable, or a combination of the two.

 upstream xuwujing {
     hash $request_uri consistent;
     server www.panchengming.com;
     server www.panchengming2.com;
 }
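The optional consistent parameter shown above enables ketama consistent hashing, which keeps most keys mapped to the same server when servers are added to or removed from the group. As a sketch of a combined key (the key below is purely illustrative), literal text and variables can be mixed:

upstream xuwujing {
    # hash on a literal prefix combined with the client address (illustrative key)
    hash "pc-$remote_addr" consistent;
    server www.panchengming.com;
    server www.panchengming2.com;
}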

5.Least Time (NGINX Plus only): for each request, NGINX Plus selects the server with the lowest average latency and the lowest number of active connections, where the lowest average latency is calculated based on which of the following parameters is included with the least_time directive:

  • header: the time to receive the first byte from the server.
  • last_byte: the time to receive the full response from the server.
  • last_byte inflight: the time to receive the full response, taking incomplete requests into account.

upstream xuwujing {
    least_time header;
    server www.panchengming.com;
    server www.panchengming2.com;
}

6.Random: each request is passed to a randomly selected server. If the two parameter is specified, NGINX first randomly selects two servers taking into account server weights, and then chooses one of them using the specified method:

  • least_conn: the least number of active connections;
  • least_time=header (NGINX Plus): the least average time to receive the response header from the server ($upstream_header_time);
  • least_time=last_byte (NGINX Plus): the least average time to receive the full response from the server ($upstream_response_time).

upstream xuwujing {
    random two least_time=last_byte;
    server www.panchengming.com;
    server www.panchengming2.com;
}

Nginx+SpringBoot load balancing

Environment preparation

  • JDK 1.8 or higher;
  • An Nginx environment;

Nginx configuration

Open the nginx/conf/nginx.conf configuration file and add the following upstream configuration:

upstream pancm {
    server 127.0.0.1:8085;
    server 127.0.0.1:8086;
}

  • upstream pancm: defines an upstream group; the name can be anything you like;
  • server: an IP address and port, or a domain name.

If you don’t want to use the default Round Robin strategy, you can switch to one of the other strategies described above. Individual servers can also carry extra parameters, as in the sketch below.
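A minimal sketch of per-server parameters, with values chosen purely for illustration: max_fails and fail_timeout mark a server as unavailable for a while after repeated failed attempts, and backup designates a server that only receives requests when the primary servers are down (the 8087 service is hypothetical):

upstream pancm {
    server 127.0.0.1:8085 max_fails=3 fail_timeout=30s;
    server 127.0.0.1:8086 max_fails=3 fail_timeout=30s;
    # server 127.0.0.1:8087 backup;   # hypothetical standby, used only if the servers above fail
}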

Then add or modify the following configuration in the server block:

server {
    listen 80;
    server_name 127.0.0.1;
    location / {
        root html;
        proxy_pass http://pancm;
        proxy_connect_timeout 3s;
        proxy_read_timeout 5s;
        proxy_send_timeout 3s;
        index index.html index.htm;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}

Configuration description:

  • server: defines a virtual host; one HTTP server can be configured with multiple server blocks;
  • listen: the port Nginx listens on (80 is the default);
  • server_name: the address of the Nginx service; a domain name can be used, and multiple domain names are separated by spaces;
  • proxy_pass: the proxy target; it is usually set to the name defined by upstream to get load balancing, but it can also be set directly to an IP address and port, as in the sketch below.
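A minimal sketch of both options for proxy_pass (the direct address is one of the local services used earlier and is shown only for illustration):

location / {
    # load balanced across the servers in the pancm upstream group
    proxy_pass http://pancm;
    # or, bypassing load balancing, proxy to a single backend directly:
    # proxy_pass http://127.0.0.1:8085;
}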

Complete nginx.conf configuration:

events {
    worker_connections 1024;
}
error_log nginx-error.log info;
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;

    upstream pancm {
        server 127.0.0.1:8085;
        server 127.0.0.1:8086;
    }

    server {
        listen 80;
        server_name 127.0.0.1;
        location / {
            root html;
            proxy_pass http://pancm;
            proxy_connect_timeout 3s;
            proxy_read_timeout 5s;
            proxy_send_timeout 3s;
            index index.html index.htm;
        }
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

Load Balancing Test

After the Nginx configuration is complete, we start Nginx. On Linux, run /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf; if Nginx is already running, the configuration file can be hot-reloaded with /usr/local/nginx/sbin/nginx -s reload. On Windows, double-click nginx.exe in the Nginx directory, or run start nginx from CMD; if it is already running, you can likewise hot-reload with nginx -s reload. After Nginx has started, we start the Spring Boot project we downloaded earlier and the copy with the changed port, in sequence, by running java -jar springboot-jap-thymeleaf.jar. Once both have started successfully, we can access the service by entering its IP in the browser. Figure:

Note: I tested on Windows here; on an actual Linux system it works the same way. Then we exercise the application and watch the console logs!

From the sample figure above, we made four refresh requests to the interface, and they were split evenly between the two services, so the test results show that load balancing has been achieved. One note on using Nginx: when learning and testing, there is no problem using Nginx's default port 80 for load balancing. However, when a project uses a port other than 80, especially one with a login page, the login page cannot redirect correctly, and a net::ERR_NAME_NOT_RESOLVED error shows up in the browser's debugger. The solution is to add proxy_set_header Host $host:port; under location, where port is the same port specified by listen, as in the sketch below.
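A minimal sketch of that fix, assuming Nginx listens on port 8080 (the port number is only an example and should match your listen directive):

server {
    listen 8080;
    server_name 127.0.0.1;
    location / {
        proxy_pass http://pancm;
        # pass the original host together with the listen port so redirects (e.g. after login) resolve correctly
        proxy_set_header Host $host:8080;
    }
}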