Nginx implements load balancing with two modules:

The upstream module defines a pool of back-end nodes.

The location module does the URL matching; the proxy module forwards matched requests to the node pool defined by upstream.

Upstream:

Nginx’s load balancing relies on the ngx_http_upstream_module module. The supported proxying directives are proxy_pass (reverse proxying), fastcgi_pass (dynamic application interaction), and memcached_pass, together with proxy_next_upstream, fastcgi_next_upstream, and memcached_next_upstream.

Upstream blocks must be placed inside the http {} block.

Block syntax:

upstream backend {

ip_hash; 

server backend1.example.com       weight=5;
server backend2.example.com:8080;
server backup1.example.com:8080   backup;
server backup2.example.com:8080   backup;

}

Example 1:

upstream dynamic {

    zone upstream_dynamic 64k;

    server backend1.example.com      weight=5;
    server backend2.example.com:8080 fail_timeout=5s slow_start=30s;
    server 192.0.2.1                 max_fails=3;
    server backend3.example.com      resolve;
    server backup1.example.com:8080  backup;
    server backup2.example.com:8080  backup;

}

Syntax explanation:

Nginx supports several scheduling algorithms; round robin, weight, ip_hash, and least_conn are built in.

Round robin (rr), the default: each request is allocated to a back-end server in turn. If a back-end server fails, it is automatically removed from rotation, so user access is not affected.

Weight (weighted round robin): the larger the weight value, the greater the share of requests a server receives; mainly used when back-end servers have uneven performance.

ip_hash: each request is allocated according to a hash of the client IP address, so requests from the same IP always reach the same back-end server; mainly used to solve session sharing on dynamic websites.

url_hash: allocates requests based on a hash of the requested URL, so each URL is always directed to the same back-end server, which improves the hit rate of back-end cache servers. Stock nginx does not ship url_hash as a named algorithm; it requires a third-party module, though the built-in hash directive (nginx 1.7.2+) achieves the same effect.

fair (third-party module): balances requests intelligently based on page size and load time, i.e. according to each back-end server's response time.

least_conn: least connections; each request is sent to the server with the fewest active connections.
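As a sketch (host names are placeholders), each algorithm is selected inside the upstream block; the hash directive shown for URL hashing is the stock equivalent (nginx 1.7.2+) of the third-party url_hash module:

```nginx
# Round robin is the default; weight skews the distribution.
upstream weighted_pool {
    server backend1.example.com weight=5;  # receives ~5x the requests of backend2
    server backend2.example.com;
}

# Least connections: new requests go to the server with the fewest active connections.
upstream least_conn_pool {
    least_conn;
    server backend1.example.com;
    server backend2.example.com;
}

# URL hashing via the generic hash directive; "consistent" minimizes
# remapping when servers are added or removed.
upstream url_hash_pool {
    hash $request_uri consistent;
    server backend1.example.com;
    server backend2.example.com;
}
```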

The server directive is written as:

server address [parameters];

The server directive specifies the IP address (or host name) and port of a back-end server, and can also set the server's status in load-balancing scheduling.

down: the server temporarily does not participate in load balancing.

backup: a reserved backup server; it receives requests only when all non-backup servers are down or busy, so it is the least-loaded machine in the cluster.

max_fails: the number of failed attempts allowed. The default is 1; when the maximum is exceeded, the error defined by the proxy_next_upstream directive is returned. 0 disables the counting of failed attempts. In enterprise scenarios 2-3 is typical: JD.com uses 1 and Lanxun uses 10, configured according to business requirements.

fail_timeout: how long to suspend the server after max_fails failures. JD.com and Lanxun both use 3s, set according to business requirements; 2-3 seconds is reasonable for normal business.

Example: if max_fails is 5, nginx counts 5 attempts; if all five return 502, it waits for the fail_timeout value (say 10 seconds) before trying the server again.

If server points at a domain name, a DNS server must be available on the intranet, or the name must be resolved in the load balancer's hosts file. server can also take an IP address, or an IP address plus port, directly.
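A minimal sketch combining the states described above (the addresses and values are illustrative, not recommendations):

```nginx
upstream app_pool {
    # after 3 failures the server is suspended for fail_timeout (3s)
    server 10.0.0.11:80 weight=2 max_fails=3 fail_timeout=3s;
    server 10.0.0.12:80 down;    # temporarily excluded from scheduling
    server 10.0.0.13:80 backup;  # used only when all non-backup servers fail
}
```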

Long connections (keepalive)

upstream backend {

server backend2.example.com:8080;
server backup1.example.com:8080   backup;
keepalive 100;

}

This directive sets the maximum number of idle keepalive connections each worker process caches to the upstream servers.

When that number is exceeded, the least recently used connection is closed. The keepalive directive does not limit the total number of connections between worker processes and upstream servers.

location / {

    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_pass http://backend;

}

If the upstream speaks HTTP/1.0, the configuration must instead send a "Connection: keep-alive" header.

Do not forget to enable long connection support for upstream servers.

Connection pool configuration suggestions

The total number of long connections is the sum of those in the idle pool and those currently in use.

First, the keepalive setting does not limit the total number of connections a worker process may open to upstream servers (any excess is simply treated as short connections). The pool must therefore be sized appropriately for the scenario.

If the idle connection pool is too small, connections run out and new ones must be established continuously.

If the idle connection pool is too large, too many idle connections time out before they are ever reused.

It is recommended to enable long connections only for small request/response payloads.

The location module explained

location sets configuration for a request based on its URI.

Basic syntax:

Syntax: location [ = | ~ | ~* | ^~ ] uri { … }

location @name { … }

Default: –

Context: server, location

= exact match; if a match is found, the search stops and the request is handled immediately (highest priority)

~ case-sensitive regular-expression match

~* case-insensitive regular-expression match

^~ prefix match only, no regular expression; if it is the best prefix match, regex locations are not checked

@ defines a named location, generally used for internal request redirection; location @name { ... } blocks are matched by priority, not by their order in the nginx configuration file

Official examples:

location = / {

[ configuration A ]

}

location / {

[ configuration B ]

}

location /documents/ {

[ configuration C ]

}

location ^~ /images/ {

[ configuration D ]

}

location ~* \.(gif|jpg|jpeg)$ {

[ configuration E ]

}

Conclusion:

/ matches A.

/index.html matches B.

/documents/document.html matches C.

/images/1.gif matches D.

/documents/1.jpg matches E.

Test examples:

    location / {
        return 401;
    }
    location = / {
        return 402;
    }
    location /documents/ {
        return 403;
    }
    location ^~ /images/ {
        return 404;
    }
    location ~* \.(gif|jpg|jpeg)$ {
        return 500;
    }

Test results (focus) :

[root@lb01 conf]# curl -I -s -o /dev/null -w "%{http_code}\n" http://10.0.0.7/
402

[root@lb01 conf]# curl -I -s -o /dev/null -w "%{http_code}\n" http://10.0.0.7/index.html
401

[root@lb01 conf]# curl -I -s -o /dev/null -w "%{http_code}\n" http://10.0.0.7/documents/document.html
403

[root@lb01 conf]# curl -I -s -o /dev/null -w "%{http_code}\n" http://10.0.0.7/images/1.gif
404

[root@lb01 conf]# curl -I -s -o /dev/null -w "%{http_code}\n" http://10.0.0.7/dddd/1.gif
500

Summary of results:

Priority order of matches: = (exact) > ^~ (prefix match, regexes skipped) > ~ and ~* (regular expressions, checked in configuration order) > plain prefix > /.

In practice, try to put = matches first.

The proxy_pass module explained

The proxy_pass directive belongs to the ngx_http_proxy_module module, which forwards requests to another server.

Writing:

proxy_pass http://localhost:8000/uri/;

Example 1:

upstream blog_real_servers {
    server 10.0.0.3:80  weight=5;
    server 10.0.0.10:80 weight=10;
    server 10.0.0.19:82 weight=15;
}
server {
    listen 80;
    server_name blog.etiantian.org;
    location / {
        proxy_pass http://blog_real_servers;
        proxy_set_header Host $host;
    }
}

proxy_set_header Host: if multiple virtual hosts are configured on the back-end web server, this header tells the back end which virtual host the request was for: proxy_set_header Host $host;

proxy_set_header X-Forwarded-For: if a program on the back-end web server needs the client's IP address, it reads it from this header: proxy_set_header X-Forwarded-For $remote_addr;
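Putting both headers together in one location block (the upstream name is borrowed from the example above):

```nginx
location / {
    proxy_pass http://blog_real_servers;
    proxy_set_header Host $host;                    # pass the requested virtual host
    proxy_set_header X-Forwarded-For $remote_addr;  # pass the real client IP
}
```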

Configure the back-end server to receive real front-end IP addresses

The configuration is as follows:

log_format commonlog '$remote_addr - $remote_user [$time_local] "$request" '
                     '$status $body_bytes_sent "$http_referer" '
                     '"$http_user_agent" "$http_x_forwarded_for"';

httpd.conf configuration on the rs_apache node

LogFormat "\"%{X-Forwarded-For}i\" %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

(the modified "combined" format; the plain "common" format is)

LogFormat "\"%{X-Forwarded-For}i\" %l %u %t \"%r\" %>s %b" common

Proxy_pass optimization parameters

client_max_body_size 10m; The maximum request body size (for example, a file upload) a client may send.

client_body_buffer_size 128k; The buffer size for the client request body; the body is buffered locally before being passed on.

proxy_connect_timeout 600; Timeout for establishing a connection (the handshake) with the back-end server.

proxy_read_timeout 600; After the connection succeeds, how long to wait for the back-end server's response; in effect, how long the request may queue in the back end.

proxy_send_timeout 600; The time allowed for transmitting data to the back-end server; the back end must receive all data within this time.

proxy_buffer_size 8k; The buffer for the first part of the upstream response, which normally contains just the response headers; it only needs to be large enough to hold them.

proxy_buffers 4 32k; The number and size of buffers for the rest of the upstream response; assumes the average page stays under 32k.

proxy_busy_buffers_size 64k; The buffer space usable while the response is still being sent to the client; the official recommendation is proxy_buffers * 2.

proxy_max_temp_file_size 1024m; If proxy_buffers cannot hold the back-end response, part of it is saved to a temporary file on disk; this sets the maximum temporary file size (default 1024 MB, unrelated to proxy_cache). Set it to 0 to pass anything larger straight through from the upstream server.

proxy_temp_file_write_size 64k; The amount of data written to the temporary file at a time; proxy_temp_path (settable at compile time) specifies which directory to write to.
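Collected into one reusable fragment (the values are the ones quoted above, not universal recommendations):

```nginx
client_max_body_size       10m;
client_body_buffer_size    128k;
proxy_connect_timeout      600;
proxy_read_timeout         600;
proxy_send_timeout         600;
proxy_buffer_size          8k;
proxy_buffers              4 32k;
proxy_busy_buffers_size    64k;
proxy_max_temp_file_size   1024m;
proxy_temp_file_write_size 64k;
```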

Health check

Nginx provides a health_check directive for active health checking of upstream servers (note: the directive must be set in a location context, and it is part of the commercial NGINX Plus distribution).

The following parameters are supported:

interval=time: the interval between two health checks; default 5 seconds.

fails=number: the number of consecutive failed checks after which the server is considered unhealthy; default 1.

passes=number: the number of consecutive passed checks after which the server is considered healthy again; default 1.

uri=uri: the request URI used for the health check; default /.

match=name: the name of a match block used to test whether the response passes the health check; by default a 2xx or 3xx status code passes.

A simple setup is as follows, which will use the default values:

location / {

proxy_pass http://backend;
health_check;

}

For our applications, we can define a dedicated health-check API, /api/health_check, that simply returns HTTP status code 200, and set the interval between two checks to 1 second. The health_check directive then becomes:

health_check uri=/api/health_check interval=1;
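A sketch combining several parameters (NGINX Plus only; the fails/passes thresholds are illustrative assumptions):

```nginx
location / {
    proxy_pass http://backend;
    # unhealthy after 3 consecutive failures, healthy again after 2 passes
    health_check uri=/api/health_check interval=1 fails=3 passes=2;
}
```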

Match method

http {

    server {
        ...
        location / {
            proxy_pass http://backend;
            health_check match=welcome;
        }
    }
    match welcome {
        status 200;
        header Content-Type = text/html;
        body ~ "Welcome to nginx!";
    }

}

Match examples

status 200; : status equals 200.

status !500; : status is not 500.

status 200 204; : status is 200 or 204.

status !301 302; : status is neither 301 nor 302.

status 200-399; : status is between 200 and 399.

status !400-599; : status is not between 400 and 599.

status 301-303 307; : status is 301, 302, 303, or 307.

header Content-Type = text/html; : Content-Type is exactly text/html.

header Content-Type != text/html; : Content-Type is not text/html.

header Connection ~ close; : Connection matches the regex close.

header Connection !~ close; : Connection does not match close.

header Host; : the response header contains Host.

header !X-Accel-Redirect; : the response header does not contain X-Accel-Redirect.

body ~ "Welcome to nginx!"; : the body matches "Welcome to nginx!".

body !~ "Welcome to nginx!"; : the body does not match "Welcome to nginx!".

A complete instance of nginx

[root@lb01 conf]# cat nginx.conf

worker_processes 1;

events {

worker_connections  1024;

}

http {

    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;
    #blog lb by oldboy at 201303
    upstream blog_real_servers {
        server 10.0.0.9:80  weight=1 max_fails=1 fail_timeout=10s;
        server 10.0.0.10:80 weight=1 max_fails=2 fail_timeout=20s;
    }
    server {
        listen 80;
        server_name blog.etiantian.org;
        location / {
            proxy_pass http://blog_real_servers;
            include proxy.conf;
        }
    }

}

[root@lb01 conf]# cat proxy.conf

    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_connect_timeout 90;        
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffer_size 4k;
    proxy_buffers 4 32k;
    proxy_busy_buffers_size 64k;
    proxy_temp_file_write_size 64k;

Extension

Allow only the GET, HEAD, and POST request methods

Any other method is rejected:

if ($request_method !~ ^(GET|HEAD|POST)$) {
    return 444;
}
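An alternative worth knowing is the built-in limit_except directive, which allows the listed methods (GET implicitly allows HEAD) and applies the inner rules to everything else:

```nginx
location / {
    limit_except GET POST {
        deny all;  # any other method is refused
    }
}
```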

In practice

Implement static/dynamic separation based on the URI with location blocks.

The result: URLs under /static/ are served by 10.0.0.9.

URLs under /dynamic/ are served by 10.0.0.10.

Requests for static image files are served by 10.0.0.9.

URLs under /upload/ are served by 10.0.0.10.

[root@lb01 conf]# cat nginx.conf

worker_processes 1;

events {

worker_connections  1024;
Copy the code

}

http {

    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout 65;
    #blog lb by oldboy at 201303
    upstream static_pools {
        server 10.0.0.3:80;
    }
    upstream dynamic_pools {
        server 10.0.0.10:80;
    }
    upstream upload_pools {
        server 10.0.0.1:80;
    }
    server {
        listen 80;
        server_name blog.biglittleant.cn;
        location / {
            proxy_pass http://static_pools;
            include proxy.conf;
        }
        location /static/ {
            proxy_pass http://static_pools;
            include proxy.conf;
        }
        location ~* \.(gif|jpg|jpeg)$ {
            proxy_pass http://static_pools;
            include proxy.conf;
        }
        location /dynamic/ {
            proxy_pass http://dynamic_pools;
            include proxy.conf;
        }
        location /upload/ {
            proxy_pass http://upload_pools;
            include proxy.conf;
        }
    }
}