In most people's impression, Nginx belongs to the back end: the front end just needs to write good pages. But as the scope of the "big front end" keeps expanding, front-end developers are steadily moving into server territory, and Nginx is one of the essential skills for getting there. In the past we had to humbly ask our back-end colleagues to configure page domain names for us; once you have learned Nginx, domain name configuration, proxy forwarding and the like can be done entirely on your own. Doesn't that smell good?

This article is published on the official account [front reading]; follow the account for more content.

Introduction

What is Nginx? First, let's take a look at Baidu Encyclopedia's definition of Nginx:

Nginx is a high-performance HTTP and reverse proxy Web server that also provides IMAP/POP3/SMTP services.

There are three key phrases to unpack: high performance, reverse proxy, and web server. The web server part needs little explanation; familiar examples include Apache, IIS, and Tomcat. Next is high performance: a server's performance is naturally what website developers care about most, so how do we measure it? It is generally gauged by CPU and memory consumption. In the author's simple concurrency test, at 20,000 concurrent connections CPU and memory usage stayed very low: CPU around 5% and memory under 2MB.

You can run a simple stress test on Nginx with Apache Bench (ab), a web stress-testing tool. The command line ab -n 20000 -c 10000 [url] sends 20,000 total requests to the Nginx home page at a concurrency of 10,000.
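In concrete form (the localhost URL here is only illustrative):

# 20,000 total requests (-n) at a concurrency of 10,000 (-c)
ab -n 20000 -c 10000 http://localhost/

The test results are as follows: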

In the output, the total Time taken for tests was 25 seconds and the mean Time per request was 1.25 milliseconds, which is good response performance under such high concurrency.

Then there is the reverse proxy; its opposite is the forward proxy, and the difference between the two is a common interview question. Let's first look at what a forward proxy is. The most typical example of a forward proxy is a "ladder" (a circumvention proxy).

If we try to reach Google directly, we cannot; but if we go through a proxy server, we can browse Google by accessing that proxy. The proxy server here is a forward proxy: through it we can reach resources that were previously inaccessible.

So what is a reverse proxy? The most typical example of a reverse proxy is our Nginx server. When we visit a website, the proxy server fetches the data from the target server and returns it to the client. This hides the real server's IP address: only the proxy server is exposed to the outside world, protecting intranet servers from malicious attacks from the Internet.

After understanding the above two typical cases, I believe you have also understood the forward and reverse proxy. Let’s summarize:

  • A forward proxy proxies the client; the server does not know which client actually initiated the request.
  • A reverse proxy proxies the server; the client does not know which server actually provides the service.

Installation and configuration

Nginx installers come in Linux and Windows versions. The Windows version can be run directly after downloading and unzipping, while the Linux version is compiled and installed with configure, make, and other commands; the advantage is that different modules can be compiled into Nginx flexibly and conveniently. There are plenty of installation tutorials online, so they won't be repeated here; download the version that suits you from the official website. After downloading, let's look at the directory structure:

├── conf            # all configuration files
│   ├── nginx.conf  # main configuration file
│   └── mime.types  # media type control file
├── contrib         # some utilities
├── docs            # documentation
├── html            # default directory for static files
├── logs            # log directory
└── sbin            # the startup program (nginx binary)

We mostly use the conf and html directories. From the root directory, some common commands control how Nginx runs:

nginx -s reopen     # restart Nginx
nginx -s reload     # reload the configuration file and restart Nginx gracefully
nginx -s stop       # stop the Nginx service forcibly
nginx -s quit       # stop the Nginx service gracefully (after processing outstanding requests)
nginx -h            # show help information
nginx -v            # show version information and exit
nginx -V            # show version and configure options, then exit
nginx -t            # test the configuration file for syntax errors and exit
nginx -T            # test the configuration file, dump it, and exit
nginx -q            # suppress non-error messages during configuration testing
nginx -p prefix     # set the prefix path (default: /usr/share/nginx/)
nginx -c filename   # specify the configuration file (e.g. /etc/nginx/nginx.conf)
nginx -g directives # set global directives outside the configuration file
killall nginx       # kill all nginx processes

Looking at the first four commands, they fall into two pairs, restart and stop, each with a forced and a graceful variant. The forced way makes Nginx stop processing all requests immediately, drop the connections, and stop working. The graceful way lets Nginx finish the requests it is currently handling; it accepts no new requests, and stops working once everything in flight has completed.

Let’s take a look at the basic structure of the main configuration file nginx.conf:

# number of worker processes; recommended to equal the number of CPU cores
worker_processes  1;
# process id file
pid  logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    # media types
    include       mime.types;
    # default file type
    default_type  application/octet-stream;

    gzip  on;
    sendfile  on;
    keepalive_timeout  65;

    server {
        # listening port
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }
    }
}

The configuration file can be divided into the following blocks:

  • Global block: from the start of the configuration file up to the events block; its settings affect the overall running of the Nginx server, such as the number of worker processes and the location of the error log.
  • events: settings that affect the network connections between the Nginx server and users.
  • http: can nest multiple server blocks; configures proxies, caches, log definitions, most other features, and third-party modules.
  • server: configures parameters for a virtual host; one http block can contain multiple server blocks.
  • location: configures request routing and the handling of various pages.

Most of the time we don't write every configuration in one main file: that gets tedious, and it becomes hard to tell what each block does. Instead, the configuration is split into multiple files by project, each independent of the others, and then included in the main file. Create a projects directory under the conf directory, where you can create multiple .conf files:

# /conf/projects/home.conf
server {
    listen       8080;
    server_name  localhost;
    location / {
        root   html;
        index  index.html index.htm;
    }
}
server {
    listen       8081;
    server_name  localhost;
    location / {
        root   html;
        index  index.html index.htm;
    }
}

Then import all configuration files from the projects directory in the main configuration nginx.conf:

http {
    include       mime.types;
    default_type  application/octet-stream;
    gzip  on;
    sendfile  on;
    keepalive_timeout  65;
    # import all .conf files under projects/
    include  projects/*.conf;
}

This way you can add a .conf file to the projects directory without touching the main configuration file. Since we can't be sure a change is error-free, we can check the configuration with the following command:

nginx -t
#nginx: the configuration file nginx/conf/nginx.conf syntax is ok
#nginx: configuration file nginx/conf/nginx.conf test is successful

After detecting that there are no errors, you can gracefully restart the server:

nginx -s reload

Static server

As a web server, the most important job is serving static resources. Our Nginx server can host static resources such as JS, CSS, and images: when a specific static resource path is requested, the request is mapped to a file in a local directory. Let's walk through how Nginx matches a request step by step, via domain name configuration, URI configuration, and directory configuration.

The server_name configuration

In the configuration above we set server_name to localhost, which we can only visit from the local machine. To make the site reachable by other hosts, we can match server_name against a domain name. Its value can take one of the following forms:

  • An exact domain name, such as www.my.com
  • A wildcard name; the wildcard * may only appear at the beginning or the end of a three-part name, such as *.my.com or www.my.*
  • A regular expression, using the tilde ~ as the opening marker of the regex string, e.g. ~^www\d+\.my\.com$
  • An IP address

In the regular expression above, ^ means the name begins with www, followed by one or more digits (\d+), then the domain my.com, with $ marking the end. So this expression matches a domain name such as www1.my.com, but not www.my.com.

Regular expressions also support capturing: part of the matched name is captured into a variable for later use. For example, set server_name as follows:

server {
  listen       80;
  server_name  ~^(.+)?\.my\.com$;
  location / {
    root   /usr/share/nginx/html/$1;
    index  index.html index.htm;
  }
}

Thus, when a request arrives at Nginx via the secondary domain home.my.com, it is captured by the server_name regular expression and the string home is stored in the $1 variable; the static resources under /usr/share/nginx/html/home can then be accessed through the home.my.com domain. Our server directory might look like this:

/usr/share/nginx/html/
    |- home
        |- index.html
    |- blog
        |- index.html
    |- mail
        |- index.html
    |- photo
        |- index.html

This requires only one server block to complete the configuration of multiple sites.

Nginx also allows a virtual host to have multiple domain names, so we can give server_name several domains separated by spaces:

server {
    listen       80;
    server_name  a.com b.com c.com;
    # ... other configuration
}

Since server_name supports several forms, multiple server blocks may match the same domain name. Which server receives the request is decided by the following priority order (illustrated by the sketch after the list):

  1. Match server_name exactly
  2. The wildcard matches server_name at the beginning
  3. The wildcard matches server_name at the end
  4. The regular expression matches server_name
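As a minimal sketch of this priority order (the names and ports are illustrative), a request carrying Host: www.my.com lands in the first block below, because an exact name beats both the wildcard and the regular expression:

server {
  listen       80;
  server_name  www.my.com;            # 1. exact name
  return 200 "exact";
}
server {
  listen       80;
  server_name  *.my.com;              # 2. wildcard at the beginning
  return 200 "wildcard";
}
server {
  listen       80;
  server_name  ~^www\d*\.my\.com$;    # 4. regular expression
  return 200 "regex";
}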

If we want devices on the LAN to access nginx, we can set server_name to the IP address:

server {
  listen       80;
  server_name  localhost 192.168.1.101;
}
Copy the code

If other devices still cannot reach the server, check whether the firewall is the cause: select Nginx among the applications allowed through the firewall, or, if Nginx is not in the list, add it via "Allow another app".

Sometimes we'll see server_name set to _ (underscore), which stands for an empty, catch-all name matching every host. We can point a.com, b.com, and c.com at this machine in the hosts file, then configure Nginx:

server {
    listen       80;
    server_name  _;
    location / {
        root   html;
        index  index.html index.htm;
    }
}

The site can then be reached not only through the domains a.com, b.com, and c.com, but also directly by IP.

The location configuration

Location is used to match different URI requests. Its syntax is as follows:

location [ = | ~ | ~* | ^~ ] uri { ... }
location @name { ... }

Here the uri is the request string to match. It can be a plain, non-regex string such as /home, called a standard URI; or a string containing a regular expression, such as \.html$ (matching URIs ending in .html), called a regular URI. The optional modifiers in square brackets change how the request string is matched against the uri. Let's look at what each of them means:

  • (no modifier): the location is followed directly by a standard URI and performs a prefix match, i.e. the request URI is matched from its beginning
  • =: used with a standard URI; requires the request string to match exactly; once it succeeds, the request is processed immediately and Nginx stops searching for other matches
  • ^~: used before a standard URI; if this is the best prefix match, it is processed immediately and regular expressions are no longer checked; generally used to match directories
  • ~: used before a regular URI, indicating the URI contains a regular expression, case-sensitive
  • ~*: used before a regular URI, indicating the URI contains a regular expression, case-insensitive
  • @: defines a named location, usually used for internal redirects

Let's look at which URLs each matching rule can match. First, with no modifier, the location performs a prefix match. Suppose we have several similar prefix matches:

location /pre/fix {
  # ...
}
location /pre {
  # ...
}

For a request to /pre/fix/home, the first location is matched, following the longest-prefix-match principle.

Then =, which requires the path to match exactly:

location = /abc {
  # ...
}
# /abc       matches
# /abcde     does not match
# /abc/      does not match
# /cde/abc   does not match

Next is ^~, the preferential prefix match, which takes precedence over regular expressions:

location ^~ /login {
  # ...
}
# /login        matches
# /loginss      matches
# /login/       matches
# /home/login   does not match

Then comes ~, regular-expression matching, which is case-sensitive (note: the Windows version of Nginx does not distinguish case):

location ~ \.(gif|jpg|png|js|css)$ {
  # ...
}
# /bg.png       matches
# /bg.PNG       does not match (case-sensitive)
# /bg.png?a=1   matches
# /bg.jpeg      does not match

~* is also a regular-expression match, but case-insensitive; a quick sketch of the same pattern as above:
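location ~* \.(gif|jpg|png|js|css)$ {
  # ...
}
# /bg.png   matches
# /bg.PNG   matches as well, since ~* ignores case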

If a URI matches multiple locations, they are not simply tried in the order they appear in the configuration file. The URI is matched in the following order (see the sketch after the list):

  1. = exact matches are processed first; once an exact match is found, Nginx stops searching for other matches.
  2. Plain prefix matches are evaluated next, but the longest match wins and regular expressions can still override it: even when a prefix matches, Nginx checks whether a longer prefix or a matching regular expression exists.
  3. If the longest prefix match is marked ^~, that rule is used and Nginx stops searching for other matches; otherwise Nginx continues processing other location directives.
  4. Finally, ~ and ~* regular expressions are checked in the order they appear; at the first match, Nginx stops searching. When no regular expression exists or none matches, the longest prefix match recorded earlier is used.
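Putting the order together in one minimal sketch (the port and paths are illustrative): for a request to /images/logo.png, the exact match wins if present; otherwise the ^~ prefix is used and the regex is skipped:

server {
  listen 8090;
  # 1. exact match: handled first; the search stops here
  location = /images/logo.png { return 200 "exact"; }
  # 3. ^~ prefix: as the longest matching prefix, it is used and regexes are skipped
  location ^~ /images/ { return 200 "prefix"; }
  # 4. regex: consulted only when no ^~ prefix match applies
  location ~ \.png$ { return 200 "regex"; }
}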

Request directory configuration

After a location matches the URI, the requested resource must be located in a directory on the server. root and alias are the two directives used to specify that directory; the main difference between them is how the path after the location prefix is resolved. Let's look at root first. Suppose we want to map all paths under /data/ into html/roottest:

location /data/ {
  root html/roottest;
}

When the location receives a request for /data/index.html, it looks for the file html/roottest/data/index.html and responds with it: root concatenates the root path with the full location path.

The alias directive instead replaces the location prefix in the request path. If we want to map all paths under /data1/ into html/aliastest:

location /data1/ {
  alias html/aliastest/;
}

When the location receives a request for /data1/index.html, it looks for the index.html file in the html/aliastest/ directory and responds with it.

Note that the path after the alias directive must end with /, otherwise the file will not be found; for root, the trailing / is optional.
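To summarize the difference, here is a side-by-side sketch of the two examples above, with the resolved path in the comments:

location /data/ {
  root html/roottest;      # /data/index.html -> html/roottest/data/index.html
}
location /data1/ {
  alias html/aliastest/;   # /data1/index.html -> html/aliastest/index.html
}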

Access control

For some static resources we may want to restrict user access: for example, the .map source-map files produced when packaging JS, which map back to the source code. We'd like them to be available only from the company's IP and closed to outside IPs; for this we use the allow and deny directives.

Suppose there are two other devices on the LAN; we can allow only their IPs to access:

location / {
  alias html/aliastest/;
  allow 192.168.1.102;
  allow 192.168.1.103;
  deny all;
}

The deny and allow directives are provided by the ngx_http_access_module module, which is not included in the Windows version of Nginx.

Back to the front-end .map files: packaged source maps are usually deployed to the server, but if they are open to everyone, anyone can recover the source code. So we restrict them so that only the company's IP has access:
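A minimal sketch of such a rule; the IP below is an illustrative stand-in for the company's egress address:

location ~ \.map$ {
  root html;
  # only the company's IP may fetch source maps
  allow 103.21.0.100;
  deny all;
}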

try_files

Front-end routers often use the history mode, so the server needs to map the corresponding routes to index.html. Configuring a location for each route would be tedious, so we resolve them with the try_files directive instead. Its syntax is as follows:

# Format 1:
try_files file ... uri;
# Format 2:
try_files file ... =code;

Suppose our packaged single-page app lives at /html/my/index.html and we want routes such as /login and /register to resolve to index.html; we can configure try_files like this:

server {
    listen       8080;
    server_name  localhost;
    location / {
        try_files $uri /my/index.html;
    }
}

For multi-page applications, say the pages live in the html/pages/ directory; to make a visit to /login respond with the html/pages/login.html page, we can use $uri:

server {
    listen       8080;
    server_name  localhost;
    location / {
        index  index.html index.htm;
        root   html/pages;
        try_files $uri $uri.html $uri/index.html /index.html;
    }
}

Here we set the root directory to html/pages. When we visit the /login route, $uri is /login, so try_files first tries /login itself, then /login.html under the root, then /login/index.html, and finally falls back to /index.html.

gzip

We all know that enabling gzip compression on the server greatly speeds up the transfer of JS, CSS, HTML, and similar files and improves site performance; gzipped files can be 30% of their original size or less. For other media such as images, video, and audio, compression is usually not enabled because it is not very effective on them.

gzip compression happens on the server side; the browser decompresses and parses the response after transfer. The exchange works roughly like this: the browser advertises gzip support in a request header, and the server labels the compressed response with a matching response header.

The Accept-Encoding request header and the Content-Encoding response header carry this negotiation; you can observe both on any JS request in the browser's devtools.

Given all these benefits, let's look at how to configure gzip in Nginx. It can be configured in an http or server block:

# enable gzip compression
gzip on;
# number and size of compression buffers
gzip_buffers 32 4k;
# compression level (1-9); higher levels compress more but cost more CPU
gzip_comp_level 6;
# disable gzip for old IE versions
gzip_disable "MSIE [1-6]\.(?!.*SV1)";
# minimum response length (in bytes) to compress
gzip_min_length 1024;
# minimum HTTP version for gzip compression
gzip_http_version 1.0;
# MIME types to compress
gzip_types application/javascript text/css text/xml;
# add the "Vary: Accept-Encoding" response header
gzip_vary on;
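To verify that compression is actually on, request a text asset with an Accept-Encoding header and look for Content-Encoding: gzip in the response headers; a quick check with curl (the URL is illustrative):

# -D - prints the response headers of a real GET request
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" http://localhost/index.css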

Password control

For some simple pages, we may want to restrict other users' access with a password without building a full account system; Nginx provides simple username/password control. First, generate a password file with the htpasswd tool on Linux:

sudo yum install httpd-tools -y
sudo htpasswd -c passwd/passwd admin

Here passwd/passwd is the generated password file. When the command runs, you are prompted to enter the password twice, and the entry for the admin user is added. We then modify the Nginx configuration to enable password authentication for the site:

server {
    listen       8000;
    server_name  localhost;
    # prompt shown in the authentication dialog
    auth_basic "Please enter your account and password";
    auth_basic_user_file /etc/nginx/conf/passwd/passwd;
    location / {
        # ...
    }
}

Restart Nginx, and the next time you visit the site, you will see a pop-up that requires authentication.
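You can also verify from the command line; a quick check, where yourpassword stands in for whatever you entered in htpasswd:

# -u sends HTTP Basic credentials; without them Nginx answers 401
curl -u admin:yourpassword http://localhost:8000/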

Reverse proxy

The difference between forward and reverse proxies was described above. Reverse proxying is one of Nginx's three major functions (static web serving, reverse proxying, and load balancing). It requires no additional modules: the proxy_pass and fastcgi_pass directives come built in and can be configured in a location block:

server {
    listen       80;
    server_name  a.com;
    location / {
        proxy_pass http://192.168.1.102:8080;
    }
}

When configuring proxy_pass, we need to pay attention to the trailing / of the proxied URL. Suppose we access /proxy/home.html in each of the following cases:

location /proxy/ {
    proxy_pass http://192.168.1.102:8080/;
}

In the first case, with the trailing /, the request is proxied to http://192.168.1.102:8080/home.html.

location /proxy/ {
    proxy_pass http://192.168.1.102:8080;
}

In the second case, without the trailing /, the request is proxied to http://192.168.1.102:8080/proxy/home.html.

location /proxy/ {
    proxy_pass http://192.168.1.102:8080/doc/;
}

In the third case, with /doc/, the request is proxied to http://192.168.1.102:8080/doc/home.html.

location /proxy/ {
    proxy_pass http://192.168.1.102:8080/doc;
}

In the fourth case, with /doc and no trailing /, the request is proxied to http://192.168.1.102:8080/dochome.html.

When configuring the reverse proxy, we can also modify the headers and parameters of the proxied request:

location /proxy/ {
    proxy_pass http://192.168.1.102:8080/;
    # request method of the proxied request
    proxy_method GET;
    # HTTP version used for the proxied request
    proxy_http_version 1.1;
    # pass the original host to the backend
    proxy_set_header Host $host;
    # pass the client's real IP to the backend
    proxy_set_header X-Real-IP $remote_addr;
    # append the client IP to the X-Forwarded-For chain
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_redirect off;
}

With a reverse proxy, an extra hop sits between the client and the web server, so the web server can no longer see the client's original host and real IP. The proxy_set_header directive rewrites the headers of the proxied request: $host and $remote_addr hold the user's real host and IP, which we forward in the Host and X-Real-IP headers; the backend can then read them from the request (for instance via request.getHeader("X-Real-IP") in a Java servlet).
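On the back end, the forwarded values can be read straight from the request headers. A minimal sketch with Express, matching the Node examples later in this article; the /whoami route is purely illustrative:

const express = require("express");
const app = express();
app.get("/whoami", (req, res) => {
  // these headers are set by the proxy_set_header directives above
  res.json({
    host: req.headers["host"],
    realIp: req.headers["x-real-ip"],
    forwardedFor: req.headers["x-forwarded-for"],
  });
});
app.listen(8080);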

Load balancing

As the Internet develops and user bases grow, the pressure on servers keeps increasing. A single server sometimes cannot bear the traffic, so we spread part of it across multiple servers, letting each one share the load.

Nginx supports six load-balancing strategies: polling (round-robin), weighted polling, ip_hash, url_hash, fair, and sticky. They fall into two groups: built-in strategies (polling, weighted polling, ip_hash) and extension strategies (url_hash, fair, sticky). Built-in strategies are compiled into Nginx by default, while extension strategies require additional installation.

Since we're balancing load, we need multiple servers running. For the demonstration we run a Node script on one machine to simulate three servers, and to see how much traffic each server receives, we count every visit:

const express = require("express");
const path = require("path");
const app = express();
const PORT = 8080;
let count = 0; // number of requests this instance has served
app.get("*", (req, res) => {
  count++;
  res.sendFile(path.resolve(__dirname, "./index.html"));
});
app.listen(PORT);

Then we change the port number so that we have servers on 8080, 8081, and 8082.
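Rather than editing the file three times, a small variation is to read the port from an environment variable, so the same script (assume it is saved as server.js) can be launched once per port:

// in the script above, replace the PORT constant with:
const PORT = process.env.PORT || 8080;
// then launch three instances:
//   PORT=8080 node server.js
//   PORT=8081 node server.js
//   PORT=8082 node server.js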

Polling strategy

The polling strategy, as the name implies, assigns requests to the server nodes one by one in order; if a server fails, it is automatically removed.

upstream myserver {
    server 192.168.1.101:8080;
    server 192.168.1.101:8081;
    server 192.168.1.101:8082;
}
server {
    listen       8070;
    server_name  _;
    location / {
        proxy_pass http://myserver;
    }
}

We use Apache Bench to send 100 requests to Nginx at a concurrency of 10:

ab -n 100 -c 10 http://localhost:8070/

Counting each server's hits afterwards, the requests are spread very evenly:

8080: 34 requests
8081: 33 requests
8082: 33 requests

Weighted polling strategy

Weighted polling adds a weight to the basic polling strategy: each server node is polled in proportion to its configured weight. It is mainly used when server nodes have uneven performance.

The weight is set with the weight parameter after the server node and is proportional to the share of traffic it receives (the default weight is 1). Let's give the three servers an access ratio of 1:3:2:

upstream myserver {
    server 192.168.1.101:8080;
    server 192.168.1.101:8081 weight=3;
    server 192.168.1.101:8082 weight=2;
}

After the stress test, the distribution of requests is almost exactly the ratio we configured:

8080: 16 requests
8081: 51 requests
8082: 33 requests

Note: Since weight is built-in, it can be used directly with other strategies.

Ip_hash strategy

The ip_hash strategy hashes the client's IP address and assigns requests to nodes according to the result, so each IP always reaches a fixed service node. The benefit is that a user's session lives on a single back-end node, so there is no need to share sessions across multiple server nodes.

upstream myserver {
    ip_hash;
    server 192.168.1.101:8080;
    server 192.168.1.101:8081 weight=3;
    server 192.168.1.101:8082 weight=2;
}

After the stress test, we counted the results and found that all requests from our IP went to one fixed server:

8080: 0 requests
8081: 0 requests
8082: 100 requests

Url_hash strategy

The url_hash strategy hashes the URL and routes the same URL to the same server node according to the result. Its advantage is improving the hit rate of back-end cache servers.

upstream myserver {
    hash $request_uri;
    server 192.168.1.101:8080;
    server 192.168.1.101:8081;
    server 192.168.1.101:8082;
}

The request counts after the stress test:

8080: 0 requests
8081: 0 requests
8082: 100 requests

If we request different URLs, such as /home and /list, they are assigned to different server nodes.

Fair strategy

The fair strategy forwards requests to the back-end node with the least load. Nginx judges load by response time: the node with the shortest response time is the least loaded, so Nginx forwards front-end requests there.

Note: the fair strategy is not compiled into the Nginx kernel by default and requires additional installation.

upstream myserver {
    fair;
    server 192.168.1.101:8080;
    server 192.168.1.101:8081;
    server 192.168.1.101:8082;
}

The request counts after the stress test:

8080: 33 requests
8081: 33 requests
8082: 34 requests

Sticky strategy

The sticky strategy is a cookie-based load-balancing solution: by issuing and recognizing a cookie, requests from the same client land on the same server. The default cookie is named route.

The sticky strategy looks similar to ip_hash, but there is a difference. Suppose three computers on a LAN have three internal IP addresses but share one external IP when making requests: with ip_hash, Nginx assigns all their requests to the same server, whereas with sticky each client's requests can be assigned to a different server, which ip_hash cannot do.

Note: the sticky strategy is not compiled into the Nginx kernel by default and requires additional installation.

upstream myserver {
    sticky name=sticky_cookie expires=6h;
    server 192.168.1.101:8080;
    server 192.168.1.101:8081;
    server 192.168.1.101:8082;
}

The default cookie name for sticky is route; we can change it with the name parameter, and several other cookie parameters can also be set:

  • [name=route] sets the name of the cookie used to record the session
  • [domain=.foo.bar] sets the cookie's domain
  • [path=/] sets the cookie's URL path; defaults to the root directory
  • [expires=1h] sets the cookie's lifetime; if not set, the cookie expires when the browser closes
  • [hash=index|md5|sha1] sets whether the server identifier in the cookie is stored in plain form (index) or hashed with md5/sha1; the default is md5
  • [no_fallback] if set, Nginx returns 502 (Bad Gateway or Proxy Error) when the sticky back-end machine is down, instead of forwarding to another server; not recommended
  • [secure] enables secure cookies; requires HTTPS
  • [httponly] makes the cookie HttpOnly so it cannot be read through JS; rarely used

Accessing through the browser, we can see the sticky cookie among the cookies.

Note: since cookies are issued by the server, the strategy does not take effect if the client disables cookies.

Other parameters

Upstream also has some parameters we can use for load balancing:

  • fail_timeout: used together with max_fails
  • max_fails: the maximum number of failures allowed within the window set by fail_timeout; if every request to the server fails within that window, the server is considered down (default 1)
  • fail_time: how long the server is considered down; 10 seconds by default
  • backup: marks the server as a backup; requests are sent to it only when the primary servers stop
  • down: marks the server as permanently down
  • keepalive: the maximum number of idle keepalive connections to upstream servers kept open by each worker process; when this number is exceeded, the least recently used connections are closed
upstream backserver {
    # marked permanently down
    server 192.168.1.101:8080 down;
    server 192.168.1.101:8081;
    # allow up to 3 failures (default 1); after that, take the server out for 30s
    server 192.168.1.101:8082 max_fails=3 fail_timeout=30s;
    # backup server: used only when the other servers are down
    server 192.168.1.101:8083 backup;
    # keep at most 16 idle keepalive connections per worker process
    keepalive 16;
}

So, have you learned Nginx? For more front-end content, follow the official account [front reading].

If you found this helpful, please follow my Juejin (Nuggets) homepage; for more articles, visit Xie Xiaofei's blog.