```nginx
# nginx.conf
worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile      on;
    keepalive_timeout  65;
    gzip  on;
    include extra/*.conf;

    server {
        listen       80;
        server_name  localhost;
        charset      koi8-r;
        access_log   logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        # error_page  404  /404.html;

        # redirect server error pages to the static page /50x.html
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

# extra/test.conf:
server {
    listen       443 ssl;
    server_name  aa.bb.com;

    # SSL
    ssl_certificate      ./conf/extra/ssl/ssl.crt/server.crt;
    ssl_certificate_key  ./conf/extra/ssl/ssl.key/server.key;
    ssl_session_cache    shared:SSL:1m;
    ssl_session_timeout  5m;
    ssl_ciphers          HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers  on;

    location ^~ /route00 {
        proxy_ignore_client_abort on;
        proxy_send_timeout    15;
        proxy_connect_timeout 15;
        proxy_read_timeout    15;
        proxy_buffering on;
        proxy_buffer_size 32k;
        proxy_buffers 8 32k;
        proxy_busy_buffers_size 64k;
        proxy_next_upstream http_502 http_504 error timeout invalid_header;
        proxy_pass https://aa.bb.com;
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        break;
    }

    location / {
        proxy_ignore_client_abort on;
        proxy_send_timeout    15;
        proxy_connect_timeout 15;
        proxy_read_timeout    15;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
        proxy_buffering on;
        proxy_buffer_size 32k;
        proxy_buffers 8 32k;
        proxy_busy_buffers_size 64k;
        proxy_next_upstream http_502 http_504 error timeout invalid_header;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $http_x_real_ip;
        proxy_set_header Connection "";
        proxy_pass http://127.0.0.1:8082/;
    }
}

server {
    listen       80;
    server_name  aa.bb.com;
    # force HTTPS
    return 301 https://$host:443$request_uri;
}
```

With this setup, direct access to aa.bb.com is proxied to the local service at http://127.0.0.1:8082/, and a local hosts entry maps 127.0.0.1 to aa.bb.com. The page therefore loads local JS, CSS, and HTML, while its API requests go to the remote IP of the aa.bb.com production environment.

The configuration above is the basic proxy setup I use in my project. Now, back to the main topic.

What is Nginx?

Nginx is a high-performance HTTP server and reverse proxy.
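As a minimal sketch of the reverse-proxy role (the server name, port, and back-end address here are hypothetical placeholders, not part of the setup above):

```nginx
# Minimal reverse-proxy sketch -- example.local and 127.0.0.1:3000
# are made-up placeholders for illustration.
server {
    listen 80;
    server_name example.local;

    location / {
        # Forward every request to a single back-end application server,
        # preserving the original Host header and the client's IP.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The client only ever talks to nginx; the back-end server behind `proxy_pass` stays hidden, which is exactly the property load balancing builds on.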

Upstream configuration for load balancing

Load balancing:

The more requests a server receives per unit of time, the greater the load on it; once the load exceeds the server's capacity, it crashes. To avoid crashes and give users a better experience, load balancing is used to spread the load across multiple servers.

We can set up many servers and group them into a cluster. When a user visits the site, the request first reaches an intermediate server, which picks a less loaded server in the cluster and forwards the request to it. Every visit is handled this way, so the load stays balanced across the cluster, the work is shared among the servers, and no single server is driven to a crash.

Load balancing is implemented on top of the reverse-proxy principle.

```nginx
upstream backserver {
    # weight: relative share of traffic; max_fails/fail_timeout: after
    # max_fails failed attempts, skip this server for fail_timeout.
    server 192.168.1.12:8080 weight=5 max_fails=2 fail_timeout=15s;
    server 192.168.1.13:8080 weight=6 max_fails=2 fail_timeout=15s;
}

location / {
    proxy_pass http://backserver;
    break;
}
```

1. Round robin (default): requests are distributed to the back-end servers one by one, in order.

```nginx
upstream backserver {
    server 192.168.1.12:8080;
    server 192.168.1.13:8080;
}
```

2. weight: specifies the polling probability. The weight is proportional to the share of requests a server receives; use it when the back-end servers have uneven performance.

```nginx
upstream backserver {
    server 192.168.1.12:8080 weight=10;
    server 192.168.1.13:8080 weight=90;
}
```

3. ip_hash: each request is assigned according to a hash of the client's IP address, so a given visitor always reaches the same back-end server. This solves the problem of sessions not being shared across servers.

```nginx
upstream backserver {
    ip_hash;
    server 192.168.1.12:8080;
    server 192.168.1.13:8080;
}
```

4. least_conn: forwards each request to the back-end server with the fewest active connections.

```nginx
upstream backserver {
    least_conn;
    server 192.168.1.12:8080;
    server 192.168.1.13:8080;
}
```

5. fair (third-party module): requires installing a third-party module. Requests are assigned according to the back-end servers' response times, with faster-responding servers served first.

```nginx
upstream backserver {
    server 192.168.1.12:8080;
    server 192.168.1.13:8080;
    fair;  # servers with shorter response times are preferred
}
```

6. url_hash (third-party module): similar to ip_hash, but requests are assigned by a hash of the URL, so the same URL always goes to the same back-end server.

```nginx
upstream backserver {
    server 192.168.1.12:8080;
    server 192.168.1.13:8080;
    hash $request_uri;
    hash_method crc32;  # hash algorithm to use
}
```
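Worth noting: since nginx 1.7.2 the stock upstream module ships a built-in `hash` directive, so the third-party module is no longer required for URL hashing. A sketch using the same back-end addresses as above:

```nginx
upstream backserver {
    # Built-in hashing on the request URI (ngx_http_upstream_module).
    # The "consistent" flag enables ketama consistent hashing, so adding
    # or removing a server remaps only a small fraction of keys.
    hash $request_uri consistent;
    server 192.168.1.12:8080;
    server 192.168.1.13:8080;
}
```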