Preface

As a front-end developer, you often get asked to go onto the server and modify the Nginx configuration, and in the past you could beg off with the excuse "I'm a front-end developer, I don't know how to do that". Today, let's say goodbye to that embarrassment and move on to being a "real" programmer! If this article helps you, give it a 👍 👍 👍.

Nginx overview





Nginx is an open-source, high-performance, highly reliable web and reverse proxy server. It supports hot deployment and can run almost 24/7, even for months without a reboot, and its software version can be hot-updated without interrupting service. Performance is the most important consideration for Nginx: it has a low memory footprint and strong concurrency capability, supporting up to 50,000 concurrent connections. Most importantly, Nginx is free, can be used commercially, and is simple to configure and use.

Nginx characteristics

  • High concurrency and high performance;
  • A modular architecture that makes it highly extensible;
  • An asynchronous, non-blocking, event-driven model, similar to Node.js;
  • Compared to other servers, it can run for months or longer without a restart, making it highly reliable;
  • Hot deployment and smooth upgrades;
  • Fully open source, with a thriving ecosystem;

Nginx role

The most important use scenarios for Nginx are:

  1. Serving static resources from the local file system;
  2. Reverse proxy services, including caching, load balancing, etc.;
  3. API services, such as OpenResty;


Front-end developers are no strangers to Node.js, and Nginx and Node.js share many ideas: HTTP server, event-driven, asynchronous non-blocking, and so on. Most of Nginx's functions could also be implemented with Node.js, but Nginx and Node.js do not conflict; each has its own area of expertise. Nginx is good at handling underlying server resources (serving static resources, forward and reverse proxying, load balancing, etc.), while Node.js is better at handling the upper layer of concrete business logic. The two can be combined perfectly.
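To make this pairing concrete, here is a minimal sketch in which Nginx serves the static front-end build itself and reverse-proxies business-logic requests to a Node.js process (the domain, the paths, and port 3000 are hypothetical placeholders):

server {
    listen 80;
    server_name example.com;

    # Nginx serves the static front-end assets directly
    location / {
        root  /var/www/app;
        index index.html;
    }

    # business-logic requests are forwarded to the Node.js service
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
    }
}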






Nginx installation

This article demonstrates how to install Nginx on the Linux CentOS 7.x operating system. For installing Nginx on other operating systems, you can search online. Install Nginx using yum:

yum install nginx -y

After the installation is complete, run the rpm -ql nginx command to view the Nginx installation information:

# Nginx configuration files
/etc/nginx/nginx.conf # nginx main configuration file
/etc/nginx/nginx.conf.default

# Executable program files
/usr/bin/nginx-upgrade
/usr/sbin/nginx

# nginx library files
/usr/lib/systemd/system/nginx.service # used to configure the system daemon
/usr/lib64/nginx/modules # Nginx module directory

# Help documentation
/usr/share/doc/nginx-1.16.1
/usr/share/doc/nginx-1.16.1/CHANGES
/usr/share/doc/nginx-1.16.1/README
/usr/share/doc/nginx-1.16.1/README.dynamic
/usr/share/doc/nginx-1.16.1/UPGRADE-NOTES-1.6-to-1.10

# Static resource directory
/usr/share/nginx/html/404.html
/usr/share/nginx/html/50x.html
/usr/share/nginx/html/index.html

# Nginx log directory
/var/log/nginx

There are two folders to focus on:

  1. /etc/nginx/conf.d/ is the folder for sub-configuration items; the main configuration file /etc/nginx/nginx.conf imports all sub-configuration files in this folder by default (a sketch follows this list);
  2. /usr/share/nginx/html/ is where static files are placed by default, though you can put them elsewhere if you prefer;
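As an illustration, a hypothetical sub-configuration file dropped into that folder could look like the following; the include directive in the main configuration picks it up automatically:

# /etc/nginx/conf.d/static-site.conf (hypothetical example)
server {
    listen      8000;
    server_name localhost;

    location / {
        root  /usr/share/nginx/html;
        index index.html;
    }
}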

Nginx common commands

systemctl commands:

# Boot configuration
systemctl enable nginx   # start automatically on boot
systemctl disable nginx  # do not start automatically on boot

# Start Nginx
systemctl start nginx    # once Nginx starts successfully, you can visit the host IP directly and see the default Nginx page

# Stop Nginx
systemctl stop nginx

# Restart Nginx
systemctl restart nginx

# Reload Nginx
systemctl reload nginx

# Check the Nginx running status
systemctl status nginx

# Check the Nginx processes
ps -ef | grep nginx

# Kill an Nginx process
kill -9 pid   # -9 means forcibly terminate the process with the given pid

Nginx's own commands:

nginx -s reload  # send a signal to the master process to reload the configuration file (hot restart)
nginx -s reopen  # reopen the log files
nginx -s stop    # fast shutdown
nginx -s quit    # shut down after the worker processes finish processing
nginx -T         # print the current final Nginx configuration
nginx -t         # check whether the configuration is correct

Nginx core configuration

Configuration file structure

A typical Nginx configuration example:

# Main section configuration information
user  nginx;                        # user that runs the worker processes; default nginx; optional
worker_processes  auto;             # number of Nginx processes, usually set to the number of CPU cores
error_log  /var/log/nginx/error.log warn;   # Nginx error log location
pid        /var/run/nginx.pid;      # location of the pid file written when the Nginx service starts

# Events section configuration information
events {
    use epoll;     # use the epoll I/O model (if you don't know which polling method to use, Nginx automatically selects the one that works best for your operating system)
    worker_connections 1024;   # maximum number of concurrent connections per worker process
}

# HTTP section configuration information
# This is where most features such as proxying, caching, log definitions, and third-party module configuration are set
http {
    # Log format
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;   # Nginx access log location

    sendfile            on;   # enable efficient file transfer mode
    tcp_nopush          on;   # reduce the number of network segments sent
    tcp_nodelay         on;
    keepalive_timeout   65;   # connection keep-alive time, also called the timeout, in seconds
    types_hash_max_size 2048;

    include             /etc/nginx/mime.types;      # file extension to MIME type mapping table
    default_type        application/octet-stream;   # default file type

    include /etc/nginx/conf.d/*.conf;   # load sub-configuration items

    # Server section configuration information
    server {
    	listen       80;           # listening port
    	server_name  localhost;    # configured domain name

    	# Location section configuration information
    	location / {
    		root   /usr/share/nginx/html;  # site root directory
    		index  index.html index.htm;   # default index files
    		deny   172.168.22.11;          # IP address forbidden to access (can be "all")
    		allow  172.168.33.44;          # IP address allowed to access
    	}

    	error_page 500 502 503 504 /50x.html;  # default page served for 50x errors
    	error_page 400 404 error.html;         # same as above, for 400/404 errors
    }
}
  • main: global configuration, effective globally;
  • events: configuration that affects network connections between the Nginx server and users;
  • http: configures most features such as proxying, caching, and log definitions, as well as third-party modules;
  • server: configures the parameters of a virtual host; one http block can contain multiple server blocks;
  • location: used to configure matching of request URIs;
  • upstream: configures the addresses of backend servers, an indispensable part of load balancing;



Core parameters of the main section of the configuration file

user

Specifies the user and group that run the Nginx worker processes; the group may be omitted.

user USERNAME [GROUP]

user nginx lion; # user is nginx, group is lion

pid

Specifies the path for storing the PID file of the Nginx Master process.

pid /opt/nginx/logs/nginx.pid # the master process pid is stored in the nginx.pid file

worker_rlimit_nofile

Specifies the maximum number of file handles that worker child processes can open.

worker_rlimit_nofile 20480; # can be understood as the maximum number of connections per worker process

worker_rlimit_core

Specifies the core file written after a worker process terminates abnormally, used for recording and analyzing problems.

worker_rlimit_core 50M;           # size limit of core files
working_directory /opt/nginx/tmp; # directory where they are stored

worker_processes

Specifies the number of worker child processes that Nginx starts.

worker_processes 4; # Specifies the number of child processes
worker_processes auto; # Consistent with the number of physical cores in the current CPU

worker_cpu_affinity

Binds each worker process to a specific physical CPU core.

worker_cpu_affinity 0001 0010 0100 1000; # 4 physical cores, 4 worker child processes






Binding each worker process to a specific CPU core avoids the same worker being switched between different CPU cores, which would invalidate caches and degrade performance. It does not, however, truly prevent process switching.
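If you would rather not write the bitmasks by hand, newer Nginx versions (1.9.10+) can compute them automatically; this is an optional shortcut, not something the bitmask example above requires:

worker_cpu_affinity auto; # let Nginx bind each worker process to an available CPU automatically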

worker_priority

Specifies the nice value of the worker processes, adjusting the priority with which Nginx runs; it is usually set to a negative value so that Nginx is scheduled earlier.

worker_priority -10; # 120 - 10 = 110; 110 is the final priority

The default Linux process priority value is 120; the smaller the value, the higher the priority. nice ranges from -20 to +19. [Note] An application's final priority is the default 120 plus its nice value; the smaller the result, the higher the priority.
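To verify the effect of worker_priority, you can inspect the nice value of the running Nginx processes; the NI column in the output shows it:

ps -eo pid,ni,comm | grep nginx # NI is the nice value of each Nginx process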

worker_shutdown_timeout

Specifies the timeout period for graceful exit of worker child processes.

worker_shutdown_timeout 5s;

timer_resolution

The precision of the timer used inside worker processes. The larger the interval, the fewer system calls are made, which benefits performance; conversely, a smaller interval means more system calls and lower performance.

timer_resolution 100ms;

On Linux systems, the user needs to send a request to the operating system kernel to get the timer, and there is an overhead associated with the request, so the larger the interval, the lower the overhead.

daemon

Specifies how Nginx will run, foreground or background, foreground for debugging and background for production.

daemon off; # Default is on, in background mode

Core parameters of the events section of the configuration file

use

Specifies which event-driven model Nginx uses.

use method; # configuring this is not recommended; let Nginx choose
            # method can be: select, poll, kqueue, epoll, /dev/poll, eventport

worker_connections

The maximum number of concurrent connections each worker process can handle.

worker_connections 1024; # maximum of 1024 connections per worker process

accept_mutex

Whether to enable the load balancing mutex.

accept_mutex on; # off by default; enabling it is recommended

The server_name directive

Specify the domain name of the virtual host.

server_name name1 name2 name3;

# Example:
server_name www.nginx.com;

Domain name matching:

  • Exact match: server_name www.nginx.com ;
  • Left wildcard: server_name *.nginx.com ;
  • Right wildcard: server_name www.nginx.* ;
  • Regular expression match: server_name ~^www\.nginx\.*$ ;

Matching priority: Exact match > Left wildcard match > Right wildcard match > Regular expression match

1. Configure local DNS resolution: vim /etc/hosts (macOS).

# Add the following content, where 121.42.11.34 is the Ali Cloud server IP address
121.42.11.34 www.nginx-test.com
121.42.11.34 mail.nginx-test.com
121.42.11.34 www.nginx-test.org
121.42.11.34 doc.nginx-test.com
121.42.11.34 www.nginx-test.cn
121.42.11.34 fe.nginx-test.club

Note: virtual domain names are used here for testing, so local DNS resolution must be configured. If you use a domain purchased on Aliyun, you need to set up the domain name resolution on Aliyun instead. 2. Configure Nginx: vim /etc/nginx/nginx.conf

# HTTP server-side configuration

# Left wildcard match
server {
  listen 80;
  server_name *.nginx-test.com;
  root /usr/share/nginx/html/nginx-test/left-match/;
  location / {
    index index.html;
  }
}

# Regular expression match
server {
  listen 80;
  server_name ~^.*\.nginx-test\..*$;
  root /usr/share/nginx/html/nginx-test/reg-match/;
  location / {
    index index.html;
  }
}

# Right wildcard match
server {
  listen 80;
  server_name www.nginx-test.*;
  root /usr/share/nginx/html/nginx-test/right-match/;
  location / {
    index index.html;
  }
}

# Exact match
server {
  listen 80;
  server_name www.nginx-test.com;
  root /usr/share/nginx/html/nginx-test/all-match/;
  location / {
    index index.html;
  }
}

3. Access analysis

  • Accessing www.nginx-test.com: all four can match, so the "exact match", which has the highest priority, is chosen;
  • Accessing mail.nginx-test.com: the "left wildcard match" applies;
  • Accessing www.nginx-test.org: the "right wildcard match" applies;
  • Accessing doc.nginx-test.com: the "left wildcard match" applies;
  • Accessing www.nginx-test.cn: the "right wildcard match" applies;
  • Accessing fe.nginx-test.club: the "regular expression match" applies;

root

Specifies the location of the static resource directory; it can be written in the http, server, and location contexts.

Syntax: root path;
Example:
location /image {
  root /opt/nginx/static;
}
When a user visits www.test.com/image/1.png, the path actually looked up on the server is /opt/nginx/static/image/1.png.

[Note] root concatenates the defined path with the URI, while alias only takes the defined path.

alias

Also specifies the location of the static resource directory, but it can only be written in location.

location /image {
  alias /opt/nginx/static/image/;
}
When a user visits www.test.com/image/1.png, the path actually looked up on the server is /opt/nginx/static/image/1.png.

A / must be added at the end of the alias path, and it can only appear inside location.

location

Configure the path.

location [ = | ~ | ~* | ^~ ] uri {
	...
}

Matching rules:

  • = exact match;
  • ~ regular expression match, case sensitive;
  • ~* regular expression match, case insensitive;
  • ^~ stop searching as soon as this match succeeds;

Matching priority: = > ^~ > ~ > ~* > no modifier. Example:

server {
  listen	80;
  server_name	www.nginx-test.com;

  # Only visiting www.nginx-test.com/match_all/ matches /usr/share/nginx/html/match_all/index.html
  location = /match_all/ {
      root	/usr/share/nginx/html;
      index index.html;
  }

  # Visiting paths such as www.nginx-test.com/1.jpg looks up the resource at /usr/share/nginx/images/1.jpg
  location ~ \.(jpeg|jpg|png|svg)$ {
  	root /usr/share/nginx/images;
  }

  # Visiting www.nginx-test.com/bbs/ matches /usr/share/nginx/html/bbs/index.html
  location ^~ /bbs/ {
    root /usr/share/nginx/html;
    index index.html index.htm;
  }
}

The trailing slash in location

location /test {
  ...
}

location /test/ {
  ...
}

  • Without /: when accessing www.nginx-test.com/test, Nginx first checks whether a test directory exists; if it does, it looks for index.html in the test directory; if there is no test directory, Nginx then looks for a test file.
  • With /: when accessing www.nginx-test.com/test, Nginx first checks whether a test directory exists; if it does, it looks for index.html in the test directory; if not, it does not go on to look for a test file.

return

Stops processing the request and directly returns a response code or redirects to another URL; after the return directive executes, subsequent directives in the location are not executed.

return code [text];
return code URL;
return URL;

Examples:
location / {
	return 404; # directly return a status code
}

location / {
	return 404 "pages not found"; # return a status code plus a piece of text
}

location / {
	return 302 /bbs ; # return a status code plus a redirect address
}

location / {
	return https://www.baidu.com ; # return a redirect address
}

rewrite

Overrides the URL based on the specified regular expression matching rule.

Syntax: rewrite regular-expression replacement [flag];
Context: server, location, if
Example: rewrite /images/(.*\.jpg)$ /pic/$1; # $1 is a backreference to the (.*\.jpg) group

The meanings of the optional flag values are as follows:

  • last: the rewritten URL initiates a new request, entering the server block again and retrying location matching;
  • break: directly use the rewritten URL and do not match any other location statements;
  • redirect: return a 302 temporary redirect;
  • permanent: return a 301 permanent redirect;
server{
  listen 80;
  server_name fe.lion.club; # to be configured in the local hosts file
  root html;
  location /search {
  	rewrite ^/(.*) https://www.baidu.com redirect;
  }

  location /images {
  	rewrite /images/(.*) /pics/$1;
  }

  location /pics {
  	rewrite /pics/(.*) /photos/$1;
  }

  location /photos {

  }
}

Given this configuration, let's analyze:

  • Accessing fe.lion.club/search automatically redirects us to https://www.baidu.com;
  • Accessing fe.lion.club/images/1.jpg first rewrites the URL to fe.lion.club/pics/1.jpg, which finds the pics location and rewrites the URL again to fe.lion.club/photos/1.jpg; that finds the /photos location, and finally the static resource 1.jpg is looked up in the html/photos directory.

The if directive

Syntax: if (condition) { ... }
Context: server, location

Example:
if ($http_user_agent ~ Chrome) {
  rewrite /(.*) /browser/$1 break;
}

Condition:

  • $variable: the variable alone; an empty value or a string starting with 0 is treated as false;
  • = or !=: equal or not equal;
  • ~: regular expression match;
  • !~: negated regular expression match;
  • ~*: regular expression match, case insensitive;
  • -f or !-f: whether a file exists or does not exist;
  • -d or !-d: whether a directory exists or does not exist;
  • -e or !-e: whether a file, directory, or symbolic link exists or does not exist;
  • -x or !-x: whether a file is executable or not;

Example:

server {
  listen 8080;
  server_name localhost;
  root html;

  location / {
  	if ( $uri = "/images/" ){
    	rewrite (.*) /pics/ break;
    }
  }
}

When localhost:8080/images/ is accessed, the if condition holds, so the rewrite directive inside it is executed.

autoindex

When a user request ends with /, the directory structure is listed; this can be used to quickly build a static resource download site. Example configuration:

server {
  listen 80;
  server_name fe.lion-test.club;

  location /download/ {
    root /opt/source;

    autoindex on;            # enable autoindex; options: on | off
    autoindex_exact_size on; # on by default; shows the exact file size in bytes
    autoindex_format html;   # format as HTML; options: html | json | xml
    autoindex_localtime off; # off by default, showing file times in GMT; on shows the file's server time
  }
}

When fe.lion-test.club/download/ is accessed, the files under the server path /opt/source/download/ are listed:



Variables

Nginx provides many variables to users. Ultimately they are all data generated during a complete request flow, which Nginx exposes to users in the form of variables. Here are some variables commonly used in projects:

Variable name        Meaning
remote_addr          client IP address
remote_port          client port
server_addr          server IP address
server_port          server port
server_protocol      protocol used by the server
binary_remote_addr   client IP address in binary form
connection           serial number of the TCP connection, increasing
connection_requests  number of requests made on the current TCP connection
uri                  requested URL, without parameters
request_uri          requested URL, including parameters
scheme               protocol, http or https
request_method       request method
request_length       total length of the request, including request line, headers, and body
args                 the full parameter string
arg_<name>           the value of a specific parameter
is_args              "?" if the URL carries parameters, otherwise empty
query_string         same as args
host                 the Host from the request line; if the request line has no Host, taken from the request headers; failing that, the server_name configured in Nginx
http_user_agent      the user's browser
http_referer         the link from which the request came
http_via             each proxy server the request passes through adds its own information
http_cookie          the user's cookie
request_time         time spent processing the request
https                "on" if HTTPS is in use, otherwise empty
request_filename     full disk file system path of the requested file
document_root        folder path derived from the URI and the root/alias rules
limit_rate           rate limit for the response

var.conf:

server{
	listen 8081;
	server_name var.lion-test.club;
	root /usr/share/nginx/html;
	location / {
		return 200 "
remote_addr: $remote_addr
remote_port: $remote_port
server_addr: $server_addr
server_port: $server_port
server_protocol: $server_protocol
binary_remote_addr: $binary_remote_addr
connection: $connection
uri: $uri
request_uri: $request_uri
scheme: $scheme
request_method: $request_method
request_length: $request_length
args: $args
arg_pid: $arg_pid
is_args: $is_args
query_string: $query_string
host: $host
http_user_agent: $http_user_agent
http_referer: $http_referer
http_via: $http_via
request_time: $request_time
https: $https
request_filename: $request_filename
document_root: $document_root
"; }}Copy the code

When we visit http://var.lion-test.club:8081/test?pid=121414&cid=sadasd, Chrome will download a file by default (the text returned by return is served with the default type application/octet-stream). Its contents:

remote_addr: 27.16.220.84
remote_port: 56838
server_addr: 172.17.0.2
server_port: 8081
server_protocol: HTTP/1.1
binary_remote_addr: (unprintable binary value)
connection: 126
uri: /test/
request_uri: /test/?pid=121414&cid=sadasd
scheme: http
request_method: GET
request_length: 518
args: pid=121414&cid=sadasd
arg_pid: 121414
is_args: ?
query_string: pid=121414&cid=sadasd
host: var.lion-test.club
http_user_agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.182 Safari/537.36
http_referer:
http_via:
request_time: 0.000
https:
request_filename: /usr/share/nginx/html/test/
document_root: /usr/share/nginx/html

There are many more Nginx configurations. The above is just a list of some commonly used configurations. In actual projects, you should learn to consult the documentation.

Core concepts of Nginx applications

A proxy is a layer of server assumed to sit between the server and the client. The proxy receives requests from the client and forwards them to the server, then forwards the server's responses back to the client.



Both forward and reverse proxies implement the above functions.





Forward proxy

A forward proxy is a server that sits between the client and the origin server. To get content from the origin server, the client sends a request to the proxy and names the target (the origin server); the proxy forwards the request to the origin server and returns the obtained content to the client.

A forward proxy serves us, that is, the client: the client can use a forward proxy to access server resources it cannot reach on its own. A forward proxy is transparent to us but opaque to the server, meaning the server does not know whether it is receiving access from a proxy or from a real client.

Reverse proxy

In Reverse Proxy mode, a proxy server receives connection requests from the Internet, forwards the requests to a server on the internal network, and returns the result obtained from that server to the client on the Internet that requested the connection. In this case, the proxy server behaves externally as a reverse proxy server.

A reverse proxy serves the server side: it receives requests from clients on the server's behalf and performs request forwarding, load balancing, and so on for the server. A reverse proxy is transparent to the server but opaque to us, meaning we do not know we are accessing a proxy server, while the server knows the reverse proxy is serving it. Advantages of reverse proxies:

  • Hides the real servers;
  • Load balancing makes it easy to scale backend dynamic services horizontally;
  • Dynamic/static separation enhances the system's robustness;

So what are "dynamic/static separation" and "load balancing"?

Dynamic and static separation

Dynamic/static separation refers to the architectural design method of separating static pages from dynamic pages, or static content interfaces from dynamic content interfaces, in the web server architecture, which improves the accessibility and maintainability of the entire service.





In general, dynamic resources need to be separated from static resources. Because of Nginx's high concurrency and static resource caching features, static resources are often deployed on Nginx. If a request is for a static resource, it is fetched directly from the static resource directory; if it is for a dynamic resource, the reverse proxy forwards it to the corresponding backend application for processing. This is how dynamic/static separation is realized.



With dynamic/static separation, the access speed of static resources can be greatly improved, and even if the dynamic service is unavailable, access to static resources is not affected.

Load balancing

Generally, the client sends multiple requests to the server; the server processes them, possibly operating on resources such as databases and static files, and returns the results to the client.



For early systems with simple functional requirements and relatively low concurrency, this model was adequate and cheap. As the volume of information, traffic, and data grows and system services become more complex, this approach no longer suffices: when concurrency is very high, the server is prone to crashing.



Obviously this is a server performance bottleneck, and apart from piling on machines, the most important thing to do is load balancing.



When requests grow explosively, no single machine, however strong its performance, can meet the demand. This is where the concept of a cluster comes in: a problem one server cannot solve can be handled by several, with requests distributed across all of them so the load is spread out. That is load balancing, and its core idea is "sharing the pressure". With Nginx, load balancing generally means forwarding requests to a server cluster.



For example, during the evening rush hour in the subway, staff often announce through a loudspeaker: "Please go to exit B, exit B has fewer people and is empty...". These staff members are performing load balancing.





Nginx load balancing strategies:

  • Round-robin (polling) strategy: the default strategy, which distributes client requests to the servers in turn. It works fine, but if one server comes under too much pressure and latency rises, all users assigned to that server are affected. A weighted variant is sketched after this list;
  • Least-connections strategy: requests are assigned first to the less pressured servers, which evens out queue lengths and avoids adding more requests to an already stressed server;
  • Fastest response time strategy: requests go to the server with the shortest response time;
  • Client IP binding (ip_hash) strategy: requests from the same IP are always assigned to the same server, effectively solving the session sharing problem of dynamic web pages;
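For the default round-robin strategy, the weight parameter lets a stronger machine take a proportionally larger share of requests. A minimal sketch, reusing the demo addresses from this article:

upstream demo_server {
    server 121.42.11.34:8020 weight=5; # receives roughly five requests...
    server 121.42.11.34:8030 weight=1; # ...for every one this server receives
}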

Nginx hands-on configuration

Before configuring reverse proxying, load balancing, and so on, there are two core modules we must master: upstream and proxy_pass.

upstream

Used to define the upstream servers (that is, the backend application servers).



Syntax: upstream name { ... }
Context: http
Example:
upstream back_end_server {
  server 192.168.100.33:8081;
}

Directives that can be used inside upstream:

  • server: defines the address of an upstream server;
  • zone: defines shared memory, for use across worker processes;
  • keepalive: enables persistent connections to the upstream servers;
  • keepalive_requests: maximum number of HTTP requests per persistent connection;
  • keepalive_timeout: timeout for an idle persistent connection;
  • hash: hash load balancing algorithm;
  • ip_hash: load balancing algorithm that hashes the client IP;
  • least_conn: least-connections load balancing algorithm;
  • least_time: least response time load balancing algorithm;
  • random: random load balancing algorithm;

server

Define the upstream server address.

Syntax: server address [parameters];
Context: upstream

Optional values of parameters (see the sketch after this list):

  • weight=number: weight, 1 by default;
  • max_conns=number: maximum number of concurrent connections to the upstream server;
  • fail_timeout=time: the period during which the server is considered unavailable;
  • max_fails=number: the number of failed checks before the server is considered unavailable;
  • backup: backup server, enabled only when the other servers are unavailable;
  • down: marks a server as unavailable long-term, offline for maintenance;
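A sketch combining several of these parameters (the first three addresses are the demo ones used throughout this article; the fourth server is hypothetical):

upstream back_end {
    server 121.42.11.34:8020 weight=2 max_fails=3 fail_timeout=10s;
    server 121.42.11.34:8030;
    server 121.42.11.34:8040 backup; # receives traffic only when the servers above are unavailable
    server 121.42.11.34:8050 down;   # taken offline for maintenance
}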

keepalive

Limits the maximum number of idle persistent connections each worker process keeps open to the upstream servers.

Syntax: keepalive connections;
Context: upstream
Example: keepalive 16;

keepalive_requests

Maximum number of HTTP requests that can be processed by a single persistent connection.

Syntax: keepalive_requests number;
Default: keepalive_requests 100;
Context: upstream

keepalive_timeout

Maximum hold time of an idle long connection.

Syntax: keepalive_timeout time;
Default: keepalive_timeout 60s;
Context: upstream

Configuration example

upstream back_end {
  server 127.0.0.1:8081 weight=3 max_conns=1000 fail_timeout=10s max_fails=2;
  keepalive 32;
  keepalive_requests 50;
  keepalive_timeout 30s;
}

proxy_pass

Used to configure the proxy server.

Syntax: proxy_pass URL;
Context: location, if, limit_except
Examples:
proxy_pass http://127.0.0.1:8081
proxy_pass http://127.0.0.1:8081/proxy

URL Parameter Principles

  1. The URL must start with http or https;
  2. The URL can carry variables;
  3. Whether the URL includes a URI directly affects the URL of the request sent upstream;

Let’s look at two common uses of urls:

  1. proxy_pass http://192.168.100.33:8081
  2. proxy_pass http://192.168.100.33:8081/

The difference between these two usages is the trailing /, which makes a big difference when configuring a proxy:

  • Without /: Nginx does not modify the user's URL but passes it straight through to the upstream application server;
  • With /: Nginx modifies the user's URL by removing the part matched by location from the user's URL;

Without / :

location /bbs/ {
  proxy_pass http://127.0.0.1:8080;
}

Analysis:

  1. URL requested by the user: /bbs/abc/test.html
  2. URL as it reaches Nginx: /bbs/abc/test.html
  3. URL as it reaches the upstream application server: /bbs/abc/test.html

With / :

location /bbs/ {
  proxy_pass http://127.0.0.1:8080/;
}

Analysis:

  1. URL requested by the user: /bbs/abc/test.html
  2. URL as it reaches Nginx: /bbs/abc/test.html
  3. URL as it reaches the upstream application server: /abc/test.html

The /bbs prefix is not concatenated, which is consistent with the difference between root and alias.

Configuring the Reverse Proxy

In order to make the demonstration more realistic, the author has prepared two cloud servers whose public IP addresses are 121.42.11.34 and 121.5.180.193.

We use server 121.42.11.34 as the upstream server and do the following configuration:

# /etc/nginx/conf.d/proxy.conf
server {
  listen 8080;
  server_name localhost;
  location /proxy/ {
    root /usr/share/nginx/html/proxy;
    index index.html;
  }
}

# /usr/share/nginx/html/proxy/index.html
<h1> 121.42.11.34 proxy html </h1>

After configuring, restart the server with nginx -s reload. Then configure server 121.5.180.193 as the proxy server as follows:

# /etc/nginx/conf.d/proxy.conf
upstream back_end {
  server 121.42.11.34:8080 weight=2 max_conns=1000 fail_timeout=10s max_fails=3;
  keepalive 32;
  keepalive_requests 80;
  keepalive_timeout 20s;
}

server {
  listen 80;
  server_name proxy.lion.club;
  location /proxy {
    proxy_pass http://back_end/proxy;
  }
}

The local machine needs to access the proxy.lion.club domain name, so configure the local hosts file:

121.5.180.193 proxy.lion.club





Analysis:

  1. When proxy.lion.club/proxy is accessed, the upstream configuration resolves it to 121.42.11.34:8080;
  2. So the access address becomes http://121.42.11.34:8080/proxy;
  3. The request reaches the 121.42.11.34 server, where the server listening on port 8080 handles it;
  4. That server finds the /usr/share/nginx/html/proxy/index.html resource, which is finally displayed.

Configuring load Balancing

Configuring load balancing is primarily done using the upstream directive.

We use server 121.42.11.34 as the upstream server, with the following configuration (/etc/nginx/conf.d/balance.conf):

server{
  listen 8020;
  location / {
  	return 200 'return 8020 \n';
  }
}

server{
  listen 8030;
  location / {
  	return 200 'return 8030 \n';
  }
}

server{
  listen 8040;
  location / {
  	return 200 'return 8040 \n';
  }
}

After the configuration:

  1. nginx -t to check whether the configuration is correct;
  2. nginx -s reload to restart the Nginx server;
  3. ss -nlt to check whether the ports are listening and the Nginx services started correctly.

Use server 121.5.180.193 as the proxy server, with the following configuration (/etc/nginx/conf.d/balance.conf):

upstream demo_server {
  server 121.42.11.34:8020;
  server 121.42.11.34:8030;
  server 121.42.11.34:8040;
}

server {
  listen 80;
  server_name balance.lion.club;
  location /balance/ {
    proxy_pass http://demo_server;
  }
}

Restart Nginx after configuring, and configure the IP-to-domain mapping on the client machine that will do the accessing:

# /etc/hosts
121.5.180.193 balance.lion.club

Execute the curl http://balance.lion.club/balance/ command on the client machine:







As you can see, the load balancing configuration has taken effect: the upstream server we are distributed to differs each time. This is the simple round-robin strategy for distributing requests to upstream servers.



Next, let's look at Nginx's other distribution strategies.

The hash algorithm

The hash algorithm maps a key to a specific upstream server. The key can contain variables and strings.

upstream demo_server {
  hash $request_uri;
  server 121.42.11.34:8020;
  server 121.42.11.34:8030;
  server 121.42.11.34:8040;
}

server {
  listen 80;
  server_name balance.lion.club;
  location /balance/ {
    proxy_pass http://demo_server;
  }
}

hash $request_uri means the $request_uri variable is used as the hash key: as long as the accessed URI stays the same, requests are distributed to the same server.

ip_hash

Based on the client's request IP: as long as the IP address is unchanged, requests are always assigned to the same host. This effectively solves the backend server session persistence problem.

upstream demo_server {
  ip_hash;
  server 121.42.11.34:8020;
  server 121.42.11.34:8030;
  server 121.42.11.34:8040;
}

server {
  listen 80;
  server_name balance.lion.club;
  location /balance/ {
    proxy_pass http://demo_server;
  }
}

Least-connections algorithm

Each worker process obtains the backend server information by reading the data in shared memory, then selects the server with the fewest current connections.

Syntax: least_conn;
Context: upstream

Example:

upstream demo_server {
  zone test 10M; # sets the name and size of the shared memory zone
  least_conn;
  server 121.42.11.34:8020;
  server 121.42.11.34:8030;
  server 121.42.11.34:8040;
}

server {
  listen 80;
  server_name balance.lion.club;
  location /balance/ {
    proxy_pass http://demo_server;
  }
}

At the end of the day, you’ll find that configuring load balancing isn’t complicated at all.

Configure the cache

Caching is a very effective way to improve performance: clients (browsers), proxy servers (Nginx), and even upstream servers all use caching to some extent. Caching matters at every step of the chain. Let's learn how to set caching policies in Nginx.

proxy_cache

Stores resources that have been accessed before and may be accessed again so that the user can obtain them directly from the proxy server, thereby reducing the pressure on the upstream server and accelerating the overall access speed.

Syntax: proxy_cache zone | off; # zone is the name of a shared memory zone
Default: proxy_cache off;
Context: http, server, location

proxy_cache_path

Set the path for storing cache files.

Syntax: proxy_cache_path path [levels=levels] ... (the optional parameters are omitted here)
Default: proxy_cache_path off;
Context: http

Parameter meanings:

  • path: path where cache files are stored;
  • levels: directory hierarchy of path;
  • keys_zone: sets the shared memory zone;
  • inactive: cached entries not accessed within this period are cleared;

proxy_cache_key

Set the key of the cache file.

Syntax: proxy_cache_key string;
Default: proxy_cache_key $scheme$proxy_host$request_uri;
Context: http, server, location

proxy_cache_valid

Configure what status codes can be cached and for how long.

Syntax: proxy_cache_valid [code ...] time;
Context: http, server, location
Example: proxy_cache_valid 200 304 2m; # responses with status 200 or 304 are cached for 2 minutes

proxy_no_cache

Defines the conditions under which responses are not stored in the cache. If at least one value of the string parameters is non-empty and not equal to "0", the response is not saved:

Syntax: proxy_no_cache string;
Context: http, server, location
Example: proxy_no_cache $http_pragma $http_authorization;

proxy_cache_bypass

Define a condition under which the response will not be retrieved from the cache.

Syntax: proxy_cache_bypass string;
Context: http, server, location
Example: proxy_cache_bypass $http_pragma $http_authorization;

The upstream_cache_status variable

It stores the cache hit information, which is set in the response header and is useful in debugging.

MISS: cache miss
HIT: cache hit
EXPIRED: the cached content has expired
STALE: stale cached content was served
UPDATING: the content is stale but is being updated
REVALIDATED: Nginx verified that the stale content is still valid
BYPASS: the response was obtained from the origin server

Configuration example

We use server 121.42.11.34 as the upstream server, with the following configuration (/etc/nginx/conf.d/cache.conf):

server {
  listen 1010;
  root /usr/share/nginx/html/1010;
  location / {
    index index.html;
  }
}

server {
  listen 1020;
  root /usr/share/nginx/html/1020;
  location / {
    index index.html;
  }
}

Use server 121.5.180.193 as the proxy server, with the following configuration (/etc/nginx/conf.d/cache.conf):

proxy_cache_path /etc/nginx/cache_temp levels=2:2 keys_zone=cache_zone:30m max_size=2g inactive=60m use_temp_path=off;

upstream cache_server{
  server 121.42.11.34:1010;
  server 121.42.11.34:1020;
}

server {
  listen 80;
  server_name cache.lion.club;
  location / {
    proxy_cache cache_zone; # use the cache zone defined in the configuration above
    proxy_cache_valid 200 5m; # cache responses with status 200 for 5 minutes
    proxy_cache_key $request_uri; # the cache key is the request URI
    add_header Nginx-Cache-Status $upstream_cache_status; # put the cache status in a response header sent to the client
    proxy_pass http://cache_server; # proxy forwarding
  }
}

That is how the cache is configured; you can find the cache files under /etc/nginx/cache_temp. For pages or data with strong real-time requirements, you should not enable caching. Here is how to configure content that must not be cached:

...
server {
  listen 80;
  server_name cache.lion.club;

  # URIs ending in .txt or .text set the variable value to "no cache"
  if ($request_uri ~ \.(txt|text)$) {
  	set $cache_name "no cache";
  }

  location / {
    proxy_no_cache $cache_name; # if the variable has a value, do not cache; if it is empty, cache
    proxy_cache cache_zone; # use the cache zone
    proxy_cache_valid 200 5m; # cache responses with status 200 for 5 minutes
    proxy_cache_key $request_uri; # the cache key is the request URI
    add_header Nginx-Cache-Status $upstream_cache_status; # put the cache status in a response header sent to the client
    proxy_pass http://cache_server; # proxy forwarding
  }
}

HTTPS

Before learning how to configure HTTPS, let's briefly review the HTTPS workflow. How does its encryption keep transmissions secure?

HTTPS workflow

  1. The client (browser) visits the https://www.baidu.com website;
  2. Baidu's server returns the CA certificate used by HTTPS;
  3. The browser verifies whether the CA certificate is valid;
  4. If verification passes, the certificate is valid, and a string of random numbers is generated and encrypted with the public key (provided in the certificate);
  5. The random numbers encrypted with the public key are sent to Baidu's server;
  6. Baidu's server receives the ciphertext and decrypts it with its private key to obtain the random numbers (the public key encrypts and the private key decrypts, and vice versa);
  7. When Baidu's server sends content to the browser, it encrypts it with the random numbers before transmitting;
  8. The browser can then decrypt with the same random numbers and obtain the real content sent by the server;

This is how HTTPS basically works: it uses symmetric and asymmetric encryption together to ensure the security of the transmitted content.
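You can observe steps 2 and 3 of this handshake yourself with the openssl command-line tool, which prints the certificate chain the server presents (any HTTPS site works; www.baidu.com just matches the example above):

openssl s_client -connect www.baidu.com:443 -servername www.baidu.com < /dev/null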

For more information on HTTPS, check out the author’s other article, Learning the HTTP Protocol.

Configuring the certificate

Download the compressed certificate file; it contains an nginx folder. Copy the xxx.crt and xxx.key files into a directory on the server, then configure:

server {
  listen 443 ssl http2 default_server;   # SSL access port 443
  server_name lion.club;         # fill in the domain name the certificate is bound to
  ssl_certificate /etc/nginx/https/lion.club_bundle.crt;   # certificate path
  ssl_certificate_key /etc/nginx/https/lion.club.key;      # private key path
  ssl_session_timeout 10m;
  ssl_protocols TLSv1 TLSv1.1 TLSv1.2;   # supported SSL protocol versions, the last three by default; TLSv1.2 is the mainstream version

  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
  }
}

After this configuration, you can normally access the HTTPS version of the website.
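A common companion to this configuration is a second server block that redirects plain HTTP traffic to HTTPS; a minimal sketch, reusing the lion.club domain from the example above:

server {
  listen 80;
  server_name lion.club;
  return 301 https://$host$request_uri; # permanently redirect all HTTP requests to HTTPS
}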

Configure cross-domain CORS

Just a quick review of what cross-domain is all about.

Cross-domain definitions

The same origin policy restricts how a document or script loaded from one source interacts with a resource from another source. This is an important security mechanism for isolating potentially malicious files. Read operations between different sources are generally not allowed.

Definition of same origin

Two pages have the same origin if their protocol, port (if specified), and host are the same. Below are examples of origin comparisons against the URL http://store.company.com/dir/page.html:

http://store.company.com/dir2/other.html   same origin
https://store.company.com/secure.html      different origin: different protocol
http://store.company.com:81/dir/etc.html   different origin: different port
http://news.company.com/dir/other.html     different origin: different host

Different sources have the following restrictions:

  • At the data level, the same-origin policy restricts sites of different origins from reading the current site's Cookie, IndexedDB, LocalStorage, and other data;
  • At the DOM level, the same-origin policy restricts JavaScript scripts from different origins from reading and writing the current DOM;
  • At the network level, the same-origin policy restricts sending the site's data via XMLHttpRequest and similar means to sites of a different origin;

How Nginx solves cross-origin issues

For example:

  • The front-end server domain name: fe.server.com
  • The back-end service domain name: dev.server.com

Requests made from fe.server.com to dev.server.com are bound to be cross-origin. All we need to do is start an Nginx server, set server_name to fe.server.com, set up a location to intercept the requests that need to cross origins, and proxy those requests back to dev.server.com. The configuration is as follows:

server {
  listen 80;
  server_name fe.server.com;
  location / {
    proxy_pass http://dev.server.com;
  }
}

This is a perfect way to bypass the browser’s same-origin policy: fe.server.com accessing Nginx’s fe.server.com is same-origin access, and requests forwarded by Nginx to the server do not trigger the browser’s same-origin policy.
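If the front end and back end cannot be put behind the same domain, Nginx can instead attach the CORS response headers itself. A rough sketch (not tuned for production) using the same hypothetical domains:

server {
  listen 80;
  server_name dev.server.com;

  location / {
    add_header Access-Control-Allow-Origin http://fe.server.com;
    add_header Access-Control-Allow-Methods 'GET, POST, OPTIONS';
    add_header Access-Control-Allow-Headers 'Content-Type';

    # answer CORS preflight requests directly
    if ($request_method = 'OPTIONS') {
      return 204;
    }

    # ... serve or proxy the actual API here
  }
}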

Enable gzip compression

GZIP is one of the three standard HTTP compression formats. The vast majority of websites currently use GZIP to transmit HTML, CSS, JavaScript, and other resource files.



For text files, the effect of GZIP is very noticeable: the traffic required for transmission drops to roughly 1/4 to 1/3 of the original.



Not every browser supports gzip. How do we know whether the client supports it? The Accept-Encoding field in the request header identifies the supported compression methods.



Enabling gzip requires support from both the client and the server: if the client supports gzip and the server can return gzip content, gzip is in effect. We can make the server support gzip through Nginx. A content-encoding: gzip field in the response indicates that the server has enabled gzip compression.



Create a gzip.conf configuration file in the /etc/nginx/conf.d/ folder:

# Enable gzip compression (it is off by default)
gzip on;

# MIME types to compress with gzip; text/html is always compressed
gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

# ---- The two directives above are enough to support gzip compression ---- #

# When enabled, for requests to static files Nginx first checks whether a file ending in .gz exists; if so, it returns the contents of the .gz file directly
gzip_static on;

# When Nginx is used as a reverse proxy, enable or disable gzip compression of responses received from the proxied server
gzip_proxied any;

# Add "Vary: Accept-Encoding" to the response header, so that proxy servers can decide whether to serve gzip content based on the Accept-Encoding request header
gzip_vary on;

# gzip compression level, 1-9; 1 is the lowest and 9 the highest; the higher the level, the better the ratio but the longer the compression takes; 4-6 is recommended
gzip_comp_level 6;

# How much memory is used to buffer compression results: 16 buffers of 8k each
gzip_buffers 16 8k;

# Minimum response size to compress, read from Content-Length in the headers; the default 0 compresses pages regardless of size; setting it above 1k is recommended, since compressing smaller pages may increase pressure
# gzip_min_length 1k;

# The lowest HTTP version required to enable gzip; default 1.1
gzip_http_version 1.1;

You can also have front-end build tools such as webpack and rollup produce gzip-compressed files at build time and place them on the Nginx server; combined with gzip_static, this reduces server overhead and speeds up access.

That wraps up the practical application of Nginx. Having mastered its core configuration and hands-on configuration, I believe we can easily handle whatever needs we encounter. Next, let's dig a little deeper into Nginx's architecture.

Nginx architecture

The process structure








Nginx uses a multi-process architecture: there is a parent process (the Master Process), which has many child processes (Child Processes, the worker processes).

  • The Master Process manages the worker processes and does not itself handle user requests;
    • If a worker process goes down, it sends a message to the master process, which notices it is unavailable and creates a new worker process;
    • When a configuration file is modified, the master process notifies the worker processes to pick up the new configuration information; this is what we call hot deployment;
  • The worker processes communicate with each other through shared memory.

Principle of configuration file reloading

What happens when nginx -s reload is executed to reload the configuration file:

  1. A HUP signal (the reload command) is sent to the master process (demonstrated after this list);
  2. The master process checks that the configuration syntax is valid;
  3. The master process opens the listening ports;
  4. The master process starts new worker processes using the new configuration file;
  5. The master process sends the QUIT signal to the old worker processes;
  6. The old worker processes close their listening handles and exit once the connections they are currently handling finish;
  7. Throughout this process Nginx keeps running smoothly, achieving a seamless upgrade that users do not notice;
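Step 1 is exactly what nginx -s reload does under the hood; you can trigger the same flow manually by signaling the master process, whose pid is stored in the file configured by the pid directive:

kill -HUP $(cat /var/run/nginx.pid) # send HUP to the master process; same effect as nginx -s reload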

Nginx modular management mechanism

Nginx's internals consist of a core part and a series of functional modules. This division keeps each module's function relatively simple, which eases development and also makes it easy to extend the system's functionality. Nginx modules are independent of one another, with low coupling and high cohesion.

Conclusion

I believe that after studying this article, you have a more comprehensive understanding of Nginx. If you have read this far, please give it a 👍 👍 👍.