Nginx is a lightweight, high-performance, HTTP-based reverse proxy server and static web server.

Forward and reverse proxies

Whether a proxy is "forward" or "reverse" is defined from the client's perspective.

  • Forward proxy

    • Characteristics

      • A forward proxy acts on behalf of the client

      • A forward proxy runs on a host on the client side

      • When using a forward proxy server, clients need to know the address of the target service they are accessing

    • Use cases

      • Hide real visitors

        • Hide the real visitor from the server: from the server's point of view, the visitor is the proxy server, so the client stays hidden. In real life, for example, you don't know who is texting you. DDoS attacks work the same way, using large numbers of compromised "zombie" machines to attack our servers, so the real source of the attack cannot be traced.
      • Bypassing access restrictions

        • For many complicated reasons, server A cannot access server B directly, but server C can access server B, and server A can access server C. In this case, server C acts as a proxy for A to access B. Circumvention software works on this principle.
      • Acceleration

        • The connection between server A and server B is slow, but A connects to C quickly and C connects to B quickly, so routing through proxy server C improves efficiency.
      • Caching

        • Adding client-side caching reduces the pressure of requests on the server. Maven's Nexus, for example, is a classic case of client-side caching.
      • Authorization

        • Inside a company, for example, a forward proxy is also used to monitor and authorize employees' Internet access.
  • Reverse proxy

    • Characteristics

      • A reverse proxy is a proxy for the server

      • A reverse proxy runs on a host on the server side

      • The client does not know the address of the real server host when accessing the server

    • Use cases

      • Protecting and hiding the real servers

        • Clients can only reach the proxy server; the real servers are not directly accessible, which protects them.
      • Distributed routing

        • Requests are routed to different servers according to different client requests.
      • Load balancing

        • The server evenly distributes requests from clients to ensure high availability of the server.
      • Dynamic and static separation

        • Images, static pages, CSS, and JS are static resources. Placing them in the appropriate directories lets the proxy answer static requests itself and forward only dynamic requests to the backend, reducing the load on the server.
      • Data cache

        • A reverse proxy can cache data just like a forward proxy, reducing pressure on the server (see the sketch below).
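To make these cases concrete, here is a minimal reverse-proxy sketch (the backend addresses are placeholders, not from this article): the client only ever talks to nginx on port 80, which hides and balances the real servers.

# Minimal reverse-proxy sketch; backend addresses are illustrative placeholders
http {
    upstream backend {
        server 192.168.1.10:8080;
        server 192.168.1.11:8080;
    }
    server {
        listen 80;
        location / {
            proxy_pass http://backend;               # route to the hidden real servers
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}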

Load balancing

1. Nginx has two kinds of load-balancing policies: built-in policies and extended (third-party) policies

2. Built-in policies: polling (round-robin), weighted polling, ip_hash, least_conn

3. Extended policies: fair and url_hash, implemented by third-party modules

Policy                    Description
polling                   Round-robin, the default mode
weight                    Weighted round-robin
ip_hash                   Assigns requests by client IP address
least_conn                Fewest-connections mode
fair (third party)        Assigns requests by response time
url_hash (third party)    Assigns requests by URL hash

Round-robin (polling)

Round-robin is upstream's default load-balancing policy: each request is assigned to a different server in turn, in the order received.

The parameters are as follows:

Parameter     Description
fail_timeout  Used in combination with max_fails
max_fails     The maximum number of failures allowed within fail_timeout
fail_time     How long a failed server is considered down; 10 seconds by default
backup        Marks the server as a backup; it receives requests only when the primary servers are down
down          Marks the server as permanently down

Example:

#Dynamic server group
upstream dynamic_zuoyu {
    server localhost:8080;                                # tomcat 7.0
    server localhost:8081;                                # tomcat 8.0
    server localhost:8082 backup;                         # tomcat 8.5
    server localhost:8083 max_fails=3 fail_timeout=20s;   # tomcat 9.0
}

Note: this policy suits stateless, short-lived services on servers with identical configurations

Weight

Weighting builds on round-robin by specifying a polling probability: the higher a server's weight, the more requests it handles. This policy can also be combined with ip_hash and least_conn.

Example:

#Dynamic server group
upstream dynamic_zuoyu {
    server localhost:8080 weight=2;                       # tomcat 7.0
    server localhost:8081;                                # tomcat 8.0
    server localhost:8082 backup;                         # tomcat 8.5
    server localhost:8083 max_fails=3 fail_timeout=20s;   # tomcat 9.0
}

Note: this policy is applicable when server hardware configurations differ

ip_hash

IP hash: requests from clients with the same IP address are assigned to the same server, guaranteeing session stickiness. This solves the problem of sessions not being shared across servers.

Example:

#Dynamic server group
upstream dynamic_zuoyu {
    ip_hash;
    server localhost:8080 weight=2;                       # tomcat 7.0
    server localhost:8081;                                # tomcat 8.0
    server localhost:8082;                                # tomcat 8.5
    server localhost:8083 max_fails=3 fail_timeout=20s;   # tomcat 9.0
}

Note: this policy suits scenarios where sessions are not shared across servers

least_conn

Least connections: forwards each request to the server with the fewest active connections. Round-robin distributes requests equally, so loads are roughly the same; but some requests hold connections much longer, driving up load on particular servers. The least-connections approach balances this better than plain round-robin.

Example:

#Dynamic server group
upstream dynamic_zuoyu {
    least_conn;
    server localhost:8080 weight=2;                       # tomcat 7.0
    server localhost:8081;                                # tomcat 8.0
    server localhost:8082 backup;                         # tomcat 8.5
    server localhost:8083 max_fails=3 fail_timeout=20s;   # tomcat 9.0
}

Note: this policy suits situations where varying request durations cause uneven server load

Fair (third party, separate plug-in required)

Allocate requests based on the server’s response time, with servers with short response times getting priority.

Example:

#Dynamic server group
upstream dynamic_zuoyu {
    server localhost:8080;   # tomcat 7.0
    server localhost:8081;   # tomcat 8.0
    server localhost:8082;   # tomcat 8.5
    server localhost:8083;   # tomcat 9.0
    fair;                    # give priority to servers with short response times
}

Url_hash (third party, separate plug-in required)

Requests are allocated by URL hash so the same URL always goes to the same server, which improves cache hits and reduces load on the servers (use together with caching).

Example:

#Dynamic server group
upstream dynamic_zuoyu {
    hash $request_uri;       # hash on the request URI
    server localhost:8080;   # tomcat 7.0
    server localhost:8081;   # tomcat 8.0
    server localhost:8082;   # tomcat 8.5
    server localhost:8083;   # tomcat 9.0
}

Note: This policy applies to multiple requests for the same resource

Static Web server

  • Front-end/back-end separation
location / {
    root  /data/paibo_web_8081_css;
    index index.html index.htm;
}
  • Static resources (files, images, etc.)
location /upfile/ {
    root  /home/audit_files/;   # file path
    index index.html;
}

Download and install Nginx

#Install the GCC compiler
yum -y install gcc gcc-c++

#Install the PCRE and OpenSSL development libraries
yum -y install pcre-devel openssl-devel

#Download nginx. The official site is http://nginx.org; find the version you want, right-click, and copy the download link
wget http://nginx.org/download/nginx-1.19.2.tar.gz
#Unpack the archive
tar -zxvf nginx-1.19.2.tar.gz
#Generate the makefile. Use ./configure --help to check each module's usage; disable unneeded modules with --without-http_ssi_module and enable needed ones with --with-http_perl_module
cd nginx-1.19.2
./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module
#Compile the installation
make && make install

#The installation directory is displayed
cd /usr/local/nginx/

#Symlink /usr/local/nginx/sbin/nginx into /usr/local/sbin so the nginx command can be used anywhere
ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/

#Modifying a Configuration File
vim /usr/local/nginx/conf/nginx.conf

#Test whether the nginx configuration file is normal
nginx -t

#Start the nginx
nginx

#Disabling the Firewall
systemctl stop firewalld
systemctl disable firewalld

#For external access, nginx listens on port 80 by default
192.168.198.98:80

#Reload the configuration file
nginx -s reload

#Reopen the log files
nginx -s reopen

#Stop nginx
nginx -s stop

Nginx+Keepalived high availability

#Install nginx on both servers; see the installation section above

#Here I use the following two servers; to tell them apart, each index.html was edited:
#192.168.198.98: <h1>Welcome to nginx! 1</h1>
#192.168.198.6:  <h1>Welcome to nginx! 2</h1>
#Perform the following operations on both servers

#Download the Keepalived package
cd /data/soft/
wget https://www.keepalived.org/software/keepalived-2.1.0.tar.gz
#Unpack the package
tar -zxvf keepalived-2.1.0.tar.gz
#Compile and install
cd keepalived-2.1.0
./configure --prefix=/usr/local/keepalived
make && make install
#Keepalived startup-script variable file; the default path is /etc/sysconfig/. Instead of copying, you can also edit the startup script to point at the install directory
cp /usr/local/keepalived/etc/sysconfig/keepalived  /etc/sysconfig/keepalived

#Add the Keepalived main program to the environment variables (installation directory)
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/keepalived

#Keepalived startup script (from the source directory); placing it in /etc/init.d/ lets you manage it with the service command
cp /data/soft/keepalived-2.1.0/keepalived/etc/init.d/keepalived /etc/init.d/keepalived
#Place the configuration file in the default path
mkdir /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf

#Register the service with the system
chkconfig --add keepalived
#Powered up
chkconfig keepalived on

#List services started at boot
chkconfig --list
#Start, close, restart
service keepalived start|stop|restart

#########################  Installation complete; the HA configuration below edits keepalived.conf  #########################
vim /etc/keepalived/keepalived.conf

##### master #####
! Configuration File for keepalived
#Global definition block
global_defs {
    router_id redis-rocketmq   #a string identifying this node; the hostname is recommended
}

#Keepalived periodically runs the script and adjusts the vrrp_instance priority from its result:
#if the script exits 0 and weight > 0, the priority is raised accordingly; if it exits non-zero
#and weight < 0, the priority is lowered accordingly. Otherwise the priority keeps the value
#configured below.
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2     #check interval (seconds); assumed to match the backup node's setting
    weight -20
}
#Define a virtual router instance; VI_1 is its identifier
vrrp_instance VI_1 {
    state MASTER                   #this node is the master
    interface eno16777736          #network interface the VIP is bound to
    virtual_router_id 51           #virtual router ID; must be identical on master and backup
    mcast_src_ip 192.168.198.98    #source IP for multicast packets
    priority 100                   #priority, 0-254; the master should have the highest value
    nopreempt                      #non-preemptive mode
    advert_int 1                   #multicast advertisement interval; must match on both nodes
    authentication {               #authentication for VRRP messages
        auth_type PASS
        auth_pass 1111
    }
    track_script {                 #add the track_script block to the instance configuration
        chk_nginx                  #run the nginx monitoring script defined above
    }
    virtual_ipaddress {            #virtual node pool
        192.168.198.111            #virtual IP address
    }
}
##### backup #####
! Configuration File for keepalived

global_defs {
    router_id zk_alone
}

vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state BACKUP
    interface eno16777736
    virtual_router_id 51
    mcast_src_ip 192.168.198.6
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        192.168.198.111
    }
}
#Write Nginx status detection scripts
#ps -C nginx | wc -l shows how many nginx processes are currently running
#Logic: if nginx has stopped, try to restart it; if it cannot be started, kill the local keepalived process so the virtual IP fails over to the BACKUP machine
vi /etc/keepalived/nginx_check.sh

#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ]; then
    /usr/local/nginx/sbin/nginx
    sleep 2
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ]; then
        killall keepalived
    fi
fi
#Assign permissions
chmod +x /etc/keepalived/nginx_check.sh

#Start keepalived on both servers
service keepalived start

#Access the virtual IP: 192.168.198.111
#Welcome to nginx! 1
#########################  HA test  #########################
#Because the check script restarts nginx automatically, stop keepalived first, then nginx
#On 192.168.198.98:
service keepalived stop
nginx -s stop

#Visit 192.168.198.111 again to see the change: Welcome to nginx! 2

#On 192.168.198.98, start keepalived again
service keepalived start

#Visit 192.168.198.111 again to see the change: Welcome to nginx! 1

Nginx high concurrency processing principle

High concurrency is usually handled with multi-process, multi-thread, and asynchronous mechanisms, and nginx uses all three approaches to handle it effectively.

Nginx process model

The process model adopts Master/Worker mode. When nginx is started, a Master process is created, and the Master process forks worker child processes to handle requests based on the items in the nginx.conf configuration file.

Master process is responsible for managing the life cycle of Worker process, processing network events, receiving external signals, etc. Since the Master process can fork multiple Worker processes, Nginx is multi-process.
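The master/worker relationship is easy to see from the command line. A small sketch, assuming the /usr/local/nginx layout used later in this article: the master is controlled entirely through signals, and nginx -s is essentially a wrapper around kill.

# The master pid is written to logs/nginx.pid (the path depends on how nginx was built)
cat /usr/local/nginx/logs/nginx.pid

# Sending signals to the master drives the worker lifecycle:
kill -HUP  $(cat /usr/local/nginx/logs/nginx.pid)   # reload config; equivalent to nginx -s reload
kill -QUIT $(cat /usr/local/nginx/logs/nginx.pid)   # graceful shutdown; equivalent to nginx -s quit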

Nginx thread model

The threading model refers to the worker process used to receive and process client requests. Each worker process can handle multiple user requests simultaneously.

Nginx’s asynchronous processing model

Asynchronous processing means that nginx uses I/O multiplexing (the select/poll/epoll models) when handling requests, so one process can multiplex many I/O streams. When a worker process receives a client request, it calls the upstream server to process it; if the response is not immediately available, the worker does not block but handles other requests until the upstream finishes and returns the response.

By default the worker uses epoll multiplexing for upstream processing. When the upstream returns a response, epoll notifies the worker; the worker suspends the transaction it is currently handling, retrieves the response, returns it to the client, and then resumes the suspended transaction. There is no waiting anywhere in the process, so in theory one nginx process can handle a virtually unlimited number of connections, with no polling.

Note: the worker process does not use the epoll model to accept client connections; it uses a mutex mechanism (the accept mutex). Epoll is used only for upstream requests and responses.
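As an illustration of the epoll pattern described above (a minimal sketch, not nginx source code): one process registers many sockets with a single epoll instance and blocks only in epoll_wait, so idle connections cost nothing until data actually arrives.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int listen_fd = socket(AF_INET, SOCK_STREAM, 0);

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);
    bind(listen_fd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listen_fd, 128);

    int ep = epoll_create1(0);                       /* one epoll instance for all sockets */
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = listen_fd };
    epoll_ctl(ep, EPOLL_CTL_ADD, listen_fd, &ev);

    struct epoll_event ready[64];
    for (;;) {
        /* Block until at least one fd is readable; idle connections cost nothing here */
        int n = epoll_wait(ep, ready, 64, -1);
        for (int i = 0; i < n; i++) {
            if (ready[i].data.fd == listen_fd) {     /* new client connection */
                int conn = accept(listen_fd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = conn };
                epoll_ctl(ep, EPOLL_CTL_ADD, conn, &cev);
            } else {                                 /* data from an existing client */
                char buf[4096];
                ssize_t r = read(ready[i].data.fd, buf, sizeof(buf));
                if (r <= 0)
                    close(ready[i].data.fd);         /* peer closed or error */
                else
                    write(ready[i].data.fd, buf, r); /* echo the bytes back */
            }
        }
    }
}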

The characteristics of Nginx

High concurrency

By default nginx supports 1024 concurrent connections: there is one worker process by default, and each process allows 1024 connections, so 1 × 1024. Given adequate hardware, however, nginx can support 50,000-100,000 concurrent connections. The steps are as follows:

#####################  Operating system configuration: start  #####################
#Show all kernel limits for the current session; we only care about open files
ulimit -a

#View the maximum number of files a process can open on Linux; the default is 1024
ulimit -n

#Modify "Setting the maximum number of open files for a process"
vim /etc/security/limit.conf

#Add the following two lines
* soft nofile 65535   # soft (application-level) limit on the maximum number of open files
* hard nofile 65535   # hard (OS-level) limit on the maximum number of open files
#The file does not take effect for the current session after saving, so also change the session-level limit
ulimit -n 65535
#####################  Operating system configuration: end  #####################

#####################  Nginx configuration: start  #####################
#Modify the nginx configuration file (the marked lines)
vim /usr/local/nginx/conf/nginx.conf

user root root;
worker_processes 4;
worker_rlimit_nofile 65535;   # <-- this line

#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

events {
    use epoll;
    worker_connections 65535;   # <-- this line
}
#Hot deployment reloads the configuration file
nginx -s reload
#####################  Nginx configuration: end  #####################

#####################  Verify the configuration: start  #####################
#View information about the current Nginx process
ps -ef | grep nginx

root     103734      1  0 13:27 ?        00:00:00 nginx: master process nginx
nobody   103735 103734  0 13:27 ?        00:00:00 nginx: worker process
root     105129   3066  0 14:50 pts/0    00:00:00 grep --color=auto nginx

#Note: check the Max open files row in the output
cat /proc/103735/limits
#####################  Verify the configuration: end  #####################

#####################  Max clients calculation: start  #####################
#Max clients = worker_processes * worker_connections
#or
#Max clients = worker_processes * worker_connections / 4
#The first is easy to understand, number of processes * number of concurrent processes per process
#Why divide by 4? The nginx site explains:
#"Since a browser opens 2 connections by default to a server and nginx uses the fds (file descriptors) from the same pool to connect to the upstream backend."
#That is, the browser opens two connections to nginx, and nginx opens two connections to the upstream server, hence 4
#####################  Max clients calculation: end  #####################
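# Worked example, using the values configured above (worker_processes 4, worker_connections 65535):
#   serving static content:  4 * 65535     = 262140 theoretical max clients
#   acting as reverse proxy: 4 * 65535 / 4 =  65535 theoretical max clients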


Low consumption

Ten thousand inactive connections consume only about 2.5 MB of memory, so nginx is largely immune to ordinary DoS attacks; DDoS remains a problem.

Hot deployment

Versions and configuration can be upgraded smoothly, providing uninterrupted 24/7 service.

#The command
nginx -s reload

#####################  What nginx -s reload does: start  #####################
# 1. If the configuration file has changed, the master process forks new worker processes that use the new configuration
# 2. The old worker processes stop accepting new requests and exit once their in-flight requests finish
#####################  What nginx -s reload does: end  #####################
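Reloading covers configuration changes; upgrading the binary itself without dropping connections uses the master-process signals directly. A sketch, assuming the /usr/local/nginx layout used in the installation section:

# After replacing the nginx binary on disk:
kill -USR2  $(cat /usr/local/nginx/logs/nginx.pid)          # start a new master with the new binary
kill -WINCH $(cat /usr/local/nginx/logs/nginx.pid.oldbin)   # gracefully stop the old workers
kill -QUIT  $(cat /usr/local/nginx/logs/nginx.pid.oldbin)   # retire the old master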


High availability

High availability comes from the fact that workers run as independent processes: if one worker dies, the others are unaffected and the master forks a replacement, so service continues (see the demonstration below).
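This is easy to verify: kill a worker and the master immediately forks a replacement. A sketch; the pid shown is hypothetical, so substitute one from your own ps output.

ps -ef | grep 'nginx: worker'   # note a worker pid, e.g. 103735
kill -9 103735                  # simulate a worker crash
ps -ef | grep 'nginx: worker'   # a new worker appears with a different pid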

High extension

Because nginx is modular, any missing module can be compiled in as needed (modules are generally C modules or Lua script modules).

#Download the module
git clone https://github.com/agentzh/echo-nginx-module

#Put in the specified position
mv echo-nginx-module /usr/local/nginx/echo-nginx-module

#Use this command to generate a new makefile
./configure \
  --prefix=/usr/local/nginx \
  --with-http_stub_status_module \
  --with-http_ssl_module \
  --add-module=/usr/local/nginx/echo-nginx-module

#Compile (only make is required, do not make install, otherwise it will be overwritten)
make

#Backing up original files
cp /usr/local/nginx/sbin/nginx /usr/local/nginx/sbin/nginx.bak

#Replace nginx binaries
cp /usr/local/nginx/objs/nginx /usr/local/nginx/sbin/nginx

#Re-create the symlink, test the configuration file, and reload smoothly
ln -s /usr/local/nginx/sbin/nginx /usr/local/bin/nginx
nginx -t
nginx -s reload


Nginx configuration file details

The overall structure

Global block

Configures global directives that affect nginx as a whole. Includes:

  • The user and group that run the nginx server

  • Number of worker processes

  • Path where the nginx process PID is stored

  • Path for storing error logs

  • Importing configuration files

The events block

The configuration here affects the nginx server's network connections with users. Includes:

  • Serialization of accepting network connections (prevents the thundering herd)

  • Whether to accept multiple network connections at the same time

  • Choose an event-driven model

  • Set the maximum number of connections

HTTP block

Multiple server blocks can be nested here; this is where proxying, caching, and log definitions are configured, along with third-party modules. Includes:

  • Defining MIME types

  • Customize the service log format

  • Whether sendfile transfer is allowed

  • Connection timeout

  • Maximum number of single connection requests

The server block

Configures virtual host parameters. Includes:

  • Configuring Network Listening

  • Configure name-based virtual hosts

  • Configure ip-based virtual hosts

The location block

Configures the routing of requests and the handling of pages and other static resources. Includes:

  • Location matching configuration

  • Request root directory configuration

  • Changing a location's URL

  • Default homepage configuration

Annotated example configurations


Configuration File 1

########### Each command must end with a semicolon #################
#user administrator administrators;  Configure the user or group. Default is nobody nobody.
#worker_processes 2;  # Number of processes allowed to be generated. Default is 1
#pid /nginx/pid/nginx.pid;   # where the nginx pid file is stored
error_log log/error.log debug;   # log path and level; can be set at global, http, or server level
                                 # levels: debug|info|notice|warn|error|crit|alert|emerg
events {
    accept_mutex on;    # serialize accepting connections to prevent the thundering herd; default on
    multi_accept on;    # whether a process accepts multiple connections at once; default off
    # event-driven model: select|poll|kqueue|epoll|rtsig|/dev/poll|eventport
    worker_connections 1024;   # maximum number of connections
}
http {
    include mime.types;                      # file extension / type mapping table
    default_type application/octet-stream;   # default file type; the default is text/plain
    #access_log off;                         # disable the service log
    log_format myFormat '$remote_addr - $remote_user [$time_local] $request $status $body_bytes_sent $http_referer $http_user_agent $http_x_forwarded_for';   # custom format
    access_log log/access.log myFormat;      # combined is the default format
    sendfile on;             # allow sendfile transfer; default off; settable in http, server, location blocks
    sendfile_max_chunk 100k; # max bytes per sendfile call; default 0 means no upper limit
    keepalive_timeout 65;    # connection timeout, default 75s; settable in http, server, location blocks

    upstream mysvr {
        server 127.0.0.1:7878;
        server 192.168.10.121:3333 backup;   # hot standby
    }
    error_page 404 https://www.baidu.com;    # error page

    server {
        keepalive_requests 120;   # maximum number of requests per connection
        listen 4545;              # listening port
        server_name 127.0.0.1;    # listening address
        location ~*^.+$ {         # request URL filter, regex match; ~ case sensitive, ~* case insensitive
            #root path;           # root directory
            #index vv.txt;        # default page
            proxy_pass http://mysvr;   # requests go to the server list defined by mysvr
            deny 127.0.0.1;       # denied IP
            allow 172.18.5.54;    # allowed IP
        }
    }
}

Configuration File 2

#Run the user
user nobody;
#Start processes, usually set to equal the number of cpus
worker_processes  1;

#Global error log and PID file
#error_log logs/error.log;
#error_log logs/error.log notice;
#error_log logs/error.log info;

#pid logs/nginx.pid;

#Working mode and upper limit of connections
events {
    #epoll is a kind of multiplexed I/O (I/O Multiplexing); it is only available on
    #Linux 2.6+ kernels and can greatly improve nginx performance
    use epoll;

    #worker_connections: maximum number of concurrent connections for a single worker process
    #max_clients = worker_processes * worker_connections
    #When acting as a reverse proxy: max_clients = worker_processes * worker_connections / 4
    #max_clients must be smaller than the maximum number of files the system can open.
    #On a machine with 1GB of memory, roughly 100,000 files can be opened;
    #check it with: $ cat /proc/sys/fs/file-max
    #Set worker_connections according to worker_processes and the system's open-file limit,
    #so that total concurrency stays below the maximum number of files the OS can open.
    #Of course, the theoretical total differs from the actual one, since the host runs
    #other processes that also consume system resources.
    # ulimit -SHn 65535
    worker_connections 1024;
}

http {
    #Set the MIME types, which are defined in the mime.types file
    include mime.types;
    default_type application/octet-stream;

    #Set the log format
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log logs/access.log main;

    #sendfile specifies whether nginx calls sendfile (zero copy) to output files.
    #Set it to on for common applications; set it to off for heavy disk-I/O applications
    #such as downloads, to balance disk and network I/O and reduce system load.
    sendfile on;
    #tcp_nopush on;

    #Connection timeout
    #keepalive_timeout 0;
    keepalive_timeout 65;
    tcp_nodelay on;

    #Enable gzip compression
    gzip on;
    gzip_disable "MSIE [1-6].";

    client_header_buffer_size 128k;
    large_client_header_buffers 4 128k;

    server {
        #Listen on port 80
        listen 80;
        #Accessed as www.nginx.cn
        server_name www.nginx.cn;

        #Define the server's default site root directory
        root html;

        #Access log for this virtual host
        access_log /nginx.access.log main;

        #Default homepage
        index index.php index.html index.htm;

        #Redirect server error pages to the static page /50x.html
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }

        #Static files are served by nginx itself
        location ~ ^/(images|javascript|js|css|flash|media|static)/ {
            #Expire after 30 days; static files are rarely updated. Use a larger
            #value if updates are rare, a smaller one if they are frequent.
            expires 30d;
        }

        #All PHP script requests are forwarded to FastCGI for processing
        location ~ .php$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        #Prohibit access to .htxxx files
        location ~ /.ht {
            deny all;
        }
    }
}

Configuration File 3

worker_processes 8;   # number of worker processes, adjusted to the hardware

error_log /var/log/nginx/error.log info;   # global error log path and level: [debug|info|notice|warn|error|crit]

pid /var/run/nginx.pid;   # process file
#Maximum number of open file descriptors per nginx process. In theory this is the system's maximum open files (ulimit -n) divided by the number of nginx processes, but since nginx does not distribute requests evenly, it is best to keep it equal to the ulimit -n value.
worker_rlimit_nofile 65535;

#Working mode and upper limit of connection number
events{
#Event model: [kqueue | rtsig | epoll | /dev/poll | select | poll];
#epoll is a high-performance network I/O model for Linux 2.6+ kernels; on FreeBSD use kqueue.
use epoll;
#Maximum connections per process (total max connections = worker_connections * worker_processes)
worker_connections 65535;
}
#Set up the HTTP server
http {
    include mime.types;                      #file extension / type mapping table
    default_type application/octet-stream;   #default file type
    #charset utf-8;                          #default encoding

    server_names_hash_bucket_size 128;       #hash table size for server names
    client_header_buffer_size 32k;           #client request header buffer size
    large_client_header_buffers 4 64k;       #large request header buffers
    client_max_body_size 8m;                 #maximum upload file size

    #sendfile specifies whether nginx calls sendfile to output files. Set it to on for
    #common applications; set it to off for heavy disk-I/O applications such as downloads,
    #to balance disk and network I/O and reduce system load.
    #Note: set this to off if images do not display properly.
    sendfile on;

    autoindex on;           #enable directory listing; good for download servers; off by default
    tcp_nopush on;          #prevent network congestion
    tcp_nodelay on;         #prevent network congestion

    keepalive_timeout 120;  #keepalive timeout in seconds

    #FastCGI parameters to improve site performance: lower resource usage, faster access.
    #The parameter names below can be read literally.
    fastcgi_connect_timeout 300;
    fastcgi_send_timeout 300;
    fastcgi_read_timeout 300;
    fastcgi_buffer_size 64k;
    fastcgi_buffers 4 64k;
    fastcgi_busy_buffers_size 128k;
    fastcgi_temp_file_write_size 128k;

    #gzip module settings
    gzip on;                   #enable gzip compression
    gzip_min_length 1k;        #minimum compressed file size
    gzip_buffers 4 16k;        #compression buffers
    gzip_http_version 1.0;     #compression protocol version
    gzip_comp_level 2;         #compression level
    gzip_types text/plain application/x-javascript text/css application/xml;
    #the default compression types already include text/html, so it need not be listed;
    #listing it causes no problem but produces a warn.
    gzip_vary on;

    #limit_zone crawler $binary_remote_addr 10m;   #needed when limiting connections per IP

    #Log format
    log_format access '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" $http_x_forwarded_for';

    #Load-balancing upstream; weight defines the weighting, which can be set per machine
    #configuration. The higher the weight, the more requests a server is assigned.
    upstream blog.ha97.com {
        server 192.168.80.121:80 weight=3;
        server 192.168.80.122:80 weight=2;
        server 192.168.80.123:80 weight=3;
    }

    #Virtual host configuration
    server {
        listen 80;
        server_name aa.cn www.aa.cn;
        index index.html index.htm index.php;

        set $subdomain '';
        #Bind second-level domains to directories, e.g. bbb.aa.com -> the /bbb folder
        if ($host ~* "(?:(\w+\.){0,})(\b(?!www\b)\w+)\.\b(?!(com|org|gov|net|cn)\b)\w+\.[a-zA-Z]+") {
            set $subdomain "/$2";
        }
        root /home/wwwroot/aa.cn/web$subdomain;

        #rewrite: load other configuration files
        include /dedecms.conf;
        #rewrite end

        location ~ .*.(php|php5)?$ {
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            include fastcgi.conf;
        }

        #Image cache time
        location ~ .*.(gif|jpg|jpeg|png|bmp|swf)$ {
            expires 10d;
        }
        #JS and CSS cache time
        location ~ .*.(js|css)?$ {
            expires 1h;
        }

        #Access log for this virtual host
        access_log /var/log/nginx/ha97access.log access;

        #Enable reverse proxy for "/"
        location / {
            proxy_pass http://127.0.0.1:88;
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            #The backend web server obtains the user's real IP through X-Forwarded-For
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

            #Optional reverse-proxy settings below
            proxy_set_header Host $host;
            client_max_body_size 10m;        #max bytes per file the client may request
            client_body_buffer_size 128k;    #max bytes buffered for a client request body
            proxy_connect_timeout 90;        #nginx-to-backend connection timeout
            proxy_send_timeout 90;           #backend data send timeout
            proxy_read_timeout 90;           #backend response timeout after a successful connection
            proxy_buffer_size 4k;            #buffer size for the first part of the backend response
            proxy_buffers 4 32k;             #proxy_buffers; average page size below 32k
            proxy_busy_buffers_size 64k;     #buffer size under high load (proxy_buffers * 2)
            proxy_temp_file_write_size 64k;  #size of data written to proxy_temp_path at a time
        }

        #Address for viewing nginx status
        location /NginxStatus {
            stub_status on;
            access_log on;
            auth_basic "NginxStatus";
            auth_basic_user_file conf/htpasswd;
            #The htpasswd file contents can be generated with the htpasswd tool shipped with Apache.
        }

        #Dynamic/static separation reverse proxy: all JSP pages go to tomcat or resin
        location ~ .(jsp|jspx|do)?$ {
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://127.0.0.1:8080;
        }

        #All static files are read directly by nginx, bypassing tomcat or resin
        location ~ .*.(htm|html|gif|jpg|jpeg|png|bmp|swf|ioc|rar|zip|txt|flv|mid|doc|ppt|pdf|xls|mp3|wma)$ {
            expires 15d;
        }
        location ~ .*.(js|css)?$ {
            expires 1h;
        }
    }
}

Configuration File 4

######Nginx configuration file nginx.conf: #####
  
#Define the users and user groups that Nginx runs on
user www www;
  
#Number of Nginx processes. You are advised to set this parameter to the total number of CPU cores.
worker_processes 8;
  
#Global error log definition type, [the debug | info | notice | warn | error | crit]
error_log /usr/local/nginx/logs/error.log info;
  
#Process PID file
pid /usr/local/nginx/logs/nginx.pid;
  
#Specifies the maximum number of descriptors that a process can open: number
#Working mode and upper limit of connection number
#Maximum number of open file descriptors per nginx process; in theory this is the system's maximum open files (ulimit -n) divided by the number of nginx processes, but since nginx does not distribute requests evenly, it is best to keep it equal to the ulimit -n value.
#On Linux 2.6 the open-file limit is 65535, so worker_rlimit_nofile should also be set to 65535.
#This is because nginx does not distribute requests to processes evenly: if you set 10240 and total concurrency reaches 30,000-40,000, some processes may exceed 10240 and return a 502 error.
worker_rlimit_nofile 65535;
  
  
events{
 #Event model: use [kqueue | rtsig | epoll | /dev/poll | select | poll]
 #epoll is a high-performance network I/O model for Linux 2.6+ kernels; on Linux use epoll, on FreeBSD use kqueue.
 #Supplementary notes:
 #Like Apache, Nginx has different event models for different operating systems
 #A) Standard event model
 #Select and poll belong to the standard event model. If no more efficient method exists, nginx chooses Select or poll
 #B) Efficient event model
 #Kqueue: works with FreeBSD 4.1+, OpenBSD 2.9+, NetBSD 2.0 and MacOS X. Dual processor MacOS X systems using Kqueue can cause a kernel crash.
 #Epoll: Used for Linux kernel 2.6 and later systems.
 #/dev/poll: used on Solaris 7 11/99+, HP/UX 11.22+ (EventPort), IRIX 6.5.15+ and Tru64 UNIX 5.A +.
 #Eventport: Used on Solaris 10. To prevent kernel crashes, it is necessary to install security patches.
 use epoll;
  
 #Maximum number of connections for a single process (Maximum number of connections = Number of connections x Number of processes)
 #Adjust according to the hardware, and work with the previous process, as large as possible, but do not run the CPU to 100% on the line. The maximum number of connections allowed per process. Theoretically, the maximum number of connections per Nginx server is.
 worker_connections 65535;
  
 #Keepalive timeout duration.
 keepalive_timeout 60;
  
 #Buffer size for client request headers. This can be set depending on your system's paging size. The size of a request header should not exceed 1K, but since system paging is usually greater than 1K, it is set to the paging size.
 #The PAGESIZE can be obtained using the command getconf PAGESIZE.
 #[root@web001 ~]# getconf PAGESIZE
 #4096
 #There are cases where client_header_BUFFer_size exceeds 4K, but client_header_BUFFer_size must be set to an integer multiple of system paging size.
 client_header_buffer_size 4k;
  
 #max specifies the number of cached entries; it is recommended to match the maximum number of open files. inactive specifies how long a file can go unrequested before its cache entry is deleted.
 open_file_cache max=65535 inactive=60s;
  
 #This is how often the cache is checked for valid information.
 #Syntax: open_file_cache_valid time. Default: open_file_cache_valid 60. Context: http, server, location. Specifies when to check the validity of cached items in open_file_cache.
 open_file_cache_valid 80s;
  
 #The minimum number of times a file has been used in inactive time in the open_file_cache directive. If this number is exceeded, the file descriptor will remain open in the cache. As shown above, if a file has not been used once in inactive time, it will be removed.
 #Syntax: open_file_cache_min_uses number. Default: open_file_cache_min_uses 1. Context: http, server, location. Specifies the minimum number of uses within the inactive window of the open_file_cache directive; with larger values, file descriptors stay open in the cache.
 open_file_cache_min_uses 1;
  
 #Syntax: open_file_cache_errors on | off. Default: open_file_cache_errors off. Context: http, server, location. Specifies whether errors encountered when searching for a file are cached.
 open_file_cache_errors on;
}
  
  
  
#Set up the HTTP server to provide load balancing support with its reverse proxy capabilities
http{
 #Mapping table of file extensions and file types
 include mime.types;
  
 #Default file type
 default_type application/octet-stream;
  
 #The default encoding
 #charset utf-8;
  
 #Server-namedhashTable size
 #The hash tables holding server names are controlled by server_names_hash_max_size and server_names_hash_bucket_size. The hash bucket size is always equal to the hash table size and is a multiple of the processor cache line size, which makes faster key lookups possible by reducing memory accesses. If the hash bucket size equals the processor cache line size, the worst-case number of memory lookups for a key is 2: once to determine the storage unit address, once to find the key within it. So if nginx reports that the hash max size or hash bucket size must be increased, first increase the former.
 server_names_hash_bucket_size 128;
  
 #Buffer size for client request headers. This can be set depending on your system paging size. Generally, the header size of a request should not exceed 1K, but since system paging is usually greater than 1K, it is set to the paging size. The PAGESIZE can be obtained using the command getconf PAGESIZE.
 client_header_buffer_size 32k;
  
 #Client request header buffer size. By default, nginx uses client_header_buffer_size to read the header value, or large_client_header_buffers if the header is too large.
 large_client_header_buffers 4 64k;
  
 #Set the size of files to be uploaded via nginx
 client_max_body_size 8m;
  
 #Enable efficient file transfer mode. Sendfile specifies whether nginx calls sendFile to output files. Set it to ON for common applications, and set it to off for heavy load applications such as downloading disk I/O to balance disk and network I/O processing speed and reduce system load. Note: Change this to off if the image does not display properly.
 #The sendfile directive specifies whether nginx calls sendfile (zero copy) to output files. For common applications, it must be set to on. If it is used for heavy disk I/O load applications, such as downloads, you can set it to Off to balance the I/O processing speed between disks and networks and reduce the uptime of the system.
 sendfile on;
  
 #Open directory list access, suitable for downloading servers, closed by default.
 autoindex on;
  
 #This option allows or disallows the use of socke's TCP_CORK option, which is only used when sendfile is used
 tcp_nopush on;
   
 tcp_nodelay on;
  
 #Duration of long connection timeout, in seconds
 keepalive_timeout 120;
  
 #The FastCGI parameters are designed to improve the performance of your site: reduce resource footprint and increase access speed. The following parameters can be understood literally.
 fastcgi_connect_timeout 300;
 fastcgi_send_timeout 300;
 fastcgi_read_timeout 300;
 fastcgi_buffer_size 64k;
 fastcgi_buffers 4 64k;
 fastcgi_busy_buffers_size 128k;
 fastcgi_temp_file_write_size 128k;
  
 #Gzip module settings
 gzip on;                  # enable gzip compression
 gzip_min_length 1k;       # minimum compressed file size
 gzip_buffers 4 16k;       # compression buffers
 gzip_http_version 1.0;    # compression protocol version
 gzip_comp_level 2;        # compression level
 gzip_types text/plain application/x-javascript text/css application/xml;
 # the compression types already include text/html by default, so it need not be listed;
 # listing it causes no problem but produces a warn.
 gzip_vary on;

 #This parameter is required when limiting the number of IP connections is enabled
 #limit_zone crawler $binary_remote_addr 10m;
  
 #Load Balancing Configuration
 upstream piao.jd.com {
   
  #Upstream load balancing; weight can be defined per machine configuration.
  #The weight parameter sets the weighting: the higher it is, the more requests a server is assigned.
  server 192.168.80.121:80 weight=3;
  server 192.168.80.122:80 weight=2;
  server 192.168.80.123:80 weight=3;

  #Nginx's upstream currently supports the following allocation modes
  #1. Polling (default)
  #Each request is allocated to a different backend server one by one in chronological order. If the backend server goes down, the request is automatically removed.
  #2. weight
  #Specifies the polling probability, weight proportional to the access ratio, for cases where back-end server performance is uneven.
  #Such as:
  #upstream bakend {
  # server 192.168.0.14 weight=10;
  # server 192.168.0.15 weight=10;
  #}
  #3. ip_hash
  #Each request is allocated by the hash of the client IP, so each visitor always reaches the same backend server, solving the session problem.
  #Such as:
  #upstream bakend {
  # ip_hash;
  #Server 192.168.0.14:88;
  #Server 192.168.0.15:80;
  #}
  #4. fair (third party)
  #Requests are allocated based on the response time of the back-end server, with priority given to those with short response times.
  #upstream backend {
  # server server1;
  # server server2;
  # fair;
  #}
  #5. url_hash (third party)
  #Requests are allocated by the hash of the accessed URL, so each URL goes to the same backend server; this is more effective when the backend caches.
  #Example: add a hash statement in the upstream; the server statements cannot include weight or other parameters; hash_method selects the hash algorithm
  #upstream backend {
  # server squid1:3128;
  # server squid2:3128;
  # hash $request_uri;
  # hash_method crc32;
  #}
  
  #tips:
  #upstream bakend {   # defines the IPs and states of the load-balanced devices
  # ip_hash;
  # server 127.0.0.1:9090 down;
  # server 127.0.0.1:8080 weight=2;
  # server 127.0.0.1:6060;
  # server 127.0.0.1:7070 backup;
  #}
  #Add proxy_pass http://bakend/ to servers that require load balancing.
  
  #The state of each device is set to:
  #1. Down indicates that the server does not participate in the load temporarily
  #2. weight: the greater the value, the larger the share of the load.
  #3. max_fails: the number of allowed failed requests, 1 by default. When exceeded, the error defined by the proxy_next_upstream module is returned
  #4. fail_timeout: the pause period after max_fails failures.
  #5. Backup: Request the backup machine when all other non-backup machines are down or busy. So this machine will have the least pressure.
  
  #Nginx supports load balancing of multiple groups at the same time, which can be used by different servers.
  #Client_body_in_file_only if set to On, the data posted by the client can be recorded to a file for debugging
  #Client_body_temp_path Sets the directory for recording files up to 3 levels of directories can be set
  #Location Matches urls. Redirects or new proxy load balancing can be performed
 }
   
 #Virtual host configuration
 server {
  #Listen on port
  listen 80;
  
  #There can be multiple domain names separated by Spaces
  server_name www.jd.com jd.com;
  index index.html index.htm index.php;
  root /data/www/jd;
  
  #Requests matching ****** are load balanced
  location ~ .*.(php|php5)?$ {
   fastcgi_pass 127.0.0.1:9000;
   fastcgi_index index.php;
   include fastcgi.conf;
  }

  #Image cache time settings
  location ~ .*.(gif|jpg|jpeg|png|bmp|swf)$ {
   expires 10d;
  }
    
  #Set the cache time for JS and CSS
  location ~ .*.(js|css)?$ {
   expires 1h;
  }

  #Log format settings
  #$remote_addrwith$http_x_forwarded_forTo record the IP address of the client;
  #$remote_user: Used to record the client user name;
  #$time_local: Records the access time and time zone.
  #$request: Used to record the REQUEST URL and HTTP protocol;
  #$status: Records the request status. Success is 200,
  #$body_bytes_sent: Records the size of the file body sent to the client.
  #$http_referer: used to record visits from that page link;
  #$http_user_agent: Records information about the client's browser;
  #The web server usually sits behind a reverse proxy, so the client IP cannot be read from $remote_addr, which holds the reverse proxy's IP. The reverse proxy can add an x_forwarded_for entry to the HTTP header to record the original client IP and the server address the client requested.
  log_format access '$remote_addr - $remote_user [$time_local] "$request" '
  '$status $body_bytes_sent "$http_referer" '
  '"$http_user_agent" $http_x_forwarded_for';
    
  #Define the access log for this virtual host
  access_log /usr/local/nginx/logs/host.access.log main;
  access_log /usr/local/nginx/logs/host.access.404.log log404;
    
  #right"/"Enabling reverse ProxyLocation / {proxy_pass http://127.0.0.1:88; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr;   #The back-end Web server obtains the real IP address of the user through X-Forwarded-For
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
     
   #The following are some optional reverse proxy configurations.
   proxy_set_header Host $host;
  
   #Maximum number of bytes per file that a client is allowed to request
   client_max_body_size 10m;
  
   #The maximum number of bytes that the buffer agent can buffer a client request,
   #If set to a large value, such as 256k, submitting any image smaller than 256k works fine in both Firefox and Internet Explorer. If this directive is commented out and the default client_body_buffer_size is used (twice the operating system page size, 8k or 16k), problems arise.
   #Submitting an image larger than that returns 500 Internal Server Error, whether with Firefox 4.0 or Internet Explorer 8.0
   client_body_buffer_size 128k;
  
   #Makes nginx intercept replies with an HTTP status code of 400 or higher.
   proxy_intercept_errors on;
  
   #Timeout for back-end server connection _ Timeout for initiating handshake waiting response
   #Nginx connection timeout with backend server (proxy connection timeout)
   proxy_connect_timeout 90;
  
   #Back-end server data return time (proxy send timeout)
   #Back - end server data transfer time _ Indicates that the back - end server must transfer all data within the specified time
   proxy_send_timeout 90;
  
   #Back-end server response time after successful connection (proxy receive timeout)
   #The time it takes to wait for the back-end server to respond to a successful connection.
   proxy_read_timeout 90;
  
   #Set the buffer size for the proxy server (Nginx) to hold user headers
   #Sets the buffer size for the first part of the reply read from the proxy server. Normally this part of the reply contains a small reply header. By default this value is the size of a buffer specified in the proxy_buffers directive, although it can be set to smaller
   proxy_buffer_size 4k;
  
   #Proxy_buffers buffer, page average set below 32K
   #Set the number and size of buffers used to read replies (from the proxyed server). The default is also the paging size, which can be 4K or 8K depending on the operating system
   proxy_buffers 4 32k;
  
   #Buffer size under high load (proxy_buffers*2)
   proxy_busy_buffers_size 64k;
  
   #Set the size of data when writing proxy_temp_path to prevent a worker process from blocking too long while passing files
   #Set the cache folder size to upstream if it is larger than this value
   proxy_temp_file_write_size 64k;
  }
    
    
  #Set the address to view Nginx status
  location /NginxStatus {
   stub_status on;
   access_log on;
   auth_basic "NginxStatus";
   auth_basic_user_file confpasswd;
   #The contents of the htpasswd file can be generated using the htpasswd tool provided by Apache.
  }
    
  #Configure a local reverse proxy for dynamic/static separation
  #All JSP pages are processed by tomcat or resin
  location ~ .(jsp|jspx|do)?$ {
   proxy_set_header Host $host;
   proxy_set_header X-Real-IP $remote_addr;
   proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
   proxy_pass http://127.0.0.1:8080;
  }

  #All static files are read directly by nginx, bypassing tomcat or resin
  location ~ .*.(htm|html|gif|jpg|jpeg|png|bmp|swf|ioc|rar|zip|txt|flv|mid|doc|ppt|pdf|xls|mp3|wma)$ {
   expires 15d;
  }
  location ~ .*.(js|css)?$ {
   expires 1h;
  }
 }
}
######Nginx configuration file nginx.conf end######

Configuration File 5

user nobody nobody;            ## user and group running the worker processes
worker_processes 4;            ## number of worker processes; can be set to auto
worker_rlimit_nofile 51200;    ## max open file descriptors per worker
error_log logs/error.log notice;
pid /var/run/nginx.pid;

events {
    use epoll;                 ## use the epoll event-driven model
    worker_connections 51200;  ## maximum concurrency one worker can handle
}

http {
    server_tokens off;         ## hide the nginx version
    include mime.types;
    proxy_redirect off;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    client_max_body_size 20m;
    client_body_buffer_size 256k;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffer_size 128k;    ## buffer for the first part of the upstream response
    proxy_buffers 4 64k;       ## number and size of buffers for upstream responses
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
    default_type application/octet-stream;
    charset utf-8;
    client_body_temp_path /var/tmp/client_body_temp 1 2;
    proxy_temp_path /var/tmp/proxy_temp 1 2;
    fastcgi_temp_path /var/tmp/fastcgi_temp 1 2;
    uwsgi_temp_path /var/tmp/uwsgi_temp 1 2;   ## temp dir for the uwsgi server
    scgi_temp_path /var/tmp/scgi_temp 1 2;     ## temp dir for the scgi server
    ignore_invalid_headers on;                 ## ignore header fields with invalid names
    server_names_hash_max_size 256;            ## max size of the server-name hash tables
    server_names_hash_bucket_size 64;          ## bucket size of the server-name hash tables
    client_header_buffer_size 8k;              ## buffer for reading client request headers
    large_client_header_buffers 4 32k;         ## number and size of large header buffers
    connection_pool_size 256;                  ## fine-tunes per-connection memory allocation
    request_pool_size 64k;                     ## fine-tunes per-request memory allocation
    ## output_buffers sets the number and size of buffers used to read a response from disk
    postpone_output 1460;
    client_header_timeout 1m;
    client_body_timeout 3m;
    send_timeout 3m;
    log_format main '$server_addr $remote_addr [$time_local] $msec+$connection '
                    '"$request" $status $connection $request_time $body_bytes_sent '
                    '"$http_referer" "$http_user_agent" "$http_x_forwarded_for"';
    open_log_file_cache max=1000 inactive=20s min_uses=1 valid=1m;
    access_log logs/access.log main;
    log_not_found on;
    sendfile on;
    tcp_nodelay on;
    tcp_nopush off;                ## disable the TCP_CORK socket option
    reset_timedout_connection on;
    keepalive_timeout 10 5;        ## keep-alive: timeout=5 sent to the client
    keepalive_requests 100;        ## max requests served over one keepalive connection
    gzip on;                       ## enable compression
    gzip_http_version 1.1;         ## minimum protocol version for compression
    gzip_vary on;
    gzip_proxied any;              ## compress responses for all proxied requests
    gzip_min_length 1024;          ## minimum response length to gzip
    gzip_comp_level 6;             ## compression level
    ## gzip_buffers sets the number and size of compression buffers
    #gzip_proxied expired no-cache no-store private auth no_last_modified no_etag;  ## alternative to "any" above
    gzip_types text/plain application/x-javascript text/css application/xml application/json;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";

    upstream tomcat8080 {
        ip_hash;
        server 172.16.100.103:8080 weight=1 max_fails=2;
        server 172.16.100.104:8080 weight=1 max_fails=2;
        server 172.16.100.105:8080 weight=1 max_fails=2;
    }

    server {
        listen 80;
        server_name www.chan.com;
        # config_apps_begin
        root /data/webapps/htdocs;
        access_log /var/logs/webapp.access.log main;
        error_log /var/logs/webapp.error.log notice;

        location / {
            location ~* ^.*/favicon.ico$ {
                root /data/webapps;
                expires 180d;    ## cache for 180 days
                break;
            }
            if (!-f $request_filename) {
                proxy_pass http://tomcat8080;
                break;
            }
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }

    server {
        listen 8088;
        server_name nginx_status;
        location / {
            access_log off;
            deny all;
            return 503;
        }
        location /status {
            stub_status on;
            access_log off;
            allow 127.0.0.1;
            allow 172.16.100.71;
            deny all;
        }
    }
}

Performance tuning

Tuning under global modules

#Setting the number of processes
worker_processes 2;
#Set the number of kernels and how the process uses them
worker_cpu_affinity 01 10;
#Maximum number of files that can be opened
worker_rlimit_nofile 65535;
  1. worker_processes

    The number of worker processes. This is usually set to the number of CPU cores or an integer multiple of it. For example, with two 4-core CPUs, worker_processes can be set to 8 or 16, or even 4; it depends not only on CPU cores but also on disk count and the load-balancing mode. If unsure, set it to auto.

  2. worker_cpu_affinity

    Binds worker processes to CPU cores. Each mask is a multi-digit binary number with one bit per core: 0 means the core is not used, 1 means it is.

    Cores   worker_processes   worker_cpu_affinity     Explanation
    2       2                  01 10                   Each process uses one core
    2       4                  01 10 01 10             Four processes alternate between two cores
    4       2                  0101 1010               Each process uses two cores
    4       4                  0001 0010 0100 1000     Each process uses one core
  3. worker_rlimit_nofile

    Set the maximum number of files a worker process can open. The default value is the same as the maximum open file descriptor set on the current Linux system. See the section on High Concurrency above.

Tuning under the Events module

events {
    # Connection processing method between worker processes and clients (the event-driven model)
    use epoll;
    # Maximum number of client connections per worker process
    worker_connections 1024;
    # Serialize accepting of network connections (prevents the thundering herd); default on
    accept_mutex on;
    accept_mutex_delay 500ms;
    # Whether a process may accept multiple network connections at once; default off
    multi_accept on;
}
  1. use

    Sets how worker processes handle connections with clients. Nginx supports multiple connection processing methods and by default picks the most efficient one for the current system. It can also be set explicitly: select | poll | epoll | kqueue, etc. This involves I/O multiplexing, which I cover at the end.

  2. worker_connections

    Set the maximum number of concurrent requests that each worker process can handle. This value cannot exceed the value of worker_rlimit_nofile

  3. accept_mutex

    Sets the workers' accept mutex. All idle worker processes wait in a blocking queue for new connections. If enabled: the system creates a mutex, and only the worker at the head of the queue holds the lock and accepts new connections (preventing the thundering herd). If disabled: all blocked workers race for each new connection; this is the thundering herd.

  4. multi_accept

    Sets whether a worker process can accept multiple connections at once. If enabled: when several new connections arrive, the server counts the connections each worker is currently handling, picks the least loaded one, and gives it all the new connections. If disabled: when several new connections arrive, the server assigns them one by one to the currently least-loaded worker according to the load-balancing algorithm.

Tuning under THE HTTP module

http {
    # Include the mime.types file from the current (conf) directory
    include mime.types;
    # Response type used when a request's file type matches nothing in mime.types
    default_type application/octet-stream;
    # Request and response character encoding
    charset utf-8;
    # Define the access log format and name it main
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    # Enable the access log, stored at logs/access.log in the main format
    access_log logs/access.log main;
    log_format postdata '$remote_addr - $remote_user [$time_local] "$request" '
                        '$status $body_bytes_sent "$http_referer" '
                        '$request_body '
                        '"$http_user_agent" "$http_x_forwarded_for"';
    # on enables Linux zero copy; off disables it. Whether it works depends on the
    # system version; CentOS 6 and later support sendfile zero copy.
    # Note: set this to off if images do not display properly.
    sendfile on;
    # Send response headers separately from the body
    tcp_nopush on;
    # Disable the data-sending cache; suits frequent small payloads
    tcp_nodelay on;
    # Lifetime of a keepalive connection between client and nginx; closed when it expires
    keepalive_timeout 65;
    # Maximum number of requests over one keepalive connection
    keepalive_requests 2000;
    # Enable gzip compression
    gzip on;
    client_header_buffer_size 128k;
    large_client_header_buffers 4 128k;

    upstream api {
        server 127.0.0.1:19090 weight=1;
    }

    # Virtual host
    server {
        listen 445 ssl;
        server_name localhost;
        ssl_certificate /usr/local/nginx/conf/server.crt;
        ssl_certificate_key /usr/local/nginx/conf/server.key;

        location ^~ /api/ {
            # Proxy settings
            proxy_set_header Host $host;
            #proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_pass http://api/;
        }
        location /upfile/ {
            root /home/audit_files;
            add_header Access-Control-Allow-Origin *;
        }
        error_page 404 /404.html;
        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}
  1. sendfile

    If set to on, Linux's zero-copy mechanism is enabled; otherwise zero copy is disabled. Whether it takes effect also depends on the system version: CentOS 6 and later support sendfile zero copy. Note: change this to off if images do not display properly.

  2. tcp_nopush

    Sets whether nginx response headers are sent separately. If enabled: the response headers are sent in their own packet and the body follows in full packets. If disabled: the headers are sent together with the body data in each response.

  3. tcp_nodelay

    Sets the data-sending cache. If enabled: the sending cache is not used, which suits transmitting small pieces of data. If disabled: the cache is used, which is recommended when transferring large files such as images.

  4. keepalive_timeout

    Sets how long a connection stays alive; when it times out, the connection is closed. A value of 0 disables keepalive connections. If data volumes and system computation are small, set a smaller value; otherwise, a larger one.

  5. keepalive_requests

    Sets the maximum number of requests sent over one long connection. Under high concurrency, if this value is too small the connection may exhaust keepalive_requests before keepalive_timeout is reached. Set it according to the system's actual concurrency and connection lifetimes.

  6. client_body_timeout

    Sets the timeout for receiving the client request body; if it times out, the request is considered failed. Run a simple pressure test against the interface and pick a reasonable value to keep responses optimal (see the sketch below).
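For the simple pressure test mentioned above, ApacheBench is enough. A sketch, assuming ab is installed (e.g. via yum -y install httpd-tools) and /api/ping stands in for a real endpoint:

# 1000 requests, 100 concurrent clients, against the hypothetical /api/ping endpoint
ab -n 1000 -c 100 http://192.168.198.98/api/ping
# Watch "Time per request" and "Failed requests" while adjusting client_body_timeout and the keepalive settings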

I/O multiplexing

Explain in a separate article

Zero copy

Explain in a separate article