1. The number of worker processes Nginx runs

The number of Nginx worker processes is generally set to the number of CPU cores, or twice that number. If you are not sure how many cores the CPU has, run top and press 1, or count the processor entries in /proc/cpuinfo.
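For example, either of the following commands reports the number of logical CPUs (nproc is part of coreutils and available on most Linux distributions):

# Count logical CPUs from /proc/cpuinfo
grep ^processor /proc/cpuinfo | wc -l
# Or equivalently
nproc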

[root@lx ~]# vi /usr/local/nginx1.10/conf/nginx.conf
worker_processes 4;
[root@lx ~]# /usr/local/nginx1.10/sbin/nginx -s reload
[root@lx ~]# ps aux | grep nginx | grep -v grep
root   9834  0.0  0.0  47556  1948  ?  Ss  22:36  0:00 nginx: master process nginx
www   10135  0.0  0.0  50088  2004  ?  S   22:58  0:00 nginx: worker process
www   10136  0.0  0.0  50088  2004  ?  S   22:58  0:00 nginx: worker process
www   10137  0.0  0.0  50088  2004  ?  S   22:58  0:00 nginx: worker process
www   10138  0.0  0.0  50088  2004  ?  S   22:58  0:00 nginx: worker process

2. Nginx CPU affinity

For example, a 4-core configuration:

worker_processes 4;
worker_cpu_affinity 0001 0010 0100 1000;

For example, an 8-core configuration:

worker_processes 8;
worker_cpu_affinity 00000001 00000010 00000100 00001000 00010000 00100000 01000000 10000000;

Enable at most 8 worker processes: beyond 8 the performance gain levels off and stability drops noticeably, so 8 processes is enough.

3. Maximum number of open files in Nginx

worker_rlimit_nofile 65535; This directive sets the maximum number of file descriptors an Nginx worker process may open. The theoretical value is the system's maximum number of open files (ulimit -n) divided by the number of Nginx processes, but because Nginx does not distribute requests evenly, it is best to keep it consistent with the value of ulimit -n. Note: system resource limits can be set in /etc/security/limits.conf, either for an individual user such as root or for all users with *.

* soft nofile 65535
* hard nofile 65535

The new limits take effect at the next user login (verify with ulimit -n).

4. Nginx event processing model

events {
    use epoll;
    worker_connections 65535;
    multi_accept on;
}

Nginx uses the epoll event model for efficient event processing.

worker_connections is the maximum number of client connections allowed per worker process. The value is generally chosen based on server performance and memory, and the theoretical maximum number of connections is worker_processes multiplied by worker_connections. The 65535 set here is more than enough; a site that actually reaches that level of concurrency already counts as a very large site.

multi_accept tells Nginx to accept as many pending connections as possible each time it is notified of a new connection; its default is off, meaning a worker accepts only one new connection per notification. Whether only one worker is woken for a new connection while the others stay asleep, or all workers are woken and those that get no connection go back to sleep, is governed by the related accept_mutex directive: serializing accepts reduces load somewhat when the server has few connections, but on a high-throughput server it can be turned off for efficiency.

5. Enable efficient transmission mode

http {
    include mime.types;
    default_type application/octet-stream;
    ...
    sendfile on;
    tcp_nopush on;
    ...
}

include mime.types: media (MIME) types. include simply embeds the contents of another file in the current configuration file.

default_type application/octet-stream: the default media type; the default is sufficient.

sendfile on: enables efficient file transfer mode. The sendfile directive controls whether Nginx uses the sendfile() system call to output files. Set it to on for typical applications; for disk-I/O-heavy workloads such as download services, set it to off to balance disk and network I/O and reduce system load. Note: if images do not display properly, change this to off.

tcp_nopush on: only takes effect when sendfile is enabled. It helps prevent network congestion by reducing the number of packets sent (the response header is sent together with the beginning of the body instead of one after the other).

6. Connection timeouts

The main purpose of connection timeouts is to protect server resources (CPU and memory) and to control the number of connections, because establishing a connection also consumes resources.

keepalive_timeout 60;
tcp_nodelay on;
client_header_buffer_size 4k;
open_file_cache max=102400 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 1;
client_header_timeout 15;
client_body_timeout 15;
reset_timedout_connection on;
send_timeout 15;
server_tokens off;
client_max_body_size 10m;

keepalive_timeout: the timeout for keeping an idle client connection alive; once it is exceeded, the server closes the connection.

tcp_nodelay: also helps avoid network congestion, but it only takes effect on keep-alive connections.

client_header_buffer_size 4k: the buffer size for client request headers. This can be set according to the system page size. A request header normally does not exceed 1k, but since the system page size is usually larger than 1k, it is set to the page size. The page size can be obtained with the command getconf PAGESIZE.
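For reference, the page size can be checked like this; 4096 bytes is typical on x86_64 Linux, but the output depends on the platform:

# getconf PAGESIZE
4096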

open_file_cache max=102400 inactive=20s: enables caching of open file descriptors; this is not enabled by default. max specifies the maximum number of entries in the cache.

open_file_cache_valid 30s: how often to check whether the cached information is still valid.

open_file_cache_min_uses 1: the minimum number of times a file must be used within the inactive period of the open_file_cache directive for its descriptor to stay open in the cache. As configured above, a file that is not used even once within the inactive period is removed from the cache.

client_header_timeout: the timeout for reading the request header. This can be set fairly low; if the client sends no data within this time, Nginx returns a "Request time out" (408) error.

client_body_timeout: the timeout for reading the request body. This can also be set fairly low; if no data is sent within this time, the same error as above is returned.

reset_timedout_connection: tells Nginx to close connections from unresponsive clients, freeing the memory those clients occupy.

send_timeout: the timeout for responding to the client. It is limited to the interval between two write operations; if the client performs no activity within this time, Nginx closes the connection.

server_tokens: does not make Nginx run faster, but it hides the Nginx version number on error pages (and in the Server response header), which is good for security.

client_max_body_size: the maximum allowed size of an uploaded file (client request body).

7. Fastcgi tuning

fastcgi_connect_timeout 600;
fastcgi_send_timeout 600;
fastcgi_read_timeout 600;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
fastcgi_temp_path /usr/local/nginx1.10/nginx_tmp;
fastcgi_intercept_errors on;
fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g;

fastcgi_connect_timeout 600: the timeout for connecting to the backend FastCGI server.

fastcgi_send_timeout 600: the timeout for sending a request to the FastCGI server.

fastcgi_read_timeout 600: the timeout for receiving a response from the FastCGI server.

fastcgi_buffer_size 64k: the buffer size used to read the first part of the FastCGI response. By default it equals the block size set in the fastcgi_buffers directive; it can be set smaller.

fastcgi_buffers 4 64k: how many buffers, and of what size, to use locally to buffer FastCGI responses. If a PHP script produces a page of 256KB, four 64KB buffers are allocated; anything larger than 256KB is written to the path specified by fastcgi_temp_path, which is not ideal because data is processed faster in memory than on disk. In general, set this value around the typical page size produced by the PHP scripts on your site; if most pages are about 256KB, you can use 8 32k, 4 64k, and so on.

fastcgi_busy_buffers_size 128k: recommended to be set to twice the value of fastcgi_buffers.

fastcgi_temp_file_write_size 128k: the size of the data blocks used when writing to fastcgi_temp_path; the default is twice fastcgi_buffers. If this value is too small, 502 Bad Gateway errors may appear when load rises.

fastcgi_temp_path: the directory for temporary cache files.

fastcgi_intercept_errors on: specifies whether 4xx and 5xx error responses are passed to the client as-is or whether Nginx is allowed to handle them with error_page. Note: a missing static file returns a 404 page, but a missing PHP page returns a blank page!

fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g: fastcgi_cache_path sets where the cache is stored; levels sets the cache directory hierarchy (here 1:2); keys_zone=cache_fastcgi:128m names the cache zone and sets how much shared memory it uses; inactive is the default expiry time, after which cached data that has not been accessed is deleted; max_size is the maximum disk space the cache may use.

fastcgi_cache cache_fastcgi: enables the FastCGI cache and specifies the cache zone to use. Enabling caching is very useful for reducing CPU load and preventing 502 errors; cache_fastcgi is the zone name created by the fastcgi_cache_path directive.

fastcgi_cache_valid 200 302 1h: sets the cache time per response code; in this example, 200 and 302 responses are cached for one hour. Used together with fastcgi_cache.

fastcgi_cache_valid 301 1d: cache 301 responses for one day.

fastcgi_cache_valid any 1m: cache all other responses for one minute.

fastcgi_cache_min_uses 1: sets how many times the same URL must be requested before it is cached.

fastcgi_cache_key http://$host$request_uri: sets the key for the web cache; Nginx stores cache entries under an MD5 hash of this key. The key is generally built by combining variables such as $host and $request_uri (the requested URI).

fastcgi_pass: specifies the address and port on which the FastCGI server listens; it can be local or remote. A combined example of these cache directives is sketched below.
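The following sketch only illustrates how the directives above fit together. The 127.0.0.1:9000 FastCGI address, the error_page target, and the fastcgi_index/include lines are assumptions added to make the snippet self-contained, not tuning recommendations:

http {
    # Cache zone as defined earlier in this section
    fastcgi_cache_path /usr/local/nginx1.10/fastcgi_cache levels=1:2 keys_zone=cache_fastcgi:128m inactive=1d max_size=10g;

    server {
        listen 80;
        error_page 404 /404.html;                        # illustrative; handled by nginx when intercept_errors is on

        location ~ \.php$ {
            fastcgi_pass 127.0.0.1:9000;                 # assumed local php-fpm address
            fastcgi_index index.php;
            include fastcgi.conf;                        # ships with nginx; sets SCRIPT_FILENAME etc.
            fastcgi_intercept_errors on;

            fastcgi_cache cache_fastcgi;                 # use the zone defined above
            fastcgi_cache_key http://$host$request_uri;  # key hashed with MD5 for storage
            fastcgi_cache_valid 200 302 1h;
            fastcgi_cache_valid 301 1d;
            fastcgi_cache_valid any 1m;
            fastcgi_cache_min_uses 1;
        }
    }
}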

Conclusion: Nginx provides two caching mechanisms, proxy_cache and fastcgi_cache.

proxy_cache caches content from backend servers, whatever it may be, static or dynamic. fastcgi_cache caches content generated by FastCGI, which in many cases means dynamic content generated by PHP. proxy_cache reduces the number of times Nginx communicates with the backend, saving transfer time and backend bandwidth; fastcgi_cache reduces the number of times Nginx communicates with PHP, reducing the pressure on PHP and MySQL.
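For comparison, a minimal proxy_cache sketch; the upstream address, cache path, and zone name cache_proxy are illustrative assumptions rather than recommended values:

http {
    proxy_cache_path /usr/local/nginx1.10/proxy_cache levels=1:2 keys_zone=cache_proxy:128m inactive=1d max_size=10g;

    upstream backend {
        server 192.168.1.10:8080;              # example backend server
    }

    server {
        listen 80;
        location / {
            proxy_pass http://backend;
            proxy_cache cache_proxy;           # use the zone defined above
            proxy_cache_valid 200 302 1h;      # cache 200/302 responses for 1 hour
            proxy_cache_valid any 1m;          # cache everything else for 1 minute
        }
    }
}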

8. Gzip tuning

Using gzip compression saves bandwidth, speeds up transfers, gives users a better experience, and saves cost, so it is a key point. Gzip compression in Nginx requires the ngx_http_gzip_module; Apache uses mod_deflate. The content we generally need to compress is text, JS, HTML and CSS; do not compress images, video, Flash and the like. Also note that gzip compression consumes CPU.

gzip on;               # enable compression
gzip_min_length 2k;
gzip_buffers 4 32k;
gzip_http_version 1.1;
gzip_comp_level 6;
gzip_types text/plain text/css text/javascript application/json application/javascript application/x-javascript application/xml;
gzip_vary on;
gzip_proxied any;

gzip_min_length 2k: the minimum response size, taken from the Content-Length response header, that is eligible for compression. The default is 20 bytes.

gzip_buffers 4 32k: the compression buffers; here four 32KB buffers are allocated to hold the gzip compression result.

gzip_http_version 1.1: the compression version, used to identify the HTTP protocol version. The default is 1.1, and most browsers today support gzip decompression.

gzip_comp_level 6: the gzip compression level. Level 1 gives the lowest compression ratio and the fastest processing; level 9 gives the highest ratio and the smallest transfer, but it is the slowest and consumes the most CPU.

gzip_types text/css text/xml application/javascript: specifies the MIME types to compress. The text/html type is always compressed whether it is listed or not; the default is gzip_types text/html (so JS and CSS files are not compressed by default). The wildcard text/* cannot be used. For the types available for compression, refer to conf/mime.types.

gzip_vary on: enables the Vary: Accept-Encoding response header, which allows front-end cache servers to cache gzip-compressed pages, for example Squid caching data compressed by Nginx.

9. Expires cache tuning

Caching is mainly useful for elements that rarely change, such as images, CSS and JS, and especially for images, which consume the most bandwidth. We can have the browser cache images locally for 365 days and cache CSS, JS and HTML for around 10 days; the first visit loads slowly, but later visits are very fast! To enable caching, list the file extensions to be cached; the expires cache is configured inside the server block.

location ~* \.(ico|jpe?g|gif|png|bmp|swf|flv)$ {
    expires 30d;
    #log_not_found off;
    access_log off;
}
location ~* \.(js|css)$ {
    expires 7d;
    log_not_found off;
    access_log off;
}

Note: log_not_found off controls whether "file not found" errors are recorded in error_log; the default is on.

Conclusion:

Advantages of expires: it reduces the bandwidth a site has to purchase and saves cost, while improving the user experience; it also reduces the load on the service and saves server cost. It is a very important feature of a web service.

Disadvantages of expires: when a cached page or file is updated, users may still see the old content, which hurts the experience. Workarounds: first, shorten the cache time, e.g. to 1 day; this does not solve the problem completely unless the update interval is longer than 1 day. Second, rename the cached object. Content a site should not cache: web traffic statistics tools and frequently updated files (such as the Google logo).

10. Hotlink protection

Hotlink protection prevents other sites from directly linking to your images and other files, which consumes your resources and traffic. There are several solutions: add a watermark for brand promotion, if your bandwidth and servers are sufficient; use a firewall to control access directly, if you know the source IPs; or use the method below, which simply returns a 404 error.

location ~* ^.+\.(jpg|gif|png|swf|flv|wma|wmv|asf|mp3|mmf|zip|rar)$ {
    valid_referers none blocked www.benet.com benet.com;
    if ($invalid_referer) {
        #return 302 http://www.benet.com/img/nolink.jpg;
        return 404;
        break;
    }
    access_log off;
}

The valid_referers arguments can take the following forms: none means the Referer header is absent (i.e. direct access, such as opening an image URL directly in the browser); blocked means the Referer header is present but has been stripped or disguised by a firewall or proxy, for example "Referer: XXXXXXX"; server_names is a list of one or more server names, and since version 0.5.33 the * wildcard can be used in the names.
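One quick way to test the rule from the command line is with curl; the image path /img/test.jpg and the referring site below are made-up values for illustration:

# Request with a Referer from another site: should get 404
curl -I -e "http://other-site.example.com/page.html" http://www.benet.com/img/test.jpg
# Request with no Referer at all (matches "none"): should be allowed
curl -I http://www.benet.com/img/test.jpg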

11. Kernel parameter optimization

fs.file-max = 999999: the maximum number of file handles the system allows to be open at the same time; this limits the maximum number of concurrent connections.

net.ipv4.tcp_max_tw_buckets = 6000: the maximum number of TIME_WAIT sockets the operating system allows; beyond this limit, TIME_WAIT sockets are cleared immediately and a warning is printed. The default is 180000; too many TIME_WAIT sockets slow down a web server. Note: it is the side that actively closes a connection that ends up with connections in the TIME_WAIT state.
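To see roughly how many TIME_WAIT sockets the server currently holds, either of the following works on most Linux systems (the ss output includes one header line):

# Count sockets in the TIME_WAIT state
ss -tan state time-wait | wc -l
# or, with netstat:
netstat -an | grep -c TIME_WAIT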

net.ipv4.ip_local_port_range = 1024 65000: the range of local ports the system may use.

net.ipv4.tcp_tw_recycle = 1: enables fast recycling of TIME_WAIT sockets.

net.ipv4.tcp_tw_reuse = 1: enables reuse, allowing sockets in the TIME_WAIT state to be reused for new TCP connections. This makes sense for a server, which always has a large number of TIME_WAIT connections.

net.ipv4.tcp_keepalive_time = 30: how frequently TCP sends keepalive probes when keepalive is enabled. The default is 2 hours; setting it lower cleans up dead connections more quickly.

net.ipv4.tcp_syncookies = 1: enables SYN cookies; when the SYN backlog queue overflows, cookies are used to handle the overflow.

net.core.somaxconn = 40960: the listen backlog of a web application is capped by this kernel parameter, whose default is quite small.

The default net.core.somaxconn is 128, while Nginx's built-in NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be adjusted. Note: for a TCP connection, the server and client complete a three-way handshake to establish the connection; once the handshake succeeds, the port state changes from LISTEN to ESTABLISHED and data can be transmitted on the link. Each port in the LISTEN state has its own listen queue, whose length depends on somaxconn and on the backlog argument the application passes to listen(). somaxconn defines the maximum listen queue length for every port in the system. It is a global parameter with a default of 128, which is too small for a high-load web environment that frequently handles new connections; in most environments it should be raised to 1024 or more. A large listen queue also helps defend against denial-of-service (DoS) attacks.
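Raising somaxconn alone is not enough if the application requests a smaller backlog; in Nginx the backlog can also be raised per listen socket. A sketch, where the port and the 4096 figure are illustrative and the effective queue length is still capped by net.core.somaxconn:

server {
    listen 80 backlog=4096;   # request a larger accept queue; capped by net.core.somaxconn
    ...
}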

net.core.netdev_max_backlog = 262144: the maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

net.ipv4.tcp_max_syn_backlog = 262144: the maximum length of the queue of connections that have received a SYN but are still waiting for the three-way handshake to complete. The default is 1024; setting it larger keeps Linux from dropping client connections when Nginx is too busy to accept new ones.

net.ipv4.tcp_rmem = 10240 87380 12582912: the minimum, default, and maximum size of the TCP receive buffer (used for the TCP receive sliding window).

net.ipv4.tcp_wmem = 10240 87380 12582912: the minimum, default, and maximum size of the TCP send buffer (used for the TCP send sliding window).

net.core.rmem_default = 6291456: the default size of the kernel socket receive buffer.

net.core.wmem_default = 6291456: the default size of the kernel socket send buffer.

net.core.rmem_max = 12582912: the maximum size of the kernel socket receive buffer.

net.core.wmem_max = 12582912: the maximum size of the kernel socket send buffer.

net.ipv4.tcp_syncookies = 1: this parameter is unrelated to performance; it is used to defend against TCP SYN flood attacks.

Here is a complete set of kernel optimizations:

fs.file-max = 999999
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 10240 87380 12582912
net.ipv4.tcp_wmem = 10240 87380 12582912
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 40960
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.ip_local_port_range = 1024 65000

Run sysctl -p for the kernel changes to take effect.

12. Optimizing the system connection limit

The default open files limit on Linux is 1024. To view the current value:

# ulimit -n
1024

This means the server can open at most 1024 files at the same time.

Run ulimit -a to view all limits of the current system, or ulimit -n to view just the maximum number of open files.

On a newly installed Linux server the default is 1024, so a heavily loaded server easily hits the error "too many open files". The limit therefore needs to be raised: append the following at the end of /etc/security/limits.conf:

*    soft    nofile    65535
*    hard    nofile    65535
*    soft    nproc     65535
*    hard    nproc     65535