CC 4.0 by-sa
Blog.csdn.net/qq_23598037…

This is my own compilation based on technical articles found online; it covers both commonly used and less common strategies, for reference.

Overview of nginx optimizations

1. Gzip compression optimization
2. Expires cache optimization
3. Network IO event model optimization
4. Hiding the software name and version
5. Anti-hotlink optimization
6. Prohibiting malicious domain name resolution
7. Prohibiting website access via IP address
8. HTTP request method optimization
9. Anti-DoS attack: per-IP concurrent connection control and connection rate control
10. Strictly setting web site directory permissions
11. Running nginx processes and sites in "prison" mode
12. Crawler optimization with the robots protocol and HTTP_USER_AGENT
13. Configuring error pages and giving the user feedback according to the error code
14. Nginx log optimization: access log rotation, not logging specified elements, minimizing log directory permissions
15. FastCGI parameter buffer and cache configuration optimization
17. php.ini and php-fpm.conf configuration file optimization
18. Deep Linux kernel optimization for web services (network connections, IO, memory, etc.)
19. Nginx encrypted transmission optimization (SSL)
20. Web server disk mount and network file system optimization
21. Using nginx caching

1. Basic security optimization

1.1 Hiding version Information

In general, software bugs are version-dependent, so we hide or eliminate sensitive information that web services display to visitors.

[root@db01 RPM]# curl -I 10.0.0.8
HTTP/1.1 401 Unauthorized
Server: nginx                          # version hidden
Date: Thu, 21 Jul 2016 03:23:38 GMT
Content-Type: text/html
Content-Length: 188
Connection: keep-alive
WWW-Authenticate: Basic realm="oldboy training"

Procedure:
vim /application/nginx/conf/nginx.conf
# add the following inside the http block:
server_tokens off;
/application/nginx/sbin/nginx -t
/application/nginx/sbin/nginx -s reload

1.2 Hiding the nginx software name by modifying the source code

Files to modify and what to change:

First path:

/home/oldboy/tools/nginx-1.6.3/src/core/nginx.h
# line 16:  #define NGINX_VER  "nginx/" NGINX_VERSION
# change "nginx" to another name, such as Apache.

The second path

/home/oldboy/tools/nginx-1.6.3/src/http/ngx_http_header_filter_module.c
# line 49:  static char ngx_http_server_string[] = "Server: nginx" CRLF;
sed -i 's#Server: nginx#Server: Apache#g' ngx_http_header_filter_module.c

The third path

/home/oldboy/tools/nginx-1.6.3/src/http/ngx_http_special_response.c
# lines 21 and 30 (the error-page footer strings), changed to, for example:
"<hr><center>" NGINX_VER " (http://oldboy.blog.51cto.com)</center>" CRLF
"<hr><center>OWS</center>" CRLF

And then recompile

1.3 Changing the default user for the Nginx service

The first method:

In nginx.conf, change the default parameter #user nobody; to user nginx nginx;

The second method:

To specify users and user groups when compiling nginx, run the following command:

./configure --prefix=/application/nginx-1.6.3 --user=nginx --group=nginx --with-http_ssl_module --with-http_stub_status_module

1.4 Start Nginx with reduced rights

useradd inca
cd /home/inca/
mkdir conf logs www
echo inca >www/index.html
chown -R inca.inca *
ln -s /application/nginx/conf/mime.types conf/mime.types    # mime.types defines the media/file types

egrep -v "#|^$" /application/nginx/conf/nginx.conf.default >conf/nginx.conf

nginx.conf configuration file:

worker_processes  1;
error_log  /home/inca/logs/error.log;
pid        /home/inca/logs/nginx.pid;
events {
    worker_connections  1024;
}
http {
    include       mime.types;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    server {
        listen       8080;
        server_name  localhost;
        location / {
            root   /home/inca/www;
            index  index.html index.htm;
        }
        access_log  /home/inca/logs/access.log  main;
    }
}

su - inca -c "/application/nginx/sbin/nginx -c /home/inca/conf/nginx.conf"    # start the nginx service as the inca user

Two points to emphasize:

1. All related paths in nginx.conf should be changed

2. A non-root user cannot bind to privileged ports (below 1024), which is why the site listens on port 8080

2. Optimize nginx service performance based on parameters

2.1 Strategies for optimizing the number of Nginx processes

In high-concurrency, high-traffic Web service scenarios, more Nginx processes need to be started in advance to ensure fast response and handle requests from a large number of concurrent users.

worker_processes 1; is the default. Generally set it to the total number of CPU cores (for example, two quad-core CPUs count as 8).

(1) View the total number of CPU cores on Linux:

grep processor /proc/cpuinfo | wc -l

(2) Check the number of physical CPUs:

grep 'physical id' /proc/cpuinfo | sort | uniq | wc -l

(3) Run top and press 1 to display all CPU cores:

top    # press 1 to show per-core statistics
Cpu0  :  0.0%us,  0.0%sy,  0.0%ni, 100.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st

2.2 Binding different Nginx worker processes to different CPUs

By default, nginx worker processes may all run on one CPU or CPU core, so hardware resources are used unevenly. This section binds different nginx worker processes to different CPUs to spread the hardware usage.

Quad-core CPU configuration

worker_processes    4;
worker_cpu_affinity 0001 0010 0100 1000;

Dual-core configuration

worker_processes    2;
worker_cpu_affinity 0101 1010;

Alternatively, the taskset -c command can be used to pin a process to specific CPUs, as sketched below.
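A minimal sketch of pinning an already-running process with taskset (the PID 1234 is just a placeholder):

# show the current CPU affinity of process 1234
taskset -cp 1234
# restrict process 1234 to CPU cores 0 and 1
taskset -cp 0,1 1234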

2.3 Optimization of the nginx event processing model

Nginx's connection processing mechanism differs by operating system: on Linux nginx uses the epoll I/O multiplexing model, on FreeBSD it uses kqueue, on Solaris /dev/poll, on Windows IOCP, and so on. Choose the event model to match the system type with the directive use [ kqueue | rtsig | epoll | /dev/poll | select | poll ];. We run Linux (CentOS 6.5), so we set the nginx event handling model to epoll.

events {
worker_connections  1024;
use epoll;
}

Official note: When no event handling model is specified, nginx automatically selects the best event handling model service by default

2.4 Adjust the maximum number of client connections allowed by a single Nginx process

Parameter syntax: worker_connections number;  Default value: worker_connections 512;  Context: the events block

events {
    worker_connections 1024;    # maximum concurrent connections per worker process
}

Maximum total connections = worker_processes * worker_connections

2.5 Configuring the maximum number of open files for nginx worker processes

worker_rlimit_nofile number; changes the maximum number of files a worker process may open

worker_rlimit_nofile 65535;

This parameter is bounded by the system-wide maximum number of open files.

[root@admin nginx]# cat /proc/sys/fs/file-max
8192

The system-wide maximum number of open file handles

[root@admin nginx]# ulimit -n
1024

The per-process limit is 1024 open files

Use # ulimit -n 8192 to adjust

To make the system-wide limit persistent, set fs.file-max at boot time, for example from /etc/rc.d/rc.local or via /etc/sysctl.conf (see the sketch below).
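A minimal sketch of making both limits persistent (the value 65535 is only an example; see the ulimit/limits.conf section at the end of this article for more detail):

# system-wide open-file limit
echo 'fs.file-max = 65535' >> /etc/sysctl.conf
sysctl -p
# per-process open-file limit for login sessions (soft and hard)
echo '*  -  nofile  65535' >> /etc/security/limits.conf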

2.6 Enabling efficient File Transfer Mode

Set sendfile on;

The sendfile parameter enables efficient file transfer mode. Setting tcp_nopush and tcp_nodelay to on as well helps prevent network and disk I/O blocking and improves nginx efficiency.

http {
    sendfile on;    # valid in http, server, and location contexts
}

tcp_nopush on;

Enabling tcp_nopush lets nginx send the HTTP response header together with the beginning of the file in one packet, which reduces the number of packets on the network (it only takes effect when sendfile is on).

Example:
sendfile on;
tcp_nopush on;
tcp_nodelay on;
server_tokens off;
server_names_hash_bucket_size 128;
server_names_hash_max_size 512;
keepalive_timeout  65;
client_header_timeout 15s;
client_body_timeout 15s;
send_timeout 60s;

2.7 FastCGI Parameters Tuning

The fastcgi_* parameters govern how nginx passes requests back to the PHP (FastCGI) dynamic engine service.


fastcgi_connect_timeout 240;       
fastcgi_send_timeout 240;
fastcgi_read_timeout 240;
fastcgi_buffer_size 64k;
fastcgi_buffers 4 64k;
fastcgi_busy_buffers_size 128k;
fastcgi_temp_file_write_size 128k;
#fastcgi_temp_path /data/ngx_fcgi_tmp;
fastcgi_cache_path /data/ngx_fcgi_cache levels=2:2 keys_zone=ngx_fcgi_cache:512m inactive=1d max_size=40g;
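The fastcgi_cache_path directive above only defines the cache zone; it still has to be referenced in a PHP location block to take effect. A minimal sketch, assuming a local php-fpm backend on 127.0.0.1:9000 (the backend address, cache validities and cache key are illustrative, not from the original article):

location ~ \.php$ {
    fastcgi_pass   127.0.0.1:9000;
    fastcgi_index  index.php;
    include        fastcgi.conf;
    # use the zone defined by fastcgi_cache_path
    fastcgi_cache       ngx_fcgi_cache;
    fastcgi_cache_valid 200 302 1h;
    fastcgi_cache_valid 301 1d;
    fastcgi_cache_valid any 1m;
    fastcgi_cache_key   $host$request_uri;
}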


2.8 Configuring Nginx Gzip Compression for Performance optimization

Nginx compression

The nginx gzip compression module compresses response content before it is sent to the client, according to the configured policies. This saves egress bandwidth for the site, speeds up data transfer, and improves the user's access experience.

2.8.1 Advantages of compression

Improved user experience: the content delivered to the user is smaller, so pages load faster.

Lower bandwidth costs: data is transferred in compressed form, at the cost of some extra CPU usage.

2.8.2 Compressed Objects:

Plain text compresses very well, so plain text content is the best candidate for compression.

Compress only plain text files larger than about 1KB; because of how the compression algorithm works, very small files can actually grow after compression.

Do not compress images, video, or other streaming media: such files are usually already compressed, so compressing them again shrinks them little or not at all (they may even grow), while consuming a lot of CPU and memory.

2.8.3 Parameter Description and Configuration Description

gzip on;                 # enable compression

gzip_min_length 1k;      # minimum response size to compress, taken from the Content-Length header. The default is 0 (compress regardless of size); a value above 1k is recommended, since responses smaller than 1k may grow after compression

gzip_buffers 4 32k;      # number and size of compression buffers

gzip_http_version 1.1;   # minimum HTTP protocol version for compression

gzip_comp_level 9;       # compression level (1-9)

gzip_types text/css text/xml application/javascript;    # MIME types to compress

gzip_vary on;            # add the "Vary: Accept-Encoding" response header

A complete configuration:

# nginx.conf, http block
gzip on;
gzip_min_length 1k;
gzip_buffers 4 32k;
gzip_http_version 1.1;
gzip_comp_level 9;
gzip_types text/css text/xml application/javascript;
gzip_vary on;
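To check that compression is actually applied, request a text resource with an Accept-Encoding header and look for Content-Encoding: gzip in the response headers; a quick sketch (the address is the test host used earlier in this article):

curl -s -o /dev/null -D - -H "Accept-Encoding: gzip,deflate" http://10.0.0.8/index.html
# the response headers should contain:  Content-Encoding: gzip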

2.9 Nginx Expires function

Expires sets an expiration time for content delivered to visitors. The first time a user requests the content, it is stored in the user's browser; on later visits the browser checks its local cache and does not download the content again until the cached copy expires or is cleared.


2.9.1 Expires Functions and Advantages:

Expires can reduce the bandwidth and cost of a site

Speed up the user access to the website, improve the user access experience

Reduced server traffic lowers server load and cost, and can even save labor costs

This is one of the most valuable features for almost any web service; Apache offers it as well.

2.9.2 Nginx Expires

## Add expires header according to URI(path or dir).

location ~ ^/(images|javascript|js|css|flash|media|static)/ {
    expires 360d;
}
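Expiry times can also be set per file extension rather than per directory; a sketch with illustrative values:

location ~ .*\.(gif|jpg|jpeg|png|bmp|swf)$ {
    expires 3650d;
}
location ~ .*\.(js|css)$ {
    expires 30d;
}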

2.9.3 Disadvantages and Solutions:

When a website’s cached pages or data are updated, users may still see the old cached content, which will affect user experience. How to solve this problem?

First: for images and other files that change frequently, shorten the cache time. For example, the home page images of Baidu, Google and other sites are often swapped for holiday versions; such objects can be cached for just one day.

Second: when the site content is changed or updated, rename the cached objects on the server (in the website code).

Images and attachments on a website are generally not modified in place by users; when a user "modifies" an image it is in fact re-uploaded to the server, and even if the content is the same it gets a new file name.

Site upgrades modify JS and CSS files; if those files are renamed during the redesign, the front-end CDN and users' browsers are forced to fetch and cache the new content.

3. Optimization of nginx logs

3.1 Writing scripts to implement log polling

Write a script to implement nginx Access log polling

Most software records each user request in some way. Nginx currently lacks Apache's ability to split logs with cronolog or rotatelogs, but operations staff can implement automatic log cutting and rotation with a script, using nginx's signal handling or a reload to reopen the log files.

Operation steps:

Write a scheduled task

mv www_access.log www_access_$(date +%F -d -1day).log
/usr/local/nginx/sbin/nginx -s reload
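A fuller sketch of the rotation script plus the cron entry that drives it (the paths and script location are assumptions):

#!/bin/bash
# cut_nginx_log.sh - rotate the nginx access log once a day
LOG_DIR=/usr/local/nginx/logs
cd "$LOG_DIR" || exit 1
mv www_access.log www_access_$(date +%F -d '-1 day').log
# tell nginx to reopen its log files (USR1 is nginx's reopen-logs signal; a reload also works)
kill -USR1 "$(cat nginx.pid)"

# crontab entry: run just after midnight every day
# 00 00 * * * /bin/bash /server/scripts/cut_nginx_log.sh >/dev/null 2>&1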

3.2 Do not Record Unnecessary Logs

In practice there is no need to log requests from load balancer health checks or for certain static files (images, JS, CSS); PV statistics are counted per page anyway, and frequent log writes consume disk I/O and reduce service performance.

Configuration:
location ~ .*\.(js|jpg|JPG|jpeg|JPEG|css|bmp|gif|GIF)$ {
    access_log off;
}

3.3 Setting log Access Permission

If the log directory is /app/logs, the authorization method is:

  chown -R root.root /usr/local/nginx/logs
  chmod -R 600 /usr/local/nginx/logs

There is no need to grant the nginx service user any access to the log directory (the master process, which runs as root, writes the logs). Many people overlook this, and it is a security risk.

4. Nginx site directory and file URL access control

4.1 Restrict program and file access by extension

In the Web 2.0 era most websites are user-content driven. These products share one trait: they not only let users publish content to the server, they also let users upload images and even attachments. Enabling uploads brings serious security risks.

Configure nginx to deny access to PHP, shell, Perl and Python program files under the upload directories, so that even if a user uploads a trojan file it cannot be executed.

location ~ ^/images/.*\.(php|php5|sh|pl|py)$
{
    deny all;
}
location ~ ^/static/.*\.(php|php5|sh|pl|py)$
{
    deny all;
}
location ~* ^/data/(attachment|avatar)/.*\.(php|php5)$
{
    deny all;
}

The directory restrictions above must be placed before nginx's PHP (FastCGI) location block, otherwise they will not take effect.

4.2 Deny access to all files and directories in the specified directory

To disable access to one or more specified directories

location ~ ^/(static)/ {
    deny all;
}

location ~ ^/static {
    deny all;
}

Disallow directory access and return code 404

server {
    listen 80;
    server_name www.etiantian.org etiantian.org;
    root  /data0/www/www;
    index index.html index.htm;
    access_log /app/logs/www_access.log commonlog;
    location /admin/     { return 404; }
    location /templates/ { return 403; }
}

4.3 Restricting access by source IP address

Use the ngx_http_access_module to restrict access to the site by source IP.

Example 1: Disable external access but allow an IP address to access the directory

location ~ ^/oldboy/ {
    allow 202.111.12.211;
    deny all;
}

Example 2: Restrict and specify IP address or IP segment access.

location / {
    deny  192.168.1.1;
    allow 192.168.1.0/24;
    allow 10.1.1.0/16;
    deny  all;
}

4.4 Configuring Nginx to prevent Unauthorized domain name Resolution from accessing Enterprise Websites

Problem: How does nginx prevent users from accessing websites using IP addresses?

Method 1: Directly report an error, resulting in poor user experience

server {
    listen 80 default_server;
    server_name _;
    return 501;
}

Method 2: 301 redirect to the site home page

server {
    listen 80 default_server;
    server_name _;
    rewrite ^(.*) http://blog.etiantian.org/$1 permanent;
}

 

5. Nginx image anti-hotlinking solution

Simply put, hotlinking is when someone embeds your images in their website without your permission.

5.1 Basic principles of common anti-hotlinking solutions

  1. Hotlink protection based on the HTTP referer header
  2. Hotlink protection based on cookies

 

5.2 Anti-hotlinking in practice

Configuration on the site whose files are being hotlinked:

# Preventing hot linking of images and other file types
location ~* ^.+\.(jpg|png|swf|flv|rar|zip)$ {
    valid_referers none blocked *.etiantian.org etiantian.org;
    if ($invalid_referer) {
        rewrite ^/ http://bbs.etiantian.org/img/nolink.gif;
    }
    root html/www;
}



6. Elegant display of nginx error pages

Example: A 403 error redirects you to the 403.html page

error_page  403  /403.html;
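A slightly fuller sketch that also serves the custom page itself (the 404 line, the root path and the internal flag are illustrative additions, not part of the original example):

error_page 403 /403.html;
error_page 404 /404.html;
location = /403.html {
    root /data0/www/www;   # directory holding the custom error pages (assumed path)
    internal;              # only reachable via internal redirects, not directly
}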

7. Nginx site directory file and directory permission optimization

8. Deploy the site application permission Settings

(1) WordPress site directory permission setting

Plan 1: Recommended plan

Directories: 755

Files: 644

Owner and group: root

Set the owner of the image and upload directories to the www user (the user the web/PHP processes run as)

cd /application/apache/html/
chown -R root.root blog

find ./blog/ -type f | xargs chmod 644
find ./blog/ -type d | xargs chmod 755
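The upload directory then has to be handed back to the web user so uploads keep working; a sketch assuming a default WordPress layout and a web user named www (both are assumptions):

# wp-content/uploads is WordPress's default upload directory; www is the assumed web user
chown -R www.www ./blog/wp-content/uploads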

9. Nginx crawler optimization

Configuration:

if ($http_user_agent ~* "LWP::Simple|BBBike|wget") {
    return 403;
    # or, instead of blocking, redirect the crawler to the home page:
    # rewrite ^(.*) http://blog.etiantian.org/$1 permanent;
}

 

10. Use Nginx to limit HTTP request methods

Configuration:

if ($request_method !~ ^(GET|HEAD|POST)$ ) {
    return 501;
}

Configuration for an upload-only server that should not serve HTTP GET requests:

## Block GET requests on the upload server ##
if ($request_method ~* (GET)$ ) {
    return 501;
}


11. Use CDN for site content acceleration

Characteristics of CDN

Local Cache acceleration improves the speed and stability of enterprise sites, especially those with lots of images and static pages

The mirroring service eliminates the bottleneck caused by the interconnection between different carriers, accelerates the network across carriers, and ensures good access quality for users on different networks.

Remote acceleration: remote users are automatically directed, via DNS load balancing, to the fastest Cache server, which speeds up remote access.

Bandwidth optimization: a remote mirror (Cache) server is automatically generated for the origin server, and remote users read data from it. This reduces the bandwidth consumed by remote access, shares network traffic, and lightens the load on the origin WEB server.

Cluster anti-attack: widely distributed CDN nodes plus the intelligent redundancy mechanism between nodes can effectively prevent hacker intrusion and reduce the impact of DDoS attacks on the website, while maintaining good quality of service.

12. Start Nginx as a normal user (Prison mode)

The solution

Drop the privileges of the Nginx service: run it as an ordinary user, and create ordinary accounts for developers and operations staff.

Developers can use a normal account to manage applications and logs under the Nginx service and site

Division of responsibility: network problems are the responsibility of operations; the website failing to open is the responsibility of development (responsibility is shared).

Practical configuration:

useradd inca

cd /home/inca

mkdir conf www log

echo inca >www/index.html

Modifying a Configuration File

error_log /home/inca/log/error.log;

pid /home/inca/log/nginx.pid;

13. Control the number of concurrent Nginx connections

The ngx_http_limit_conn_module module limits the number of connections per defined key, in particular per single IP address. Not every connection is counted: a connection is only counted once the entire request header has been read.

1) limit_conn_zone. Syntax: limit_conn_zone key zone=name:size; Context: http. Defines a shared memory zone; the key can be a string, an Nginx variable, or a combination of the two. name is the name of the zone and size is its size.

2) limit_conn. Syntax: limit_conn zone number; Context: http, server, location. Sets the maximum number of connections allowed per key; when the limit is exceeded, the server returns 503.
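A minimal sketch putting the two directives together (the zone name, zone size and the limit of 10 connections per client IP are illustrative):

http {
    # one counter per client IP, stored in a 10 MB shared zone
    limit_conn_zone $binary_remote_addr zone=perip:10m;
    server {
        location / {
            limit_conn perip 10;   # at most 10 concurrent connections per IP
        }
    }
}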

14. Control the rate at which clients request Nginx

The ngx_http_limit_req_module module limits the request rate per defined key (for example, per client IP).

limit_req_zone. Syntax: limit_req_zone key zone=name:size rate=rate; Context: http. Defines a shared memory zone; the key can be a string, an Nginx variable, or a combination of the two. name is the zone name, size is its size, and rate is the allowed request rate in r/s (requests per second).

limit_req. Syntax: limit_req zone=name [burst=number] [nodelay]; Context: http, server, location. burst=num grants num extra "burst" tokens; once they are used up, additional requests return 503. By default, requests within the burst value are queued and processed with a delay; with nodelay, num+1 requests are processed immediately and the remainder return 503.
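A minimal sketch (zone name, size, rate and burst are illustrative): limit each client IP to 1 request per second, with a burst of 5 queued requests.

http {
    limit_req_zone $binary_remote_addr zone=req_one:10m rate=1r/s;
    server {
        location / {
            limit_req zone=req_one burst=5;
        }
    }
}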

In addition: Linux kernel and memory optimization

1. Optimization of kernel parameters

net.ipv4.tcp_max_tw_buckets = 6000

Maximum number of TIME-WAIT sockets; the default is 180000.

net.ipv4.ip_local_port_range = 1024 65000

Range of ports allowed to be opened by the system.

net.ipv4.tcp_tw_recycle = 1

Enable fast recycling of TIME-WAIT sockets.

net.ipv4.tcp_tw_reuse = 1

Enable reuse. Allows time-wait Sockets to be reused for new TCP connections.

net.ipv4.tcp_syncookies = 1

Enable SYN Cookies to handle SYN wait queue overflow.

net.core.somaxconn = 262144

The backlog of a listening web application socket is capped by the kernel parameter net.core.somaxconn, which defaults to 128, while nginx's NGX_LISTEN_BACKLOG defaults to 511, so this value needs to be raised.

net.core.netdev_max_backlog = 262144

The maximum number of packets that are allowed to be sent to the queue if each network interface receives packets at a rate faster than the kernel can process them.

net.ipv4.tcp_max_orphans = 262144

The maximum number of TCP sockets in the system that are not associated with any user file handle. If this number is exceeded, the orphan connection is immediately reset and a warning message is printed. This limit is intended only to prevent simple DoS attacks and should not be relied upon or artificially reduced, but rather increased (if memory is added).

net.ipv4.tcp_max_syn_backlog = 262144

The maximum number of connection requests logged that have not received client confirmation. The default value is 1024 for a system with 128 MB of memory and 128 for a system with small memory.

net.ipv4.tcp_timestamps = 0

Timestamps guard against sequence number wraparound; on a 1Gbps link previously used sequence numbers are bound to reappear, and timestamps let the kernel accept such "abnormal" packets. Here we turn them off.

net.ipv4.tcp_synack_retries = 1

To open a connection to a peer, the kernel sends a SYN with an ACK acknowledging the earlier SYN, the second step of the three-way handshake. This setting determines how many SYN+ACK packets the kernel sends before giving up on the connection.

net.ipv4.tcp_syn_retries = 1

The number of SYN packets sent before the kernel aborts the connection.

net.ipv4.tcp_fin_timeout = 1

If the socket is closed by the local end, this parameter determines how long it remains in FIN-WAIT-2 state. The peer may misbehave and never close the connection, or even crash unexpectedly. The default is 60 seconds; the usual value in 2.2 kernels was 180 seconds. You can keep this setting, but bear in mind that even on a lightly loaded web server a large number of dead sockets risks exhausting memory. FIN-WAIT-2 is less dangerous than FIN-WAIT-1 because each such socket eats at most about 1.5K of memory, but they live longer.

net.ipv4.tcp_keepalive_time = 30

The interval at which TCP sends keepalive probes when keepalive is enabled. The default is 2 hours.
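These parameters take effect by being written to /etc/sysctl.conf and loaded with sysctl; a sketch using a few of the values above:

cat >> /etc/sysctl.conf <<'EOF'
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_syncookies = 1
net.core.somaxconn = 262144
EOF
sysctl -p    # apply the new values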


The Linux defaults for open files and max user processes are both 1024:

#ulimit -n

1024

#ulimit -u

1024

This means the server can have at most 1024 files open per process and at most 1024 user processes running at the same time.

You can run the ulimit -a command to view all limits of the current system. You can run the ulimit -n command to view the maximum number of open files.

A freshly installed Linux server defaults to 1024, so a heavily loaded server easily hits "error: too many open files". The limit therefore needs to be raised.

Solutions:

Running ulimit -n 65535 changes the limit immediately, but the change is lost after a reboot. (Note: ulimit -SHn 65535 is equivalent to ulimit -n 65535; -S means soft, -H means hard.)

There are three modification methods:

1. Add the line ulimit -SHn 65535 to /etc/rc.local
2. Add the line ulimit -SHn 65535 to /etc/profile
3. Append the following to the end of /etc/security/limits.conf:

* soft nofile 65535

* hard nofile 65535

* soft nproc 65535

* hard nproc 65535

On CentOS the first method has no effect but the third does; on Debian the second method works.

# ulimit -n

65535

# ulimit -u

65535

Note: the ulimit command distinguishes soft and hard settings: -H operates on the hard limit, -S on the soft limit; by default the soft limit is shown.

The soft limit is the value currently in effect on the system. Ordinary users can lower their hard limit but cannot raise it, and the soft limit cannot be set higher than the hard limit; only root can raise the hard limit.