Common Nginx proxy service modes

As a proxy service, Nginx can be summarized by application scenario: proxying is divided into forward proxy and reverse proxy.

Forward proxy

Forward proxy: (intranet) client <–> proxy –> server. Typical uses (a minimal configuration sketch follows the list below):

1. Bypassing network restrictions on the client side, for example accessing Google from behind a national or corporate firewall.

2. Client-side acceleration, for example a game accelerator.

3. Client-side caching: when downloading resources, first check whether the proxy already holds them; if so, fetch them directly from the proxy.

4. Client authorization: for security, many companies require Internet access to go through a firewall, and the firewall can be configured with rules that control who may and may not access the Internet.
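
For reference, a minimal sketch of a plain-HTTP forward proxy in Nginx; the listen port, resolver address and the use of $host/$request_uri here are illustrative assumptions, and proxying HTTPS CONNECT requests would need a third-party module, which is not covered here:

# Hypothetical forward-proxy server: clients point their HTTP proxy setting at this host
server {
    listen 3128;                        # illustrative proxy port
    resolver 223.5.5.5;                 # a DNS resolver so Nginx can look up arbitrary hosts
    location / {
        # Forward the request to whichever host the client asked for (plain HTTP only)
        proxy_pass http://$host$request_uri;
        proxy_set_header Host $http_host;
    }
}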

Reverse proxy

Reverse proxy: used in corporate cluster architectures, client –> proxy <–> server. Typical functions (a configuration sketch follows the list below):

1. Routing: dispatches the URI requested by the user to servers with different functions for processing.

2. Load balancing: selects an appropriate node to process the user's request through a load-balancing scheduling algorithm.

3. Static/dynamic separation: based on the URI requested by the user, dynamic requests are routed to application servers and static resources to static-resource servers.

4. Data caching: stores back-end query results in the reverse proxy's cache, which speeds up resource retrieval for users.
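
As a rough illustration of the routing, load-balancing and static/dynamic-separation ideas above, here is a hedged sketch; the upstream name, server addresses and paths are made up for the example:

# Hypothetical back-end pool for dynamic requests
upstream app_servers {
    server 172.16.1.7:8080;
    server 172.16.1.8:8080;
}

server {
    listen 80;
    server_name demo.example.com;

    # Static/dynamic separation: static files are served from a local (or dedicated) static root
    location /static/ {
        root /data/www;
    }

    # Routing + load balancing: everything else is dispatched to the application pool
    location / {
        proxy_pass http://app_servers;
    }
}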

The difference between forward and reverse proxies

  • The difference lies in which “object” the proxy serves, and where the proxy sits
    • Forward proxy: the proxied object is the client; it serves the client
    • Reverse proxy: the proxied object is the server; it serves the server

Protocols supported by the Nginx proxy service

1. As a proxy service, Nginx supports many proxy protocols, as shown in the following figure

2. However, in general, we use Nginx as a reverse proxy and most often use the following proxy protocols, as shown in the figure below

Nginx reverse proxy in practice

1. Nginx reverse proxy configuration syntax example

Syntax : proxy_pass URL;
Default: -
Context: location, if in location, limit_except

# Example URL forms
proxy_pass http://localhost:8000/uri/;
proxy_pass http://192.168.56.11:8000/uri/;
proxy_pass http://unix:/tmp/backend.socket:/uri/;
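
One behaviour worth noting (standard Nginx semantics, shown with made-up addresses and paths): when proxy_pass contains a URI part, the portion of the request URI matching the location prefix is replaced by that URI; when it has no URI part, the original request URI is passed to the back end unchanged.

# Alternative 1: proxy_pass with a URI part — a request for /app/test reaches the back end as /newpath/test
server {
    listen 8081;
    location /app/ {
        proxy_pass http://127.0.0.1:8080/newpath/;
    }
}

# Alternative 2: proxy_pass without a URI part — a request for /app/test reaches the back end as /app/test
server {
    listen 8082;
    location /app/ {
        proxy_pass http://127.0.0.1:8080;
    }
}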

2. Configure the Nginx reverse proxy

Role      Hostname    External IP (NAT)    Internal IP (LAN)
Proxy     proxy01     eth0: 10.0.0.5       eth1: 172.16.1.5
Web       web01       -                    eth1: 172.16.1.7

3. On the web01 server, configure a website listening on port 8080, so that only users on the 172.16.1.0 network segment can access it

[root@web01 ~]# cat /etc/nginx/conf.d/web.birenchong.cn.conf
server {
    listen 8080;
    server_name web.birenchong.cn;

    location / {
    	root /code_8080;
    	index index.html;
    }
}

[root@web01 ~]# mkdir /code_8080
[root@web01 ~]# echo "web01-7..." >/code_8080/index.html
[root@web01 ~]# systemctl restart nginx

4. On the proxy server, configure Nginx to listen on port 80 of eth0, so that users on the 10.0.0.0 network segment can reach the back-end site on 172.16.1.7:8080 through the proxy

[root@proxy01 ~]# cat /etc/nginx/conf.d/proxy_web.birenchong.cn.conf
server {
    listen 80;
    server_name web.birenchong.cn;

    location / {
        proxy_pass http://172.16.1.7:8080;
    }
}
[root@proxy01 ~]# systemctl enable nginx
[root@proxy01 ~]# systemctl start nginx

5. Analyze how the Nginx proxy processes the entire request

(Omitted)

6. Adding request headers sent to the back-end server

Syntax : proxy_set_header field value;
Default: proxy_set_header Host $proxy_host;
		 proxy_set_header Connection close;
Context: http,server,location

# If the client's request carries Host: www.birenchong.cn, the proxy passes the same Host
# header in its request to the back end
proxy_set_header Host $http_host;
# Put the value of $remote_addr (the client's IP address) into the X-Real-IP header
proxy_set_header X-Real-IP $remote_addr;
# When clients reach the back end through the proxy, the back end can record the real
# client address from this header
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
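
To see the effect on web01, the back end could log these headers; a minimal sketch, assuming the earlier web01 configuration, with the log_format name and log path made up for the example:

# Goes in the http{} context: a format that records the proxy-provided client address
# alongside $remote_addr (which here will be the proxy itself, 172.16.1.5)
log_format proxy_log '$remote_addr - $http_x_real_ip - "$http_x_forwarded_for" '
                     '"$request" $status $body_bytes_sent';

# Then inside the existing web01 server block:
server {
    listen 8080;
    server_name web.birenchong.cn;
    access_log /var/log/nginx/web.birenchong.cn.log proxy_log;

    location / {
        root /code_8080;
        index index.html;
    }
}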

7. HTTP protocol version used by the proxy for requests to the back end. The default is 1.0; if keepalive (long) connections are used, version 1.1 is recommended

Syntax : proxy_http_version 1.0 | 1.1;
Default: proxy_http_version 1.0;
Context: http, server, location
This directive appeared in version 1.1.4.
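
For example, keepalive connections to an upstream group only work with HTTP/1.1 and a cleared Connection header; a hedged sketch reusing the earlier web01 back end (the upstream name and keepalive count are illustrative):

upstream web_pools {
    server 172.16.1.7:8080;
    keepalive 16;                       # keep up to 16 idle connections per worker process
}

server {
    listen 80;
    server_name web.birenchong.cn;

    location / {
        proxy_pass http://web_pools;
        proxy_http_version 1.1;         # HTTP/1.1 is required for back-end keepalive
        proxy_set_header Connection ""; # clear "Connection: close" so connections stay open
    }
}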

8. Timeouts for connecting to, sending requests to, and reading responses from the back end

# Timeout for the nginx proxy to establish a connection with the back-end server
Syntax : proxy_connect_timeout time;
Default: proxy_connect_timeout 60s;
Context: http,server,location

# Timeout for the nginx proxy to wait for the back-end server's response
Syntax : proxy_read_timeout time;
Default: proxy_read_timeout 60s;
Context: http,server,location

# Timeout for the nginx proxy to transmit a request to the back-end server
Syntax : proxy_send_timeout time;
Default: proxy_send_timeout 60s;
Context: http,server,location
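
A sketch of how these timeouts might be applied to the earlier proxy_web.birenchong.cn.conf location (the values are illustrative, not recommendations):

location / {
    proxy_pass http://172.16.1.7:8080;
    proxy_connect_timeout 5s;     # fail fast if the back end cannot be reached
    proxy_send_timeout 30s;       # give up if the request cannot be transmitted in time
    proxy_read_timeout 30s;       # give up if the back end stops responding
    # Optionally return a friendlier page when a timeout surfaces as 502/504
    # error_page 502 504 /50x.html;
}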

9. proxy_buffering and proxy buffers

1) When buffering is enabled, the Nginx proxy receives the back end's response as quickly as possible and stores it in the buffers set by proxy_buffer_size (response headers) and proxy_buffers (response body).

Syntax : proxy_buffering on | off;
Default: proxy_buffering on;
Context: http,server,location

If the response is too large to fit in memory, part of it is saved to a temporary file on disk. Writing of temporary files is controlled by proxy_temp_path (the directory for temporary files), proxy_max_temp_file_size (the maximum size of a temporary file) and proxy_temp_file_write_size (the amount of data written to a temporary file at a time, which by default is limited by the two buffers set with proxy_buffer_size and proxy_buffers). When buffering is disabled, the Nginx proxy passes the response to the client synchronously as soon as it receives it; it does not wait to read the entire response.
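
A sketch pulling the buffering and temporary-file directives together in one location (the directory and sizes are illustrative):

location / {
    proxy_pass http://172.16.1.7:8080;
    proxy_buffering on;
    proxy_buffer_size 64k;                        # buffer for the response headers
    proxy_buffers 4 64k;                          # buffers for the response body
    proxy_temp_path /var/cache/nginx/proxy_temp;  # where the overflow is written
    proxy_max_temp_file_size 1024m;               # cap on each temporary file
    proxy_temp_file_write_size 64k;               # amount written to the file at a time
}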

2) proxy_buffer_size controls the size of the buffer the proxy uses to read the first part of the back-end response, which typically contains the response headers.

Syntax : proxy_buffer_size size;
Default: proxy_buffer_size 4k | 8k;
Context: http,server,location

# proxy_buffer_size 64k;

3) proxy_buffers sets the number and size of the buffers used for the response body. With proxy_buffers 4 64k, a 256KB back-end response fits entirely in the four 64KB buffers. If the response is larger than 256KB, the part beyond 256KB is written to the path specified by proxy_temp_path, which is undesirable because data is handled much faster in memory than on disk. It is therefore generally recommended to size these buffers around the typical response size of the site: if most pages generated by the site are around 256KB, you could set this to 16 16k, 4 64k, and so on.

Syntax : proxy_buffers number size;
Default: proxy_buffers 8 4k|8k;
Context: http, server, location

# proxy_buffers 4 64k;

10. A tuned proxy configuration for proxied sites is shown below. Write these settings into a separate file and reference it with include

[root@Nginx ~]# vim /etc/nginx/proxy_params
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $http_host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

proxy_connect_timeout 30;
proxy_send_timeout 60;
proxy_read_timeout 60;

proxy_buffering on;
proxy_buffer_size 64k;
proxy_buffers 4 64k;

11. Include the file wherever a proxy is configured in a location, so the settings can be reused across multiple locations

location / {
    proxy_pass http://127.0.0.1:8080;
    include proxy_params;
}