1. What is a reverse proxy?

Before introducing reverse proxies, let’s look at forward proxies.

Forward proxy: if the Internet outside the LAN is regarded as a huge resource library, clients on the LAN have to access the Internet through a proxy server; this kind of proxying is called a forward proxy. The following is a schematic of a forward proxy.

Why does this matter? In many workplaces, day-to-day work is confined to the company LAN, so how do you reach the Internet at all? This is where a forward proxy comes in. I use a forward proxy like this to access the Internet all the time.

Reverse proxy: take a look at the schematic below. Here the client is unaware of the proxy, because it needs no configuration at all: the client simply sends its request to the reverse proxy server, the reverse proxy server chooses a target server, fetches the data, and returns it to the client. To the outside world, the reverse proxy server and the target server appear as a single server: only the proxy server's address is exposed, and the IP address of the real server stays hidden.

The difference between a forward proxy and a reverse proxy, in one sentence: if we set up and use the proxy ourselves, it is a forward proxy; if the proxy sits in front of the real servers and we, the users, are unaware of it, it is a reverse proxy.

This raises a question: how does the reverse proxy server choose which of the servers behind it should handle a given request? The answer is load balancing.
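To give a taste of what that looks like, here is a minimal sketch of a load-balancing configuration in Nginx; the upstream group name and the backend addresses are made up for illustration only, and both blocks live inside the http block introduced in the next section:

upstream backend_servers {
    # hypothetical group of real servers hidden behind the proxy
    server 192.168.17.129:8081;
    server 192.168.17.129:8082;
}

server {
    listen       80;
    server_name  localhost;

    location / {
        # each request is handed to one of the servers in the group
        proxy_pass  http://backend_servers;
    }
}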

2. Nginx configuration files

Before we go further with Nginx, we need to get familiar with its configuration file. After all, everything we configure next (reverse proxy, load balancing, static/dynamic separation, etc.) is done through this file.

The default configuration file of Nginx, nginx.conf, lives in the conf directory of the installation directory. Using Nginx mostly comes down to modifying this file (the complete file is shown at the end of the article). After modifying nginx.conf, remember to restart the Nginx service.

There are a lot of # signs in the configuration file; they mark comments. With every line starting with # removed, the simplified configuration file looks like this:

worker_processes  1;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

With the comment information removed, the nginx.conf configuration file can be divided into three parts:

Part one: Global block

worker_processes  1;

Everything from the start of the configuration file up to the events block belongs to the global block. It sets directives that affect the operation of the Nginx server as a whole, including: the user (and group) that runs the Nginx server, the number of worker processes allowed, the path and format of the PID file, the path and format of the logs, and the import of other configuration files.

The worker_processes directive shown above is a key setting for concurrent processing in the Nginx server: the larger the value, the more concurrent requests it can handle, although it is ultimately limited by hardware, software, and other constraints.
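For reference, this is roughly what the global block of a default installation looks like with the directives mentioned above spelled out; the commented-out values are only examples, not recommendations:

#user  nobody;                     # user (group) that runs the Nginx worker processes
worker_processes  1;               # number of worker processes; often set to the number of CPU cores, or "auto"
#error_log  logs/error.log  info;  # path and level of the error log
#pid        logs/nginx.pid;        # file storing the PID of the master process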

Part two: Events block

events {
    worker_connections  1024;
}

The directives in the events block mainly affect the network connections between the Nginx server and its users. Common settings include: whether to serialize accepting new connections across multiple worker processes, whether a worker process may accept multiple new connections at once, which event-driven model to use for handling connection requests, and the maximum number of connections each worker process can hold at the same time.

The example above means that each worker process supports at most 1024 connections. This part of the configuration has a large impact on Nginx performance and should be tuned to the actual workload.
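As a rough rule of thumb (ignoring keep-alive and upstream connections), the total number of simultaneous connections the server can hold is about worker_processes × worker_connections. A sketch of a tuned events block might look like this; the values and the epoll choice (Linux only) are illustrative:

events {
    accept_mutex        on;      # serialize accepting new connections across worker processes
    multi_accept        on;      # let a worker accept all pending connections at once
    use                 epoll;   # event-driven model (epoll is the usual choice on Linux)
    worker_connections  10240;   # maximum simultaneous connections per worker process
}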

Part three: HTTP block

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;

    server {
        listen       80;
        server_name  localhost;

        location / {
            root   html;
            index  index.html index.htm;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

This is the part of the Nginx configuration that is changed most often; most functionality, such as proxying, caching, log definitions, and third-party module configuration, lives here. Note that the http block itself contains an http global block and one or more server blocks. The reverse proxy, static/dynamic separation, and load balancing discussed below are all configured in this part.

(1) HTTP global block

The http global block contains directives such as file imports, MIME type definitions, log customization, connection timeouts, and the maximum number of requests per connection.
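A sketch of what those http global directives look like in practice; the log format and the values are examples only:

http {
    include       mime.types;                  # file import: the MIME type table
    default_type  application/octet-stream;    # default MIME type when none matches

    log_format  main  '$remote_addr [$time_local] "$request" $status';
    access_log  logs/access.log  main;         # customized access log

    keepalive_timeout   65;                    # connection timeout, in seconds
    keepalive_requests  100;                   # maximum requests served over one keep-alive connection

    # server blocks follow here ...
}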

(2) Server block

This block is closely related to virtual hosts. From the user's point of view, a virtual host behaves exactly like an independent physical host; the technology exists to save on server hardware costs.

Each http block can contain multiple server blocks, and each server block is equivalent to one virtual host. Each server block in turn consists of a global server block and can contain multiple location blocks.

1. Global Server block

The most common settings here are the listening port of this virtual host and its name or IP address.
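For example, a minimal global server block (the name here is the test domain used later in this article):

server {
    listen       80;             # port this virtual host listens on
    server_name  www.123.com;    # host name (or IP address) identifying this virtual host
}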

2. Location block

A server block can be configured with multiple Location blocks.

The main purpose of this block is to process specific requests based on the request string received by the Nginx server (for example server_name/uri-string): the part other than the virtual host name (or IP alias), i.e. the /uri-string part, is matched against the location blocks. Address redirection, data caching, response control, and many third-party modules are configured here.

3. How to configure a reverse proxy

1. Reverse proxy example 1

Desired effect: using the Nginx reverse proxy, a visit to www.123.com is forwarded to 127.0.0.1:8080.

Note: to have www.123.com resolve to the IP address of the Nginx host, you need to modify the hosts file on the client machine; the details are skipped here.
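For reference, the hosts entry is just a line mapping the domain to the Nginx machine (assuming the server IP 192.168.17.129 used in the configuration below); on Windows the file is C:\Windows\System32\drivers\etc\hosts, on Linux it is /etc/hosts:

192.168.17.129   www.123.com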

Configuration code

server {
    listen       80;
    server_name  192.168.17.129;

    location / {
        root   html;
        index  index.html index.htm;
        proxy_pass  http://127.0.0.1:8080;
    }
}

In the configuration above, Nginx listens on port 80. The domain being visited is www.123.com (port 80 is the default when no port is given), so requests to this domain are forwarded to 127.0.0.1:8080.

In other words, the Nginx reverse proxy listens on port 80 of 192.168.17.129; when a request arrives, it is forwarded to the server configured by proxy_pass. That is all there is to it.

Experimental results:

2. Reverse proxy example 2

Desired effect: use the Nginx reverse proxy to forward requests to services on different ports based on the request path. Nginx listens on port 9001.

Visiting http://127.0.0.1:9001/edu/ is forwarded to 127.0.0.1:8081

Visiting http://127.0.0.1:9001/vod/ is forwarded to 127.0.0.1:8082

The first step is to prepare two Tomcat servers, one listening on port 8081 and the other on port 8082, and prepare a test page on each.
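One possible layout for those test pages (the paths and file names below are only an assumption; any page whose content identifies its port will do):

apache-tomcat-8081/webapps/edu/a.html    # test page served on port 8081, content e.g. "8081"
apache-tomcat-8082/webapps/vod/a.html    # test page served on port 8082, content e.g. "8082"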

The second step is to modify the Nginx configuration file, adding a server block inside the http block:

server {
    listen       9001;
    server_name  192.168.17.129;

    location ~ /edu/ {
        proxy_pass  http://127.0.0.1:8081;
    }

    location ~ /vod/ {
        proxy_pass  http://127.0.0.1:8082;
    }
}

With this configuration, when a request reaches the Nginx reverse proxy server on port 9001, it is dispatched to a different backend service depending on the request path.

Experimental results:

Supplement: description of the location directive

This directive is used to match URLs. Its syntax is as follows:

location [ = | ~ | ~* | ^~ ] uri { ... }

(1) = : used before a URI that contains no regular expression; the request string must match the URI exactly. If the match succeeds, the search stops and the request is processed immediately.
(2) ~ : indicates that the URI contains a regular expression and matching is case sensitive.
(3) ~* : indicates that the URI contains a regular expression and matching is case insensitive.
(4) ^~ : used before a URI that contains no regular expression; Nginx finds the location whose URI has the longest prefix match with the request string and uses that location immediately, without checking the regular-expression locations in the server block.

Note: If the URI contains a regular expression, it must be marked with ~ or ~*
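A few illustrative location blocks showing each modifier; the URIs and responses below are made up for demonstration:

server {
    listen  80;

    location = /exact {           # "=": matches only the request /exact, nothing longer
        return 200 "exact match";
    }

    location ^~ /static/ {        # "^~": prefix match; if this is the longest prefix, regex locations are skipped
        root  /data;
    }

    location ~ \.php$ {           # "~": regular expression, case sensitive
        return 200 "php, case sensitive";
    }

    location ~* \.(jpg|png)$ {    # "~*": regular expression, case insensitive (.JPG also matches)
        return 200 "image";
    }
}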

4. The complete Nginx configuration file

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;

    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }

    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}
}
