A few words up front

I have used Nginx for a long time, and my most common use case is its reverse proxy function (to be exact, that is the only feature I use). My understanding of Nginx is still far from complete. Recently I read the article “Fully understand what Nginx can do”, which is quite comprehensive and well suited to beginners. With the author’s permission (thanks!), I am reposting it here. The original post follows — good things should be shared, of course…

Fully understand what Nginx can do

Preface

This article focuses only on what Nginx can do without loading third-party modules; there are too many third-party modules to cover them all. Of course, this article itself may not be complete either — after all, it only reflects my personal use and knowledge.

What can Nginx do

  1. Reverse proxy
  2. Load balancing
  3. HTTP server (including dynamic/static separation)
  4. Forward proxy

These are the things I know Nginx can do without relying on third-party modules. Here is how to do each of them.

Reverse proxy

Reverse proxying is probably the thing Nginx is used for most. What is a reverse proxy? Here is the Baidu Encyclopedia definition: in reverse proxy mode, a proxy server receives connection requests from the Internet, forwards the requests to a server on the internal network, and returns the results obtained from that server to the client on the Internet that requested the connection. In this case the proxy server acts externally as a reverse proxy server. In short, the real server cannot be reached directly from the external network, so a proxy server is required. The proxy server can be reached from the external network and sits in the same network environment as the real server — of course, it may also be the same machine on a different port. Below is a simple configuration that implements a reverse proxy:

server {  
  listen       80;                                                         
  server_name  localhost;                                               
  client_max_body_size 1024M;

  location / {
    proxy_pass http://localhost:8080;
    proxy_set_header Host $host:$server_port;
  }
}

After saving the configuration file and starting Nginx, visiting localhost actually reaches localhost:8080.
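When proxying like this, it is also common (though not required) to pass the client’s real address through to the back end, since otherwise the upstream only ever sees connections from the proxy itself. A minimal sketch using the standard proxy_set_header directives — the header names are a widespread convention, not something from the original article:

```nginx
location / {
  proxy_pass http://localhost:8080;
  proxy_set_header Host $host:$server_port;
  # Forward the real client address; without these, the upstream
  # only sees the proxy's own IP on every request
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```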

Load balancing

Load balancing is another common Nginx function. Load balancing means distributing requests across multiple operation units — web servers, FTP servers, enterprise critical application servers, and other mission-critical servers — so that the work is completed together. In simple terms, when there are two or more servers, requests are distributed to the configured servers according to the chosen rules. Configuring load balancing also requires configuring a reverse proxy, through which requests reach the load-balanced upstream. Nginx currently supports three built-in load balancing policies, plus two commonly used third-party policies.

RR (default)

Each request is allocated to a different back-end server one by one in chronological order (round robin). If a back-end server goes down, it is automatically removed from the rotation. A simple configuration:

upstream test {
  server localhost:8080;
  server localhost:8081;
}
server {
  listen       81;                                                         
  server_name  localhost;                                               
  client_max_body_size 1024M;

  location / {
    proxy_pass http://test;
    proxy_set_header Host $host:$server_port;
  }
}

The core code for load balancing is

upstream test {
  server localhost:8080;
  server localhost:8081;
}

I have configured two servers here. In reality it is one machine with different ports, and the server on 8081 does not exist at all — that is, it cannot be reached. Yet accessing http://localhost causes no problem: Nginx falls through to http://localhost:8080 by default, because it automatically checks the state of each server, and if one is unreachable (hung), requests are not routed to it. This avoids the situation where a single hung server takes the site down. Since RR is Nginx’s default policy, no further settings are needed.
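The failure detection described above can be tuned. The server directive accepts the standard max_fails and fail_timeout parameters; the values below are illustrative, not from the original article:

```nginx
upstream test {
  # After 3 failed attempts within the 30s window, mark the server
  # unavailable for the next 30s (defaults: max_fails=1, fail_timeout=10s)
  server localhost:8080 max_fails=3 fail_timeout=30s;
  server localhost:8081 max_fails=3 fail_timeout=30s;
}
```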

weight

Specifies the polling probability, with weight proportional to the share of requests. This is used when back-end server performance is uneven. For example:

  upstream test {
    server localhost:8080 weight=9;
    server localhost:8081 weight=1;
  }

So on average 1 out of every 10 requests goes to 8081, and 9 out of 10 go to 8080.
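Alongside weighting, the server directive also supports a standard backup parameter, which keeps a server idle until the primary servers fail. A hypothetical example, not from the original article:

```nginx
upstream test {
  server localhost:8080 weight=9;
  server localhost:8081 weight=1;
  # Only receives requests when both servers above are unavailable
  server localhost:8082 backup;
}
```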

ip_hash

Both of the above methods share a problem: the next request may go to a different server. When the application is not stateless (e.g. it stores data in a session), this is a big problem — login information saved in the session is lost when the next request lands on another server. In many cases we need a given client to always reach the same server, and for that we use ip_hash. With ip_hash, each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same back-end server, which solves the session problem.

upstream test {
  ip_hash;
  server localhost:8080;
  server localhost:8081;
}
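One detail worth noting from the Nginx documentation: with ip_hash, a server that must be temporarily taken out of rotation should be marked with the down parameter rather than deleted, so that the client-IP-to-server hash mapping of the remaining clients is preserved. A sketch:

```nginx
upstream test {
  ip_hash;
  server localhost:8080;
  # Temporarily out of service; keeping the entry (marked down)
  # preserves the hash mapping for clients of the other servers
  server localhost:8081 down;
}
```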

Fair (third party)

Requests are allocated based on the response time of the back-end server, and those with short response times are allocated first.

upstream backend { 
  fair; 
  server localhost:8080;
  server localhost:8081;
}

url_hash (third party)

Requests are allocated according to a hash of the accessed URL, so that each URL is directed to the same back-end server. This is effective when the back-end servers cache content. To use it, add the hash statement inside the upstream block; the server statements may not carry weight or other parameters. hash_method specifies the hash algorithm to use.

upstream backend { 
  hash $request_uri; 
  hash_method crc32; 
  server localhost:8080;
  server localhost:8081;
}

The above five load balancing modes suit different situations, so choose the policy according to your actual needs. However, fair and url_hash require third-party modules to be installed. Since this article only covers what Nginx can do out of the box, installing third-party modules for Nginx will not be covered here.

The HTTP server

Nginx is itself a static resource server: when you only have static resources, you can use Nginx as the server. Dynamic/static separation is also very popular now, and it too can be implemented with Nginx. First, let’s look at Nginx as a static resource server:

server {
  listen       80;
  server_name  localhost;
  client_max_body_size 1024M;

  location / {
    root   e:\wwwroot;
    index  index.html;
  }
}

This way, visiting http://localhost serves index.html from the e:\wwwroot directory by default. If a site consists only of static pages, it can be deployed like this.
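For a static site it is also common to enable gzip compression so text-based assets go over the wire smaller. A minimal sketch using the standard gzip directives — the thresholds and MIME types below are illustrative choices, not from the original article:

```nginx
server {
  listen       80;
  server_name  localhost;

  # Compress text-based responses larger than 1 KB
  gzip on;
  gzip_min_length 1k;
  gzip_types text/css application/javascript application/json;

  location / {
    root   e:\wwwroot;
    index  index.html;
  }
}
```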

Dynamic and static separation

Dynamic/static separation means splitting the resources of a dynamic website into those that rarely change and those that change frequently, according to certain rules. Once the static resources are split out, we can cache them according to their characteristics. This is the core idea of static-site handling.

upstream test{  
  server localhost:8080;  
  server localhost:8081;  
}   

server {  
  listen       80;  
  server_name  localhost;  

  location / {  
    root   e:\wwwroot;  
    index  index.html;  
  }  

  # All static requests are handled by Nginx, served from the wwwroot directory
  location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {  
    root    e:\wwwroot;  
  }  

  # All dynamic requests are forwarded to Tomcat for processing
  location ~ \.(jsp|do)$ {  
    proxy_pass  http://test;
  }

  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
    root   e:\wwwroot;
  }
}

This way we can put HTML, images, CSS, and JS under wwwroot, while Tomcat handles only jsp and do requests. For example, when a request ends in .gif, Nginx serves the gif file for that request from wwwroot by default. Of course, here the static files are on the same server as Nginx; we could also put them on another server and reach them through reverse proxying and load balancing. As long as the basic flow is clear, many configurations become very simple.
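The caching of static resources mentioned earlier can be expressed with the standard expires directive, which sets the Expires and Cache-Control headers so browsers cache the assets. An illustrative sketch layered onto the static location above (30 days is an example value, not from the original article):

```nginx
# Static requests: serve from disk and let clients cache the files
location ~ \.(gif|jpg|jpeg|png|bmp|swf|css|js)$ {
  root    e:\wwwroot;
  # Sets Expires and "Cache-Control: max-age" headers on responses
  expires 30d;
}
```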

Forward proxy

A forward proxy is a server that sits between the client and the origin server. To get content from the origin server, the client sends a request to the proxy naming the target (the origin server); the proxy forwards the request to the origin server and returns the obtained content to the client. Only clients can use a forward proxy. When you need to use your own server as a proxy server, you can use Nginx to implement a forward proxy. However, Nginx currently has a problem here: it does not support HTTPS. Although I searched Baidu for ways to configure an HTTPS forward proxy, I found the proxy still did not work, so I hope anyone who knows the right way will leave a comment explaining it.

resolver 114.114.114.114 8.8.8.8;
server {
  resolver_timeout 5s;
  listen 81;
  access_log  e:\wwwroot\proxy.access.log;
  error_log   e:\wwwroot\proxy.error.log;
  location / {
    proxy_pass http://$host$request_uri;
  }
}

Here resolver specifies the DNS servers used by the forward proxy, and listen is the forward proxy’s port. Once configured, you can use the server’s IP address plus this port number as the proxy in Internet Explorer or other proxy plug-ins.

One last word

Start, stop, and configuration file location commands:

/etc/init.d/nginx start    # Start the Nginx service
/etc/init.d/nginx restart  # Restart the Nginx service
/etc/init.d/nginx stop     # Stop the Nginx service
/etc/nginx/nginx.conf      # Nginx configuration file location

I don’t know how many people knew this — I didn’t at first, so I would often kill the Nginx process and then start it again… The command to make Nginx reread its configuration is:

nginx -s reload

And on Windows:

nginx.exe -s reload

PS: Article source: Geek Tutorial Network