1. Introduction

Our real application server should not be exposed directly to the public network; doing so makes it easier to leak server information and leaves it more vulnerable to attack. The common solution is to put Nginx in front of it as a reverse proxy. In this article I'll walk through some of the capabilities an Nginx reverse proxy gives us. Nginx proxies let us implement a number of very effective API controls, which is why I have always recommended fronting a Spring Boot application with Nginx.

2. What capabilities Nginx can provide

Nginx needs no extra praise; it has been widely accepted in the industry. Let's look at what it can do for us.

2.1 Reverse proxy capability

This is the most common way we use Nginx on the server side. An Nginx server with a public network address can proxy a real server it can reach on the intranet, keeping our application servers from being exposed directly to the outside world and increasing their resilience.

Suppose the Nginx server 192.168.1.8 can communicate with the application server 192.168.1.9 on the same intranet segment, has public network access, and has the domain name felord.cn bound to its public address. Our Nginx proxy configuration (nginx.conf) looks like this:

server {
    listen      80;
    server_name felord.cn;

    # ^~ means the URI is matched against a plain prefix string;
    # if it matches, no further (regex) location matches are attempted
    location ^~ /api/v1 {
        proxy_set_header Host $host;
        proxy_pass http://192.168.1.9:8080/;
    }
}

With the above configuration, our real interface http://192.168.1.9:8080/foo/get can be accessed through http://felord.cn/api/v1/foo/get.

If proxy_pass ends with a /, it acts as an absolute root path: Nginx strips the prefix matched by location before passing the request upstream. If it does not end with a /, the matched part of the path is forwarded to the upstream as well.
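The two behaviors can be compared side by side. These are two alternatives for the same location (reusing the addresses from the example above), not one configuration:

```nginx
# Alternative 1: trailing slash, the matched prefix /api/v1 is stripped.
# GET /api/v1/foo/get  ->  http://192.168.1.9:8080/foo/get
location ^~ /api/v1 {
    proxy_pass http://192.168.1.9:8080/;
}

# Alternative 2: no trailing slash, the full original URI is forwarded.
# GET /api/v1/foo/get  ->  http://192.168.1.9:8080/api/v1/foo/get
location ^~ /api/v1 {
    proxy_pass http://192.168.1.9:8080;
}
```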

2.2 Rewrite function

Nginx also provides a rewrite capability that lets us rewrite the URI when a request arrives at the server, a bit like a Servlet Filter, to do some pre-processing.

Taking the 2.1 example, if we wanted to return 405 whenever the request is a POST, we would simply change the configuration to:

location ^~  /api/v1 {
    proxy_set_header Host $host;
    if ($request_method = POST){
      return 405;
    }
    proxy_pass http://192.168.1.9:8080/;
}

You can use Nginx's built-in global variables (such as $request_method in the configuration above) or your own variables as conditions, together with regular expressions and flags (last, break, redirect, permanent), to implement URI rewriting and redirection.
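As a sketch of how those flags are used (the /legacy path below is purely illustrative, not from the original setup):

```nginx
location ^~ /api/v1 {
    # permanent: reply with a 301 redirect so clients learn the new path
    rewrite ^/api/v1/legacy/(.*)$ /api/v1/$1 permanent;

    # break: rewrite the URI internally, stop processing rewrite directives,
    # and hand the rewritten URI to proxy_pass
    rewrite ^/api/v1/(.*)$ /$1 break;

    proxy_set_header Host $host;
    proxy_pass http://192.168.1.9:8080;
}
```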

2.3 configure HTTPS

Previously, many students in the group asked how to configure HTTPS in a Spring Boot project, and I recommend using Nginx for this. Configuring SSL in Nginx is much more convenient than in Spring Boot, and it does not affect local development. The HTTPS setup in Nginx needs only the following changes:

server {
    # enable SSL on this port (replaces the deprecated "ssl on;" directive)
    listen      443 ssl;
    server_name felord.cn;

    ssl_certificate      /etc/ssl/cert_felord.cn.crt;
    # key file name assumed to pair with the .crt above
    ssl_certificate_key  /etc/ssl/cert_felord.cn.key;
    ssl_session_timeout  5m;
    ssl_protocols        TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers          ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
    ssl_prefer_server_ciphers on;

    location ^~ /api/v1 {
        proxy_set_header Host $host;
        proxy_pass http://192.168.1.9:8080/;
    }
}

# Necessary if users type the plain HTTP address: redirect HTTP to HTTPS
server {
    listen      80;
    server_name felord.cn;
    rewrite ^/(.*)$ https://felord.cn/$1 permanent;
}

The rewrite is used here to improve the user experience: visitors who type the plain HTTP address still land on the secure site.

2.4 Load Balancing

Most projects grow from small to large. At the start, deploying one server is enough. If your user base grows, first of all, congratulations: your project is headed in the right direction. But growth also brings server pressure, and you don't want to lose the revenue that goes with server downtime. You need to quickly improve your servers' resilience, or keep the service available during maintenance. Both can be done with Nginx load balancing, and it's very simple. Suppose we deploy three nodes for felord.cn:

Round robin (the default strategy)

This configuration is the simplest:

http {
    upstream app {
        # node 1
        server 192.168.1.8:8080;
        # node 2
        server 192.168.1.10:8081;
        # node 3
        server 192.168.1.11:8082;
    }

    server {
        listen      80;
        server_name felord.cn;

        # ^~ : plain prefix match; if it matches, no further matches are attempted
        location ^~ /api/v1 {
            proxy_set_header Host $host;
            # load balance across the upstream group
            proxy_pass http://app/;
        }
    }
}

Weighted round robin

Specifies the polling probability; the weight is proportional to each node's share of traffic. Useful when back-end server performance is uneven:

upstream app {
    # node 1
    server 192.168.1.8:8080  weight=6;
    # node 2
    server 192.168.1.10:8081 weight=3;
    # node 3
    server 192.168.1.11:8082 weight=1;
}

The requests will be distributed across the three nodes in a 6:3:1 ratio. Simple round robin can be viewed as the case where every weight equals 1. With round robin, nodes that go down are automatically taken out of the rotation.

IP HASH

Hashes the client IP address so that each client always lands on the same backend server (useful for session stickiness). If a server goes down, it must be removed from the configuration manually.

upstream app {
    ip_hash;
    server 192.168.1.9:8080  weight=6;
    server 192.168.1.10:8081 weight=3;
    server 192.168.1.11:8082 weight=1;
}

Least connections

Requests are forwarded to the server with the fewest active connections, making full use of server resources:

upstream app {
    least_conn;
    server 192.168.1.9:8080  weight=6;
    server 192.168.1.10:8081 weight=3;
    server 192.168.1.11:8082 weight=1;
}

Other strategies

Other load-balancing strategies can be added through third-party modules, such as dynamic load balancing with nginx-upsync-module. Could we use this to build a grayscale (canary) release feature?
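The article only poses the question, but as one possible sketch, a simple grayscale release can be approximated with stock Nginx using the split_clients module, which hashes a request variable into percentage buckets. The upstream names, node addresses, and the 10% split below are all illustrative assumptions, not from the original setup:

```nginx
http {
    upstream app_stable {
        server 192.168.1.9:8080;
    }
    upstream app_canary {
        server 192.168.1.10:8081;
    }

    # Hash the client IP into buckets: ~10% of clients hit the canary build,
    # everyone else stays on the stable build
    split_clients "${remote_addr}" $app_pool {
        10%  app_canary;
        *    app_stable;
    }

    server {
        listen      80;
        server_name felord.cn;

        location ^~ /api/v1 {
            proxy_set_header Host $host;
            # the variable resolves to an upstream group name at request time
            proxy_pass http://$app_pool;
        }
    }
}
```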

2.5 Rate limiting

By configuring Nginx, we can apply leaky-bucket and token-bucket style algorithms, limiting access speed by capping the number of requests per unit of time and the number of simultaneous connections. I haven't researched this deeply, so I'll just mention it here; you can look it up yourself.
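As a minimal sketch of both knobs, using the stock ngx_http_limit_req_module (leaky-bucket style request-rate limiting) and ngx_http_limit_conn_module (concurrent-connection limiting). The zone names and the specific limits below are illustrative assumptions:

```nginx
http {
    # 10 requests/second per client IP, state kept in a 10 MB shared zone
    limit_req_zone  $binary_remote_addr zone=api_rate:10m rate=10r/s;
    # track concurrent connections per client IP
    limit_conn_zone $binary_remote_addr zone=api_conn:10m;

    server {
        listen      80;
        server_name felord.cn;

        location ^~ /api/v1 {
            # allow short bursts of up to 20 queued requests; excess is rejected
            limit_req  zone=api_rate burst=20 nodelay;
            # at most 20 simultaneous connections per client IP
            limit_conn api_conn 20;

            proxy_set_header Host $host;
            proxy_pass http://192.168.1.9:8080/;
        }
    }
}
```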

3. Summary

Nginx is very powerful, and I recommend using it to proxy our back-end applications. It gives us a lot of useful functionality through configuration alone, without writing non-business code. If you implement rate limiting and configure SSL inside Spring Boot itself, it also affects local development; using Nginx lets us focus on the business. You could say Nginx acts as a small gateway here. In fact, many well-known gateways are built on Nginx, such as Kong, Orange, and Apache APISIX. If you are interested, you can also play with OpenResty, an advanced form of Nginx.


Personal blog: https://felord.cn