“This is the 20th day of my participation in the November Gwen Challenge. Event details: The Last Gwen Challenge 2021.”

Previous articles:

  • Meet Nginx
  • Installing Nginx
  • Nginx core configuration file structure
  • Nginx Static Resource Deployment [1]
  • Nginx Static Resource Deployment [2]
  • Nginx Static Resource Deployment [3]
  • Nginx Static Resource Deployment [4]
  • Nginx reverse proxy [1]
  • Nginx reverse proxy [2]

Overview of Load Balancing

In the early days, web traffic and business functions were simple, and a single server was enough to meet basic needs. But as the Internet developed, traffic kept growing and business logic became more and more complex, so the performance limits and single point of failure of one server became prominent. Multiple servers are needed to scale performance and avoid a single point of failure. So how do you distribute request traffic from different users across those servers?

Load balancing principles and processing flow

The expansion of the system can be divided into vertical expansion and horizontal expansion.

Vertical scaling improves processing capacity from the perspective of a single machine, by adding hardware resources to that machine.

Horizontal scaling adds more machines to meet the processing demands of large web services.

There are two important roles involved: “application cluster” and “load balancer”.

Application cluster: The same application is deployed on multiple machines to form a processing cluster, which receives the requests from the load balancing device, processes the requests, and returns the response data.

Load balancer: Distributes user access requests to a server in the cluster for processing based on the load balancing algorithm.
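These two roles can be sketched in Nginx terms. The following is a hypothetical http-level configuration, not part of the article's later example; the backend addresses are made up:

```nginx
http {
    # the application cluster: the same app deployed on several machines
    upstream app_cluster {
        server 192.0.2.11:8080;
        server 192.0.2.12:8080;
    }

    # the load balancer: receives user requests and distributes
    # them across the cluster according to the balancing algorithm
    server {
        listen 80;
        location / {
            proxy_pass http://app_cluster;
        }
    }
}
```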

Function of load balancing

1. Solve the high concurrency pressure of the server and improve the processing performance of the application.

2. Provide failover to achieve high availability.

3. Enhance the scalability of the site by adding or reducing the number of servers.

4. Perform filtering on the load balancer to improve system security.

Common processing mode of load balancing

Method 1: Manual selection

This approach is fairly primitive: the website home page provides links to different lines or different servers, and users themselves choose which specific server to access, achieving a crude form of load balancing.

Method 2: DNS round-robin

DNS

DNS (Domain Name System) is a distributed network directory service used to translate between domain names and IP addresses.

Most domain name registrars support adding multiple records for the same host name; this is the basis of DNS round-robin. The DNS server distributes resolution requests across the different IP addresses in record order, implementing simple load balancing. DNS round-robin costs very little and is frequently used for less important servers.
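To make the idea concrete, here is a tiny shell simulation of round-robin distribution. The three addresses are hypothetical placeholders, not real servers; each "resolution request" is simply handed the next address in the list:

```shell
# Hypothetical DNS round-robin: three made-up addresses handed out in turn,
# one per resolution request.
ips="192.0.2.10 192.0.2.11 192.0.2.12"
i=0
for req in 1 2 3 4 5 6; do
  n=$(( i % 3 + 1 ))                      # select field 1, 2, 3, 1, 2, 3 ...
  ip=$(echo "$ips" | cut -d' ' -f"$n")    # the address returned for this request
  echo "request $req -> $ip"
  i=$(( i + 1 ))
done
```

Six requests are spread evenly, two per address, which is exactly what plain round-robin does regardless of how loaded each server actually is.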

Although DNS round-robin is cheap, it has obvious disadvantages.

1. Low reliability

Suppose a domain name uses DNS round-robin across multiple servers. If one server fails, requests resolved to that server get no response. Even if you remove the failed server's IP address from DNS, major ISPs cache many DNS records to reduce lookup time, so the change does not propagate in real time. DNS round-robin therefore solves the load balancing problem to a certain extent, but it has the disadvantage of low reliability.

2. Unbalanced load distribution

DNS load balancing uses a simple round-robin algorithm. It cannot distinguish between servers, does not reflect their current running state, and cannot send more requests to the servers with better performance. In addition, the local machine caches resolved domain-name-to-IP mappings, so users behind the same cache keep hitting the same web server for a period of time, which unbalances the load across the web servers.

Load imbalance means that a few servers run at low load while others carry high load and respond slowly: high-spec servers may receive few requests while low-spec servers receive many.

Method 3: Layer-4 or layer-7 load balancing

Before introducing layer-4/layer-7 load balancing, let's look at the Open System Interconnection (OSI) reference model, a network architecture specified by the International Organization for Standardization (ISO) that is not tied to any specific hardware, operating system, or vendor. The model divides network communication into seven layers.

Application layer: Provides network services for applications.

Presentation layer: data formatting, encoding, encryption, compression and other operations.

Session layer: establishes, maintains, and manages session connections.

Transport layer: Establishes, maintains, and manages end-to-end connections, such as TCP/UDP.

Network layer: IP addressing and routing.

Data link layer: Controls communication between the network layer and the physical layer.

Physical layer: Bitstream transmission.

So-called layer-4 load balancing works at the transport layer of the OSI seven-layer model and distributes traffic mainly based on IP address + port.

Ways to implement layer-4 load balancing:

  • Hardware: F5 BIG-IP, Radware, etc.
  • Software: LVS, Nginx, HAProxy

So-called layer-7 load balancing works at the application layer and distributes traffic based on virtual URLs or host addresses.

Ways to implement layer-7 load balancing:

  • Software: Nginx, HAProxy, etc.
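For instance, a layer-7 balancer can route on the requested URL path, which a layer-4 balancer cannot see. The following is a hypothetical sketch, not part of the article's later example; the upstream names and addresses are made up:

```nginx
http {
    upstream static_servers { server 192.0.2.21:80; }
    upstream api_servers    { server 192.0.2.22:8080; }

    server {
        listen 80;
        server_name www.example.com;

        # routing by URL path: only possible at layer 7,
        # where the HTTP request itself is visible
        location /static/ { proxy_pass http://static_servers; }
        location /api/    { proxy_pass http://api_servers; }
    }
}
```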

The difference between layer-4 and layer-7 load balancing

  • Layer-4 load balancing distributes packets at a lower layer, while layer-7 works at the top layer, so layer-4 is more efficient.
  • Layer-4 load balancing cannot identify domain names; layer-7 load balancing can.

By analogy, layer-2 load balancing is implemented at the data link layer based on MAC addresses, and layer-3 load balancing is implemented at the network layer based on virtual IP addresses.

The pattern adopted in real environments:

Layer-4 load balancing (LVS) + layer-7 load balancing (Nginx)

Nginx layer 4 load balancing

Today we will introduce Nginx layer-4 load balancing.

Since version 1.9, Nginx has included a stream module that implements layer-4 protocol forwarding, proxying, load balancing, and so on. The stream module is used much like the http module: it lets us configure a group of TCP or UDP listeners, forward requests with proxy_pass, and add backend servers for load balancing with upstream.

Layer-4 load balancing is generally implemented with LVS, HAProxy, F5, and the like, which are either expensive or troublesome to configure. Nginx's configuration is comparatively simple and gets the job done faster.
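Since the stream module also handles UDP, a layer-4 proxy for a UDP service looks much the same as a TCP one. A hypothetical sketch (the port and backend address are made up, and this is separate from the article's example below):

```nginx
stream {
    upstream dns_backend {
        server 192.0.2.53:53;
    }
    server {
        listen 1053 udp;        # the udp flag makes this a UDP listener
        proxy_pass dns_backend;
    }
}
```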

Adding support for the stream module

Nginx does not compile this module by default. To use the stream module, add --with-stream when compiling.

Steps to add --with-stream:

```shell
# 1. back up the existing nginx binary
cp /usr/local/nginx/sbin/nginx /usr/local/nginx/sbin/nginx.bak
# 2. record the arguments nginx was originally configured with
/usr/local/nginx/sbin/nginx -V
# 3. in the nginx source directory, re-run configure with the stream module
./configure --with-stream
# 4. compile
make
# 5. copy the newly built binary over the old one
cp objs/nginx /usr/local/nginx/sbin/nginx
```

Nginx layer-4 load balancing directives

The stream directive

This directive defines the configuration-file context in which the stream server directives are specified. It sits at the same level as the http directive.

Syntax: stream { ... }
Default: —
Context: main
The upstream directive

This directive is similar to the upstream directive in http: it defines a named group of backend servers that proxy_pass can reference.
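As in http, a stream upstream can carry load-balancing parameters; for example, least_conn and per-server weight are both supported. A hypothetical sketch (the addresses are made up, and this is separate from the article's example below):

```nginx
stream {
    upstream backend {
        least_conn;                       # pick the server with fewest active connections
        server 192.0.2.31:6379 weight=2;  # receives roughly twice the traffic
        server 192.0.2.32:6379 weight=1;
    }
    server {
        listen 81;
        proxy_pass backend;
    }
}
```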

An example of layer-4 load balancing

Requirement analysis

Implementation steps

(1) Prepare the Redis servers. Run two Redis instances on one server, on ports 6379 and 6378 respectively.

1. Upload the Redis installation package redis-4.0.14.tar.gz

2. Decompress the installation package

```shell
tar -zxf redis-4.0.14.tar.gz
```

3. Enter the extracted source directory

```shell
cd redis-4.0.14
```

4. Compile and install with make

```shell
make PREFIX=/usr/local/redis/redis01 install
```

5. Copy the configuration file redis.conf to the /usr/local/redis/redis01/bin directory

```shell
cp redis.conf /usr/local/redis/redis01/bin
```

6. Modify the redis.conf configuration file

```
port 6379
# run redis as a daemon
daemonize yes
```

7. Copy redis01 as redis02

```shell
cd /usr/local/redis
cp -r redis01 redis02
```

8. Modify the redis.conf file in the redis02 folder

```
port 6378
# run redis as a daemon
daemonize yes
```

9. Start the two instances separately and check that both Redis processes are running

```shell
ps -ef | grep redis
```
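The start commands themselves are not shown above; with the directory layout from the earlier steps, a typical way to launch the two instances is the following (assumes daemonize yes is set in both config files, as above):

```shell
# start each instance against its own config file
cd /usr/local/redis/redis01/bin && ./redis-server redis.conf
cd /usr/local/redis/redis02/bin && ./redis-server redis.conf
```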

Use Nginx to distribute requests to different Redis servers.

(2) Prepare the Tomcat server.

1. Upload the Tomcat installation package apache-tomcat-8.5.56.tar.gz

2. Decompress the installation package

```shell
tar -zxf apache-tomcat-8.5.56.tar.gz
```

3. Go to Tomcat's bin directory and start it

```shell
cd apache-tomcat-8.5.56/bin
./startup.sh
```

nginx.conf configuration

```nginx
stream {
    upstream redisbackend {
        server 192.168.200.146:6379;
        server 192.168.200.146:6378;
    }
    upstream tomcatbackend {
        server 192.168.200.146:8080;
    }
    server {
        listen 81;
        proxy_pass redisbackend;
    }
    server {
        listen 82;
        proxy_pass tomcatbackend;
    }
}
```
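Assuming the services above are running and nginx has been reloaded, the forwarding can be checked like this (a hypothetical session; adjust the host IP to your environment):

```shell
# reach Redis through the stream listener on port 81
redis-cli -h 192.168.200.146 -p 81 ping
# reach the Tomcat home page through the listener on port 82
curl http://192.168.200.146:82/
```

Repeated connections to port 81 should alternate between the two Redis instances, since the upstream defaults to round-robin.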