Author: JackTian Public account: Jack’s IT Journey (ID: Jake_Internet)



1. Load balancing

As the number of visits to a server grows, the server comes under more and more pressure; once the pressure exceeds what it can bear, the server will crash. Load balancing exists to avoid exactly this, by sharing the pressure across multiple servers.

So what exactly is load balancing? Put simply, suppose we have dozens, hundreds, or even more servers organized into a cluster. When a client sends a request, it first reaches an intermediary device, which spreads the requests evenly across the servers in the cluster. Because every request a user sends is distributed evenly over the whole cluster, each server shares the load, the overall performance of the cluster stays optimal, and crashes are avoided.

2. The role of Nginx load balancing

  • Forwarding: Nginx forwards the requests sent by clients to different application servers according to an algorithm such as round-robin or weighting, reducing the pressure on any single server and improving its concurrency.
  • Failover: when a server fails, client requests are automatically sent to other servers; when the faulty server recovers, it is automatically added back into the pool that handles user requests (see the sketch after this list).
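
As a hedged illustration of the failover behavior, the sketch below uses Nginx's max_fails, fail_timeout, and backup upstream parameters; the IPs and threshold values are example assumptions, not from the original setup:

upstream backserver {
    # take a server out of rotation after 3 failed attempts,
    # then probe it again after 30 seconds (example values)
    server 192.168.1.10 max_fails=3 fail_timeout=30s;
    server 192.168.1.11 max_fails=3 fail_timeout=30s;
    # receives traffic only while the servers above are down
    server 192.168.1.12 backup;
}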

3. Nginx load balancing strategies

1) Round-robin (default)

Each request sent by the client is assigned to the back-end servers one by one in chronological order. If a back-end server goes down, it is automatically removed from the rotation.

upstream backserver {
    server 192.168.1.10;
    server 192.168.1.11;
}
2) weight

weight represents the weight; the default value is 1. The higher the weight, the more client requests the server is assigned.

It specifies the polling probability: the weight is proportional to the access ratio, and it is used when the performance of the back-end servers is uneven, so that the stronger server receives a larger share of the requests.

upstream backserver {
    server 192.168.1.10 weight=3;
    server 192.168.1.11 weight=7;
}
3) ip_hash

Each request is assigned according to the hash of the client's IP address, so each visitor always reaches the same back-end server; this can solve session-persistence problems.

upstream backserver {
    ip_hash;
    server 192.168.1.10:80;
    server 192.168.1.11:88;
}
4) fair (third-party)

Requests are allocated based on the response time of the back-end server, and those with short response times are allocated first.

upstream backserver {
    server server1;
    server server2;
    fair;
}

5) url_hash (third-party)

Requests are allocated based on the hash of the requested URL, so that each URL is directed to the same back-end server. This is effective when the back-end servers are caches.

upstream backserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}

4. What are the common load balancing schemes?

The following figure shows a common Internet distributed architecture, which is mainly divided into:

  • The client layer
  • Reverse proxy layer
  • Server site layer
  • The service layer
  • The data layer

It can be seen that, from the request sent by a client all the way down to the data layer, each upstream layer spreads its accesses evenly over its downstream layer, so the load is ultimately amortized uniformly.

Layer 1: From the client layer to the reverse proxy layer

The load balancing between the client layer and the reverse proxy layer is implemented through DNS round-robin: multiple IP addresses are configured for the domain name on the DNS server. When a client resolves the domain name, the DNS server rotates through the configured IP addresses, each of which corresponds to one of the Nginx servers, so requests are distributed evenly across the Nginx servers.
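
As a minimal sketch of DNS round-robin, assuming BIND-style zone-file syntax, a hypothetical domain www.example.com, and two Nginx servers:

; two A records for the same name; resolvers receive them in rotating order
www.example.com.    300  IN  A  192.168.1.10
www.example.com.    300  IN  A  192.168.1.20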

Layer 2: From the reverse proxy layer to the server site layer

The load balancing between the reverse proxy layer and the server site layer is implemented by Nginx. The nginx.conf configuration file is modified to achieve multiple load balancing policies.

PS: These strategies are configured through nginx.conf and were covered above in 3. Nginx load balancing strategies (round-robin, weight, ip_hash, fair (third-party), url_hash (third-party)).

Layer 3: From the server site layer to the service layer

The load balancing between the server site layer and the service layer is achieved by the service connection pool: the upstream establishes multiple connections to the downstream service, and each request randomly selects one of those connections to access the downstream service.
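
A minimal sketch of this idea in Python (the make_connection factory is a hypothetical stand-in for whatever client library the service actually uses):

import random

class ConnectionPool:
    """Holds several pre-established connections to one downstream service."""

    def __init__(self, make_connection, size=10):
        # eagerly establish `size` connections to the downstream service
        self.connections = [make_connection() for _ in range(size)]

    def get(self):
        # each request picks a random connection, spreading load evenly
        return random.choice(self.connections)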

Layer 4: From the service layer to the data layer

When a large amount of data flows from the service layer to the data layer, the data layer (DB, cache) involves horizontal data splitting. Load balancing at the data layer is therefore more complicated; it is divided into data balancing and request balancing.

  • Data balancing: after horizontal splitting, the amount of data held by each service (DB, cache) is roughly uniform.
  • Request balancing: after horizontal splitting, the volume of requests received by each service (DB, cache) is roughly uniform.

There are two common ways of horizontal segmentation:

The first way: horizontal splitting by range

Each data service stores a certain range of data:

  • The user0 service stores uids in the range 1 to 10,000,000;
  • The user1 service stores uids in the range 10,000,000 to 20,000,000;

The benefits of this scheme are:

  • Rules are simple: the service only needs to determine the uid range to route to the corresponding storage service;
  • Data balance is good;
  • Extension is easy: a data service for uids in the range 20,000,000 to 30,000,000 can be added at any time;

The disadvantages of this scheme are:

The request load may not be balanced. New users are generally more active than old users, so the service holding the higher uid range comes under greater request pressure.
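
A minimal sketch of range-based routing in Python, assuming the two services above (names hypothetical):

# (low, high, service) triples matching the ranges described above
RANGES = [
    (1, 10_000_000, "user0"),           # user0: uid 1 to 10 million
    (10_000_001, 20_000_000, "user1"),  # user1: uid 10 to 20 million
]

def route_by_range(uid):
    # walk the ranges and return the service that owns this uid
    for low, high, service in RANGES:
        if low <= uid <= high:
            return service
    raise ValueError(f"uid {uid} is outside all configured ranges")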

The second way: horizontal splitting by ID hash

Each data service stores part of the data, determined by hashing a key value:

  • The user0 service stores data for even uids;
  • The user1 service stores data for odd uids;

The benefits of this scheme are:

  • Rules are simple, service hashes the UID and routes it to the corresponding storage service.
  • The data balance is good;
  • The request uniformity is good;

The disadvantages of this scheme are:

  • It is not easy to scale. To scale a data service, data migration may be required when the hash method changes.
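
And the matching sketch of hash-based routing, with modulo 2 over the uid standing in for the hash:

SERVICES = ["user0", "user1"]  # even uids -> user0, odd uids -> user1

def route_by_hash(uid):
    # modulo over the service count doubles as a simple hash
    return SERVICES[uid % len(SERVICES)]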

5. Nginx load balancing configuration example

1. Desired effect

Enter http://192.168.1.10/abc/20200320.html in the browser address bar; the load balancing distributes the requests evenly between ports 8080 and 8081.

2. Preparation

1) Prepare two Tomcat servers, one on port 8080 and the other on port 8081;

2) In the webapps directory of both Tomcat servers, create a folder named abc, and inside it create the page 20200320.html for testing;

In the previous article we already created the two Tomcat services on 8080 and 8081, so there is no need to create them again here; just check whether the test page files exist in the webapps directories under the 8080 and 8081 services respectively, and create them if they are missing.

Tomcat8080

# cat /root/tomcat8080/apache-tomcat-7.0.70/webapps/abc/20200320.html
<h1>welcome to tomcat 8080!</h1>

Tomcat8081

# cd /root/tomcat8081/apache-tomcat-7.0.70/webapps/
# mkdir abc
# cd abc/
# vim 20200320.html
<h1>welcome to tomcat 8081!</h1>

Switch to /root/tomcat8081/apache-tomcat-7.0.70/bin/ and start the 8081 Tomcat service.

# ./startup.sh
Using CATALINA_BASE:   /root/tomcat8081/apache-tomcat-7.0.70
Using CATALINA_HOME:   /root/tomcat8081/apache-tomcat-7.0.70
Using CATALINA_TMPDIR: /root/tomcat8081/apache-tomcat-7.0.70/temp
Using JRE_HOME:        /usr
Using CLASSPATH:       /root/tomcat8081/apache-tomcat-7.0.70/bin/bootstrap.jar:/root/tomcat8081/apache-tomcat-7.0.70/bin/tomcat-juli.jar
Tomcat started.

Validation test

Test in the client browser: Tomcat8080 at http://192.168.1.10:8080/abc/20200320.html and Tomcat8081 at http://192.168.1.10:8081/abc/20200320.html.

Tomcat8080

Tomcat8081

3) Configure load balancing in the Nginx configuration file;

In the http block, add an upstream myserver block containing the two Tomcat servers; set server_name to the IP of the Nginx server; and in the location / block, add proxy_pass http://myserver;.

# vim /usr/local/nginx/conf/nginx.conf
http {
    ......
    upstream myserver {
        server 192.168.1.10:8080;
        server 192.168.1.10:8081;
    }

    server {
        listen       80;
        server_name  192.168.1.10;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            proxy_pass http://myserver;
            root   html;
            index  index.html index.htm;
        }
    ......

After configuring load balancing in the Nginx file and restarting the Nginx service, the following warning appears:

# ./nginx -s stop
nginx: [warn] conflicting server name "192.168.1.10" on 0.0.0.0:80, ignored
# ./nginx

It means the server_name is bound twice. The warning does not affect server operation; the duplicate binding is simply between the Nginx service that is currently running and the new configuration about to be loaded, so the warning can be safely ignored.
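
As a side note (not part of the original steps), the configuration can also be checked and reloaded without a full stop/start, which avoids briefly dropping requests:

# ./nginx -t          # test the new configuration for syntax errors
# ./nginx -s reload   # gracefully reload workers with the new configuration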

Validation test

Enter http://192.168.1.10/abc/20200320.html in the client browser and keep refreshing. The page alternates, showing that client requests are being shared across the different Tomcat services, which is exactly the effect of load balancing.
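
The same check can be scripted from a shell; the alternating output below is what we would expect under the default round-robin strategy:

# for i in $(seq 1 4); do curl -s http://192.168.1.10/abc/20200320.html; done
<h1>welcome to tomcat 8080!</h1>
<h1>welcome to tomcat 8081!</h1>
<h1>welcome to tomcat 8080!</h1>
<h1>welcome to tomcat 8081!</h1>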

The effect of load balancing

Conclusion

This article introduced what load balancing is, the role of Nginx load balancing, Nginx load balancing strategies, common load balancing schemes, and an Nginx load balancing configuration example. Load balancing is one of the factors that must be considered in the design of a distributed system architecture; it usually refers to evenly distributing requests/data across multiple operating units. The key to load balancing is distributing evenly:

  • The load balancing at the reverse proxy layer is implemented through DNS round-robin;
  • The load balancing at the server site layer is implemented through Nginx;
  • The load balancing at the service layer is implemented through the service connection pool;
  • The load balancing at the data layer has to consider both data balancing and request balancing; the common approaches are horizontal splitting by range and horizontal splitting by hash.

Writing original content is not easy. If you found this article useful, please give it a like, comment, or share; that is what motivates me to keep producing quality articles. Thanks!

See you next time!