1. LVS load balancing

The architecture and principle of an LB cluster are simple: when a user request arrives, it is sent to the Director Server, which distributes it to a Real Server according to a preset scheduling algorithm. To avoid different machines returning different data, shared storage is used so that every Real Server serves the same content to all users. LVS is short for Linux Virtual Server. It is an open source project started by Dr. Zhang Wensong; its official web site is http://www.linuxvirtualserver.org, and LVS is now part of the standard Linux kernel. The goal of LVS is to build a high-performance, highly available Linux server cluster using LVS load balancing technology and the Linux operating system, with good reliability, scalability and manageability, achieving optimal performance at low cost. LVS is an open source software project that implements load balancing clusters. Logically, the LVS architecture is divided into a scheduling layer, a server cluster layer and shared storage.

2. Basic working principle of LVS

1. When a user sends a request to the load-balancing Director Server (the VIP), the request first enters kernel space.
2. The PREROUTING chain receives the packet, finds that the destination IP address is a local address, and passes the packet to the INPUT chain.
3. IPVS works on the INPUT chain. When a packet arrives at INPUT, IPVS compares it with the defined cluster services; if the packet is for a defined cluster service, IPVS rewrites the destination IP address and port and sends the packet on.
4. The POSTROUTING chain finds that the destination IP address is now one of the back-end servers and, after route selection, sends the packet to that back-end server.
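The kernel-side state that IPVS consults on the INPUT chain can be inspected directly on a Director. A minimal sketch, assuming root access and a kernel built with IPVS support:

```shell
# Load the IPVS kernel module and look at the tables it keeps.
modprobe ip_vs                 # make sure the IPVS code is in the kernel
cat /proc/net/ip_vs            # the defined cluster services and their Real Servers
cat /proc/net/ip_vs_conn       # connections currently being scheduled
```

If no cluster services have been defined yet, the tables are simply empty; rules are added from user space with ipvsadm as described in the next section.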

3. Composition of LVS

LVS consists of two parts: ipvs and ipvsadm.
1. ipvs (IP Virtual Server): a piece of code that works in kernel space; it is the code that actually implements the scheduling.
2. ipvsadm: a tool that works in user space; it is responsible for writing rules for the ipvs kernel framework, defining which addresses are cluster services and which servers are the back-end Real Servers.
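This user-space/kernel split can be sketched with a few ipvsadm commands (the VIP and RS addresses below are examples, not from a real deployment): ipvsadm writes the rules, and the ip_vs kernel code enforces them on every packet.

```shell
VIP=10.0.0.10                                  # example virtual service address
ipvsadm -C                                     # clear any existing virtual services
ipvsadm -A -t $VIP:80 -s rr                    # -A: define a cluster (virtual) service
ipvsadm -a -t $VIP:80 -r 192.168.0.18:80 -m    # -a: add a Real Server (-m = NAT mode)
ipvsadm -a -t $VIP:80 -r 192.168.0.28:80 -m
ipvsadm -Ln                                    # list the rules now held by the kernel
```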

4. Terms related to LVS

1. DS: Director Server, the front-end load balancer node.
2. RS: Real Server, a back-end server that does the real work.
3. VIP: Virtual IP, the externally visible IP address that user requests are sent to.
4. DIP: Director Server IP, the address used to communicate with the internal hosts.
5. RIP: Real Server IP, the IP address of a back-end server.
6. CIP: Client IP, the IP address of the client.
The following sections summarize the principles and characteristics of the three working modes.

5. Principles and characteristics of LVS/NAT

Focus on understanding how the NAT rewriting works and how the packet addresses change at each step.

(a) When a user request arrives at the Director Server, the packet first enters the PREROUTING chain in kernel space. At this point the source IP address is CIP and the destination IP address is VIP.
(b) PREROUTING finds that the destination IP address is a local address and passes the packet to the INPUT chain.
(c) IPVS checks whether the service requested by the packet is a defined cluster service. If it is, IPVS rewrites the destination IP address to the selected Real Server's RIP and sends the packet to the POSTROUTING chain. Now the source IP address is CIP and the destination IP address is RIP.
(d) The POSTROUTING chain selects a route and sends the packet to the Real Server.
(e) The Real Server finds that the destination is its own IP address, processes the request, and sends the response back to the Director Server. At this point the source IP address is RIP and the destination IP address is CIP.
(f) Before responding to the client, the Director Server rewrites the source IP address to its own VIP. The response then reaches the client with source IP VIP and destination IP CIP.

2. Features of the LVS-NAT model:
- RS should use private addresses, and the gateway of each RS must point to the DIP.
- DIP and RIP must be in the same network segment.
- Both requests and responses pass through the Director Server, so under high load the Director Server becomes a performance bottleneck.
- Port mapping is supported.
- RS can run any operating system.
- Drawback: the Director Server is under heavy pressure, because all requests and responses must pass through it.
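One way to watch the NAT rewriting described in steps (a)-(f) is to capture on both sides of the Director. A sketch, assuming the external NIC is eth0 and the internal NIC is eth1 (interface names are examples):

```shell
# External side: packets appear as src=CIP dst=VIP (and src=VIP dst=CIP on the way back).
tcpdump -ni eth0 port 80

# Internal side: the same packets after IPVS rewriting, src=CIP dst=RIP
# (and src=RIP dst=CIP for the responses heading back to the Director).
tcpdump -ni eth1 port 80
```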

6. Principles and characteristics of LVS/DR

The key idea of DR mode: the Director rewrites the destination MAC address of the request packet to the MAC address of the selected RS, leaving the IP header unchanged.

(a) When a user request arrives at the Director Server, the packet first enters the PREROUTING chain in kernel space. The source IP address is CIP and the destination IP address is VIP.
(b) PREROUTING finds that the destination IP address is a local address and passes the packet to the INPUT chain.
(c) IPVS checks whether the service requested by the packet is a cluster service. If it is, IPVS changes the source MAC address to the DIP's MAC address and the destination MAC address to the selected RIP's MAC address, then sends the packet to the POSTROUTING chain. The source and destination IP addresses are not changed; only the MAC addresses are.
(d) Because the DS and RS are on the same network, the packet is delivered at Layer 2. The POSTROUTING chain sends the packet, now addressed to the RIP's MAC, to the Real Server.
(e) The RS sees that the destination MAC address of the packet is its own and accepts it. After processing, the response is sent out through the VIP bound on lo and then eth0, directly toward the client. The source IP address is VIP and the destination IP address is CIP.
(f) The response reaches the client without passing back through the Director.

2. Features of the LVS-DR model:
- If public IP addresses are used as RIPs, the RS can also be reached directly from the Internet.
- RS and Director Server must be on the same physical network (same Layer 2 segment).
- All request packets pass through the Director Server, but response packets must not.
- Port mapping is not supported.
- RS can run most common operating systems.
- The gateway of RS must never point to the DIP (because responses are not allowed to pass through the Director).
- The lo interface on each RS is configured with the VIP.
- Drawback: RS and DS must be in the same machine room.

Because every RS carries the VIP, ARP resolution for the VIP must be restricted so that only the Director answers. Possible solutions:
1. Bind a static entry on the front-end router so that the VIP is always routed to the Director Server. In practice the router is often provided by the carrier and the user has no permission to change its routing, so this method may not be usable.
2. Use arptables: apply firewall rules at the ARP layer on each RS to filter ARP resolution, so the RS does not respond to ARP requests for the VIP.
3. Modify the kernel parameters (arp_ignore and arp_announce) on each RS, configure the VIP on an alias of the lo interface, and restrict the RS from responding to address resolution requests for the VIP.
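The arptables approach mentioned above can be sketched as follows. This is an illustrative recipe, not from the original text: the VIP and RIP values are examples, and the arptables package must be installed on the RS.

```shell
VIP=192.168.0.38    # example virtual service address
RIP=192.168.0.18    # this Real Server's own address

# Never answer ARP queries for the VIP on this RS.
arptables -A INPUT -d $VIP -j DROP

# Never advertise the VIP as our source address in outgoing ARP traffic.
arptables -A OUTPUT -s $VIP -j mangle --mangle-ip-s $RIP
```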

7. Principles and characteristics of LVS/TUN

The key idea of TUN mode: encapsulate the request in an additional IP header. The inner IP header keeps source CIP and destination VIP; the outer IP header carries source DIP and destination RIP.

(a) When a user request arrives at the Director Server, the packet first enters the PREROUTING chain in kernel space. The source IP address is CIP and the destination IP address is VIP.
(b) PREROUTING finds that the destination IP address is a local address and passes the packet to the INPUT chain.
(c) IPVS checks whether the service requested by the packet is a cluster service. If it is, IPVS encapsulates another IP header in front of the request packet, with source IP DIP and destination IP RIP, and sends the packet to the POSTROUTING chain.
(d) The POSTROUTING chain sends the packet to the RS according to the newly encapsulated outer IP header (because of the extra outer IP header, the packet is effectively transmitted through a tunnel). The outer source IP is DIP and the outer destination IP is RIP.
(e) After receiving the packet, the RS sees that the outer destination IP is its own and accepts it. It strips the outermost IP header and finds an inner header whose destination is the VIP on its lo interface. It processes the request and sends the response out through lo and eth0 directly to the client, with source IP VIP and destination IP CIP.
(f) The response finally reaches the client.

2. Features of the LVS-TUN model:
- RIP, VIP and DIP should all be publicly routable IP addresses.
- The gateway of RS does not and must not point to the DIP.
- All request packets pass through the Director Server, but response packets must not.
- Port mapping is not supported.
- The RS operating system must support IP tunneling.

In enterprises, DR is the most commonly used mode, although NAT is simpler and more convenient to configure.
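On the RS side, TUN mode needs an IPIP tunnel interface that can decapsulate packets and hold the VIP. A minimal sketch, assuming Linux real servers and example addresses (not part of the original text):

```shell
VIP=192.168.0.38    # example virtual service address

modprobe ipip                                     # enable IPIP decapsulation
ifconfig tunl0 $VIP netmask 255.255.255.255 up    # put the VIP on the tunnel device

# Suppress ARP for the VIP, as in DR mode.
echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce

# Relax reverse-path filtering: inner and outer addresses differ on tunnelled packets.
echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter
```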

8. The eight scheduling algorithms of LVS

1. Round-Robin (rr): the simplest algorithm. It schedules requests to the servers in turn, assuming all servers are equally capable; the scheduler distributes requests evenly regardless of each RS's configuration and processing power.
2. Weighted Round-Robin (wrr): adds a weight on top of rr. It is an optimization of rr that takes server performance into account: each server is assigned a weight, and the higher the weight, the more requests the server receives. For example, if server A has weight 1 and server B has weight 2, B will be scheduled twice as many requests as A.
3. Least Connections (lc): distributes requests according to the number of active connections on each back-end RS. For example, if RS1 has fewer connections than RS2, the next request is sent to RS1 first.
4. Weighted Least Connections (wlc): lc with weights; requests are assigned considering both the connection count and the server weight.
5. Locality-Based Least Connections (lblc): schedules according to the destination IP address of the request. It tries to keep requests for the same destination IP on the same server, and only chooses another available server when necessary.
6. Locality-Based Least Connections with Replication (lblcr): instead of mapping a destination IP address to a single server, it maintains a mapping from a destination IP address to a set of servers, preventing a single server's load from becoming too high.
7. Destination Hashing (dh): uses a hash function on the destination IP address to map it to a server, so requests for the same destination IP are always dispatched to the same mapped server, as long as that server is available and not overloaded.
8. Source Hashing (sh): similar to dh, but statically assigns fixed server resources by hashing the source address, so requests from the same client IP always go to the same server.
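The weighted round-robin distribution can be illustrated with a toy script. This is only a sketch of the resulting 2:1 split for weights B=2, A=1; it is not the kernel's actual WRR implementation, and the server names are made up:

```shell
# Toy weighted round-robin: with weights A=1 and B=2,
# B should receive 2 of every 3 requests and A should receive 1.
schedule_wrr() {
  # $1 = number of requests; prints the chosen server, one per line
  i=0
  while [ "$i" -lt "$1" ]; do
    case $(( i % 3 )) in
      0|1) echo B ;;   # B holds 2 of every 3 slots
      *)   echo A ;;
    esac
    i=$(( i + 1 ))
  done
}

schedule_wrr 6   # prints B B A B B A (one per line)
```

With real IPVS, the scheduler is chosen per virtual service, e.g. `ipvsadm -A -t VIP:80 -s wrr`; the names rr, wrr, lc, wlc, lblc, lblcr, dh and sh select the eight algorithms above.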

9. Practice: the NAT mode of LVS

Environment: the director has an external NIC (172.16.254.200) and an internal IP address (192.168.0.8); the two real servers have only intranet IP addresses, 192.168.0.18 and 192.168.0.28.

Install nginx on both real servers (it comes from the epel repository):

```
yum install -y epel-release
yum install -y nginx
```

Install ipvsadm on the director:

```
yum install -y ipvsadm
```

Write the director script, vim /usr/local/sbin/lvs_nat.sh:

```
#! /bin/bash
# Enable routing and forwarding on the director
echo 1 > /proc/sys/net/ipv4/ip_forward
# Disable icmp redirects
echo 0 > /proc/sys/net/ipv4/conf/all/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/default/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth0/send_redirects
echo 0 > /proc/sys/net/ipv4/conf/eth1/send_redirects
# Set up the NAT firewall on the director
iptables -t nat -F
iptables -t nat -X
iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -j MASQUERADE
# Configure ipvsadm on the director
IPVSADM='/sbin/ipvsadm'
$IPVSADM -C
$IPVSADM -A -t 172.16.254.200:80 -s wrr
$IPVSADM -a -t 172.16.254.200:80 -r 192.168.0.18:80 -m -w 1
$IPVSADM -a -t 172.16.254.200:80 -r 192.168.0.28:80 -m -w 1
```

After saving, run the script to configure LVS/NAT:

```
/bin/bash /usr/local/sbin/lvs_nat.sh
```

Check the rules set by ipvsadm with `ipvsadm -ln`, then test the effect by opening http://172.16.254.200 in a browser. To tell the two machines apart, change the default nginx page:

On rs1: echo "rs1rs1" > /usr/share/nginx/html/index.html
On rs2: echo "rs2rs2" > /usr/share/nginx/html/index.html

Note that on both RS, the gateway must be set to the internal IP address of the director.
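Once the environment above is up, the NAT setup can be checked from a client and on the director; a sketch, assuming the addresses from the example:

```shell
# From a client: repeated requests should alternate between the two pages.
curl http://172.16.254.200   # "rs1rs1" or "rs2rs2"
curl http://172.16.254.200   # the other page, thanks to wrr with equal weights

# On the director: the rule table plus per-RS connection counters.
ipvsadm -Ln
```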

10. Practice: the DR mode of LVS

Environment: three machines:
- director (eth0 192.168.0.8, VIP on eth0:0 192.168.0.38)
- real server1 (eth0 192.168.0.18, VIP on lo:0 192.168.0.38)
- real server2 (eth0 192.168.0.28, VIP on lo:0 192.168.0.38)

Install nginx on the two real servers and ipvsadm on the director, as before.

Write the director script, vim /usr/local/sbin/lvs_dr.sh:

```
#! /bin/bash
echo 1 > /proc/sys/net/ipv4/ip_forward
ipv=/sbin/ipvsadm
vip=192.168.0.38
rs1=192.168.0.18
rs2=192.168.0.28
ifconfig eth0:0 down
ifconfig eth0:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip dev eth0:0
$ipv -C
$ipv -A -t $vip:80 -s wrr
$ipv -a -t $vip:80 -r $rs1:80 -g -w 3
$ipv -a -t $vip:80 -r $rs2:80 -g -w 1
```

Write the same script on both real servers, vim /usr/local/sbin/lvs_dr_rs.sh:

```
#! /bin/bash
vip=192.168.0.38
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
```

Run the scripts: bash /usr/local/sbin/lvs_dr.sh on the director, and bash /usr/local/sbin/lvs_dr_rs.sh on each RS. In DR mode, the gateway of the two RS nodes does not need to be set to the IP address of the director node.

Reference link: http://www.cnblogs.com/lgfeng/archive/2012/10/16/2726308.html
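A quick sanity check for the DR setup, run from another host on the 192.168.0.0/24 segment (assuming the arping tool is installed; addresses as in the example above):

```shell
# Only the director should answer ARP for the VIP, thanks to arp_ignore/arp_announce
# on the real servers; the replying MAC should belong to the director's eth0.
arping -I eth0 -c 3 192.168.0.38

# Requests to the VIP should be served by rs1 and rs2 (weighted 3:1 here).
curl http://192.168.0.38
```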

11. LVS combined with Keepalived

LVS can achieve load balancing, but it cannot perform health checks. For example, if an RS fails, LVS will still forward requests to the failed RS server, resulting in wasted requests. Keepalived can perform health checks and at the same time make LVS itself highly available, solving the single point of failure of the Director. In fact, Keepalived was designed with LVS in mind.

Environment:
- keepalived1 + lvs1 (Director1): 192.168.0.48
- keepalived2 + lvs2 (Director2): 192.168.0.58
- Real Server1: 192.168.0.18
- Real Server2: 192.168.0.28
- VIP: 192.168.0.38

Install the packages. On both directors:

```
yum install ipvsadm keepalived -y
```

On both real servers (nginx comes from epel):

```
yum install epel-release -y
yum install nginx -y
```

Configure the real servers with the same script as in the DR practice, vim /usr/local/sbin/lvs_dr_rs.sh:

```
#! /bin/bash
vip=192.168.0.38
ifconfig lo:0 $vip broadcast $vip netmask 255.255.255.255 up
route add -host $vip lo:0
echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
```

Run the script on both real servers: bash /usr/local/sbin/lvs_dr_rs.sh

Configure the two keepalived nodes. On the MASTER node, vim /etc/keepalived/keepalived.conf:

```
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.0.38
    }
}
virtual_server 192.168.0.38 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    persistence_timeout 0
    protocol TCP
    real_server 192.168.0.18 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
    real_server 192.168.0.28 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 10
            nb_get_retry 3
            delay_before_retry 3
            connect_port 80
        }
    }
}
```

On the BACKUP node, copy the keepalived.conf from the master and change:
- state MASTER -> state BACKUP
- priority 100 -> priority 90

Enable forwarding on both keepalived nodes:

```
echo 1 > /proc/sys/net/ipv4/ip_forward
```

Start the keepalived service on the master first, then on the backup:

```
service keepalived start
```

Experiment 1: stop nginx on 192.168.0.18 with service nginx stop, then access http://192.168.0.38 from a client. The result is normal, and node 18 is no longer accessed.

Experiment 2: restart nginx on 192.168.0.18 with service nginx start, then access http://192.168.0.38 from a client. The result is normal, and nodes 18 and 28 are both accessed following the rr scheduling algorithm.

Experiment 3: test keepalived's HA feature. First run ip addr on the master; the VIP 192.168.0.38 is on the master node. Run service keepalived stop on the master; the VIP is no longer there. Run ip addr on the backup node; the VIP has correctly floated to the backup. Accessing http://192.168.0.38 from a client still works, verifying keepalived's HA feature.
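During the experiments, a simple probe loop on a client makes the failover behaviour visible (the VIP is the one configured above):

```shell
# Probe the VIP once per second; during a clean failover the loop should keep
# printing page content, with at most a brief "request failed" around the switch.
while true; do
  curl -s --max-time 1 http://192.168.0.38 || echo "request failed"
  sleep 1
done
```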

Introduction to LVS: www.it165.net/admin/html/…