The implementation principles of the LVS load-balancing NAT, FULLNAT, DR, and TUN models were introduced earlier. This chapter moves on to hands-on practice ~

Practice environment

LVS is now part of the Linux kernel as a module called IPVS, which supports the NAT, DR, and Tunnel models. Users cannot operate the IPVS module directly; instead, they install the user-space tool ipvsadm and use it to interact with IPVS.

Use 3 UCloud cloud hosts to build the experimental environment. When creating the cloud hosts, choose hourly (pay-as-you-go) billing, which is more cost-effective.

Experimental machine and environment

  • 3 UCloud cloud hosts, CentOS 7.9 64-bit, 1 core 1 GB. Pay attention to the firewall rules: in this practice, choose [Web Server Recommendation], which opens ports 22, 3389, 80, and 443; this can be adjusted later
  • Two real servers, RS01 and RS02, and one load-balancing server, LB01
  • RS01:, RS02:, LB01:
  • RS01 and RS02 install httpd, quickly start the HTTP service, and configure different responses
  • LB01 installs ipvsadm and starts the ipvsadm service

NAT mode operation

Review the features of NAT mode

  • NAT mode modifies the “destination IP address” or “source IP address” of each packet. All request and response packets must pass through the load balancer, so NAT mode supports port translation
  • The default gateway for the real server is the load balancer, so the real server and the load balancer must be on the same network segment

Before the actual operation, some preparation is needed: install and start the required software and services.

RS01, RS02 install HTTPD, quick start HTTP service

yum install httpd -y && service httpd start
echo "Hello From RS01" > /var/www/html/index.html   # on RS02, write "Hello From RS02" instead

Verify locally with curl that each server returns its own response.
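A minimal check sketch, assuming httpd listens on port 80 on each real server; LB01_EIP is a hypothetical placeholder for LB01's extranet IP, which is not given in this article:

```shell
# on each real server: confirm the local page is served
curl -s http://127.0.0.1/

# from any client, once the load-balancing rules are in place:
# repeated requests against the balancer should alternate between the two pages
LB01_EIP="x.x.x.x"   # placeholder: substitute LB01's real extranet IP
for i in 1 2 3 4; do curl -s "http://$LB01_EIP/"; done
```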

LB01 installs ipvsadm and starts ipvsadm

yum install ipvsadm && ipvsadm --save > /etc/sysconfig/ipvsadm && service ipvsadm start

At this point, ipvsadm has started successfully.

With the preparation done, configure the specific load rules for NAT mode.

RS01, RS02

Set the default gateway to the DIP, that is, the intranet IP of LB01

View the current default gateway on RS01 and RS02

route -n

You can see that the current default gateway is

Set the default gateway to

route add default gw

After typing the command and pressing Enter, it is normal to get no response for a long time: the SSH connection is broken because the return path has changed. Reconnect by logging in to RS01 and RS02 through LB01.

Delete the previous default gateway

route del default gw

LB01 configures the routing entry rules, using the -A parameter to add the virtual service

  • Because the experiment uses cloud hosts, and a cloud host’s EIP (extranet IP) is mapped to the bound host through NAT, the EIP cannot be bound as the VIP. Here, the intranet IP is used directly as the DIP

ipvsadm -A -t -s rr

Then configure the rules for the real servers, using the -a parameter (with -m for NAT forwarding)


Verify the configuration

ipvsadm -ln

Enable routing and forwarding

echo 1 >/proc/sys/net/ipv4/ip_forward
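Putting the LB01 steps together, a consolidated sketch; the addresses below are hypothetical placeholders for the intranet IPs omitted in this article:

```shell
# hypothetical addresses for illustration only:
#   DIP (LB01 intranet IP, acting as the service address): 10.9.0.10
#   RIPs (RS01/RS02 intranet IPs):                         10.9.0.11, 10.9.0.12
ipvsadm -A -t 10.9.0.10:80 -s rr                  # -A: new virtual service, rr: round robin
ipvsadm -a -t 10.9.0.10:80 -r 10.9.0.11:80 -m     # -a: add a real server, -m: NAT (masquerade)
ipvsadm -a -t 10.9.0.10:80 -r 10.9.0.12:80 -m
echo 1 > /proc/sys/net/ipv4/ip_forward            # enable kernel IP forwarding
ipvsadm -ln                                       # confirm the service and both real servers
```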

Here is a summary of the ipvsadm options used:

  • -A  add a new virtual server record, that is, a new virtual service
  • -a  add a new real server record, that is, add a real server to a virtual service
  • -t  the virtual service uses TCP
  • -s  load-balancing scheduling algorithm; rr stands for round robin
  • -w  set the weight of a real server
  • -r  specify the real server address
  • -m  LVS uses NAT mode (masquerading)
  • -g  LVS uses DR mode (gatewaying)
  • -i  LVS uses TUNNEL mode (ipip)

As you can see, the above configuration uses NAT mode and the scheduling algorithm is round robin.

At this point, the configuration is complete. Next, verify whether LB01 load-balances to RS01 and RS02 as expected. Use a browser to open LB01’s extranet IP address directly.

Due to the browser’s caching mechanism, the response may not change across quick refreshes. Use curl instead, which makes the alternation easy to see.

Verification successful ~

Tunnel Mode implementation

Review the characteristics of Tunnel mode

Tunnel mode does not change the original packet. Instead, it adds another IP header on top of the original packet. So Tunnel mode does not support port translation, and the real server must be able to parse two layers of IP header information

The real server and the load balancer need not be in the same network segment

The real server needs to adjust its ARP settings to “hide” the VIP bound on the local interface

Tunnel mode differs from the other modes in that the VIP cannot be used directly as the DIP as before, so an extra DIP is needed:

Begin configuring specific load rules ~

RS01, RS02

Install the IPIP module

modprobe ipip

Verify that the IPIP module was loaded successfully

lsmod | grep ipip

Adjust the ARP kernel parameters

echo 1 > /proc/sys/net/ipv4/conf/tunl0/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/tunl0/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce

rp_filter controls reverse-path validation of a packet’s source address. Here, simply turn the validation off.

echo 0 > /proc/sys/net/ipv4/conf/tunl0/rp_filter
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter

Configure the DIP

ifconfig tunl0 broadcast netmask 255.255.255.255 up
route add -host tunl0
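Taken together, the real-server commands above can be sketched with a hypothetical VIP of 10.9.0.100 (a placeholder, not an address from this article):

```shell
modprobe ipip                                        # load the IP-in-IP tunnel module (creates tunl0)
# do not answer or announce ARP for addresses bound to tunl0
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
echo 0 > /proc/sys/net/ipv4/conf/all/rp_filter       # disable reverse-path source validation
# bind the VIP to tunl0 with a /32 host mask
ifconfig tunl0 10.9.0.100 broadcast 10.9.0.100 netmask 255.255.255.255 up
route add -host 10.9.0.100 dev tunl0
```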

Verify the configuration


route -n

LB01 configures the routing exit rules with ipvsadm (-A ... -s wrr). Because the real servers are in a different network segment, RS01 and RS02 must be added here by their extranet addresses
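A sketch of what these rules might look like; the VIP and the real servers' extranet IPs below are hypothetical placeholders for the addresses omitted in this article:

```shell
# hypothetical: VIP/DIP 10.9.0.100 on LB01; RS_EIP1/RS_EIP2 stand in for the
# real servers' extranet IPs
RS_EIP1="x.x.x.1"; RS_EIP2="x.x.x.2"
ipvsadm -A -t 10.9.0.100:80 -s wrr                   # wrr: weighted round robin
ipvsadm -a -t 10.9.0.100:80 -r "$RS_EIP1" -i -w 1    # -i: tunnel (ipip) mode
ipvsadm -a -t 10.9.0.100:80 -r "$RS_EIP2" -i -w 1
```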


Configure the DIP and install the IPIP module

modprobe ipip
ifconfig tunl0 broadcast netmask 255.255.255.255 up
route add -host tunl0

Verify the configuration

ipvsadm -ln

route -n

After the configuration is complete, create one more cloud host to verify the actual result.

The DIP is a virtual IP address and cannot be found on the network, so a route to the DIP must be added manually, pointing at LB01.

route add -host gw

Verify with route -n
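On the verification host, the steps above can be sketched as follows; the VIP 10.9.0.100 and LB01_IP are hypothetical placeholders, not addresses from this article:

```shell
LB01_IP="x.x.x.x"                          # placeholder: LB01's reachable address
route add -host 10.9.0.100 gw "$LB01_IP"   # send traffic for the DIP via LB01
route -n                                   # confirm the host route is present
curl -s http://10.9.0.100/                 # repeated requests should alternate RS01/RS02
```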

Finally, verify that Tunnel mode was set up successfully.

Verification successful ~

  • The next article will continue with the implementation of DR mode and deployment using Keepalived