How to bring external traffic into a newly built Kubernetes cluster is a problem people often run into when getting started with Kubernetes. On a public cloud, the official answer is the LoadBalancer Service, which uses the load balancing service provided by the cloud vendor to accept traffic and balance it among multiple servers.

In a private environment, however, there is no standard practice for how to properly channel external traffic into the cluster. This article introduces a method of carrying traffic and load balancing based on IPVS for your reference. Before reading on, it is recommended to read “Analysis of several ways to access Kubernetes cluster applications from outside” for the necessary background.

IPVS

IPVS, part of the LVS project, is a layer 4 load balancer running in the Linux kernel. It is reported that, with a tuned kernel, it can easily handle more than 100,000 forwarding requests per second. IPVS is now widely used in medium to large Internet projects to handle traffic at the entrance of web sites.

Kubernetes Service

Service is one of the basic concepts of Kubernetes. It abstracts a group of Pods into one service, which serves traffic as a unit and load balances among all the Pods. There are several types of Service. The most basic type, ClusterIP, is used to access the service from within the cluster. The NodePort type exposes the service through a port on each Node. The LoadBalancer type distributes traffic to the exposed ports of each Node through a front-end load balancer; iptables then performs load balancing and finally forwards the traffic to the actual Pod.

Service also has a less commonly known field: externalIPs. When an IP address is specified in externalIPs, kube-proxy adds a corresponding iptables rule; iptables performs NAT and forwards traffic destined for that IP to the corresponding Service. externalIPs is rarely used because a server does not normally accept traffic addressed to an IP that is not bound to it, but together with other tools at the network layer it can be used to bind services to an external IP address.
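For illustration, a minimal Service manifest using externalIPs might look like the following sketch (the name, selector, and addresses are chosen to match the demonstration later in this article):

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    run: nginx
  ports:
  - port: 80
    protocol: TCP
  externalIPs:
  - 172.17.8.201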

Today, we will use externalIPs and IPVS Direct Routing (DR) mode to introduce external traffic into the cluster and achieve load balancing.

Environment setup

For the demonstration, we set up a cluster of four servers. One server runs IPVS and acts as the load balancer, one runs the Kubernetes Master components, and the other two join the Kubernetes cluster as Nodes. The setup process is not detailed here; you can refer to the relevant documentation, such as “Step by step with me to deploy kubernetes cluster”.

All servers are in the 172.17.8.0/24 network segment, and the service VIP is set to 172.17.8.201. The overall architecture is shown in the figure below:
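In brief, the roles are (the Node addresses appear in the IPVS configuration later in this article):

  • IPVS load balancer: holds the VIP 172.17.8.201

  • Kubernetes Master: runs the control-plane components

  • Node 1: 172.17.8.11

  • Node 2: 172.17.8.12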

Next let’s configure IPVS and Kubernetes.

Use externalIPs to expose the Kubernetes Service

Start by running two Nginx pods inside the cluster as a demonstration.

$ kubectl run nginx --image=nginx --replicas=2

Expose it as a Service and set the externalIPs field:

$ kubectl expose deployment nginx --port 80 --external-ip 172.17.8.201
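You can confirm that the field was set by inspecting the resulting Service (output omitted):

$ kubectl get service nginx -o yaml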

View the iptables configuration to verify that the rules have been added:

$ sudo iptables -t nat -L KUBE-SERVICES -n
Chain KUBE-SERVICES (2 references)
target     prot opt source               destination
KUBE-SVC-4N57TFCL4MD7ZTDA  tcp  --  0.0.0.0/0    10.3.0.156    /* default/nginx: cluster IP */ tcp dpt:80
KUBE-MARK-MASQ             tcp  --  0.0.0.0/0    172.17.8.201  /* default/nginx: external IP */ tcp dpt:80
KUBE-SVC-4N57TFCL4MD7ZTDA  tcp  --  0.0.0.0/0    172.17.8.201  /* default/nginx: external IP */ tcp dpt:80 PHYSDEV match ! --physdev-is-in ADDRTYPE match src-type ! LOCAL
KUBE-SVC-4N57TFCL4MD7ZTDA  tcp  --  0.0.0.0/0    172.17.8.201  /* default/nginx: external IP */ tcp dpt:80 ADDRTYPE match dst-type LOCAL
KUBE-SVC-NPX46M4PTMTKRN6Y  tcp  --  0.0.0.0/0    10.3.0.1      /* default/kubernetes:https cluster IP */ tcp dpt:443
KUBE-NODEPORTS             all  --  0.0.0.0/0    0.0.0.0/0     /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
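In these rules, traffic addressed to the external IP 172.17.8.201 on port 80 is sent into the KUBE-SVC-4N57TFCL4MD7ZTDA chain, which selects one of the nginx Pod endpoints and DNATs the packet to it; the KUBE-MARK-MASQ rule marks such packets so that they are SNATed on the way out.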

Configure IPVS to forward traffic

First, enable IPv4 forwarding on the IPVS server.

$ sudo sysctl -w net.ipv4.ip_forward=1
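Note that sysctl -w only lasts until the next reboot. To make the setting persistent you could, for example, append it to /etc/sysctl.conf:

$ echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.conf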

Next, load the IPVS kernel module.

$ sudo modprobe ip_vs
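You can verify that the module is loaded with lsmod; on systemd-based systems, as one example, it can be made to load on boot via modules-load.d:

$ lsmod | grep ip_vs
$ echo ip_vs | sudo tee /etc/modules-load.d/ipvs.conf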

Bind the VIP to the network adapter.

$ sudo ifconfig eth0:0 172.17.8.201 netmask 255.255.255.0 broadcast 172.17.8.255
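If ifconfig is not available, an equivalent command using iproute2 would be roughly:

$ sudo ip addr add 172.17.8.201/24 broadcast 172.17.8.255 dev eth0 label eth0:0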

Next, configure IPVS using ipvsadm. To avoid depending on a particular distribution's packages, we run it directly from a Docker image.

$ docker run --privileged -it --rm --net host luizbafilho/ipvsadm
/ # ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
/ # ipvsadm -A -t 172.17.8.201:80
/ # ipvsadm -a -t 172.17.8.201:80 -r 172.17.8.11:80 -g
/ # ipvsadm -a -t 172.17.8.201:80 -r 172.17.8.12:80 -g
/ # ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.17.8.201:http wlc
  -> 172.17.8.11:http             Route   1      0          0
  -> 172.17.8.12:http             Route   1      0          0
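In the session above, -A adds the virtual service for the VIP, -a adds a real server to it, -r specifies the real server's address, and -g selects gateway (DR) mode, in which IPVS only rewrites the destination MAC address so that the real server replies to the client directly. wlc (weighted least-connection) is the default scheduler.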

As you can see, we have successfully set up forwarding from the VIP to the back-end servers.

Verify the forwarding effect

Use curl to check whether the Nginx service can be accessed correctly.

$ curl http://172.17.8.201
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Packets are then captured on 172.17.8.11 to verify that IPVS is working properly.

$ sudo tcpdump -i any port 80
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
04:09:07.503858 IP 172.17.8.1.51921 > 172.17.8.201.http: Flags [S], seq 2747628840, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 1332071005 ecr 0,sackOK,eol], length 0
04:09:07.504241 IP 10.2.0.1.51921 > 10.2.0.3.http: Flags [S], seq 2747628840, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 1332071005 ecr 0,sackOK,eol], length 0
04:09:07.504498 IP 10.2.0.1.51921 > 10.2.0.3.http: Flags [S], seq 2747628840, win 65535, options [mss 1460,nop,wscale 5,nop,nop,TS val 1332071005 ecr 0,sackOK,eol], length 0
04:09:07.504827 IP 10.2.0.3.http > 10.2.0.1.51921: Flags [S.], seq 3762638044, ack 2747628841, win 28960, options [mss 1460,sackOK,TS val 153786592 ecr 1332071005,nop,wscale 7], length 0
04:09:07.504827 IP 10.2.0.3.http > 172.17.8.1.51921: Flags [S.], seq 3762638044, ack 2747628841, win 28960, options [mss 1460,sackOK,TS val 153786592 ecr 1332071005,nop,wscale 7], length 0
04:09:07.504888 IP 172.17.8.201.http > 172.17.8.1.51921: Flags [S.], seq 3762638044, ack 2747628841, win 28960, options [mss 1460,sackOK,TS val 153786592 ecr 1332071005,nop,wscale 7], length 0
04:09:07.505599 IP 172.17.8.1.51921 > 172.17.8.201.http: Flags [.], ack 1, win 4117, options [nop,nop,TS val 1332071007 ecr 153786592], length 0

As you can see, the packet sent from 172.17.8.1 to 172.17.8.201 was forwarded by IPVS to 172.17.8.11 and then NATed by iptables to Pod 10.2.0.3. The reply was sent directly from 172.17.8.11 back to 172.17.8.1, bypassing the IPVS server, which shows that the DR mode of IPVS works properly. Repeating the test multiple times, traffic enters alternately through 172.17.8.11 and 172.17.8.12 and is distributed to different Pods, which shows that load balancing works properly.

Unlike a traditional IPVS DR setup, we do not bind the VIP on the servers that receive traffic and then suppress ARP. That is because processing for the VIP happens directly in iptables: instead of a program on the server taking the traffic, iptables forwards it to the corresponding Pod.
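For comparison, a traditional DR real server would typically bind the VIP to its loopback interface and suppress ARP for it, roughly like this (not needed in our setup):

$ sudo ip addr add 172.17.8.201/32 dev lo
$ sudo sysctl -w net.ipv4.conf.all.arp_ignore=1
$ sudo sysctl -w net.ipv4.conf.all.arp_announce=2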

To accept traffic this way, you only need to set externalIPs to the VIP; no special settings on the servers are required.

Conclusion

This article demonstrated how to use IPVS and externalIPs to bring external traffic into a Kubernetes cluster and implement load balancing. Hopefully it helps you understand how IPVS and externalIPs work so that you can use them to solve problems in suitable situations. In an actual deployment, you would also need to consider issues such as backend health checks, primary/backup failover for the IPVS node, and horizontal scaling; they are not covered here.

Kubernetes has a number of lesser-known but very useful features similar to externalIPs, some of which are even configured through annotations. I will share them in future articles.

Copyright: This article is copyrighted by the author

Today’s idea

Wanting something too much is the beginning of losing it; the more afraid you are of losing what you have, the tighter you hold on. The tighter you grasp a handful of sand, the faster it slips away; the harder you try to hold on to a person, the faster they leave. When you open your arms, your hands can embrace the whole world; when you want nothing, everything becomes a pleasant surprise.

— A zen little Monk

Recommended reading

  • Diagram of Docker architecture

  • Diagram of Kubernetes architecture

  • Rapid deployment of Ingress using Helm

  • How Docker commands work

  • Kubernetes data persistence scheme