Preface

It is May, spring is in the air, and the epidemic is largely behind us, which is worth celebrating. On this fine day, let's build a Kubernetes high-availability, load-balanced cluster.

Update history

  • 07 May 2020 – First draft – zuolinux
  • Original address: blog.zuolinux.com/2020/05/07/…

Platform environment

Software information

  • CentOS Linux release 7.7.1908 (Kernel 3.10.0-1062.18.1.el7.x86_64)
  • Docker CE 18.09.9
  • Kubernetes v1.18.2
  • Calico v3.8
  • Keepalived v1.3.5
  • HAproxy v1.5.18

Hardware information

Host name    IP
master01     192.168.10.12
master02     192.168.10.13
master03     192.168.10.14
work01       192.168.10.15
work02       192.168.10.16
work03       192.168.10.17
VIP          192.168.10.19

The cluster configuration

Initialization

Run on all master and worker nodes

cat >> /etc/hosts << EOF
192.168.10.12    master01
192.168.10.13    master02
192.168.10.14    master03
192.168.10.15    work01
192.168.10.16    work02
192.168.10.17    work03
EOF
# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab
# Install wget
yum install wget -y
# Back up the default repo and switch to the Aliyun mirror
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
# Add the Aliyun EPEL source
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
# Clear the yum cache and rebuild it
yum clean all && yum makecache
# Update packages
yum update -y
# Synchronize time
timedatectl
timedatectl set-ntp true
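A quick sanity check before moving on (optional; both commands are standard CentOS 7 tooling) to confirm SELinux and swap are really off:

# SELinux should report Permissive now (Disabled after a reboot)
getenforce
# the Swap line should show 0 total
free -m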

Install Docker

Install on all master and worker nodes

# Install prerequisites (yum-utils provides yum-config-manager)
yum install -y yum-utils device-mapper-persistent-data lvm2
# Add the Aliyun Docker CE repo
yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker CE 18.09.9
yum install -y docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io
systemctl start docker
systemctl enable docker
# Configure the Docker daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": { "max-size": "100m" },
  "storage-driver": "overlay2",
  "registry-mirrors": [
    "http://hub-mirror.c.163.com",
    "https://docker.mirrors.ustc.edu.cn",
    "https://registry.docker-cn.com"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart Docker
systemctl daemon-reload
systemctl restart docker
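To confirm that Docker is up and picked up the systemd cgroup driver set above, a quick check:

# expect: Cgroup Driver: systemd
docker info | grep -i "cgroup driver"
# client and server should both report 18.09.9
docker version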

Install kubeadm, kubelet, kubectl

Run these commands on all master and worker nodes

# Configure the Kubernetes yum source (use the official source instead if you can reach it)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Add kernel network parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# Load the sysctl settings
sysctl --system
# Install kubelet, kubeadm, and kubectl
yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2 --disableexcludes=kubernetes
# Start kubelet and enable it at boot
systemctl start kubelet
systemctl enable kubelet
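Note that kubelet will restart in a loop until kubeadm init runs; that is expected at this point. To confirm the tools landed at the intended version:

# both should report v1.18.2
kubeadm version -o short
kubectl version --client --short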

Use HAProxy to load-balance the apiserver

Run on all master nodes

yum install haproxy-1.5.18 -y

cat > /etc/haproxy/haproxy.cfg <<EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend k8s-api
    mode tcp
    option tcplog
    bind *:16443
    default_backend k8s-api

backend k8s-api
    mode tcp
    balance roundrobin
    server master01 192.168.10.12:6443 check
    server master02 192.168.10.13:6443 check
    server master03 192.168.10.14:6443 check
EOF

HAProxy is enabled on all master nodes

systemctl start haproxy
systemctl enable haproxy
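To verify that HAProxy came up and is listening on the frontend port (ss ships with CentOS 7):

# expect a LISTEN entry on *:16443
ss -lnt | grep 16443
systemctl status haproxy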

Use Keepalived to make the apiserver highly available

Run on all master nodes

yum -y install keepalived psmisc

Keepalived configuration on Master01:

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id master01
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 10
}
vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 50
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.19
    }
    track_script {
        check_haproxy
    }
}
EOF

Keepalived configuration on Master02:

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id master02
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 10
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 50
    priority 98
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.19
    }
    track_script {
        check_haproxy
    }
}
EOF

Keepalived configuration on Master03:

cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id master03
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 2
    weight 10
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 50
    priority 96
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.19
    }
    track_script {
        check_haproxy
    }
}
EOF

Run on all master nodes

systemctl start keepalived
systemctl enable keepalived
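At this point the VIP should be bound on master01. A quick way to check (assuming the interface is ens192, as in the configs above):

# prints a line only on the node that currently holds the VIP
ip -4 addr show ens192 | grep 192.168.10.19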

Notes

  • The vrrp_script weight must be greater than the priority gap between MASTER and BACKUP (here 10 > 2); otherwise a failed haproxy check can never change which node holds the highest effective priority, and the VIP will not move.
  • Adding the nopreempt parameter to vrrp_instance prevents an automatic switch back to the original MASTER after it recovers; see the sketch below.
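A minimal sketch of the nopreempt variant for master01 (an assumption based on the VI_1 instance above, not part of the original setup). Keepalived honors nopreempt only when an instance starts in state BACKUP, so all three nodes would use BACKUP and rely on priority alone:

vrrp_instance VI_1 {
    state BACKUP                # nopreempt only takes effect in BACKUP state
    nopreempt                   # keep the VIP where it is after a failed node recovers
    interface ens192
    virtual_router_id 50
    priority 100                # still 100 / 98 / 96 per node
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.19
    }
    track_script {
        check_haproxy
    }
}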

Create a K8S cluster

Before initializing, add a hosts entry so that APISERVER_NAME (the apiserver domain name) resolves to MASTER_IP (the VIP address):

export MASTER_IP=192.168.10.19
export APISERVER_NAME=k8s.api
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
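To confirm the entry resolves as intended (getent is part of glibc on CentOS 7):

# should print 192.168.10.19
getent hosts k8s.api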

Execute kubeadm init on master01 for initialization

kubeadm init \
    --apiserver-advertise-address 0.0.0.0 \
    --apiserver-bind-port 6443 \
    --cert-dir /etc/kubernetes/pki \
    --control-plane-endpoint k8s.api \
    --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
    --kubernetes-version 1.18.2 \
    --pod-network-cidr 192.10.0.0/16 \
    --service-cidr 192.20.0.0/16 \
    --service-dns-domain cluster.local \
    --upload-certs

Save the output: the two kubeadm join commands it prints are used later to add the other master nodes and the worker nodes to the cluster.

Load environment variables

Run on master01 so that kubectl can manage the cluster

If you are logged in as the root user:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source .bash_profile
Copy the code

If you are not the root user, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
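Either way, kubectl should now be able to talk to the apiserver; for example:

kubectl cluster-info
# master01 will show NotReady until the pod network is installed
kubectl get nodes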

Install the Pod network component

Run on master01

# Download the manifest
mkdir calico && cd calico
wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
# Edit the manifest (typically set CALICO_IPV4POOL_CIDR to the --pod-network-cidr value, 192.10.0.0/16)
vi calico.yaml
# Apply it
kubectl apply -f calico.yaml

View the pod status in real time

watch kubectl get pods --all-namespaces -o wide

Add other master nodes to the K8S cluster

Execute on other master nodes

The join command below comes from the output of kubeadm init on master01.

Change the port from 6443 to 16443 so that the join goes through the HAProxy frontend instead of a single apiserver.

export MASTER_IP=192.168.10.19
export APISERVER_NAME=k8s.api
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
kubeadm join k8s.api:16443 --token ztjux9.2tau56zck212j9ra \
    --discovery-token-ca-cert-hash sha256:a2b552266902fb5f6620330fc1a6638a9cdd6fec3408edba1082e6c8389ac517 \
    --control-plane --certificate-key 961494e7d0a9de0219e2b0dc8bdaa9ca334ecf093a6c5f648aa34040ad39b61a
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source .bash_profile
Copy the code

Add all Worker nodes to the K8S cluster

Run on all worker nodes

The join command below also comes from the output of kubeadm init on master01.

Here too, change the port from 6443 to 16443.

export MASTER_IP=192.168.10.19
export APISERVER_NAME=k8s.api
echo "${MASTER_IP} ${APISERVER_NAME}" >> /etc/hosts
kubeadm join k8s.api:16443 --token ztjux9.2tau56zck212j9ra \
    --discovery-token-ca-cert-hash sha256:a2b552266902fb5f6620330fc1a6638a9cdd6fec3408edba1082e6c8389ac517

View the cluster on master01

watch kubectl get nodes -o wide

If every node shows Ready, the cluster was installed successfully.
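The output should end up looking roughly like this (ages will differ, and -o wide adds further columns such as INTERNAL-IP):

NAME       STATUS   ROLES    AGE   VERSION
master01   Ready    master   30m   v1.18.2
master02   Ready    master   20m   v1.18.2
master03   Ready    master   18m   v1.18.2
work01     Ready    <none>   10m   v1.18.2
work02     Ready    <none>   10m   v1.18.2
work03     Ready    <none>   10m   v1.18.2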

Destructive testing

  • Stop haproxy on master01
  • Shut down master01 entirely

In both cases you can watch the VIP drift to master02.

Then repeat the same operations on master02 and observe whether the VIP floats on to master03.
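During these tests you can check, from any node, that the apiserver is still reachable through the VIP and HAProxy (on a default kubeadm cluster the /version endpoint is served to anonymous clients), and on each master whether that node currently holds the VIP:

# from any node: apiserver via the VIP and HAProxy
curl -k https://192.168.10.19:16443/version
# on each master: does this node hold the VIP?
ip -4 addr show ens192 | grep 192.168.10.19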

Conclusion

Today we built a Kubernetes high-availability, load-balanced cluster; everything above is a record of my actual steps.

Do you see anything that could be optimized further?

A bowl of chicken soup for you

  • The sooner you have a problem, the better. If you don’t have a problem, you have a bigger problem.

References

wsgzao.github.io/post/keepal…

www.kubernetes.org.cn/7315.html

Contact me

WeChat official account: zuolinux_com