If the single Master node fails, the whole cluster goes down. The key to high availability therefore lies in having multiple Master nodes, and the deployment process is considerably more complex than for a single-master cluster.

After much thought, and given the number of problems I ran into during my actual deployment (god knows what I went through), I decided to cover only the deployment process in this article, i.e. how to do it, so that you can successfully deploy a highly available Kubernetes cluster. As for the why, I'll leave that as small talk for the next article.

The HA solution architecture diagram is as follows:

The HAProxy and Keepalived components are required to provide high availability for the Master nodes.

Preparatory work

Machine role    IP
master          10.128.2.53
master          10.128.2.52
master          10.11.0.220
node            10.128.1.187
node            10.11.7.94
node            10.11.0.181
node            10.11.7.125

All machines need to be prepared as described in the earlier article on rapidly deploying a Kubernetes cluster with kubeadm; in addition, the following points need special attention:

Set the hostname

You need to configure the hosts file and the hostname on every machine:

## vi /etc/hosts
10.128.2.53 kubernetes-master01
10.128.2.52 kubernetes-master02
10.11.0.220 kubernetes-master03
10.128.1.187 kubernetes-node01
10.11.7.94 kubernetes-node02
10.11.0.181 kubernetes-node03
10.11.7.125 kubernetes-node04
10.128.2.111 kubernetes-vip
hostnamectl set-hostname kubernetes-master01
# ... set the corresponding hostname on each of the other machines

10.128.2.111 (kubernetes-vip) is the virtual IP (VIP) address managed by Keepalived.

Node trust

You need to set up mutual trust (passwordless SSH) between the Master nodes, as well as passwordless login from the Masters to the Nodes. For details, see the previous article.
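A minimal sketch of the trust setup with SSH keys, assuming root logins and the hostnames configured above (repeat on the other Master nodes so that every Master can reach every other machine):

ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa      # generate a key pair if one does not exist yet
for host in kubernetes-master02 kubernetes-master03 \
            kubernetes-node01 kubernetes-node02 kubernetes-node03 kubernetes-node04; do
  ssh-copy-id root@${host}                             # push the public key to each machine
done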

Install IPVS

IPVS is one of the load-balancing implementations available to kube-proxy; the default implementation is iptables. The difference is described later.

  • Install the software
yum install ipvsadm ipset sysstat conntrack libseccomp -y
  • Load the modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
modprobe -- ip_tables
modprobe -- ip_set
modprobe -- xt_set
modprobe -- ipt_set
modprobe -- ipt_rpfilter
modprobe -- ipt_REJECT
modprobe -- ipip
EOF

Configure the modules to load automatically on restart

Execute on all nodes:

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack
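On systemd-based distributions, an alternative sketch is to let systemd-modules-load handle the autoloading instead of the sysconfig script (same module list as above):

cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load   # load them immediately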

Kubeadm, kubelet, kubectl installation

Configure the yum source

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Rebuild the yum cache; type y when asked to import the GPG keys
yum makecache fast
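If you want to check which versions the mirror offers before installing, an optional quick look:

yum list kubelet --showduplicates | sort -r | head -n 10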

This installation uses the latest version at the time of writing, 1.20.4.

Set the kubelet cgroupDriver to systemd. This is the official recommendation, but there is an interesting point here.

mkdir -p /var/lib/kubelet   # the directory does not exist yet before kubelet is installed
cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
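For the cgroup drivers to actually match, the container runtime should use systemd as well. A sketch for Docker (assuming Docker is the runtime, as it is used elsewhere in this deployment):

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# merge with any existing daemon.json settings instead of overwriting them
systemctl daemon-reload && systemctl restart docker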

Install on all nodes:

yum install -y kubelet-1.20.4 kubeadm-1.20.4 kubectl-1.20.4
systemctl enable kubelet && systemctl start kubelet && systemctl status kubelet

Cluster setup

The first step is to install the high-availability components HAProxy and Keepalived. Here I run them with Docker, and they need to be installed on all three Master nodes.

HAProxy installation

  • Create the configuration file haproxy.cfg
vim /etc/haproxy/haproxy.cfg

Modify the node information according to your actual environment:

#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
# https://www.haproxy.org/download/2.1/doc/configuration.txt
#   https://cbonte.github.io/haproxy-dconv/2.1/configuration.html
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------global # to have these messages end up in /var/log/haproxy.log you will # need to: # # 1) configure syslog to accept network log events. This is done # by adding the '-r' option to the SYSLOGD_OPTIONS in  # /etc/sysconfig/syslog # # 2) configure local2 events to go to the /var/log/haproxy.log # file. A line like the Following can be added to # /etc/sysconfig/syslog # # local2.* /var/log/haproxy.log # log 127.0.0.1 local2
#chroot /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
#user haproxy
#group haproxy
    # daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:9443
    ## bind *:443 ssl # To be completed....
    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js
    default_backend      kubernetes-apiserver
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp                # tcp mode
    balance     roundrobin         # load-balancing algorithm: roundrobin
    # k8s-apiservers backend       # the apiservers, port 6443
    server kubernetes-master01 10.128.2.53:6443 check
    server kubernetes-master02 10.128.2.52:6443 check
    server kubernetes-master03 10.11.0.220:6443 check

Copy the configuration file to the other Master nodes:

scp /etc/haproxy/haproxy.cfg root@kubernetes-master02:/etc/haproxy
scp /etc/haproxy/haproxy.cfg root@kubernetes-master03:/etc/haproxy

Finally, start it with Docker:

docker run -d --name=haproxy --net=host \
  -v /etc/haproxy:/usr/local/etc/haproxy:ro \
  -v /var/lib/haproxy:/var/lib/haproxy \
  haproxy:2.3.6
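A quick sanity check that the proxy is up and listening on port 9443 (container name as above):

docker logs haproxy          # should show no fatal configuration errors
ss -lntp | grep 9443         # haproxy should be listening on *:9443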

Install keepalived

Keepalived took me quite a while to install and caused a lot of problems, since I had never used it before. Keepalived also needs to be installed on all three Master nodes, and the configuration differs slightly between them.

Again, create a configuration file first:

vim /etc/keepalived/keepalived.conf

View the NIC information:

ip addr

The configuration file is as follows:

global_defs {
    script_user root
    enable_script_security
}

vrrp_script chk_haproxy {
    script "/bin/bash -c 'if [[ $(netstat -nlp | grep 9443) ]]; then exit 0; else exit 1; fi'"   # check whether haproxy is still listening
    interval 2    # run the check every 2 seconds
    weight 11     # priority change when the check succeeds
}

vrrp_instance VI_1 {
    interface ens192         # fill in according to your machine, see the `ip addr` output
    state MASTER             # MASTER here, BACKUP on the backup nodes
    virtual_router_id 51     # must be identical on all three nodes
    priority 100             # initial weight

    # Note that my three Master nodes are not all on the same network segment.
    # Without unicast_peer the masters can split-brain; on each node, list the
    # other two Master nodes according to the current node.
    unicast_peer {
        10.128.2.53
        10.128.2.52
    }

    virtual_ipaddress {
        10.128.2.111         # the VIP
    }

    authentication {
        auth_type PASS
        auth_pass password
    }

    track_script {
        chk_haproxy
    }

    notify "/container/service/keepalived/assets/notify.sh"
}

Finally, run it with Docker:

docker run -d --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW --net=host \
  --volume /etc/keepalived/keepalived.conf:/container/service/keepalived/assets/keepalived.conf \
  osixia/keepalived:2.0.20 --copy-service
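On the node currently holding the MASTER role, the VIP should be attached to the interface. A quick check (the container is not given a fixed name here, so look it up with docker ps first):

docker ps | grep keepalived                   # confirm the container is running
ip addr show ens192 | grep 10.128.2.111       # the VIP should appear on the MASTER node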

Kubernetes cluster initialization

The following operations are performed on kubernetes-master01.

We still use kubeadm to build the cluster. Create a new kubeadm cluster configuration file, kubeadm.yaml; since kube-proxy is going to use IPVS as mentioned earlier, this also needs to be declared in the configuration:

cat >> kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.20.2
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
controlPlaneEndpoint: "kubernetes-vip:9443"
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.211.0.0/12
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF

Execute the initialization command:

kubeadm init --config kubeadm.yaml --upload-certs

After it succeeds, execute:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

And note down the commands for joining nodes to the cluster:

kubeadm join kubernetes-vip:9443 --token 6dntoh.syhp7vi2j7ikv5uv \
    --discovery-token-ca-cert-hash sha256:250115fad0a4b6852a919dbba4222ac65bc64843c660363ab119606ff8819d0a \
    --control-plane --certificate-key 7bd1bc54ee1fdbffcbfb17f93b32496cb93c0688523f7d5fe414cefd48fb05fe

kubeadm join kubernetes-vip:9443 --token 6dntoh.syhp7vi2j7ikv5uv \
    --discovery-token-ca-cert-hash sha256:250115fad0a4b6852a919dbba4222ac65bc64843c660363ab119606ff8819d0a
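The token and the certificate key expire (by default after 24 hours and 2 hours respectively), so if you add nodes later the commands can be regenerated on an existing Master:

kubeadm token create --print-join-command       # prints a fresh worker join command
kubeadm init phase upload-certs --upload-certs  # re-uploads the control-plane certs and prints a new certificate key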

Install the Calico network component by executing:

wget https://docs.projectcalico.org/v3.8/manifests/calico.yaml
kubectl apply -f calico.yaml

Add the remaining Master nodes and worker Nodes to the cluster using the join commands above, then make sure everything is in a normal state:
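A sketch of the checks, run from kubernetes-master01:

kubectl get nodes -o wide          # all seven nodes should eventually report Ready
kubectl get pods -n kube-system    # control-plane, calico and kube-proxy pods should be Running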

At this point the high availability cluster is set up.

Cluster testing

With Keepalived set up, we need a test to verify the high availability. The detection script configured above checks port 9443, so the test is very simple: just stop HAProxy and watch the Keepalived logs.
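A sketch of the test, using the containers started earlier (run on the Master node currently holding the VIP):

docker stop haproxy                             # port 9443 goes down, so chk_haproxy starts failing
docker logs -f $(docker ps -q --filter ancestor=osixia/keepalived:2.0.20)   # watch this node leave the MASTER state
ip addr show ens192                             # the VIP 10.128.2.111 should move to another master
docker start haproxy                            # restore HAProxy afterwards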

In the Keepalived logs you can see the failover process very clearly.

This is the end of this article, which is a purely procedural summary. The next article will summarize some of the issues and principles behind Kubernetes high availability cluster deployment.