
The previous article, “Kubernetes Setup (Part 1): MiniKube-based Local Deployment,” showed how to stand up a local Kubernetes development environment. A production Kubernetes environment is far more complex than that; at a minimum it runs as a cluster. This article therefore starts from the production side and walks you through a production-oriented Kubernetes cluster deployment, so that you really understand how a real Kubernetes cluster environment is built.

1. Environment preparation

If you use VMware virtual machines to install the Kubernetes cluster, prepare the following environment:

  • Two VMs: CentOS 7. The higher the specs, the better.
  • Docker version: 19.03.13
  • kubeadm version: v1.20.0

2. System initialization

Before the installation, configure system parameters and configurations in a unified manner to ensure smooth subsequent installation.

Perform system initialization on the Master and Node nodes.

2.1 Setting the System Host Name

hostnamectl set-hostname <hostname>

Implementation process:

  • Master node:
[root@localhost xcbeyond]# hostnamectl set-hostname k8s-master
  • Node node:
[root@localhost xcbeyond]# hostnamectl set-hostname k8s-node01

2.2 Modifying the hosts File

Modify the hosts file so that the nodes in the cluster can reach each other by host name.

On both the Master and Node nodes, add the following entries to the /etc/hosts file:

192.168.11.100 k8s-master
192.168.11.101 k8s-node01

Replace the IP addresses above with the actual addresses of the corresponding nodes.
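After saving /etc/hosts, it is worth confirming that each node can resolve and reach the other by name. A minimal check, using the host names above:

ping -c 2 k8s-node01    # from the Master
ping -c 2 k8s-master    # from the Node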

2.3 Installing Dependency Packages

As you work with Kubernetes, you’ll probably need some tools that you can install in advance for later use.

yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp net-tools

2.4 Switching the Firewall to iptables and Setting Empty Rules

systemctl stop firewalld && systemctl disable firewalld

yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

2.5 Disabling Swap and SELinux

# Disable swap
swapoff -a && sed -i '/ swap/s/^\(.*\)$/#\1/g' /etc/fstab
# Disable SELinux
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
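To confirm both changes took effect, a quick check (swap should report 0, and getenforce should print Permissive until the next reboot, Disabled afterwards):

free -m | grep -i swap
getenforce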

2.6 Adjusting kernel Parameters

cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1   # let iptables on the node see bridged traffic correctly
net.bridge.bridge-nf-call-ip6tables=1  # same for ip6tables
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1                 # do not check whether physical memory is sufficient
vm.panic_on_oom=0                      # enable OOM handling
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
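One caveat from upstream kernel changes rather than from this article: net.ipv4.tcp_tw_recycle was removed in Linux 4.12, so after the kernel upgrade in section 2.8 sysctl -p will report that key as unknown; simply delete that line. You can spot-check the values that matter most for Kubernetes:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward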

2.7 Adjusting the Time Zone

(If the time zone is correct, you do not need to adjust it.)

# Set the system time zone
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart the time-dependent services
systemctl restart rsyslog
systemctl restart crond

2.8 Upgrading the System Kernel to 5.4

CentOS 7.x ships with a 3.10.x kernel that has known bugs which make Docker and Kubernetes run unstably, so upgrade the kernel before installing:

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installation, check /boot/grub2/grub.cfg and make sure the menuentry
# for the new kernel includes the initrd16 configuration; if not, install again.
yum --enablerepo=elrepo-kernel install -y kernel-lt
# Set the default boot kernel
grub2-set-default 'CentOS Linux (5.4.93-1.el7.elrepo.x86_64) 7 (Core)'
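If you are unsure of the exact menu entry title to pass to grub2-set-default, you can list the installed entries first (a common approach; on some systems the file is the /etc/grub2.cfg symlink):

awk -F\' '$1=="menuentry " {print $2}' /boot/grub2/grub.cfg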

Implementation process:

[root@k8s-master xcbeyond]# uname -r
3.10.0-1127.19.1.el7.x86_64
[root@k8s-master xcbeyond]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
Retrieving http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
warning: /var/tmp/rpm-tmp.Xf145X: Header V4 DSA/SHA1 Signature, key ID baadae52: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:elrepo-release-7.0-3.el7.elrepo  ################################# [100%]
[root@k8s-master xcbeyond]# yum --enablerepo=elrepo-kernel install -y kernel-lt
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Warning: RPM database has been modified by a non-yum program.
...
  Installing : kernel-lt-5.4.93-1.el7.elrepo.x86_64    1/1
  Verifying  : kernel-lt-5.4.93-1.el7.elrepo.x86_64    1/1

Installed:
  kernel-lt.x86_64 0:5.4.93-1.el7.elrepo

Complete!
[root@k8s-master xcbeyond]# grub2-set-default 'CentOS Linux (5.4.93-1.el7.elrepo.x86_64) 7 (Core)'
[root@k8s-master xcbeyond]# reboot

After the restart is complete, check whether the system kernel has been upgraded successfully.

[xcbeyond@k8s-master ~]$ uname -r
5.4.93-1.el7.elrepo.x86_64

Don't forget to execute this on the Node as well!

2.9 IPVS Prerequisites for kube-proxy

modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Implementation process:

[root@k8s-master xcbeyond]# modprobe br_netfilter
[root@k8s-master xcbeyond]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
> #!/bin/bash
> modprobe -- ip_vs
> modprobe -- ip_vs_rr
> modprobe -- ip_vs_wrr
> modprobe -- ip_vs_sh
> modprobe -- nf_conntrack_ipv4
> EOF
[root@k8s-master xcbeyond]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
modprobe: FATAL: Module nf_conntrack_ipv4 not found.
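The FATAL error above appears because, from kernel 4.19 onward (including the 5.4 kernel installed in section 2.8), nf_conntrack_ipv4 has been merged into nf_conntrack. A variant of the script for the newer kernel might look like this (a sketch; confirm the module names on your system with lsmod):

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack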

Don't forget to execute this on the Node as well!

3. Docker Installation

The Docker installation process will not be covered here. For details, see the previous article.

4. Installing kubeadm

4.1 Installing kubeadm, kubectl, and kubelet

The following packages need to be installed on every machine (Master and Node):

  • kubeadm: the command used to bootstrap the cluster.
  • kubectl: the command-line tool used to talk to the cluster.
  • kubelet: the agent that runs on every node in the cluster and starts Pods and containers.

(1) Configure the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Implementation process:

[root@k8s-master xcbeyond]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
> http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

(2) Install kubeadm, kubectl, and kubelet

yum -y  install  kubeadm kubectl kubelet

Implementation process:

[root@k8s-master xcbeyond]# yum -y install kubeadm kubectl kubelet
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.neusoft.edu.cn
 * elrepo: mirrors.neusoft.edu.cn
 * extras: mirrors.neusoft.edu.cn
 * updates: mirrors.neusoft.edu.cn
Resolving Dependencies
--> Package kubeadm.x86_64 0:1.20.2-0 will be installed
--> Package kubectl.x86_64 0:1.20.2-0 will be installed
--> Package kubelet.x86_64 0:1.20.2-0 will be installed
--> Dependencies resolved: cri-tools, kubernetes-cni, socat

================================================================================
 Package            Arch      Version           Repository    Size
================================================================================
Installing:
 kubeadm            x86_64    1.20.2-0          kubernetes    8.3 M
 kubectl            x86_64    1.20.2-0          kubernetes    8.5 M
 kubelet            x86_64    1.20.2-0          kubernetes    20 M
Installing for dependencies:
 cri-tools          x86_64    1.13.0-0          kubernetes    5.1 M
 kubernetes-cni     x86_64    0.8.7-0           kubernetes    19 M
 socat              x86_64    1.7.3.2-2.el7     base          290 k

Transaction Summary
================================================================================
Install  3 Packages (+3 Dependent packages)

Total size: 61 M
Total download size: 52 M
Installed size: 262 M
Downloading packages:
...
Installed:
  kubeadm.x86_64 0:1.20.2-0   kubectl.x86_64 0:1.20.2-0   kubelet.x86_64 0:1.20.2-0

Dependency Installed:
  cri-tools.x86_64 0:1.13.0-0   kubernetes-cni.x86_64 0:0.8.7-0   socat.x86_64 0:1.7.3.2-2.el7

Complete!
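The log above shows yum installing the latest 1.20.2 packages. If you want to pin the exact versions instead of taking whatever is newest in the repository, yum accepts a version suffix (a sketch, assuming these versions are still published in the repo):

yum -y install kubeadm-1.20.2-0 kubectl-1.20.2-0 kubelet-1.20.2-0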

(3) Enable kubelet to start on boot

systemctl enable kubelet.service

Implementation process:

[root@k8s-master xcbeyond]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

Don't forget to execute this on the Node as well!
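Note that kubelet cannot stay running at this point: it restarts in a loop until kubeadm init (or kubeadm join) generates its configuration. You can observe this with:

systemctl status kubelet
# "activating (auto-restart)" here is expected until the cluster is initialized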

4.2 Creating a Cluster

4.2.1 Pulling the Required Images

By default, kubeadm pulls the images needed to create a cluster from the k8s.gcr.io registry, which cannot be reached directly from mainland China.

(The required images have been re-published to Docker Hub, so they can be pulled directly from within China.)

Run the following script on both the Master and Node nodes.

The image pull script k8s-images-pull.sh is as follows:

#!/bin/bash
kubernetes_version="v1.20.0"

# Pull the required images from the xcbeyond namespace on Docker Hub
kubeadm config images list --kubernetes-version=${kubernetes_version} | sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#xcbeyond#g' | sh -x

# Re-tag the images back to k8s.gcr.io
docker images | grep xcbeyond | awk '{print "docker tag",$1":"$2,$1":"$2}' | sed -e 's#xcbeyond#k8s.gcr.io#2' | sh -x

# Delete the xcbeyond-tagged images
docker images | grep xcbeyond | awk '{print "docker rmi",$1":"$2}' | sh -x

To see which images are needed, run kubeadm config images list.
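For v1.20.0, based on the images pulled later in this article, the list should read:

kubeadm config images list --kubernetes-version=v1.20.0
k8s.gcr.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
k8s.gcr.io/kube-proxy:v1.20.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0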

The script's execution process is as follows:

[root@k8s-master xcbeyond]# ./k8s-images-pull.sh
+ docker pull xcbeyond/kube-apiserver:v1.20.0
...
+ docker pull xcbeyond/kube-controller-manager:v1.20.0
...
+ docker pull xcbeyond/kube-scheduler:v1.20.0
...
+ docker pull xcbeyond/kube-proxy:v1.20.0
...
+ docker pull xcbeyond/pause:3.2
...
+ docker pull xcbeyond/etcd:3.4.13-0
...
+ docker pull xcbeyond/coredns:1.7.0
...
+ docker tag xcbeyond/pause:3.2 k8s.gcr.io/pause:3.2
+ docker tag xcbeyond/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0
+ docker tag xcbeyond/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
+ docker tag xcbeyond/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
+ docker tag xcbeyond/kube-proxy:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0
+ docker tag xcbeyond/kube-scheduler:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0
+ docker tag xcbeyond/kube-apiserver:v1.20.0 k8s.gcr.io/kube-apiserver:v1.20.0
+ docker rmi xcbeyond/pause:3.2
+ docker rmi xcbeyond/kube-controller-manager:v1.20.0
+ docker rmi xcbeyond/coredns:1.7.0
+ docker rmi xcbeyond/etcd:3.4.13-0
+ docker rmi xcbeyond/kube-proxy:v1.20.0
+ docker rmi xcbeyond/kube-scheduler:v1.20.0
+ docker rmi xcbeyond/kube-apiserver:v1.20.0
[root@k8s-master xcbeyond]# docker image ls
REPOSITORY                           TAG        IMAGE ID       CREATED        SIZE
k8s.gcr.io/pause                     3.2        b76329639608   16 hours ago   688kB
k8s.gcr.io/kube-controller-manager   v1.20.0    630f45a9961f   16 hours ago   116MB
k8s.gcr.io/coredns                   1.7.0      4e42ad8cda50   21 hours ago   45.2MB
k8s.gcr.io/etcd                      3.4.13-0   999b6137af27   21 hours ago   253MB
k8s.gcr.io/kube-proxy                v1.20.0    51912faaf3a3   21 hours ago   118MB
k8s.gcr.io/kube-scheduler            v1.20.0    62181d1bf9a1   21 hours ago   46.4MB
k8s.gcr.io/kube-apiserver            v1.20.0    0f7e1178e374   22 hours ago   122MB

Don't forget to execute this on the Node as well!

4.2.2 Initializing the Primary Node

The Master node is the control node of the Kubernetes cluster; it runs etcd (the cluster database) and the API Server (the entry point for all cluster control commands).

To initialize the Master node, run kubeadm init.

(1) Modify the kubeadm initialization configuration file.

Run the kubeadm config print init-defaults command to generate the default kubeadm configuration template and save it to kubeadm-config.yml:

kubeadm config print init-defaults > kubeadm-config.yml

Modify the following parameters:

localAPIEndpoint:
  advertiseAddress: 192.168.11.100   # IP address of the Master node
kubernetesVersion: v1.20.0
networking:
  podSubnet: "10.244.0.0/16"         # Pod subnet, required by the flannel add-on
  serviceSubnet: 10.96.0.0/12
# Append the following new content at the end of the file:
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

(2) Initialization.

kubeadm init --config=kubeadm-config.yml  | tee kubeadm-init.log

Piping the output through tee saves the initialization log to the kubeadm-init.log file so it can easily be reviewed later.

If kubeadm init fails, run kubeadm reset before the next kubeadm init attempt. This command resets the node; think of it as cleaning up the state left behind by the failed initialization.
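A fuller cleanup between attempts might look like this (a sketch: the iptables flush mirrors what kubeadm reset itself suggests, and removing $HOME/.kube only matters if admin.conf was already copied there):

kubeadm reset
# flush iptables rules left over from the failed attempt
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# remove any stale kubectl configuration
rm -rf $HOME/.kube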

The kubeadm init implementation process is as follows:

[root@k8s-master xcbeyond]# kubeadm init --config=kubeadm-config.yml | tee kubeadm-init.log
[init] Using Kubernetes version: v1.20.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.11.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.100 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.11.100 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 28.009413 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.11.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:79f34a5872b3df5817d29330ec055d14509a66c96c5de01bfa0d640fab671d90

4.2.3 Configuring the Master Node

Note: after kubeadm init succeeds on the Master node, read the message at the end of its log and run the commands it prints on the Master and Node nodes as instructed.

The kubeadm init log ends as follows:

...
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.11.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:79f34a5872b3df5817d29330ec055d14509a66c96c5de01bfa0d640fab671d90

To enable a non-root user to run kubectl, run the following commands (part of the kubeadm init output log):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or, if you are root, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf
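Either way, verify that kubectl can reach the API server before continuing:

kubectl cluster-info
kubectl get nodes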

4.2.4 Adding a Working Node

Worker nodes are where your workloads (containers, Pods, etc.) run. To add new nodes to the cluster, perform the following on each worker node.

As root, run the kubeadm join command printed at the end of the kubeadm init output:

kubeadm join 192.168.11.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:79f34a5872b3df5817d29330ec055d14509a66c96c5de01bfa0d640fab671d90

Implementation process:

[root@k8s-node01 xcbeyond]# kubeadm join 192.168.11.100:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:79f34a5872b3df5817d29330ec055d14509a66c96c5de01bfa0d640fab671d90
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
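The bootstrap token in the join command expires after 24 hours by default. If you add another node later and the token is no longer valid, generate a fresh join command on the Master (a standard kubeadm facility):

kubeadm token create --print-join-command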

4.2.5 Installing a Pod Network Add-on

Run the kubectl get nodes command on the Master node:

[root@k8s-master xcbeyond]# kubectl get nodes
NAME         STATUS     ROLES                  AGE    VERSION
k8s-master   NotReady   control-plane,master   1m8s   v1.20.2
k8s-node01   NotReady   <none>                 18s    v1.20.2

Both nodes are in the NotReady state because no Pod network add-on has been installed yet; Kubernetes requires a Pod network before nodes can become Ready, so install one now.

You can create a flannel network using the official kube-flannel.yml manifest.

(1) Download the official kube-flannel.yml file.

File address: github.com/coreos/flan…

(2) Create a network.

[root@k8s-master xcbeyond]# kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

(3) Check Pod.

If the flannel Pods are not yet in the Running state, wait for them to finish starting.

[root@k8s-master xcbeyond]# kubectl get pod -n kube-system
NAME                                 READY   STATUS              RESTARTS   AGE
coredns-74ff55c5b-fr4jj              0/1     ContainerCreating   0          6m3s
coredns-74ff55c5b-wcj2h              0/1     ContainerCreating   0          6m3s
etcd-k8s-master                      1/1     Running             0          6m5s
kube-apiserver-k8s-master            1/1     Running             0          6m5s
kube-controller-manager-k8s-master   1/1     Running             0          6m5s
kube-flannel-ds-2nkcv                1/1     Running             0          13s
kube-flannel-ds-m8tf2                1/1     Running             0          13s
kube-proxy-mft9t                     0/1     CrashLoopBackOff    6          6m3s
kube-proxy-n67px                     0/1     CrashLoopBackOff    3          68s
kube-scheduler-k8s-master            1/1     Running             0          6m5s
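Note that the two kube-proxy Pods above are in CrashLoopBackOff; inspect their logs before proceeding. One plausible cause, given the configuration in section 4.2.2, is the SupportIPVSProxyMode feature gate, which newer releases no longer recognize and which makes kube-proxy exit at startup (an assumption to verify against your own logs):

kubectl logs -n kube-system kube-proxy-mft9t
# if the log complains about an unrecognized feature gate, delete the
# featureGates block from the KubeProxyConfiguration and keep "mode: ipvs"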

(4) Check the node status.

The nodes are now in the Ready state.

[root@k8s-master xcbeyond]# kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   6m30s   v1.20.2
k8s-node01   Ready    <none>                 85s     v1.20.2

4.3 Cluster Environment Verification

At this point, the kubeadm-based cluster setup is complete. Let's start exploring Kubernetes in a real cluster environment!
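As a final check, a minimal smoke test confirms that scheduling, networking, and Services all work (the names here are arbitrary examples):

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=NodePort
kubectl get pods,svc -o wide
# then curl the NodePort shown in the service list from either node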

5. Summary

You may run into all kinds of problems and obstacles during the installation, especially the first time, but don't worry.

When facing a problem, my views and suggestions are:

  1. Running into a problem while doing it yourself is a pleasure in itself. (That is how experience is earned.)
  2. Do not panic; read the error logs and messages carefully.
  3. Search widely, especially on the official site and GitHub, using the key parts of the error message.
  4. After the problem is resolved, write it down.
