
Environment preparation

  • Change the hostname
hostnamectl set-hostname k8s-master
hostnamectl set-hostname k8s-node1
hostnamectl set-hostname k8s-node2

Alternatively, edit /etc/hostname directly with vi.

  • Modify the hosts

vi /etc/hosts

192.168.143.130 k8s-master
192.168.143.131 k8s-node1
192.168.143.132 k8s-node2
  • Disabling the Firewall
systemctl disable firewalld.service
systemctl stop firewalld.service
  • Disable SELinux
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
  • Disable swap (see the note after this list)
sed -ri 's/.*swap.*/#&/' /etc/fstab
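The sed edit only comments out the swap entry in /etc/fstab, so it takes effect on the next boot; to turn swap off for the running system as well and confirm it worked, something like this (a small sketch using standard CentOS commands):

swapoff -a   # turn swap off immediately; the fstab edit covers future boots
free -h      # verify: the Swap line should now show 0 total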

Installing Basic Components

  • docker
~>> yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
~>> yum makecache fast
~>> yum -y install docker-ce
~>> systemctl start docker
  • kubelet kubeadm kubectl
~>> cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
~>> setenforce 0
~>> sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
~>> yum install -y kubelet kubeadm kubectl
~>> systemctl enable kubelet && systemctl start kubelet
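Since this walkthrough targets v1.18.8, it may be safer to pin the package versions instead of installing whatever the mirror currently considers latest; a hedged sketch (exact version strings depend on what the Aliyun repo carries):

~>> yum install -y kubelet-1.18.8 kubeadm-1.18.8 kubectl-1.18.8
~>> systemctl enable kubelet && systemctl start kubelet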

Install the Kubernetes cluster

Master installation and configuration

  • The machine must have at least two CPUs; this can be adjusted in the VM software

  • Memory should be at least 2 GB; with less, the machine may not cope once swap is turned off

  • Swap is not supported: for performance reasons, Kubernetes requires swap to be disabled, so turn it off:

sed -ri 's/.*swap.*/#&/' /etc/fstab
  • Kubernetes Services use iptables to forward and route traffic to backend Pods, so we must make sure that bridged traffic passes through iptables:
~>> cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

~>> sudo sysctl --system
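If sysctl complains that the net.bridge.* keys do not exist, the br_netfilter kernel module is probably not loaded yet; a sketch, assuming CentOS 7 with systemd's modules-load.d:

~>> sudo modprobe br_netfilter
~>> echo "br_netfilter" | sudo tee /etc/modules-load.d/br_netfilter.conf   # load on boot as well
~>> sudo sysctl --system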

Kubeadm init installation

kubeadm needs a set of container images that are hosted on k8s.gcr.io, which for well-known reasons cannot be accessed from mainland China, so the cluster cannot be created directly. One option is to go through a proxy; the other is to download the images from somewhere else. I found the GoTok8s Kubernetes synchronization repositories on Docker Hub. They appear to be an official sync: updates are timely, versions match one to one, and downloads are fast once a registry accelerator is configured. If the exact version is missing from the mirror, pick a close one (the kube-* images should all share the same version) and specify that version when installing.

Querying required Images

kubeadm config images list

The above command lists the images kubeadm needs for the installation.
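For --kubernetes-version=1.18.8, the output should correspond to the seven images handled in the next step, along these lines:

k8s.gcr.io/kube-apiserver:v1.18.8
k8s.gcr.io/kube-controller-manager:v1.18.8
k8s.gcr.io/kube-scheduler:v1.18.8
k8s.gcr.io/kube-proxy:v1.18.8
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7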

The simple strategy is to pull each image from the mirror with docker pull, then rename it with docker tag to the name kubeadm expects. v1.18.9 is not available in the mirror, so v1.18.8 is used instead. The commands are as follows:

docker pull gotok8s/kube-apiserver:v1.18.8
docker tag gotok8s/kube-apiserver:v1.18.8 k8s.gcr.io/kube-apiserver:v1.18.8
docker rmi gotok8s/kube-apiserver:v1.18.8
docker pull gotok8s/kube-controller-manager:v1.18.8
docker tag gotok8s/kube-controller-manager:v1.18.8 k8s.gcr.io/kube-controller-manager:v1.18.8
docker rmi gotok8s/kube-controller-manager:v1.18.8
docker pull gotok8s/kube-scheduler:v1.18.8
docker tag gotok8s/kube-scheduler:v1.18.8 k8s.gcr.io/kube-scheduler:v1.18.8
docker rmi gotok8s/kube-scheduler:v1.18.8
docker pull gotok8s/kube-proxy:v1.18.8
docker tag gotok8s/kube-proxy:v1.18.8 k8s.gcr.io/kube-proxy:v1.18.8
docker rmi gotok8s/kube-proxy:v1.18.8
docker pull gotok8s/pause:3.2
docker tag gotok8s/pause:3.2 k8s.gcr.io/pause:3.2
docker rmi gotok8s/pause:3.2
docker pull gotok8s/etcd:3.4.3-0
docker tag gotok8s/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker rmi gotok8s/etcd:3.4.3-0
docker pull gotok8s/coredns:1.6.7
docker tag gotok8s/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker rmi gotok8s/coredns:1.6.7
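The same pull/tag/rmi dance can also be scripted so it is easy to rerun on other machines; a minimal sketch, assuming the gotok8s mirror carries all seven tags:

#!/bin/bash
# Pull each required image from the gotok8s mirror on Docker Hub,
# retag it to the k8s.gcr.io name kubeadm expects, then drop the
# mirror tag to keep the local image list clean.
images=(
  kube-apiserver:v1.18.8
  kube-controller-manager:v1.18.8
  kube-scheduler:v1.18.8
  kube-proxy:v1.18.8
  pause:3.2
  etcd:3.4.3-0
  coredns:1.6.7
)
for img in "${images[@]}"; do
  docker pull "gotok8s/${img}"
  docker tag "gotok8s/${img}" "k8s.gcr.io/${img}"
  docker rmi "gotok8s/${img}"
done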

Installation of the master

kubeadm init --kubernetes-version=1.18.8 --pod-network-cidr 10.244.0.0/16
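If the master has more than one network interface, kubeadm may pick the wrong address to advertise; assuming the 192.168.143.130 address from the hosts file, the address can be pinned explicitly:

kubeadm init --kubernetes-version=1.18.8 \
    --apiserver-advertise-address=192.168.143.130 \
    --pod-network-cidr 10.244.0.0/16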

If the installation fails partway through, run kubeadm reset to clean up the environment before trying again.

After init succeeds, follow the prompt and run the following commands to start using the cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

To check the cluster status, run kubectl get nodes.

At this time there is only the master node, and it is in the NotReady state. There is no need to rush; it will become Ready once the remaining steps are done. The master installation stops here for now.

Node installation and configuration

Log in to each node and run the kubeadm join command that the master printed at the end of its installation:

kubeadm join 192.168.143.130:6443 --token o3cc3t.4ikrxmog4wxiijrt \
    --discovery-token-ca-cert-hash sha256:684bd6c7f33db9e4c28f5006d896f0fd02015fe2bfc5b186202a6ab00507ba68

If you forget this command, you can run the following command on the Master node

kubeadm token create --print-join-command

Possible problems

kubeadm join fails on the node with the following output:

W1002 01:53:33.649117 9708 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: couldn't validate the identity of the API Server: Get https://192.168.143.130:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s: dial tcp 192.168.143.130:6443: connect: no route to host

The iptables rules on the master have become abnormal; run the following commands there to rebuild them and generate a fresh token:

[root@k8s-master ~]# systemctl stop kubelet
[root@k8s-master ~]# systemctl stop docker
[root@k8s-master ~]# iptables --flush
[root@k8s-master ~]# iptables -t nat --flush
[root@k8s-master ~]# systemctl start kubelet
[root@k8s-master ~]# systemctl start docker
[root@k8s-master ~]# kubeadm token create
[root@k8s-master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# With the new token, run kubeadm join on the node again:
[root@k8s-node1 ~]# kubeadm join 192.168.143.130:6443 --token m8or6s.j5rm351b5hzyk8el --discovery-token-ca-cert-hash sha256:684bd6c7f33db9e4c28f5006d896f0fd02015fe2bfc5b186202a6ab00507ba68

The join output then shows that the node has been added to the cluster.

Configure kubectl command automatic completion

[root@k8s-master ~]# yum install -y bash-completion

[root@k8s-master ~]# source /usr/share/bash-completion/bash_completion

[root@k8s-master ~]# source <(kubectl completion bash)

[root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc

On another machine, kubeadm join failed with three preflight errors. The first two complain that /proc/sys/net/ipv4/ip_forward and /proc/sys/net/bridge/bridge-nf-call-iptables are not set to 1, which the following commands fix:

[root@k8s-node2 ~]# echo "1">/proc/sys/net/ipv4/ip_forward
[root@k8s-node2 ~]# echo "1">/proc/sys/net/bridge/bridge-nf-call-iptables
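Writing to /proc only lasts until the next reboot; to make both settings permanent, they can be appended to the k8s.conf file created earlier (a sketch, reusing the same path):

cat <<EOF >> /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
EOF
sysctl --system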

The third error says swap must be turned off, which was covered earlier. After fixing all three and joining again, query the node status on the master.

Both the master and the newly joined node show NotReady, which means the installation is still a few steps from done. Switch to the master machine and query the status of each pod in the kube-system namespace:

kubectl get pod -n kube-system

We find that kube-proxy is stuck in the ContainerCreating state and CoreDNS is Pending. Once these issues are resolved and all pods are Running, the cluster is healthy.
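Adding -o wide shows which node each problem pod was scheduled on, which tells you where the missing image lives:

kubectl get pod -n kube-system -o wide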

Start with kube-proxy: describe the pod to see why it cannot start:

kubectl describe pod kube-proxy-pm5f6 -n kube-system

Look at the Events section at the bottom:

Events:
  Type     Reason                  Age                    From                Message
  ----     ------                  ----                   ----                -------
  Warning  FailedCreatePodSandBox  3m45s (x139 over 84m)  kubelet, k8s-node1  Failed to create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.2": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

k8s-node1 failed to pull k8s.gcr.io/pause:3.2, so pull the image manually on k8s-node1 the same way as before:

[root@k8s-node1 ~]# docker pull gotok8s/pause:3.2
[root@k8s-node1 ~]# docker tag gotok8s/pause:3.2 k8s.gcr.io/pause:3.2
[root@k8s-node1 ~]# docker rmi gotok8s/pause:3.2

After the pause image is in place, the kube-proxy pod moves on to an ImagePullBackOff state.

Continue to inspect the pod details with kubectl describe:

This time it is the k8s.gcr.io/kube-proxy:v1.18.8 image that cannot be pulled.

Download the image on the node in the same way:

[root@k8s-node1 ~]# docker pull gotok8s/kube-proxy:v1.18.8
[root@k8s-node1 ~]# docker tag gotok8s/kube-proxy:v1.18.8 k8s.gcr.io/kube-proxy:v1.18.8
[root@k8s-node1 ~]# docker rmi gotok8s/kube-proxy:v1.18.8

After that, kube-proxy also reaches the Running state.
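Rather than chasing missing images one pod at a time, it may be simpler to pre-pull everything on each node before (or right after) kubeadm join, for example by reusing the pull/tag/rmi loop from the master section (pull-images.sh is a hypothetical name for that script):

[root@k8s-node1 ~]# bash pull-images.sh   # hypothetical script containing the gotok8s loop shown earlier
[root@k8s-node2 ~]# bash pull-images.sh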

Only CoreDNS is left Pending, because the cluster has no network plugin yet; the next step is to install Flannel:

# Fetch the Flannel manifest from GitHub and save it locally as /home/kube-flannel.yml
# https://github.com/coreos/flannel/blob/master/Documentation/kube-flannel.yml
# Then run the setup commands:
[root@k8s-master ~]# cd /home
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml

Then view the pod information:

kubectl describe pod kube-flannel-ds-bjvjx -n kube-system

This shows that the k8s-node1 machine lacks the image quay.io/coreos/flannel:v0.13.0-rc2, which again needs to be fetched manually. There are some problems here, however: the external site cannot be reached to download this Flannel version, and domestic mirrors do not have it either. The next best thing is to fall back to an earlier release and download the coreos/flannel image directly from GitHub.

docker load -i flanneld-v0.12.0-amd64.docker

Then delete what was previously created from kube-flannel.yml:

[root@k8s-master home]# kubectl delete -f kube-flannel.yml

Change the image version v0.13.0-rc2 required in kube-flannel.yml to v0.12.0-amd64, and then apply again.
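The edit can be done by hand in vi, or with a one-line sed; a sketch, assuming the image string in the manifest matches exactly:

[root@k8s-master home]# sed -i 's#quay.io/coreos/flannel:v0.13.0-rc2#quay.io/coreos/flannel:v0.12.0-amd64#g' kube-flannel.yml
[root@k8s-master home]# kubectl apply -f kube-flannel.yml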

There is, of course, another route: if you have a cloud server that can reach quay.io, pull quay.io/coreos/flannel:v0.13.0-rc2 there, export it with docker save -o flannel.tar quay.io/coreos/flannel:v0.13.0-rc2, download flannel.tar to each local machine, and import it with docker load -i flannel.tar. Then kube-flannel.yml can be applied unchanged.
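Spelled out as commands, that workflow looks like this (which machine runs each step is noted in the comments):

# On the cloud server that can reach quay.io:
docker pull quay.io/coreos/flannel:v0.13.0-rc2
docker save -o flannel.tar quay.io/coreos/flannel:v0.13.0-rc2
# Copy flannel.tar to each cluster machine (scp, for example), then on each one:
docker load -i flannel.tar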

After loading the Flannel image on each node, check the pod status again:

[root@k8s-master home]# kubectl get pod -n kube-system

All pods are now in the Running state; finally, check the status of all nodes:

[root@k8s-master home]# kubectl get nodes

You can see that all nodes are in the Ready state.

Conclusion

Even when the network is restricted and no proxy is available, Kubernetes can still be installed this way by pulling images from mirrors and loading them offline. Whenever something goes wrong along the way, kubectl describe is the tool to reach for: it shows exactly what a pod is stuck waiting on.