Upgrade instructions

A minor-version upgrade of a Kubernetes cluster is essentially just a binary replacement; after a major version jump, pay attention to changes in kubelet parameters and the flags of other components. Because Kubernetes releases move quickly and many surrounding dependencies have not caught up, the newest versions are not recommended for production environments.

You are advised to test and verify the upgrade repeatedly in a test environment before applying it to production.

Github.com/kubernetes/…

Component                  Before the upgrade   After the upgrade
etcd                       3.4.13               no upgrade needed
kube-apiserver             1.20.13              1.21.7
kube-scheduler             1.20.13              1.21.7
kube-controller-manager    1.20.13              1.21.7
kubectl                    1.20.13              1.21.7
kube-proxy                 1.20.13              1.21.7
kubelet                    1.20.13              1.21.7
calico                     3.15.3               3.21.1
coredns                    1.7.0                1.8.6
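
Before touching any binaries, record what is currently running so there is a baseline to compare against after each step. A minimal check could look like this (run on a master; the node names come from this article's example cluster):

#Record the versions currently in use
kubectl get nodes -o wide
kubectl version --short
kube-apiserver --version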

Upgrade the master nodes

Master01

#Back up the old binaries
[root@k8s-master01 ~]# cd /usr/local/bin/
[root@k8s-master01 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /tmp

#Download the installation package
[root@k8s-master01 ~]# wget https://storage.googleapis.com/kubernetes-release/release/v1.21.7/kubernetes-server-linux-amd64.tar.gz
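#Optional: verify the tarball before unpacking it (a sketch; assumes the matching .sha256 file is published alongside the release tarball)
[root@k8s-master01 ~]# wget https://storage.googleapis.com/kubernetes-release/release/v1.21.7/kubernetes-server-linux-amd64.tar.gz.sha256
[root@k8s-master01 ~]# echo "$(cat kubernetes-server-linux-amd64.tar.gz.sha256)  kubernetes-server-linux-amd64.tar.gz" | sha256sum --check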
[root@k8s-master01 ~]# tar zxf kubernetes-server-linux-amd64.tar.gz

#Stop the service
[root@k8s-master01 ~]# systemctl stop kube-apiserver
[root@k8s-master01 ~]# systemctl stop kube-controller-manager
[root@k8s-master01 ~]# systemctl stop kube-scheduler

#Update package
[root@k8s-master01 bin]# cd /root/kubernetes/server/bin/
[root@k8s-master01 bin]# ll
total 1075472
-rwxr-xr-x 1 root root  50810880 Nov 17 22:52 apiextensions-apiserver
-rwxr-xr-x 1 root root  44867584 Nov 17 22:52 kubeadm
-rwxr-xr-x 1 root root  48586752 Nov 17 22:52 kube-aggregator
-rwxr-xr-x 1 root root 122204160 Nov 17 22:52 kube-apiserver
-rw-r--r-- 1 root root         8 Nov 17 22:50 kube-apiserver.docker_tag
-rw------- 1 root root 126985216 Nov 17 22:50 kube-apiserver.tar
-rwxr-xr-x 1 root root 116404224 Nov 17 22:52 kube-controller-manager
-rw-r--r-- 1 root root         8 Nov 17 22:50 kube-controller-manager.docker_tag
-rw------- 1 root root 121185280 Nov 17 22:50 kube-controller-manager.tar
-rwxr-xr-x 1 root root  46669824 Nov 17 22:52 kubectl
-rwxr-xr-x 1 root root  55317704 Nov 17 22:52 kubectl-convert
-rwxr-xr-x 1 root root 118390192 Nov 17 22:52 kubelet
-rwxr-xr-x 1 root root  43376640 Nov 17 22:52 kube-proxy
-rw-r--r-- 1 root root         8 Nov 17 22:50 kube-proxy.docker_tag
-rw------- 1 root root 105378304 Nov 17 22:50 kube-proxy.tar
-rwxr-xr-x 1 root root  47349760 Nov 17 22:52 kube-scheduler
-rw-r--r-- 1 root root         8 Nov 17 22:50 kube-scheduler.docker_tag
-rw------- 1 root root  52130816 Nov 17 22:50 kube-scheduler.tar
-rwxr-xr-x 1 root root   1593344 Nov 17 22:52 mounter
[root@k8s-master01 bin]# cp kube-apiserver kube-scheduler kubectl kube-controller-manager /usr/local/bin/
cp: overwrite '/usr/local/bin/kube-apiserver'? y
cp: overwrite '/usr/local/bin/kube-scheduler'? y
cp: overwrite '/usr/local/bin/kubectl'? y
cp: overwrite '/usr/local/bin/kube-controller-manager'? y

#Start the service
[root@k8s-master01 bin]# systemctl start kube-apiserver
[root@k8s-master01 bin]# systemctl start kube-controller-manager
[root@k8s-master01 bin]# systemctl start kube-scheduler

#Check the service to see if there are errors
[root@k8s-master01 bin]# systemctl status kube-apiserver 
[root@k8s-master01 bin]# systemctl status kube-controller-manager
[root@k8s-master01 bin]# systemctl status kube-scheduler

#Check the version
[root@k8s-master01 bin]# kube-apiserver --version
Kubernetes v1.21.7
[root@k8s-master01 bin]# kube-scheduler --version
Kubernetes v1.21.7
[root@k8s-master01 bin]# kube-controller-manager --version
Kubernetes v1.21.7
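
Before repeating the same steps on master02 and master03, it is worth confirming that the upgraded control plane on master01 is actually healthy; a quick sketch:

#Confirm the control plane is healthy before moving on to the next master
kubectl get --raw='/readyz?verbose'
kubectl get nodes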

Master02

#Stop the service
[root@k8s-master02 ~]# systemctl stop kube-apiserver
[root@k8s-master02 ~]# systemctl stop kube-controller-manager
[root@k8s-master02 ~]# systemctl stop kube-scheduler

#Update package
[root@k8s-master01 bin]# scp kube-apiserver kube-controller-manager kube-scheduler root@k8s-master02:/usr/local/bin
kube-apiserver                100%  117MB  72.2MB/s   00:01
kube-controller-manager       100%  111MB  74.3MB/s   00:01
kube-scheduler                100%   45MB  71.1MB/s   00:00

#Start the service
[root@k8s-master02 ~]# systemctl start kube-apiserver 
[root@k8s-master02 ~]# systemctl start kube-controller-manager
[root@k8s-master02 ~]# systemctl start kube-scheduler

#Check the service to see if there are errors
[root@k8s-master02 ~]# systemctl status kube-apiserver 
[root@k8s-master02 ~]# systemctl status kube-controller-manager
[root@k8s-master02 ~]# systemctl status kube-scheduler

#Check the version
[root@k8s-master02 ~]# kube-apiserver --version
Kubernetes v1.21.7
[root@k8s-master02 ~]# kube-scheduler --version
Kubernetes v1.21.7
[root@k8s-master02 ~]# kube-controller-manager --version
Kubernetes v1.21.7

Master03

#Stop the service
[root@k8s-master03 ~]# systemctl stop kube-apiserver
[root@k8s-master03 ~]# systemctl stop kube-controller-manager
[root@k8s-master03 ~]# systemctl stop kube-scheduler

#Update package
[root@k8s-master01 bin]# scp kube-apiserver kube-controller-manager kube-scheduler root@k8s-master03:/usr/local/bin
kube-apiserver                100%  117MB  72.2MB/s   00:01
kube-controller-manager       100%  111MB  74.3MB/s   00:01
kube-scheduler                100%   45MB  71.1MB/s   00:00

#Start the service
[root@k8s-master03 ~]# systemctl start kube-apiserver 
[root@k8s-master03 ~]# systemctl start kube-controller-manager
[root@k8s-master03 ~]# systemctl start kube-scheduler

#Check the service to see if there are errors
[root@k8s-master03 ~]# systemctl status kube-apiserver 
[root@k8s-master03 ~]# systemctl status kube-controller-manager
[root@k8s-master03 ~]# systemctl status kube-scheduler

#Check the version
[root@k8s-master03 ~]# kube-apiserver --version
Kubernetes v1.21.7
[root@k8s-master03 ~]# kube-scheduler --version
Kubernetes v1.21.7
[root@k8s-master03 ~]# kube-controller-manager --version
Kubernetes v1.21.7

Upgrade the worker nodes

Upgrade the nodes during off-peak hours, one node at a time, for a smooth upgrade.

Mark the node as unschedulable

[root@k8s-master01 ~]# kubectl cordon k8s-node01
node/k8s-node01 cordoned
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS                     ROLES    AGE     VERSION
k8s-master01   Ready                      <none>   2d21h   v1.20.13
k8s-master02   Ready                      <none>   2d21h   v1.20.13
k8s-master03   Ready                      <none>   2d21h   v1.20.13
k8s-node01     Ready,SchedulingDisabled   <none>   2d18h   v1.20.13
k8s-node02     Ready                      <none>   2d18h   v1.20.13

Evict the pods

[root@k8s-master01 ~]# kubectl drain k8s-node01 --delete-local-data --ignore-daemonsets --force
Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
node/k8s-node01 already cordoned
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: default/busybox; ignoring DaemonSet-managed Pods: kube-system/calico-node-jrc6b
evicting pod default/busybox
pod/busybox evicted
  • --delete-local-data: evict pods even if they use emptyDir volumes (the emptyDir data is lost). The flag is deprecated; see the equivalent command after this list.
  • --ignore-daemonsets: skip DaemonSet-managed pods. Without this flag the drain cannot proceed, because a pod controlled by a DaemonSet would be started on the node again immediately after being deleted, which becomes an endless loop.
  • --force: without this flag, only pods created by a ReplicationController, ReplicaSet, DaemonSet, StatefulSet or Job are evicted from the node; with it, "naked" pods (not bound to any controller) are deleted as well.
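
As the warning above shows, --delete-local-data is deprecated. On kubectl versions that already ship the replacement flag, the equivalent drain would be (a sketch):

#Equivalent drain with the non-deprecated flag name
kubectl drain k8s-node01 --delete-emptydir-data --ignore-daemonsets --force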

Stop the service

[root@k8s-node01 ~]# systemctl stop kube-proxy
[root@k8s-node01 ~]# systemctl stop kubelet

Back up the old binaries

[root@k8s-node01 ~]# mv /usr/local/bin/kubelet /tmp
[root@k8s-node01 ~]# mv /usr/local/bin/kube-proxy /tmp
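
This backup is what makes a rollback possible. If the node upgrade has to be reverted, the reverse of these steps restores the old binaries; a minimal sketch, assuming the files moved to /tmp are still there (keep in mind /tmp may be cleaned on reboot), run on k8s-node01:

#Roll back to the old binaries only if the upgrade fails
systemctl stop kubelet kube-proxy
mv /tmp/kubelet /tmp/kube-proxy /usr/local/bin/
systemctl start kubelet kube-proxy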

Update package

[root@k8s-master01 bin]# scp kubelet kube-proxy root@k8s-node01:/usr/local/bin

Start the service

[root@k8s-node01 ~]# systemctl start kubelet
[root@k8s-node01 ~]# systemctl start kube-proxy
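
Before putting the node back into service, confirm that the new binaries are the ones actually running; for example:

#Check the new versions and service status on k8s-node01
kubelet --version
kube-proxy --version
systemctl status kubelet kube-proxy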

Mark the node as schedulable again

[root@k8s-master01 bin]# kubectl uncordon k8s-node01
node/k8s-node01 uncordoned

Verify the upgrade

#k8s-node01 has been upgraded to v1.21.7
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    <none>   3d      v1.20.13
k8s-master02   Ready    <none>   3d      v1.20.13
k8s-master03   Ready    <none>   3d      v1.20.13
k8s-node01     Ready    <none>   2d20h   v1.21.7
k8s-node02     Ready    <none>   2d20h   v1.20.13
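
The remaining worker nodes (here only k8s-node02) are upgraded the same way. With many nodes, a loop like the following can drive the same steps from k8s-master01; this is only a sketch and assumes passwordless SSH as root and the new binaries in /root/kubernetes/server/bin:

#Roll the remaining nodes one at a time from k8s-master01
cd /root/kubernetes/server/bin
for NODE in k8s-node02; do
  kubectl cordon ${NODE}
  kubectl drain ${NODE} --delete-emptydir-data --ignore-daemonsets --force
  ssh root@${NODE} "systemctl stop kubelet kube-proxy"
  scp kubelet kube-proxy root@${NODE}:/usr/local/bin/
  ssh root@${NODE} "systemctl start kubelet kube-proxy"
  kubectl uncordon ${NODE}
done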

Calico upgrade

If you have no special requirement, upgrading Calico is not recommended; the steps below are provided for reference.

Based on the datastore and the number of nodes, choose one of the following installation methods:

  • Install Calico with the Kubernetes API datastore, 50 nodes or less
  • Install Calico with the Kubernetes API datastore, more than 50 nodes
  • Install Calico with the etcd datastore

Back up the existing Calico resources

#Backup secret
[root@k8s-master01 bak]# kubectl get secret -n kube-system calico-node-token-d6ck2 -o yaml > calico-node-token-d6ck2.yml
[root@k8s-master01 bak]# kubectl get secret -n kube-system calico-kube-controllers-token-r4v8n -o yaml > calico-kube-controllers-token.yml
[root@k8s-master01 bak]# kubectl get secret -n kube-system calico-etcd-secrets -o yaml > calico-etcd-secrets.yml

#Backup configmap
[root@k8s-master01 bak]# kubectl get cm -n kube-system calico-config -o yaml >  calico-config.yaml

#Backup ClusterRole
[root@k8s-master01 bak]# kubectl get clusterrole calico-kube-controllers -o yaml > calico-kube-controllers-cr.yml
[root@k8s-master01 bak]# kubectl get clusterrole calico-node -o yaml > calico-node-cr.yml

#Backup ClusterRoleBinding
[root@k8s-master01 bak]# kubectl get ClusterRoleBinding calico-kube-controllers -o yaml > calico-kube-controllers-crb.yml
[root@k8s-master01 bak]# kubectl get ClusterRoleBinding calico-node -o yaml > calico-node-crb.yml

#Backup DaemonSet
[root@k8s-master01 bak]# kubectl get DaemonSet -n kube-system calico-node -o yaml > calico-node-daemonset.yml

#Backup ServiceAccount
[root@k8s-master01 bak]# kubectl get sa -n kube-system  calico-kube-controllers -o yaml > calico-kube-controllers-sa.yml
[root@k8s-master01 bak]# kubectl get sa -n kube-system  calico-node -o yaml > calico-node-sa.yml

#Backup Deployment
[root@k8s-master01 bak]# kubectl get deployment -n kube-system calico-kube-controllers -o yaml > calico-kube-controllers-deployment.yml

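Before changing anything, also note which Calico image is currently deployed, so the upgrade can be confirmed afterwards; one way (a sketch):

#Record the Calico image currently in use
kubectl get ds -n kube-system calico-node -o jsonpath='{.spec.template.spec.containers[*].image}'; echo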

Update

[root@k8s-master01 ~]# curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -o calico-etcd.yaml


#Modify the configuration
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379"#g' calico-etcd.yaml

ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`

sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

#Change this to your pod network segment
POD_SUBNET="172.16.0.0/12"
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml
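
Before applying the manifest, a quick spot-check that the substitutions above actually landed does no harm:

#Spot-check the modified manifest
grep "etcd_endpoints:" calico-etcd.yaml
grep -A 1 "CALICO_IPV4POOL_CIDR" calico-etcd.yaml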

[root@k8s-master01 ~]# kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets unchanged
configmap/calico-config configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node configured
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged

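After the apply, confirm that the DaemonSet rolled out the new image and that all calico-node pods came back (the k8s-app=calico-node label is the one used in the upstream manifest):

#Verify the Calico upgrade
kubectl rollout status ds/calico-node -n kube-system
kubectl get ds -n kube-system calico-node -o jsonpath='{.spec.template.spec.containers[*].image}'; echo
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide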

CoreDNS upgrade

[root@k8s-master01 ~]# git clone https://github.com/coredns/deployment.git
Cloning into 'deployment'...
[root@k8s-master01 ~]# cd deployment/kubernetes
[root@k8s-master01 kubernetes]# ./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -

#To view
[root@k8s-master01 ~]# kubectl get pod -n kube-system coredns-86f4cdc7bc-ccgr5 -o yaml | grep image
    image: coredns/coredns:1.8.6
    imagePullPolicy: IfNotPresent
    image: coredns/coredns:1.8.6
    imageID: docker-pullable://coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
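
A quick functional check after the CoreDNS upgrade is to resolve an in-cluster service name from a throwaway pod; a sketch (busybox:1.28 is only an example image whose nslookup behaves well):

#DNS sanity check after the CoreDNS upgrade
kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default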
