Upgrade the control plane (master) nodes

  • Update the yum repo cache on the first master node

    [root@k8s-prod-master1 ~]# yum makecache fast
  • View the current K8S version

    [root@k8s-prod-master1 ~]# kubeadm version
    kubeadm version: &version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.5", GitCommit:"f3abc15296f3a3f54e4ee42e830c61047b13895f", GitTreeState:"clean", BuildDate:"2021-01-13T13:18:5Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"linux/amd64"}
  • Find available upgrades

    [root@k8s-prod-master1 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes

    Based on the listed versions, we selected the upgrade version as 1.16.15-0.

    For example, with the current version at 1.15.5 (minor version 15), the upgrade target can be a later 1.15.x patch release or 1.16.x. kubeadm does not allow skipping a minor version, so you cannot upgrade directly to 1.17.x.
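
    To make the version skew rule concrete, the allowed targets from v1.15.5 look like this (an illustration, not command output):

    # From v1.15.5 (minor version 15), kubeadm allows upgrading to:
    #   v1.15.6 ... v1.15.12   # a newer patch release of the same minor version
    #   v1.16.x                # the next minor version
    # It does not allow skipping a minor version in a single step:
    #   v1.17.x or later       # must first pass through the 1.16 series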

  • Upgrade kubeadm to version 1.16.15 on the first master node

    [root@k8s-prod-master1 ~]# yum install -y kubeadm-1.16.15-0 --disableexcludes=kubernetes
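
    After the install, it is worth confirming that kubeadm now reports the new version before planning the upgrade (a quick check, not part of the original transcript; it should print v1.16.15):

    [root@k8s-prod-master1 ~]# kubeadm version -o short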
  • Verifying the Upgrade Plan

    [root@k8s-prod-master1 ~]# kubeadm upgrade plan
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade] Fetching available versions to upgrade to
    [upgrade/versions] Cluster version: v1.15.5
    [upgrade/versions] kubeadm version: v1.16.15
    I0518 10:38:56.544764   20379 version.go:251] remote version is much newer: v1.21.1; falling back to: stable-1.16
    [upgrade/versions] Latest stable version: v1.16.15
    [upgrade/versions] Latest version in the v1.15 series: v1.15.12

    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT       AVAILABLE
    Kubelet     6 x v1.15.0   v1.15.12

    Upgrade to the latest version in the v1.15 series:

    COMPONENT            CURRENT   AVAILABLE
    API Server           v1.15.5   v1.15.12
    Controller Manager   v1.15.5   v1.15.12
    Scheduler            v1.15.5   v1.15.12
    Kube Proxy           v1.15.5   v1.15.12
    CoreDNS              1.3.1     1.6.2
    Etcd                 3.3.10    3.3.10

    You can now apply the upgrade by executing the following command:

            kubeadm upgrade apply v1.15.12

    _____________________________________________________________________

    Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
    COMPONENT   CURRENT       AVAILABLE
    Kubelet     6 x v1.15.0   v1.16.15

    Upgrade to the latest stable version:

    COMPONENT            CURRENT   AVAILABLE
    API Server           v1.15.5   v1.16.15
    Controller Manager   v1.15.5   v1.16.15
    Scheduler            v1.15.5   v1.16.15
    Kube Proxy           v1.15.5   v1.16.15
    CoreDNS              1.3.1     1.6.2
    Etcd                 3.3.10    3.3.15-0

    You can now apply the upgrade by executing the following command:

            kubeadm upgrade apply v1.16.15

    _____________________________________________________________________

    The plan offers two upgrade targets, v1.15.12 and v1.16.15, and lists the component versions required for each.

  • Based on the component versions listed above, pre-pull the images

    [root@k8s-prod-master1 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-apiserver:v1.16.15
    [root@k8s-prod-master1 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-controller-manager:v1.16.15
    [root@k8s-prod-master1 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-scheduler:v1.16.15
    [root@k8s-prod-master1 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-proxy:v1.16.15
    [root@k8s-prod-master1 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/coredns:1.6.2
    [root@k8s-prod-master1 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/etcd:3.3.15-0
  • If the cluster was initialized with an image repository other than harbor.olavoice.com, the pulled images must be re-tagged to the repository name kubeadm expects. In this case the cluster was initialized with gcr.azk8s.cn, which has since been shut down, so the images are re-tagged accordingly:

    [root@k8s-prod-master1 ~]# docker tag harbor.olavoice.com/k8s.gcr.io/kube-apiserver:v1.16.15 gcr.azk8s.cn/google_containers/kube-apiserver:v1.16.15
    [root@k8s-prod-master1 ~]# docker tag harbor.olavoice.com/k8s.gcr.io/kube-controller-manager:v1.16.15 gcr.azk8s.cn/google_containers/kube-controller-manager:v1.16.15
    [root@k8s-prod-master1 ~]# docker tag harbor.olavoice.com/k8s.gcr.io/kube-scheduler:v1.16.15 gcr.azk8s.cn/google_containers/kube-scheduler:v1.16.15
    [root@k8s-prod-master1 ~]# docker tag harbor.olavoice.com/k8s.gcr.io/kube-proxy:v1.16.15 gcr.azk8s.cn/google_containers/kube-proxy:v1.16.15
    [root@k8s-prod-master1 ~]# docker tag harbor.olavoice.com/k8s.gcr.io/coredns:1.6.2 gcr.azk8s.cn/google_containers/coredns:1.6.2
    [root@k8s-prod-master1 ~]# docker tag harbor.olavoice.com/k8s.gcr.io/etcd:3.3.15-0 gcr.azk8s.cn/google_containers/etcd:3.3.15-0
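
    If preferred, the pull and re-tag commands can be collapsed into a single loop. The following is a sketch that assumes the same source repository (harbor.olavoice.com/k8s.gcr.io) and target repository (gcr.azk8s.cn/google_containers) shown above:

    # Sketch: pull each image from the internal mirror, then re-tag it to the
    # repository name the cluster was initialized with.
    SRC="harbor.olavoice.com/k8s.gcr.io"
    DST="gcr.azk8s.cn/google_containers"
    for img in kube-apiserver:v1.16.15 kube-controller-manager:v1.16.15 \
               kube-scheduler:v1.16.15 kube-proxy:v1.16.15 \
               coredns:1.6.2 etcd:3.3.15-0; do
        docker pull "${SRC}/${img}"
        docker tag  "${SRC}/${img}" "${DST}/${img}"
    done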
  • Run the kubeadm upgrade apply v1.16.15 command on the first master node to upgrade the k8s components to 1.16.15

    [root@k8s-prod-master1 ~]# kubeadm upgrade apply v1.16.15
    [upgrade/config] Making sure the configuration is correct:
    [upgrade/config] Reading configuration from the cluster...
    [upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [preflight] Running pre-flight checks.
    [upgrade] Making sure the cluster is healthy:
    [upgrade/version] You have chosen to change the cluster version to "v1.16.15"
    [upgrade/versions] Cluster version: v1.15.5
    [upgrade/versions] kubeadm version: v1.16.15
    [upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
    [upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
    [upgrade/prepull] Prepulling image for component etcd.
    [upgrade/prepull] Prepulling image for component kube-apiserver.
    [upgrade/prepull] Prepulling image for component kube-controller-manager.
    [upgrade/prepull] Prepulling image for component kube-scheduler.
    [apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
    [apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
    [upgrade/prepull] Prepulled image for component kube-controller-manager.
    [upgrade/prepull] Prepulled image for component etcd.
    [upgrade/prepull] Prepulled image for component kube-apiserver.
    [upgrade/prepull] Prepulled image for component kube-scheduler.
    [upgrade/prepull] Successfully prepulled the images for all the control plane components
    [upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.16.15"...
    Static pod: kube-apiserver-k8s-prod-master1 hash: 3063ebeaaeeb0b0ae290b42909feed15
    Static pod: kube-controller-manager-k8s-prod-master1 hash: e35efcd0b54080a8e2537ed9c174e4cd
    Static pod: kube-scheduler-k8s-prod-master1 hash: c888f571a5ca45c57074e8bd29d45798
    [upgrade/etcd] Upgrading to TLS for etcd
    Static pod: etcd-k8s-prod-master1 hash: 2c48bf5edd224ad10bf56cd5ead33095
    [upgrade/staticpods] Preparing for "etcd" upgrade
    [upgrade/staticpods] Renewing etcd-server certificate
    [upgrade/staticpods] Renewing etcd-peer certificate
    [upgrade/staticpods] Renewing etcd-healthcheck-client certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-48-44/etcd.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: etcd-k8s-prod-master1 hash: 2c48bf5edd224ad10bf56cd5ead33095
    Static pod: etcd-k8s-prod-master1 hash: a576dcf3cdae038d4cd3520500c0de38
    [apiclient] Found 3 Pods for label selector component=etcd
    [upgrade/staticpods] Component "etcd" upgraded successfully!
    [upgrade/etcd] Waiting for etcd to become available
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests551642047"
    [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
    [upgrade/staticpods] Renewing apiserver certificate
    [upgrade/staticpods] Renewing apiserver-kubelet-client certificate
    [upgrade/staticpods] Renewing front-proxy-client certificate
    [upgrade/staticpods] Renewing apiserver-etcd-client certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-48-44/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-k8s-prod-master1 hash: 3063ebeaaeeb0b0ae290b42909feed15
    Static pod: kube-apiserver-k8s-prod-master1 hash: 3a6e3625419d59fc23a626ee48b98ae5
    [apiclient] Found 3 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
    [upgrade/staticpods] Renewing controller-manager.conf certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-48-44/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-k8s-prod-master1 hash: e35efcd0b54080a8e2537ed9c174e4cd
    Static pod: kube-controller-manager-k8s-prod-master1 hash: 4f8382a5e369e7caf52148006eb21dac
    [apiclient] Found 3 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
    [upgrade/staticpods] Renewing scheduler.conf certificate
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-48-44/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-k8s-prod-master1 hash: c888f571a5ca45c57074e8bd29d45798
    Static pod: kube-scheduler-k8s-prod-master1 hash: 92ada396a5fce07cd05526431ce7ba3e
    [apiclient] Found 3 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
    [kubelet] Creating a ConfigMap "kubelet-config-1.16" in namespace kube-system with the configuration for the kubelets in the cluster
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
    [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
    [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
    [addons] Applied essential addon: CoreDNS
    [addons] Applied essential addon: kube-proxy
    
    [upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.16.15". Enjoy!
    
    [upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
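
    Before moving on to the second master, it can be useful to confirm that the control plane pods on this node are running again with the new image tags (a suggested check, not part of the original transcript):

    [root@k8s-prod-master1 ~]# kubectl -n kube-system get pods -o wide | grep k8s-prod-master1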
  • Update the yum repo cache on the second master node

    [root@k8s-prod-master2 ~]# yum makecache fast
  • Upgrade kubeadm version to 1.16.15 on the second master node

    [root@k8s-prod-master2 ~]# yum install -y kubeadm-1.16.15-0 --disableexcludes=kubernetes
  • Pre-pull the images

    [root@k8s-prod-master2 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-apiserver:v1.16.15
    [root@k8s-prod-master2 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-controller-manager:v1.16.15
    [root@k8s-prod-master2 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-scheduler:v1.16.15
    [root@k8s-prod-master2 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-proxy:v1.16.15
    [root@k8s-prod-master2 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/coredns:1.6.2
    [root@k8s-prod-master2 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/etcd:3.3.15-0
  • Run the kubeadm upgrade node command on the second master node to upgrade the k8s components to 1.16.15

    [root@k8s-prod-master2 ~]# kubeadm upgrade node
    [upgrade] Reading configuration from the cluster...
    [upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.16.15"...
    Static pod: kube-apiserver-k8s-prod-master2 hash: 41494b6b716efb9d74599b8f51e1a7bb
    Static pod: kube-controller-manager-k8s-prod-master2 hash: e35efcd0b54080a8e2537ed9c174e4cd
    Static pod: kube-scheduler-k8s-prod-master2 hash: c888f571a5ca45c57074e8bd29d45798
    [upgrade/etcd] Upgrading to TLS for etcd
    Static pod: etcd-k8s-prod-master2 hash: c29941cc1aa16a5fc1f0c505075d5069
    [upgrade/staticpods] Preparing for "etcd" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-53-12/etcd.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: etcd-k8s-prod-master2 hash: c29941cc1aa16a5fc1f0c505075d5069
    Static pod: etcd-k8s-prod-master2 hash: 7c0e2e3107c5919fa31561ab80d4a6d1
    [apiclient] Found 3 Pods for label selector component=etcd
    [upgrade/staticpods] Component "etcd" upgraded successfully!
    [upgrade/etcd] Waiting for etcd to become available
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests450070159"
    [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-53-12/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-k8s-prod-master2 hash: 41494b6b716efb9d74599b8f51e1a7bb
    Static pod: kube-apiserver-k8s-prod-master2 hash: 41494b6b716efb9d74599b8f51e1a7bb
    Static pod: kube-apiserver-k8s-prod-master2 hash: 41494b6b716efb9d74599b8f51e1a7bb
    Static pod: kube-apiserver-k8s-prod-master2 hash: 41494b6b716efb9d74599b8f51e1a7bb
    Static pod: kube-apiserver-k8s-prod-master2 hash: 3b31616504ce6e92cf5ed314dce90f74
    [apiclient] Found 3 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-53-12/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-k8s-prod-master2 hash: e35efcd0b54080a8e2537ed9c174e4cd
    Static pod: kube-controller-manager-k8s-prod-master2 hash: 4f8382a5e369e7caf52148006eb21dac
    [apiclient] Found 3 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-53-12/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-k8s-prod-master2 hash: c888f571a5ca45c57074e8bd29d45798
    Static pod: kube-scheduler-k8s-prod-master2 hash: 92ada396a5fce07cd05526431ce7ba3e
    [apiclient] Found 3 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [upgrade] The control plane instance for this node was successfully updated!
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [upgrade] The configuration for this node was successfully updated!
    [upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
  • Update the yum repo cache on the third master node

    [root@k8s-prod-master3 ~]# yum makecache fast
  • Upgrade kubeadm version to 1.16.15 on the third master node

    [root@k8s-prod-master3 ~]# yum install -y kubeadm-1.16.15-0 --disableexcludes=kubernetes
  • Pre-pull the images

    [root@k8s-prod-master3 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-apiserver:v1.16.15
    [root@k8s-prod-master3 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-controller-manager:v1.16.15
    [root@k8s-prod-master3 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-scheduler:v1.16.15
    [root@k8s-prod-master3 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/kube-proxy:v1.16.15
    [root@k8s-prod-master3 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/coredns:1.6.2
    [root@k8s-prod-master3 ~]# docker pull harbor.olavoice.com/k8s.gcr.io/etcd:3.3.15-0
  • Run the kubeadm upgrade node command on the third master node to upgrade the k8s components to 1.16.15

    [root@k8s-prod-master3 ~]# kubeadm upgrade node
    [upgrade] Reading configuration from the cluster...
    [upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.16.15"...
    Static pod: kube-apiserver-k8s-prod-master3 hash: 67d0682f25ed725533617a42eac46523
    Static pod: kube-controller-manager-k8s-prod-master3 hash: e35efcd0b54080a8e2537ed9c174e4cd
    Static pod: kube-scheduler-k8s-prod-master3 hash: c888f571a5ca45c57074e8bd29d45798
    [upgrade/etcd] Upgrading to TLS for etcd
    Static pod: etcd-k8s-prod-master3 hash: 06486501aecbdefad0781a265587e663
    [upgrade/staticpods] Preparing for "etcd" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-57-06/etcd.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: etcd-k8s-prod-master3 hash: 06486501aecbdefad0781a265587e663
    Static pod: etcd-k8s-prod-master3 hash: 038dd587665a6a1ab3259c2775cda1b3
    [apiclient] Found 3 Pods for label selector component=etcd
    [upgrade/staticpods] Component "etcd" upgraded successfully!
    [upgrade/etcd] Waiting for etcd to become available
    {"level":"warn","ts":"2021-05-18T10:57:24.548+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.16.20.54:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
    [upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests480460980"
    [upgrade/staticpods] Preparing for "kube-apiserver" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-57-06/kube-apiserver.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-apiserver-k8s-prod-master3 hash: 67d0682f25ed725533617a42eac46523
    Static pod: kube-apiserver-k8s-prod-master3 hash: 986c565fb0e7702ad9bc4f310db14929
    [apiclient] Found 3 Pods for label selector component=kube-apiserver
    [upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-57-06/kube-controller-manager.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-controller-manager-k8s-prod-master3 hash: e35efcd0b54080a8e2537ed9c174e4cd
    Static pod: kube-controller-manager-k8s-prod-master3 hash: 4f8382a5e369e7caf52148006eb21dac
    [apiclient] Found 3 Pods for label selector component=kube-controller-manager
    [upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
    [upgrade/staticpods] Preparing for "kube-scheduler" upgrade
    [upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-05-18-10-57-06/kube-scheduler.yaml"
    [upgrade/staticpods] Waiting for the kubelet to restart the component
    [upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
    Static pod: kube-scheduler-k8s-prod-master3 hash: c888f571a5ca45c57074e8bd29d45798
    Static pod: kube-scheduler-k8s-prod-master3 hash: 92ada396a5fce07cd05526431ce7ba3e
    [apiclient] Found 3 Pods for label selector component=kube-scheduler
    [upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
    [upgrade] The control plane instance for this node was successfully updated!
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [upgrade] The configuration for this node was successfully updated!
    [upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
  • Viewing Cluster Status

    [root@k8s-prod-master2 ~]# kubectl get nodes
    NAME               STATUS   ROLES    AGE    VERSION
    k8s-prod-master1   Ready    master   509d   v1.15.0
    k8s-prod-master2   Ready    master   509d   v1.15.0
    k8s-prod-master3   Ready    master   509d   v1.15.0
    olami-asr2         Ready    <none>   509d   v1.15.0
    olami-k8s-node1    Ready    <none>   447d   v1.15.0
    olami-nlp-model    Ready    <none>   145d   v1.15.0
  • Drain the k8s-prod-master1 node to make it unschedulable

    [root@k8s-prod-master2 ~]# kubectl drain k8s-prod-master1 --ignore-daemonsets
    node/k8s-prod-master1 cordoned
    error: unable to drain node "k8s-prod-master1", aborting command...
    
    There are pending nodes to be drained:
    k8s-prod-master1
    error: cannot delete Pods with local storage (use --delete-local-data to override): kubesphere-system/redis-ha-haproxy-75776f44c4-lk8tq
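
    The drain aborts because a pod using local (emptyDir) storage is running on the node, and kubectl refuses to evict it without an explicit override. If you want the node fully drained before upgrading, the command can be re-run with the flag suggested in the error message; note that this deletes the pod's local data. Otherwise, as in this walkthrough, the node is simply left cordoned and the upgrade proceeds:

    # Optional: also evict pods with emptyDir storage (their local data is lost)
    kubectl drain k8s-prod-master1 --ignore-daemonsets --delete-local-data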
  • Upgrade kubelet and kubectl to 1.16.15 on the first master node

    [root@k8s-prod-master1 ~]# yum install -y kubelet-1.16.15-0 kubectl-1.16.15-0 --disableexcludes=kubernetes
    [root@k8s-prod-master1 ~]# systemctl daemon-reload && systemctl restart kubelet
  • Uncordon the first master node so that it can be scheduled again

    [root@k8s-prod-master2 ~]# kubectl uncordon k8s-prod-master1
    node/k8s-prod-master1 uncordoned
  • Viewing Cluster Status

    [root@k8s-prod-master2 ~]# kubectl get nodes
    NAME               STATUS   ROLES    AGE    VERSION
    k8s-prod-master1   Ready    master   509d   v1.16.15
    k8s-prod-master2   Ready    master   509d   v1.15.0
    k8s-prod-master3   Ready    master   509d   v1.15.0
    olami-asr2         Ready    <none>   509d   v1.15.0
    olami-k8s-node1    Ready    <none>   447d   v1.15.0
    olami-nlp-model    Ready    <none>   145d   v1.15.0

    The version of k8s-prod-master1 has been updated to v1.16.15.

  • Drain the k8s-prod-master2 node to make it unschedulable

    [root@k8s-prod-master1 ~]# kubectl drain k8s-prod-master2 --ignore-daemonsets
    node/k8s-prod-master2 cordoned
    error: unable to drain node "k8s-prod-master2", aborting command...
    
    There are pending nodes to be drained:
    k8s-prod-master2
    error: cannot delete Pods with local storage (use --delete-local-data to override): kubesphere-system/redis-ha-haproxy-75776f44c4-x8l2c
  • Upgrade kubelet and kubectl to 1.16.15 on the second master node

    [root@k8s-prod-master2 ~]# yum install -y kubelet-1.16.15-0 kubectl-1.16.15-0 --disableexcludes=kubernetes
    [root@k8s-prod-master2 ~]# systemctl daemon-reload && systemctl restart kubelet
  • Uncordon the second master node so that it can be scheduled again

    [root@k8s-prod-master1 ~]# kubectl uncordon k8s-prod-master2
    node/k8s-prod-master2 uncordoned
  • Viewing Cluster Status

    [root@k8s-prod-master1 ~]# kubectl get nodes
    NAME               STATUS   ROLES    AGE    VERSION
    k8s-prod-master1   Ready    master   509d   v1.16.15
    k8s-prod-master2   Ready    master   509d   v1.16.15
    k8s-prod-master3   Ready    master   509d   v1.15.0
    olami-asr2         Ready    <none>   509d   v1.15.0
    olami-k8s-node1    Ready    <none>   447d   v1.15.0
    olami-nlp-model    Ready    <none>   145d   v1.15.0

    The version of k8s-prod-master2 has been updated to v1.16.15.

  • Drain the k8s-prod-master3 node to make it unschedulable

    [root@k8s-prod-master1 ~]# kubectl drain k8s-prod-master3 --ignore-daemonsets
    node/k8s-prod-master3 cordoned
    error: unable to drain node "k8s-prod-master3", aborting command...
    
    There are pending nodes to be drained:
    k8s-prod-master3
    error: cannot delete Pods with local storage (use --delete-local-data to override): kubesphere-system/redis-ha-haproxy-75776f44c4-ktts7
  • Upgrade kubelet and kubectl to 1.16.15 on the third master node

    [root@k8s-prod-master3 ~]# yum install -y kubelet-1.16.15-0 kubectl-1.16.15-0 --disableexcludes=kubernetes
    [root@k8s-prod-master3 ~]# systemctl daemon-reload && systemctl restart kubelet
  • Uncordon the third master node so that it can be scheduled again

    [root@k8s-prod-master1 ~]# kubectl uncordon k8s-prod-master3
    node/k8s-prod-master3 uncordoned
    [root@k8s-prod-master1 ~]# kubectl get nodes
    NAME               STATUS   ROLES    AGE    VERSION
    k8s-prod-master1   Ready    master   509d   v1.16.15
    k8s-prod-master2   Ready    master   509d   v1.16.15
    k8s-prod-master3   Ready    master   509d   v1.16.15
    olami-asr2         Ready    <none>   509d   v1.15.0
    olami-k8s-node1    Ready    <none>   447d   v1.15.0
    olami-nlp-model    Ready    <none>   145d   v1.15.0

    The version of k8s-prod-master3 has been updated to v1.16.15. With that, all three master nodes have been upgraded; the worker nodes are next.

Upgrade the worker nodes

The following uses the node olami-nlp-model as an example; the procedure is the same for the other worker nodes.

  • Update the Yum repo cache

    [root@olami-nlp-model ~]# yum makecache fast
  • Upgrade kubeadm to the target version

    [root@olami-nlp-model ~]# yum install -y kubeadm-1.16.15-0 --disableexcludes=kubernetes
  • Run kubeadm upgrade node to upgrade the node configuration

    [root@olami-nlp-model ~]# kubeadm upgrade node
    [upgrade] Reading configuration from the cluster...
    [upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
    [upgrade] Skipping phase. Not a control plane node
    [kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
    [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
    [upgrade] The configuration for this node was successfully updated!
    [upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
  • Drain the node, marking it unschedulable and evicting its workloads

    [root@k8s-prod-master1 ~]# kubectl drain olami-nlp-model --ignore-daemonsets
    node/olami-nlp-model cordoned
    error: unable to drain node "olami-nlp-model", aborting command...
    
    There are pending nodes to be drained:
    olami-nlp-model
    error: cannot delete Pods with local storage (use --delete-local-data to override): kubesphere-logging-system/fluentbit-operator-5cb575bcc6-r5jqh, kubesphere-monitoring-system/alertmanager-main-0
  • Upgrade kubelet and kubectl

    [root@olami-nlp-model ~]# yum install -y kubelet-1.16.15-0 kubectl-1.16.15-0 --disableexcludes=kubernetes
    [root@olami-nlp-model ~]# systemctl daemon-reload && systemctl restart kubelet
  • Uncordon the node so that it can be scheduled again

    [root@k8s-prod-master1 ~]# kubectl uncordon olami-nlp-model
    node/olami-nlp-model uncordoned
  • Viewing Cluster Status

    [root@k8s-prod-master1 ~]# kubectl get nodes
    NAME               STATUS   ROLES    AGE    VERSION
    k8s-prod-master1   Ready    master   509d   v1.16.15
    k8s-prod-master2   Ready    master   509d   v1.16.15
    k8s-prod-master3   Ready    master   509d   v1.16.15
    olami-asr2         Ready    <none>   509d   v1.15.0
    olami-k8s-node1    Ready    <none>   447d   v1.15.0
    olami-nlp-model    Ready    <none>   145d   v1.16.15
  • Repeat the same steps on the remaining worker nodes to reach the final cluster state

    [root@k8s-prod-master1 ~]# kubectl get nodes
    NAME               STATUS   ROLES    AGE    VERSION
    k8s-prod-master1   Ready    master   509d   v1.16.15
    k8s-prod-master2   Ready    master   509d   v1.16.15
    k8s-prod-master3   Ready    master   509d   v1.16.15
    olami-asr2         Ready    <none>   509d   v1.16.15
    olami-k8s-node1    Ready    <none>   447d   v1.16.15
    olami-nlp-model    Ready    <none>   145d   v1.16.15
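
    For the remaining worker nodes, the per-node commands can also be collected into a small script. The following is a sketch under the assumptions of this walkthrough (the kubernetes yum repo is configured and the target version is 1.16.15-0); the drain and uncordon steps are still run from a master node:

    #!/usr/bin/env bash
    # Sketch: commands run ON the worker node being upgraded.
    set -euo pipefail
    VERSION="1.16.15-0"

    yum makecache fast
    yum install -y "kubeadm-${VERSION}" --disableexcludes=kubernetes
    kubeadm upgrade node

    # At this point, drain the node FROM A MASTER (as in the steps above):
    #   kubectl drain <node-name> --ignore-daemonsets

    yum install -y "kubelet-${VERSION}" "kubectl-${VERSION}" --disableexcludes=kubernetes
    systemctl daemon-reload && systemctl restart kubelet

    # Then, FROM A MASTER, uncordon the node and verify its version:
    #   kubectl uncordon <node-name>
    #   kubectl get nodes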

    At this point, the K8S cluster upgrade is complete