Author | Zhang Pan (Yuzhe)  Source | Erda official account

Takeaway: As a one-stop cloud native PaaS platform, Erda has now open-sourced 700,000+ lines of core code for developers. Alongside the open source release, we plan to write a series of articles entitled "Cloud Native PaaS Platform Infrastructure Based on K8s", hoping that our experience can help more enterprises build their PaaS platform infrastructure. This article is the first in the series.

Origin

Kubernetes will deprecate Docker as a container runtime after version 1.20:


https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.20.md#deprecation

At the end of 2020, Kubernetes officially announced that it would deprecate Docker support as of v1.20, and that users would see a Docker deprecation warning in the kubelet startup log. The news caused quite a stir among developers and engineers still running Docker with Kubernetes. So how does Kubernetes' move away from Docker affect us? Don't panic: this is not as scary as it sounds.

If you’re rolling your own clusters, you will also need to make changes to avoid your clusters breaking. At v1.20, you will get a deprecation warning for Docker. When Docker runtime support is removed in a future release (currently planned for the 1.22 release in late 2021) of Kubernetes it will no longer be supported and you will need to switch to one of the other compliant container runtimes, like containerd or CRI-O. Just make sure that the runtime you choose supports the docker daemon configurations you currently use (e.g. logging).

In v1.20 we only receive a Docker deprecation warning; dockershim will not actually be removed until v1.22, currently planned for late 2021. That means we still have about a year of buffer to find the right CRI runtime, such as containerd or CRI-O, and ensure a smooth transition.

Why Docker is out

Why did Kubernetes abandon Docker in favor of other CRI runtimes? As we know, CRI was introduced in Kubernetes v1.5 to act as a bridge between the kubelet and the container runtime. In short, CRI is a container-centric API that was not designed to expose Pod-level information or APIs to container runtimes such as Docker. With this interface, Kubernetes can use more container runtimes without recompiling. However, Docker is not compatible with CRI. To fit Docker in, Kubernetes created dockershim, which converts CRI calls into Docker API calls: the kubelet talks to Docker through dockershim, and Docker in turn talks to containerd below it. With that, everything works happily, as shown in the figure below:

  • In order to support multiple OCI runtimes, a new shim process is pulled up for each container that is started, and is passed the container ID, bundle directory, runtime binary (runc), and so on. Dockershim, for its part, allows the kubelet to interact with Docker as if Docker were a CRI-compatible runtime.

All was well and good until late last year, when Kubernetes publicly tipped the balance. According to the Kubernetes announcement, maintaining dockershim had become a heavy burden for the Kubernetes maintainers. Dockershim has always been a compatibility layer maintained by the Kubernetes community so that Docker could serve as a supported container runtime. The so-called abandonment simply means that Kubernetes will stop maintaining dockershim in its own code repository; the root of the matter is that Docker itself still does not implement CRI.

After this brief look at why Kubernetes abandoned Docker, we need to know how dropping Docker affects us, and check the following:

  1. Make sure no workflow in your cluster relies on the underlying Docker socket (/var/run/docker.sock); moving to a different runtime will break it.
  2. Make sure no privileged Pods execute Docker commands.
  3. Check that scripts and apps running on nodes outside of the Kubernetes infrastructure do not execute Docker commands.
  4. Check for third-party tools that perform the privileged operations mentioned above.
  5. Make sure there are no indirect dependencies on dockershim behavior.

For users, this decision by Kubernetes affects applications and event streams that rely on docker.sock, the execution of Docker commands, and anything that depends on the dockershim component (for example, the kubelet's container-runtime-endpoint parameter).

A way out

What are the alternatives?

Alternative 1: Containerd

Containerd (https://containerd.io) is an open source project that Docker donated to the CNCF, and it has since graduated from the CNCF. Containerd is an industry-standard container runtime with an emphasis on simplicity, robustness, and portability, intended to be embedded within a larger system rather than used directly by developers or end users. Kubernetes uses containerd as the container runtime of the cluster through the CRI interface, as shown in the figure below:



  • The CRI plugin is a native plugin of containerd; as of containerd v1.1, the CRI plugin is built into the released containerd binaries.

Containerd deployment

# Create a .conf file to load the required modules at boot
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply the sysctl parameters without restarting
sudo sysctl --system

# Use the docker-ce yum repository
sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install containerd
sudo yum install -y containerd.io

# Generate the default containerd configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# Edit the configuration file and add "SystemdCgroup = true"
# to use systemd as the cgroup driver:
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
#     ...
#     [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#       SystemdCgroup = true

# Restart containerd
sudo systemctl restart containerd



Use crictl to connect to containerd to verify the use of the CRI plugin:
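As a sketch of this verification, assuming containerd's default socket path:

```shell
# Tell crictl where the containerd CRI socket lives (containerd's default path)
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF

# If the CRI plugin is working, the runtime name reported is containerd
sudo crictl version
sudo crictl info
```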





To see the CRI types used by the K8S cluster:
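For example, the wide output of kubectl shows the runtime per node:

```shell
# The CONTAINER-RUNTIME column reports the runtime and version,
# e.g. containerd://1.4.4 or docker://19.3.15
kubectl get nodes -o wide
```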







Check the CRI socket specified by Kubelet:
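A sketch of this check; kubeadm-managed clusters record the socket both in the kubelet flags file and as a node annotation:

```shell
# The kubelet flags written by kubeadm include the CRI endpoint
cat /var/lib/kubelet/kubeadm-flags.env

# kubeadm also records the socket as a node annotation
kubectl describe node [node-name] | grep cri-socket
```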







At this point, we have replaced Docker with containerd, and the Kubernetes cluster now drives runc through containerd instead.

Alternative 2: CRI-O

CRI-O (https://cri-o.io) is a container runtime launched and open-sourced by Red Hat. It is an OCI (Open Container Initiative)-based implementation of the Kubernetes CRI standard, which lets Kubernetes use OCI-compatible container runtimes indirectly. Think of CRI-O as an intermediate layer between Kubernetes and OCI-compatible container runtimes, as shown in the figure below:



CRI-O deployment

# Create a .conf file to load the required modules at boot
cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Configure sysctl parameters; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system

# Set the CRI-O version that matches Kubernetes, and the OS
VERSION=1.21
OS=CentOS_8

# Download the yum repositories and perform the install
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo \
    https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo \
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
sudo yum install -y cri-o

# Start and enable cri-o
sudo systemctl daemon-reload
sudo systemctl start crio
sudo systemctl enable crio

# Change the kubelet parameters to specify the CRI-O socket file
cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/crio/crio.sock --pod-infra-container-image=k8s.gcr.io/pause:3.4.1"

# Restart the kubelet
sudo systemctl restart kubelet



Use crictl to connect to cri-o and verify the use of the cri plug-in:
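A minimal sketch of this verification, assuming CRI-O's default socket path:

```shell
# Point crictl at the CRI-O socket (CRI-O's default path)
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/crio/crio.sock
EOF

# The runtime name reported should now be cri-o
sudo crictl version
sudo crictl info
```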









To see the CRI type used by the K8S cluster and the CRI socket specified by Kubelet:
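As a sketch, assuming a kubeadm-managed cluster:

```shell
# The CONTAINER-RUNTIME column should now read cri-o://<version>
kubectl get nodes -o wide

# The node annotation should point at the CRI-O socket
kubectl describe node [node-name] | grep cri-socket
```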







At this point, we have replaced Docker with CRI-O, and the Kubernetes cluster now drives runc through CRI-O instead.

The present: migrating a running cluster

How do you change the container runtime of a currently running K8s cluster? Take CRI-O as an example:

  1. Check version compatibility and select the matching version
  2. Change the registry repository and the pause image

  3. Pod migration
# Cordon the node whose CRI will be changed and evict all Pods on it
kubectl drain [node-name] --force --ignore-daemonsets --delete-local-data

# Stop Docker, install and start CRI-O, and point the kubelet at /run/crio/crio.sock

# Verify the node status
kubectl get node

# Uncordon the node so it can receive new Pod requests
kubectl uncordon [node-name]



Example:



Step 1: Determine the environment information.
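For example, record the versions, runtimes, and services currently in use:

```shell
# Record the Kubernetes version and current container runtime of each node
kubectl get nodes -o wide

# Confirm Docker and the kubelet are the components running on the target node
systemctl status docker kubelet
```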







Step 2: Safely evict all Pods from the node using kubectl drain.

# kubectl drain izj6cco138rpkaoqqn6ldnz --force --ignore-daemonsets --delete-local-data 
node/izj6cco138rpkaoqqn6ldnz cordoned
WARNING: ignoring DaemonSet-managed Pods: calico-system/calico-node-7l4gc, kube-system/kube-proxy-kztbh
evicting pod default/kube-demo-7456947cdc-wmqb5
evicting pod default/kube-demo-7456947cdc-kfrqr
evicting pod calico-system/calico-typha-5f84f554ff-hzxbg
pod/calico-typha-5f84f554ff-hzxbg evicted
pod/kube-demo-7456947cdc-wmqb5 evicted
pod/kube-demo-7456947cdc-kfrqr evicted
node/izj6cco138rpkaoqqn6ldnz evicted



Step 3: Verify the current Pod status.
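For example, using the workloads from the drain output above:

```shell
# The evicted kube-demo Pods should be rescheduled onto the remaining nodes
kubectl get pods -o wide

# The drained node shows Ready,SchedulingDisabled
kubectl get node
```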







Step 4: Uninstall Docker and install CRI-O (procedure omitted).



Step 5: Modify the kubelet to specify container-runtime-endpoint.

# vim /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --resolv-conf=/run/systemd/resolve/resolv.conf --container-runtime=remote --container-runtime-endpoint=/run/crio/crio.sock"



Step 6: Uncordon the node so it can receive new Pod requests, then verify.
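A sketch of this step, using the node name from the drain output above:

```shell
# Allow the node to receive new Pods again
kubectl uncordon izj6cco138rpkaoqqn6ldnz

# Verify: the CONTAINER-RUNTIME column should now show cri-o://<version>
kubectl get node izj6cco138rpkaoqqn6ldnz -o wide
```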









Step 7: Migrate the master node.

Since the master node cannot be drained, we can only stop the kubelet. The worker nodes and Pods keep running, but the cluster is temporarily in an unmanaged state.



Change the node annotation kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock to the new CRI socket.
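One way to update the annotation in place (the socket path below assumes CRI-O):

```shell
# Rewrite the CRI socket annotation on the master node
kubectl annotate node [master-node] --overwrite \
    kubeadm.alpha.kubernetes.io/cri-socket=/run/crio/crio.sock
```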



Modify the kubelet (same as Step 5).







Verify the master node.



References

  • Don’t Panic: Kubernetes and Docker

https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/

  • Dockershim Deprecation FAQ:

https://kubernetes.io/blog/2020/12/02/dockershim-faq/

  • Getting Started with Containerd:

https://containerd.io/docs/getting-started/

  • Containerd GitHub:

https://github.com/containerd/containerd

  • CRI-O GitHub address:

https://github.com/cri-o/cri-o

Welcome to Open Source

As an open source one-stop cloud native PaaS platform, Erda provides platform-level capabilities such as DevOps, microservice observability and governance, multi-cloud management, and fast-data governance. Click the links below to participate in the open source project, discuss and communicate with many developers, and help build the open source community. Everyone is welcome to follow the project, contribute code, and star it!

  • Erda GitHub: https://github.com/erda-project/erda
  • Erda Cloud website: https://www.erda.cloud/