Author: Zhang Pan (Yuzhe) | Source: Erda official account

Erda, a one-stop cloud-native PaaS platform, is now open source, making more than 700,000 lines of core code available to developers. Alongside open-sourcing Erda, we plan to write a series of articles called "Cloud Native PaaS Platform Infrastructure Based on K8s", hoping that our experience can help more enterprises build their PaaS platform infrastructure. This is the first article in the series.

Origin

Kubernetes deprecates Docker as a container runtime after v1.20: github.com/kubernetes/…

At the end of 2020, Kubernetes officially announced that Docker support would be deprecated as of v1.20, and users now receive a Docker deprecation warning in the kubelet startup log. This came as a heavy blow to developers and engineers who still use Docker with Kubernetes. So how does Kubernetes' abandonment of Docker affect us? Don't panic; it's not as bad as you might think.

If you’re rolling your own clusters, you will also need to make changes to avoid your clusters breaking. At v1.20, you will get a deprecation warning for Docker. When Docker runtime support is removed in a future release (currently planned for the 1.22 release in late 2021) of Kubernetes it will no longer be supported and you will need to switch to one of the other compliant container runtimes, like containerd or CRI-O. Just make sure that the runtime you choose supports the docker daemon configurations you currently use (e.g. logging).

In v1.20 you only receive a deprecation warning; dockershim will not actually be removed until v1.22, currently planned for late 2021. That leaves us roughly a year of buffer to find a suitable CRI runtime, such as containerd or CRI-O, and ensure a smooth transition.

Why Docker Is Out

Why did Kubernetes drop Docker in favor of other CRI runtimes? As we know, CRI was introduced in Kubernetes v1.5 to act as a bridge between kubelet and the container runtime. CRI is a container-centric API, deliberately designed not to expose the Pod concept or Pod API to runtimes such as Docker. With this interface in place, Kubernetes can support more container runtimes without being recompiled. Docker, however, is not CRI-compatible. To adapt it, Kubernetes invented dockershim, which translates CRI calls into Docker API calls: kubelet talks to Docker through dockershim, and Docker in turn talks to containerd underneath. With that, everything works happily. As shown below:

  • In order to support multiple OCI runtimes, a new containerd-shim process is spawned for each started container, which is passed the container ID, the bundle directory, and the runtime binary (runc). Dockershim allows kubelet to interact with Docker as if Docker were a CRI-compliant runtime.
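On a node that still runs Docker as the runtime, this chain of components is visible in the process list. A rough sketch (process names can vary slightly between versions):

```shell
# Expected chain on a Docker-based node:
#   kubelet (dockershim built in) -> dockerd -> containerd -> containerd-shim -> runc
# List the runtime daemons and the per-container shim processes
ps -eo pid,ppid,comm | grep -E 'kubelet|dockerd|containerd'
```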

Everything was fine until the end of last year, when Kubernetes publicly upset the balance. The explanation given is that maintaining dockershim has become a heavy burden for the Kubernetes maintainers. Dockershim has always been a compatibility layer maintained by the Kubernetes community in order to keep Docker a supported container runtime, and Kubernetes is now dropping dockershim support from the Kubernetes repository. The crux of the matter is that Docker itself still does not implement CRI.

Now that we briefly understand why Kubernetes abandoned Docker, what impact does dropping Docker have on us, and what are the alternatives?

  1. If you rely on the underlying Docker socket (/var/run/docker.sock) as part of a workflow within your cluster today, moving to a different runtime will break that workflow.
  2. Make sure no privileged pods execute Docker commands.
  3. Check that scripts and apps running on nodes outside of your Kubernetes infrastructure do not execute Docker commands.
  4. Update third-party tools that perform the privileged operations mentioned above.
  5. Make sure there are no indirect dependencies on dockershim behavior.
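For the first few checks, one practical approach is to scan pod specs for hostPath mounts of the Docker socket. A sketch using jq (assumed to be installed) over the JSON that `kubectl get pods -A -o json` would return; here it runs against a saved sample file:

```shell
# Sample output of `kubectl get pods -A -o json`, reduced to the relevant fields
cat > /tmp/pods.json <<'EOF'
{"items":[
 {"metadata":{"namespace":"default","name":"legacy-agent"},
  "spec":{"volumes":[{"hostPath":{"path":"/var/run/docker.sock"}}]}},
 {"metadata":{"namespace":"default","name":"clean-app"},
  "spec":{"volumes":[{"emptyDir":{}}]}}
]}
EOF

# List namespace/name of every pod that mounts the Docker socket
jq -r '.items[]
       | select(.spec.volumes[]?.hostPath.path == "/var/run/docker.sock")
       | "\(.metadata.namespace)/\(.metadata.name)"' /tmp/pods.json
# -> default/legacy-agent
```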

For users, Kubernetes' decision affects applications and event streams that rely on docker.sock, kubelet's --container-runtime-endpoint parameter, the execution of Docker commands, and anything that depends on the dockershim component.

Alternatives

What are the alternatives?

Alternative 1: Containerd

Containerd (containerd.io) is an open source project that Docker donated to the CNCF and that has since graduated from the CNCF. Containerd is an industry-standard container runtime that emphasizes simplicity, robustness, and portability; it is designed to be embedded into larger systems rather than used directly by developers or end users. Kubernetes uses containerd as the cluster container runtime through the CRI interface, as shown in the following figure:

  • The CRI plugin is a native containerd plugin. Starting with containerd v1.1, the CRI plugin is built into the containerd binary.

Containerd deployment

```shell
# Load required kernel modules at startup
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Set required sysctl parameters; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply sysctl parameters without rebooting
sudo sysctl --system

# Use the docker-ce repository
sudo yum-config-manager \
    --add-repo \
    http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install containerd
sudo yum install -y containerd.io

# Generate the default containerd configuration
sudo mkdir -p /etc/containerd
containerd config default | sudo tee /etc/containerd/config.toml

# Edit the configuration file and add "SystemdCgroup = true"
# to use systemd as the cgroup driver:
#   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
#     ...
#     [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
#       SystemdCgroup = true

# Restart containerd
sudo systemctl restart containerd
```

Use crictl to connect to containerd and verify that the CRI plugin is in use, then check the CRI runtime reported by the K8s cluster and the CRI socket specified by kubelet (screenshots omitted). At this point we have replaced Docker with containerd: the Kubernetes cluster now runs on containerd and runc.
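A minimal verification sketch (the socket path is containerd's default; crictl is installed separately via the cri-tools package):

```shell
# Point crictl at the containerd socket
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF

# Query runtime status and list containers through the CRI plugin
sudo crictl info
sudo crictl ps

# The node should now report containerd as its runtime
kubectl get nodes -o wide   # CONTAINER-RUNTIME column: containerd://<version>
```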

Alternative 2: CRI-O

CRI-O (cri-o.io) is a container runtime initiated and open-sourced by Red Hat. It is an OCI (Open Container Initiative)-based container runtime for Kubernetes and an implementation of the Kubernetes CRI standard, allowing Kubernetes to use any OCI-compliant container runtime indirectly. CRI-O can be seen as a middle layer between Kubernetes and OCI-compliant container runtimes, as shown in the following figure:

CRI-O deployment

```shell
# Create a .conf file to load the required modules at startup
cat <<EOF | sudo tee /etc/modules-load.d/crio.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

# Set required sysctl parameters; these persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system

# Match the CRI-O version to the Kubernetes version
VERSION=1.21
OS=CentOS_8

# Download the yum repos and install
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo \
    https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
sudo curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo \
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
sudo yum install -y cri-o

# Start CRI-O
sudo systemctl daemon-reload
sudo systemctl start crio
sudo systemctl enable crio

# Check the kubelet flags
cat /var/lib/kubelet/kubeadm-flags.env
# KUBELET_KUBEADM_ARGS="--container-runtime=remote --container-runtime-endpoint=/run/crio/crio.sock --pod-infra-container-image=k8s.gcr.io/pause:3.4.1"
```

Use crictl to connect to CRI-O and verify that the CRI plugin is in use, then check the CRI socket specified by kubelet for the K8s cluster (screenshots omitted). At this point we have replaced Docker with CRI-O: the Kubernetes cluster now runs on CRI-O and runc.
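A minimal verification sketch (the socket path is the CRI-O default; crictl is installed separately via the cri-tools package):

```shell
# Point crictl at the CRI-O socket
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/crio/crio.sock
EOF

# Query runtime status through the CRI interface
sudo crictl info

# The node should now report cri-o as its runtime
kubectl get nodes -o wide   # CONTAINER-RUNTIME column: cri-o://<version>
```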

Migration in Practice

How do we change the container runtime of a currently running K8s cluster? Let's take CRI-O as an example:

  1. Version adaptation: select the CRI-O version that matches your Kubernetes version.

  2. Change the registry repository and the pause image.

  3. Pod migration:
```shell
# Cordon the node whose CRI is being changed and evict all pods on it
kubectl drain [node-name] --force --ignore-daemonsets --delete-local-data

# Switch kubelet to the new CRI socket (e.g. /run/crio/crio.sock),
# then uncordon the node so it accepts new pod requests again
kubectl uncordon [node-name]
```
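For step 2 above, a sketch of pointing CRI-O at a reachable pause image (the registry mirror here is an assumption; substitute your own):

```shell
# Set pause_image in the CRI-O config (default path /etc/crio/crio.conf)
sudo sed -i 's|^# *pause_image =.*|pause_image = "registry.aliyuncs.com/google_containers/pause:3.4.1"|' /etc/crio/crio.conf
sudo systemctl restart crio
```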

Example. Step 1: determine the environment information. Step 2: use kubectl drain to safely evict all pods from the node:

```shell
# kubectl drain izj6cco138rpkaoqqn6ldnz --force --ignore-daemonsets --delete-local-data
node/izj6cco138rpkaoqqn6ldnz cordoned
WARNING: ignoring DaemonSet-managed Pods: calico-system/calico-node-7l4gc, kube-system/kube-proxy-kztbh
evicting pod default/kube-demo-7456947cdc-wmqb5
evicting pod default/kube-demo-7456947cdc-kfrqr
evicting pod calico-system/calico-typha-5f84f554ff-hzxbg
pod/calico-typha-5f84f554ff-hzxbg evicted
pod/kube-demo-7456947cdc-wmqb5 evicted
pod/kube-demo-7456947cdc-kfrqr evicted
node/izj6cco138rpkaoqqn6ldnz evicted
```

Step 3: verify the current pod status. Step 4: uninstall Docker and install CRI-O (omitted). Step 5: modify kubelet to specify the container-runtime endpoint:

```shell
# vim /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--cgroup-driver=systemd --network-plugin=cni --pod-infra-container-image=k8s.gcr.io/pause:3.2 --resolv-conf=/run/systemd/resolve/resolv.conf --container-runtime=remote --container-runtime-endpoint=/run/crio/crio.sock"
```

Step 6: restore the node so it accepts new pod requests, and verify. Step 7: master nodes cannot be drained, so kubelet can only be stopped; worker nodes and pods keep running, but the cluster is unmanageable during this time. Change the node annotation kubeadm.alpha.kubernetes.io/cri-socket from /var/run/dockershim.sock to the new socket, change kubelet as in step 5, then verify the master node.
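The switch on a node can be checked roughly as follows (node name taken from the example above; kubelet flags assumed to be updated as in step 5):

```shell
# Reload and restart kubelet so the new flags take effect
sudo systemctl daemon-reload
sudo systemctl restart kubelet

# Make the node schedulable again and confirm the reported runtime
kubectl uncordon izj6cco138rpkaoqqn6ldnz
kubectl get node izj6cco138rpkaoqqn6ldnz \
  -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'
# e.g. cri-o://1.21.0
```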

References

  • Don’t Panic: Kubernetes and Docker

https://kubernetes.io/blog/2020/12/02/dont-panic-kubernetes-and-docker/

  • Dockershim Deprecation FAQ:

https://kubernetes.io/blog/2020/12/02/dockershim-faq/

  • Getting Started with Containerd:

https://containerd.io/docs/getting-started/

  • Containerd GitHub address:

https://github.com/containerd/containerd

  • CRI-O GitHub address:

github.com/cri-o/cri-o

Welcome to open source

As an open-source one-stop cloud-native PaaS platform, Erda provides platform-level capabilities such as DevOps, microservice observability and governance, multi-cloud management, and fast-data governance. Click the links below to participate in the open source project, discuss and communicate with other developers, and help build the open source community. Follow us, contribute code, and give us a star!

  • Erda GitHub address: https://github.com/erda-project/erda
  • Erda Cloud official website: https://www.erda.cloud/