Abstract: KubeEdge is the first open intelligent edge computing platform that extends Kubernetes to provide cloud-edge collaboration capabilities, and it is CNCF's first official project in the intelligent edge field. Relying on Kubernetes' powerful container orchestration and scheduling capabilities, it implements cloud-edge collaboration, compute offloading to the edge, massive device access, and more.

Edge computing scenarios and challenges

Edge computing is a form of distributed computing: a decentralized, open IT architecture in which data is processed by the devices themselves or by nearby computers or servers at the edge of the network, close to the end user, rather than being transmitted to a data center.

However, edge computing cannot exist alone. It must be connected with remote data centers/clouds. Taking IoT (Internet of Things) scenarios as an example, edge devices not only have sensors to collect data of the surrounding environment, but also receive control instructions from the cloud. Therefore, edge computing and cloud computing are interdependent and synergistic.

According to the 2020 State of the Edge report, 75 percent of data will be analyzed and processed at the edge by 2022. This shift in where data is processed is accompanied by the evolution of four edge technologies:

  • The utility of artificial intelligence increases as it moves from the cloud to the edge
  • The number of IoT devices grows exponentially
  • The 5G era arrives rapidly
  • Edge computing facilities gradually overcome the complexity of distributed operation and reach acceptable unit-cost economics

Combining these edge computing scenarios with the direction of technology evolution, we can summarize several challenges currently facing the edge computing field:

  • Cloud-edge collaboration: AI, security, and other workloads gradually extend from the cloud to the edge, requiring intelligent collaboration and flexible migration between cloud and edge;
  • Network: limited reliability and bandwidth of edge networks;
  • Device management: exponential growth of IoT devices, and management of edge nodes and edge devices;
  • Scaling: highly distributed and massively scalable deployments;
  • Heterogeneity: heterogeneous edge hardware and communication protocols.

Advantages and challenges of Kubernetes building edge computing platform

Kubernetes has become the de facto standard for cloud native computing and can provide a consistent cloud experience on any infrastructure. The “container + Kubernetes” combination is now a common sight in DevOps, often credited with order-of-magnitude efficiency gains. Building on Kubernetes' technical architecture and ecosystem advantages, the need to operate Kubernetes outside the data center (at the edge) has also grown in recent years.

An edge computing platform based on Kubernetes has many natural advantages:

(1) Containerized application packaging: the light weight and portability of containers make them well suited to edge computing scenarios. Edge container applications can be built once and run on any edge node.

(2) General application abstraction: Kubernetes' application definition has become the de facto standard in the cloud native industry and is widely accepted. With the native Kubernetes application API, users can manage cloud and edge applications uniformly, for example using the familiar kubectl or Helm charts.
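As an illustration, the same Deployment API that describes a cloud workload can target an edge node through a nodeSelector. This is a minimal sketch; the `node-role.kubernetes.io/edge` label below is an assumption (a common convention for marking edge nodes), and the label in your cluster may differ:

```yaml
# Illustrative only: a standard Deployment pinned to edge nodes via a
# nodeSelector label; the exact edge-node label is cluster-dependent.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-nginx
  template:
    metadata:
      labels:
        app: edge-nginx
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""   # assumed edge-node label
      containers:
        - name: nginx
          image: nginx:stable
```

The manifest is applied with the usual `kubectl apply -f edge-nginx.yaml`, exactly as it would be for a cloud workload.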

(3) Platform extensibility: Kubernetes has proven to be highly extensible. Custom APIs can be defined with CRDs, for example for edge device management, and various edge-specific plug-ins can be built on the CRI, CNI, and CSI interfaces.

(4) Strong technology ecosystem: Kubernetes has formed a strong cloud native ecosystem, with ready-made tool chains for monitoring, logging, CI, storage, and networking.

However, Kubernetes was originally designed for cloud data centers. To extend Kubernetes’ capabilities to the edge, the following problems must be solved:

(1) Limited resources of edge devices: many edge devices have limited resource specifications, particularly weak CPUs and little memory, so a complete Kubernetes deployment is not possible.

(2) Instability of the edge network: Kubernetes relies on the stable network of a data center, whereas the network in edge scenarios is often unstable.

(3) Edge node offline autonomy: Kubernetes relies on the list/watch mechanism and does not support offline operation, yet going offline is routine for edge nodes, for example when a device restarts.

(4) Massive edge device management: how to use Kubernetes to manage the exponentially growing number of edge devices and the data they generate.

In addition, regarding how to use Kubernetes at the edge, a survey conducted by the Kubernetes IoT/Edge Working Group showed that 30% of users want to deploy a complete Kubernetes cluster at the edge, while 70% want to deploy the Kubernetes management plane in the cloud and only a Kubernetes agent on the edge nodes.

Edge container open source status

The Kubernetes community has long been interested in edge computing scenarios. As early as 2018, it established a dedicated edge working group to discuss edge-related use cases. At the end of 2018, Huawei open-sourced KubeEdge, the industry's first Kubernetes edge project, based on the core code of the Huawei Cloud intelligent edge platform product IEF (Intelligent EdgeFabric), and donated it to the CNCF Foundation in early 2019, where it remains CNCF's only official edge computing project to date. Rancher and Alibaba Cloud subsequently followed with open-source projects such as K3s and OpenYurt, and the edge container field entered a period of rapid development. Below we briefly analyze these three representative K8s@Edge projects.

KubeEdge architecture analysis

KubeEdge is an open source project that Huawei Cloud released in November 2018 and donated to CNCF in March 2019. It is the first open intelligent edge computing platform that extends Kubernetes to provide cloud-edge collaboration capabilities, and CNCF's first official project in the intelligent edge field. The name derives from Kube + Edge: as it implies, KubeEdge relies on Kubernetes' powerful container orchestration and scheduling capabilities to achieve cloud-edge collaboration, compute offloading to the edge, massive device access, and more.

The KubeEdge architecture is divided into three layers: cloud, edge, and device. The cloud center controls edge nodes and devices, while edge nodes retain autonomy; this cloud-controlled architecture also matches the demands of most users in the Kubernetes IoT/Edge WG survey. KubeEdge fully connects the collaborative scenarios of cloud, edge, and device in edge computing. The overall architecture is shown in the figure below.

For edge-specific scenarios, KubeEdge focuses on the following problems:

Cloud-edge collaboration: KubeEdge manages edge nodes, devices, and workloads from the cloud using the standard Kubernetes API. System upgrades and application updates for edge nodes can be delivered directly from the cloud, improving edge O&M efficiency. In edge AI scenarios, models trained in the cloud can be sent directly to edge nodes for inference, realizing cloud-edge integration for edge AI.

Edge autonomy: KubeEdge achieves offline node autonomy through a message bus and local storage of metadata. The desired control-plane configuration and real-time device status updates are synchronized via messages into local storage, so the node retains its management metadata even across restarts and can continue to manage the devices and applications running on it.
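The mechanism described above can be sketched in a few lines: a node-local store persists the desired state pushed from the cloud, so the state survives a restart even while the cloud is unreachable. This is a minimal illustration with invented names (`LocalMetaStore`, `sync_from_cloud`), not KubeEdge's actual code:

```python
import json
import os
import tempfile

class LocalMetaStore:
    """Sketch of edge-node offline autonomy: desired state pushed from
    the cloud is persisted locally so it survives a node restart."""

    def __init__(self, path):
        self.path = path
        self.meta = {}
        if os.path.exists(path):          # recover metadata after a restart
            with open(path) as f:
                self.meta = json.load(f)

    def sync_from_cloud(self, key, desired):
        """Called while the cloud connection is up: persist desired state."""
        self.meta[key] = desired
        with open(self.path, "w") as f:   # write-through to local disk
            json.dump(self.meta, f)

    def get(self, key):
        """Works even when the node is offline: reads the local copy."""
        return self.meta.get(key)

# Simulate: receive config while online, then "restart" the node offline.
path = os.path.join(tempfile.mkdtemp(), "meta.json")
store = LocalMetaStore(path)
store.sync_from_cloud("deployment/nginx", {"replicas": 2})

restarted = LocalMetaStore(path)          # fresh process, cloud unreachable
print(restarted.get("deployment/nginx"))  # {'replicas': 2}
```

The real implementation synchronizes through a message bus rather than direct calls, but the persistence idea is the same.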

Extreme lightness: KubeEdge retains the Kubernetes management plane and reorganizes the Kubernetes node-side components to be extremely lightweight; the node components can run on edge nodes with as little as 256 MB of memory.

Massive edge device management: KubeEdge provides a pluggable, unified device-management framework. In the cloud, device-management APIs are defined with Kubernetes CRDs, fully conforming to native Kubernetes standards, so users can manage massive numbers of edge devices through the API. At the edge, device-access drivers can be developed for different protocols or actual requirements; protocols currently supported or planned include MQTT, Bluetooth, OPC UA, and Modbus. As more community partners join, KubeEdge will support more device communication protocols in the future.
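To give a feel for what a CRD-defined device object looks like, the sketch below shows a device instance in roughly the shape of KubeEdge's device API; treat the exact API group, version, and field names as approximate and version-dependent:

```yaml
# Illustrative only: an edge thermometer becomes a Kubernetes API object
# manageable from the cloud. The schema sketched here is approximate.
apiVersion: devices.kubeedge.io/v1alpha2
kind: Device
metadata:
  name: temperature-sensor-01
  labels:
    model: temperature-sensor
spec:
  deviceModelRef:
    name: temperature-sensor    # references a DeviceModel object
  protocol:
    modbus:                     # access protocol handled by an edge driver
      slaveID: 1
status:
  twins:
    - propertyName: temperature
      desired:
        value: "25"             # set from the cloud
      reported:
        value: "24"             # reported by the edge-side driver
```

The desired/reported "twin" split is what lets the cloud express intent while the edge driver reports the device's actual state.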

K3s architecture analysis

K3s is a trimmed-down Kubernetes distribution that Rancher open-sourced in February 2019. The name K3s derives from "K8s minus 5", the "5" signifying that K3s is lighter than Kubernetes and better suited to five scenarios: CI, ARM, edge, IoT, and testing. K3s is a CNCF-certified Kubernetes distribution, open-sourced slightly later than KubeEdge. It is designed for R&D and operations personnel running Kubernetes in resource-constrained environments, aiming to run small Kubernetes clusters on edge nodes with x86, ARM64, and ARMv7 architectures. The overall architecture of K3s is as follows:

K3s makes direct code changes to a specific Kubernetes version (e.g., 1.17). It is split into Server and Agent: the Server comprises the Kubernetes management-plane components plus SQLite and a tunnel proxy; the Agent comprises the Kubernetes data-plane components plus a tunnel proxy.

To reduce the resources required to run Kubernetes, K3s makes the following changes to the native Kubernetes code:

Remove legacy, unnecessary code. K3s includes no non-default, alpha, or deprecated Kubernetes features, and also removes all non-default admission controllers, in-tree cloud providers, and in-tree storage plug-ins.

Consolidate the packaging. To save memory, the Kubernetes management-plane and data-plane components, which normally run as multiple processes, are merged to run in a single process;

Replace Docker with containerd, significantly reducing the runtime footprint;

Introduce SQLite as an alternative to etcd for management data storage, with the list/watch interface implemented on top of SQLite.
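The idea of serving a list/watch-style interface from SQLite can be sketched with a monotonically increasing revision column: LIST returns a snapshot plus the current revision, and WATCH polls for rows written after that revision. This illustrates the principle only and is far simpler than K3s' real storage shim:

```python
import sqlite3

# Every write appends a row with a fresh revision; history is retained so
# watchers can replay changes made after a known revision.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE kv (
    rev   INTEGER PRIMARY KEY AUTOINCREMENT,
    key   TEXT NOT NULL,
    value TEXT NOT NULL)""")

def put(key, value):
    conn.execute("INSERT INTO kv (key, value) VALUES (?, ?)", (key, value))
    conn.commit()

def list_all():
    """LIST: the latest value per key, plus the current revision."""
    rows = conn.execute("""SELECT key, value FROM kv
        WHERE rev IN (SELECT MAX(rev) FROM kv GROUP BY key)""").fetchall()
    rev = conn.execute("SELECT IFNULL(MAX(rev), 0) FROM kv").fetchone()[0]
    return dict(rows), rev

def watch(since_rev):
    """WATCH: poll for events written after a known revision."""
    return conn.execute(
        "SELECT rev, key, value FROM kv WHERE rev > ? ORDER BY rev",
        (since_rev,)).fetchall()

put("pods/nginx", "v1")
state, rev = list_all()   # snapshot: {'pods/nginx': 'v1'}, rev 1
put("pods/nginx", "v2")   # a change arrives after the snapshot
events = watch(rev)       # [(2, 'pods/nginx', 'v2')]
print(state, events)
```

The real shim must also translate etcd's gRPC API and stream events instead of polling, but the snapshot-plus-revision contract is the core of it.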


The K3s project is essentially a "lightweight" K8s rather than a true "edge" Kubernetes. Architecturally, all K3s components (both Server and Agent) run at the edge, so there is no cloud-edge split: each edge site requires its own deployment of the Kubernetes management plane. Consequently, K3s does not address cloud-edge collaboration, lacks edge autonomy against unstable edge networks, and does not cover edge device management.

In addition, for K3s to go into production, a unified cluster management solution would still be needed in the cloud, sitting above the K3s clusters and handling cross-cluster application management, monitoring, alerting, logging, security, and policy. So far, Rancher has not open-sourced this capability.

OpenYurt architecture analysis

OpenYurt is an open source cloud native edge computing project launched by Alibaba Cloud in May 2020, and its architecture is broadly similar to KubeEdge's. OpenYurt is likewise an edge computing platform that provides cloud-edge collaboration on top of native Kubernetes' container orchestration and scheduling capabilities, relying on them to achieve integrated cloud-edge application distribution and centralized control of edge nodes from the cloud. The overall architecture of OpenYurt is as follows:

The project has not yet released version 0.1. From the parts that are open source, OpenYurt's architecture is similar to KubeEdge's and likewise addresses the cloud-edge collaboration scenario. Its capabilities resemble those offered by KubeEdge, including edge autonomy, cloud-edge collaboration, and unitized management (the latter not yet open-sourced).

OpenYurt does not modify Kubernetes itself but provides the control capabilities required for edge computing in the form of add-ons (plug-ins). YurtHub, running on each edge node as a temporary local configuration center, continues to serve configuration data to the devices and customer workloads on the node when the network connection is interrupted. This simplified architecture focuses on solving the "offline autonomy" problem and is better at preserving the complete functionality of existing Kubernetes. However, because the kubelet is not modified, OpenYurt cannot run on resource-constrained edge devices, and it does not address the management of edge devices in IoT scenarios. In addition, some edge scenarios that require advanced features the native kubelet does not support, such as offline self-healing and self-scheduling, cannot be implemented.
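YurtHub's role can be sketched as a node-local proxy that passes requests through and caches responses while the cloud is reachable, then answers from the cache during an outage. The names below (`EdgeHub`, `fetch_from_cloud`) are invented for illustration, not OpenYurt's actual code:

```python
class EdgeHub:
    """Node-local proxy sketch: pass through and cache while online,
    serve the cached copy when the edge network is down."""

    def __init__(self, fetch_from_cloud):
        self.fetch = fetch_from_cloud   # callable that may raise on outage
        self.cache = {}

    def get(self, resource):
        try:
            data = self.fetch(resource)  # cloud reachable: pass through
            self.cache[resource] = data  # refresh the local cache
            return data
        except ConnectionError:
            if resource in self.cache:   # network down: serve stale copy
                return self.cache[resource]
            raise                        # never seen and offline: fail

# Simulate a flaky cloud connection.
online = True
def cloud(resource):
    if not online:
        raise ConnectionError("edge network down")
    return {"resource": resource, "from": "cloud"}

hub = EdgeHub(cloud)
fresh = hub.get("configmaps/app")   # fetched from the cloud and cached
online = False                      # the edge link drops
stale = hub.get("configmaps/app")   # still answered, from the local cache
print(fresh == stale)               # True
```

The real YurtHub additionally filters and rewrites API traffic, but serving cached responses during disconnection is the essence of the "temporary configuration center" described above.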

Summary and prospect of edge container

Comparing the three open source projects, the most distinctive feature of K3s is its attempt to make Kubernetes lightweight and easy to deploy. By cutting rarely used Kubernetes features and merging multiple components into a single process, it delivers an experience much like standard Kubernetes on edge nodes with sufficient resources. However, test data show that K3s' resource consumption is still considerable: a memory footprint of several hundred MB is beyond most edge devices. Moreover, K3s runs only at the edge and does not address core edge computing demands such as cloud-edge collaboration and edge autonomy.

OpenYurt provides edge computing capability on top of native Kubernetes through non-invasive plug-ins. Although it offers cloud-edge collaboration and edge autonomy, it is not lightweight: it can run only on edge nodes with sufficient resources, not on the large number of resource-constrained ones. It also does not provide the ability to manage massive numbers of edge devices.

KubeEdge is a complete edge computing platform spanning cloud, edge, and device. It is 100% compatible with the native Kubernetes API and addresses the core demands of edge computing on a Kubernetes foundation, including cloud-edge collaboration, unstable edge networks, edge autonomy, edge lightweighting, massive edge device management, and heterogeneous extensibility.

In the future, edge container technology will continue to focus on the challenges of edge computing: cloud-edge collaboration, networking, device management, scaling, and heterogeneity. As an official CNCF project, KubeEdge will keep working with community partners to develop standards for cloud-edge collaboration, solve the field's open problems, end the current fragmentation caused by the lack of unified standards and reference frameworks, and jointly advance the edge computing industry.

