Author: Zhao Zhen, cloud computing development engineer at Sangfor, OpenYurt community member

Editor’s note: Driven by new technologies such as 5G and the Internet of Things, the edge computing industry is taking off. In the future, applications and workloads of more types, larger scale, and greater complexity will be deployed to the edge. Based on a talk given by Zhao Zhen, a cloud computing engineer at Sangfor, at KubeMeet, a cloud-native container developer salon jointly organized by CNCF and Alibaba Cloud, this article introduces the opportunities and challenges of edge computing, as well as how the open-source edge container project OpenYurt is put into practice in an enterprise production environment.

This talk is a practical case study about how OpenYurt-based products are deployed in real-world situations.

The first part covers the opportunities and challenges of edge computing. The second part presents the solutions Sangfor’s platform offers for these challenges, which help users make better use of edge computing. The third part explains how the solution is combined with OpenYurt and what the key points of the implementation are. The last part looks at the future of the industry and the development of the community.

Opportunities and challenges of edge computing

With the arrival of 5G and the rise of live streaming and the Internet of Things, more and more edge devices are in use, and the data they generate is enormous. A 1080p video camera on a smart terminal, for example, generates 10 GB of data per minute. A small or medium-sized city has 1 to 1.5 million such cameras, and the number keeps growing. In such edge scenarios, the volume of data to be handled is huge.
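A quick back-of-envelope calculation, using the figures quoted above as rough assumptions, shows why this data cannot simply all be shipped to the cloud:

```python
# Back-of-envelope estimate from the figures quoted above:
# 10 GB per minute per 1080p camera, ~1,000,000 cameras in a city.
GB_PER_MINUTE_PER_CAMERA = 10
CAMERAS = 1_000_000

# Per-camera sustained bandwidth in Gbit/s: 10 GB/min = 80 Gbit over 60 s
per_camera_gbps = GB_PER_MINUTE_PER_CAMERA * 8 / 60

# City-wide aggregate if every stream went straight to the cloud
city_gbps = per_camera_gbps * CAMERAS

print(f"per camera: {per_camera_gbps:.2f} Gbit/s")
print(f"city-wide:  {city_gbps / 1000:.0f} Tbit/s")
```

Even a single camera at these rates needs well over 1 Gbit/s of sustained uplink, and the city-wide aggregate is on the order of a thousand Tbit/s, far beyond any realistic cloud-edge link.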

In the era of the Internet of Everything, many smart-home devices have appeared. Beyond acting as simple access gateways, they also have a lot of data to process, which is another edge-side application scenario.

Those are the opportunities. What are the challenges?

For some traditional industries, cloud computing capacity may be very limited. There are many private cloud and government private cloud deployments on the market, and they cannot scale out almost without limit the way the public clouds of the large providers can. Under the current market conditions, the cloud-to-device environment is far from ideal, mainly for the following reasons.

First, because the penetration rate of end-side data acquisition equipment is low, a lot of useful data is never collected for the cloud “brain” to analyze.

Second, because data collection is low-dimensional and single-purpose, some valuable data is missed.

Third, maintaining front-end equipment is very difficult. Take cameras as an example: we cannot closely monitor and maintain every single camera, so several days may pass between an incident and any attempt to trace the problem, by which time the data is lost.

Fourth, data standards differ across the industry. Equipment is constantly being replaced and data standards keep changing. There are many different types of devices on the market, and although their data is centralized to the cloud for processing, the cloud cannot keep up.

The main bottlenecks of the traditional cloud are resources and efficiency. A 1080p camera may generate 10 GB of data per minute, while cloud-edge bandwidth is so limited that a single camera could saturate the entire network and make other services unavailable. The other limitation is efficiency: many private clouds lack the capacity to process data well and cannot respond in time. For industries that require low latency, this is very dangerous.

At the same time, the traditional link between the devices and the cloud is unreliable. If, for example, the edge loses contact with the cloud because of network jitter, commands from the cloud cannot reach the devices in time, which also introduces risk.

Furthermore, software on the device side has traditionally been updated slowly, with no iteration long after a one-time deployment. In emerging scenarios such as intelligent intersections, however, the AI algorithms need continuous model training: as the devices run, they collect data, upload it to the cloud, retrain the model, obtain a better version, and push that version back to the devices for smarter operation. This loop of continuous software update and iteration is something the traditional cloud cannot deliver.

Sangfor’s intelligent edge platform solution

To address the problems above, we provide users with a solution. Looking at its overall architecture, it works on two sides: the edge side and the center side.

First, we adopt a cloud-edge integrated architecture. A cloud-edge all-in-one machine, which can be understood as a small server, is deployed at the edge for users and can be placed in the same location as the terminal devices, so together they form an independent small network. The devices can send their data to the all-in-one machine, where it is processed and responded to as quickly as possible.

Second, even if the cloud-edge network is disconnected, the end devices can still reach the all-in-one machine over the local network. We can build AI algorithms into the machine so that instructions for specific situations still get a response.
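The principle behind this kind of offline operation can be illustrated with a minimal sketch. The class and method names here are invented for illustration; in OpenYurt itself, this role is played by YurtHub caching API responses on the edge node. The idea is simply: answer from the cloud while the link is up, and fall back to the last known answer when it is not.

```python
class EdgeAutonomyClient:
    """Minimal sketch of cache-backed edge autonomy (hypothetical API).

    While the cloud is reachable, answers come from the cloud and are
    cached locally; when the link is down, the last known answer is
    served from the cache so the edge keeps working.
    """

    def __init__(self, fetch_from_cloud):
        self._fetch = fetch_from_cloud   # callable that may raise ConnectionError
        self._cache = {}                 # last known good responses

    def get(self, key):
        try:
            value = self._fetch(key)     # try the cloud first
        except ConnectionError:
            if key in self._cache:
                return self._cache[key]  # offline: serve cached state
            raise                        # never seen this key while online
        self._cache[key] = value         # online: refresh the local cache
        return value
```

The design choice is that the cache is only ever a fallback: fresh data always wins when the link is healthy, so the edge never drifts from the cloud by more than one disconnection window.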

The last point is data processing. Since cloud-edge network bandwidth is limited, we first gather the data on the all-in-one machine and do a round of processing there to extract the effective data. Only that data is then reported over the SD-WAN network to the center side for further processing, which reduces bandwidth pressure on one hand and improves the center side’s data-processing capacity on the other.
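The filter-then-report idea can be sketched as follows. The detector predicate and the motion-score field are invented for illustration; the point is only that uninteresting data is discarded locally and the rest is forwarded in batches.

```python
def preprocess_at_edge(frames, is_interesting, batch_size=10):
    """Filter raw frames at the edge and yield upload batches.

    frames:         iterable of raw data items from local devices
    is_interesting: predicate deciding which items are worth uploading
    batch_size:     how many kept items to accumulate per upload
    """
    batch = []
    for frame in frames:
        if is_interesting(frame):        # drop noise locally
            batch.append(frame)
            if len(batch) == batch_size:
                yield batch              # one report to the center side
                batch = []
    if batch:
        yield batch                      # flush the remainder

# Example: keep only frames whose (hypothetical) motion score is high
frames = [{"id": i, "motion": i % 5} for i in range(20)]
batches = list(preprocess_at_edge(frames, lambda f: f["motion"] >= 4,
                                  batch_size=3))
```

In this example only 4 of the 20 frames survive the local filter, so only two small reports cross the SD-WAN link instead of twenty raw frames.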

In fact, the edge autonomy capability under cloud-edge network disconnection is built on the community’s OpenYurt, which organically combines the cloud-edge operations and maintenance channel, edge autonomy, and unitized deployment into the edge computing architecture shown in the diagram.

The ultimate goal of the cloud-edge all-in-one machine is to cover the last mile of intelligent transformation.

It provides many functions, including a control panel, an AI algorithm platform, monitoring and log collection, and of course the most important part, secure network management, as well as video decoding. The box also supports hardware adaptation: ARM and x86 architectures, coordination of different GPUs, and adaptation of the underlying operating system.

Once adaptation of the underlying hardware, AI algorithms, network equipment, and video decoding is complete, the overall solution is handed over to users, helping them containerize and deploy their services faster and greatly improving the efficiency of intelligent transformation.

Putting the technical solution into practice with OpenYurt

An important application scenario for edge computing is the intelligent intersection. Every intersection in a city has a different strategy. At intersections with very heavy traffic, for example, the focus is on flow control: because of the density of vehicles, traffic lights may not respond quickly enough, so they need support from AI algorithms.

In places with very dense foot traffic, for example, when someone with a criminal record whom the public security system is watching passes by, the AI algorithm’s face recognition function should notify nearby police in time so they can take precautions.

Many urban roads connect with village roads. Such urban-rural roads need not only flow control but also traffic safety control. We embed an intelligent voice announcement service into the AI algorithm and combine it with a dynamic alarm function, which helps prevent traffic accidents.

These AI algorithms can be delivered intelligently by area. This makes use of OpenYurt’s unitized management for edge computing: we define different unit (node pool) scenarios, and once a node comes online, the AI algorithms appropriate to its area can be pushed to it.
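Conceptually, the per-area dispatch looks like the sketch below. The pool names, model names, and dispatch function are all hypothetical, and in OpenYurt this selection is done declaratively, by grouping nodes into pools and letting unitized workloads carry a different image or model per pool, rather than in application code; the label key shown is only illustrative.

```python
# Hypothetical sketch of per-region algorithm selection. In OpenYurt the
# equivalent is labeling nodes into pools and letting unitized workloads
# deliver a different AI image/model to each pool.
POOL_TO_ALGORITHM = {
    "dense-traffic":    "flow-control-model",      # heavy vehicle flow
    "dense-pedestrian": "face-recognition-model",  # crowded areas
    "urban-rural":      "voice-alert-model",       # mixed urban/village roads
}

def algorithm_for_node(node_labels, default="base-model"):
    """Pick the AI algorithm to push based on the node's pool label."""
    pool = node_labels.get("apps.openyurt.io/nodepool")  # illustrative key
    return POOL_TO_ALGORITHM.get(pool, default)
```

The benefit of doing this through node pools instead of per-node configuration is that a newly connected node only needs the right pool label to receive the right algorithm automatically.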

Having covered the real business scenarios, let us explain our transformation ideas in the context of the overall platform architecture.

The KubeManager (KM) architecture is a product developed by our company. It is a container management platform whose bottom layer manages multiple K8s clusters and which integrates application stores and software, data collection and monitoring, and visual dashboards for users.

It is divided into two large modules. The bottom left of the figure is the management cluster, which hosts the series of components above it. A user cluster accesses data through the access layer, sends API requests to the API business layer, and stores the data in native K8s etcd.

Our work mainly targets the user cluster, in combination with OpenYurt. The transformation and rollout process also ran into a number of problems.

When there are multiple masters, the Yurt tunnel traffic needs to be adapted, which would otherwise fall to the users themselves. We therefore completed the integration with the community and built it into the platform, so users can use it directly without worrying about the various adaptation problems.

After a user cluster is connected to the KM cluster, it needs to be converted from a plain K8s cluster into an edge cluster, and we provide an automated conversion for this as well.

OpenYurt is based on native K8s, but because the clusters are built differently, there are some differences in the later platform-integration process, such as automatic control, polling operations, and certificate delivery, which had to be resolved early in the integration before OpenYurt could be used.

After the transformation, the user cluster architecture switches from the left side of the figure to the right. The transformation mainly includes the following points:

First, we changed the YurtControllerManager component. It used to be a Deployment with one replica; it is now a DaemonSet, which automatically scales up and down as the number of masters changes.

Second, because overall traffic goes through Nginx, which proxies to the different API servers, YurtHub does not access the API server directly but goes through Nginx. Even so, it still achieves the desired effects of an edge cluster with OpenYurt, such as traffic filtering and edge autonomy.
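The proxying idea can be sketched as follows. The class and callable names are invented for illustration; the real setup uses Nginx configuration, not application code, but the behavior is the same: requests rotate across the master API servers, and a dead master is skipped rather than breaking the edge node.

```python
import itertools

class RoundRobinProxy:
    """Sketch of Nginx-style load balancing over several API servers.

    Requests rotate across backends; an unreachable backend is skipped,
    so a single master failure does not take the edge node offline.
    """

    def __init__(self, backends, send):
        self._cycle = itertools.cycle(backends)
        self._send = send                # callable(backend, request) -> response
        self._n = len(backends)

    def request(self, req):
        for _ in range(self._n):         # try each backend at most once
            backend = next(self._cycle)
            try:
                return self._send(backend, req)
            except ConnectionError:
                continue                 # dead master: try the next one
        raise ConnectionError("all API servers unreachable")
```

Because YurtHub talks to this single proxy address instead of a fixed API server, adding or removing masters does not require reconfiguring every edge node.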

Industry outlook and expectations for the community

Finally, I would like to say a few words about the development of the industry and our expectations for the future.

As the figure above shows, the growth of edge devices is cumulative. The development of the whole industry will generate great demand for edge equipment, and that demand will drive the industry forward. This development is inseparable from the edge community, including the contributions of the OpenYurt community. We hope OpenYurt helps every user push workloads further toward the edge, more securely and more intelligently.

If you are interested, you can also search for DingTalk group number 31993519 to join the OpenYurt project DingTalk group.

Click here to learn about the OpenYurt project!