Author: Huang Yuqi | Source: Alibaba Cloud Native official account

Recently, at the “Distributed Cloud | 2021 Global Distributed Cloud Conference, Cloud Native Forum” hosted by the Global Distributed Cloud Alliance, Alibaba Cloud senior technical expert Huang Yuqi delivered a keynote titled “The New Boundary of Cloud Native in Edge Computing: Alibaba Cloud’s Edge Cloud Native Practice”.

Hello everyone, I’m Huang Yuqi from the Alibaba Cloud cloud native team. Thank you very much for this opportunity to share with you. The title of today’s talk is “Cloud Native’s New Boundary: Cloud Native Practice in Edge Computing”. As the title suggests, the talk covers several parts: cloud native, edge computing, the architecture design that combines the two, and Alibaba Cloud’s commercial and open-source practices and cases.

The concept of cloud native, as widely understood today, is essentially a set of best practices and methodologies for “using cloud computing technology to reduce costs and increase efficiency for users”. The term has been in a process of constant self-evolution and innovation from its birth through its expansion to today’s great popularity, and cloud native is now widely embodied in a series of tools, architectures, and methodologies. So how exactly is cloud native defined? In the early days, cloud native meant containers, microservices, DevOps, and CI/CD; after 2018, the CNCF added service mesh and declarative APIs to the definition.

Looking back at the development of cloud native: in the early days, the emergence of Docker led a large number of businesses to containerize. Containerization, with its unified deliverables and isolation, drove the rapid growth of DevOps. The emergence of Kubernetes decoupled resource scheduling from the underlying infrastructure, making the management and control of applications and resources far easier and enabling efficient container scheduling. Then the service mesh technology represented by Istio decoupled service implementation from service governance. Today, cloud native is almost everywhere and “all-encompassing”, and more and more enterprises and industries are embracing it.

As one of the practitioners of cloud native technology, Alibaba has long treated cloud native as a core technology strategy, which stems from Alibaba’s accumulation, refinement, and practice in the cloud native field over the past ten years. There are roughly three stages:

  • In the first stage, Alibaba built up foundational cloud native capabilities such as core middleware, containers, and the Feitian (Apsara) cloud operating system through the internet-scale transformation of its application architecture;
  • The second stage was the full cloud-nativization of the core systems and the full commercialization of cloud native technology;
  • The third is the stage of comprehensive adoption and upgrading of cloud native technology, in which the next generation of cloud native technology, represented by Serverless, is leading the upgrade of the entire technology architecture.

Alibaba Cloud Container Service for Kubernetes (ACK), as the commercial platform for Alibaba Cloud’s cloud native capabilities, provides our customers with rich cloud native products and capabilities; this is the best evidence of embracing cloud native. We firmly believe that cloud native is the future.

Cloud native technology is now everywhere. As a cloud native service provider, Alibaba Cloud believes that cloud native technology will continue to develop at high speed and will be applied to “new application loads”, “new computing forms”, and “new physical boundaries”. As the big picture of Alibaba Cloud’s cloud native product family shows, containers are being used in more and more types of applications and cloud services, and through more and more computing forms, such as Serverless and Function Compute; these rich forms are also moving from the traditional central cloud to edge computing and to terminal devices. That brings us to today’s topic: cloud native in edge computing. Let’s first look at what edge computing is.

First, an intuitive view of edge computing. With the development of industries and businesses such as 5G, IoT, audio and video, live streaming, and CDN, we see an industry trend: more and more computing power and business are sinking to places closer to the data source or the end user, so as to obtain better response times and lower costs. This is clearly different from the traditional centralized cloud computing model, and it is being applied ever more widely in industries such as automobiles, agriculture, energy, and transportation.

Looking at edge computing from the perspective of IT architecture, we can see an obvious hierarchical structure determined by business latency and computing form. Gartner’s and IDC’s interpretations of the top-level architecture of edge computing are quoted here. Gartner divides edge computing into “Near Edge”, “Far Edge”, and “Cloud”, corresponding respectively to common device terminals, IDC/CDN nodes below the cloud, and the public/private cloud. IDC defines edge computing with the more intuitive “Heavy Edge” and “Light Edge”, representing the data-center dimension and low-power computing on the device side, respectively. As the figure shows, the layers in this hierarchy depend on and cooperate with each other.

This layering is now an industry consensus on the relationship between edge computing and cloud computing. With that background and architecture in mind, let’s look at the trends in edge computing. We analyze three trends along the three dimensions of business, architecture, and scale:

First, with the integration of AI, IoT, and edge computing, more and more types of businesses, larger in scale and more complex, will run in edge computing scenarios. The figure also shows some very impressive numbers.

Second, edge computing, as an extension of cloud computing, will be widely used in hybrid cloud scenarios, which requires future infrastructure to be decentralized, to provide autonomous edge facilities, and to support edge cloud hosting. Again, some figures are quoted in the chart.

Third, the development of infrastructure will drive the growth of edge computing. With the development of the 5G, IoT, and audio/video industries, an explosion of edge computing is a natural consequence; the explosive growth of online live streaming and online education during last year’s epidemic is one example.

As consensus on the architecture forms, we find that the scale and complexity of edge computing are increasing by the day, while operations and maintenance tools and capabilities lag behind and are becoming overwhelmed. So how do we solve this problem?

The cloud and the edge are naturally an inseparable organic whole, and integrated “cloud-edge-end” operations and maintenance coordination is the solution on which consensus can currently be formed. As cloud native practitioners, we try to think about this from the cloud native perspective: if “cloud-edge-end integration” has the backing of cloud native, it will better accelerate the process of cloud-edge fusion.

Under this top-level architecture design, we abstracted a cloud native architecture for cloud-edge collaboration: in the center (the cloud), we retain the original cloud native control and productization capabilities and extend them down to the edge through cloud-edge control channels, so that massive numbers of edge nodes and edge businesses become workloads of the cloud native system and interact better with the devices through service traffic and service governance, completing the integration of business, operations, and ecosystem. With edge cloud native, we get the same operations experience as on the cloud, together with better isolation, security, and efficiency, which makes productization much smoother.

Next, we introduce Alibaba Cloud’s edge computing cloud native practices on both the commercial and the open-source side.

Alibaba Cloud ACK@Edge champions the service philosophy of “standard management and control in the cloud, moderate autonomy at the edge”. Its “cloud-edge-end” three-layer structure is clearly layered, with capabilities coordinated across layers. The first layer is the central cloud native control capability, which provides standard cloud native northbound interfaces for upper-layer business integration, such as City Brain, Industrial Brain, CDN PaaS, and IoT PaaS. The second layer is the cloud-edge operations and control channel, with multiple specifications and both software and hardware multi-link solutions, carrying the control traffic and service traffic that sink from cloud to edge. Further down is the key edge side, where on top of the original Kubernetes capabilities we add capabilities such as edge autonomy, unit-based management, traffic topology, and fine-grained detection of edge compute state; together with edge-cloud collaboration, this forms a complete cloud-edge control closed loop. This architecture has already been widely used in fields such as CDN and IoT.
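The unit-based management mentioned above is realized in the open-source core of ACK@Edge (OpenYurt) through a NodePool resource, which groups the edge nodes of one site into a unit that can be operated on as a whole. A minimal, hedged sketch follows; the API group/version and fields are taken from the OpenYurt documentation (check the CRD version installed in your cluster), and the pool name is a made-up example:

```yaml
# Group the edge nodes of one site (e.g. one store or one IDC) into a
# NodePool: labels, annotations, and taints applied to the pool are
# propagated to all member nodes, so the site is managed as one unit.
apiVersion: apps.openyurt.io/v1beta1
kind: NodePool
metadata:
  name: hangzhou-site-1        # hypothetical pool name for one edge site
spec:
  type: Edge                   # "Edge" pools participate in edge autonomy
```

Nodes are then assigned to the pool by labeling them with the pool name, after which workloads can be deployed per pool rather than per node.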

What core capabilities and business goals does an edge container platform need? The figure shows four capabilities and five goals. The four capabilities are edge compute management, edge business container management, high-availability support for edge businesses, and ultimately the realization of an edge cloud native ecosystem; the goals are edge compute that can be connected, operated, and maintained, businesses that can be managed and orchestrated, highly available services, and a thriving business ecosystem. The design of these core capabilities is briefly described below.

The first capability: the Alibaba Cloud edge container service ACK@Edge achieves network interconnection between cloud and edge, as well as interworking of service traffic, through built-in SD-WAN capability, greatly improving the efficiency, stability, and security of cloud-edge collaboration. Cloud resource interconnection enables flexible interworking of resources on and off the cloud, improving service resilience in edge scenarios.

The second core capability is edge autonomy. In a cloud-edge integrated architecture, operations collaboration is important, but it is usually limited by the network conditions between the cloud and the edge, so the edge needs an appropriate degree of autonomy to guarantee the continuous and stable operation of the business. In other words, edge resources and businesses must be able to complete the full lifecycle management of services, including creation, start and stop, migration, and scaling, even without cloud control.
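In the open-source OpenYurt implementation, this autonomy is typically switched on per node via an annotation; when the cloud-edge network is disrupted, a local component (YurtHub) serves cached API data so the pods on the node keep running and can be restarted locally. A hedged sketch, with the annotation key taken from the OpenYurt documentation and a hypothetical node name:

```yaml
apiVersion: v1
kind: Node
metadata:
  name: edge-node-001                        # hypothetical edge node
  annotations:
    # Marks the node autonomous: its workloads are not evicted when the
    # node loses contact with the cloud, and cached state keeps services up.
    node.beta.openyurt.io/autonomy: "true"
```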

The third capability, heterogeneous resource support, is easy to understand. A signature feature distinguishing edge computing from traditional central cloud computing is that its computing and storage resources are diverse and heterogeneous. ACK@Edge supports both ARM and x86 CPU architectures and both Linux and Windows operating systems, and supports mixed deployment of Linux and Windows applications, solving resource heterogeneity in edge scenarios.
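On such a mixed cluster, placing a workload on the right architecture and operating system is done with Kubernetes’ standard well-known node labels; this part is plain upstream Kubernetes rather than anything ACK@Edge-specific. A sketch with hypothetical names (the deployment name and image are made up):

```yaml
# Pin a workload to ARM64 Linux edge nodes; change the selector values
# (e.g. amd64 / windows) to target other node types in the same cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-inference               # hypothetical workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: edge-inference
  template:
    metadata:
      labels:
        app: edge-inference
    spec:
      nodeSelector:
        kubernetes.io/arch: arm64    # well-known label reported by the kubelet
        kubernetes.io/os: linux
      containers:
        - name: app
          image: registry.example.com/edge-inference:v1   # hypothetical image
```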

The fourth capability: by cooperating with Alibaba Cloud Container Registry, ACK@Edge provides multi-region delivery in edge scenarios, supporting multi-region delivery of various cloud native artifacts, including container images and application orchestration resource packages.

OpenYurt, a CNCF project in the edge container space, is an intelligent open platform that extends upstream Kubernetes to edge computing.

As the core framework of ACK@Edge, it has served more than one million container instances and has been widely used in mainstream edge computing scenarios. Having introduced the commercial and open-source practices, I have a few cases to share with you:

The first is the digital transformation of “people, goods, and place” that Hema (Freshippo) achieved based on the edge cloud native system. Through this system, multiple types of heterogeneous edge compute are connected and scheduled in a unified way, including ENS (Alibaba Cloud’s public edge node service), offline self-built edge gateway nodes, GPU nodes, and so on, and co-locating businesses brings strong resource elasticity and flexibility. Through the edge cloud native AI solution provided by ACK@Edge, Hema built a “Sky Eye” AI system with cloud, edge, and end working in coordination, giving full play to edge computing’s advantages of nearby access and real-time processing, and achieving cost reduction and efficiency gains across the board: store computing resource costs were cut by 50%, and the efficiency of launching services in new stores increased by 70%.

The second is the case of a customer running a video website. With ACK@Edge we manage cross-region, cross-type edge compute to deploy video acceleration services. Through heterogeneous resource support, the customer obtains strong resource elasticity in edge computing scenarios. One figure worth sharing: through the elasticity and heterogeneous resource management capabilities of edge containers, costs were cut by around 50%.

The third case is the adoption of ACK@Edge in an IoT smart building project, in which IoT gateway devices are hosted as edge nodes under the cloud management and control of ACK@Edge, and the business on the gateway devices interacts with the building’s smart devices. The operations and maintenance of the gateways and terminal devices are integrated into the central cloud, greatly improving efficiency.

After thorough discussion among community members, the OpenYurt community has also released its 2021 roadmap; anyone interested is welcome to contribute.

OpenYurt community 2021 roadmap: https://github.com/openyurtio/openyurt/blob/master/docs/roadmap.md

  • OpenYurt official website: https://openyurt.io
  • GitHub project address: https://github.com/openyurtio/openyurt
  • DingTalk group number: 31993519 (welcome to join the group and communicate!)