The MEC Global App Developers Conference was recently held. The conference is committed to moving MEC from trial to application, fully opening up the “cloud-network-edge-terminal-industry” capabilities of different MEC platforms, and accelerating the pace of edge application innovation. At the conference’s MEC Open Forum, Ali Cloud senior technical expert Zhou Zhe gave a talk titled “Ali Edge Cloud Native Application Practice”, interpreting, from a technical perspective, the concept of edge cloud native, its application scenarios, and Ali Cloud’s edge cloud native practice cases.

Why do you need edge cloud native?

What is edge cloud native? In Zhou Zhe’s view, edge cloud native is the combination of edge cloud and cloud native. In today’s 5G era, 80% of data and computing will take place at the edge. So why does market demand have to move to the edge?

At the scenario level, 5G access and all kinds of new applications will spring up. 5G is characterized by high traffic, low latency and massive connectivity, which traditional data centers cannot satisfy. For example, 4K/8K video, VR and cloud gaming rely on high bandwidth; if all of that traffic were carried back to the data center, the bandwidth pressure would be unbearable. Second, physical distance makes low-latency requirements hard to meet. Finally, on the massive-connection side, as IoT smart devices become popular the number of terminals will grow dramatically, requiring a divide-and-conquer approach in which more edge nodes provide massive-connection solutions. An edge cloud platform built between the central cloud and the terminal devices can act as a bridge between the two, carrying transmission and capability exchange.

Everyone understands cloud native differently. Alibaba partner and Ali Cloud senior researcher Jiang Jiangwei believes that software, hardware and architecture built for the cloud are what make true cloud native. Zhou Zhe explained that it involves three dimensions. At the hardware or infrastructure layer, resources are allocated on demand and scale elastically, which is the basic capability of cloud computing; cloud native infrastructure covers public cloud, private cloud and edge computing. At the architecture level, applications are loosely coupled, elastically scalable and highly fault tolerant. At the cloud native application management level, an automated, observable and controllable software delivery process accelerates innovation while ensuring system stability. Applications that satisfy these three dimensions are easy to maintain, elastic and scalable, stable, and efficient to iterate on.

“Typical cloud native technologies are containers, service mesh, microservices, immutable infrastructure and declarative APIs. Most cloud native applications today are built on containers, service mesh and microservices, but cloud native is not limited to these technologies; applications can be built with other technologies as well, and as long as they meet the three characteristics above, it does not matter whether containers are used. Conversely, an application that lacks those three characteristics is not a standard cloud native application. For example, building your own container service with container technology uses a cloud native technology, but that alone does not make the application cloud native,” Zhou Zhe said.
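To make the “declarative API” idea above concrete, here is a minimal sketch in Python, purely illustrative and not Ali Cloud’s or Kubernetes’ actual API: the developer only declares the desired state, and a controller reconciles the running system toward it. The `runtime` object and its methods are hypothetical placeholders.

```python
# Illustrative only: a declarative spec plus a tiny reconcile loop.
# `runtime` stands in for a hypothetical container runtime client.

desired = {
    "app": "edge-transcoder",                        # hypothetical app name
    "image": "registry.example.com/transcoder:1.4",  # hypothetical image
    "replicas": 3,
}

def reconcile(desired, actual, runtime):
    """Converge the actual replica count toward the declared desired state."""
    diff = desired["replicas"] - actual.get("replicas", 0)
    if diff > 0:
        for _ in range(diff):
            runtime.start_container(desired["image"])   # scale up
    elif diff < 0:
        for _ in range(-diff):
            runtime.stop_container(desired["app"])      # scale down
    # The caller never issues start/stop commands directly; it only edits
    # `desired`, and the controller keeps re-running reconcile().
```

Whether the unit being reconciled is a container or something else does not change the declarative character of the workflow, which is the point made above.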

Edge cloud native, then, means meeting the characteristics of both edge cloud and cloud native.

Edge cloud native technical advantages

Cloud native technology addresses the efficiency, stability and cost of application development and operations. So how do edge cloud native applications differ from central cloud native applications?

Zhou Zhe made a comparison from four aspects:

First, infrastructure: the edge is distributed across many small equipment rooms, each with 10 to 50 machines and limited computing and storage capacity. A single edge room is less reliable, and its SLA is lower than the center’s; elasticity within a single room is limited, while elasticity across rooms is strong. The central cloud is the opposite: it is centralized, each equipment room is large, computing and storage scale is large, room reliability is high, elastic scaling within a single room is good, and disaster recovery across rooms is generally taken into account.

Second, system architecture: the edge offers globally distributed access, close to customers on the edge side. Clusters need to communicate with one another across the public network, which is what we call cloud-edge and edge-edge collaboration. The network link between each cluster and the center is not very reliable, so once the connection between edge and center is lost, edge autonomy is required. The center, by contrast, is typically accessed through a single equipment room, with core rooms interconnected by dedicated lines, and edge autonomy is not needed.

Third, resource scheduling: because edge rooms are small, elasticity within a single room is limited, so elasticity at the edge must span nodes. In addition, edge rooms are geographically scattered, so an edge node must first be selected according to global load, and scheduling then takes place within that node; this requires combining global request scheduling with resource scheduling. On the central side, users mainly care about container compute scheduling and generally do not need to worry about request scheduling.
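As a rough illustration of the “global request scheduling + resource scheduling” combination just described, the sketch below (with hypothetical node data, not Ali Cloud’s scheduler) first filters edge nodes by spare capacity globally and then picks the lowest-latency one; in-node container scheduling would take over afterwards.

```python
# Hypothetical edge node inventory; latency is measured from the user's region.
edge_nodes = [
    {"id": "node-a", "latency_ms": 8,  "free_cpu": 12, "free_mem_gb": 24},
    {"id": "node-b", "latency_ms": 14, "free_cpu": 2,  "free_mem_gb": 4},
    {"id": "node-c", "latency_ms": 35, "free_cpu": 40, "free_mem_gb": 96},
]

def pick_edge_node(nodes, need_cpu, need_mem_gb):
    """Global step: keep nodes with enough spare capacity, then take the
    one with the lowest latency to the user."""
    candidates = [n for n in nodes
                  if n["free_cpu"] >= need_cpu and n["free_mem_gb"] >= need_mem_gb]
    if not candidates:
        return None  # e.g. fall back to a neighbouring region or the center
    return min(candidates, key=lambda n: n["latency_ms"])

# Local step (not shown): ordinary container scheduling inside the chosen room.
print(pick_edge_node(edge_nodes, need_cpu=4, need_mem_gb=8))  # -> node-a
```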

Fourth, application orchestration and management: edge applications need to be deployed to hundreds or even thousands of clusters, cross-cluster disaster recovery must be considered, and it is hard to keep application state consistent across that many clusters. On the central side, clusters are managed in a unified way and multi-cluster management is rare; cross-cluster disaster recovery is generally a cold backup, and application state only needs to be consistent within a single cluster.
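The multi-cluster consistency challenge in this fourth point can be pictured with a small sketch like the one below (the `apply` and `current_spec` calls are hypothetical placeholders, not a real API): one desired spec is pushed to every edge cluster, and clusters that drift or cannot be reached over the public network are reported for retry.

```python
def rollout(clusters, desired_spec):
    """Apply one desired spec to many edge clusters and collect the outliers."""
    drifted, unreachable = [], []
    for cluster in clusters:
        try:
            cluster.apply(desired_spec)                 # declarative apply
            if cluster.current_spec() != desired_spec:  # state still diverges
                drifted.append(cluster.name)
        except ConnectionError:
            # Edge-to-center links are unreliable; the cluster keeps running
            # autonomously and is retried on the next reconciliation round.
            unreachable.append(cluster.name)
    return drifted, unreachable
```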

Key capabilities that edge cloud native can provide

First: heterogeneous integration and wide coverage

The edge needs wide coverage: the more resources there are, the greater the latency advantage. How can more resources be obtained? In addition to self-built resources, edge resources can be rented from various cloud vendors, which requires the ability to integrate heterogeneous resources from different vendors.

Second: a consistent cloud-edge experience

Many applications need to be deployed both in the cloud and at the edge, and an inconsistent cloud-edge experience adds development cost. Making the cloud and edge experience consistent is very friendly to application developers: they do not need to maintain one version of an application for the edge and another for the center.

Third: compatibility with standard cloud native

One of the main starting points of cloud native is cross-platform compatibility. Allowing developers to run their applications across platforms and in different scenarios is very developer friendly.

Fourth: global mobility of computing power

The most critical characteristic of the edge is that it is distributed, so there is strong demand for combining global traffic scheduling with computing power scheduling: wherever users are, the application is automatically deployed to the edge nodes nearest to them, and computing power and traffic flow between edge nodes on demand.

So which application scenarios are suitable for edge cloud native?

Zhou Zhe described several scenarios suited to edge cloud native. The first is cloud computing power sinking to the edge: video transcoding, for example, used to be done in the center and then distributed, and moving it to the edge is a typical case of cloud computing power sinking. The second is terminal computing power moving up: popular cloud applications such as cloud gaming and cloud desktops fall into this category, and edge computing can resolve many problems such as application instability. The third is applications and data that are already at the edge, typically CDN applications and data, with hundreds of Tbps of data flowing through and being distributed across the network.

Ali Cloud edge cloud native products and practices

Ali Cloud’s edge cloud native system, shown in the figure below, is built from the bottom up. At the bottom is the edge cloud, which provides edge infrastructure capabilities and edge converged computing; it consists of many edge nodes that can run containers and microservice applications, with distribution and control based on the edge network and middleware. The next layer is the container platform, which contains container management and control services and cluster management capabilities. On top of the container platform sit the cloud native capabilities, including application management, service mesh, observability and more. This overall cloud native technology stack makes edge application development easier.

Zhou Zhe then presented several cases of Ali Cloud’s edge cloud native practice. The first is CDN on ENS, which aims to run the whole network’s CDN on edge computing using cloud native technology. Its value is cost reduction and efficiency improvement: it raises the efficiency of CDN business innovation while ensuring stability. “We do run into problems in the CDN business,” Zhou Zhe said. The CDN service faces issues such as contention between different resources, complex services with many system components, long network-wide release cycles, weak ability to deploy on heterogeneous resources, and low resource reuse. For example, e-commerce image traffic comes under heavy pressure during Double 11, and live-streaming load surges when people watch the Spring Festival Gala on New Year’s Eve. How are these CDN problems solved? “We solved them with cloud native technology,” Zhou Zhe said, outlining the solution: services are split and isolated in containers to resolve resource contention; a microservice transformation addresses the difficulty of troubleshooting across many system components; a whole-process application management system, combined with service separation and isolation, ensures stability and improves release efficiency; heterogeneous resources can be managed, solving the problem of how to use them; and real-time, resource-based elastic scheduling solves off-peak overcommitment across different service scenarios.
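To illustrate what off-peak overcommitment buys in a case like this, here is a toy calculation (the hourly demand numbers are invented for illustration, not Ali Cloud data): when a daytime-peaking workload and an evening-peaking workload share one edge node, the capacity that must be reserved is the peak of their sum, not the sum of their peaks.

```python
NODE_CAPACITY = 100  # arbitrary capacity units for one edge node

# Hypothetical demand by hour of day (0-23) for two tenants on the same node.
image_cdn_demand   = [30] * 9 + [70] * 9 + [40] * 6   # peaks in the daytime
live_stream_demand = [20] * 19 + [60] * 5             # peaks in the evening

# Reserving each peak separately would need 70 + 60 = 130 units: it won't fit.
dedicated_peak = max(image_cdn_demand) + max(live_stream_demand)

# Sharing the node and scheduling elastically only needs the peak of the sum.
shared_peak = max(a + b for a, b in zip(image_cdn_demand, live_stream_demand))

print(dedicated_peak, shared_peak)   # 130 vs. 100
print(shared_peak <= NODE_CAPACITY)  # True: both services fit on the same node
```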

The second case: video transcoding sinks to the edge

As shown in the figure below, the left side is the original scheme: the streamer pushes the stream to a CDN edge node, the edge node pushes the live stream to the center, and transcoding is performed in the center. On the other end is the playback side: the player pulls the stream from an edge node, which in turn pulls it from the center. In the new scheme, the streamer pushes the stream directly to an ENS edge node, the transcoding application is deployed on that edge node and transcoding is done there, and the player pulls the transcoded live stream directly from the edge node. This cuts the cost of carrying traffic from the edge to the center and improves the user experience.

The third case: the terminal moving to the cloud (cloud gaming / cloud application scenarios). Have you ever run into this situation: you are having dinner with friends and want to play a game together, but it is not installed on your phone, the installation package is large, and downloading it takes a lot of traffic and time. What do you do? It would be convenient if there were a cloud version of the game. The cloud gaming scenario uses an edge computing solution, as shown in the figure below: the lower left corner is the user terminal, the upper left corner is the game vendor’s business system, the upper right corner is the edge computing control center service, and the lower right corner is the edge computing node. The interaction flow is as follows: the user terminal accesses the cloud gaming business system; the business system sends a cloud game application creation request to the edge computing control center; the control center issues an instruction to an edge node; the edge node starts the cloud game application and, once it is running, returns a virtual device to the control center; the control center passes the virtual device to the business system, which feeds it back to the user’s phone. The phone then connects to the virtual device on the edge node, sends its operation instructions to it, and the virtual device returns the rendered screen and video stream to the phone, which displays it normally.
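The session-setup flow above can be sketched roughly as follows (all class and method names are hypothetical placeholders rather than Ali Cloud’s actual APIs; only the order of the interactions matters):

```python
class VirtualDevice:
    def __init__(self, address):
        self.address = address  # where the phone connects for input and video

class EdgeNode:
    def __init__(self, name, region_latency):
        self.name = name
        self.region_latency = region_latency  # ms from each user region

    def latency_to(self, region):
        return self.region_latency.get(region, 999)

    def start_game(self, game_id):
        # In reality this boots the game in a container or VM and exposes a
        # streaming endpoint; here we just fabricate an address string.
        return VirtualDevice(f"{self.name}/{game_id}")

class ControlCenter:
    """Upper-right box in the figure: picks an edge node and starts the game."""
    def __init__(self, edge_nodes):
        self.edge_nodes = edge_nodes

    def create_game_session(self, game_id, user_region):
        node = min(self.edge_nodes, key=lambda n: n.latency_to(user_region))
        return node.start_game(game_id)

class BusinessSystem:
    """Upper-left box: the game vendor's entry point for the user terminal."""
    def __init__(self, control_center):
        self.control_center = control_center

    def handle_play_request(self, game_id, user_region):
        device = self.control_center.create_game_session(game_id, user_region)
        return device.address  # returned to the phone, which then streams I/O

# Usage: the phone asks to play and gets back a virtual-device address;
# from then on it exchanges input and video directly with the edge node.
nodes = [EdgeNode("edge-east", {"east": 8}), EdgeNode("edge-north", {"east": 35})]
system = BusinessSystem(ControlCenter(nodes))
print(system.handle_play_request("demo-game", "east"))  # -> edge-east/demo-game
```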

In summary, Aliyun edge computing products are divided into several layers. The first layer consists of the underlying servers and switches, which are pooled into unified resources and belong to the ENS infrastructure. The second layer abstracts computing, storage and network resources on top of this uniformly pooled hardware. The third layer packages these infrastructure capabilities as platform services, forming storage, computing, container and application management platforms. The fourth layer encapsulates and delivers SaaS scenarios to customers, completing the landing of vertical business and innovation scenarios.

At the same time, Ali Cloud edge computing products provide five capabilities:

Supports three computing forms: virtual machines, containers and secure containers, and terminal digital twin computing.

Supports multiple storage services: highly available block storage, file storage, object storage, and CDN KV cache.

Supports multiple network services: multiple network forms within a node, network acceleration and interconnection capabilities across nodes, and edge load balancing products.

Node security protection: provides Layer 4 and Layer 7 anti-DDoS protection with 600G of mitigation capacity, along with content security and host security.

Efficient automated cloud management: supports efficient production of containers and virtual machines, controlled image releases, and 3-hour, contact-free remote automated management.

Going forward, Aliyun Edge Computing will share more of its latest product capabilities, solutions and technical practices on the “Aliyun Edge Plus” official account; you are welcome to follow and join the discussion. As a bonus for this article, follow the “Aliyun Edge Plus” account and reply “Zhou Zhe” to get the slides of the talk.