Background and current state of edge computing platforms and function computing

Edge computing is an emerging technology direction. Closely combined with cloud computing, it exploits the edge's low latency, security, and other advantages, and has broad application prospects. An edge computing platform is a computing platform built on edge infrastructure that draws on edge computing capabilities and the core technologies of cloud computing; it is a comprehensive elastic cloud platform formed at edge locations, with computing, network, storage, security, and other capabilities. An edge computing platform extends the computing capacity of the cloud to locations roughly ten kilometers from "things", such as township- and street-level computing scenarios, as well as to scenarios roughly one kilometer from "things", such as factories and buildings. It provides terminals connected to the Internet of Everything with distributed cloud services that are self-organizing, low-latency, definable, schedulable, highly secure, and based on open standards. Unlike the mature computing management software platforms of the cloud computing field, there is as yet no mature and stable counterpart for edge computing platforms at home or abroad. Companies everywhere hope to gain a first-mover advantage in the new field of edge computing, research on edge computing and serverless computing is still at an early stage, and the technical routes of vendors and open source communities differ. The edge computing platform studied in this paper is based on Nomad, a lightweight cluster management and application orchestration tool. The Edge Computing Consortium's white paper divides the development of the edge computing industry into three stages: connection, intelligence, and autonomy. In the first stage, connection, each supported runtime implies a supported workload type and a supported technology ecosystem. This paper accordingly focuses on the multi-runtime support of edge computing platforms. The Nomad-based edge computing platform natively supports Docker, LXC, raw processes, and other runtimes.

However, the current Nomad-based edge computing platform cannot meet the runtime requirements of real business scenarios. First, some edge scenarios still need traditional virtual machines, including scenarios that must support multiple operating systems (OSes) and services with higher security and isolation requirements. With both containers and virtual machines available, the platform can more easily consolidate, orchestrate, and manage business workloads. Second, it has become a consensus that Internet of Things platforms and applications use edge computing as their carrier. IoT applications, however, are characterized by massive access, fragmentation, and marginalization, requiring edge computing to provide new architectures and technical means. Edge application services typically show pronounced peaks and troughs and are often event-triggered. In addition, the edge environment has limited resources and a large number of devices, so fine-grained dynamic adjustment and collaborative resource management must be considered. With the introduction of serverless computing, the Function as a Service (FaaS) model of function computing operates at function granularity, which better reflects the core ideas of resource reuse and flexible scheduling, matches the characteristics of IoT and edge application workloads, and simplifies the work of edge application developers.

Based on the above problems, this paper studies multi-granularity, multi-runtime support, covering virtual machines, containers, and functions, on the Nomad-based edge computing platform. A Libvirt virtual machine runtime and a FaaS function runtime are added to the platform. On this basis, combined with the serverless computing model, a function computing service is designed and implemented for the edge computing platform, expanding its service modes and providing a better quality of service experience.

Background and current state of edge computing

The background of edge computing. The rapid emergence of rich new applications such as the Internet of Things, the industrial Internet, the Internet of Vehicles, augmented reality, virtual reality, and 4K/8K HD video places ever higher requirements on the network's transmission capacity and its data distribution and processing capabilities. At the same time, the further development of application services generates a huge number of connections and a huge volume of data. End users, moreover, increasingly expect a higher quality of experience from network services and are willing to pay more for it. The traditional cloud computing model can no longer cope effectively with the rapidly growing data volume, the higher demand for transmission bandwidth, and the stricter real-time requirements of new applications. The network architecture must therefore be adjusted to meet service requirements such as massive connections, ultra-low latency, and very large bandwidth. To address these challenges, the industry proposes providing computing and data storage capabilities at the edge of the network, namely edge computing, so as to deliver high-quality services to users at the network edge.

The history of edge computing. In the 1990s, Akamai first defined the Content Delivery Network (CDN) [1]. CDN, which places transmission nodes near end users to cache static data such as images and videos, is regarded as the earliest prototype of edge computing. Edge computing takes this concept a step further by allowing nodes to participate in and perform basic computing tasks. In 1997, the computer scientist Brian Noble successfully applied edge computing to speech recognition on mobile devices [2]. Two years later it was used to extend the battery life of mobile phones, a technique known at the time as cyber foraging [3]. In 1999, peer-to-peer computing appeared [4]. In 2006, Amazon released its EC2 service, and cloud computing formally emerged and began to be adopted by enterprises. The 2009 work on virtual-machine-based Cloudlets for mobile computing introduced and analyzed in detail the end-to-end relationship between latency and cloud computing [5]. It proposed a two-tier architecture: the first tier is the cloud computing infrastructure, and the second tier is the Cloudlet, composed of distributed cloud elements. This two-tier architecture has in many ways become the theoretical basis of modern edge computing. In 2013, fog computing was formally proposed by OpenFog, an organization spearheaded by Cisco; its central idea is a distributed cloud computing infrastructure that improves the scalability of the Internet [6]. Fog computing and edge computing are similar in concept; if one must distinguish them, the main difference is that edge computing emphasizes edge devices and infrastructure and pays more attention to edge intelligence. In 2014, the European Telecommunications Standards Institute (ETSI) established a working group on Mobile Edge Computing standards to promote the standardization of edge computing, aiming at flexible use of computing and storage resources and at migrating the cloud computing platform from inside the mobile core network to the mobile access edge [7]. In 2015, edge computing entered Gartner's Hype Cycle. In 2016, ETSI extended the concept of Mobile Edge Computing to Multi-access Edge Computing (MEC) [8]. Since then, MEC has referred to servers or other devices running at the edge of a mobile network to perform specific computing tasks.

Definition of edge computing. There is no unified definition of edge computing in the industry. The Edge Computing Consortium (ECC) defined it in 2018 as: “an open platform at the edge of the network, close to the objects or data sources, that integrates the core capabilities of network, computing, storage, and application, provides edge intelligent services nearby, and meets the key needs of industry digitalization for agile connection, real-time business, data optimization, application intelligence, and security and privacy protection. It can serve as a bridge between the physical and digital worlds, enabling smart assets, smart gateways, smart systems and smart services” [9]. The edge of the network can be understood as any functional entity on the path from the data sources to the cloud computing center; these entities carry the edge computing platform that integrates the core capabilities of network, computing, storage, and application.

Development status of edge computing at home and abroad. In recent years, edge computing has set off a wave of industrialization, with industrial and commercial organizations actively initiating and promoting edge computing research, standards, and commercialization. Three main camps each bring their own strengths: Internet companies try to extend public cloud service capabilities to the edge, taking the consumer Internet of Things as their main battleground; industrial enterprises try to leverage their advantages in industrial network connectivity and industrial Internet platform services, centering on the industrial Internet; and telecom operators hope to activate the value of network-connected devices, open up access-side network capabilities, and advance into both the consumer IoT and the industrial Internet [10].

Edge computing technology architecture. In 2018, the ECC designed and released Edge Computing Reference Architecture 3.0 based on a model-driven engineering approach. With slightly different research emphases, edge computing has developed three widely recognized technical architectures: MEC, cloudlets, and fog computing [11]. Research on cloud-based access networks also opens further possibilities for edge computing. The mobile edge computing model emphasizes placing edge servers between cloud computing centers and edge devices so that the computation on terminal data is completed on the edge servers, and mobile edge terminal devices are generally assumed to have no computing capability. In contrast, terminal devices in the edge computing model are assumed to have substantial computing capacity; mobile edge computing is therefore similar in architecture and hierarchy to an edge computing model in which the edge server is one of its components [12]. As cloud technology moves toward multi-cloud distribution and the inclusion of devices, architectures require lightweight virtualization, and containerization is currently discussed as such a solution. Pahl et al. [13] reviewed the requirements of edge clouds and discussed the container and cluster technologies needed to build applications on distributed multi-cloud platforms spanning network nodes from data centers to small devices. Kunz et al. [14] introduced OpenEdge, a dynamic and secure open service edge network architecture (distinct from Baidu's OpenEdge), whose control architecture automates the configuration of edge networks in a cloud-like manner to simplify the introduction of new network services and applications. Pahl et al. [15] demonstrated a PaaS architecture for edge clouds in which application and service orchestration manages applications through containers; computation is carried to the edge of the cloud in this way, rather than data traveling from the IoT to the cloud. By implementing container and cluster technology on small single-board devices such as the Raspberry Pi, their experiments show that edge clouds can meet requirements for cost efficiency, low power consumption, and robustness.

Edge computing enabling technologies include Network Function Virtualization (NFV) [16], Software-Defined Networking (SDN) [17], and Information-Centric Networking (ICN) [18], as well as supporting technologies such as artificial intelligence, cloud computing and data center networking, big data, and blockchain.

Commercialization of edge computing. The edge computing market is still in an early stage of development; its main deployment forms include the cloud edge, the edge cloud, and the cloud gateway. The cloud edge is still logically part of a cloud service. The edge cloud builds small and medium-sized service capabilities on the edge side, where edge services are mainly provided by the edge cloud itself. The cloud gateway reconstructs the original embedded gateway system and provides protocol/interface conversion and edge computing capabilities on the edge side. Amazon has moved into edge computing with AWS Greengrass to stay at the forefront of the industry. Microsoft is investing heavily in the Internet of Things with its Azure IoT Edge solution, which extends cloud analytics to edge devices and supports offline use. Google has unveiled two new products, the Edge TPU hardware chip and the Cloud IoT Edge software stack, designed to help improve the development of edge-connected devices. The Apache Foundation incubator project Edgent can be embedded in gateways and small in-memory edge devices to accelerate real-time data analysis at the edge. Dell launched EdgeX Foundry, the Linux Foundation's first IoT framework. Baidu open-sourced the intelligent edge computing framework OpenEdge, which later joined LF Edge and was renamed Baetyl. Alibaba launched Link IoT Edge, an edge computing platform for the Internet of Things. Huawei open-sourced KubeEdge, a Kubernetes-native edge computing framework that joined the Cloud Native Computing Foundation (CNCF) as an incubating project. Xiong et al. [19], in the KubeEdge paper, describe an edge infrastructure that allows existing cloud services and cloud development models to be adopted at the edge and provides seamless communication between edge and cloud. Huawei Cloud provides the IEF and IEC solutions for the cloud edge and the edge cloud, respectively. Kingsoft Cloud launched the KENC (Kingsoft Cloud Edge Node Computing) platform for next-generation edge computing. The OpenStack Foundation launched the StarlingX edge cloud software stack. The Nomad-based edge computing platform belongs to the edge cloud deployment form. The emerging field of edge computing is thriving.

Background and current state of function computing

Function computing, or Function as a Service (FaaS), is the core of serverless computing; combined with the vendor-specific Backend as a Service (BaaS) offerings that cloud platforms provide to meet the needs of particular applications, it gives, simply put, Serverless = FaaS + BaaS [20]. FaaS takes an event-driven approach, with the platform hosting the computing service. With function computing, users simply write, package, and upload code; the function computing service prepares computing resources and executes tasks elastically and reliably, without developers having to pre-purchase or manage IT infrastructure such as servers.
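To make the function granularity concrete, the following minimal Go sketch shows the shape of a FaaS workload: the developer writes only a handler, and everything beneath it is the platform's responsibility. The Event type and Handle signature here are hypothetical illustrations, not the interface of any particular vendor or of the platform built in this paper.

```go
// A minimal sketch of the FaaS programming model, assuming a hypothetical
// handler signature; real platforms (AWS Lambda, OpenWhisk, etc.) each
// define their own event and context types.
package main

import (
	"context"
	"fmt"
)

// Event is an illustrative event payload delivered by the platform.
type Event struct {
	Source string            // which event source fired (e.g. object storage)
	Data   map[string]string // event attributes
}

// Handle is the only code the developer writes and uploads; provisioning,
// scaling, and server management are left to the platform.
func Handle(ctx context.Context, evt Event) (string, error) {
	return fmt.Sprintf("processed event from %s", evt.Source), nil
}

func main() {
	// Locally simulate one invocation; on a real platform the runtime
	// calls Handle in response to a trigger, not a main function.
	out, _ := Handle(context.Background(), Event{Source: "demo"})
	fmt.Println(out)
}
```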

The background of function computing. The rise of the serverless architecture is a product of the deepening development of cloud computing technology. When cloud computing began, two opposing approaches to cloud virtualization competed, and the market ultimately chose Amazon EC2's low-level virtual machine approach. The downside of this choice is that developers must manage the virtual machines themselves and perform tedious configuration. As public cloud BaaS products prospered and were adopted by more and more developers, and with 5G landing, the AI boom, and continuous breakthroughs in big data, cloud computing has become the water, electricity, and coal of the new era, and the principal contradiction in how users consume cloud computing has changed. AWS then launched the Lambda cloud service and proposed the Serverless architecture with function computing at its core. Serverless provides an interface that greatly simplifies cloud programming; it represents an evolution, a productivity advance much like programmers' past shift from assembly to high-level languages. Rapid innovation in data centers and the software platforms within them is once again changing how developers build, deploy, and manage online applications and services. Although "serverless" may seem a contradiction in terms, since servers are still used for computing, the name has stuck because it suggests that cloud users simply write code and delegate most of the server administration required by traditional cloud computing to the cloud vendor. The three basic differences between Serverless and traditional cloud computing are: the decoupling of computing and storage, which scale and are priced independently; the abstraction of executing a piece of code rather than allocating resources on which to execute it; and paying for code execution rather than for the resources allocated to execute the code.

The history of function computing. In 2014, AWS launched the Lambda cloud service, and its Serverless architecture with function computing at the core gained attention and adoption. In fact, Google's original App Engine also let developers deploy code while leaving much of the operational work to the cloud vendor, but it was largely rejected by the market, which in 2006 had embraced Amazon EC2's low-level virtual machine approach to cloud computing. The main reason for the success of low-level virtual machines was that, in the early days of cloud computing, users wanted to recreate in the cloud the same computing environment they had on their local machines in order to simplify the work of migrating to the cloud; this practical demand clearly outweighed writing new programs specifically for the cloud, especially while the commercial prospects of the early cloud were unclear. The history of function computing includes the evolution of virtualization. In the earliest days, each application ran on its own physical machine. The high cost of buying and maintaining large numbers of machines, each often underutilized, led to a great leap forward: virtualization. Virtualization is a key technology of computing platforms: it presents a set of similar resources through a common set of abstract interfaces, hiding differences in properties and operations and allowing resources to be viewed and maintained in a uniform way. Efficient, convenient management built on virtualization improves resource utilization and reduces energy consumption. At first, virtualization was hardware-based and heavyweight, as with the Kernel-based Virtual Machine (KVM). Lighter operating-system-level container virtualization technologies followed, such as Docker and Linux Containers (LXC). A container is a repackaging of Unix-like processes for servers: lightweight virtualization that combines the namespace resource isolation mechanism provided by the Linux kernel with cgroups to control and isolate resource limits. Combined with distribution tools like Docker, containers make it easy for developers to start new services without the slow startup and runtime overhead of virtual machines. For both hardware-based and container-based virtualization, the server remains the core concept; they are still serverful. The core ideas of cloud computing are resource reuse and flexible scheduling: from physical machines to virtual machines and containers, resource granularity has been continuously refined, and management and scheduling continuously optimized. Cloud computing thus evolved into three layers and three service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Servers have long been used to deploy online applications, but they are notoriously difficult to configure and manage, and server startup time severely limits how quickly applications can scale. The emergence of serverless computing, with function computing (FaaS) at its heart, is bound to change the architecture of modern scalable applications. Instead of viewing an application as a collection of servers, developers define it as a set of functions that access a common data store. Function handlers from different users share a common server pool managed by the cloud vendor, so developers need not worry about server administration, and applications can scale quickly.
In this way, the FaaS service model represents the logical conclusion of the evolution of sharing between applications, from the hardware, to the operating system, to the runtime environment itself, as shown in Figure 1.1.

Figure 1.1 Serverless virtualization development

Development status of function computing at home and abroad. In 2014, AWS launched the Lambda cloud service and proposed the Serverless architecture with function computing at its core. Subsequently, major cloud providers launched their own function computing services, including Google Cloud Functions, Microsoft Azure Functions, and IBM Cloud Functions with Apache OpenWhisk. In April 2017, Alibaba Cloud launched Function Compute and Tencent Cloud launched Serverless Cloud Function. Function computing became the fastest-growing cloud service of 2017, and the industry estimates that function computing can reduce cloud costs by 10 to 90 percent. The predictions made by the University of California, Berkeley in 2009, that the challenges then facing cloud computing would be solved one by one and that cloud computing would flourish [21], have come true. In February 2019, Berkeley published another paper predicting that Serverless will become the dominant form of the next generation of cloud services [22].

The principle of function computing services can be abstracted as follows: a service generally comprises event sources, the function computing service itself, and rules that define and govern how event sources trigger function computation. Developers write function code, upload it to the function computing service, and configure trigger rules. As shown in Figure 1.2, when an event occurs at an event source, the corresponding function is automatically triggered according to the configured rules, and the function computing service automatically scales elastically with the peaks and troughs of the workload. Trigger rule management resides either in the function computing service or within the event source, depending on the event source type and the cloud vendor. Each cloud provider's function computing service defines event sources/triggers that specify what is allowed to trigger a function. Event sources generally cover networking and content distribution, storage, data analytics, and timed events; some vendors also support event sources for integrated services such as IoT, databases, machine learning, and message queues, as well as user-defined event sources.

Figure 1.2 Schematic diagram of the function computing principle
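As a rough illustration of the trigger path in Figure 1.2, the Go sketch below matches incoming events against configured rules and returns the functions to invoke. All of the types and names are hypothetical simplifications; a real platform adds persistence, filtering, and elastic scaling of function instances with the event rate.

```go
// Illustrative sketch of the trigger path: an event source emits events,
// and rules decide which function each event triggers. All types here are
// hypothetical simplifications, not any vendor's API.
package main

import "fmt"

type Event struct {
	Source string // e.g. "oss", "timer", "iot"
	Kind   string // e.g. "object:created"
}

// Rule binds an event pattern to a function name.
type Rule struct {
	Source, Kind, Function string
}

// Dispatch returns the functions to invoke for one event; a real platform
// would also scale function instances up and down with the event rate.
func Dispatch(rules []Rule, evt Event) []string {
	var fns []string
	for _, r := range rules {
		if r.Source == evt.Source && r.Kind == evt.Kind {
			fns = append(fns, r.Function)
		}
	}
	return fns
}

func main() {
	rules := []Rule{{Source: "oss", Kind: "object:created", Function: "make-thumbnail"}}
	fmt.Println(Dispatch(rules, Event{Source: "oss", Kind: "object:created"}))
}
```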

Characteristics and advantages of function computing. Today, cloud computing mostly abstracts at machine granularity, while function computing refines the abstraction granularity of computing services to the function level, triggering application logic in an event-driven manner in response to user requests. Consider an enterprise developing a short-video social application: there are many issues to consider, such as how to build and operate an elastic, stable video-processing backend; how many servers to buy and of what specification; how to configure the network and operating system; how to deploy the environment; how to balance load; how to scale dynamically; how to upgrade configurations; how to handle server outages; how to handle peak request loads; how to handle monitoring and alerting; and so on. With function computing, users can build applications and services quickly and easily without worrying about these issues. There is no need to purchase or manage underlying infrastructure, so operation and maintenance costs are low; developers simply deploy code to the function computing service and trigger execution in an event-driven manner, and the service runs smoothly. A major advantage of function computing is rapid elastic scaling under sudden load peaks: users need not worry about environment deployment, server scale-out, or server downtime. Function computing uses a pay-as-you-go billing model, which is especially economical for workloads with pronounced peaks and troughs. In addition, function computing services provide log query, performance monitoring, and alerting to help developers quickly locate and fix faults. Applications built on function computing conform to the application design patterns of the serverless philosophy. 2019 was called the first year of mainstream Serverless cloud native adoption, and the Double 11 shopping festival showed that Serverless application architectures can withstand flood-level traffic.

Application scenarios suited to function computing. Businesses and applications suited to function computing can better solve problems of cost, efficiency, and connectivity. They generally have the following characteristics: first, the workload has peaks and troughs; second, operation is event-triggered; third, the application needs to integrate with multiple cloud services; fourth, the application's data grows so fast that a single server can no longer handle it, and developers would otherwise have to configure load balancing to cope with requests; fifth, the application iterates quickly and has frequent development, deployment, upgrade, and scaling needs; sixth, there are low-frequency background maintenance tasks. Based on these characteristics, typical application scenarios suited to function computing fall into four main categories: first, event-triggered scenarios, such as custom image processing, custom events, low-rate IoT requests, audio-to-text processing, automated messaging, timer-based tasks, log processing, SaaS event handling, and batch jobs; second, traffic-burst scenarios, such as elastic scaling to absorb sudden traffic, transcoding under traffic expansion, backup of multimedia (video, image, etc.) data, and image upload, cropping, and sharing; third, big data scenarios, such as data-driven downstream distribution, real-time file processing, real-time stream processing, and database extract-transform-load; fourth, backend scenarios, such as web application backends, IoT backends, mobile backends, and manually triggered tasks.

Background and current state of the Nomad runtime

Nomad is a lightweight workload orchestration tool that provides basic functions such as multi-cluster management, application deployment, and elastic scaling. Nomad supports different types of task workloads, including non-containerized ones, which are connected to runtimes through the concept of drivers. According to SuperFluidity Deliverable D5 [23] from the European telecom standardization project, the NFV management features of early Nomad were compared with OpenStack and OpenVIM, and Nomad showed a significant low-latency advantage in instantiation over both.
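To give a sense of how a workload reaches a runtime through a driver, the sketch below registers a single Docker task with a Nomad cluster through Nomad's Go API client (github.com/hashicorp/nomad/api). It assumes a locally reachable Nomad agent; the helper names follow that package but may differ across Nomad versions, and the job itself is purely illustrative.

```go
// A minimal sketch of submitting a Docker workload to Nomad through its Go
// API client (github.com/hashicorp/nomad/api). Helper names follow that
// package but may vary across versions; the job here is illustrative.
package main

import (
	"log"

	"github.com/hashicorp/nomad/api"
)

func main() {
	client, err := api.NewClient(api.DefaultConfig()) // talks to the local agent
	if err != nil {
		log.Fatal(err)
	}

	// One job -> one task group -> one task; the driver field selects the
	// runtime (docker, exec, lxc, ...), which is the extension point this
	// paper builds on for the libvirt and function runtimes.
	task := api.NewTask("web", "docker").SetConfig("image", "nginx:alpine")
	group := api.NewTaskGroup("web", 1).AddTask(task)
	job := api.NewServiceJob("hello-edge", "hello-edge", "global", 50).AddTaskGroup(group)

	if _, _, err := client.Jobs().Register(job, nil); err != nil {
		log.Fatal(err)
	}
}
```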

Before v0.9, Nomad supported runtimes such as Docker, LXC, the bare-process exec driver, and rkt. This runtime scheme had the following defects:

- Functions and features: all runtimes shared the same lifecycle, some runtimes were functionally similar with little differentiation, few interfaces were implemented, and pluggable runtime drivers were not supported.

- Code management: the driver code for the different runtimes lived in the main repository, so code updates to individual runtimes affected one another, and the growing number of runtime types made management and maintenance complex.

- Support for more runtimes: community openness requires supporting various business scenarios, including both containerized and non-containerized workload types, yet adding a new runtime driver required recompiling Nomad. Such requirements lead to excessive code in the codebase; drivers should be separated out and supported and maintained independently by third parties.

To solve these problems, the Nomad community refactored the code in v0.9 and designed a plugin mechanism based on a client/server architecture, including runtime driver plugins and hardware device plugins, which standardizes and simplifies third-party runtime access. Runtime drivers can be registered dynamically without recompiling the Nomad binary, and some commonly used runtime drivers are built in and work out of the box. Different workload types connect their task-related runtime drivers through the same set of standard runtime driver interfaces, which are defined with Protocol Buffers and based on gRPC. The runtime driver interface allows third-party runtimes to be split out and developed and released as separate projects independent of Nomad, which also makes the community ecosystem more open.
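The Go sketch below condenses the idea of the runtime driver interface into a minimal form: a driver implements a small task lifecycle contract, and the orchestrator calls it through the plugin boundary without caring what runtime sits behind it. This is an illustrative simplification of the interface in Nomad's plugins/drivers package, not its exact definition, which has more methods (fingerprinting, stats, signals) and is served over gRPC so drivers can live outside the Nomad binary.

```go
// A simplified, illustrative shape of a pluggable runtime driver, condensed
// from the interface Nomad exposes in its plugins/drivers package; the real
// interface is larger and served over gRPC.
package driver

import "context"

// TaskConfig carries the runtime-specific task definition decoded from the
// job file (e.g. a Docker image name, or a VM disk image for libvirt).
type TaskConfig struct {
	ID     string
	Config map[string]interface{}
}

// RuntimeDriver is what a third-party runtime implements; the orchestrator
// invokes it through the plugin boundary regardless of the workload type
// behind it, be it a container, a bare process, or a virtual machine.
type RuntimeDriver interface {
	StartTask(ctx context.Context, cfg *TaskConfig) (handle string, err error)
	StopTask(ctx context.Context, handle string) error
	DestroyTask(ctx context.Context, handle string) error
	WaitTask(ctx context.Context, handle string) (exitCode int, err error)
}
```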

At the beginning of this research, when v0.9 had just been released, the only community project based on the runtime driver interface was the LXC driver split out of the official main repository. Drivers supported by the community ecosystem, such as Singularity, Jails, Pot, and Firecracker, appeared later.

As the above shows, the Nomad runtimes still need to be diversified: they lack a virtual machine runtime and a function runtime, and therefore cannot meet the needs of some edge computing scenarios. The runtime driver interface is one way to connect new workload types, and it is how the Libvirt virtual machine runtime is integrated in this paper. The Nomad-based edge computing platform acts as the PaaS platform layer on which the function runtime is supported: function workloads are built and run in containers, and the components associated with the function runtime can themselves run in containers, minimizing intrusive changes to the code.
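As a hint of what the Libvirt runtime's task start reduces to, the sketch below uses the github.com/libvirt/libvirt-go bindings to connect to a local hypervisor and boot a transient domain from XML. It assumes a reachable qemu:///system endpoint, and the domain XML is a bare placeholder; the actual driver assembles it from the task's configuration.

```go
// A minimal sketch of what a libvirt driver's StartTask boils down to:
// connect to the local hypervisor and create a transient domain from XML.
// The XML here is a placeholder; a real driver builds it from task config.
package main

import (
	"fmt"
	"log"

	libvirt "github.com/libvirt/libvirt-go"
)

const domainXML = `<domain type="qemu">
  <name>edge-vm-demo</name>
  <memory unit="MiB">256</memory>
  <vcpu>1</vcpu>
  <os><type arch="x86_64">hvm</type></os>
</domain>`

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// DomainCreateXML boots a transient VM; a driver would keep the domain
	// name as its task handle for later Stop/Destroy calls.
	dom, err := conn.DomainCreateXML(domainXML, libvirt.DOMAIN_NONE)
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	name, _ := dom.GetName()
	fmt.Println("started domain:", name)
}
```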