Introduction: An Alibaba Cloud technical expert walks through the development history of Serverless Kubernetes and shows how it stays compatible with standard Kubernetes usage while offering operations-free management, extreme elasticity, and other characteristics.

Author | Florence (Alibaba Cloud technical expert)

Planning | ChuXingJuan

With the development of cloud native technology, from the early standalone Docker, to Kubernetes unifying the orchestration field, to cloud-managed Kubernetes, the landscape has changed dramatically. Today we will look at the development of Serverless Kubernetes in its historical context.

The story begins with Docker

Although the story starts with Docker, we should not overlook the groundwork laid by the IaaS (Infrastructure as a Service) pioneers before it, nor the roadmap for cloud computing that its leaders laid out very early on.

More than a decade ago, our predecessors divided the cloud into three layers according to what the platform provides to users:

  • IaaS: Infrastructure as a Service, which provides VMs or other infrastructure resources to users as services.
  • PaaS: Platform as a Service, which provides a platform to users as a service; for example, various middleware services can be accessed on the platform.
  • SaaS: Software as a Service, which provides applications to users as services, such as email services.

As shown in the figure below, moving from IaaS to PaaS, users (developers and operators) become less aware of the underlying resources and can focus more on the business.

Letting professionals do what they do best maximizes overall efficiency. A start-up online grocery company, for example, has no need to build machine rooms, purchase hardware, configure network and storage, or install operating systems, none of which relate to its business; it should instead focus on business development and operations.

After more than a decade of development, IaaS has matured, and basic resources such as ECS, VPC, and EBS have gained wide adoption. The development of PaaS, however, has been much slower.

Back in 2008, Google launched App Engine as a platform on which developers could write business code and run it directly. The idea was far ahead of its time, and developers never fully embraced it. Beyond the public cloud, open source PaaS platforms were also on the move; VMware's Cloud Foundry and Red Hat's OpenShift are the best known. Both hoped to provide a platform for rapid application delivery, but both remained lukewarm and grew increasingly hard to use due to compatibility issues.

That changed with the birth of Docker in 2013. Thanks to its developer friendliness, with a single command to pull up a service and extremely simple operation, Docker became one of the most popular open source projects in the community.

Docker's advantages are mainly the following: a Docker image packages the application together with the environment it depends on into a single archive that can run directly on any machine with Docker installed. This solves the deployment problems at every stage, from development and testing to production, and guarantees environment consistency.

The success of Docker lies in its extremely simple operation rather than in technological innovation; technologies such as cgroups and namespaces had long been kernel features. As a result, Cloud Foundry did not see Docker as a competitor early on, because it was already using these same technologies. Instead, it was the Docker image, an almost incidental feature, that truly realized "Build once, Run anywhere".

Kubernetes establishes its dominance

The original Docker was a standalone tool; large-scale deployment required a management platform, just as OpenStack manages VMs.

Early container management platforms such as Mesos and Swarm were also controversial in their day, but none of them moved away from the IaaS mindset: they managed containers as if they were virtual machines. Only with the emergence of Kubernetes did the field truly begin to unify. Beyond Google's endorsement and an architecture matured from Borg, what mattered more was that Kubernetes had, from its birth, already thought through how to manage replica sets and how to expose services.

One of the most disappointing stories is Docker's own management system, Swarm. Docker was by then well established but not yet profitable, so the company launched a Swarm enterprise edition. Although Swarm later adopted many Kubernetes concepts, the moment had passed: the cloud native ecosystem was already booming around Kubernetes.

Kubernetes, although dominated by Google, remains open enough: it abstracts resource management into interface specifications, such as CRI for the container runtime, CNI for networking, CSI for storage, Device Plugins for device management, plus various admission controls, CRDs, and so on. Kubernetes is evolving into a cloud operating system, and the various cloud native components are the system components that run on top of this operating system.

Managed Kubernetes on the public cloud

Although Kubernetes has established its leading position, operating it is not easy. Against this backdrop, public clouds began offering managed Kubernetes services; Alibaba Cloud, for example, launched its managed-Master offering, ACK, in 2017.

In ACK (Alibaba Cloud Container Service for Kubernetes), the installation, operation, and maintenance of the Kubernetes management components are handled by the public cloud, while ECS or bare metal instances serve as Kubernetes compute nodes. This greatly reduces costs for Kubernetes users: after obtaining a kubeconfig file from the cloud platform, users can manage the cluster directly with the kubectl command line or the RESTful API.
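For reference, a kubeconfig is an ordinary YAML file describing clusters, users, and contexts; a minimal sketch is below (the cluster name, server address, and credential placeholders are illustrative, not real ACK values):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-ack-cluster                   # hypothetical cluster name
    cluster:
      server: https://example.com:6443     # API server endpoint (placeholder)
      certificate-authority-data: <base64-ca-cert>
users:
  - name: my-user
    user:
      client-certificate-data: <base64-client-cert>
      client-key-data: <base64-client-key>
contexts:
  - name: my-context                       # binds the cluster to the user
    context:
      cluster: my-ack-cluster
      user: my-user
current-context: my-context
```

With this file in place, `kubectl --kubeconfig ./config get nodes` talks to the managed control plane exactly as it would to a self-hosted cluster.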

To expand cluster capacity, users only need to adjust the number of ECS instances; newly created instances automatically register with the Kubernetes master. Beyond that, ACK also supports one-click upgrades of the cluster version and of various plug-ins. ACK moves the heavy O&M work to the cloud and, with the elasticity of the cloud, enables minute-level resource scaling.

Taking operations-free management and elasticity to the limit

Compared with private clouds, public clouds pay more attention to cost. In a private cloud, users' infrastructure costs are essentially fixed: taking a service offline does not let you stop paying for a server sitting in the machine room. The public cloud, in contrast, offers a pay-as-you-go model.

If most workloads in a cluster are long-running with fixed resource requirements, ACK works fine. But when there are large numbers of Jobs or bursts of unexpected traffic, temporarily expanding VMs and then starting containers on them is not flexible enough.

For example, an online education company scales out tens of thousands of Pods every day during the 7-9 PM class peak. With ACK, it must estimate the capacity of these Pods in advance, convert it into ECS computing power, and purchase the corresponding number of compute nodes ahead of time to join the Kubernetes cluster. After 9 PM those ECS instances must be released again; the whole process is very tedious.

So, is there a solution that is compatible with the Kubernetes usage model, can start Pods in seconds, and bills by the Pod (ACK bills by the node)?

AWS was the first to offer one: Fargate, which runs Pods in a Kubernetes cluster without real nodes. Alibaba Cloud launched a similar product, ECI (Elastic Container Instance), in 2018; each ECI instance is a Pod hosted on the cloud.

Kubernetes can use ECI in two ways: one is ASK (Alibaba Cloud Serverless Kubernetes), the other is the ACK + Virtual Node scheme. In ASK, the compute nodes are entirely Virtual Nodes. A Virtual Node is a virtual compute node with effectively unlimited capacity that manages the ECI lifecycle; it registers with Kubernetes and, to Kubernetes, looks like an ordinary Node. Users only need to submit native Kubernetes YAML to create Pods, fully compatible with Kubernetes.
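Creating a Pod in ASK really is just a native manifest; a minimal sketch (the name, image, and resource sizes are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-ask            # hypothetical name
spec:
  containers:
    - name: web
      image: nginx:1.25      # any public image works
      resources:
        requests:            # on ECI, requests size the instance backing the Pod
          cpu: "1"
          memory: 2Gi
```

Submitted with `kubectl apply -f pod.yaml`, this Pod lands on the Virtual Node and is backed by an ECI instance rather than an ECS node.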

A Virtual Node can also be used alongside regular ACK nodes. Users can schedule long-running tasks to ECS nodes, and take advantage of ECI's fast startup (the container is pulled up within about 10 seconds) by scheduling bursty or short-lived tasks to ECI, achieving optimal cost.
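Steering a burst Pod to the Virtual Node is ordinary Kubernetes scheduling. The sketch below uses common virtual-kubelet conventions; the exact label and toleration values vary by provider and should be checked against the actual cluster:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: burst-job                        # hypothetical short-lived task
spec:
  nodeSelector:
    type: virtual-kubelet                # common label on virtual-kubelet nodes
  tolerations:
    - key: virtual-kubelet.io/provider   # virtual nodes are usually tainted
      operator: Exists
  containers:
    - name: worker
      image: busybox:1.36
      command: ["sh", "-c", "echo burst work; sleep 60"]
```

Long-running Pods that omit the selector and toleration keep landing on ordinary ECS nodes, so one cluster mixes both billing models.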

ECI has been adopted by many Internet and AI companies. In subsequent articles, we will share, step by step, some of the technical issues and challenges typical users encounter when migrating to ECI.

To summarize, today we reviewed the history of containers and Kubernetes from the perspective of technology development. Common capabilities keep sinking into the infrastructure: Kubernetes and Service Mesh attempt to sink service management and traffic management, respectively, into the platform. But these components carry their own administrative costs, so they in turn evolve into cloud-hosted services. As technology continues to sink, the capabilities cloud computing provides will keep moving up the stack, becoming more comprehensive and richer and letting developers focus on the business.

Xiaoyu Chen is a technical expert at Alibaba Cloud, responsible for the underlying development of Alibaba Cloud Elastic Container Instance (ECI). He has published "Prometheus" and "Cloud Computing Stuff". This article is reprinted from InfoQ's "Inside Out Alibaba Cloud Serverless Kubernetes" series by Chen Xiaoyu.


This article is the original content of Aliyun and shall not be reproduced without permission.