Introduction: How can Serverless capabilities be provided through native Kubernetes, and how can the rich cloud native community ecosystem be leveraged? This article gives you an overview of our implementation of Serverless Kubernetes.

Author: Yuan Yi

Overview

Kubernetes, the current cloud native industry standard, has a rich ecosystem and cross-cloud-vendor capabilities. Kubernetes abstracts the delivery of IaaS resources well, making cloud resource delivery ever easier. At the same time, users expect to focus more on the business itself and achieve application-oriented delivery, which is how the Serverless concept was born. So how do you provide Serverless capabilities through native Kubernetes? And how do you take advantage of the rich cloud native community ecosystem? Here is our implementation of Serverless Kubernetes. This article covers the following three aspects:

  • Why Serverless Kubernetes
  • How to implement Serverless Kubernetes
  • Serverless Kubernetes in practice

Why Serverless Kubernetes

Kubernetes

As we all know, Kubernetes is an open source container orchestration system. Users can use Kubernetes to reduce operation and maintenance costs, improve operation and maintenance efficiency, and get standardized APIs. In a sense, it avoids vendor lock-in, and a cloud native ecosystem has formed with Kubernetes at its core. It is fair to say that Kubernetes has become the de facto standard of the cloud native industry.

Serverless with Kubernetes

So let’s go back to Serverless. The core idea of Serverless is to let developers focus more on business logic and less on infrastructure. So how do we build Serverless on top of this cloud native industry standard, so that Kubernetes users, too, can focus more on application business logic?

The advantages of Kubernetes for Serverless

Let’s look at the advantages Kubernetes brings to Serverless, starting with what Kubernetes offers:

  • Containerization
  • Unified IaaS resource delivery
  • CI/CD continuous integration and deployment
  • Cross-cloud-vendor portability
  • Rich ecosystem
  • Application-oriented management

These map directly to Serverless capabilities:

  • Event-driven: Kubernetes supports Job workloads, and rich event sources exist around Kubernetes
  • On demand: Kubernetes itself supports elasticity through HPA (a minimal sketch follows below)
  • O&M-free, highly available: Kubernetes provides strong support through containerization and unified resource delivery

Taken together, Kubernetes has natural advantages as a foundation for Serverless.
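
To make the on-demand point concrete, here is a minimal HorizontalPodAutoscaler sketch. The Deployment name `my-app` and the CPU target are illustrative assumptions, not from the original article:

```yaml
# Scale a Deployment between 1 and 10 replicas based on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app        # assumed workload name
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```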

How to implement Serverless Kubernetes

Implementing Serverless on Kubernetes comes down to two questions:

First, how to let users pay less attention to infrastructure;

Second, how to let them focus more on business applications.

Upward, we use a Serverless framework to focus on business applications, further abstracting Kubernetes resources and providing on-demand automatic elasticity. Downward, IaaS resources are made O&M-free, reducing infrastructure concerns and eliminating node maintenance.

So how do we make IaaS resources O&M-free?

Reducing the focus on infrastructure: O&M-free IaaS

In native Kubernetes, node resources must be maintained by users themselves. To reduce this burden, we provide managed node pools that handle the node life cycle on the user's behalf, but users still need to maintain the managed node pool policies. In Serverless Kubernetes, virtual nodes are combined with Elastic Container Instance (ECI) to eliminate IaaS O&M entirely (a scheduling sketch follows the list below).

O&M-free IaaS in Serverless Kubernetes includes:

  • Container-based, securely isolated, highly portable
  • No server management: no capacity planning and no server operation and maintenance required
  • Elastic scaling: second-level scale-out with virtually unlimited container capacity
  • Pay-as-you-go for higher resource utilization
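
As a minimal sketch of how a Pod lands on a virtual node backed by ECI: the label and toleration keys below follow the common virtual-kubelet convention and are assumptions here; the exact keys depend on the provider.

```yaml
# Schedule a Pod onto a virtual node so it runs as an elastic container instance.
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  nodeSelector:
    type: virtual-kubelet              # assumed virtual-node label
  tolerations:
    - key: virtual-kubelet.io/provider # assumed virtual-node taint key
      operator: Exists
  containers:
    - name: demo
      image: nginx:latest
```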

Downward, then, we achieve O&M-free IaaS resources by combining virtual nodes with ECI. Upward, how do we focus on business logic? It really comes down to the application.

Focusing on business logic: the application as the core

On the application side, we have to solve these problems:

  • Application deployment
  • Grayscale (canary) release
  • Traffic management
  • Automatic elasticity
  • Application observability and multi-version management

Is there an out-of-the-box solution? The answer is Knative.

What is Knative

Knative is an open source Serverless application framework based on Kubernetes that helps users deploy and manage modern Serverless workloads and build enterprise-grade Serverless platforms.

Knative has the following advantages:

  • Build scalable, secure, stateless services in seconds
  • APIs with a higher level of application abstraction than raw Kubernetes
  • Pluggable components, so you can bring your own logging, monitoring, networking, and service mesh
  • Run Knative anywhere Kubernetes runs, without worrying about vendor lock-in
  • A seamless developer experience, with support for GitOps, DockerOps, ManualOps, and more
  • Support for common tools and frameworks such as Django, Ruby on Rails, Spring, etc.

Knative consists of two core modules: Serving and Eventing.

Serving provides a Service application model that supports traffic-based grayscale release, revision management, scale-to-zero, and automatic elasticity.
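
Here is a minimal Knative Service sketch illustrating traffic-based grayscale release; the image, revision name, and percentages are illustrative assumptions:

```yaml
# A Knative Service that splits traffic between an old revision and the latest one.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"  # allow scale-to-zero when idle
    spec:
      containers:
        - image: registry.example.com/hello:v2  # assumed image
  traffic:
    - latestRevision: true   # 10% of traffic canaries to the newest revision
      percent: 10
    - revisionName: hello-v1 # assumed name of the previous revision
      percent: 90
```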

Eventing provides event-driven capabilities, supporting rich event sources as well as the Broker/Trigger model for event routing and filtering.
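
And a minimal sketch of the Broker/Trigger model: a Trigger filters events from a Broker by attribute and delivers the matches to a Knative Service. The broker name, event type, and subscriber below are assumptions:

```yaml
# Route only events of an assumed type from the "default" Broker to a Service.
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: order-created
spec:
  broker: default
  filter:
    attributes:
      type: com.example.order.created  # assumed CloudEvents type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-handler              # assumed Knative Service
```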

Why Knative

So why did we choose Knative?

According to the CNCF 2020 China Cloud Native Survey report, Knative has become the most widely installed serverless platform on Kubernetes.

In addition, the Knative community recently surveyed which cloud vendors and enterprises currently offer or use Knative. Almost all major vendors support or integrate Knative, including Alibaba Cloud, Google Cloud, IBM, and Red Hat, and most of them provide production-grade capabilities, which indicates that more and more users are embracing Knative.

Knative has also recently applied to become a CNCF incubating project, which is undoubtedly exciting news for Knative developers.

Challenges, solutions, and results of landing Knative

Moving from open source to a product inevitably brings challenges. Productizing Knative mainly faces the following:

  • Multiple control-plane components complicate operation and maintenance
  • The 0-to-1 cold start problem
  • Precise distribution of traffic requests

So how do we respond?

We provide component hosting, saving users resource and O&M costs. When requests drop to zero, the workload scales down to a low-specification reserved instance, eliminating the 0-to-1 cold start while keeping costs controllable. And we provide a self-developed event gateway for precise traffic control.

Serverless Kubernetes in practice

The landing solution

Combining the above: upward, the Serverless framework Knative lets users focus more on business applications; downward, virtual nodes reduce the focus on infrastructure. This is our Serverless Kubernetes landing solution. Downward, it integrates cloud product capabilities around the Kubernetes API, including message events, elastic container instances, and log monitoring. Upward, through Knative it centers on the application, providing event-driven, automatic elasticity, and other capabilities.

Typical Application Scenarios

Finally, let’s take a look at the landing scenarios. Typical application scenarios and industry fields are shown below:

Landing practice: heterogeneous resources on demand

  • Customer pain points

Users want to use Serverless technology to consume resources on demand, saving resource costs and simplifying O&M and deployment. They also have GPU workloads: they want a container Serverless offering that supports GPU resources while simplifying application deployment (operating as few Kubernetes resources such as Deployment/Service/Ingress/HPA as possible) and keeping IaaS resources O&M-free.

  • Solution

Use Knative + ASK as the Serverless architecture. After data is collected, it reaches the data processing service through the service gateway, and the data processing service scales out and in automatically with demand. A minimal sketch of a GPU-backed service follows.
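
As a minimal sketch of GPU use under this architecture, a Knative Service can request a GPU in its container spec. The image and the `nvidia.com/gpu` resource name are assumptions that depend on the cluster's device plugin and ECI GPU specifications:

```yaml
# A Knative Service whose revision pods each request one GPU.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: gpu-inference
spec:
  template:
    spec:
      containers:
        - image: registry.example.com/inference:latest  # assumed image
          resources:
            limits:
              nvidia.com/gpu: "1"  # assumed GPU resource name
```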

Landing practice: event-driven, precise distribution

A customer’s live streaming system supports online user interaction. Processing its message data presents the following technical challenges:

  • Service load fluctuates sharply, with high message concurrency
  • Interaction requires real-time response with low latency

The customer chose Alibaba Cloud's Knative service for elastic data processing. The number of application instances expands and shrinks in real time with service peaks and troughs, delivering on-demand, real-time elastic computing. The entire process is fully automated, greatly reducing the infrastructure burden on business developers.
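
As a minimal sketch of the kind of configuration such a message-processing workload might use, Knative's autoscaler can scale on in-flight requests per replica; the target value and image here are illustrative assumptions:

```yaml
# Scale the service based on concurrent in-flight requests per replica.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: message-processor
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
        autoscaling.knative.dev/target: "100"  # assumed concurrency target
    spec:
      containers:
        - image: registry.example.com/message-processor:latest  # assumed image
```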

Summary

Let’s review the main points of this article. First, why provide Serverless on Kubernetes:

  • Kubernetes has become the cloud native industry standard
  • Serverless programming against the standard Kubernetes API

Then, how we implement Serverless Kubernetes:

  • O&M-free IaaS nodes
  • Serverless Framework (Knative)

Finally, two landing practice scenarios were introduced:

  • Heterogeneous resources on demand
  • Event-driven, precise distribution

In a word: Serverless Kubernetes builds on Kubernetes to provide on-demand, node-O&M-free Serverless capabilities, enabling developers to truly program Serverless applications through standardized Kubernetes APIs. It is worth your attention.


This article is original content from Alibaba Cloud and may not be reproduced without permission.