Author | Peng, Alibaba technical expert

**Takeaway:** This article takes the daily development process as its starting point, analyzes the problems developers face at each stage, and combines them with solutions to derive a Serverless-oriented development model. It then maps that model onto the Serverless product forms offered by the industry, providing a reference for developers adopting Serverless architectures and services.

In the past two years, Serverless has been discussed more and more among developers, and talks on the topic have grown rapidly. At KubeCon & CloudNativeCon, an influential conference, there were 20 Serverless-related sessions in 2018; by 2019 the number had grown to 35.

At the product level, from the earliest AWS Lambda to Azure Functions, Google Cloud Functions, and Google Cloud Run, and then to Alibaba Cloud's Serverless Kubernetes, Serverless App Engine, and Function Compute in China, compute-oriented Serverless cloud infrastructure is increasingly rich.

New concepts and products do not appear out of nowhere; they are born to solve current problems. As practitioners gain a clearer and deeper understanding of the problem domain, the ways of treating the problem are iterated and solutions closer to its essence emerge. If you do not understand a solution in terms of its problem domain, it is easy to fall into one of two extremes: "it solves everything" or "it is too far ahead of its time to understand."

View Serverless from daily iterations

The figure above shows a commonly used project iteration model aimed at satisfying customer requirements. In this model, the project team meets customer needs through passive iteration and gradually comes to understand the essence of those needs; through active iteration, it adopts better solutions together with customers or solves problems at the root. Each requirement feedback deepens the understanding of customer needs and enables a more satisfying service; each bug report deepens the understanding of the solution and enables a more stable service.

Once a project built on this model goes live, the core day-to-day problem becomes how to accelerate iteration.

To accelerate iteration, you need to understand where the constraints are and keep the goal in mind. The figure below shows the process from a development perspective:

Although different development languages and architectures are used in practical applications, there are common problems at each stage, such as:

In addition to addressing these general issues, standardized solutions are needed to reduce developers' learning and usage costs and to shorten the time from idea to launch.

If we analyze the time spent at each stage of the above process, we find that over the whole life cycle of a project:

  • Deployment & operations take far more time and effort than development & testing
  • General-purpose logic consumes as much time and effort as business logic, or even more

To accelerate iteration, the parts that consume the most time and energy need to be addressed in turn, as shown in Figure 4:

From left to right, deployment and operation costs are reduced first by delegating O&M at different levels; once O&M costs are lowered, the cost of "general-purpose logic" is reduced in turn. Together, the two allow a deeper focus on the business during iteration. This is also the journey from Cloud Hosting to Cloud Native, fully enjoying the technical dividends that Cloud Native brings.

Because software architecture and deployment architecture are highly coupled to their current environment, existing applications facing new concepts, services, and products need to adjust the technology used in their iteration; that is, their development and deployment model needs a degree of transformation. For new applications, adopting new ideas also carries learning and practice costs.

Therefore, the above process cannot be accomplished overnight. Services and products should be chosen according to the business's current pain points and their priority, and technical pre-research should be carried out in advance according to future plans, so that suitable services and products can be selected at each stage.

Introduction to Serverless

Wikipedia has a fairly complete definition of Serverless:

Serverless computing is a cloud computing execution model in which the cloud provider runs the server, and dynamically manages the allocation of machine resources. Pricing is based on the actual amount of resources consumed by an application, rather than on pre-purchased units of capacity.

This computing model brings users the following benefits:

Serverless computing can simplify the process of deploying code into production. Scaling, capacity planning and maintenance operations may be hidden from the developer or operator. Serverless code can be used in conjunction with code deployed in traditional styles, such as microservices. Alternatively, applications can be written to be purely serverless and use no provisioned servers at all.

In essence, a concept is an abstraction of a problem domain and a summary of its characteristics. Understanding a concept through its characteristics keeps attention on its value rather than on its literal description.

From the user’s perspective, we can abstract the following characteristics of Serverless:

  • O&M-free (server O&M, capacity management, elastic scaling, etc.)
  • Pay by actual resource usage

In companies of a certain size, where development and operations roles are strictly separated, this form of computing already exists internally; it is not entirely new. The current trend, however, is to move to the cloud and use its advantages of scale and technology dividends to reduce the technical cost of the business and feed those dividends back into it. Therefore, industry discussion of Serverless focuses on the Serverless capabilities embodied in cloud services and products.

Serverless development model

The article Serverless Architectures on Martin Fowler's site covers the Serverless development model from an architectural point of view. A brief summary gives three key points:

  • Event-driven development model
  • Automatic elastic scaling
  • OpenAPI

Serverless development adopts an event-driven model, designed around the production of and response to events such as HTTP/HTTPS requests, timers, and messages. In this model, the production and processing of events are the core: the whole service flow is driven by events, and attention stays on the processing pipeline. The deeper the understanding of the business, the better the match between event types and the business, and the more effectively technology and business interact.
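To make the shift concrete, the sketch below shows what the event-driven model looks like in code: a single stateless handler that receives an event and a context and returns a response. The `handler(event, context)` signature follows the convention used by FaaS platforms such as AWS Lambda and Alibaba Cloud Function Compute; the event fields shown here are illustrative assumptions, not any specific platform's schema.

```python
import json


def handler(event, context):
    """Respond to one event; the platform decides when and where this runs.

    `event` is the trigger payload (an HTTP request, a timer tick, a message);
    `context` carries platform metadata such as the request id. The field
    names below are illustrative, not a specific platform's schema.
    """
    payload = json.loads(event) if isinstance(event, (str, bytes)) else event

    # Business logic only: no server bootstrap, no port binding, no process
    # management -- the platform handles ingress, invocation, and scaling.
    user_id = payload.get("userId", "anonymous")
    result = {"message": f"click recorded for {user_id}"}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```

Because the handler keeps no state between invocations, the platform is free to run zero, one, or many copies of it, which is what makes automatic elastic scaling and non-resident services possible.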

The event-driven model turns a resident service from a requirement into an option, making it easier to respond to changes in request volume, for example through automatic elastic scaling. At the same time, when services are not resident, the required resource and maintenance costs drop, which accelerates project iteration.

It can be understood more intuitively through the two pictures in the article:

Figure 5 shows the currently common development model: the Click Processor service is a resident service that responds to all click requests from users. In production it is usually deployed as multiple instances; being resident is its key characteristic, and day-to-day O&M focuses on keeping the resident service stable.

Figure 6 shows the event-driven development model: the focus moves forward to the generation of and response to events, and whether the responding service is resident becomes optional.

Conceptually, what distinguishes Serverless from PaaS (Platform as a Service) and CaaS (Container as a Service) is that automatic elastic scaling is a core part of the concept.

Combined with the event-driven model, automatic elastic scaling in a Serverless scenario needs to be more transparent to developers: their view of processing capacity shifts from static to dynamic, which better copes with the uncertainty of request volume after launch.

On the development side, the deliverable can be a container image or a language-level package (such as a WAR/JAR in Java), with the platform taking care of the runtime. Going a step further with the FaaS concept, you can rely on a platform or a standardized FaaS solution and provide only business-logic functions, leaving runtime concerns such as request ingress, request invocation, and automatic elastic scaling to the platform.

Whichever delivery method is used, the BaaS concept can be applied on the cloud: part of the logic, such as permission management or middleware management, is implemented through cloud platform or third-party OpenAPIs, so the development process focuses more on the business level.
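As an illustration of the BaaS idea, the sketch below delegates storage to a cloud service instead of self-managed middleware. It assumes the Alibaba Cloud OSS Python SDK (`oss2`); the bucket name, endpoint, and credential environment variables are placeholders for this example, not values from the article.

```python
import json
import os

import oss2  # Alibaba Cloud OSS SDK: pip install oss2


def handler(event, context):
    """Store the incoming event in OSS instead of self-managed storage.

    Bucket name, endpoint, and credential variables are placeholders.
    """
    auth = oss2.Auth(os.environ["OSS_ACCESS_KEY_ID"],
                     os.environ["OSS_ACCESS_KEY_SECRET"])
    bucket = oss2.Bucket(auth,
                         "https://oss-cn-hangzhou.aliyuncs.com",  # example endpoint
                         "my-demo-bucket")                        # example bucket

    # The general-purpose logic (durable storage, replication, capacity)
    # is delegated to the BaaS side; the function only decides what to store.
    request_id = getattr(context, "request_id", "latest")
    key = f"events/{request_id}.json"
    bucket.put_object(key, json.dumps(event, default=str))

    return {"stored": key}
```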

Serverless service model

The Serverless service model concerns cloud vendors' support for Serverless computing. The main differences between services and product forms lie in how they understand, and how far they satisfy, the two Serverless characteristics:

  • O&M-free (server O&M, capacity management, elastic scaling, etc.)
  • Pay by actual resource usage

On the O&M-free dimension, the baseline is removing the cost of server O&M, so developers can request resources on demand. For common O&M concerns such as capacity management, elastic scaling, traffic management, and logging, monitoring, and alerting, different services and products adopt approaches appropriate to their own positioning and target customers.

In terms of billing, cloud vendors determine the billing dimensions, such as resources or requests, according to their positioning on the one hand, and the billing granularity according to their current technical capabilities on the other.
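As a worked example of pay-by-usage, the short sketch below computes a monthly bill from request count and resource consumption in GB-seconds, a granularity many FaaS products bill at. The unit prices are made-up placeholders, not any vendor's actual price list.

```python
def monthly_cost(requests, avg_duration_s, memory_gb,
                 price_per_million_requests=0.20,   # hypothetical unit price
                 price_per_gb_second=0.0000167):    # hypothetical unit price
    """Estimate a pay-by-usage bill: request fee + compute fee (GB-seconds)."""
    gb_seconds = requests * avg_duration_s * memory_gb
    request_fee = requests / 1_000_000 * price_per_million_requests
    compute_fee = gb_seconds * price_per_gb_second
    return request_fee + compute_fee


# 3 million requests, 200 ms each, 512 MB of memory:
print(round(monthly_cost(3_000_000, 0.2, 0.5), 2))  # 5.61 with these placeholder prices
```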

From the above analysis it is clear that cloud vendors' Serverless service models are not static; they keep iterating along with product positioning, target customer characteristics, and technical capability, growing together with customers.

The Serverless service model has to meet real demand. Returning to Figure 4, cloud vendors' Serverless services can be divided into the following categories:

  • Resource instance platform
  • Scheduling platform
  • Application management platform
  • Business logic management platform

Taken together, these four categories delegate progressively more of the stack to the platform, from raw resource instances up to business logic.

Industry Serverless products

At present, cloud vendors in China and abroad all offer Serverless products along these dimensions. A brief summary follows:

Resource instance platform

Abroad, AWS Fargate and Azure ACI are influential; in China, Alibaba Cloud ECI and Huawei CCI are. They provide container-group services, where a container group is a concept similar to a Pod in Kubernetes. Users can create container groups directly through OpenAPI calls, without purchasing or configuring servers before deploying a service, and delegate resource-related capacity management and O&M. A resource instance platform can be treated as a resource pool of practically unlimited capacity from which fine-grained resources are requested at the container-group level, with application-level capacity managed through dynamic scaling out and in.
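As a sketch of what "creating a container group through an OpenAPI call" can look like, the example below uses the Alibaba Cloud Python SDK core (`aliyunsdkcore`) to call the ECI CreateContainerGroup operation. The endpoint, API version, and parameter names follow my reading of the ECI API and should be treated as indicative; the region, network IDs, and credentials are placeholders.

```python
import os

# Alibaba Cloud SDK core: pip install aliyun-python-sdk-core
from aliyunsdkcore.client import AcsClient
from aliyunsdkcore.request import CommonRequest

# Credentials and region are placeholders; load them from your own config.
client = AcsClient(os.environ["ALIYUN_ACCESS_KEY_ID"],
                   os.environ["ALIYUN_ACCESS_KEY_SECRET"],
                   "cn-hangzhou")

request = CommonRequest()
request.set_accept_format("json")
request.set_domain("eci.aliyuncs.com")   # check the endpoint for your region
request.set_method("POST")
request.set_version("2018-08-08")        # ECI API version; verify against the docs
request.set_action_name("CreateContainerGroup")

# A container group is roughly a Kubernetes Pod: one or more containers sharing
# network and storage. Parameter names are indicative; consult the official
# CreateContainerGroup reference for the authoritative list.
request.add_query_param("RegionId", "cn-hangzhou")
request.add_query_param("SecurityGroupId", "sg-xxxxxxxx")   # placeholder
request.add_query_param("VSwitchId", "vsw-xxxxxxxx")        # placeholder
request.add_query_param("ContainerGroupName", "demo-group")
request.add_query_param("Container.1.Name", "web")
request.add_query_param("Container.1.Image", "nginx:latest")
request.add_query_param("Cpu", "1.0")
request.add_query_param("Memory", "2.0")

response = client.do_action_with_exception(request)
print(response)  # JSON response containing the new container group's id on success
```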

In production, users typically do not use such resource management services directly; instead, they use application orchestration services that make this layer transparent, and focus on the orchestration dimension.

Scheduling platform

Kubernetes has become the de facto standard for container scheduling. AWS EKS abroad and Alibaba Cloud Serverless Kubernetes in China host the Kubernetes master components on the one hand, and on the other hand provide the Kubernetes node layer through resource management services, for example Virtual Kubelet + AWS Fargate or Virtual Kubelet + Alibaba Cloud ECI.

For users who want to use Kubernetes capabilities directly, operate it at low cost, and avoid maintaining a resource pool, this kind of product is a good match.
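From the user's side, a Serverless Kubernetes cluster is still driven through the standard Kubernetes API; the difference is that the Pod lands on a virtual node backed by an elastic container instance rather than on a pre-provisioned server. The sketch below uses the official Kubernetes Python client; the kubeconfig, namespace, and image are placeholders, and whether any extra scheduling hint is needed depends on the specific product.

```python
# Official Kubernetes Python client: pip install kubernetes
from kubernetes import client, config

# Assumes a kubeconfig for a Serverless Kubernetes cluster is already set up.
config.load_kube_config()
core_v1 = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-web", labels={"app": "demo"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="web",
                image="nginx:latest",
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "500m", "memory": "512Mi"},
                ),
            )
        ],
        restart_policy="Always",
    ),
)

# On a Serverless Kubernetes cluster the Pod is backed by an elastic container
# instance (e.g. ECI via Virtual Kubelet); no node pool needs to exist upfront.
core_v1.create_namespaced_pod(namespace="default", body=pod)
```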

Application management platform

Google App Engine and Cloud Run abroad, and Alibaba Cloud Serverless App Engine in China, go further and make O&M work itself serverless, covering release management (packaging, grayscale release, batching, rollback, version control, etc.), logging/monitoring/alerting, traffic management, and elastic scaling, so that users can focus further on business needs while keeping O&M costs low.

For users who want to migrate existing applications with zero transformation cost, a low learning curve, and minimal O&M work, such platforms match the demand well. However, since the industry has no standardized scheme for application-management O&M, different projects have their own requirements, so adopting such products calls for close communication: keep feeding requirements back to the platform and integrate the Serverless platform with your business through co-construction.

Business logic management platform

Building on application management, AWS Lambda, Azure Functions, and Google Cloud Functions abroad, and Alibaba Cloud Function Compute, Tencent Cloud Function, and Huawei's function workflow service in China, go further and make the general-purpose logic in the development process transparent: users only need to care about implementing the business logic. The process is analogous to writing unit tests during development, where inputs and outputs follow a common form and only the processing logic differs. This type of Serverless product is the most discussed form in the industry; it represents an abstraction of the ideal development process that can further speed up iteration and shorten the time from idea to launch. Such products are also more tightly integrated with other cloud platform products, implementing general-purpose logic such as storage and caching through cloud services in BaaS form, which implies a certain demand on the richness of the cloud platform's product portfolio.

For scenarios whose processing has few external dependencies, or partial-computing scenarios such as front-end logic and multimedia processing, the learning and usage cost of such Serverless products is relatively low and it is easy to get started. As services and components become more abstract, more business scenarios will be covered and users' O&M work will become even more transparent. At the same time, the development process directly benefits from industry best practices, and the stability, performance, and throughput of the service can be maximized with the help of platform capabilities.

Selection

To sum up, when selecting Serverless products, users need to sort out the current stage and pain points of the business's technology, determine their requirements for cloud solutions, and then choose the services and cloud products that suit the current stage from the product forms cloud vendors offer.

The key is to understand whether a cloud product's positioning can satisfy the business's requirements in the long term, for example:

  • Whether the current stage of the business's technology matches the cloud product's positioning
  • Whether rapid business iteration will be limited by the evolution of the cloud product itself
  • How stable the cloud product is
  • Whether the cloud product can keep bringing technology dividends to the business

At the same time, you also need to know whether the cloud product can grow alongside the business: of the business's technical requirements, which limitations come from the cloud product's positioning, and which come from its current technical implementation.

If the limitation comes from positioning, consider a cloud product whose positioning better matches the business's needs. If it comes from the current technical implementation, there is an opportunity to grow together with the product: give timely feedback so that it can better meet your business needs.

In addition, businesses should pay attention to the richness of a cloud vendor's own services: the richer the services and the larger the scale, the stronger the economies of scale, and the more technology dividends and cost advantages they bring.

Fortunately, cloud products are usually well documented and have user communities through which you can reach product managers and developers directly, feed back your needs, and evolve together in a collaborative way.

Summary

In essence, Serverless abstracts a problem domain: the non-business problems that slow business iteration during R&D, together with corresponding solutions. The concept did not appear out of nowhere; people have already applied its ideas to their daily work to some extent. With the wave of cloud computing, however, Serverless services and products on the cloud have become more systematic and more competitive, and with their scale advantages and rich product lines they can keep providing services that meet business needs within this problem domain.

The Serverless concept is flourishing not only in the centralized cloud but also, gradually, at the edge, allowing services to run more widely and to serve the business's own customers with lower latency and greater stability.

This article has tried to help readers understand the Serverless concept from the perspective of daily project and development practice, and to select suitable Serverless services and products for their current stage. The author works on the underlying R&D of Alibaba Cloud Serverless App Engine and has tried, from inside a cloud product, to convey the idea of co-construction between cloud products and users, so that value is delivered and created through collaboration.

References

  • Wikipedia: Serverless computing
  • Serverless Architectures (martinfowler.com)
  • Event-driven model
  • BaaS (Backend as a Service)
  • From DevOps to NoOps: how Serverless technology lands
  • Alibaba Cloud Elastic Container Instance (ECI)
  • Alibaba Cloud Container Service for Kubernetes (ACK)
  • Alibaba Cloud Serverless Kubernetes (ASK)
  • Alibaba Cloud Serverless App Engine (SAE)
  • Alibaba Cloud Function Compute
  • Alibaba Cloud Serverless Workflow
  • Cloud Programming Simplified: A Berkeley View on Serverless Computing

Course recommendation

To help more developers enjoy the dividends that Serverless brings, we have gathered 10+ Alibaba technical experts in the Serverless field to create a free Serverless course that developers can learn and apply immediately, easily embracing Serverless, the new paradigm of cloud computing.

Click for the free course: developer.aliyun.com/learning/ro…
