Author | Pu Songyang (Qin Yue)

From the author

Before starting this article, I want to establish a few points of consensus with developers.

The first consensus: there is no silver bullet in software engineering. Serverless is not a silver bullet, and it is not a cure-all for every problem.

The second consensus: Serverless solves problems in the operations domain. It is a domain-specific technology with clear boundaries, and it has nothing to do with low code.

The third consensus is Tesler's Law, the law of conservation of complexity. The classic example is Apple: its products are easy to use, but the overall complexity is conserved. The complexity is simply shifted onto the system and software developers so that the user can enjoy a smooth experience. Similarly, Serverless shifts the deployment and operation of applications and websites to the cloud service provider, but the overall complexity stays the same.

The fourth consensus is the Dunning-Kruger effect. In any process of learning, we follow a similar development curve: from initial ignorance, to the illusory peak of confidence in new knowledge, down into the trough of disappointment, and then a slow climb back up. We go through this curve whenever we learn something new, and Gartner uses a similar curve, its Hype Cycle, to explain the development cycle of new technologies.

Personal cognitive curve

Gartner technology development curve

As development engineers, we often have this feeling: new technologies keep emerging, and it is exhausting to keep up with them. When Serverless was first launched, people had boundless imagination about the technology. After that imagined peak, they gradually realized the gap between imagination and reality; once they personally used the products, they fell into the trough of disillusionment and then slowly began climbing back up the slope.

Understanding Serverless correctly

This article introduces Serverless in three parts:

The first part: "More complex for cloud vendors".

The second part: "Simpler for developers".

The third part: the best-practice scenarios where my team and I use Serverless.

More complex for cloud vendors

1) Serverless architecture

Serverless is an aggregation of technologies; its entire development history stands on the shoulders of giants. When today's cloud vendors run a function, the underlying architecture looks roughly like this: beneath Serverless there is a CaaS layer, a serverless container service in which most application services actually run. For container scheduling, the better open-source solution today is K8s (Kubernetes). Below the container layer sits the IaaS layer of virtual machines, and at the very bottom are the physical machines.

CaaS can be implemented in many ways, and a Serverless application must be backed by some CaaS service. Besides Docker, a VM can also serve as the CaaS layer; for example, Node.js VMs can act as CaaS, WebAssembly can act as CaaS, and so on. In addition, the overall architecture needs a component layer to handle east-west and north-south network traffic, with solutions such as Service Mesh and Ingress. Generally speaking, the architecture behind every Serverless offering is basically the same.

2) Cloud vendors: immutable infrastructure

The entire CNCF-style architecture can be migrated by configuration files alone: it can be deployed on Alibaba Cloud, Tencent Cloud, Amazon's cloud, or even a self-built private cloud. When all cloud services are built on immutable infrastructure, complexity sinks down to the K8s layer and the architecture becomes generic.

In addition, cloud service providers will gradually lose the traditional advantages they accumulated in the past (the operations advantages at the virtual-machine IaaS layer and the platform advantages at the PaaS layer). Once customers are no longer locked in to a vendor, a price war follows: vendors have to compete on who can offer better service at a lower price.

In the broad sense, Serverless means making a cloud vendor's entire operations system serverless. With a traditionally provisioned MySQL or Redis, for example, developers must be aware that it runs on a server and must be given an IP address; with Serverless (BaaS), developers do not need to care where the service runs. They just declare a DB, and the application connects to and consumes it automatically.
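As a concrete illustration, here is a minimal TypeScript sketch of that difference. Everything in it is illustrative rather than a real vendor SDK: the DeclaredDb interface and the context.db binding stand in for whatever the platform injects once a database has been declared.

```ts
// A minimal sketch of the BaaS idea (illustrative names, not a real SDK).

// Serverful style: the developer must know which server the database runs on,
// e.g. connecting to a hard-coded host such as 10.23.45.67:3306 and operating it.

// Serverless / BaaS style: the application only *declares* that it needs a DB;
// the platform hands the code a ready-to-use binding, so it never sees an IP.
interface DeclaredDb {
  query(sql: string, params?: unknown[]): Promise<unknown[]>;
}

// `context.db` stands in for whatever binding the vendor's runtime provides.
export async function getUser(context: { db: DeclaredDb }, id: string) {
  return context.db.query("SELECT * FROM users WHERE id = ?", [id]);
}
```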

Serverless in the narrow sense refers not only to Serverless Computing but also to a FaaS application, which is composed of a trigger (which can also be folded into BaaS) + FaaS + BaaS. At this stage, the core competitiveness of cloud vendors at the Serverless FaaS layer is to keep introducing new BaaS (Backend as a Service) capabilities, which are mainly used in combination with FaaS.
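A sketch of that trigger + FaaS + BaaS shape is shown below, using the generic (event, context) handler signature that mainstream FaaS platforms expose for Node.js. The event fields and the kvStore binding are simplified assumptions for illustration, not any specific vendor's API.

```ts
// Trigger + FaaS + BaaS in one picture (illustrative shapes, not a vendor API).

// 1. Trigger: the platform wraps an incoming HTTP request into an event object.
interface HttpTriggerEvent {
  path: string;
  method: string;
  queryParameters: Record<string, string>;
}

// 3. BaaS: a backend capability (here a key-value store) handed to the function
//    through the runtime context, already connected and scaled by the vendor.
interface FaasContext {
  requestId: string;
  kvStore: { get(key: string): Promise<string | null> };
}

// 2. FaaS: the function itself contains only business logic.
export async function handler(event: HttpTriggerEvent, context: FaasContext) {
  const id = event.queryParameters["id"] ?? "anonymous";
  const profile = await context.kvStore.get(`profile:${id}`);
  return {
    statusCode: profile ? 200 : 404,
    body: profile ?? JSON.stringify({ error: "not found", requestId: context.requestId }),
  };
}
```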

The immutable infrastructure of cloud vendors mentioned above, as shown in the figure below, lets developers deploy applications at the top layer without caring about the underlying infrastructure. The BaaS SDKs provided by cloud vendors are now actually bundled into the FaaS runtime. Developers only need to call them as function interfaces, regardless of where the database is deployed or whether long-lived connections have to be maintained.

Simpler for developers

This chart shows the state of emerging technologies as published by Gartner in 2017. At that time, Gartner considered Serverless a relatively new concept, and people's understanding of it was still climbing toward the peak of expectations. Today, however, Serverless has entered the stage of the gentle climb, and people clearly understand that Serverless solves problems in the operations domain, where its boundaries lie, and so on.

Why hasn't anything particularly new been introduced in recent years? The reason is that there are no new concepts at the Serverless layer; most of the work is on FaaS application infrastructure, that is, on making existing Web application scenarios serverless. Examples include the recently supported database BaaS and WebSocket support in FaaS. Many other Web application scenarios are being covered step by step through this kind of effort, bringing it closer to the ideal of Serverless.

2021 Gartner Technology Adoption Recommendation Chart

The boxed position in the figure is Serverless, and green represents maturity. As you can see, Serverless is a mature technology that supports most Web application scenarios, so developers can confidently and boldly try it.

1) Serverless in operations

Many people in China translate Serverless as "no server" or "no service", which is not quite accurate. The antonym of Serverless is Serverful, which means having to pay close attention to the server. The essence of Serverless is reducing the mental burden: you no longer need to care about the server itself. You focus only on the function you deploy and how it works, not on how many containers or how many servers are running underneath to support it.

The traditional front-end/back-end development model works like this: the back end provides data services (what used to be called SOA, service-oriented architecture, and is now more commonly domain-driven microservices), and the front end consumes and assembles the data. The traditional way for the back end to expose data is an HTTP API, later joined by the now-popular BFF (Backend For Frontend) glue layer for function orchestration. Providing full data through microservices is the prevailing practice in the industry, so the future trend is everything-as-BaaS: the ideal state is a model-driven, front-and-back-end integrated approach in which no interfaces need to be written by hand. A sketch of the BFF glue layer follows.
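For readers who have not written a BFF, here is a minimal sketch of the glue-layer idea in TypeScript with Express. The two backend URLs are placeholders; the point is that the BFF fans out to several services and returns exactly the shape the page needs.

```ts
// A minimal BFF glue layer: one endpoint that aggregates several backend
// services into the exact shape the page needs. Backend URLs are placeholders.
import express from "express";

const app = express();

app.get("/api/home", async (_req, res) => {
  // Fan out to backend microservices in parallel.
  const [user, orders] = await Promise.all([
    fetch("http://user-service.internal/me").then(r => r.json()),
    fetch("http://order-service.internal/recent").then(r => r.json()),
  ]);
  // Clean and assemble the data for the frontend in one round trip.
  res.json({ nickname: user.nickname, recentOrders: orders.slice(0, 5) });
});

// As a traditional BFF this is a long-running Node.js application that someone
// still has to deploy, scale and monitor.
app.listen(3000);
```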

2) Technical change through combining Serverless with other fields

Serverless + = …

The advantage of Serverless at this stage is that it can be combined with technologies from other fields, and these combinations trigger many technical changes. For example, traditional microservices + Serverless gives BaaS-style microservices: previously, to provide a microservice, developers had to care about where it was deployed, but with Serverless they only need to care about how to call it. LowCode + Serverless enables quick deployment and launch of web pages. Interface and function orchestration, such as the traditional BFF, can also be made serverless and become Serverless For Frontend (SFF), which fits front-and-back-end integration very well.

3) Change of development roles: integration of front and back ends

After the emergence of Serverless, front-and-back-end integration will also appear. There are already visual tools for logic orchestration, such as Uncle Wolf's iMove, which can orchestrate back-end interfaces visually, so it becomes very simple for a front-end engineer to do back-end interface orchestration. As a result, the responsibilities of front-end engineers can extend toward the back end in the future.

Back-end engineers will move from traditional application deployment to development at the BaaS service level, and operations engineers will move onto the cloud. This is the series of changes that Serverless brings to the R&D and production chain.

Best-practice scenarios for Serverless

The easiest way to judge which scenarios Serverless currently suits is to look at which Trigger events the cloud vendors support.

So at this stage, cloud vendors are constantly adding new triggers. As shown in the figure, when a developer writes a FaaS function, the platform wraps the HTTP request into a Trigger. Think of the FaaS function as sealed inside a closed box: how do you wake it up, and how do you open it? That is what the Trigger does.
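One way to picture "the trigger wakes the function up" is as a union of event shapes that all end up invoking the same handler. The shapes below are simplified assumptions; real platforms each define their own, much richer, event formats.

```ts
// Simplified trigger event shapes (real platforms define richer formats).
type TriggerEvent =
  | { type: "http"; path: string; body?: string }       // API gateway / HTTP trigger
  | { type: "timer"; cron: string; firedAt: string }    // scheduled trigger
  | { type: "oss"; bucket: string; objectKey: string }; // object-storage event trigger

// The function stays "asleep" (no process running, no cost) until one of these
// events arrives; the platform then spins up an instance and calls the handler.
export async function handler(event: TriggerEvent): Promise<string> {
  switch (event.type) {
    case "http":
      return `handled request for ${event.path}`;
    case "timer":
      return `scheduled run at ${event.firedAt}`;
    case "oss":
      return `processing ${event.objectKey} from ${event.bucket}`;
    default:
      // Unknown trigger types should fail loudly.
      throw new Error(`unsupported trigger: ${JSON.stringify(event)}`);
  }
}
```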

In addition, at the present stage of Serverless the choice of development language is not that important; the language is just a tool for implementing the function. Since the CNCF ecosystem emerged, FaaS has become language-agnostic, so Node.js, PHP, Python and other mainstream languages can all be used to write FaaS code, and you can even build a custom image with your own language and execution environment. With Serverless, therefore, we can borrow the strengths of multiple languages, such as using Python for AI data processing and Node.js for highly concurrent network I/O.

1) SFF data orchestration

The best practice is BFF + Serverless, which is very common within Alibaba Group. Since the back end in most scenarios inside Alibaba is written by Java engineers, the front-end team needs to communicate with them, and the HSF microservices the back-end engineers provide can be understood as a pile of RPC interfaces. In the past, we deployed a Node.js application to call these interfaces, clean and process the data, and hand it to the front-end page for rendering. After deploying the BFF Node.js application with Serverless, however, there is basically no need to think about scaling capacity up and down with traffic, cost savings and similar issues any more.
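As a contrast with the long-running BFF application sketched earlier, here is roughly what the same data orchestration looks like once it moves into a function. The handler signature and the rpc helper are simplified assumptions for illustration, not the real HSF client or any vendor's runtime API.

```ts
// The same orchestration as a FaaS function: no app server, no capacity planning.
// `rpc` is a simplified stand-in for calling backend microservices (e.g. HSF);
// the gateway URL and event/response shapes are illustrative only.
async function rpc<T>(service: string, method: string, args: unknown[]): Promise<T> {
  const res = await fetch(`http://gateway.internal/${service}/${method}`, {
    method: "POST",
    body: JSON.stringify(args),
  });
  return res.json() as Promise<T>;
}

export async function handler(event: { queryParameters: { userId: string } }) {
  const userId = event.queryParameters.userId;
  const [user, orders] = await Promise.all([
    rpc<{ nickname: string }>("UserService", "getUser", [userId]),
    rpc<Array<{ id: string; title: string }>>("OrderService", "listRecent", [userId]),
  ]);
  // Clean and assemble for the page; scaling up and down is the platform's job.
  return { statusCode: 200, body: JSON.stringify({ nickname: user.nickname, orders }) };
}
```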

2) GitOps model

GitOps is a scenario that suits small enterprises very well. It amounts to building a pipeline for automated release and launch, so it is no longer necessary to manually cut a version and test it each time, as before. Git itself supports a large number of hooks, so building such a process is easy. What does need attention is the capability of the cloud vendor: Alibaba Cloud's release process, for example, is highly automated, and after a release it can even support recording and replaying online traffic.
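A minimal sketch of the hook idea, assuming a Node.js based script: a Git server-side hook (or a CI job triggered by a push) runs the tests and then calls whatever deploy command the chosen platform provides. The deploy-cli command below is a placeholder, not a real CLI.

```ts
// post-receive.ts -- a minimal GitOps-style hook: every push to main triggers
// install + test + deploy. Run it from a Git server-side hook or a CI job.
// "deploy-cli publish" is a placeholder for the platform's own deploy command.
import { execSync } from "node:child_process";

function run(cmd: string): void {
  console.log(`$ ${cmd}`);
  execSync(cmd, { stdio: "inherit" });
}

const branch = process.env.GIT_BRANCH ?? "main"; // provided by the hook/CI environment

if (branch === "main") {
  run("npm ci");
  run("npm test");                              // gate the release on the test suite
  run("deploy-cli publish --env production");   // placeholder deploy step
} else {
  console.log(`branch ${branch}: build only, no release`);
  run("npm ci && npm run build");
}
```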

3) Small but beautiful technical team

The last point is about building a small but beautiful team. In my opinion, one of the strongest constraints on technical architecture is organizational structure: our organizational structure determines our technical architecture.

For example, the separation of front end and back end exists mostly because the organizational structure defines it: the front end has its leader and the back end has its leader, so the front end develops the front end, the back end develops the back end, and the API-based communication in the middle has to be coordinated between them. So how do we break this barrier if we want to build a small but beautiful team?

A scenario where Serverless fits well is front-end service orchestration (SFF), which solves the problem of the API communication in the middle while the back end provides full services. This kind of change will push the back end toward microservices, and even push back-end development to use Serverless for BaaS, a reverse-driving process. If the front-end team masters Serverless, there are three advantages: front-end data orchestration no longer has to wait on back-end engineers; GitOps takes care of deployment and operations and reduces the front end's mental load; and front-end engineers can concentrate on abstracting the business model.

Lecturer Profile:

Pu Songyang, known as Qin Yue. Author of Geek Time's "Introduction to Serverless". Serverless and Node.js evangelist. Currently responsible for the standardization group of the Alibaba Front-end Committee, middle- and back-office construction in the low-code group, and Node.js application microservice architecture. Rich experience in microservices, Serverless and middle-platform projects.

Click here to watch the replay of Serverless Meetup Shenzhen!