Preface

This article mainly discusses the development history of Serverless and some of its landing scenarios in the front end, drawing on the Berkeley paper Cloud Programming Simplified: A Berkeley View on Serverless Computing. If you want to understand Serverless in depth, I also suggest reading the paper directly; I believe you will get a lot out of it.

1. What is Serverless

To discuss the concrete use scenarios of a technology, we first need to define it and determine the scope of the discussion. Determining that scope comes down to understanding the context in which Serverless emerged and the problems it solves.

1.1 Background and history of Serverless

Now that cloud computing has become mainstream, Serverless is a term that appears with high frequency in major technical forums and talks. The front end is no exception: it is mentioned, more or less, in almost every front-end conference talk, usually together with some discussion of how it intersects with front-end work or what combined landing scenarios exist.

But going back to the origin: what problems was Serverless ultimately created to solve? To answer this question, we have to go back to the evolution of cloud computing. In the early days of cloud computing, the two dominant cloud services on the market were Amazon’s EC2 and Google’s App Engine (GAE). They represent two different ideas:

  • EC2 chose to provide low-level infrastructure: its instances work very much like a physical server, without any extra functionality, and you can run any type of service in any language on them.
  • GAE chose to offer high-level abstractions, including impressive auto-scaling, while limiting the code that users could run: applications had to use the storage and computing services provided by Google and comply with its specifications.

The market chose AWS EC2, because developers prefer to run their services in an environment identical to their local development environment, so that the code they develop can be deployed to cloud instances with few changes. However, while this model gives developers great freedom, it also means that almost all operations work falls on the developers themselves. **As a result, most cloud users have to bear complex operations costs and low hardware utilization while using cloud services.** Serverless was created to solve these problems.

To trace the birth of the term Serverless, we need to go back to 2012, when the term first came into view. Ken Fromm put forward the term in his article Why The Future Of Software And Apps Is Serverless, which began to bring Serverless to our attention. Here is a quote from that article which captures this early definition of Serverless:

Thinking Serverless

The phrase “serverless” doesn’t mean servers are no longer involved. It simply means that developers no longer have to think that much about them. Computing resources get used as services without having to manage around physical capacities or limits.

At this stage, Serverless was still mostly an abstract discussion of hiding the underlying infrastructure and operations, and of how to solve the problem of complex operations costs.

What really made Serverless famous was AWS Lambda, released by Amazon in 2015, which put forward the concept of the **cloud function** and lifted Serverless to a new height. It not only abstracts away the underlying infrastructure and provides cloud developers with managed operations capabilities, but also offers rapid scaling and pay-per-invocation billing, improving resource utilization and reducing costs for users. It was also the first true FaaS platform as we know it today, and from that year on Serverless began to become a hot term worldwide, appearing at various cloud computing conferences.

In 2017, domestic (Chinese) PaaS and IaaS vendors also launched their own function compute platforms, joining in the promotion and construction of Serverless.

1.2 Technical composition of Serverless

With the background of Serverless clear, note that its definition in today’s community is still relatively vague; nevertheless, authoritative materials are broadly consistent. For example, the de facto Serverless white paper, Cloud Programming Simplified: A Berkeley View on Serverless Computing, defines Serverless as:

Put simply, serverless computing = FaaS + BaaS. In our definition, for a service to be considered serverless, it must scale automatically with no need for explicit provisioning, and be billed based on usage.

Extracting the keywords: Serverless = FaaS + BaaS, and it must support automatic scaling (up and down) and pay-per-use billing. The well-known article Serverless Architectures likewise regards Serverless as a combination of FaaS and BaaS. Therefore, Serverless is defined as Serverless = FaaS + BaaS (although I personally prefer to see Serverless as an architectural pattern that lowers the development barrier and improves development efficiency):

Functions as a Service (FaaS) is a form of serverless computing; the most widely used platform at present is AWS Lambda. FaaS is essentially an event-driven, message-triggered service. FaaS vendors typically integrate a variety of synchronous and asynchronous event sources, and functions subscribed to these event sources are triggered on demand or on a schedule.

A function here is a finer-grained unit of a program than a microservice. For example, we might split microservices around a set of CRUD operations for a particular resource; under FaaS, each user action, such as "create", corresponds to its own function on the function compute platform, and the action is executed by firing that function through a trigger. (A figure illustrating the characteristics of function compute appeared here; image from Developer.aliyun.com/article/574…)

BaaS (Backend as a Service) refers to API-based third-party services that implement the core back-end functions of an application, including common databases, object storage, message queues, and log services.
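To make the idea of "one function per action" concrete, here is a minimal TypeScript sketch of such a "create" function. The event shape and the saveTodo helper are hypothetical, and the handler convention (event in, HTTP-style response out) simply follows the common Node.js cloud-function style rather than any particular vendor's exact API.

```typescript
// Minimal sketch of a FaaS-style "create" function: an HTTP trigger delivers
// the request as an event, the function performs one action (create a record)
// and returns a response. Other CRUD actions would each live in their own
// function on the function compute platform.

interface CreateTodoEvent {
  body: string; // raw request body as delivered by a hypothetical HTTP trigger
}

interface Todo {
  id: string;
  title: string;
}

// Hypothetical persistence helper standing in for a BaaS database client.
async function saveTodo(title: string): Promise<Todo> {
  return { id: Math.random().toString(36).slice(2), title };
}

export async function handler(event: CreateTodoEvent) {
  const { title } = JSON.parse(event.body) as { title: string };
  const created = await saveTodo(title);

  // HTTP-style response consumed by the trigger / API gateway.
  return { statusCode: 201, body: JSON.stringify(created) };
}
```

Each such function is deployed, scaled, and billed independently, and the platform only runs it when its trigger fires.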

The following table lists the differences between Serverful (i.e., traditional cloud computing) and Serverless:

| Characteristic | AWS Serverless cloud computing | AWS Serverful cloud computing |
| --- | --- | --- |
| When the program runs | On events selected by the cloud user | Continuously, unless explicitly stopped |
| Programming language | A limited set of languages such as JavaScript, Python, Java, Go | Any language |
| Program state | Kept in storage (stateless) | Anywhere (stateful or stateless) |
| Maximum memory size | 0.125–3 GiB (user selects) | 0.5–1952 GiB (user selects) |
| Maximum local storage | 0.5 GiB | 0–3600 GiB (user selects) |
| Maximum run time | 900 seconds | No limit |
| Minimum billing unit | 0.1 seconds | 60 seconds |
| Price per billing unit | $0.0000002 | $0.0000867–$0.4080000 |
| Operating system and libraries | Cloud provider selects | User selects |
| Server instance | Cloud provider selects | User selects |
| Scaling | Cloud provider responsible | User responsible |
| Deployment | Cloud provider responsible | User responsible |
| Fault tolerance | Cloud provider responsible | User responsible |
| Monitoring | Cloud provider responsible | User responsible |
| Logging | Cloud provider responsible | User responsible |

From: www2.eecs.berkeley.edu/Pubs/TechRp…

In general, compared with Serverful, Serverless brings major changes in the following three respects:

  1. Weakened coupling between storage and computing. Storage and computing are deployed, scaled, and billed separately: storage becomes an independent service, while computing becomes stateless, which makes scheduling and scaling easier (see the sketch after this list).
  2. Executing code no longer requires manually allocating resources; you only provide the code, and the Serverless platform takes care of scheduling and resource allocation.
  3. Pay per use. Serverless bills based on actual usage of the service, rather than on the resources reserved (ECS instances, VM specifications, and so on) as traditional Serverful services do.
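As a small illustration of points 1 and 2, here is a hypothetical stateless page-view counter: the function keeps no local state and instead reads and writes an external key-value store. The KeyValueStore interface is an assumption standing in for whatever BaaS storage client a real provider offers.

```typescript
// Stateless compute + external storage: any instance of this function can
// serve any request, because the only state (the counter) lives in a separate
// storage service. That separation is what lets the platform schedule, scale,
// and bill compute and storage independently.

// Hypothetical abstraction over a BaaS key-value store (for example, a
// managed Redis or document database client in a real deployment).
interface KeyValueStore {
  get(key: string): Promise<number | undefined>;
  put(key: string, value: number): Promise<void>;
}

export async function countView(
  event: { pageId: string },
  store: KeyValueStore,
): Promise<{ views: number }> {
  // Read current state from the external store, never from local memory or disk.
  const current = (await store.get(`views:${event.pageId}`)) ?? 0;
  const next = current + 1;

  // Persist it back; this function instance remembers nothing between calls.
  await store.put(`views:${event.pageId}`, next);
  return { views: next };
}
```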

2. Landing scenarios combining Serverless and the front end

From a practical point of view, for front-end engineers Serverless lets us focus more on business development; common server-side concerns can be handed over to Serverless, for example:

  • With Serverless there is less need to worry about memory leaks, because cloud function instances are destroyed after use
  • Serverless does not require us to build the server environment, estimate traffic peaks, or worry about resource utilization and disaster recovery, because it scales rapidly with traffic and charges based on actual usage
  • Serverless comes with complete supporting services, such as cloud databases, cloud message queues, and cloud storage; making full use of them can greatly expand our capability boundary and let us do things we previously had neither the time nor the ability to do

Berkeley’s paper cites a 2018 survey of the distribution of specific Serverless usage scenarios:

2.1 Mini program cloud development

According to that survey, the largest share of Serverless usage scenarios is Web and API services, and a typical development scenario of this kind is mini program cloud development.

In the traditional mini program development process, a front-end engineer develops the mini program side and a back-end engineer develops the server side. In a smaller team, the front-end engineer may also have to take on the server-side work. But the back end of a mini program is essentially the same as any other back-end application: it has to deal with load balancing, disaster recovery, monitoring, and other operations concerns. Most of this knowledge falls into a front-end engineer’s blind spots, so it often takes a lot of time to learn, and the finished result is often unsatisfactory.

In the Serverless-based mini program cloud development model, however, developers only need to care about implementing business requirements. A single front-end engineer, using the BaaS back-end capabilities encapsulated by the cloud development platform, can complete the development of the whole application without a full body of operations knowledge. The following are several basic capabilities provided by WeChat mini program cloud development:

| Capability | Role | Description |
| --- | --- | --- |
| Cloud function | No need to build your own server | Code runs in the cloud, with natural authentication via WeChat’s private protocol; developers only need to write their business logic |
| Database | No need to build your own database | A JSON database that can be operated from the mini program front end and read/written in cloud functions |
| Storage | No need to build your own storage and CDN | Upload/download cloud files directly from the mini program front end and manage them visually in the cloud development console |
| Cloud call | Native WeChat service integration | Use mini program open APIs on the basis of cloud function authentication, including server-side invocation and access to open data |
| WeChat Pay | Authentication-free native use of WeChat Pay | Use WeChat Pay capabilities without signature computation or access_token |

(Details can be found in the WeChat official documentation: Mini Program – Cloud Development.)
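As an illustration of the cloud function and database capabilities in the table above, here is a small sketch of a cloud function that writes a record into the cloud database. It is based on the wx-server-sdk API as I understand it; the collection name and event fields are made up for illustration, so consult the official documentation for authoritative usage.

```typescript
// Sketch of a WeChat mini program cloud function (wx-server-sdk usage as I
// recall it; verify against the official docs). The platform authenticates
// the caller through WeChat's private protocol, so no auth code is needed here.
import * as cloud from 'wx-server-sdk';

cloud.init();

const db = cloud.database();

// Entry point: the mini program front end invokes this function by name.
export const main = async (event: { title: string; location: string }) => {
  // OPENID identifies the calling user and is injected by the platform.
  const { OPENID } = cloud.getWXContext();

  // Write to the JSON cloud database; 'food_spots' is a hypothetical collection.
  const res = await db.collection('food_spots').add({
    data: {
      title: event.title,
      location: event.location,
      createdBy: OPENID,
      createdAt: db.serverDate(),
    },
  });

  return { id: res._id };
};
```

On the mini program side, the front end would invoke it with something like wx.cloud.callFunction({ name: 'addFoodSpot', data: { title, location } }), where the function name is again hypothetical.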

Specific practice case: Miniprogram – FoodMap

2.2 Data orchestration: from BFF to SFF

BFF (Backends For Frontends) is no longer unfamiliar to most front-end engineers. The idea is that different devices may need different back-end APIs and may have different requirements on data format and data volume, so the job of the BFF layer is usually to orchestrate back-end data and interfaces, adapt them to the data format the front end needs, and serve that to the front end.

However, while this pattern solves the problem of interface coordination, it also introduces some new problems:

  • If a separate BFF application has to be developed for each kind of device, there is inevitably some duplicated development cost.
  • The BFF layer is usually handled by Node applications, which are good at high network I/O, but traditional server operations for Node applications are still heavy: we have to purchase virtual machines or host them on a PaaS platform, and the high-availability demands of microservices lead to wasted server resources.
  • The front end used not to worry about concurrency at all, only about page rendering; with the addition of BFF, the pressure of high concurrency is now concentrated on the BFF layer.

Serverless can help us solve these problems. Because BFF does stateless data orchestration, it is a natural fit for the on-demand, elastically scaling, use-and-destroy model of FaaS. We can use one function to aggregate or trim each group of interfaces: the front end sends a request that hits an HTTP trigger of FaaS, the triggered function calls the relevant back-end services to fetch data for the specific business logic, aggregates and trims that data, and finally returns it to the front end.

The advantages of this approach are twofold. On one hand, it saves resources and reduces cost, since there is no longer the overhead of maintaining virtual machines for a Node service; on the other hand, the operations burden moves from the BFF to the FaaS service, so the front end no longer needs to worry about maintaining BFF or handling high-concurrency scenarios. In addition, we can leverage other capabilities that cloud providers offer on the FaaS platform to do service orchestration and further enhance our capabilities at this SFF (Serverless For Frontend) layer.
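To make this concrete, here is a minimal sketch of such an aggregation function sitting behind an HTTP trigger. The upstream URLs, field names, and event shape are assumptions for illustration; the point is simply that one stateless function fetches from several services, trims the result to what the page needs, and returns it.

```typescript
// Minimal SFF sketch: one function aggregates data from two upstream services
// and returns exactly the shape the page needs. Scaling and operations are
// handled by the FaaS platform, not by the front-end team.
// Assumes a runtime with a global fetch (e.g. Node 18+).

const USER_API = 'https://api.example.com/users';   // hypothetical upstream service
const ORDER_API = 'https://api.example.com/orders'; // hypothetical upstream service

interface User { id: string; name: string; avatarUrl: string }
interface Order { id: string; total: number; status: string }

export async function handler(event: { query: { userId: string } }) {
  const { userId } = event.query;

  // Fetch from the upstream services in parallel.
  const [userRes, ordersRes] = await Promise.all([
    fetch(`${USER_API}/${encodeURIComponent(userId)}`),
    fetch(`${ORDER_API}?userId=${encodeURIComponent(userId)}&limit=5`),
  ]);
  const user = (await userRes.json()) as User;
  const orders = (await ordersRes.json()) as Order[];

  // Trim and reshape: keep only the fields the page actually renders.
  return {
    statusCode: 200,
    body: JSON.stringify({
      name: user.name,
      avatar: user.avatarUrl,
      recentOrders: orders.map((o) => ({ id: o.id, total: o.total, status: o.status })),
    }),
  };
}
```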

3. The future and prospects of Serverless

3.1 Staying rational about the Serverless hype

Although the concept of Serverless has been hyped by interested parties in the market, as if the traditional development model could be revolutionized overnight, as ordinary developers we need to stay relatively rational so that we can understand the technology more objectively.

First, has Serverless really become a household name, a technology everyone should know about and embrace? Ranking the search popularity of the three terms on Google Trends shows that GraphQL and BFF, both strongly associated with the front end, are far more popular than Serverless.

Second, Serverless is a technology that came out of the United States, and AWS Lambda is ahead of domestic platforms in technical maturity and service level; even so, breaking down Google Trends search popularity by region is interesting: China is far ahead of other countries with a score of 100, while Singapore is in second place with 17.

So, at least according to this data, the attention paid to Serverless in China is much higher than abroad (of course, this is also related to actual project development needs in China, where Serverless naturally has certain fitting scenarios).

3.2 Limitations of current Serverless

After several years of development, why has Serverless not yet really taken off? First, there are problems in the technology itself: Berkeley’s paper lists four shortcomings of current Serverless platforms, which it regards as factors hindering its rapid development.

Precisely because of the limitations above, complex enterprise business systems cannot yet be built on such simple FaaS offerings; only a few relatively simple business scenarios have become real landing scenarios. For Serverless to become a mainstream idea and architecture and to develop rapidly, it must be applied in the main business processes of enterprise systems; only then can the enormous value and benefits it brings to an enterprise be demonstrated. Therefore, only by combining the Serverless mindset with the core business scenarios of enterprises, showing its real value, delivering cost reduction and efficiency gains, and accumulating powerful Serverless development frameworks and best practices, can Serverless be pushed onto the throne of mainstream architecture in the cloud era.

3.3 Development prospects of Serverless

Here I directly quote Berkeley’s predictions about the development and trends of serverless computing over the next ten years. The one that stands out most to me personally is the last one:

Serverless computing will become the default computing paradigm of the Cloud Era, largely replacing serverful computing and thereby bringing closure to the Client-Server Era.

Simply put, they believe serverless computing will become the default paradigm of the cloud era, largely replacing traditional serverful computing and bringing the client-server era to a close. I personally agree with this outlook, because from a macro point of view, the development of technology is a continuous process of lowering the barrier to entry, abstracting away underlying details, and improving development efficiency. As for the core idea of the Serverless architecture, in the words of the CTO of AWS, what Serverless as an architectural pattern should achieve is:

“Everyone wants just to focus on business logic.”

This is exactly in line with the business demand for reducing cost and increasing efficiency. From this perspective, the implementation and promotion of the Serverless architecture is well worth looking forward to.

Reference links:

  • Cloud Programming Simplified: A Berkeley View on Serverless Computing
  • Serverless For Frontend
  • From IaaS to FaaS — The Past and present of the Serverless architecture
  • You should know this when we talk about Serverless
  • Exploring the front-end development pattern in Serverless (Multiple scenarios)
  • Serverless Architectures