Author: Bha Dosa | Alibaba Cloud senior technical expert

How did Serverless come about? Which scenarios has it landed in today? What does the future hold for Serverless? This article shares the views of Bha Dosa, a senior technical expert at Alibaba Cloud, on Serverless, reviews its development history, and looks ahead at its future trends.

Origin

Looking back at the history of computer technology, we find the theme of "abstraction, decoupling, and integration" running through it. Each round of abstraction, decoupling, and integration has pushed innovation to a new height and given birth to huge markets and new business models.

In the mainframe era, hardware and software were custom-built, with proprietary hardware, operating systems, and applications.

In the PC era, hardware was abstracted and decoupled into standardized components such as CPUs, memory, hard disks, motherboards, and USB devices; components produced by different manufacturers could be freely combined into a complete machine. Software was abstracted and decoupled into reusable components such as operating systems and libraries. This abstraction and decoupling of hardware and software created new business models, unleashed productivity, and led to the PC boom.

In the cloud era, the softwarization of hardware and the servitization of software have become the two most significant trends.

The core of hardware softwarization is that more and more hardware functions are implemented in software, bringing significant advantages in iteration efficiency and cost. Take Software-Defined Storage (SDS) as an example. SDS is a software layer between physical storage and data requests that allows users to control how and where data is stored. By decoupling hardware from software, SDS can run on industry-standard x86 servers, meaning users can meet ever-growing storage needs with any commodity server. Decoupling hardware from software also allows SDS to scale horizontally, eliminating much of the complexity of capacity planning and cost management.

The other trend of the cloud era is the servitization of software: application functionality is consumed by large numbers of users over the network through remote invocation. Services have become the basis for application construction, APIs are delivered to developers as services, and the microservices architecture has gained widespread success. Services have also become the basic form of cloud products. The cloud has proved this model's success over the past decade: instead of building their own data centers, users simply call an API to obtain servers. Computing power has never been delivered to users more simply.

Remember Google's famous paper "The Datacenter as a Computer"? If we think of the cloud as the computer of the DT era, a natural question follows: as cloud APIs (fully managed services) become richer, what is the appropriate programming model for the cloud? How should we "abstract, decouple, and integrate" to build cloud-based applications?

Before answering these questions, let's first look at the SaaS space. Salesforce is a star of SaaS and provides a great example of how to build capabilities into a platform. Its early SaaS offerings used a standardized delivery model and integrated capabilities through open APIs. As Salesforce's offerings grew richer and its customer base expanded, the business began to face new challenges:

  • How to launch new products faster and strengthen the integration and synergy between products?

  • Customers are growing rapidly and their needs are diverse. How to effectively meet customers' customization needs and increase customer engagement?

  • How to improve the product's integration capabilities and better connect upstream and downstream resources?

  • When product capabilities and API completeness reach a certain level, how can developers quickly integrate the APIs and easily build applications around Salesforce's capabilities?

  • How to design a good business model so that customers, the enterprise, and developers all win?

Salesforce's strategy has been to platformize its entire business, technology, and organization. The platform amplifies the value of the enterprise and benefits the company, its customers, and developers alike. Internally, continuously improving the platform's application delivery capabilities has greatly increased product R&D efficiency and strengthened integration and synergy between products; externally, it has greatly improved the products' integration capabilities and established a developer ecosystem.

Since 2006, Salesforce has invested heavily in platform capabilities, launching languages and frameworks such as Apex and Visualforce that allow customers, partners, and developers to write and run custom logic in a multi-tenant environment. The Force.com PaaS platform followed in 2008, allowing customers to build their own applications around Salesforce's capabilities. In 2010 it acquired Heroku, a popular PaaS provider, and in 2019 it launched Evergreen, a Serverless computing platform, to further strengthen application construction and integration capabilities. Beyond application construction, Salesforce has in recent years also invested heavily in mobile, digital, and intelligent applications, extending the platform's capabilities into related fields, helping customers make their management processes digital and intelligent, and bringing them incremental business through data analysis and transaction matching.

Summarizing Salesforce's history, we can draw a few conclusions:

  • APIs have become the most important form of value delivery;

  • When API richness and capability completeness reach a certain level, products or organizations that use APIs as their form of value delivery will evolve into platforms, breaking through capability bottlenecks and reaching a new stage of business, product, and technology evolution;

  • The capability of a platform is reflected in its programming model, that is, whether it can help users build a new generation of applications efficiently and at low cost;

  • Beyond greatly enhancing the enterprise's ability to deliver value, the more important role of the platform is to establish an application development ecosystem.

While the cloud is far more complex than the SaaS example above, it follows a similar evolutionary logic. Almost all product functions of cloud services are exposed through APIs, and cloud providers regard developing the platform's programming model, improving users' value delivery capability, and establishing an application development ecosystem as their most important goals. When we look at the cloud's product system from the perspective of programming models, the positioning of the many and varied cloud services gradually becomes clear.

Infrastructure as a Service (IaaS) and container technologies are the infrastructure of the cloud. Container orchestration services, represented by K8s, are the operating system of cloud-native applications, and domain-specific Backend-as-a-Service (BaaS) offerings are the APIs of the cloud. To achieve higher productivity, a large number of BaaS services in storage, databases, middleware, big data, AI, and other domains have become fully managed and Serverless, a trend that has been going on for many years; for example, customers are used to Serverless object storage rather than building their own data storage systems on servers. As the cloud provides richer Serverless BaaS services, a new general-purpose computing service is needed that can hide the complexity of infrastructure and quickly build applications on top of cloud services. Hence Serverless computing, which has the following characteristics:

  • Serverless computing is a fully managed computing service: customers write code to build applications without managing or operating servers and other underlying infrastructure;

  • Serverless computing is general-purpose and pervasive: combined with the capabilities of cloud APIs (BaaS services), it supports all the important application types on the cloud;

  • Serverless computing not only implements the purest pay-as-you-go model (paying only for the resources consumed while the code actually runs), but should also support metering models such as prepaid capacity, making customer costs highly competitive with traditional approaches across a wide range of scenarios;

  • Unlike resource-oriented computing platforms such as virtual machines or containers, Serverless computing is application-oriented: it must integrate and connect the cloud's product system and ecosystem, helping users achieve disruptive innovation in the way value is delivered.

Status: In what scenarios is Serverless implemented?

As users' understanding of Serverless has matured and product capabilities have improved, Serverless adoption has accelerated in recent years. In many scenarios, Serverless architectures deliver significant gains in reliability, cost, and R&D and operations efficiency.

1. Mini program / Web / Mobile / API back-end services

In mini program, Web/Mobile application, API service, and similar scenarios, business logic is complex and changes frequently, and fast iteration and release are required. Moreover, resource utilization of such online applications is usually below 30%, and for long-tail applications such as mini programs it is often below 10%. The maintenance-free, pay-as-you-go nature of Serverless computing makes it a very good fit for building mini program / Web / Mobile / API back ends: with reserved compute resources plus real-time auto scaling, developers can quickly build online applications with stable low latency that withstand high-frequency access. Inside Alibaba, back-end services are the most common Serverless scenario, including Serverless for Frontends in full-stack front-end development, machine-learning algorithm services, mini program platform implementations, and more.
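To make the pattern concrete, here is a minimal sketch of such a back end written as a single function, assuming a FaaS-style `handler(event, context)` entry point behind an API-gateway trigger; the event fields, route, and response shape are illustrative rather than any specific platform's exact schema.

```python
# -*- coding: utf-8 -*-
# Minimal sketch of a Serverless API back end: one function handles requests
# routed to it by an API gateway trigger. The event format below
# (httpMethod/path fields) is illustrative; real gateways define their own schema.
import json


def handler(event, context):
    """Entry point invoked by the FaaS platform for each request."""
    request = json.loads(event)
    method = request.get("httpMethod", "GET")
    path = request.get("path", "/")

    if method == "GET" and path == "/hello":
        status, body = 200, {"message": "hello from a serverless function"}
    else:
        status, body = 404, {"error": "not found"}

    # Return a gateway-style response: status code, headers, and a JSON body.
    return {
        "statusCode": status,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }


if __name__ == "__main__":
    # Local smoke test with a fake event; on the platform the trigger supplies it.
    print(handler(json.dumps({"httpMethod": "GET", "path": "/hello"}), None))
```

Everything outside the handler, such as capacity, scaling, and load balancing, is left to the platform, which is exactly what keeps such back ends economical for low-utilization, long-tail workloads.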

2. Large-scale batch task processing

A typical offline batch processing system, such as a large-scale audio/video transcoding service, includes a series of capabilities such as compute resource management, task priority scheduling, task orchestration, reliable task execution, and task data visualization. If users build it from the machine or container level up, they usually have to persist task information and distribute work with message queues, rely on a container orchestration system such as K8s for resource scaling and fault tolerance, and build or integrate monitoring and alerting themselves. If a task involves multiple steps, a workflow service must also be integrated to execute the steps reliably. With a Serverless computing platform, however, users only need to focus on the task processing logic, and the extreme elasticity of Serverless computing readily meets the compute demands of bursty workloads.
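As a sketch of how thin the user code becomes, the hypothetical handler below processes one transcoding task per invocation; the message fields and the ffmpeg call are placeholders, and in a real system the source and target would typically be object-storage locations downloaded and uploaded inside the function.

```python
# -*- coding: utf-8 -*-
# Sketch of a batch task worker: each invocation processes one task message
# (e.g. one video to transcode). The platform fans out invocations and scales
# them automatically, so there is no queue polling, worker pool, or scaling
# machinery in user code. The message fields and the transcode step are
# placeholders for illustration.
import json
import subprocess


def transcode(src_path: str, dst_path: str) -> None:
    # Placeholder for the real work; assumes ffmpeg is available in the runtime.
    subprocess.run(["ffmpeg", "-i", src_path, dst_path], check=True)


def handler(event, context):
    task = json.loads(event)   # one task message per invocation
    src = task["source"]       # illustrative field names
    dst = task["target"]
    transcode(src, dst)
    return {"status": "done", "source": src, "target": dst}
```

Priority scheduling, retries, and multi-step orchestration would then be configured on the queue, trigger, or workflow service rather than written by hand.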

3. Online applications and offline data processing based on an event-driven architecture

Typical Serverless computing services integrate with a wide range of cloud services in an event-driven way. Users neither manage servers and other infrastructure nor write glue code to stitch multiple services together, so it is easy to build loosely coupled, distributed, event-driven applications.

Taking Alibaba Cloud Function Compute as an example: through its integration with API Gateway, users can quickly build API back-end services. Through event integration with Object Storage Service (OSS), functions can respond in real time to events such as object creation and deletion, enabling large-scale data processing centered on object storage. Through event integration with message middleware, users can quickly process massive volumes of messages. And through the integration with Alibaba Cloud EventBridge, events from first-party cloud services, third-party SaaS services, or users' own systems can all be processed by functions quickly and conveniently.
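The sketch below shows the general shape of such an event handler for object-storage events; the event layout (an `events` array with `oss.bucket` and `oss.object` fields) follows the typical form of OSS trigger notifications but should be checked against the platform documentation, and the processing step is a placeholder.

```python
# -*- coding: utf-8 -*-
# Sketch of an event-driven function reacting to object-storage events.
# Field names below mirror the general shape of OSS trigger notifications and
# should be verified against the platform docs; the reaction is a placeholder.
import json


def handler(event, context):
    notification = json.loads(event)
    for record in notification.get("events", []):
        bucket = record["oss"]["bucket"]["name"]
        key = record["oss"]["object"]["key"]
        # React to the object that was just created or deleted, e.g. generate
        # a thumbnail, index the file, or forward the event to another service.
        print(f"handled {record.get('eventName')} on oss://{bucket}/{key}")
    return "ok"
```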

4. Operations and maintenance (O&M) automation

With timer triggers, users can implement scheduled tasks as functions without managing the underlying servers that run them. With CloudMonitor triggers, users can receive O&M events from IaaS services, such as ECS restarts or outages and OSS flow control, and automatically trigger functions to handle them.
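A scheduled task then reduces to a function plus a cron expression on its timer trigger, roughly as sketched below; the trigger payload fields and the maintenance routine are illustrative placeholders.

```python
# -*- coding: utf-8 -*-
# Sketch of a scheduled O&M task: a timer trigger invokes the function on a
# cron schedule and the function performs a routine check or cleanup.
# The payload fields and the check itself are placeholders.
import json
import logging

logging.basicConfig(level=logging.INFO)  # for local runs; the platform usually configures logging
logger = logging.getLogger()


def check_and_clean():
    # Placeholder for real O&M work: rotate logs, snapshot disks, restart an
    # unhealthy instance via the cloud API, and so on.
    logger.info("running scheduled maintenance check")


def handler(event, context):
    trigger = json.loads(event) if event else {}
    logger.info("timer fired at %s", trigger.get("triggerTime", "unknown"))
    check_and_clean()
    return "done"
```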

Future: Where will Serverless go?

In recent years Serverless has grown rapidly and its influence keeps expanding. Mainstream cloud providers continue to enrich their cloud product systems, providing better development tools, more efficient application delivery pipelines, better observability, and tighter integration between products, but this is just the beginning.

Trend 1: Serverless will be everywhere

Any sufficiently complex technical solution will eventually be offered as a fully managed, Serverless back-end service, and not only cloud products but also partner and third-party services. The capabilities of the cloud and its ecosystem will be exposed through API + Serverless. In fact, Serverless will become an essential part of any platform product or organization that exposes its capabilities through APIs, such as DingTalk, WeChat, and Didi.

Trend 2: Closer integration with the container ecosystem

Containers have brought revolutionary innovation in application portability and delivery agility, an important change in how modern applications are built and delivered:

  • Excellent portability: through operating-system-level virtualization, an application and its runtime environment are packaged into a container, achieving "build once, run anywhere"; containerized applications run identically on developer machines, on-premises, and in public cloud environments.

  • Agile delivery process: container images have become the de facto standard for application packaging and distribution. Developers around the world are now accustomed to delivering and distributing applications as containers, and a complete application delivery tool chain has been established around them.

Containers have become the foundation of modern applications, but users are still responsible for managing servers and other infrastructure, including capacity estimation, machine operations and maintenance, and so on. Serverless container services such as AWS Fargate and Alibaba Cloud ECI have therefore emerged, letting users focus on building containerized applications without the burden of infrastructure management. Looking from the Serverless side, Serverless computing services such as Function Compute give users fully automatic scaling, extreme elasticity, and completely pay-per-use metering, but they face challenges in compatibility with existing development habits, portability, tool-chain completeness, and ecosystem, which are precisely the strengths of containers. As the technology evolves, container images are likely to become the distribution format for more Serverless applications, including functions. Combining the container ecosystem's rich tooling with the maintenance-free operation and extreme elasticity of Serverless will bring users a brand-new experience.

Trend 3: Serverless will connect everything in the cloud and its ecosystem in an event-driven way

We discussed earlier how Function Compute connects with cloud services through event-driven integration, and this capability will extend to the entire cloud ecosystem. Whether it is a user's own application or a partner's service, whether it runs on-premises or in the public cloud, every event can be handled in a Serverless way. Cloud services and their ecosystem will thereby be connected more closely, becoming the building blocks with which users assemble resilient, highly available applications.

Trend 4: Serverless computing will keep increasing compute density to achieve the best performance-to-power and performance-to-price ratios

Virtual machines and containers are two different virtualization technologies: the former offers strong security isolation but at a higher overhead, while the latter is the opposite. A Serverless computing platform demands both the strongest security and the smallest resource overhead, so it needs the best of both. At the same time, it must stay compatible with how programs are normally executed, for example supporting arbitrary binaries, which makes language-specific VM solutions impractical. Hence new lightweight virtualization technologies such as AWS Firecracker and Google gVisor have emerged. Taking AWS Firecracker as an example, by trimming the device model and optimizing the kernel loading process, it achieves startup times on the order of 100 milliseconds with minimal memory overhead, and a single bare-metal host can run thousands of instances. Combined with workload-aware resource scheduling algorithms, cloud providers can be expected to increase oversubscription rates by an order of magnitude while keeping performance stable.

As the scale and influence of Serverless computing grow, end-to-end optimization for the load characteristics of Serverless becomes very worthwhile across application frameworks, languages, and hardware. New Java virtual machine technologies dramatically improve the startup speed of Java applications, non-volatile memory helps instances wake up faster, and CPUs and operating systems cooperate to provide fine-grained isolation against performance interference in high-density environments; all of these new technologies are shaping a new computing environment.

Support for heterogeneous hardware is another important direction toward the best performance-to-power and performance-to-price ratios. It is becoming increasingly difficult to improve x86 processor performance, while in scenarios that demand massive compute, such as AI, processors built on GPU, FPGA, and TPU (Tensor Processing Unit) architectures have clear advantages in computing efficiency. As heterogeneous hardware virtualization, resource pooling, heterogeneous resource scheduling, and application frameworks mature, the computing power of heterogeneous hardware can also be delivered in the Serverless mode, greatly lowering the barrier for users.

Afterword

In 2009, UC Berkeley published a famous paper, "Above the Clouds: A Berkeley View of Cloud Computing," discussing the cloud and its value, challenges, and evolution path. Its insights have been borne out over the past decade of cloud development; today no one doubts the value of the cloud or its profound impact on every industry. In 2019 the same group published a new paper, "Cloud Programming Simplified: A Berkeley View on Serverless Computing," predicting that Serverless will dominate the development of the cloud in the next decade. The industry develops in a spiral, and the logic behind the birth and rise of Serverless has long been embedded in it. We believe that in the next decade, Serverless will reshape the way enterprises innovate and help the cloud become a powerful engine of social development.

Course recommendation

To let more developers enjoy the benefits of Serverless, we have brought together more than ten technical experts in the Serverless field from Alibaba to create a Serverless open course best suited for developers to learn from and apply immediately, making it easy to embrace Serverless, the new paradigm of cloud computing.

Click here for the free course: developer.aliyun.com/learning/ro…

"Alibaba Cloud Native focuses on technical fields such as microservices, Serverless, containers, and Service Mesh, follows popular cloud-native technology trends and large-scale cloud-native implementation practices, and aims to be the official account that best understands cloud-native developers."