What is serverless?

Traditionally, we build and deploy web applications in which we control the server that handles HTTP requests. Our application runs on that server, and we are responsible for provisioning and managing its resources. This creates several problems:

  • We pay to keep the server running even when no requests are being served.
  • We are responsible for the uptime and maintenance of the server and all of its resources.
  • We are also responsible for applying the appropriate security updates to the server.
  • As usage grows, we have to scale the server up (capacity expansion), and when usage drops, we have to scale it back down (downsizing).

For smaller companies and individual developers this can be a lot of work, and it distracts us from our more important job: building and maintaining the actual application. At large organizations this is handled by an infrastructure team, so it is usually not an individual developer's responsibility, but the process involved can still slow development down, because you cannot keep building the application without working with the infrastructure team to get things up and running. As developers we have been looking for a way around these problems, and that is where serverless comes in.

Serverless computing

Serverless computing, or serverless for short, is an execution model in which a cloud provider (AWS, Azure, or Google Cloud) is responsible for executing a piece of code by dynamically allocating resources, charging only for the resources actually used to run it. The code typically runs in a stateless container and can be triggered by a variety of events, including HTTP requests, database events, queue services, monitoring alarms, file uploads, scheduled events (cron jobs), and more. The code sent to the cloud provider for execution usually takes the form of a function, so serverless computing is often referred to as Function as a Service (FaaS). The following are the FaaS products offered by the major cloud providers (a minimal handler sketch follows the list):

  • AWS: AWS Lambda
  • Microsoft Azure: Azure Functions
  • Google Cloud: Cloud Functions
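
To make the FaaS model concrete, here is a minimal sketch of a function as AWS Lambda's Node.js runtime expects it, written in TypeScript. The types come from the community @types/aws-lambda package, and the greeting logic is purely illustrative.

```typescript
// Minimal AWS Lambda handler for an HTTP request routed through API Gateway.
// Types are from the @types/aws-lambda package.
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // The event object describes the HTTP request that triggered this function.
  const name = event.queryStringParameters?.name ?? "world";
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```

The provider runs this function on demand and bills only for its execution; no process of ours sits listening for requests between invocations.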

Serverless computing lets you build and run applications and services without thinking about servers. It eliminates infrastructure management tasks such as server or cluster provisioning, patching, operating system maintenance, and capacity planning. You can build serverless applications for almost any type of application or back-end service, and everything required to run and scale a highly available application is handled for you.

Serverless applications are event-driven and loosely coupled through technology-agnostic APIs or messaging. Event-driven code executes in response to events such as state changes or endpoint requests. Event-driven architectures decouple code from state, and integration between loosely coupled components typically happens asynchronously through messaging.
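
As an illustration of that loose coupling, here is a sketch of a function consuming messages from an SQS queue; the producer that enqueued them knows nothing about this consumer. The SQSEvent type is from @types/aws-lambda, and the message shape (an order with an id) is a made-up example.

```typescript
// A queue-triggered function: Lambda passes a batch of SQS records per event.
import { SQSEvent } from "aws-lambda";

export const handler = async (event: SQSEvent): Promise<void> => {
  for (const record of event.Records) {
    // The body is whatever the (decoupled) producer published.
    const order = JSON.parse(record.body) as { id: string };
    console.log(`processing order ${order.id}`);
  }
};
```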

Why serverless computing?

Advantages

  • No server management

There are no servers to provision or maintain, and no software or runtime to install, manage, or administer.

  • Flexible scaling

Your application can scale automatically, adjusting capacity by changing the number of units of resources it consumes (such as throughput or memory) rather than the number of individual servers.

  • Pay for value

Pay for consistent throughput or execution duration rather than per server unit; in effect, you pay only per request.

  • Automated high availability

Serverless applications have built-in availability and fault tolerance. You do not need to build these capabilities yourself, because the service running the application provides them by default.

Disadvantages

  • Poor cold-start performance

The average cold-start time on today's cloud providers is between 100 and 700 milliseconds, depending on the characteristics of the language. Thanks to the just-in-time compilation of Google's V8 JavaScript engine, Node.js is the fastest at cold starts.

  • Monitoring and debugging are complex

  • Dependence on cloud vendors (lock-in)

An example 🌰 (AWS Lambda)

Serverless applications are typically built from fully managed services as building blocks across the compute, data, messaging and integration, streaming, and user-management and identity layers. Services such as AWS Lambda, API Gateway, SQS, SNS, EventBridge, and Step Functions are at the heart of most applications, supported by services such as DynamoDB, S3, and Kinesis (see the sketch after the table below).

  • Compute – AWS Lambda: lets you run stateless serverless applications on a managed platform that supports microservice architectures, deployment, and management at the function level.
  • API proxy – Amazon API Gateway: a fully managed service that makes it easy for developers to create, publish, maintain, monitor, and secure APIs at any scale, providing a comprehensive platform for API management. With API Gateway you can handle hundreds of thousands of concurrent API calls, including traffic management, authorization and access control, monitoring, and API versioning.
  • Messaging – Amazon SNS: a fully managed publish/subscribe messaging service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications.
  • Messaging – Amazon SQS: a fully managed message queuing service that makes it easy to decouple and scale microservices, distributed systems, and serverless applications.
  • Events – Amazon EventBridge: a serverless event bus that connects applications using data from your own applications and integrates software-as-a-service (SaaS) applications with AWS services.
  • Orchestration – AWS Step Functions: lets you coordinate the components of distributed applications and microservices using visual workflows.
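
As a sketch of how these building blocks compose, the snippet below publishes a domain event to SNS using the AWS SDK for JavaScript v3, so that any number of subscribed queues or functions can react. The topic ARN environment variable and the event shape are assumptions for illustration.

```typescript
// Publish an event to SNS; subscribers (SQS queues, Lambdas, ...) react to it.
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";

const sns = new SNSClient({});

export async function publishOrderCreated(orderId: string): Promise<void> {
  await sns.send(
    new PublishCommand({
      TopicArn: process.env.ORDER_EVENTS_TOPIC_ARN, // hypothetical topic
      Message: JSON.stringify({ type: "OrderCreated", orderId }),
    })
  );
}
```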

Although serverless computing abstracts the underlying infrastructure away from the developer, servers are still involved in executing our functions. Since your code is executed as individual functions, there are a few things we need to be aware of:

Microservices

The biggest change we face in moving to the world of serverless is that our application needs to be architected as functions. You might be used to deploying your application as a single Rails or Express monolith; in the serverless world you typically need to adopt a more microservice-based architecture. You could get around this by running your entire application inside a single function as a monolith and handling the routing yourself, but this is not recommended, since it is better to keep your functions small. We discuss this below.
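
A sketch of the difference: in a monolith, one Express app owns every route, whereas with FaaS each route is typically its own small function. The user-lookup handler below is illustrative only.

```typescript
// Monolith: one Express app handles all routes in one long-lived process.
//   app.get("/users/:id", getUser);
//   app.post("/orders", createOrder);
//
// FaaS: each route becomes its own function behind API Gateway,
// e.g. GET /users/{id}:
import { APIGatewayProxyEvent, APIGatewayProxyResult } from "aws-lambda";

export const getUser = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const id = event.pathParameters?.id;
  // Real storage lookup omitted in this sketch.
  return { statusCode: 200, body: JSON.stringify({ id }) };
};
```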

Stateless functions

Your functions typically run inside secure, (almost) stateless containers. This means you cannot run code in your application server that executes long after an event has completed, or that uses a prior execution context to serve a request. You have to assume, in effect, that your function is invoked in a new container every time.
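
A small sketch of the trap this implies: module-scope variables may survive between invocations when the same container is reused, but a cold start resets them, so they can only ever be a best-effort cache.

```typescript
// Module scope MAY persist across invocations in a warm container,
// but every invocation can also land in a brand-new container.
let counter = 0; // a best-effort cache, never a source of truth

export const handler = async (): Promise<{ count: number }> => {
  counter += 1; // may silently reset to 1 on any cold start
  return { count: counter };
};
```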

Cold start

Because your function runs inside a container that is brought up on demand to respond to an event, there is some latency involved. This is called a "cold start". After your function finishes executing, the container may be kept around for a while; if another event is triggered during that time, it responds much faster, which is typically called a "warm start".

The duration of a cold start depends on the specific cloud provider's implementation. On AWS Lambda it ranges from a few hundred milliseconds to a few seconds, depending on the runtime (or language) used, the size of the function (as a package), and of course the provider in question. Cold starts have improved dramatically over the years as providers have become better at optimizing for lower latency.
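
A common idiom that exploits warm starts, sketched under the assumption that the AWS SDK v3 is available: perform expensive initialization once at module load (during the cold start), so warm invocations reuse it.

```typescript
// Initialization at module scope runs once per container (the cold start);
// warm invocations reuse the already-constructed client.
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";

const db = new DynamoDBClient({}); // paid for once per container

export const handler = async (): Promise<string> => {
  // use `db` here without per-invocation setup cost
  return "ok";
};
```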

(Figure: a function in its running state.)

What is Serverless?

Serverless means that the developer implements the server-side logic, which runs in stateless compute containers, is triggered by events, is fully managed by a third party, and keeps its business-level state in a database or other medium: an event-driven, fully managed cloud computing service. Serverless is thus an advanced stage in the evolution of cloud-native technology, letting developers focus on business logic rather than infrastructure.

Note also that Serverless emphasizes "no server (management)", not "no service": the code still runs on servers, but the business side can focus on the business-logic code.

The history of Serverless

IaaS

IaaS (Infrastructure as a Service), launched by AWS in 2006, is essentially server rental: infrastructure outsourcing as a service.

PaaS

PaaS (Platform as a Service) is a platform service built on top of IaaS that provides capabilities such as operating system installation, monitoring, and service discovery; users only need to deploy their own applications.

The first Serverless platform dates back to 2006: Zimki (the company is now defunct).

Conclusion:

Serverless has now been evolving for about 15 years, and with the rise of Kubernetes-based cloud-native application platforms it has once again become a focus of attention.

Classification of Serverless

BaaS

BaaS (Backend as a Service) means invoking back-end capabilities or program logic implemented by others through APIs, such as the authentication service Auth0; BaaS is typically used to manage data. Public clouds also offer many commercial services for the open-source software we commonly run, such as Amazon RDS, which can replace a self-managed MySQL, along with various other database and storage services.

FaaS

FaaS (Functions as a Service) is a form of serverless computing; AWS Lambda is currently the most widely used. FaaS is essentially an event-driven, message-triggered service.

Traditional server-side software is deployed as an application onto a virtual machine or container that runs an operating system, and it generally has to stay resident in that operating system for a long time. With FaaS, by contrast, the program is deployed directly onto the platform: when an event arrives, the program is triggered and executes, and it can be unloaded once execution completes.

Landscape

The CNCF Serverless Landscape catalogs the projects and products in this space.

Usage scenarios of Serverless

The CNCF Serverless Whitepaper v1.0 describes Serverless usage scenarios in more detail.

An example

Let’s take a game application as an example to illustrate what a Serverless application is.

A mobile game has at least the following features:

  • A mobile-friendly user experience
  • User management, authentication, and authorization
  • Game logic such as levels and upgrades, plus information such as leaderboards, player levels, and quests

A traditional application architecture might look like this:

(Figure: a traditional application architecture.)

Such an architecture is easy to develop but complex to maintain: it requires dedicated specialists and environment configuration for both front-end and back-end development, plus people dedicated to maintaining the database, handling application updates and upgrades, and so on.

In a Serverless architecture we no longer keep any session state in server-side code; instead we store it directly in NoSQL, which makes the application stateless and easy to scale. The front end can use BaaS directly, reducing the amount of back-end coding required. This lowers the human cost of application development, reduces the risk of maintaining your own infrastructure, and, by leveraging the cloud's capabilities, makes it easier to scale and iterate quickly.
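
A sketch of what "store the state directly in NoSQL" can look like with DynamoDB via the AWS SDK v3; the GameState table and its attributes are hypothetical.

```typescript
// Persist player state in DynamoDB instead of in-process session memory.
import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";

const db = new DynamoDBClient({});

export async function savePlayerState(
  playerId: string,
  level: number
): Promise<void> {
  await db.send(
    new PutItemCommand({
      TableName: "GameState", // hypothetical table
      Item: {
        playerId: { S: playerId },
        level: { N: String(level) }, // DynamoDB numbers are sent as strings
      },
    })
  );
}
```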

Advantages and disadvantages of Serverless

Serverless architecture is not a set-it-and-forget-it solution; you need to decide whether to use Serverless based on your actual requirements.

Advantages

Today, most companies need to know in advance how many servers, how much storage capacity, and what database capabilities they will require when developing an application and deploying it onto servers, whether in a public cloud or a private data center. They also need to deploy the application and its software dependencies onto that infrastructure. Suppose we don't want to sweat these details: is there a simple architectural model that satisfies this wish? The answer already exists, and it is the new but hot topic in today's software architecture world: Serverless architecture.

– Fei Lianghong, AWS

  • Reduce operating costs

Serverless is, in essence, a very simple outsourcing solution. It lets you delegate the management of servers, databases, and even application logic to a service provider instead of maintaining them yourself. Because such a service has a very large number of users, economies of scale kick in. The cost reduction has two components: infrastructure costs and personnel (operations/development) costs.

  • Reduce development costs

IaaS and PaaS are built on the premise that server and operating-system management can be commoditized; Serverless, as the next such service, commoditizes entire application components.

  • Scaling capability

An obvious advantage of a Serverless architecture is that horizontal scaling is fully automatic, elastic, and managed by the service provider. The biggest benefit over basic infrastructure is that you pay only for the computing power you need.

  • Simpler administration

A Serverless architecture is significantly simpler than the alternatives: fewer components mean less administration overhead.

  • Reduce labor costs

Instead of maintaining your own servers and worrying about their performance metrics and resource utilization, you care about the state and logic of the application itself. A Serverless application is also easy to deploy: you simply upload the basic code, such as a zip file of JavaScript or Python source, or a plain JAR file for JVM-based languages, with no need for Puppet, Chef, Ansible, or Docker for configuration management, which reduces O&M costs. At the same time, operations can monitor the application itself, which is more direct and effective than watching low-level, long-term metrics such as disk and CPU usage.

As long as there are applications there will be ops, but the role changes: deployment becomes more automated, monitoring becomes more application-oriented, and lower-level operations and maintenance still require professionals.

  • Reduce risk

As a system grows more components, the risk of failure grows with it. By using BaaS or FaaS we outsource those components to professionals, who can sometimes run them more reliably than we could ourselves, using their expertise to reduce the risk of downtime and shorten the time to recover from failures, making our systems more stable.

  • Reduce resource overhead

When requesting host resources, we usually size for estimated peak load, which leads to over-provisioning: we pay for peak capacity even while hosts sit idle. For some applications this is unavoidable, such as databases that are hard to scale; for ordinary applications it may not make much sense, even if we all agree that wasting resources is better than having the application fall over when the peak arrives.

The best way to solve this problem is not to plan how many resources you will need, but to request resources on actual demand, provided the overall resource pool is large enough (public clouds are obviously well suited to this). You then pay according to usage time and the compute resources requested each time; the finer billing granularity helps reduce resource costs. Optimizing the application itself, for example making each request take less time and consume fewer resources, then translates directly into significant savings.

  • Increased flexibility in scaling

Take AWS Lambda as an example. When the platform receives the first event that triggers a function, it launches a container to run your code. If a new event arrives while the first container is still processing the previous event, the platform launches a second instance of your code to handle it. This automatic, zero-management scaling of AWS Lambda continues until there are enough code instances to handle the entire workload.

However, AWS still charges you only for your code's execution time, no matter how many container instances it had to start to meet your load. For example, assuming the total execution time of all events is the same, calling a Lambda function 100 times sequentially in one container costs the same as calling it 100 times concurrently across 100 different containers. Of course, AWS Lambda does not spin up an unlimited number of instances; what if someone mounted a DDoS attack against you? AWS imposes a default limit: the default maximum number of concurrent Lambda executions is 1,000.
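
A back-of-the-envelope check of that claim. Lambda bills by GB-seconds of execution time, independent of how many containers served the requests; the per-GB-second price below is an illustrative sample rate, since actual pricing varies by region and changes over time.

```typescript
// Cost model sketch: invocations × seconds each × memory (GB) × price.
const PRICE_PER_GB_SECOND = 0.0000166667; // assumed sample rate, region-dependent

function lambdaCost(
  invocations: number,
  secondsEach: number,
  memoryGb: number
): number {
  return invocations * secondsEach * memoryGb * PRICE_PER_GB_SECOND;
}

// 100 sequential 1-second calls in one container...
const sequential = lambdaCost(100, 1, 0.128);
// ...versus 100 concurrent 1-second calls across 100 containers:
const concurrent = lambdaCost(100, 1, 0.128);
console.log(sequential === concurrent); // true: same total execution time, same bill
```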

  • Shorten the innovation cycle

Small teams of developers can build applications from scratch and deploy them to production in a matter of days, using short, simple functions and events to glue together powerful APIs, data stores, and services. The finished applications are highly available and scalable, with high utilization, low cost, and fast deployment.

Container technology, as represented by Docker, only shortens the iteration cycle of applications; Serverless shortens the innovation cycle itself, from concept to minimum viable deployment, letting junior developers finish in a very short time projects that previously required experienced engineers.

Disadvantages

  • State management

Statelessness is a prerequisite for free scaling. Once a service is stateful, serverless loses that flexibility, and having stateful services interact with external storage inevitably adds latency and complexity.

  • Latency

Access latency between the different components of an application is a significant problem. It can be mitigated with custom network protocols, RPC calls, and compact data formats, or by placing instances in the same rack or on the same host. But serverless applications are highly distributed and loosely coupled, which means latency will always be an issue; it cannot simply be optimized away.

  • Local testing

Local testing of Serverless applications is a thorny issue. Although you can simulate the production environment in a test environment with various databases and message queues, integration and end-to-end testing remain especially difficult: it is hard to reproduce locally all the connections inside the application, or to test its performance and scaling characteristics, and since a serverless application is itself distributed, verifying the countless FaaS and BaaS components glued together is a real challenge.

I won't go into hands-on practice here, since there are plenty of tutorials online; one article I found well worth sharing: Serverless front-end practice.

Open source frameworks:

  • booster – Booster is a framework for building and deploying reliable and scalable event-driven serverless applications.
  • dapr – Dapr is a portable, event-driven, runtime for building distributed applications across cloud and edge.
  • dispatch – Dispatch is a framework for deploying and managing serverless style applications.
  • easyfaas – EasyFaaS is a function-compute engine with few dependencies, strong adaptability, low resource consumption, statelessness, and high performance.
  • eventing – Open source specification and implementation of Knative event binding and delivery.
  • faas-netes – Enable Kubernetes as a backend for Functions as a Service (OpenFaaS).
  • firecamp – Serverless Platform for the stateful services.
  • firecracker – Secure and fast microVMs for serverless computing.
  • fission – Fast Serverless Functions for Kubernetes.
  • fn – The container native, cloud agnostic serverless platform.
  • funktion – A CLI tool for working with funktion.
  • fx – A poor man’s serverless framework based on Docker: Function as a Service, made painless.
  • gloo – The Function Gateway built on top of Envoy.
  • ironfunctions – IronFunctions – the serverless microservices platform.
  • keda – KEDA is a Kubernetes-based Event Driven Autoscaling component. It provides event driven scale for any container running in Kubernetes.
  • knative-lambda-runtime – Running AWS Lambda Functions on Knative/Kubernetes Clusters.
  • knix – KNIX MicroFunctions is a serverless computing platform that combines container-based resource isolation with a lightweight execution model using processes to significantly improve resource efficiency and decrease the function startup latency. KNIX MicroFunctions works in Knative as well as bare metal or virtual machine-based environments.
  • kubeless – Kubernetes Native Serverless Framework.
  • layotto – A fast and efficient cloud native application runtime.
  • nuclio – High-Performance Serverless event and data processing platform.
  • openfaas – OpenFaaS – Serverless Functions Made Simple for Docker & Kubernetes.
  • openwhisk – Apache OpenWhisk (Incubating) is a serverless, open source cloud platform that executes functions in response to events at any scale.
  • osiris – A general purpose, scale-to-zero component for Kubernetes.
  • riff – Riff is for functions.
  • Serverless – Serverless Framework – Build web, mobile, and IoT applications with serverless architectures using AWS Lambda, Azure Functions, Google Cloud Functions, and more!
  • serving – Kubernetes-based, scale-to-zero, request-driven compute.
  • spec – CloudEvents Specification.
  • sqoop – The GraphQL Engine powered by Gloo.
  • thanos – Highly available Prometheus setup with long term storage capabilities.

Reference Documents:

  • jimmysong.io/awesome-clo…
  • www.liangzl.com/get-article…
  • aws.amazon.com/cn/getting-…
  • juejin.cn/post/684490…
  • serverless.com
  • juejin.cn/post/684490…
  • CNCF Serverless Whitepaper