AWS Serverless is a serverless computing approach for application engineers. The basic idea is to leave the infrastructure needed to run a service to AWS. Engineers using AWS Serverless services can focus on developing the customer-facing business logic without having to build, manage, or scale the underlying infrastructure. At the heart of AWS Serverless development is the compute service Lambda.

Today, we will focus on Lambda and introduce how it can be combined with various AWS services in different application scenarios, as a first look at development and deployment based on AWS Serverless.

What?

Let’s start with what Serverless development is.

Unlike the classic develop-compile-deploy workflow, with the AWS Serverless compute service Lambda you only need to upload the source code, select an execution environment, and run it to get results. During this process, server provisioning, runtime installation, compilation, and management are all handled by the AWS Serverless computing platform. For developers, it is simply a matter of maintaining the source code and the relevant configuration for the AWS Serverless execution environment.
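As a concrete illustration, here is a minimal sketch of what such a source file might look like for a Python Lambda function; the handler name and event fields are illustrative only, not a prescribed layout:

```python
# handler.py - the entire "deployment artifact" for a minimal Lambda function.
# AWS provisions the server and the Python runtime; we only maintain this code
# plus the function's configuration (runtime version, memory, timeout, ...).
import json


def lambda_handler(event, context):
    """Entry point invoked by the Lambda runtime with the triggering event."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```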

Why?

Why Serverless?

For developers, using AWS Serverless services saves a lot of effort in managing infrastructure and lets them focus on developing business logic. On the service side, AWS's managed offerings readily support elastic scaling and high-concurrency scenarios. In addition, development based on AWS Serverless usually allows fast iteration and deployment, and its pay-per-use billing helps cut costs in scenarios such as lightweight test environments and rapid verification.

How?

So, let’s take a look at how to quickly assemble a simple web service from AWS Serverless offerings.

AWS Serverless provides a rich catalog of services covering a wide range of needs. In addition to the core compute service Lambda, building a web service usually also involves an API gateway (API Gateway), persistent storage (S3), a CDN (CloudFront), a web application firewall (WAF), DNS resolution (Route 53), and other services. If you need to support HTTPS, you can also handle certificates with AWS Certificate Manager (ACM).

Once the above services are assembled, a complete request-response flow looks like this:

  • User requests arrive at CloudFront through domain name resolution. After WAF applies rate limiting, IP filtering, header validation, and other security checks, the request is routed through API Gateway to the core Lambda compute service.
  • Lambda processes the request, reads or stores data in the persistent store S3 as needed, and returns the result to the client via API Gateway (a minimal handler sketch for this step follows the list).
  • Logs generated during Lambda’s processing are written to the CloudWatch log management service for later querying. Further optimizations are also possible, such as configuring CloudFront to serve static resources directly from S3 to reduce latency and compute overhead.
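As a rough sketch of the Lambda sitting behind API Gateway in this flow, the handler might look like the following. The bucket and key names are made up for illustration, and the event shape assumes an API Gateway proxy integration:

```python
# Sketch of the Lambda behind API Gateway in the flow above.
# Assumes an API Gateway proxy integration event; bucket/key names are hypothetical.
import json
import boto3

s3 = boto3.client("s3")
BUCKET = "my-web-service-data"  # hypothetical bucket name


def lambda_handler(event, context):
    # API Gateway passes query string parameters through on the event.
    params = event.get("queryStringParameters") or {}
    key = params.get("key", "default.json")

    # Read the requested object from the persistent store (S3).
    obj = s3.get_object(Bucket=BUCKET, Key=key)
    data = obj["Body"].read().decode("utf-8")

    # Return a response in the shape API Gateway expects from a proxy integration.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"key": key, "data": data}),
    }
```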

How Lambda is invoked

In the previous web service example, the Lambda was invoked by the API Gateway service. Lambda functions can in fact be invoked in a number of ways. Among AWS's own services, message publishing (SNS), message queuing (SQS), load balancing (ALB), Step Functions, and others are commonly combined with Lambda.

You can also invoke a Lambda function through the SDK, the command line, or the API. Invocations come in two modes, synchronous and asynchronous:

  • Synchronous invocation: the call waits for the Lambda function to finish and returns its result.
  • Asynchronous invocation: the call returns immediately once the invocation is accepted; the function's result must be obtained by other means.

These two invocation modes can be used flexibly in different scenarios.
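As a sketch using the Python SDK (boto3), the two modes differ only in the InvocationType parameter; the function name below is a placeholder:

```python
# Invoking a Lambda function synchronously vs. asynchronously with boto3.
# "my-function" is a placeholder name.
import json
import boto3

client = boto3.client("lambda")
payload = json.dumps({"name": "serverless"}).encode("utf-8")

# Synchronous: RequestResponse waits for the function to finish and returns its result.
sync_resp = client.invoke(
    FunctionName="my-function",
    InvocationType="RequestResponse",
    Payload=payload,
)
print(json.loads(sync_resp["Payload"].read()))

# Asynchronous: Event returns immediately once the invocation is accepted;
# the result must be retrieved elsewhere, e.g. from logs or a downstream store.
async_resp = client.invoke(
    FunctionName="my-function",
    InvocationType="Event",
    Payload=payload,
)
print(async_resp["StatusCode"])  # 202 for accepted asynchronous invocations
```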

A message-driven example

Let’s look at another example of using AWS Serverless services: a message-driven alarm processing system.

Suppose we have a running system that is configured to send an alarm message to the SNS service whenever an anomaly occurs. SNS is a Pub/Sub messaging service; it performs a basic fan-out of the alarm message to its subscribers. On one hand, the person on call is notified by phone or email; on the other hand, a Lambda function is invoked at the same time to perform some automatic handling of the alarm. This is the simplest form of an alarm processing system.
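The Lambda subscribed to the SNS topic receives the alarm roughly as sketched below; the message fields and the "automatic handling" are hypothetical, and the alarm is assumed to be published as JSON:

```python
# Sketch of a Lambda subscribed to the SNS alarm topic.
# The alarm message fields and the automatic handling are hypothetical.
import json


def lambda_handler(event, context):
    # SNS delivers one or more records; each carries the published message.
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])
        service = message.get("service", "unknown")
        level = message.get("level", "warning")

        # Illustrative automatic handling: e.g. restart a worker or open a ticket.
        print(f"Auto-handling alarm from {service} (level={level})")
```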

Note, however, that SNS itself does not store messages. As soon as SNS receives a message, it publishes it; if there is no subscriber at that moment, the message is discarded. Likewise, once delivery succeeds, that is, once the call into Lambda is accepted, the message is discarded regardless of how the processing turns out. If the Lambda fails because of an internal logic error or an outage in an external dependency, the lost message cannot be retried. The reliability of message processing can be improved by adding the message queuing service SQS between SNS and Lambda.

SQS standard queues provide an unordered, reliable, high-concurrency queue service that can retain messages for up to 14 days. SNS publishes the message to SQS, where it is stored first. With SQS configured as an event source for Lambda, the message is then delivered to Lambda for further processing. The SQS invocation of Lambda can be treated as a synchronous procedure: if the Lambda fails and returns an error, SQS does not remove the message from the queue. The failed message is instead marked invisible for a while, and once the visibility timeout expires SQS invokes the Lambda again with the message. This approach greatly improves the reliability of message processing.
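With SQS as the event source, the handler might look like the sketch below. Raising an exception keeps the batch in the queue so SQS can redeliver it after the visibility timeout; the processing logic is made up, and the example assumes raw message delivery is enabled on the SNS subscription so the alarm JSON arrives directly as the SQS message body:

```python
# Sketch of a Lambda with an SQS standard queue as its event source.
# If processing fails we raise, so SQS keeps the message and redelivers it
# after the visibility timeout expires. The alarm-handling logic is hypothetical.
import json


def handle_alarm(alarm: dict) -> None:
    # Placeholder for the real automatic handling of one alarm message.
    if not alarm.get("service"):
        raise ValueError("malformed alarm message")
    print(f"handled alarm for {alarm['service']}")


def lambda_handler(event, context):
    for record in event["Records"]:
        # The SNS-published alarm arrives as the SQS message body
        # (assuming raw message delivery on the SNS subscription).
        alarm = json.loads(record["body"])
        handle_alarm(alarm)  # an unhandled exception triggers redelivery
```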

However, this approach also introduces a new problem: failing messages can pile up and reduce the processing efficiency of normal messages. To solve it, we can configure a dead-letter queue for the message queue. If a message fails processing a configured number of times, it is removed from the original queue and moved to the dead-letter queue. A dead-letter queue is essentially just a standard queue, so further processing can still be applied to these “discarded” messages.
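One way to wire this up, sketched with boto3 (the queue URL and DLQ ARN are placeholders), is to attach a redrive policy so that a message moves to the dead-letter queue after too many failed receives:

```python
# Attach a dead-letter queue to the main queue via a redrive policy.
# Queue URL and DLQ ARN below are placeholders.
import json
import boto3

sqs = boto3.client("sqs")

MAIN_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/alarm-queue"
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:alarm-dlq"

sqs.set_queue_attributes(
    QueueUrl=MAIN_QUEUE_URL,
    Attributes={
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": DLQ_ARN,
            # After 5 failed receive attempts the message moves to the DLQ.
            "maxReceiveCount": "5",
        })
    },
)
```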

Standard queues are better suited to high-concurrency scenarios: a standard queue can accept a large number of messages at once and invoke many Lambda instances in parallel to process them. In exchange, the standard queue does not guarantee delivery order, and the same message may be delivered more than once. When using an SQS standard queue, we therefore need to consider message de-duplication, idempotent processing logic, and related issues. Besides the standard queue, SQS also offers a first-in, first-out (FIFO) queue, which sacrifices some concurrency to guarantee ordered, exactly-once delivery. You can flexibly choose the queue type according to the requirements of each scenario.
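For comparison, sending to a FIFO queue requires a message group (the ordering scope) and, unless content-based deduplication is enabled, a deduplication ID; the queue URL and IDs here are placeholders:

```python
# Sending a message to an SQS FIFO queue; URL and IDs are placeholders.
import json
import boto3

sqs = boto3.client("sqs")

FIFO_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/alarm-queue.fifo"

sqs.send_message(
    QueueUrl=FIFO_QUEUE_URL,
    MessageBody=json.dumps({"service": "billing", "level": "critical"}),
    MessageGroupId="billing-alarms",  # messages in one group are delivered in order
    MessageDeduplicationId="alarm-20240101-0001",  # duplicates within 5 minutes are dropped
)
```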

Conclusion

The AWS Serverless service has natural advantages in terms of decoupling, elastic scaling, cross-region deployment, and so on, but it also has limitations:

  • A single Lambda execution is capped at 15 minutes, so long-running work is poorly supported.
  • Availability of services built on the Serverless architecture is heavily dependent on AWS availability.
  • Development based on Serverless incurs the learning cost of the AWS ecosystem, and debugging and troubleshooting become more difficult.

In real production work, you need to weigh the requirements as a whole and balance cost against benefit. In scenarios well suited to microservices, especially short-lived, temporary tasks, development based on AWS Serverless can be a very convenient approach.

That is all the content of this sharing session. For the video of the talk, you can click [here] to view it.

About the author

Ge Xinyi is a backend development engineer at NetEase Yunxin, with overseas development experience based on AWS Serverless, and currently works on Yunxin backend scheduling development.