The book “What is Serverless” by Mike Roberts and John Chapin was recently translated by Shao Mingqi, a technical expert at Lingquyun.

Once you know what Serverless is and what the various service components are, how do you compose those components into a complete application? What does an application architecture based on Serverless look like? What advantages does Serverless offer over a traditional non-Serverless application architecture? This article answers these questions.

The original:

What Is Serverless? Understand the Latest Advances in Cloud and Service-Based Architecture

By Mike Roberts & John Chapin

ID: Deep-easy-Arch

Translator: Shao Mingqi (Lingquyun)

Consider an application that is a multi-user mobile game with the following high-level requirements:

- A friendly mobile interface

- User management and authentication

- Some basic business logic, such as leaderboards and history

We’ll ignore other features that might be encountered in the game for the moment, after all, our goal is not to actually develop a game, but to compare the Serverless application architecture with a traditional non-Serverless architecture.

**Traditional non-Serverless architecture**

Based on the above requirements, a traditional non-Serverless architecture would look like this:

The application logic is written in Java and runs in an application server such as Tomcat or JBoss.

Data is stored in a relational database such as MySQL.

In this architecture, the mobile application is responsible for rendering the game interface and processing input from the user, but it delegates most of the actual logic to the back end. From a code point of view, the mobile application is simple and lightweight: it uses HTTP to invoke the different APIs provided by the back-end Java application.

User management, authentication, and the various game operations are implemented in Java application code, and the back-end application interacts with a single relational database to maintain the state of ongoing games and store the results of completed games.

Why change the architecture?

This simple architecture seems to meet our requirements, so why improve it? The key lies in the potential challenges and pitfalls of future development and operation.

Building the game requires expertise in iOS and Java development, as well as in configuring, deploying, and operating Java application servers and relational database servers. Beyond the application server and database, we need to configure and run separate hosts, whether these systems are container-based or run directly on virtual or physical hardware. We also need to configure routing policies, access control lists, and so on, to ensure network connectivity between system components and between clients and servers.

With all of this, we're still only providing the most basic environment to make the game usable, without touching on security, scalability, or high availability, which are key aspects of modern production systems. Most importantly, even in this simple architecture there is a lot of inherent complexity in meeting the diverse needs of the real world. Building the system isn't the hard part, but all that complexity becomes a huge drag when we fix bugs, add features, or try to build new, innovative ideas quickly.

How to change?

Now that you’ve seen some of the challenges of the traditional architecture, how do you change it? Let’s see how we can meet the high-level requirements and use the Serverless architectural pattern and components to address some of the challenges of the traditional approach.

As mentioned in the previous article, Serverless components can be divided into two categories: BaaS and FaaS. Given the requirements of the game, some can be addressed with BaaS components and some with FaaS components.

Serverless architecture

For a game built on Serverless, the architecture should look something like this:

While the user interface is still part of the mobile application and is implemented in its own code, user authentication and management can be handled by a BaaS service such as AWS Cognito. These services can be invoked directly from the mobile application to handle user-facing functions such as registration and authentication, and other back-end components can use the same BaaS to retrieve user information.

With user management and authentication now handled by BaaS, the back-end Java application logic is simplified, and HTTP requests between the mobile application and the back-end game logic can be handled in a secure and scalable manner by another component, AWS API Gateway. Each of the different game operations can then be encapsulated in its own FaaS function.
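As an illustration of that encapsulation, a single game operation behind API Gateway might look like the FaaS function below. This is a hedged sketch in Python: the handler name, the "submit score" operation, and the event shape (modeled on API Gateway's Lambda proxy format) are illustrative assumptions, not details from the book.

```python
import json

def submit_score_handler(event, context):
    """Hypothetical AWS Lambda handler for a 'submit score' game operation.

    API Gateway delivers the HTTP request as an event dict; the function
    holds no server or session state of its own.
    """
    body = json.loads(event.get("body") or "{}")
    player = body.get("player")
    score = body.get("score")

    if player is None or not isinstance(score, int):
        return {"statusCode": 400,
                "body": json.dumps({"error": "player and integer score required"})}

    # In a real deployment this record would be written to a BaaS datastore
    # such as DynamoDB rather than returned from process memory.
    record = {"player": player, "score": score}
    return {"statusCode": 200, "body": json.dumps(record)}
```

Because the handler is just a function taking an event dict, it can be exercised locally (with `context=None`) without any server or framework running, which is part of the labor savings discussed below.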

These back-end FaaS functions can interact with a NoSQL database like DynamoDB to store the state of the game. In fact, one major change is that instead of keeping any session state in server-side application code, all session state is stored in the NoSQL store. While this may seem like a hassle, it significantly helps with scaling.
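The "no session state in application code" point can be sketched as follows. Here a plain dict stands in for the external NoSQL table (DynamoDB in the text); in a real system the table would be a database client call, and the function and table names are hypothetical.

```python
# Stand-in for an external NoSQL table such as DynamoDB. In production this
# would be a database client, not a module-level dict; it is only here so
# the sketch runs locally.
SESSION_TABLE = {}

def record_move(session_id, move):
    """Record a move for a game session.

    The handler keeps no state between invocations: it reads the session
    from the external table, updates it, and writes it back. Because every
    instance of the function sees the same table, any container can serve
    any request, which is what makes horizontal scaling trivial.
    """
    state = SESSION_TABLE.get(session_id, {"moves": []})
    state["moves"].append(move)
    SESSION_TABLE[session_id] = state
    return state
```

The design choice to externalize all state is exactly what lets the platform run any number of copies of the function interchangeably.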

Mobile applications can seamlessly access the same database to retrieve past results and leaderboard data. This allows us to move some business logic to the client implementation instead of putting it into the back-end implementation.

Serverless architecture advantages

This new Serverless architecture seems complex and requires more components than the traditional architecture, but by using fully managed Serverless components, we have eliminated many of the challenges associated with managing the application infrastructure and underlying systems.

The code we write is now almost entirely focused on the game's unique business logic. More importantly, the components are decoupled and separated, so they can be swapped out, or new logic added, very quickly and without the inherent drag of the traditional architecture.

Scalability, high availability, and security are also improved, which means that as our game becomes more popular we don't have to worry about buying more powerful servers, whether our database is going to crash, or tracking down firewall misconfigurations.

In short, we reduce the labor cost of building the game, as well as the risk and computing cost of running it, and all of its components scale flexibly. When we have new ideas, the lead time is much shorter, so we can start getting feedback and iterating faster.

Outsourcing cloud computing infrastructure brings five benefits: reduced labor costs, reduced risk, reduced infrastructure costs, better scalability, and shorter lead times. Serverless has all five of these advantages. The first four are, more or less, about cost savings, which is what Serverless is best known for: doing the same things as before at a lower cost. However, the cost savings are not the most exciting part of Serverless for us. The biggest benefit is that it shortens the time from new idea to implementation; in other words, it lets you innovate faster.

Reduce labor costs

As we said earlier, with Serverless you essentially don't need to care about your own servers and processes, only the business logic and state of the application; all other work is left to the platform. The first obvious benefit is that you no longer manage operating systems, patch levels, database version upgrades, and so on. If you are using a BaaS database, message bus, or object store, then congratulations: none of that infrastructure needs your operation and maintenance.

With other BaaS services, the labor savings are even more direct: there is less logic to develop yourself. We've talked a lot about authentication BaaS services, and one of their biggest benefits is that you can deliver development, testing, deployment, and operations with less code, all of which reduces engineering time. Another example is a third-party email BaaS service like Mailgun, which eliminates most of the complexity of handling email sending and receiving.

Compared with traditional methods, FaaS also has significant labor cost advantages. Software development using FaaS is simplified because most of the infrastructure code has been moved to the platform. An example here is the development of an HTTP API service, where all HTTP-level request and response processing is done by an API gateway.

Deploying with FaaS is also easier: we just upload our code packaged as a Zip file (for scripting languages such as JavaScript or Python) or as a plain JAR file (for Java), with no Puppet, Chef, Ansible, or Docker configuration to manage. Other operational activities are also simplified; for example, monitoring can focus on more application-oriented metrics, such as execution duration and customer-facing statistics, rather than on "always-on" server concerns like available disk space or CPU utilization.

Reduce risk

When thinking about the risk of a software application, we often consider how sensitive we are to failures and outages; the greater the number of different types of systems or components our team is responsible for managing, the greater the risk that problems will occur. Instead of managing these systems ourselves, we can "outsource" them, as described earlier.

While we still face the risk of application failures as a whole, we choose to manage risks differently — we now rely on the expertise of others to fix some of these failures rather than fixing them ourselves. This is usually a good idea, because some of the technologies in the application stack are rarely changed, and when they fail, the repair time and difficulty are uncertain. With Serverless, we can significantly reduce the number of technology stacks that we operate directly, and the technologies that we still manage ourselves are often very familiar and frequently changing, so we are more able to handle failures with confidence when they do occur.

Take managing a distributed NoSQL database, for example. Once installed, node failures may be relatively rare, but what happens when one does fail? Does your team have the expertise to diagnose, fix, and recover quickly and effectively? Often not. Instead, a team can choose a Serverless NoSQL database service such as Amazon DynamoDB. Outages still occur occasionally, but because Amazon has a whole team working on that particular service, failures are rarer and recovery is faster.

Therefore, we say that when using Serverless technology, risk is reduced: components are expected to fail less often, and when they do, repairs take less time.

Reduce resource input costs

In general, when running applications, we have to figure out the type and number of underlying hosts they will run on. For example, how much memory and CPU does the database server need? How many different instances are needed to support the extension? Or how to support high availability (HA)?

Once we have planned the hosts or resources we need, we can decide which parts of the application will run on which resources. Finally, once we're ready to deploy the application, we need to actually acquire the hosts we planned for; this is environment provisioning.

The entire provisioning process is complex, and we rarely know our resource requirements in advance, so we overestimate our plans. This is called overprovisioning, and it's actually the right thing to do: it's better to have spare capacity and keep the application running than to fall over under load. For some types of components, such as databases, scaling later may be difficult, so you may want to overprovision them to carry the expected future load.

Overprovisioning means that we always pay for the capacity needed to handle the peak expected load, even when our application isn't experiencing that load. In the most extreme case, the application is idle and we are paying for a server that isn't doing anything useful. And even when the application is active, we don't want the host to be fully utilized; we want headroom for unexpected spikes in load.

The great benefit Serverless brings here is that there is no need to plan, allocate, or provision resources: the services provide exactly the capacity we need at any point in time. If we have no load, no computing resources are required and no fees are paid. If we have only 1 GB of data, we don't need capacity to store 100 GB. We trust the services to scale on demand when needed, and this applies equally to FaaS and BaaS services.

In addition to eliminating resource-allocation headaches, Serverless is also more cost-efficient. For applications with variable loads, we save real resource costs. For example, if our application only runs for 5 minutes in every hour, we pay for just those 5 minutes instead of the whole 60. In addition, good Serverless products meter usage very precisely; AWS Lambda, for example, bills in 100-millisecond increments, which is 36,000 times finer-grained than EC2's hourly billing.
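The granularity claim above is easy to check with a little arithmetic. The sketch below compares billed time (not price) under the two increments mentioned; `billed_ms` is a helper name invented for this example.

```python
import math

def billed_ms(actual_ms, increment_ms):
    """Round actual usage up to the billing increment, as metered billing does."""
    return math.ceil(actual_ms / increment_ms) * increment_ms

actual = 5 * 60 * 1000  # 5 minutes of real work per hour, in milliseconds

lambda_billed = billed_ms(actual, 100)           # 100 ms increments
ec2_billed = billed_ms(actual, 60 * 60 * 1000)   # hourly increments

print(lambda_billed)  # 300000 -- exactly the 5 minutes used
print(ec2_billed)     # 3600000 -- the full hour
print((60 * 60 * 1000) // 100)  # 36000 -- the granularity ratio cited above
```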

In modern non-Serverless applications, we have made some gains through techniques such as automatic scaling, but these methods are generally not as accurate as Serverless products and often do not automatically scale the database.

Improve scalability

All of these resource cost advantages come from the fact that a Serverless service provides exactly the capacity we need. So how is this kind of scaling actually implemented? Do we need to set up an auto-scaling group? A monitoring process? No! Scaling is automatic and effortless.

Take AWS Lambda, for example. When the platform receives the first triggering function event, it launches a container to run the code, and if the event is still being processed when another event is received, the platform launches a second instance of the code to handle the second event. This automatic, zero-management, horizontal scaling will continue until the Lambda has enough code instances to handle the load.
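That container-per-concurrent-event behavior can be modeled in a few lines. This is a deliberately simplified sketch (real Lambda reuses warm containers and enforces concurrency limits, which are ignored here), and `containers_needed` is an invented helper, not an AWS API.

```python
def containers_needed(event_timeline):
    """Peak container count for a set of (start_ms, end_ms) event spans.

    The platform launches one container per concurrently active event, so
    the peak number of overlapping events is the number of containers it
    ends up running.
    """
    points = []
    for start, end in event_timeline:
        points.append((start, 1))   # event begins: one more container busy
        points.append((end, -1))    # event ends: its container frees up
    points.sort()  # ends sort before starts at the same instant (-1 < 1)

    active = peak = 0
    for _, delta in points:
        active += delta
        peak = max(peak, active)
    return peak
```

For example, two back-to-back events need only one container, while two overlapping events force the platform to launch a second one.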

One particularly nice aspect is that AWS still only charges you based on how long your code takes to execute, no matter how many containers it has to launch. For example, assuming the total execution time of all events is the same, the cost of calling a Lambda 100 times sequentially in one container is exactly the same as calling it 100 times concurrently in 100 different containers.
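A quick way to see why: with duration-based billing, total cost is a function of summed execution time alone, so the container count never enters the calculation. A minimal sketch (the 100 ms increment follows the Lambda billing described above; the function name and event durations are illustrative):

```python
import math

def total_billed_increments(durations_ms):
    """Total 100 ms billing increments across a batch of invocations.

    Note there is no parameter for how many containers handled the batch:
    under duration-based billing, container count simply does not appear.
    """
    return sum(math.ceil(d / 100) for d in durations_ms)

events = [200] * 100  # 100 events, 200 ms of execution each

# One container running them back to back, or 100 containers running them
# simultaneously, produces the same billed total:
print(total_billed_increments(events))  # 200 increments of 100 ms
```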

Reduce lead times

Significant cost savings can be achieved by adopting the Serverless technology.

Sree Kotay, CTO of Comcast Cable, said as much at an AWS Summit in August 2016. He wasn't talking about Serverless specifically, but about how Comcast benefited enormously from outsourcing other kinds of infrastructure, moving from on-premises systems to cloud computing:

Over the past five years of our cloud and agile journey, we have realized benefits around cost and scale. Those are key and important, but interestingly they are not the most compelling part. The most critical part is that it truly changes your innovation cycle; it fundamentally changes your view of product development. - Sree Kotay

The point we make is that the CTO of a large company says cost and scale are not the most important things to him, innovation is the most important thing. So how can Serverless help with this?

Adrian Cockcroft, VP of Cloud Architecture strategy at AWS and former Cloud Architect at Netflix, said:

We're starting to see application development times get shorter and shorter, with small development teams building production-ready applications from scratch in just a few days. They glue together powerful API-driven data stores and services with short functions and events. The completed applications are highly available and scalable, with high utilization, low cost, and rapid deployment. - Adrian Cockcroft

Over the past few years, we've seen great progress in shortening incremental cycle times through continuous delivery, automated testing, and technology improvements like Docker. These techniques are great, but only once they are set up and stable. For innovation to truly thrive, shorter cycle times are not enough; you also need shorter lead times: the time from the conception of a new product or feature to its deployment in production in a minimally viable way.

Because Serverless eliminates much of the incidental complexity of building, deploying, and running applications at scale in production, it gives us enormous leverage: software delivery can be turned on its head. With the right organizational support, innovation and "lean startup" style experimentation can become the default way of working for all businesses, not just something reserved for startups or "hack days."

This is not just theory. Beyond Adrian's point, we have seen relatively inexperienced engineers who would previously have taken months to complete a project, with help from more experienced engineers, implement a comparable project with Serverless in a matter of days, essentially without help.

That's why we're so excited about Serverless: beyond all the cost savings, it frees teams to focus on what makes their product different.