• Serverless: Where Is the Industry Going in 2021?
  • Originally written by Suresh Kumar
  • The Nuggets translation Project
  • Permanent link to this article: github.com/xitu/gold-m…
  • Translator: Ashira97
  • Proofread: PassionPenguin, Kamly


In this article, we’ll focus on what serverless means, how companies are adopting the technology, and how it will evolve in 2021.

Introduction

Serverless computing is an approach to developing software on cloud platforms that frees developers from the complexities of code deployment, infrastructure configuration, and availability management on the cloud. Amazon introduced this technology with its Lambda service in 2014. Since then, other cloud providers have followed suit with similar offerings, such as Azure Functions and Google Cloud Functions.

Just as server virtualization freed developers from worrying about bare-metal servers, and containers later freed them from worrying about virtual machines, serverless computing, a new paradigm for developing, deploying, and executing code that is attracting more and more interest, frees developers from worrying about containers.

Defining Serverless

There are many definitions of serverless. Some describe it as Function as a Service (FaaS); others include backend services (BaaS) such as databases (DBaaS) or security services. They all share similar characteristics and can be abstracted as a platform or function layer with an execution- or consumption-based cost model.

Most of the interest in serverless centers on Function as a Service (FaaS), which should not be confused with functional programming, a paradigm most cloud providers' services do not specifically support. So what does Function as a Service look like?

  1. Code development – Developers can write whatever code they want and execute it with minimal changes in the FaaS environment. The code exposes a message handler, the primary method that acts as the listener interface.

  2. Code deployment – This can be as simple as copying and pasting a specific function into the FaaS console, although most IDEs also support FaaS integration.

  3. Code scaling – The FaaS runtime scales out as many instances as needed. Cluster management technologies like Kubernetes encapsulate this capability and make it available to users.

  4. Executing code — FaaS is suitable for event-driven architectures because the code runs when it receives an event or trigger, which could be an HTTP request, a file being sent, a message arriving on a message queue, and so on.

  5. Short-lived – FaaS functions are short-lived rather than long-running and therefore do not hold state. Any state that needs to persist should be saved outside the function.

  6. Cost of execution – FaaS charges for the length of time the code executes. This distinguishes FaaS from BaaS: Google BigQuery, for example, charges each API call based on the amount of data scanned, while SQS charges per request. Code that runs too long, or a function that receives no requests, will eventually time out.
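The characteristics above (an event-driven entry point, minimal code changes, and no retained state) can be sketched as an AWS-Lambda-style handler in Python. This is a minimal illustration, not a specific provider's template; the `name` field in the event is a hypothetical payload.

```python
import json

def handler(event, context=None):
    """Entry point the FaaS runtime invokes once per event.

    `event` carries the trigger payload (an HTTP request, a queue
    message, a file notification, ...). Nothing is stored between
    invocations: any state that must survive has to be written to
    external storage such as a database or object store.
    """
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The runtime wires the trigger (HTTP gateway, queue, file event) to `handler`; the developer only writes and deploys this function.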

High latency challenge

When an event is raised, the FaaS runtime must initialize an execution environment before the function can run. This initialization delay is called a cold start; when the environment running the code is already available, the invocation is a warm start.

Language and memory challenges

Latency for FaaS code can vary from milliseconds to seconds, and the choice of language and memory allocation can make a big difference in performance. For example, a recent AWS re:Invent talk showed how cold start times for functions built with Spring Boot and Java can be significantly improved, and how increasing the memory allocation from 1 GB to 3 GB reduces them further. Regardless of memory allocation, however, cold start times with Node are orders of magnitude lower, which is why Node is such a popular serverless runtime. In general, Python and Node start fastest.

Why are these problems no longer a problem?

The performance of serverless computing has been widely discussed by developers in recent years, and the consensus is that it should be used where it fits. For example, serverless is not suited to IoT applications that require low latency, but it works well for traditional web and API services. These discussions also cover latency, cold and warm start times, and the performance characteristics of different language frameworks in serverless environments.

  • **Cloud providers are working on it:** All major cloud providers are investing heavily in their serverless pipelines to optimize cold start times. They can address latency through a range of measures, including predictive scheduling, optimized language runtimes, and local process forking.

  • **Developers can optimize too:** Many strategies are under the developer's control, such as increasing memory allocation, choosing a faster runtime language, keeping shared data in memory, shrinking package sizes, and maintaining a pool of pre-warmed functions (known in AWS as provisioned concurrency). Together these solve most of the problems.
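One of the strategies above, keeping shared data in memory, exploits the fact that module-level code runs once per cold start and is then reused by every warm invocation of the same execution environment. A minimal sketch, assuming a hypothetical `TABLE_NAME` environment variable:

```python
import os
import time

# Module-level work runs once per cold start. Expensive setup
# (clients, config, connection pools) belongs here so that warm
# invocations reuse it instead of paying the cost again.
_started = time.time()
_config_cache = {"table": os.environ.get("TABLE_NAME", "demo")}

def handler(event, context=None):
    # On a warm start the initialization above is already done,
    # so the handler body stays fast.
    uptime = time.time() - _started
    return {"table": _config_cache["table"], "warm_for_seconds": round(uptime, 3)}
```

The same pattern applies to database clients and SDK objects: construct them at module scope, not inside the handler.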

Where does this technology go from here?

Forrester predicts that "by the end of 2021, 25 percent of developers will use serverless and nearly 30 percent will use containers on a regular basis."

Serverless is moving beyond pure compute to offerings such as serverless databases. At AWS re:Invent 2020, Amazon announced that Aurora Serverless v2 will scale MySQL and PostgreSQL to the petabyte level, with full support for highly available, global databases with read replicas, backtrack, and parallel queries. Rather than standing still, AWS also announced that Lambda functions can now be packaged and deployed as container images, which is much easier for developers who already have automated pipelines. All of this helps reduce resistance to Lambda adoption.

Many enterprises have adopted Kubernetes as their common platform for container deployment. This has not been easy for many organizations, because it puts additional pressure on developers and operations teams. Few enterprises need the scale and complexity that Kubernetes offers, and some are turning to alternatives like Elastic Container Service (ECS), which is cheaper and easier to deploy. By combining ECS with a serverless compute engine like Fargate, developers don't even need to worry about configuring EC2 instances. Rough equivalents of ECS on Fargate are Google Cloud Run and Azure Container Instances.

While cloud vendors continue to simplify container management on the cloud, the trend toward higher levels of abstraction, where developers can focus on code rather than the complexities of deploying nodes, pods, or virtual machines, is clear.


