concept

In the novel “The Three-Body Problem”, there is a scene describing the interior of a future spaceship. Technology has advanced to the point where all the complex machinery is hidden: the cabin appears completely empty, yet whenever people need them, seats, tables, and other facilities emerge for use. Serverless follows the same idea. The complex details of running services and the underlying physical infrastructure are hidden inside the cloud platform, and only interfaces are exposed to developers. Through these interfaces, a developer can start a service at any time, and when it finishes, the resources it occupied are reclaimed for the next invocation. From the average user’s point of view, all software services should be Serverless, because users have no need to understand the operating principles behind them.

For any service we encounter, a few questions help determine whether it is built on a Serverless architecture:

  1. How many machines serve it?
  2. Where are the machines deployed?
  3. What operating system do the machines run?
  4. What software is installed on the machines?

If we cannot answer these questions clearly, then the service we are using is built on a Serverless architecture.

Serverless at the front end

With the popularity of cloud computing technologies, the Serverless architecture has gradually become an important piece of infrastructure for front-end developers. In recent years, front-end applications have become increasingly complex, shifting from focusing only on the view layer to taking on more business logic. With the popularity of the BFF (Backend for Frontend) layer, each business often maintains its own BFF layer, which brings convenience but also new problems:

  1. Higher operation and maintenance costs
  2. Low server resource utilization
  3. Much of the underlying logic may need to be independently implemented multiple times

How to solve these problems is something front-end developers need to think about in their daily work. When the BFF layer is switched to a Serverless architecture, the problems above are largely solved, allowing us to focus on the business itself.

From front end to full stack

As front-end architecture continues to evolve, Serverless is becoming a trend that replaces the monolithic application architecture. Front-end developers can use Serverless to make up for their weaknesses on the server side and greatly reduce the operation and maintenance cost of servers.

In fact, in many companies front-end engineers are indispensable yet passive, ironically known as “tool people.” Even when we are not satisfied with the status quo, it is difficult to drive effective change in the current environment. One important reason is that front-end engineers are often “too far away” from the business, while back-end engineers, who rarely need to know how the front end presents and interacts, only need to understand the business logic to have a say.

As the front-end architecture continues to evolve and the collaboration model changes, Serverless will allow the front end to take on more of the upper-level business logic rather than just writing simple pages. From this point of view, the challenge for front-end developers is to understand business processes deeply and grasp the overall picture. That is what makes a true full-stack engineer.

the evolution of aaS

Serverless is an abstract concept; it does not have only one concrete implementation, and the concept itself keeps developing in practice. So what is the relationship between Serverless and service models such as Backend as a Service (BaaS) and Platform as a Service (PaaS)?

In the early days of IT, deploying an application might involve the following steps:

  1. Buy a server
  2. Install the operating system
  3. Install dependent software, such as MySQL and Nginx
  4. Deploy the application code to the server

Deploying an application this way takes a long time and costs a great deal of effort. Today, a wide range of cloud computing technologies solve this problem well.

In cloud computing, there are three service models: IaaS, PaaS, and SaaS. Let’s start with a schematic:

Infrastructure as a Service (IaaS) provides basic processing, storage, network connection, and computing resource services, on which users can directly deploy operating systems. Customers can deploy and run their own services without purchasing or renting physical servers.

PaaS (Platform as a Service) provides computing platforms and solution services on top of infrastructure. For example: database service, cache service, message queue service.

SaaS (Software as a Service) goes one step further and provides software services out of the box: nothing needs to be installed, and users consume the software’s services directly through a client. SaaS addresses business scenarios directly, whereas with PaaS services we still need to implement the business logic on top of them.

These three aaS models serve users at different layers, and users choose the one that fits their scenario. Cloud computing encapsulates computing resources layer by layer so that users can access them on demand.

With the development of containerization technology, a new service model, Container as a Service (CaaS), has emerged on top of IaaS. Cloud computing providers change the unit of computing resources from virtual machines to containers, and container orchestration services let developers build and deploy applications on containerized infrastructure through declarative description files.

If CaaS is an evolution of IaaS capabilities, then BaaS is an extension of capabilities on top of PaaS. We often use third-party services to replace parts of the technical functionality in our applications. These third-party services are generally provided as APIs that scale automatically and require no O&M from developers; in other words, they are Serverless services. In these respects BaaS and PaaS are not very different, but BaaS is oriented toward different consumers, such as mobile apps and websites: developers can use BaaS capabilities directly from the client side.

The above mainly introduces the evolution of cloud computing service models. Judged by the Serverless criteria listed at the beginning, these service models have more or less taken on characteristics of the Serverless architecture. The CNCF (Cloud Native Computing Foundation) Serverless white paper gives a clear definition of the capabilities Serverless should provide: a Serverless computing platform should offer one or both of the following:

  1. Function as a Service (FaaS): provides event-driven computing services.
  2. Backend as a Service (BaaS): replaces some core capabilities in an application with third-party services exposed directly through an API.

FaaS

Let’s now take a look at FaaS, a Serverless technology that can be tightly integrated with the front end.

FaaS is built on an event-driven model: it lets developers run code at the granularity of a function, triggered and executed by events such as HTTP requests or timers, so that developers only write business code and never manage server resources. Billing changes from renting virtual machines by the month to paying per invocation and execution time, which lowers server O&M and rental costs and greatly improves hardware utilization. On the other hand, code can be deployed more efficiently, because releasing a new feature only requires publishing a single function.
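To make this concrete, here is a minimal sketch of what such a function might look like on a typical FaaS platform. The event and context shapes (`queryString`, `requestId`) are illustrative assumptions, not any specific vendor’s API:

```javascript
// A minimal FaaS-style handler: the platform invokes it once per event,
// passing the trigger payload (event) and runtime metadata (context).
// The field names below (queryString, requestId) are illustrative only.
function handler(event, context) {
    const name = (event.queryString && event.queryString.name) || 'world';
    return {
        statusCode: 200,
        body: JSON.stringify({ message: `hello ${name}`, requestId: context.requestId })
    };
}

module.exports = { handler };
```

The platform, not the developer, decides when and where this function runs; releasing a new feature means publishing another small function like this one.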

The event processing model of FaaS is as follows:

advantages

  1. Higher research and development efficiency. In traditional development we usually complete two kinds of work: implementing the business and building the technical architecture. Our goal is the business, but along the way we must also attend to the architecture. FaaS provides not only the runtime environment for functions but also their scheduling, letting developers focus more on the business and improving R&D efficiency.
  2. Lower deployment costs. In the FaaS scenario, after the developer has written the function, it can be deployed through the Web console or a simple command line tool.
  3. Lower operation and maintenance costs. Thanks to Serverless’s elastic scalability, we don’t have to worry about server load. Almost all the work needed to ensure usability can be eliminated.
  4. Lower learning costs. Just as drivers don’t need to learn engine principles and photographers don’t need to learn optics, business functions can be deployed directly through FaaS.
  5. Lower server costs. Services based on virtual machine or container technology are charged from the moment resources are allocated, regardless of how much of them is actually used, whereas FaaS charges by function call volume and function execution time, saving considerable cost.
  6. More flexible deployment solutions. Since each function is published and controlled independently, a new function release will start a new instance rather than overwriting the previous one, so the functionality of the original function will not be affected. This makes it easy to implement multiple deployment environments and the ability to grayscale traffic segmentation.
  7. Higher system security. In Serverless the concept of a server is removed, so R&D and operations personnel never need to log in to one. With the door to the server closed, attacks become more difficult.

disadvantages

  1. Platform learning costs. Because FaaS is a relatively new architecture, documentation, examples, and best practices are still scarce, and the differing implementations across vendors further increase the learning cost for R&D staff.
  2. High debugging costs. Since a function cannot simply be run locally as-is, we have to reproduce the same container environment locally or debug remotely, which is cumbersome and remains a problem to be solved.
  3. Potential performance issues. After a function goes uncalled for a while, the platform automatically scales its instances down to zero; when a new request arrives, a container must be started and the function deployed into it before execution. This zero-to-one process is called a cold start, and depending on the language and runtime it takes anywhere from about 10 ms to 5 s.
  4. Vendor lock-in issues. As FaaS is a new cloud computing service model, there is no unified standard for cloud computing vendors to refer to in terms of implementation, so they have different implementations. As a result, we cannot easily migrate from one vendor to another.
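One common way to soften the cold-start cost mentioned in point 3 is to do expensive initialization once per container instance, outside the handler, so that warm invocations reuse it. A sketch under that assumption (the counter only simulates expensive one-time work):

```javascript
// Expensive setup (e.g. loading config, opening connections) runs once per
// container instance, outside the handler; only a cold start pays this cost.
let initializations = 0;
const config = (() => {
    initializations++;              // simulate expensive one-time work
    return { ready: true };
})();

function handler(event) {
    // Warm invocations reuse `config` without re-initializing it.
    return { ready: config.ready, initializations };
}

module.exports = { handler };
```

However many times the same warm instance is invoked, the initialization count stays at one; only a new cold-started instance pays it again.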

Implement simple FaaS

Having introduced some basic concepts, let’s see how to implement a simple FaaS based on Node.js.

At present, the most common FaaS implementations use container-level isolation based on Docker technology, which can also isolate and limit system resources. The alternative is process-based isolation, which is lighter and more flexible but provides weaker isolation than containers.

This section uses an implementation based on process isolation as an example.

  1. Sandbox environment. In the operating system, each process has its own memory space, and processes cannot access memory allocated to one another; process B therefore cannot write into process A’s data. In Node.js, the main process listens for function invocation requests; when a request arrives, a child process executes the function and returns the result to the main process, which finally returns it to the client. For security, the vm2 module is used to execute code, since Node.js’s built-in vm module is not fully secure.
// Child process code
// Read the function code, execute it in a new sandbox environment, and return the result
const process = require('process');
const { VM } = require('vm2');

process.on('message', (data) => {
    const fnIIFE = `(${data.fn})()`;
    const result = new VM().run(fnIIFE);
    process.send({ result });
    process.exit();
});

// Main process code
// Read the function code from a file, start the child process, and pass the function to it for execution
const fs = require('fs');
const child_process = require('child_process');
const child = child_process.fork('./child.js');

child.on('message', (data) => {
    console.log('function result', data.result);
});

const fn = fs.readFileSync('./func.js', { encoding: 'utf8' });
child.send({ action: 'run', fn });

// Function code
// A function expression that the runner wraps into an immediately invoked expression
(event, context) => {
    return { message: 'function is running', status: 'ok' };
}
  2. Add an HTTP service. In a production environment, for functions to serve external traffic, we also need to provide a Web API. This capability lets the service dynamically execute the corresponding function code based on the user’s request path and send the result back to the client.
// Child process code
const process = require('process');
const { VM } = require('vm2');

process.on('message', (data) => {
    const fnIIFE = `(${data.fn})()`;
    const result = new VM().run(fnIIFE);
    process.send({ result });
    process.exit();
});

// Main process code
// Use Koa to provide the HTTP service. When a request arrives, read and execute the function code matching the request path
const fs = require('fs');
const child_process = require('child_process');
const Koa = require('koa');

const app = new Koa();
app.use(async ctx => { ctx.response.body = await run(ctx.request.path); });
app.listen(3000);

async function run(path) {
    return new Promise((resolve, reject) => {
        const child = child_process.fork('./child.js');
        child.on('message', resolve);

        try {
            const fn = fs.readFileSync(`./${path}.js`, { encoding: 'utf8' });
            child.send({ action: 'run', fn });
        } catch (error) {
            if (error.code === 'ENOENT') {
                return resolve('function not found');
            }
            reject(error.toString());
        }
    });
}

// Function code 1
(event, context) => {
    return { message: 'function is running', status: 'ok' };
}

// Function code 2
(event, context) => {
    return { name: 'func2' };
}


At this point, a basic FaaS capability based on process isolation is complete. On top of that, you can consider performance improvements, timeouts for function execution, and limiting function resources (e.g. with cgroups).
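As one sketch of the timeout direction, a wall-clock limit can be enforced by racing the function’s promise against a timer; in the fork-based runner above, the timeout branch is also where the child process would be killed. This is a sketch under those assumptions, not a production implementation:

```javascript
// Race a function invocation against a timer. If the timer wins, the
// invocation is rejected; in the fork-based runner this is also where the
// runaway child process would be killed (child.kill()).
function withTimeout(promise, timeoutMs) {
    let timer;
    const timeout = new Promise((_, reject) => {
        timer = setTimeout(() => reject(new Error('function timed out')), timeoutMs);
    });
    // Whichever settles first wins; always clear the timer afterwards.
    return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

module.exports = { withTimeout };
```

The runner would then invoke a function as `withTimeout(run(path), 3000)` instead of `run(path)`.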

conclusion

Serverless is first of all an idea; keep in mind the criteria above for determining whether a service is Serverless.

Any solution is an evolutionary process, and to fully understand what a technology solves, you need to study the context in which it was created and connect the dots.

Front-end development engineers should learn more about the business in their work; only by gaining enough of a say can they become true full-stack engineers.

Abstraction and encapsulation of complex things are common patterns in every field, not just computing.

We keep ceding control so we can focus more on the business. For example, once we surrender control of a service, we no longer need to operate and maintain it; instead we have a contract with the service provider, which makes everything run more efficiently.