This article is by the IMWeb team

Introduction: Among today's hottest technologies, alongside blockchain and AI, one concept that has to be mentioned is Serverless. As a new Internet architecture, Serverless has directly or indirectly promoted the development of cloud computing. From AWS Lambda to the Serverless frameworks launched by various vendors, Serverless has been enjoying great success all the way. On this wave, what does it look like on the front end?

Directory:

I. Introduction to Serverless

II. A lightweight Web Application migration practice

I. Introduction to Serverless

This chapter briefly introduces the evolution process of Serverless, what Serverless is, its advantages and disadvantages, and suitable application scenarios.

Cloud computing has evolved from IaaS and PaaS to SaaS, and on to the latest BaaS and FaaS; in this trend, the move toward Serverless is becoming more and more apparent.

Bare Metal (IDC) :

Physical machine hosting

IaaS:

IaaS (Infrastructure as a Service): service providers supply infrastructure resources at the underlying physical layer (servers, data centers, environment control, power supplies, and server rooms). On an IaaS platform, users purchase virtual resources, select an operating system, install software, deploy applications, and monitor them.

At present, well-known IaaS platforms include AWS, Azure, Google Cloud Platform, Tencent Cloud, Alibaba Cloud, and the open-source OpenStack.

PaaS:

PaaS (Platform as a Service) provides infrastructure plus middleware services such as operating systems (Windows and Linux), database servers, Web servers, and load balancers. The PaaS customer only needs to manage application deployment and the application hosting environment.

Currently, well-known PaaS platforms include Amazon Elastic Beanstalk, Azure, Google App Engine, Tencent Container Service, and VMware Cloud Foundry.

BaaS:

BaaS (Backend as a Service) provides backend services for customers (developers), such as file storage, data storage, push services, and identity authentication, to help developers build applications quickly.

FaaS:

FaaS (Function as a Service) delivers functions as a service. Providers offer a platform on which customers can develop, run, and manage application functions without building or maintaining infrastructure. Building applications this way is one route to a “serverless” architecture, and it is often used for microservice applications.

Virtualization and Isolation

Since the earliest physical servers, we have been abstracting or virtualizing them.

Server development

We use virtualization technologies such as XEN and KVM to isolate the hardware and the operating systems that run on top of it. We use cloud computing to further automate the management of these virtualized resources. We use container technologies like Docker to isolate the operation of the application’s operating system from that of the server. Now that we have Serverless, we can isolate the operating system and even the underlying technical details.

Stateless

This isolation also makes Serverless stateless: each time a function executes it may run in a different container, so invocations cannot share memory or data. To share data, you can only use third-party services such as Redis or COS.

No operations

With Serverless we no longer need to care about servers or about operation and maintenance. This is the heart of the Serverless idea.

Event-driven

Serverless computes only on demand, which means it is event-driven: a function runs only when an event triggers it.

Low cost

Using Serverless is cheap because we pay only for each function run; if a function does not run, it costs nothing and wastes no server resources.

  • In Serverless applications, developers just need to focus on the business and don’t need to worry about operations and maintenance
  • Serverless is truly on demand, running only when the request comes in
  • Serverless is paid by running time and memory
  • Serverless applications rely heavily on specific cloud platforms, third-party services

Serverless means “serverless architecture”: users focus on business logic without caring about the application's runtime environment, resources, or capacity.

FaaS (Function as a Service) + BaaS (Backend as a Service) together can be called a complete Serverless implementation.

Serverless Cloud Function (SCF) architecture

Languages currently supported by Tencent Cloud SCF:

Python 2.7 & 3.6, Node.js 6.10 & 8.9, Java 8, PHP 5 & 7, Go 1.8, C# & C++ (under planning)

Advantages of Serverless

  • Reduce start-up costs
    • Reduce operating costs
    • Reduce development costs
  • Fast online
    • Faster deployment pipeline
    • Faster development
  • Higher system security
  • Fits a microservices architecture
  • Automatic scaling

Disadvantages of Serverless

  • Not suitable for stateful services
  • Not suitable for long running applications
  • Completely dependent on third party services
  • Long cold-start times
  • Lack of debugging and development tools

Application scenarios of Serverless

  • Sending notifications
  • WebHooks
  • Lightweight APIs
  • Internet of Things (IoT)
  • Data statistics and analysis
  • Triggers and scheduled tasks
  • Lean startups
  • Chatbots

Although Serverless still has many limitations, it keeps developing and improving, and developers and service providers alike are exploring its infinite possibilities.

II. A lightweight Web Application migration practice

This chapter describes a Web Application migration practice based on Tencent cloud functions, covering both the architecture migration and the development and deployment process.

1. Architecture migration

Let’s start by looking at a general Web Application architecture on SCF.

Static resources

Static resources (JS/CSS/IMG/HTML) are stored in COS (object storage). COS supports custom domain names and CDN acceleration (for details, see the Tencent Cloud document "Configuring a Custom Domain Name to Support HTTPS Access"). This is no different from the original Web Application.

If the application is a single-page app whose HTML is itself a static resource, you could serve it from SCF the way an ordinary Web server returns static files. But using SCF for purely static content wastes compute resources and performs poorly; store the HTML in COS instead, which likewise supports custom domain names and CDN acceleration.

However, the primary domain then differs from the dynamic-data domain, which raises a cross-origin problem. It is easily solved:

Cross-origin access: CORS support can be enabled on the API gateway or in the backend application.

Performance optimization: using preconnect in the page header to pre-connect to the dynamic API domain can greatly reduce DNS/TCP/SSL time. Do not underestimate this time: the API gateway fronting Tencent cloud functions currently supports access from only one region, and from distant locations the time can reach several hundred ms.

API gateway + application logic

The main changes from the original Node server to the cloud-function architecture are as follows:

The format of the API Gateway event

You can choose to parse the API Gateway event directly in your code and assemble the body of the HTTP response yourself. The fields an HTTP handler normally works with are all present in the API Gateway event, just in a different data structure. But if we are used to the Express framework and rely heavily on some of its excellent middleware, rebuilding that middleware is a lot of work. Is there a solution compatible with the original way of writing?

Both migration and new development projects can adopt this architecture:

We can convert API gateway events into HTTP requests and relay them to the Node server over a local socket.

Does this extra layer of forwarding cost performance? Statistically, with this forwarding layer in place the average total time of the cloud function stays within 20 ms, so any loss in the middle is under 10 ms; and since the communication goes over a local socket, that is negligible compared with network time.

Some frameworks for this intermediate forwarding layer already exist (ServerlessPlus, SCF-framework); they are fairly simple to use and worth trying.

Data storage

Because Serverless is event-triggered and instances are released after use, local storage and caches must ultimately rely on third-party services such as COS and Redis. That said, an instance may be retained, with a release delay of about 3 minutes, so you can still use local storage and in-memory caches as a first-level cache.

1. DB

Not much different from the original DB usage.

Generally, you apply for DB resources inside a VPC reachable over the Intranet, which gives high security and complete isolation from external networks. The function can then connect to the DB only from the same subnet.

Resource requests are covered in the development deployment section below.

You can also select DB resources on the external network and set up a virtual subnet so the DB sits in the same subnet as the Tencent cloud function, letting the function reach it via its Intranet IP. The DB can also be given an external domain name and accessed over the public network, so local development can reach it too; this is generally used for testing.

Database instance interface:

2. Memory cache

As mentioned above, instances are destroyed with a delay, so memory variables can serve as a cache when the same instance is hit within a short period. Content that needs caching can use two levels: read from memory first, then from Redis.

Using Redis is similar to using the DB: apply for the resource, set the subnet, and connect via IP and port.

Redis can be used simply with the NPM package encapsulated below.

npm i qcloud-serverless-redis
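The two-level read path described above can be sketched as follows. The module-level Map is level 1 and survives only while the warm instance lives; `redisGet` is a stand-in for a real Redis client call, not the API of the package above.

```javascript
// Two-level cache sketch. The module-level Map is level 1: it lives only as
// long as the warm instance does. `redisGet` is a stand-in for a real Redis
// client call (level 2), not a real API.
const memoryCache = new Map();

async function cachedGet(key, redisGet, ttlMs = 60 * 1000) {
  const hit = memoryCache.get(key);
  if (hit && hit.expires > Date.now()) {
    return hit.value; // level 1 hit: served from instance memory
  }
  const value = await redisGet(key); // level 2: fall back to Redis
  memoryCache.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}
```

On a warm instance, repeated reads of the same key skip the network round trip entirely.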

3. File storage

The writable directory in Serverless is /tmp, but it is released along with the instance, so it is suitable only for temporary storage.

Its total size is 512 MB; you are advised to delete temporary files as soon as you are done with them.

If you want to store files for a long time, you can use COS for storage.

For detailed COS operations, refer to Tencent cloud documentation such as Node.js SDK Quick Start

Here I have also packaged a simple COS NPM package so you can quickly try COS access; see the README for specific usage.

tnpm i @tencent/serverless-cos

Login related

This is similar to data storage: state can be kept in third-party services, or carried in a token via encryption and decryption.

The following scheme checks login status through token encryption and decryption; the login verification step still invokes the original backend service.

Mini programs have no cookies: nothing sets, saves, or sends cookies for you, so you have to simulate all of that yourself. The common scheme today is for the front end to receive the token returned by the backend, save it in localStorage, check its validity before each request, and set it in the cookie field of the request header, so the backend can read the cookie and verify it as usual.

Performance tuning

Chapter I mentioned that Serverless is not suitable for latency-sensitive scenarios. So what does actual performance look like, is there room for optimization, and can it meet our demand for instant responses?

Let’s take a look at the startup process of a cloud function.

When a function is called, the dispatcher checks whether an instance of the function already exists; if so, it executes the function and returns the result. This takes very little time, on the order of milliseconds.

If not, a container must be created and the deployment code downloaded. These steps can take anywhere from a hundred milliseconds to a few seconds and are what we call a cold start.

To optimize the performance of a function, you need to optimize it at all stages of the function life cycle.

  • 1. Function instance reuse, the most direct and effective means. But how long should instances be retained? Keeping them for 3 minutes solves 95% of the problem, 3 hours solves 99%, and 3 days solves 99.9%; it is a trade-off between cost and effectiveness. (The retention time need not be a fixed value; analyze each function's characteristics and traffic periods.)
  • 2. Pre-create a batch of containers of different specifications (without code) to cut container-creation time.
  • 3. The platform keeps function code in a repository and downloads it to the container only when needed. Hotspot code can be cached at two levels: level 1, a cache on the compute node; level 2, a cache in the equipment room.
  • 4. Predict traffic with machine learning and start function instances ahead of time, eliminating cold starts as far as possible so that invocations hit warm instances.

These are platform optimizations, so what can developers do?

  • Code slimming: shorten code download time.
  • Common-code splitting: improve caching by splitting frequently invoked common services into separate cloud functions.
  • Resource reuse: shorten execution time by defining resources that invocations share, such as database connections, outside the handler function.
  • Staying active: keep instances warm so that resources are not reclaimed and requests hit warm starts.

For more information, see my other article “Front-end Serverless Series – Performance Tuning.”

2. Development, deployment, operation and maintenance

Development and debugging

1) Debug on the cloud

Currently, functions published to the cloud include the node_modules folder; even when it is not needed, it is compressed and transferred over the network, and changing a single line of code means uploading again before you can execute. The online IDE can only edit index.js, but code is rarely confined to the entry file; it currently supports only single-file entry functions. Upgrades to the IDE are in the works.

2) Local debug 1.0

TCF and Docker need to be installed in advance.

For details, see the TCF command line tool

Execute commands locally:

tcf local invoke --template template.yaml --event event/apigateway.json

Debugging this way is much faster than the first scheme, although on the company network you must enable a proxy, otherwise the Docker image cannot be pulled.

This works by loading an image similar to the cloud environment and executing it in Docker.

However, there are still many problems:

For example, the database you connect to must have an external address, because Docker's network environment is connected neither to the local environment nor to the environment on the cloud.

For example, NPM packages installed locally may not run properly, because my local system is macOS while the image is Linux.

For example, the cloud environment has some built-in NPM packages, so I wrote a script to strip them from my upload, and the function still ran normally on the cloud. When debugging locally, though, those packages were missing again, because the cloud environment and the image are not guaranteed to be consistent. That, too, has since been solved.

3) Local debugging 2.0

Based on developer feedback, the cloud function team released an upgraded TCF that supports native debugging, that is, debugging in the local environment directly. This removes the database connectivity problems and the operating-system differences during debugging. The remaining caveat is packages that need native compilation; in general, NPM packages behave the same as long as the libraries they depend on do not involve operating-system differences.

For details, see the documentation: github.com/tencentyun/…

tcf native invoke --template template.yaml --event event/apigateway.json

4) Node Server debugging:

If your project is based on a framework such as Koa or Express, you can add a server entry for local debugging and debug it just like a normal Node server.

Release

You can publish with the TCF command line, which supports two modes:

  • Upload code via COS object store
  • Upload code as a local ZIP package (the zip cannot exceed 50 MB)

See the documentation: How to publish deployment code with TCF

As mentioned earlier, it is very easy to publish code to the cloud function platform using the command line tool.

// Package and deploy the code

tcf package && tcf deploy

But is simply publishing enough?

How are the development, test, and online environments isolated, and how do we roll back?

The cloud function has the version function. You can release a new version at the upper right corner of the cloud function details page.

API Gateway also has test, pre-release, and release environments by default, and you can specify the version of cloud functions.

During testing we can pin the $LATEST version; once the tests pass, we publish a cloud function version, configure it in the API gateway's pre-release environment, verify there, and then publish to the online environment.

Specific operation path: Click the specific service of API gateway to enter the details page, and edit for each API under API management.

Select the corresponding version:

After editing, the API needs to be published to the corresponding environment before it takes effect.

Rolling back is also convenient: just switch to the desired version in the history. But be careful to write a meaningful release note, and don't write "test" like I did ^_^

At this point, it looks like you can distinguish the test from the online environment.

In practice, however, the test environment does not connect to the same resources as the online one, the DB for example. We usually decide which resources to connect to by reading environment variables rather than by changing code, and a cloud function has only one configuration.

Publishing can therefore be done by copying the function, optionally without copying its configuration. The online configuration differs from the test environment's settings, and the code picks online or test resources by checking environment variables.

Create a namespace:

Copy functions to other namespaces:

Setting environment variables:

Reading different configurations according to the environment variable:

// Read different configurations based on environment variables
const devConfig = require('./config.dev');
const testConfig = require('./config.test');
const prodConfig = require('./config.prod');

const env = process.env.NODE_ENV;
console.log('process.env.NODE_ENV', process.env.NODE_ENV);
switch (env) {
  case 'prod':
    module.exports = prodConfig;
    break;
  case 'test':
    module.exports = testConfig;
    break;
  default:
    module.exports = devConfig;
}

To sum up:

1) Put the test and live code in two different namespaces, separate the test and live code.

2) Going live is accomplished by copying the function.

3) Test and online environments are distinguished by setting different environment variables through function configuration.

4) Rollback is done by setting the version of the function.

Domain name mapping

API Gateway provides a default domain name, so we do not need to apply for one to use it. But if users will type the URL into a browser, your own short domain name is more trustworthy.

API Gateway – Custom domain name

If HTTPS is supported, you need to upload the HTTPS certificate to the Cloud.

In addition, you can customize the path mapping, for example shortening the release path from http://yourdomain/release to http://yourdomain/. You can also make the /test path harder to guess to keep users out, though it is safer simply not to publish the test environment.

Logs

When it comes to backend services, log output is essential: debugging, troubleshooting, and even statistics may all rely on logs. So how can the logs of cloud functions be used?

The cloud function console has a log page that supports real-time log display, time-range selection, and filtering to failed invocations only.

But the only search box there retrieves by RequestID, one per request. Where do you get a RequestID? It seems you can only find it in this log itself; if you pass it on to another service or the front end, that party can use it to trace back to this request.

This is obviously too inconvenient.

The logging service was not yet available when I first used SCF, but it is up and running as I write this article.

If the current function has been configured with logging service, you can [go to Logging Service] for easier retrieval of logs.

You can configure shipping logs to the log service in the lower part of the function configuration page. Creating a log service:

Create a log set:

Multiple log topics can be created on a log set.

To see how a log can be consumed, see the action bar below:

LogListener is used to collect logs from ordinary servers; a cloud function only needs to specify the target log set and log topic in its configuration.

Index configuration: you can configure how logs are segmented for search.

Shipping configuration: logs can be shipped to COS.

Real-time consumption: logs can be consumed via CKafka.

Search examples:

Monitoring

Cloud functions and API gateways have some built-in monitoring that meets basic viewing needs. If you need more detailed views, configuration, and alarm functions, you can use cloud monitoring.

Overview of cloud functions

Cloud function monitoring information interface:

API Gateway Monitoring:

For more detailed statistics and monitoring alarms, see Cloud Monitoring

Custom monitoring is also necessary for backend services, to report service metrics or raise alarms. See the document: Custom Monitoring.

Matters needing attention

1. node_modules used in a project cannot yet be installed online; they can only be packaged and uploaded locally.

2. node_modules installed locally may fail to run in cloud functions or in the debugging image because of operating-system differences.

3. The locally debugged Docker environment is network-isolated, so to connect to a BaaS service you need one that supports external network access. TCF Native now supports Node debugging without Docker.

4. Currently, only one cookie can be set on the server side; Tencent Cloud plans to fix this.

That's about it ~

Finally

When we use the HTTP protocol, every request passes through an extra API Gateway forwarding layer. Can we remove it?

If our business applications are complex and need to be split into multiple cloud functions to carry the load, how should we transform and migrate existing projects?

Today, deploying a cloud function means packaging lib dependencies and uploading them. We hope to integrate with the code repository so that a git push triggers online compilation and automatic deployment.

These issues will be addressed in Tencent Cloud Serverless 2.0, which will be released soon.

The road of exploration continues; developers with interest or needs are welcome to discuss, explore, and build together.