Definition

Serverless is an execution model in which the cloud provider is responsible for executing a piece of code by dynamically allocating resources, and charges only for the resources actually used to run it. The code typically runs in a stateless container and can be triggered by a variety of events (HTTP requests, database events, monitoring alerts, file uploads, scheduled tasks, and so on). Because code is usually uploaded to the cloud for execution in the form of functions, Serverless is also referred to as Functions as a Service, or FaaS.

  • BaaS (Backend as a Service): here, "backend" can refer to any application or service provided by a third party, such as cloud database services like Firebase and Parse, or unified user authentication services like Auth0 and Amazon Cognito.
  • FaaS (Functions as a Service): applications exist in the form of functions that are hosted and run by a third-party cloud platform, such as AWS Lambda and Google Cloud Functions mentioned above.

Background

So why do we need Serverless? We can find the answer in our day-to-day personal development. Whether it is a personal blog, a mini program, or a personal website, we all face the question of how to handle server deployment.

For a conventional project, the general process is: purchase a server => register the domain name (this step may not be needed, depending on your requirements) => set up the environment => deploy.

If we just want to build a small backend but have to go through all these steps, it is very unfriendly to the project as a whole, and it is easy to give up on the idea of going live at all.

Serverless exists to solve exactly this problem. It can host your server-side functions for you, sparing you the tedious steps above and letting you easily get your website or service running and accessible, while greatly reducing the maintenance cost for an individual. This is very useful for developers.

Serverless characteristics

Low cost

As we all know, when we buy a cloud server, setting aside labor costs and looking only at the billing model, all vendors charge by the month: even if nobody visits your website or service, the monthly rent stays the same.

A Serverless application, by contrast, is priced by the amount of resources you actually use, so you pay for what you use, much like being billed for mobile data. Meanwhile, according to a Forbes study published in 2015, average server utilization in a typical data center was a dismal 5 to 15 percent throughout the year, meaning that in theory at least 80 percent of operating costs could be saved if all servers went Serverless.

After this comparison, Serverless is clearly the lower-cost option.

Automatic scaling

As mentioned above, a function is the application: each function serves a single, specific purpose and can be scaled out or in dynamically at will without affecting other functions, with finer granularity and faster speed. Single applications and microservices can also scale automatically with the help of container orchestration technologies, but because of their coarser granularity there is always some resource waste compared with functions.

Event-driven

A function essentially implements an IPO (input-process-output) model: it is transient and runs on demand. This is another feature that distinguishes functions from monolithic applications and microservices. Both monoliths and microservices are resident processes in the system that keep running even when nobody is using them. Functions, on the other hand, consume no resources when there is no request; they respond only when a request arrives and release their resources immediately after finishing, which is a huge advantage in terms of saving resources.
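
To make the IPO idea concrete, here is a minimal handler sketch in Node.js. The (event, context) signature and the event fields are assumptions for illustration; the exact shape depends on the platform and trigger.

// Sketch of the input-process-output model of a function (signature varies by provider)
exports.handler = async (event, context) => {
    const name = (event.queryString && event.queryString.name) || 'world'; // input
    const greeting = `hello ${name}`;                                      // process
    return { statusCode: 200, body: greeting };                            // output
};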

Stateless

As mentioned under the event-driven feature, a function only works after receiving a request and is released immediately once the job is done, so keeping various caches in runtime memory brings little benefit. What's more, a second request to the same service may well be scheduled onto a different, freshly started machine, so local caching fails as well. Functions can only cache through external storage (such as Redis or a database), and accessing external storage goes over the network, whose performance is one or two orders of magnitude worse than memory or a local disk.
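
As a sketch of that constraint, the snippet below caches through Redis instead of process memory. It assumes a reachable Redis instance and the ioredis package; the key name and the REDIS_URL variable are placeholders for illustration.

// Stateless functions cache via external storage, not local memory
const Redis = require('ioredis');
const redis = new Redis(process.env.REDIS_URL); // placeholder connection string

exports.handler = async (event) => {
    const cached = await redis.get('greeting');
    if (cached) return { source: 'cache', msg: cached };

    const msg = 'hello world';                    // pretend this is expensive to compute
    await redis.set('greeting', msg, 'EX', 60);   // store it externally for 60 seconds
    return { source: 'fresh', msg };
};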

Cold start

In general, if your function has not been called for a long time, its container resources are reclaimed. When the next request arrives, the container has to be restarted and resources reallocated. This process introduces latency, which makes for an unfriendly user experience.

Not only that: once your project is deployed to a particular platform, it is inevitably tied to that platform (vendor lock-in), which can create significant friction if you decide to migrate the project later.

From a developer's perspective, once you depend on a cloud platform's environment, local development inevitably runs into all sorts of friction: you may need to upload your code to the cloud just to debug it, even if you have only changed a single line. That is a poor experience for day-to-day development.

The first cloud function

Tencent Cloud will be used as a demonstration platform. First, you need to log in to Tencent Cloud.

Then open the cloud function console:

We choose to create a new cloud function

Here we are free to choose the development language and whether to start from a template. For the sake of the demonstration, select a template and click Finish to begin writing our actual code.

Here we can change the return value of the function arbitrarily, and then we can test it.
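
For reference, the Node.js template roughly follows the shape below; this is only a sketch (the generated file and the main_handler entry name may differ depending on the template you picked), but it shows where to change the return value.

'use strict';
// index.js - change the return value, save, and run the test again
exports.main_handler = async (event, context) => {
    return 'hello from my first cloud function';
};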

Your cloud function is working when you see it return the same value as the code you wrote.

I also want to access the cloud function through HTTP requests. How can I do that?

This is actually very simple: choose Trigger Management on the left, create a trigger policy, select API Gateway as the trigger type, and click Submit.

At this point we can access the function through a URL: copy the access path, open it in the browser, and we can see what our cloud function returns.

Local development

Now that we have covered how to create functions in the cloud console, it is time to show how to develop and deploy locally, because you cannot always develop in the cloud; that would simply be too unfriendly.

Installation

Node will be used as the development language for this demonstration; please refer to the official documentation for installation instructions for other languages.

npm i serverless -g

Install the package first; installing it globally makes it easier to use later.

Then generate the Demo template project using the relevant commands:

serverless init sls-demo

A small aside: when I tried the official shorthand `sls init sls-demo`, I got some strange output. I did not dig into it further, so I will use the full command here.

Once the project is created, we enter the generated directory and are greeted by a src directory and a serverless.yml configuration file. If you don't like this layout you can put the code in any folder you like, but you then need to change the value of the src field in the configuration file, which by default points to the src directory under the current directory.

Component information:

  • component (required): the name of the component; available components can be listed with the sls registry command.
  • name (required): the name of the created instance; one instance is created for each component at deployment time.

Parameter information (inputs):

  • name: the cloud function name, which also serves as the resource ID.
  • src: the code path.
  • handler: the function handler name.
  • runtime: the cloud runtime environment; supported runtimes are Python2.7, Python3.6, Nodejs6.10, Nodejs8.9, Nodejs10.15, Nodejs12.16, PHP5, PHP7, Go1, Java8, and CustomRuntime.
  • region: the region where the cloud function resides.
  • events: the trigger; supported trigger types are timer, APIGW, COS, CMQ, and CKAFKA.

For more details, please refer to the official documentation.

Development

We can place all the business code in the src directory; let's use the current demo project as an example.

Open src/index.js. We can do whatever we want inside the exported function and return a result, then test it. You can connect to a database, make outgoing requests, and so on, coding in whichever language you chose; I am using the Node.js environment for this demo.
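
For example, a minimal src/index.js for the demo might look like the sketch below; the main_handler name here is an assumption and must match the handler field configured in serverless.yml.

// src/index.js - a minimal handler sketch for the demo project
exports.main_handler = async (event, context) => {
    // A database query or an outgoing HTTP request could go here instead
    return {
        message: 'hello from sls-demo',
        time: new Date().toISOString()
    };
};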

Not only that, we can also use it together with other Node frameworks.

Deployment

Deployment is relatively simple and can be deployed to the cloud with a simple command:

serverless deploy

If this is your first deployment, you will find a QR code printed in the console. All you need to do is scan it with WeChat to authorize, then wait for the deployment to complete.

After deploying, you will find a .env file in your directory containing the private data needed for deployment, so subsequent deployments will not require scanning the code again (the authorization information is time-limited and needs to be renewed once it expires).

Debugging

Run the invoke command to trigger the function, with function= followed by your cloud function's name, to check whether the deployment succeeded.

serverless invoke  --inputs function=scfdemo-dev-scf-demo

Effect preview:

Full-stack project in practice

If you only learn how to play with cloud functions, the convenience of Serverless does not really show. Here we will use Express + Vue 3 to walk through a real project deployment.

Project scaffolding

First, create a directory to hold our front-end and back-end projects and enter it. Inside it, create an api directory for the server-side code, then create a Vue project for the front end (there is no restriction here; you can choose React or Vue). I created a Vue 3 project using Vite.

Also create a serverless.yml configuration file in this directory as the configuration file for the overall project.

Project Structure:

  • api
    • app.js
    • package.json
    • .
  • front
  • .
  • serverless.yml

The .serverless directory is generated automatically during deployment.

Now that our basic directory structure is in place, let’s configure it:

Writing the server-side code

First open the api directory, create app.js as the entry file, initialize the project with npm init, and install express and cors; the demo project only needs these two packages:

npm init

npm i express cors -S

Then write the following code in app.js and export the Express instance.

Remember not to call app.listen(…); exporting the app as the module's default export is enough.

// code in app.js file

const express = require('express');
const cors = require('cors');

const app = express();

// Enable CORS for all routes (note: cors() must be called, not passed as a bare reference)
app.use(cors());

// Respond to every path with a simple JSON message
app.use('/*', (req, res) => {
    res.send({
        msg: 'hello world'
    });
});

module.exports = app;

Front-end coding

Here is a simple example to illustrate the workflow; feel free to build whatever project interests you.

First open App.vue in the front directory and rewrite the code:

<template>
  <div>{{ message }}</div>
</template>

<script>
import '../env'; // Automatically generated at deployment time
import {
    ref
} from 'vue'

export default {
    name: 'App',
    setup(props) {
        const message = ref();
        fetch(window.env.apiUrl).then(res => res.json()).then(({ msg }) => {
            message.value = msg;
        });

        return {
            message
        }
    }
}
</script>

I used Vue 3 here; readers can pick whichever framework they prefer. The main point is to demonstrate making a request and rendering the result on the page. The `import '../env'` line is necessary: don't worry that the file does not exist in the project directory, it is generated automatically when the code is deployed, so we simply import it directly. Its main job is to attach our configured environment variables to the window object.

In serverless.yml I configured an environment variable (window.env.apiUrl) that holds the server-side URL after deployment; the specific configuration is shown later.
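
Conceptually, the generated env file just attaches those configured variables to the window object, roughly like the sketch below (this is not the exact generated content; the real file is produced at deployment time with the actual URL).

// Sketch of what the auto-generated env file effectively does
window.env = window.env || {};
window.env.apiUrl = 'https://service-xxxx.apigw.tencentcs.com/release'; // placeholder, filled in at deployment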

So, looking at the code in the script block, all I am really doing is requesting our server-side API, taking the returned value, and rendering it on the page. It is quite simple, so I will not go into further detail here.

Serverless configuration file

The last step is to write the deployment configuration for the project. The purpose of each parameter is explained in the comments, and it is fairly straightforward.

# Project name
name: tencent-fullstack-vue-app

# Front-end configuration
dashboard:
  # Serverless component used
  component: '@serverless/tencent-website'
  # Input parameters
  inputs:
    # Source code configuration for our project
    code:
      # The directory of built files to deploy
      src: dist
      # The root directory of the front-end project
      root: front
      # Build hook: packages the project into the dist directory (src) before deployment
      hook: npm run build
    # Environment variables
    env:
      # Deployed API address
      apiUrl: ${api.url}

# Server-side configuration
api:
  # Serverless component used
  component: '@serverless/tencent-express'
  inputs:
    # The server-side code directory to deploy
    code: './api'
    # Cloud function name
    functionName: tencent-fullstack-vue-api
    apigatewayConf:
      # Protocols
      protocols:
        - https

From this we can see that the configured environment variable apiUrl, the property we used in the front-end project above, is automatically mounted onto the window when the project is deployed. We can access it directly through window.env.apiUrl (you just need to import the env file in the front-end project's root directory; as mentioned above, it is generated automatically).

Deployment and debugging

Once the above work is complete, you can start deploying:

serverless --debug

Execute the above command: it will first automatically deploy the project to the cloud and then print the project's address in the console. Copy the address and open it directly to see the deployed project. You may also be required to authorize by logging in.

After the deployment succeeds, we see the screen above in the console. Copy the URL into the browser and open it, and the familiar "hello world" is displayed perfectly on the page.

If your page doesn’t display properly, you need to check for coding errors, syntax problems, and configuration errors.

Conclusion

Of course, this is only an introductory tutorial. It only introduces a few features of Serverless, but Serverless is far more powerful than that. It is up to developers to open the door to this new world step by step.
