• You should never ever run directly against Node.js in production. Maybe.
  • By Burke Holland
  • The Nuggets translation Project
  • Permanent link to this article: github.com/xitu/gold-m…
  • Translator: fireairforce
  • Proofread by: HearFishle, JasonLinkinBright

Sometimes I wonder if I really know much.

Just a few weeks ago, I was talking to a friend who casually mentioned, "you would never run an app directly against Node in production." I nodded vigorously to signal that I would also never run Node directly in production, for reasons everyone surely knows. But I didn't know. Should I know why? Am I still allowed to write code?

If I were to draw a Venn diagram representing what I know and what everyone else knows, it would look something like this:

By the way, the older I get, the smaller that dot gets.

Alicia Liu created a better chart that changed my life. She says it’s more like…

I like this chart very much because I want it to be true. I don’t want to spend the rest of my life looking like a tiny shrinking blue dot.

That was dramatic. Blame Pandora. I have no control over what it plays next while I write this, and Dashboard Confessional is potent stuff.

Well, assuming Alicia’s diagrams are real, I’d like to share a few things I know now about running Node applications in production. Maybe there’s no overlap in our relative Venn diagrams here.

First, let's unpack the statement "never run Node directly in production."

Never run Node directly in production

That may or may not be true. Let’s explore how this statement came to be. First, let’s see why not.

Suppose we have a simple Express server. The simplest Express server I can think of is as follows:

```js
const express = require("express");
const app = express();
const port = process.env.PORT || 3000;

// visit http://localhost:3000
app.get("/", function (req, res) {
  res.send("Again I Go Unnoticed");
});

app.listen(port, () => console.log(`Example app listening on port ${port}!`));
```

Run it from the startup script defined in package.json.

"scripts": {
  "start": "node index.js"."test": "pfffft"
}
Copy the code

There are two problems here. The first is development, and the second is production.

The development problem is that whenever we change the code, we have to stop and restart the application for the changes to take effect.

We usually solve this with some kind of Node process manager, such as Supervisor or Nodemon. These packages watch our project and restart the server whenever we make changes. I usually do it like this:

"scripts": {
  "dev": "npx supervisor index.js"."start": "node index.js"
}
Copy the code

Then we run `npm run dev`. Note that I'm running supervisor through npx here, which lets us use the supervisor package without having it installed.

Our other problem is that we’re still running directly against Node, which we’ve already said sucks, and now we’re going to find out why.

I'm going to add another route here that tries to read a file from disk that doesn't exist. This is a mistake that could easily show up in any real-world application.

```js
const express = require("express");
const fs = require("fs");
const app = express();
const port = process.env.PORT || 3000;

// visit http://localhost:3000
app.get("/", function (req, res) {
  res.send("Again I Go Unnoticed");
});

app.get("/read", function (req, res) {
  // this file does not exist
  fs.createReadStream("my-self-esteem.txt");
});

app.listen(port, () => console.log(`Example app listening on port ${port}!`));
```

If we run `npm start` (directly against Node) and navigate to the /read endpoint, the page reports an error because the file does not exist.

Not a big deal, right? It's just an error. It happens.

Except it does matter. A lot. If you go back to the terminal, you'll see that the application has crashed completely.

This means that if you go back to the browser and try to access the root URL of the site, you’ll get the same error page. An error in a method invalidates all routes in the application.

This is bad. This is where people say “Never run Node directly in production.”

Ok. If we cannot run Node directly in production, what is the correct way to run Node in production?

Options for running Node in production

We have several options.

One of them is to simply use tools like Supervisor or Nodemon in production, just as we use them in development. This works, but these tools are a bit lightweight. A better option is PM2.

PM2

PM2 is a Node process manager with lots of useful features. Like every other JavaScript library, you can install it globally with npm, or you can just use npx again. I won't keep repeating that.

There are a number of ways to run programs with PM2. The simplest is to call `pm2 start` on your entry file.

"scripts": {
  "start": "pm2 start index.js"."dev": "npx supervisor index.js"
},
Copy the code

The terminal will display something like this:

This is our process running in the background, monitored by PM2. If you hit the /read endpoint and crash the app, PM2 will restart it automatically. You won't see any of that in the terminal, because PM2 runs in the background. If you want to watch what PM2 is doing, run `pm2 logs 0`. The 0 is the ID of the process we want to see logs for.

There it is! As you can see, PM2 restarts the application whenever it goes down due to an unhandled error.

We can also have PM2 watch our files and restart the app whenever anything changes, by adding the --watch flag.

"scripts": {
  "start": "pm2 start index.js --watch"."dev": "npx supervisor index.js"
},
Copy the code

Note that because PM2 runs processes in the background, you can't just Ctrl+C your way out of a running PM2 process. You have to stop it by passing its ID, or its name.

```shell
pm2 stop 0
pm2 stop index
```

Also note that PM2 saves a reference to the process so that you can restart it.

To remove that reference, run `pm2 delete`. A single delete command both stops and removes the process.

```shell
pm2 delete index
```

We can also use PM2 to run multiple processes of our application, and PM2 automatically load balances between those instances.

Multiple processes using PM2 fork mode

PM2 has a bunch of configuration options, and those are contained in an "ecosystem" file. You can create one by running `pm2 init`. You'll get something like this:

```js
module.exports = {
  apps: [
    {
      name: "Express App",
      script: "index.js",
      instances: 1,
      autorestart: true,
      watch: true,
      max_memory_restart: "1G",
      env: {
        NODE_ENV: "development"
      },
      env_production: {
        NODE_ENV: "production"
      }
    }
  ]
};
```

I won’t cover deployment in this article because I don’t know much about deployment either.

The apps section defines the applications that you want PM2 to run and monitor. You can run more than one. Most of these configuration settings are probably self-explanatory. The one I want to focus on here is the instances setting.

PM2 can run multiple instances of your application. You pass in the number of instances you want, and PM2 starts them all. So if we wanted to run four instances, we could use the following configuration file.

```js
module.exports = {
  apps: [
    {
      name: "Express App",
      script: "index.js",
      instances: 4,
      autorestart: true,
      watch: true,
      max_memory_restart: "1G",
      env: {
        NODE_ENV: "development"
      },
      env_production: {
        NODE_ENV: "production"
      }
    }
  ]
};
```

Then we run it using pm2 start.

PM2 now runs in cluster mode. Each of those processes runs on a different CPU on my machine, up to the number of cores I have. If we want to run one process per core without having to know how many cores we have, we can pass "max" as the value of the instances setting.

```js
{
  ...
  instances: "max",
  ...
}
```

Let’s see how many cores I have on this machine.

Eight cores! Wow. I’m going to install Subnautica on my Microsoft-issued machine. Don’t tell them I said that.

The advantage of running processes on separate CPUs is that even if one process misbehaves and eats 100% of its CPU, the others can keep running. PM2 can also scale the number of processes up or down as needed.

You can do a lot more with PM2, including monitoring and other ways of dealing with those pesky environment variables.

One more note: for some reason you might want PM2 to run your npm start script instead of your entry file. You can do that by running npm itself as the process and passing `-- start`. The space before "start" is important here.

```shell
pm2 start npm -- start
```

In Azure App Service, PM2 is included in the background by default. If you want to use PM2 in Azure, you don't need to include it in your package.json file. You can just add an ecosystem file and you're good to go.

Good! Now that we know everything there is to know about PM2, let's talk about why you might not want to use it, and why running directly against Node might be fine after all.

Run directly against Node in production

I had some questions about this, so I reached out to Tierney Cyren, who is part of the enormous orange circle of knowledge, especially when it comes to Node.

Tierney points out that there are some disadvantages to using a Node-based process manager such as PM2.

The main reason is that Node shouldn't be used to monitor Node. You don't want to monitor the thing with the thing you're monitoring. It's like letting my teenage son supervise himself on a Friday night: will it end badly? Maybe, maybe not.

Tierney recommends not using a Node process manager to run your application at all. Instead, use something at a higher level that monitors multiple discrete instances of your application. For example, an ideal setup would be a Kubernetes cluster where your application runs in separate containers. Kubernetes can then monitor those containers, bring back any that go down, and report on their health.

In this case, you can run directly against Node because you are monitoring at a higher level.

As it turns out, Azure is already doing exactly that. If we don't push a PM2 ecosystem file to Azure, it will launch the application with the start script from our package.json file, and that script can run directly against Node.

"scripts": {
  "start": "node index.js"
}
Copy the code

In this case, we are running directly against Node, and that's OK. If the application crashes, it will come back. That's because in Azure, your application runs in a container. Azure orchestrates the container and knows when it needs to be restarted.

But there’s still only one instance. It takes a second for a container to come back online after a crash, which means your users may have a few seconds of downtime.

Ideally, you would run multiple containers. The solution is to deploy multiple instances of the application to multiple Azure App Service sites and then load balance them behind a single IP address using Azure Front Door. Front Door knows when a container is down and routes traffic to the healthy instances of the application.

  • Azure Front Door | Microsoft Azure: deliver, protect, and track the performance of your globally distributed applications with Azure Front Door Service.

systemd

Tierney's other suggestion is to run Node with systemd. I don't know systemd very well (or at all), and I've already gotten this wrong once, so I'll use Tierney's own words:

This option is only viable if you have access to Linux in your deployment and control over how Node is started at the service level. If you're running Node.js in a long-running Linux virtual machine, such as an Azure VM, running Node.js with systemd is a good option. If you're only deploying files to a service like Azure App Service or Heroku, or running in a containerized environment like Azure Container Instances, you can skip this option.

  • Running Node.js applications with Systemd — Part 1
  • You’ve written your next great application in Node, and you’re ready to release it. That means you can…

Node.js worker threads

Tierney also wants you to know that worker threads are available in Node, which lets you run your application on multiple "threads" without needing something like PM2. Maybe. I don't know. I haven't read the article.

  • Node.js v11.14.0 Documentation
  • The worker_threads module enables the use of threads that execute JavaScript in parallel. To access it: const worker =…

Be a mature developer

Tierney's final piece of advice is to handle your errors and write tests like a grown-up developer. But who has the time?

The little circle stays forever

Now you know most of what's inside that little blue circle. The rest is just useless trivia about emo bands and beer.

For more information about PM2, Node, and Azure, check out the following resources:

  • pm2.keymetrics.io/
  • Node.js deployment on VS Code
  • Deploy the simple Node site to Azure
