Following on from the previous article on Koa and Webpack.

This article is about creating a stable Web server with Node. If you are thinking of tools like PM2, you can set PM2 aside for now and read on. If anything is out of place, please point it out.

What problems do you need to solve to create a stable Web server?

  • How to utilize multi-core CPU resources.
  • Survival state management for multiple worker processes.
  • Smooth restart of the worker process.
  • Process error handling.
  • Limiting how often worker processes restart.

How do I utilize multi-core CPU resources

There are several ways to leverage multi-core CPU resources.

  • Deploy multiple Node services on a single server, listening on different ports, and load-balance them with Nginx.

    This approach is usually used across multiple machines in a server cluster, so it is not what we will do here.

  • Start a master process on a single machine and fork multiple child processes. After the master process sends its handle to the child process, it closes the listening port and lets the child process handle the request.

    This is also a common practice in single-node clusters.

Fortunately, with the addition of the cluster module in Node v0.8, we don't have to use child_process and handle all the clustering details ourselves, step by step.

So this article is based on the cluster module to solve the above problems.

Start by creating a Web server, using the Koa framework on the Node side. If you haven't used it before, take a look at Koa first.

The following code is the configuration required to create a basic Web service. Those who have read the previous article can skip this code and go straight to the next section.

const Koa = require('koa');
const app = new Koa();
const koaNunjucks = require('koa-nunjucks-2');
const koaStatic = require('koa-static');
const KoaRouter = require('koa-router');
const router = new KoaRouter();
const path = require('path');
const colors = require('colors');
const compress = require('koa-compress');
const AngelLogger = require('../angel-logger');
const cluster = require('cluster');
const http = require('http');

class AngelConfig {
  constructor(options) {
    this.config = require(options.configUrl);
    this.app = app;
    this.router = require(options.routerUrl);
    this.setDefaultConfig();
    this.setServerConfig();
  }

  setDefaultConfig() {
    // Static file root directory
    this.config.root = this.config.root ? this.config.root : path.join(process.cwd(), 'app/static');
    // Default static configuration
    this.config.static = this.config.static ? this.config.static : {};
  }

  setServerConfig() {
    this.port = this.config.listen.port;
    // Cookie signature verification
    this.app.keys = this.config.keys ? this.config.keys : this.app.keys;
  }
}

// Start the server
class AngelServer extends AngelConfig {
  constructor(options) {
    super(options);
    this.startService();
  }

  startService() {
    // Enable gzip compression
    this.app.use(compress(this.config.compress));

    // Template engine
    this.app.use(koaNunjucks({
      ext: 'html',
      path: path.join(process.cwd(), 'app/views'),
      nunjucksConfig: {
        trimBlocks: true
      }
    }));

    // Attach a logger to the context
    this.app.use(async (ctx, next) => {
      ctx.logger = new AngelLogger().logger;
      await next();
    });

    // Access logs
    this.app.use(async (ctx, next) => {
      await next();
      const rt = ctx.response.get('X-Response-Time');
      ctx.logger.info(`angel ${ctx.method}`.green, ` ${ctx.url} - `, `${rt}`.green);
    });

    // Response time
    this.app.use(async (ctx, next) => {
      const start = Date.now();
      await next();
      const ms = Date.now() - start;
      ctx.set('X-Response-Time', `${ms}ms`);
    });

    this.app.use(router.routes())
      .use(router.allowedMethods());

    // Static resources
    this.app.use(koaStatic(this.config.root, this.config.static));

    // Start the server
    this.server = this.app.listen(this.port, () => {
      console.log('The current server is started, please visit', ` http://127.0.0.1:${this.port}`.green);
      this.router({
        router,
        config: this.config,
        app: this.app
      });
    });
  }
}

module.exports = AngelServer;


After starting the server, the return value of this.app.listen is assigned to this.server; we'll use that later.

Normally, when we build a single-machine cluster, the number of processes we fork equals the number of CPU cores on the machine. You can fork more, but it is generally not recommended.

const cluster = require('cluster');
const { cpus } = require('os');
const AngelServer = require('../server/index.js');
const path = require('path');
let cpusNum = cpus().length;

// Timeout timer
let timeout = null;

// Maximum number of restarts
let limit = 10;
// Time window
let during = 60000;
let restart = [];

// Master process
if (cluster.isMaster) {
  // Fork multiple worker processes
  for (let i = 0; i < cpusNum; i++) {
    createServer();
  }
} else {
  // Worker process
  let angelServer = new AngelServer({
    routerUrl: path.join(process.cwd(), 'app/router.js'), // Routing address
    configUrl: path.join(process.cwd(), 'config/config.default.js') // Read config/config.default.js by default
  });
}

// master.js
// Create a worker process
function createServer() {
  let worker = cluster.fork();
  console.log(`The worker process has started, pid: ${worker.process.pid}`);
}


Which role the current process plays is determined by checking cluster.isMaster and cluster.isWorker.

Writing the master and worker code together may not be easy to read, but it is the official Node style. There is also a clearer style using cluster.setupMaster, which will not be covered in detail here.

Execute the code through Node to see what happens.

First it checks cluster.isMaster, then loops calling createServer() to fork four worker processes (one per CPU core), printing each worker's PID.

When cluster starts, it creates a TCP server internally, and when cluster.fork() spawns a child process, the file descriptor of that TCP server socket is sent to the worker. If the worker calls listen() on a network port, it gets that file descriptor and reuses the port via SO_REUSEADDR, which allows multiple child processes to share one port.

Process management, smooth restarts, and error handling.

In general, the master process is stable and the worker process is not so stable.

Because the worker process handles the business logic, we need to add an automatic restart capability to it: if a child process errors or blocks for reasons outside the business logic's control, we should stop that process from receiving any requests and then shut it down gracefully.

// Timeout timer
let timeout = null;

// Maximum number of restarts
let limit = 10;
// Time window
let during = 60000;
let restart = [];

if (cluster.isMaster) {
  // Fork multiple worker processes
  for (let i = 0; i < cpusNum; i++) {
    createServer();
  }
} else {
  // Worker process
  let angelServer = new AngelServer({
    routerUrl: path.join(process.cwd(), 'app/router.js'), // Routing address
    configUrl: path.join(process.cwd(), 'config/config.default.js') // Read config/config.default.js by default
  });

  // Let the server exit gracefully
  angelServer.app.on('error', err => {
    // Send a suicide signal
    process.send({ act: 'suicide' });
    cluster.worker.disconnect();
    angelServer.server.close(() => {
      // Exit the process after all existing connections are closed
      process.exit(1);
    });
    // Force the process to exit after 5 seconds
    timeout = setTimeout(() => {
      process.exit(1);
    }, 5000);
  });
}

// master.js
// Create a worker process
function createServer() {
  let worker = cluster.fork();
  console.log(`The worker process has started, pid: ${worker.process.pid}`);
  // If a child process sends a suicide signal, fork a new process immediately.
  // Smooth restart: fork the replacement first, let the old worker die later.
  worker.on('message', (msg) => {
    // If the message is a suicide signal, fork a replacement
    if (msg.act == 'suicide') {
      createServer();
    }
  });
  // Clear the timer
  worker.on('disconnect', () => {
    clearTimeout(timeout);
  });
}


After instantiating AngelServer, we can access the Koa instance via angelServer.app and listen for Koa's error event.

When an error is caught, process.send({ act: 'suicide' }) sends a suicide signal to the master. Then cluster.worker.disconnect() is called; it closes all servers in the worker, waits for their 'close' events, and then closes the IPC channel.

Calling angelServer.server.close() stops the server from accepting new connections; once all existing connections have closed, the IPC channel to the worker is closed and the worker process can die gracefully.

If the process has not exited within 5 seconds, it is forcibly stopped.

Koa's app.listen is just http.createServer(app.callback()).listen(), so the returned server exposes the close method.

In the master, we listen for messages from each worker; if a message is a suicide signal, a new process is forked first. We also listen for the disconnect event to clear the timer.

Normally, we should also listen for the process's uncaughtException event. If an exception is not caught by any JavaScript handler and propagates back to the event loop along the call path, 'uncaughtException' is emitted.

But Koa already wraps middleware in try/catch, so errors thrown in middleware are handled by Koa and never trigger uncaughtException.
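For errors thrown outside Koa's middleware chain (timers, event callbacks, and so on), a last-resort handler looks like this minimal sketch:

```javascript
// Last-resort handler: log, clean up, then exit —
// after an uncaught exception the process state is unreliable.
process.on('uncaughtException', (err) => {
  console.error('uncaught exception:', err.message);
  // In a real worker you would send the suicide signal and
  // shut down gracefully here, e.g. process.exit(1).
});

// An async throw escapes any try/catch on the call path,
// so it reaches the handler above instead of crashing the process:
setTimeout(() => {
  throw new Error('boom');
}, 10);
```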

Special thanks here to brother Deep Sea, who pointed me in the right direction in the group chat late at night.

Setting a restart limit

Notifying the master process with a suicide signal ensures that new connections are always served by a live process, but there is still an extreme case: worker processes must not be allowed to restart indefinitely and too frequently.

Therefore, if the restart limit is exceeded, a giveup event is emitted instead.

// Check whether restarts are too frequent; beyond a certain number, give up.
function isRestartNum() {
  // Record the restart time
  let time = Date.now();
  let length = restart.push(time);
  if (length > limit) {
    // Keep only the last 10 timestamps
    restart = restart.slice(-limit);
  }
  // Too frequent: the last `limit` restarts all happened within one minute
  return restart.length >= limit && restart[restart.length - 1] - restart[0] < during;
}

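To see the sliding window in action, here is a small self-contained trace of the same logic, with fake timestamps injected as a parameter for illustration:

```javascript
// Standalone version of the limiter with injectable timestamps.
let limit = 10;
let during = 60000; // 1-minute window
let restart = [];

function isRestartNum(now) {
  let length = restart.push(now);
  if (length > limit) {
    restart = restart.slice(-limit); // keep only the last `limit` timestamps
  }
  // Too frequent: the last `limit` restarts all fall inside the window
  return restart.length >= limit && restart[restart.length - 1] - restart[0] < during;
}

// 9 restarts spread over 9 minutes: never too frequent.
for (let i = 0; i < 9; i++) {
  console.log(isRestartNum(i * 60000)); // false every time
}

// A burst of 10 rapid restarts fills the window and trips the limit.
let tripped = false;
for (let i = 0; i < 10; i++) {
  tripped = isRestartNum(9 * 60000 + i); // 10 calls within a few ms
}
console.log(tripped); // true
```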

Change createServer to

// master.js
// Create a worker process
function createServer() {
  // Check whether restarts are too frequent
  if (isRestartNum()) {
    process.emit('giveup', limit, during);
    return;
  }
  let worker = cluster.fork();
  console.log(`The worker process has started, pid: ${worker.process.pid}`);
  // If a child process sends a suicide signal, fork a new process immediately.
  // Smooth restart: fork the replacement first, let the old worker die later.
  worker.on('message', (msg) => {
    // If the message is a suicide signal, fork a replacement
    if (msg.act == 'suicide') {
      createServer();
    }
  });
  // Clear the timer
  worker.on('disconnect', () => {
    clearTimeout(timeout);
  });
}


Changing the load balancing policy

The default is operating system preemption, where idle processes compete for incoming requests to be served.

Whether a process is busy is determined by both CPU and I/O, but only CPU load affects the preemption.

For some services, I/O may be busy while the CPU is idle, resulting in load imbalance. So we can use another strategy Node provides: round-robin scheduling.

cluster.schedulingPolicy = cluster.SCHED_RR;

Finally

Of course, creating a stable Web service involves much more than this, such as inter-process communication and data sharing.

This article is just a reference; if anything here is inaccurate, please kindly point it out.

See Github for the full code.

Resources: Simple nodejs