Faster Async Functions and Promises

Asynchronous processing in JavaScript has long had a reputation for being slow, and worse, debugging asynchronous code in real-world scenarios such as Node.js can be a nightmare. This is changing, and this article explains in detail how we made async functions and promises faster in V8 (and, to some extent, in other engines as well), and how the developer experience improved along with them.

Tip: there’s a companion video you can watch alongside this article.

A new approach to asynchronous programming

From Callbacks to Promises to Async functions

Before Promises became an official part of the JavaScript standard, callbacks were heavily used in asynchronous programming. Here’s an example:

function handler(done) {
  validateParams((error) => {
    if (error) return done(error);
    dbQuery((error, dbResults) => {
      if (error) return done(error);
      serviceCall(dbResults, (error, serviceResults) => {
        console.log(serviceResults);
        done(error, serviceResults);
      });
    });
  });
}

Callbacks nested this deeply are often referred to as “callback hell”, because they make the code much harder to read and maintain.

Fortunately, promises are now part of the JavaScript language, and the same logic can be expressed as a promise chain:

function handler() {
  return validateParams()
    .then(dbQuery)
    .then(serviceCall)
    .then(result => {
      console.log(result);
      return result;
    });
}

More recently, JavaScript gained support for async functions, which let you write asynchronous code as if it were synchronous:

async function handler() {
  await validateParams();
  const dbResults = await dbQuery();
  const results = await serviceCall(dbResults);
  console.log(results);
  return results;
}

With async functions the code becomes more concise, and the control flow and data flow are much easier to follow, even though execution is still asynchronous under the hood. (Note that JavaScript is still single-threaded: async functions do not spawn new threads.)

From event listeners to async iterators

ReadableStreams are another very common source of asynchrony in Node.js. Here’s an example:

const http = require('http');

http.createServer((req, res) => {
  let body = '';
  req.setEncoding('utf8');
  req.on('data', (chunk) => {
    body += chunk;
  });
  req.on('end', () => {
    res.write(body);
    res.end();
  });
}).listen(1337);

This code is a bit tricky to follow: the chunks of the stream can only be consumed inside callbacks, and the end of the stream is also signaled via a callback. It’s easy to introduce bugs if you don’t realize that the function returns immediately and that the actual processing has to happen inside the callbacks.

Luckily, a cool new ES2018 feature, async iteration, simplifies this code:

const http = require('http');

http.createServer(async (req, res) => {
  try {
    let body = '';
    req.setEncoding('utf8');
    for await (const chunk of req) {
      body += chunk;
    }
    res.write(body);
    res.end();
  } catch {
    res.statusCode = 500;
    res.end();
  }
}).listen(1337);

Now all of the data-processing logic lives in a single async function: ‘for await…of’ iterates over the chunks instead of handling them in separate ‘data’ and ‘end’ callbacks, and the try-catch block avoids unhandledRejection problems.

You can use these features in production today! Async functions have been fully supported since Node.js 8 (V8 v6.2 / Chrome 62), and async iterators since Node.js 10 (V8 v6.8 / Chrome 68).

Async performance optimization

Between V8 v5.5 (Chrome 55 & Node.js 7) and V8 v6.8 (Chrome 68 & Node.js 10), we worked hard on performance optimizations for asynchronous code, and the results are good enough that you can safely use these new features today.

The doxBee benchmark measures the performance of code that makes heavy use of promises; the chart shows execution time, so lower is better.

The parallel benchmark, on the other hand, measures the performance of code that makes heavy use of Promise.all():

Promise.all() performance improved by a factor of 8×!

However, those are small, synthetic micro-benchmarks; the V8 team cares more about how real user code performs.

This benchmark is based on popular HTTP middleware frameworks that make heavy use of promises and async functions. It measures requests per second, so unlike the previous charts, higher is better. As you can see, performance improved considerably from Node.js 7 (V8 v5.5) to Node.js 10 (V8 v6.8).

These performance improvements are the result of three factors:

  • TurboFan, the new optimizing compiler 🎉
  • Orinoco, the new garbage collector 🚛
  • A Node.js 8 bug that caused await to skip microticks in some cases 🐛

When we enabled TurboFan in Node.js 8, we saw a huge performance boost.

We also introduced a new garbage collector, called Orinoco, which took garbage collection off the main thread, so it helped a lot with response times.

Finally, Node.js 8 shipped a bug that caused await to skip microticks in some cases, which actually improved performance. The bug was an unintentional spec violation, but it later gave us ideas for an optimization. Here’s a little explanation, starting with the following code:

const p = Promise.resolve();

(async () => {
  await p;
  console.log('after:await');
})();

p.then(() => console.log('tick:a'))
 .then(() => console.log('tick:b'));

The code above creates a fulfilled promise p, awaits its result, and also chains two then() handlers onto it. In which order would you expect the console.log calls to run?

Since p is already fulfilled, you might expect it to print ‘after:await’ first and then the two ‘tick’s. And indeed, that is exactly the output you get on Node.js 8:
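after:await
tick:a
tick:b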

Although this behavior seems intuitive, it does not conform to the specification. Node.js 10 implements the correct behavior: the chained then() handlers run first, and only afterwards does the async function continue.
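So on Node.js 10 the same program prints:

tick:a
tick:b
after:await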

This “correct behavior” may seem counterintuitive and even surprising to many JavaScript developers, so it deserves a detailed explanation. Before we dive in, let’s start with some basics.

Tasks vs. Microtasks

JavaScript distinguishes between tasks and microtasks. Tasks handle events such as I/O and timers, and execute one at a time. Microtasks implement deferred execution for async/await and promises, and run at the end of each task: the microtask queue is always drained completely before execution returns to the event loop.
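As a minimal illustration (this snippet is not from the original article, just a sketch of the ordering rules), compare a timer task with a promise microtask:

console.log('script');                                    // runs synchronously

setTimeout(() => console.log('task'), 0);                 // a task, handled by the event loop

Promise.resolve().then(() => console.log('microtask'));   // a microtask

// Prints: script, microtask, task. The microtask queue is drained
// before the event loop moves on to the next task.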

Learn more in Jake Archibald’s article on tasks, microtasks, queues, and schedules in the browser. The task model in Node.js is very similar.

Async functions

According to MDN, an async function is a function that executes asynchronously and implicitly returns a promise as a result. From a developer’s perspective, async functions make asynchronous code look like synchronous code.

One of the simplest async functions:

async function computeAnswer() {
  return 42;
}
Copy the code

The function returns a promise, which you can use just like any other promise.

const p = computeAnswer();
// → Promise

p.then(console.log);
// prints 42 on the next turn

You can only observe the value of the promise p after the next microtask tick. In other words, the code above is semantically equivalent to using Promise.resolve with the value:

function computeAnswer() {
  return Promise.resolve(42);
}

The real power of async functions comes from await expressions, which suspend execution of the function until a promise is resolved, and resume it once the promise is fulfilled. The value of the await expression is the fulfillment value of that promise. Here’s an example that illustrates this:

async function fetchStatus(url) {
  const response = await fetch(url);
  return response.status;
}

Execution of fetchStatus suspends at the await and resumes once the fetch promise is fulfilled. This is more or less equivalent to chaining a handler onto the promise returned by fetch:

function fetchStatus(url) {
  return fetch(url).then(response => response.status);
}

The chained handler contains the code that follows the await in the async function.

Normally you await a promise, but await actually accepts any JavaScript value. If the value is not a promise, it is converted into one, so even await 42 works:

async function foo() {
  const v = await 42;
  return v;
}

const p = foo();
// → Promise

p.then(console.log);
// prints `42` eventually

Even more interestingly, await works with any “thenable”, i.e. any object with a then method, even if it isn’t a real promise. So you can implement fun things like a Sleep class that records the time actually spent sleeping:

class Sleep {
  constructor(timeout) {
    this.timeout = timeout;
  }
  then(resolve, reject) {
    const startTime = Date.now();
    setTimeout(() => resolve(Date.now() - startTime),
               this.timeout);
  }
}

(async () => {
  const actualTime = await new Sleep(1000);
  console.log(actualTime);
})();

Let’s take a look at how V8 handles await under the hood, following the specification. Here is a simple async function foo:

async function foo(v) {
  const w = await v;
  return w;
}

When called, it wraps its argument v in a promise, suspends until that promise is fulfilled, assigns the fulfillment value to w, and then resolves the async function’s returned promise with that value.

Await under the hood

First, V8 marks the function as resumable, meaning its execution can be suspended and later resumed (at await points). Then it creates the so-called implicit_promise, the promise that is returned to the caller and is eventually resolved with the value produced by the async function.

Then comes the interesting part: the actual await. First, the value being awaited is wrapped in a promise. Then handlers are attached to that promise so that the function can be resumed once the promise is fulfilled, after which the async function is suspended and implicit_promise is returned to the caller. Once the promise is fulfilled, the async function resumes with the value w taken from the promise, and finally implicit_promise is resolved with w.

In short, the initial steps of await v are the following (a rough sketch is shown after the list):

  1. Wrap v, the value passed to await, in a promise.
  2. Attach handlers to that promise so the function can be resumed later.
  3. Suspend the async function and return implicit_promise to the caller.
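Expressed as a runnable JavaScript approximation (this is only a sketch of the old semantics, not actual engine code; implicit_promise and the wrapper promise are modeled with ordinary Promise objects, and the engine-internal throwaway promise has no observable counterpart here):

function fooOldAwait(v) {
  // implicit_promise: what foo() hands back to its caller.
  return new Promise((resolveImplicit, rejectImplicit) => {
    // Step 1: wrap v in a *new* promise, even if v is already a promise.
    const wrapper = new Promise((resolve) => resolve(v));

    // Step 2: attach handlers that resume the function body with w
    // (or rethrow a rejection into it).
    wrapper.then(
      (w) => resolveImplicit(w),
      (error) => rejectImplicit(error)
    );

    // Step 3: "suspend": nothing more happens synchronously; the caller
    // continues with implicit_promise in hand.
  });
}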

Let’s walk through this step by step, assuming that the value we await is itself a promise that is eventually fulfilled with 42. The engine first creates a new wrapper promise and resolves it with the awaited value; per the spec, this chaining is deferred to the next turn via a PromiseResolveThenableJob.

The engine then creates yet another promise, called throwaway. It gets that name because nothing is ever chained onto it; it exists purely for engine-internal bookkeeping. This throwaway promise is chained onto the wrapper promise together with the handlers that will resume the async function, using performPromiseThen, which is essentially what Promise.prototype.then() does behind the scenes. Finally, the async function is suspended and control returns to the caller.

The caller keeps executing, and eventually the call stack becomes empty. Then the engine starts running the microtasks: first it runs the previously scheduled PromiseResolveThenableJob, which schedules a PromiseReactionJob whose only job is to forward the awaited value onto the wrapper promise. The engine then returns to the microtask queue, since the microtask queue has to be emptied before returning to the event loop.

Next comes that PromiseReactionJob, which propagates the fulfillment of the promise we are awaiting (fulfilled with 42 in this example) and schedules the reaction onto the throwaway promise. The engine then returns to the microtask queue once more, since one final microtask remains.

Now this second PromiseReactionJob propagates the resolution to the throwaway promise and resumes the suspended execution of the async function, finally returning the value 42 from the await.

To summarize: for each await the engine has to create two additional promises (even if the awaited value is already a promise) and needs at least three microtask queue ticks. Who would have thought that a simple await could introduce that much overhead?!

Let’s look at where that overhead comes from. The first step creates the wrapper promise, and the second step resolves that wrapper with the awaited value v. Together they account for one extra promise and two of the three microticks. That’s quite expensive when v is already a promise, which is the common case; only in the unusual case of awaiting a plain value, such as await 42, does the value actually need to be wrapped in a promise.

This is exactly what the promiseResolve operation addresses: it only wraps the value in a promise when necessary:

If the argument is already a promise, it is returned unchanged; only values that are not promises are wrapped. With this change, an extra promise and two microticks are saved whenever the awaited value is already a promise. This behavior can be enabled in V8 (starting with v7.1) behind the --harmony-await-optimization flag, and we have also proposed it to ECMAScript, so we expect it to be merged into the specification soon.
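A minimal sketch of this fast path in plain JavaScript (only an illustration of the idea; the real operation lives in the spec and the engine, not in user code):

function promiseResolve(v) {
  // A genuine promise is passed through as-is: no extra promise,
  // no extra microticks.
  if (v instanceof Promise && v.constructor === Promise) {
    return v;
  }
  // Plain values (like 42) and foreign thenables still get wrapped.
  return new Promise((resolve) => resolve(v));
}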

Here is the simplified await execution:
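Sketched in the same JavaScript approximation as before (again, not engine code; it reuses the promiseResolve sketch from above and still leaves out the engine-internal throwaway promise):

function fooOptimizedAwait(v) {
  return new Promise((resolveImplicit, rejectImplicit) => {
    // If v is already a promise it is passed along unchanged;
    // only plain values get wrapped.
    const promise = promiseResolve(v);

    // A single reaction job resumes the async function on the next tick.
    promise.then(
      (w) => resolveImplicit(w),
      (error) => rejectImplicit(error)
    );
  });
}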

Thanks to promiseResolve, the promise v is now simply passed along unchanged in the common case. As before, the engine still creates a throwaway promise and schedules a PromiseReactionJob to resume the async function on the next microtask tick, then suspends the function and returns implicit_promise to the caller.

When the caller eventually finishes and the stack is empty, the engine runs the microtask queue and executes the PromiseReactionJob, which propagates the result of v to the throwaway promise and resumes the async function, yielding 42 from the await.

It may still feel wrong that the engine has to create the throwaway promise at all, given that it is purely internal. As it turns out, the throwaway promise only exists to satisfy the API constraints of the spec-internal performPromiseThen operation.

A recently proposed change to ECMAScript addresses this as well, so that the engine no longer needs to create the throwaway promise for await in most cases.

Comparing the performance of await in Node.js 10 with the optimized version (expected to ship in Node.js 12):

async/await now outperforms hand-written promise code. The key point is that we significantly reduced the unnecessary overhead of async functions, not just in V8 but in other JavaScript engines as well.

Developer experience improvements

Besides performance, JavaScript developers also care about diagnosing and fixing problems, which has never been easy with asynchronous code. Chrome DevTools now supports async stack traces:

This is a useful feature during local development, but it doesn’t help much once the application is deployed. During post-mortem debugging you only see the Error#stack output in your log files, and that doesn’t tell you anything about the asynchronous parts.

We’ve recently been working on zero-cost async stack traces, which enrich the Error#stack property with async function calls. “Zero cost” sounds exciting, right? How can this be zero cost when the Chrome DevTools feature comes with significant overhead? Consider this example, where foo calls bar, and bar throws an exception after awaiting a promise:

async function foo() {
  await bar();
  return 42;
}

async function bar() {
  await Promise.resolve();
  throw new Error('BEEP BEEP');
}

foo().catch(error => console.log(error.stack));

Running this code on Node.js 8 or Node.js 10 produces the following output:

$ node index.js
Error: BEEP BEEP
    at bar (index.js:8:9)
    at process._tickCallback (internal/process/next_tick.js:68:7)
    at Function.Module.runMain (internal/modules/cjs/loader.js:745:11)
    at startup (internal/bootstrap/node.js:266:19)
    at bootstrapNodeJSCore (internal/bootstrap/node.js:595:3)

Note that foo() itself does not appear in the stack trace at all, even though calling foo() is what led to the error. This makes it hard for developers to pinpoint problems, especially if the application is deployed in a cloud container.

Interestingly, the engine knows where it has to continue once bar is done: right after the await in foo, which is exactly where foo was suspended. The engine can use this information to reconstruct parts of the asynchronous stack trace. With this optimization, the output becomes:

$ node --async-stack-traces index.js
Error: BEEP BEEP
    at bar (index.js:8:9)
    at process._tickCallback (internal/process/next_tick.js:68:7)
    at Function.Module.runMain (internal/modules/cjs/loader.js:745:11)
    at startup (internal/bootstrap/node.js:266:19)
    at bootstrapNodeJSCore (internal/bootstrap/node.js:595:3)
    at async foo (index.js:2:3)

In the stack trace, the topmost function comes first, followed by the rest of the synchronous stack trace, followed by the asynchronous call to bar inside foo. This feature is enabled behind V8’s --async-stack-traces flag.

However, if you compare this with the Chrome DevTools output above, you’ll notice that the actual call site of foo is missing from the asynchronous part of the stack trace. This approach exploits the fact that for await the resume location is identical to the suspend location, which is not the case for Promise#then() or Promise#catch(). See Mathias Bynens’ article “await beats Promise#then()” for more background.

Conclusion

Two significant optimizations made async functions faster:

  • the removal of two extra microticks
  • the removal of the throwaway promise

On top of that, we improved the debugging experience with zero-cost async stack traces, which work with await in async functions as well as with Promise.all().

We also have some performance advice for JavaScript developers:

Favor async functions and await over hand-written promise code, and stick to the native promises provided by the JavaScript engine instead of rolling your own, so that you benefit from these optimizations.
