Maya Lekova and Benedikt Meurer

Translator: Jothy (UC International R&D)




Asynchronous processing in JavaScript has long had a reputation for not being particularly fast. To make matters worse, debugging live JavaScript applications – especially Node.js servers – is no easy task, particularly when it comes to asynchronous programming. Fortunately, that is changing. This article explores how we optimized asynchronous functions and promises in V8 (and, to some extent, in other JavaScript engines as well), and describes how we improved the debugging experience for asynchronous code.


A new approach to asynchronous programming


>> From callbacks to promises to asynchronous functions <<

Before promises were part of the JavaScript language, asynchrony was typically handled with callbacks, especially in Node.js. For example 🌰 :
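(A minimal sketch of such callback-based code; validateParams, dbQuery, and serviceCall are hypothetical helpers following the Node.js error-first callback convention.)

    function handler(done) {
      validateParams((error) => {
        if (error) return done(error);
        dbQuery((error, dbResults) => {
          if (error) return done(error);
          serviceCall(dbResults, (error, serviceResults) => {
            console.log(serviceResults);
            done(error, serviceResults);
          });
        });
      });
    }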

We often refer to this pattern of deeply nested callbacks as “callback hell,” because the code is difficult to read and maintain.

Fortunately, now that promises are part of JavaScript, we can write the same code in a more elegant and maintainable way:
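(The same hypothetical handler, sketched with promises; validateParams, dbQuery, and serviceCall are now assumed to return promises.)

    function handler() {
      return validateParams()
        .then(dbQuery)
        .then(serviceCall)
        .then((serviceResults) => {
          console.log(serviceResults);
          return serviceResults;
        });
    }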

More recently, JavaScript has added support for asynchronous functions. We can now implement the above asynchronous code in a similar way to synchronous code:
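(And the same sketch once more, written as an asynchronous function.)

    async function handler() {
      const validParams = await validateParams();
      const dbResults = await dbQuery(validParams);
      const serviceResults = await serviceCall(dbResults);
      console.log(serviceResults);
      return serviceResults;
    }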

With asynchronous functions, the code still executes asynchronously, but it is much cleaner, and the control flow and data flow are much easier to follow. (Note that JavaScript still executes on a single thread: asynchronous functions do not create additional physical threads.)


>> From event listeners and callbacks to asynchronous iteration <<

Another asynchronous paradigm that is particularly common in Node.js is ReadableStreams. Take an example:
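(A sketch of a small HTTP echo server that consumes the request stream via 'data' and 'end' callbacks; the port number 1337 is arbitrary.)

    const http = require('http');

    http.createServer((req, res) => {
      let body = '';
      req.setEncoding('utf8');
      // The incoming data can only be processed inside the 'data' callback…
      req.on('data', (chunk) => {
        body += chunk;
      });
      // …and the end-of-stream signal also arrives via a callback.
      req.on('end', () => {
        res.end(body);
      });
    }).listen(1337);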

This code is a bit tricky to understand: the incoming data can only be processed inside the ‘data’ callback, and the end-of-stream signal also arrives inside a callback. It’s easy to introduce bugs here if you don’t realize that the function returns immediately and that the actual processing only happens later, when the callbacks fire.


Fortunately, a cool new ES2018 feature, asynchronous iteration, can simplify this code:
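(The same echo server, sketched with for await…of over the request stream, which is an async iterable in Node.js 10 and later.)

    const http = require('http');

    http.createServer(async (req, res) => {
      try {
        let body = '';
        req.setEncoding('utf8');
        // Consume the request stream with asynchronous iteration.
        for await (const chunk of req) {
          body += chunk;
        }
        res.end(body);
      } catch (err) {
        // Handle stream errors here to avoid unhandled rejections.
        res.statusCode = 500;
        res.end();
      }
    }).listen(1337);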


Instead of splitting the logic that handles the actual request across two different callbacks – the ‘data’ and ‘end’ callbacks – we can now put everything into a single asynchronous function and use the new for await…of loop to iterate over the chunks asynchronously. We also added a try-catch block to avoid unhandledRejection problems [1].


You can use these new features in production today! Asynchronous functions are fully supported starting with Node.js 8 (V8 v6.2 / Chrome 62), and asynchronous iterators and generators are fully supported starting with Node.js 10 (V8 v6.8 / Chrome 68)!




Asynchronous performance improvement

We have significantly improved the performance of asynchronous code between V8 v5.5 (Chrome 55 and Node.js 7) and V8 v6.8 (Chrome 68 and Node.js 10). Developers can safely use the new programming paradigms without worrying about speed.


The figure above shows the doxbee benchmark, which measures the performance of promise-heavy code. Note that the chart shows execution time, so lower is better.

The results of the parallel benchmark, which specifically stresses the performance of Promise.all(), are even more exciting:

We improved the performance of Promise.all() by a factor of eight!

However, the benchmarks above are synthetic microbenchmarks. The V8 team is more interested in how these optimizations affect the real-world performance of actual user code.

The chart above shows the performance of some popular HTTP middleware frameworks that make heavy use of promises and asynchronous functions. Note that this chart shows requests per second, so unlike the previous charts, higher is better. The performance of these frameworks improved significantly between Node.js 7 (V8 v5.5) and Node.js 10 (V8 v6.8).


These performance improvements are the result of three key achievements:

  • TurboFan, the new optimizing compiler 🎉

  • Orinoco, the new garbage collector 🚛

  • Node.js 8 bug causing await to skip microticks 🐛


With TurboFan enabled in Node.js 8, we saw an overall performance boost.

We’ve also been working on a new garbage collector, called Orinoco, which moves garbage collection work off the main thread and thus significantly improves request processing.

And last but not least, there was a handy bug in Node.js 8 that caused await to skip microticks in some cases, which happened to result in better performance. The bug started out as an unintentional violation of the specification, but it later gave us the idea for an optimization. Let’s start by explaining the buggy behavior:
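(A minimal reproduction along the lines described below; the labels 'tick:a' and 'tick:b' are illustrative.)

    const p = Promise.resolve();

    (async () => {
      await p;
      console.log('after:await');
    })();

    p.then(() => console.log('tick:a'))
     .then(() => console.log('tick:b'));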

The program above creates a fulfilled promise p, awaits its result, and also chains two handlers onto it. In which order would you expect the console.log calls to be executed?


Since p is already fulfilled, you might intuitively expect it to print ‘after:await’ first and only then the ticks. In fact, that’s what you get in Node.js 8:
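Output of the sketch above on Node.js 8:

    after:await
    tick:a
    tick:b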



The await bug in Node.js 8


Although this behavior may seem intuitive, it is not correct according to the specification. Node.js 10 implements the correct behavior: first execute the chained handlers, and only afterwards continue the asynchronous function.
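Output of the same sketch on Node.js 10, matching the specification:

    tick:a
    tick:b
    after:await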

Node.js 10 no longer has the await bug


This “correct behavior” is arguably not immediately obvious and came as a surprise to many JavaScript developers 🐳, so it deserves some explanation. Before we dive into the wonderful world of promises and asynchronous functions, let’s look at some of the foundations.



>> Tasks vs. microtasks <<

JavaScript distinguishes between tasks and microtasks. Tasks handle events such as I/O and timers, and execute one at a time. Microtasks implement deferred execution for async/await and promises, and execute at the end of each task. The microtask queue is always emptied before execution returns to the event loop.
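(A small illustration of the ordering: setTimeout schedules a task, while Promise.prototype.then schedules a microtask.)

    console.log('sync');

    // Queued as a task (runs on a later turn of the event loop).
    setTimeout(() => console.log('task'), 0);

    // Queued as a microtask (runs as soon as the current task finishes).
    Promise.resolve().then(() => console.log('microtask'));

    // Prints: sync, microtask, task – the microtask queue is drained
    // before the event loop moves on to the next task.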


The difference between tasks and microtasks


See Jake Archibald’s explanation of tasks, microtasks, queues, and schedules in the browser for more detail. The task model in Node.js is very similar.


Article Address:

https://jakearchibald.com/2015/tasks-microtasks-queues-and-schedules/


>> Asynchronous functions <<

MDN defines an asynchronous function as a function that uses an implicit promise to perform an asynchronous operation and return its result. Asynchronous functions are designed to make asynchronous code look like synchronous code and reduce the complexity of asynchronous processing for developers.


The simplest asynchronous function looks like this:
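(A minimal sketch; the name computeAnswer is arbitrary.)

    async function computeAnswer() {
      return 42;
    }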

When called, it returns a promise, and you can get its value just as you would any other promise.
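(Continuing the sketch above:)

    const p = computeAnswer();
    // → Promise

    p.then(console.log);
    // eventually prints 42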

However, you only get the value of this promise the next time microtasks run. In other words, the program above is semantically equivalent to using Promise.resolve with the value:
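(Roughly speaking:)

    function computeAnswer() {
      return Promise.resolve(42);
    }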

The real power of asynchronous functions comes from await expressions, which pause the execution of the function until a promise settles and then resume it. The value of the await expression is the fulfilled value of the promise. The following example shows what that means:
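(A sketch using the standard fetch API; fetchStatus is an illustrative name.)

    async function fetchStatus(url) {
      const response = await fetch(url);
      return response.status;
    }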

fetchStatus pauses at the await and resumes when the fetch promise settles. This is more or less equivalent to attaching a handler to the promise returned by fetch:
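(Roughly equivalent, written with an explicit handler:)

    function fetchStatus(url) {
      return fetch(url).then(response => response.status);
    }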

That handler contains the code that follows the await in the asynchronous function.


Normally you would await a Promise, but you can await any JavaScript value. Even if the expression after await is not a promise, it is converted to a promise. This means you can await 42 as well if you want to:
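(For instance:)

    async function foo() {
      const v = await 42;
      return v;
    }

    const p = foo();
    // → Promise

    p.then(console.log);
    // eventually prints 42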

More interestingly, await works with any “thenable”, that is, any object with a then method, even if it is not a real promise. So you can do interesting things with it, such as an asynchronous sleep that measures the time actually spent sleeping:
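(A sketch of such a thenable-based sleep; the class name Sleep is illustrative.)

    class Sleep {
      constructor(timeout) {
        this.timeout = timeout;
      }
      // Any object with a then method is a "thenable" and can be awaited.
      then(resolve, reject) {
        const startTime = Date.now();
        setTimeout(() => resolve(Date.now() - startTime), this.timeout);
      }
    }

    (async () => {
      const actualTime = await new Sleep(1000);
      console.log(actualTime); // e.g. 1005 – the time actually spent sleeping
    })();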

Let’s see what the V8 engine does with await as per the specification. Here is a simple asynchronous function foo:
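(For example:)

    async function foo(v) {
      const w = await v;
      return w;
    }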

When foo is called, the argument v is wrapped in a promise, and execution of the asynchronous function is suspended until that promise settles. Once it does, execution of the function resumes and w is assigned the fulfilled value of the promise. The asynchronous function then returns this value.


>> How does V8 handle await? <<

First, V8 marks the function as resumable, which means that its execution can be suspended and later resumed (at await points). It then creates a so-called implicit_promise, which is the promise that is returned when the asynchronous function is invoked and that is eventually resolved with the value produced by the async function.

Comparison of a simple asynchronous function and what the engine turns it into



Then comes the interesting part: the actual await. First, the value passed to await is wrapped in a promise. Then handlers are attached to this wrapped promise so that the asynchronous function can be resumed once the promise settles, after which execution of the async function is suspended and the implicit_promise is returned to the caller. Once the promise is fulfilled, execution of the async function resumes with the promise’s value w, and the implicit_promise is resolved with w.


In a nutshell, the initial steps of await v are:

1. Wrap the value v passed to await in a promise.

2. Attach handlers to that promise so that the asynchronous function can be resumed later.

3. Suspend the asynchronous function and return the implicit_promise to the caller.


Let’s go through this step by step. Suppose the thing being awaited is a promise that has already been fulfilled with the value 42. The engine creates a new promise and resolves it with the awaited value. This defers the chaining of the two promises to the next turn, expressed via what the specification calls a PromiseResolveThenableJob.


The engine then creates another promise, called throwaway. It is called that because nothing is ever chained onto it – it exists purely inside the engine. The throwaway promise is then chained onto the wrapper promise, with the appropriate handlers to resume the asynchronous function. This performPromiseThen operation is essentially what Promise.prototype.then() does behind the scenes. Finally, execution of the asynchronous function is suspended and control returns to the caller.



The caller continues executing until the call stack is empty. Then the JavaScript engine starts running microtasks: it runs the previously scheduled PromiseResolveThenableJob, which schedules a new PromiseReactionJob to chain the wrapper promise onto the value passed to await. The engine then returns to processing the microtask queue, since the microtask queue must be emptied before continuing with the main event loop.



Next up is the PromiseReactionJob, which fulfills the wrapper promise with the value of the promise we are awaiting – 42 in this case – and schedules the reaction onto the throwaway promise. The engine then returns to the microtask queue once more, since one final microtask remains to be processed.





This second PromiseReactionJob propagates the resolution to the throwaway promise and resumes the suspended asynchronous function, returning the value 42 from the await.



The cost of await


To sum up, for each await the engine must create two additional promises (even if the right-hand side is already a promise), and it needs at least three microtask queue ticks. Who knew that a single await expression could cause so much overhead?!

Let’s look at where this overhead comes from. The first step creates the wrapper promise; the second step immediately resolves that wrapper promise with the awaited value v. Together they account for one of the additional promises, plus two of the three microticks. That is quite expensive when v is already a promise, which is the common case, since applications normally await promises. In the unlikely case that a developer awaits a plain value such as 42, the engine still needs to wrap it in a promise.


As it turns out, the specification already has a promiseResolve operation that only performs the wrapping when necessary:
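(Expressed as a rough JavaScript sketch; the real operation lives inside the engine.)

    function promiseResolve(v) {
      // Values that are already native promises are passed through unchanged…
      if (v instanceof Promise && v.constructor === Promise) {
        return v;
      }
      // …everything else is wrapped in a new promise.
      return new Promise((resolve) => resolve(v));
    }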

This operation returns promises unchanged and only wraps other values into promises when necessary. This saves one of the additional promises, plus two ticks on the microtask queue, in the common case that the value passed to await is already a promise. The new behavior is available behind V8’s --harmony-await-optimization flag (starting with V8 v7.1). We have also proposed this change to the ECMAScript specification; the patch is expected to be merged once we are sure that it is web-compatible.


The following shows how the new and improved await works behind the scenes, step by step:


Let’s assume again that we await a promise that has already been fulfilled with 42. Thanks to the magic of promiseResolve, the promise now simply refers to the same promise v, so there is nothing to do in this step. Afterwards the engine continues exactly as before: it creates the throwaway promise, schedules a PromiseReactionJob to resume the asynchronous function on the next tick of the microtask queue, suspends execution of the function, and returns to the caller.



Finally, when all JavaScript execution is done, the engine starts running microtasks and executes the PromiseReactionJob. This job propagates the resolution of the promise to the throwaway promise and resumes execution of the asynchronous function, yielding 42 from the await.



Summary of the reduction in await overhead


This optimization avoids the need to create a wrapper promise when the value passed to await is already a promise, and in that case we go from a minimum of three microticks down to just one. This behavior is similar to what Node.js 8 did, except that now it is no longer a bug – it is an optimization that is being standardized!


It still feels wrong that the engine has to create the throwaway promise at all, even though it is entirely internal to the engine. As it turns out, the throwaway promise only existed to satisfy the API constraints of the specification’s internal performPromiseThen operation.




A recent editorial change to the ECMAScript specification addresses this. Engines no longer need to create the throwaway promise for await – most of the time [2].

Comparison of await code before and after the optimizations


Here is how await in Node.js 10 compares to the optimized await that is likely to ship in Node.js 12:

async/await now outperforms hand-written promise code. The key takeaway here is that, by patching the specification [3], we significantly reduced the overhead of asynchronous functions – not only in V8, but in all JavaScript engines.





Developer experience improvements


In addition to performance, JavaScript developers also care about the ability to diagnose and fix problems, which is not always easy with asynchronous code. Chrome DevTools supports asynchronous stack traces, which include not only the current synchronous part of the stack but also the asynchronous part:

This is very useful during local development. However, this approach does not really help once the application is deployed. During post-mortem debugging you only see the Error#stack output in your log files, and that tells you nothing about the asynchronous parts.


We’ve recently been working on zero-cost asynchronous stack traces, which enrich the Error#stack property with asynchronous function calls. “Zero cost” sounds exciting, right? How can this be zero-cost when the corresponding Chrome DevTools feature comes with significant overhead? Consider this example 🌰, where foo calls bar asynchronously, and bar throws an exception after awaiting a promise:
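(A sketch along those lines; the error message 'BEEP BEEP' is illustrative.)

    async function bar() {
      await Promise.resolve();
      throw new Error('BEEP BEEP');
    }

    async function foo() {
      await bar();
      return 42;
    }

    foo().catch((error) => console.log(error.stack));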

Running this code in Node.js 8 or Node.js 10 prints:

Note that although the call to foo() is what causes the error, foo is not part of the stack trace at all. This makes it tricky for JavaScript developers to do post-mortem debugging, regardless of whether the code is deployed as a web application or inside a cloud container.

Interestingly, the engine knows where it has to continue when bar is done: right after the await in function foo. Coincidentally, that is also the place where function foo was suspended. The engine can use this information to reconstruct parts of the asynchronous stack trace, namely the await points. With this change, the output becomes:

In the stack trace, the topmost function comes first, followed by the rest of the synchronous stack trace, followed by the asynchronous call to bar in function foo. This change is implemented in V8 behind the new --async-stack-traces flag.


However, if you compare this to the asynchronous stack trace in Chrome DevTools above, you will notice that the actual call site of foo is missing from the asynchronous part of the stack trace. As mentioned earlier, this approach exploits the fact that, for await, the resume and suspend locations are the same – but for regular Promise#then() or Promise#catch() calls this is not the case. For more background, see Mathias Bynens’ explanation of why await beats Promise#then().




Conclusion

Thanks to two important optimizations, asynchronous functions got faster:

  • the removal of two extra microticks;

  • the removal of the throwaway promise.


On top of that, we improved the developer experience with zero-cost asynchronous stack traces, which work with await in asynchronous functions and with Promise.all().

We also have some great performance tips for JavaScript developers:

  • Favor asynchronous functions and await over hand-written promise code;

  • Stick to the native promise implementation offered by the JavaScript engine to benefit from the shortcuts, i.e. avoiding two microticks for await.


Original article (in English): https://v8.dev/blog/fast-async

