Thanks for the invite, and good question. Lately I've been doing containerization work for my company, and I'm so worn out that I'm flattered to be invited to answer such a brain-burning question.

As for ingenious tricks, I don't know if this counts, but I recently saw a redesign of the nextTick queue land in a commit.

(Easter eggs at the end)

There are actually quite a few:

  • The design of async hooks
  • The timer time-wheel optimization, and the HTTP date cache
  • The package mechanism (quite a coincidence, I think)
  • The stream design (backpressure, high water mark), etc.

The nextTick queue

Those familiar with the Node.js event loop should know that when we register a nextTick callback during a turn of the loop, it is executed at the end of that turn, ahead of the microtask queue.
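A quick demo of that ordering (this is standard Node.js behavior on any recent version):

```js
// process.nextTick callbacks run before Promise microtasks,
// and both run before the loop advances to the timers phase.
setTimeout(() => console.log('3: timers phase'), 0);
Promise.resolve().then(() => console.log('2: promise microtask'));
process.nextTick(() => console.log('1: nextTick callback'));
console.log('0: synchronous');
// Prints 0, 1, 2, 3 in that order.
```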


To get the complete picture of the event loop, you have to add the two phases implemented by Node.js itself on top of the six phases provided by libuv.

At the end of each turn, every queued nextTick callback is executed, and in practice we use nextTick callbacks a lot (Node core developers and Node users alike). In the original version, the nextTick queue looked like this:

```js
var nextTickCallback = [];
```
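Conceptually, the drain step at the end of each turn then does something like this (a simplified sketch of the idea, not the actual Node source):

```js
// Keep running callbacks until the queue is empty, including
// callbacks that were queued by other nextTick callbacks.
function processNextTickQueue() {
  while (nextTickCallback.length > 0) {
    const callback = nextTickCallback.shift();
    callback();
  }
}
```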

A plain array: simple and crude. In the early days this was fine, until PR #13446 came along. Its author discovered something magical: hand-rolling the queue as an ES6 class, with its own push, shift, and clear methods, was nearly 20% faster than the native []. Here are the results:

```
                                   improvement  confidence  p.value
process/next-tick-breadth-args.js      227.75%  ***         1.271176e-20
process/next-tick-depth-args.js       1247.32%  ***         7.742778e-31
```

So what clever code did he add to nextTick? None. He simply wrote the queue as an ES6 class:

```js
class NextTickQueue {
  constructor() {
    this.head = null
    this.tail = null
    this.length = 0
  }

  push(v) {
    const entry = { data: v, next: null }
    if (this.length > 0)
      this.tail.next = entry
    else
      this.head = entry
    this.tail = entry
    ++this.length
  }

  shift() {
    if (this.length === 0)
      return
    const ret = this.head.data
    if (this.length === 1)
      this.head = this.tail = null
    else
      this.head = this.head.next
    --this.length
    return ret
  }

  clear() {
    this.head = null
    this.tail = null
    this.length = 0
  }
}
```

With just one change, performance improved by 20%!

This class is completely general, which means we can use it in real projects to optimize code that does heavy queue operations. So I wrote a test of my own, and the results are genuinely exciting.
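The test itself isn't shown above; here is a minimal sketch of what such a comparison could look like (the bench helper, queue sizes, and iteration counts are my own picks, and it assumes the NextTickQueue class above is in scope):

```js
// Compare push/shift on a native array vs. the linked-list queue.
// Array#shift can be O(n) since elements are reindexed; the linked
// list's shift is O(1).
function bench(name, makeQueue) {
  const q = makeQueue();
  const start = process.hrtime.bigint();
  for (let round = 0; round < 1000; round++) {
    for (let i = 0; i < 1000; i++) q.push(i);
    while (q.length > 0) q.shift();
  }
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${name}: ${ms.toFixed(1)} ms`);
}

bench('native []', () => []);
bench('NextTickQueue', () => new NextTickQueue());
```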

Optimizing further with a special reusable singly linked list

After a while, the nextTick implementation got kicked again; the specific PR is here: #18617. The idea is actually very simple. Every push makes the system allocate a new object, and when we clear the queue, a large chunk of memory gets freed all at once, which is rather wasteful. It is better to allocate a block of memory once up front, and write each pushed callback into a slot by position. Hence the present implementation:

```js
// The current design looks like this:
//
// If fewer than 2048 callbacks are registered in a single list, only one
// array of length 2048 is allocated up front, and that 2048-slot array
// is reused over and over.
//
//  head                                           tail
//   |                                              |
//   v                                              v
// +-----------+ <------\   +-----------+ <------\   +-----------+
// |  [null]   |        \---|   next    |        \---|   next    |
// +-----------+            +-----------+            +-----------+
// |   tick    | <-- bottom |   tick    | <-- bottom | [empty]   |
// |   tick    |            |   tick    |            | [empty]   |
// |   tick    |            |   tick    |            | [empty]   |
// |   tick    |            |   tick    |            | [empty]   |
// |   tick    |            |   tick    | bottom --> |   tick    |
// |   tick    |            |   tick    |            |   tick    |
// |   ...     |            |   ...     |            |   ...     |
// |   tick    |            |   tick    |            |   tick    |
// |   tick    |            |   tick    |            |   tick    |
// | [empty]   | <-- top    |   tick    |            |   tick    |
// | [empty]   |            |   tick    |            |   tick    |
// | [empty]   |            |   tick    |            |   tick    |
// +-----------+            +-----------+ <-- top    +-----------+ <-- top
//
// The case with only a few callbacks:
//
//  head   tail                  head   tail
//   |      |                     |      |
//   v      v                     v      v
// +-----------+               +-----------+
// |  [null]   |               |  [null]   |
// +-----------+               +-----------+
// | [empty]   |               |   tick    |
// | [empty]   |               |   tick    |
// |   tick    | <-- bottom    | [empty]   | <-- top
// |   tick    |               | [empty]   |
// | [empty]   | <-- top       |   tick    | <-- bottom
// | [empty]   |               |   tick    |
// +-----------+               +-----------+
//
// When a callback is inserted into the queue, top moves down one slot.
// When a callback is pulled from the queue, bottom moves down one slot.
// If top === bottom && list[top] === void 666 (i.e. undefined),
// the queue is empty.
```
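For reference, Node's current implementation of this idea lives in lib/internal/fixed_queue.js; here is a condensed sketch of the same design (simplified by me, so details may differ from the real source):

```js
const kSize = 2048;
const kMask = kSize - 1;

// One reusable ring of 2048 slots. `top` marks where the next callback
// is written, `bottom` where the next one is read; both wrap via kMask.
class FixedCircularBuffer {
  constructor() {
    this.bottom = 0;
    this.top = 0;
    this.list = new Array(kSize);
    this.next = null;
  }
  isEmpty() { return this.top === this.bottom; }
  isFull() { return ((this.top + 1) & kMask) === this.bottom; }
  push(data) {
    this.list[this.top] = data;
    this.top = (this.top + 1) & kMask;
  }
  shift() {
    const nextItem = this.list[this.bottom];
    if (nextItem === undefined) return null;
    this.list[this.bottom] = undefined;
    this.bottom = (this.bottom + 1) & kMask;
    return nextItem;
  }
}

// When one ring fills up, a new one is linked in; memory is allocated
// once per 2048 callbacks instead of once per callback.
class FixedQueue {
  constructor() {
    this.head = this.tail = new FixedCircularBuffer();
  }
  isEmpty() { return this.head.isEmpty(); }
  push(data) {
    if (this.head.isFull())
      this.head = this.head.next = new FixedCircularBuffer();
    this.head.push(data);
  }
  shift() {
    const tail = this.tail;
    const next = tail.shift();
    if (tail.isEmpty() && tail.next !== null)
      this.tail = tail.next;
    return next;
  }
}
```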

Because of these commits, performance improved by another 40 percent:

```
                                              confidence  improvement  accuracy (*)    (**)    (***)
process/next-tick-breadth-args.js millions=4     ***         40.11%        ±1.23%   ±1.64%
process/next-tick-breadth.js millions=4          ***          7.16%        ±3.50%   ±4.67%  ±6.11%
process/next-tick-depth-args.js millions=12      ***          5.46%        ±0.91%   ±1.22%  ±1.59%
process/next-tick-depth.js millions=12           ***         23.26%        ±2.51%   ±3.36%  ±4.40%
process/next-tick-exec-args.js millions=5        ***         38.64%        ±1.16%   ±1.55%  ±2.01%
process/next-tick-exec.js millions=5             ***         77.20%        ±1.63%   ±2.18%  ±2.88%

Be aware that when doing many comparisons the risk of a false-positive
result increases. In this case there are 6 comparisons, you can thus
expect the following amount of false-positive results:
  0.30 false positives, when considering a 5% risk acceptance (*, **, ***),
  0.06 false positives, when considering a 1% risk acceptance (**, ***)
```

Summing up nextTick

None of this is exotic wizardry, really; it's just the industry's well-worn practice of trading space for time.

This “space for time” approach is summed up as:

When building complex JavaScript objects, we can reuse them through an object pool, which greatly reduces pressure on the system. It costs a bit more memory, but these days memory is almost the cheapest resource we have.
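For instance, a bare-bones object pool might look like this (a sketch; the ObjectPool name and API are mine, not from any library):

```js
// A minimal object pool: acquire() reuses a spare object when one is
// available, and release() returns it for reuse instead of letting it
// become garbage.
class ObjectPool {
  constructor(factory, reset) {
    this.factory = factory; // creates a fresh object
    this.reset = reset;     // wipes an object before reuse
    this.pool = [];
  }
  acquire() {
    return this.pool.length > 0 ? this.pool.pop() : this.factory();
  }
  release(obj) {
    this.reset(obj);
    this.pool.push(obj);
  }
}

// Example: reuse message objects instead of allocating one per event.
const messages = new ObjectPool(
  () => ({ id: 0, payload: null }),
  (msg) => { msg.id = 0; msg.payload = null; }
);

const msg = messages.acquire();
msg.id = 1;
msg.payload = 'hello';
// ... use the message ...
messages.release(msg);
```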

A quiz:

With this in mind: when we design a web framework, there is one kind of object that gets created in massive numbers. What is it? Heh heh heh….

(I won't say, lest someone scoops my PR.)