Why’s THE Design is a series of articles about design decisions in the field of computing (with a bias toward the front end). Each installment discusses the pros and cons of a particular design from different perspectives, along with its impact on implementation. Inspired by Drap’s Why Design This Way.

Main text

In front-end circles, there is a common claim about setTimeout: “The minimum delay for setTimeout is 4ms.” Before answering “why”, though, we should first check whether the claim itself is true.

Let’s start with the first question: is the 4ms figure laid down in a formal specification, or is it merely an established fact of industry practice?

For those familiar with the front end, setTimeout is not defined by ECMAScript but is provided by the host environment, and the specification it follows is maintained by the WHATWG. (There was a long discussion on es-discuss in 2011 about why ECMAScript does not provide setTimeout directly; participants included Brendan Eich, Kyle Simpson, and others. That debate will only be touched on briefly here, or covered in another article.) Back to the HTML Standard: setTimeout() and setInterval() are described in detail in 8.6 Timers (as of 2020/6/23), and we only need to look at steps 10-13:

  1. If timeout is less than 0, then set timeout to 0.
  2. If nesting level is greater than 5, and timeout is less than 4, then set timeout to 4.
  3. Increment nesting level by one.
  4. Let task’s timer nesting level be nesting level.

As can be seen from the specification above:

  1. If the timeout value is less than 0, it is set to 0.
  2. If the nesting level is greater than 5 and the timeout is less than 4ms, the timeout is set to 4ms.

At this point, we seem to have found the origin of 4ms, and we can give a more precise definition of setTimeout’s minimum delay: the timeout is clamped to 4ms only when the nesting level is greater than 5 and the requested timeout is less than 4ms.
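
To make the two clamping rules easier to picture, here is a minimal JavaScript sketch of how the spec’s steps could be modeled (clampTimeout and its arguments are illustrative names, not part of any real API):

// Illustrative model of the HTML Standard steps quoted above (not browser source).
function clampTimeout(timeout, nestingLevel) {
  if (timeout < 0) timeout = 0;                     // step 1: negative delays become 0
  if (nestingLevel > 5 && timeout < 4) timeout = 4; // step 2: 4ms floor once deeply nested
  nestingLevel += 1;                                // step 3: increment nesting level
  return { timeout, nestingLevel };                 // step 4: the task's timer nesting level
}

console.log(clampTimeout(-10, 0)); // { timeout: 0, nestingLevel: 1 }
console.log(clampTimeout(0, 6));   // { timeout: 4, nestingLevel: 7 }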

Some may be wondering, “What is the timer nesting level?” Simply put, it refers to nested calls like the example below. (I was also curious how the nesting level and the minimum delay are implemented in browser source code; we will get to the Chromium source shortly.)

setTimeout(() => {
  setTimeout(() => {
    setTimeout(() => {
      setTimeout(() => {
        setTimeout(() => {}, 0);
      }, 0);
    }, 0);
  }, 0);
}, 0);
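
To see the nesting level in action, a quick measurement like the sketch below can be pasted into a browser console; in Chrome, once the nesting level passes 5, the gap between callbacks should settle around 4ms (the exact numbers will vary by machine and browser, so treat this as illustrative only):

let last = performance.now();
let level = 0;

function tick() {
  const now = performance.now();
  console.log(`level ${level}: ${(now - last).toFixed(2)}ms since the previous callback`);
  last = now;
  if (++level < 10) setTimeout(tick, 0); // re-arm with a 0ms delay each time
}

setTimeout(tick, 0);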

At this point, setTimeout’s minimum-delay design seems fully explained. However, as the popular Bilibili meme goes, “You only saw the second floor and thought I was on the first floor, but actually I was already on the fifth floor.”

When I saw the question raised in the group, my first thought was to trace it back to the specification, but also to look into two other aspects:

  1. Have major browser vendors implemented the spec, and if not, why?
  2. How on earth was the number 4ms determined?

As I often think, “behind a seemingly simple design result there is usually a root cause, which may come from the pain points, background, and constraints of the time. We need to understand not only the design itself, but more importantly what lies behind it: for example, how the major technology players and vendors play against each other and make tradeoffs.”

Let’s first look at how major browsers handle various boundary cases of the setTimeout delay. (Note that extra delay caused by a busy event loop is not considered here; we simplify to the ideal case and assume that a single loop iteration executes in well under a millisecond.)
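
The figures referenced below come from boundary-case tests roughly like the following sketch (the delay values -1, 0, 1, and 4 are the interesting edges; the exact output depends on the browser):

[-1, 0, 1, 4].forEach((delay) => {
  const start = performance.now();
  setTimeout(() => {
    console.log(`setTimeout(fn, ${delay}) fired after ${(performance.now() - start).toFixed(2)}ms`);
  }, delay);
});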

  Figure: results in Chrome 83.0.4103.106, Safari, and Edge

  Figure: results in Firefox 65.0.1 and IE 11

Looking at the figures above, we can see that major browser vendors adopt one of two strategies for a setTimeout delay of 0ms or 1ms. To find the exact strategy, we need to look at the browser source code (only the Chromium source is shown here; the WebKit or Firefox sources can be downloaded and inspected in the same way). In Chromium’s Blink directory there is a file called DOMTimer.cpp (available online), which is where the timer delay is set:

static const int maxIntervalForUserGestureForwarding = 1000; // One second matches Gecko.
static const int maxTimerNestingLevel = 5;
static const double oneMillisecond = 0.001;
// Chromium uses a minimum timer interval of 4ms. We'd like to go
// lower; however, there are poorly coded websites out there which do
// create CPU-spinning loops. Using 4ms prevents the CPU from
// spinning too busily and provides a balance between CPU spinning and
// the smallest possible interval timer.
static const double minimumInterval = 0.004;

double intervalMilliseconds = std::max(oneMillisecond, interval * oneMillisecond);
if (intervalMilliseconds < minimumInterval && m_nestingLevel >= maxTimerNestingLevel)
    intervalMilliseconds = minimumInterval;

The code logic is clear. Three constants matter here:

  1. oneMillisecond = 0.001, i.e. 1ms expressed in seconds.
  2. maxTimerNestingLevel = 5. This is the nesting level referred to in the HTML Standard.
  3. minimumInterval = 0.004. This is what the HTML Standard calls the minimum delay.

As the second snippet shows, the code first takes the maximum of the requested delay and 1ms; in other words, when the nesting condition is not met, the minimum delay is 1ms. This explains the Chrome results in the tests above.
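
Putting the two Chromium snippets together, the effective clamping in Blink can be modeled roughly as follows (effectiveInterval is an illustrative helper working in milliseconds, not actual Chromium code):

const maxTimerNestingLevel = 5;
const minimumIntervalMs = 4;

// Rough model of DOMTimer's interval calculation, with values in milliseconds.
function effectiveInterval(requestedMs, nestingLevel) {
  let interval = Math.max(1, requestedMs); // never below 1ms
  if (interval < minimumIntervalMs && nestingLevel >= maxTimerNestingLevel) {
    interval = minimumIntervalMs;          // 4ms floor once deeply nested
  }
  return interval;
}

console.log(effectiveInterval(0, 0)); // 1 (shallow nesting: 1ms floor)
console.log(effectiveInterval(0, 5)); // 4 (deeply nested: 4ms floor)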

The Chromium comment explains why minimumInterval was set to 4ms. In short, the Chromium team wanted lower latency (they actually wanted sub-millisecond timers), but because some sites used setTimeout timers poorly (the New York Times site, for example), setting the delay too low caused CPU spinning (we will explain CPU spinning later). So the Chromium team ran some benchmarks and chose 4ms as its minimumInterval.

So far, the 4ms value and its more precise definition have been explained from the perspective of both the browser vendor and the HTML Standard. But a new question arises: which came first, the HTML Standard or browser vendors such as Chromium? The point of understanding the ordering is to understand the history behind it: how specifications and vendors reinforce and balance each other.

Let’s dive down to the operating-system level first. Windows’ default timer resolution is 10-15.6ms (you can think of this as the granularity of the timer). Browser timers initially relied on this OS-level timer resolution, so for setTimeout the minimum delay would be at least 10ms or so. Meanwhile, processor speeds rose from around 500MHz in 1995 to more than 3GHz by 2010, while Windows’ default timer resolution stayed at 10-15.6ms (here you can already see browser vendors and OS vendors thinking differently). The browser vendor (Chrome) believed the default timer hurt the rendering of web pages: 10-15.6ms is too long, and internally a long clock tick means the browser sleeps for a long time, which degrades browser performance.

The above tells us an established fact: under Windows, all browser timer implementations initially depended on the operating-system timer, i.e. 10-15.6ms. Actual test results can be seen in Erik Kay’s visual timing test and in John Resig’s quick test described in his 2008 article.
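
A quick test in the spirit of Resig’s article can be sketched as follows (this is an approximation of the idea, not his code): fire 0ms timers back to back for one second and derive the average gap, which roughly reflects the effective timer resolution.

let count = 0;
const start = performance.now();

(function spin() {
  count++;
  if (performance.now() - start < 1000) {
    setTimeout(spin, 0);
  } else {
    console.log(`average gap: ${(1000 / count).toFixed(2)}ms over ${count} callbacks`);
  }
})();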

Chrome is very conscious of the 10-15.6ms timer. Chrome is meant to be a high-performance modern browser, and when it comes to timer resolution it wants to be sub-millisecond (less than 1ms). So the Chrome team set out to remove the browser’s dependence on the operating-system timer, using different schemes on Windows and on Linux/Unix. Linux/Unix has a dedicated API for modifying the system’s default timer resolution; on Windows this is more cumbersome, and the Chromium team ended up using the same API that Flash and QuickTime use to change the default timer resolution.

By changing the OS’s default timer resolution, Chrome’s performance improved greatly. In the Chrome 1.0 beta the timer resolution was set to 1ms, already close to the team’s goal. One might wonder: if lower latency is better, why not just set it to 0ms?

The reason is that if the browser allowed 0ms, the JavaScript engine could loop excessively; in a single-process browser architecture, the site would then easily become unresponsive. Browsers themselves are built on an event loop, and that loop is starved if slow JavaScript keeps waking the system with a 0ms timer. What does that mean for the user? A spinning CPU and an essentially hung browser at the same time. No vendor wants to ship a browser that routinely puts users through that, which is why Chrome 1.0 beta settled on 1ms.
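
The worrying pattern looks roughly like the sketch below: a 0ms timer that keeps re-arming itself. Without any clamping, each callback would wake the event loop again immediately, keeping the CPU busy with bookkeeping rather than useful work (a hypothetical illustration; do not leave it running for long):

// A self-rearming 0ms timer: with no minimum delay, this would wake the
// event loop continuously and keep the CPU spinning.
(function busy() {
  // ...imagine a small piece of work here...
  setTimeout(busy, 0);
})();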

The results looked pretty good, but then some teams reported bugs, notably a bug on the New York Times site and the Intel team’s discovery of unusual battery consumption in Chrome. The 1ms timer caused the CPU to spin, so the machine could not enter sleep (low-power) mode and drained its battery quickly. The Chrome team had to deal with these real-world problems (and since Chrome’s market share was not nearly as large then as it is today, it could not afford to brush them aside). The team’s solution at the time was to put restrictions on the timer. After some experimentation they found that raising the floor from 1ms to 4ms eliminated the CPU spinning and excessive power consumption on most machines. In this tradeoff, the Chrome team’s goal was to make the timer as accurate as possible without causing further problems.

As an aside, the Chrome team talked to the Windows team early on, hoping that Windows could dynamically adjust the hardware clock tick interval to match the needs of applications, but the outcome of that conversation was not great. You can understand why Microsoft is reluctant to make such a change from a more recent presentation, in which they hoped a future OS would enforce a low wake rate (100ms) on applications to reduce the number of misbehaving programs. The Chrome team’s expectations conflict with the Windows team’s. Here again we can see that different teams think about the same thing very differently.

Following the Chrome team’s timer tweaks (which improved performance considerably), the other major browsers (Safari, Opera, Firefox, IE) also moved to 4ms, and different browsers throttle timers under different conditions (which is why our browser tests at the beginning gave different results). Only after that was the behavior written into the HTML Standard.

In fact, timer resolution is not a topic that gets discussed often (it requires a fair amount of background knowledge and tends to be low-level), but it is evolving all the time.

As Nicholas C. Zakas noted in one of his articles, “We’re getting closer to the point of having per-millisecond control of the browser. When someone figures out how to manage timers without CPU interrupts, we’re likely to see timer resolution drop again. Until then, keep 4ms in mind, but remember that you still won’t always get that.”

Conclusion

At this point, you can see how setTimeout’s 4ms came about, and we can define the method’s minimum delay more precisely:

  1. The minimum delay varies from browser to browser. In Chrome, for example, the minimum delay is 1ms; if the timer is nested deeply enough, the minimum delay becomes 4ms. The nesting-level threshold also varies: the HTML Standard says > 5, while Chrome uses >= 5.

In addition, we have also answered the two questions raised above:

  1. Have major browser vendors implemented the spec, and if not, why?
  2. How on earth was the number 4ms determined?

Browser vendors don’t implement the spec to the letter because they run their own benchmarks, and different vendors therefore choose different settings; the HTML Standard also leaves some flexibility for such small parameters. We now also understand the background of the 4ms value, the different considerations of browser vendors and operating-system vendors behind it, and the decisions and tradeoffs they made.


I’m BY, an interesting person who will bring more original articles in the future.

This article is released under the MIT license. Please contact the author before republishing.

If you are interested, you can follow the official account (Baixue Principle) or star the GitHub repo.