The recently released React V17.0 contains no new features.

The main work of v17.0 is to lay the groundwork for Concurrent Mode in the source code, which is why v17 is also known as the "stepping stone" release.

This article walks through the ins and outs of Concurrent Mode, from the underlying architecture up to the top-level APIs.

Because the topic spans a lot of ground, some details are inevitably omitted. For a more complete source-code walkthrough of the points mentioned here, you are welcome to follow my WeChat public account, "Magician Ka Song".

What is Concurrent Mode?

What is Concurrent Mode? You can read about the basic concepts of Concurrent mode on the official website.

In a word:

Concurrent mode is a new set of React features that help applications stay responsive and adjust appropriately to the user’s device performance and network speed.

For an application to remain responsive, we first need to understand what prevents it from staying responsive.

When we use apps and browse the web day to day, two types of scenarios hurt responsiveness:

  • When encountering a large computation operation or insufficient device performance, the page drops frames, resulting in a lag.

  • After a network request is sent, it cannot quickly respond to the request because it needs to wait for the data to return.

These two types of scenarios can be summarized as:

  • The CPU bottleneck

  • The IO bottleneck

The CPU bottleneck

When a project becomes large and has many components, it is easy to run into CPU bottlenecks.

Consider the following demo, where we render 3,000 li elements into the view:

function App() {
  const len = 3000;
  return (
    <ul>
      {Array(len).fill(0).map((_, i) => <li>{i}</li>)}
    </ul>
  );
}

const rootEl = document.querySelector("#root");
ReactDOM.render(<App/>, rootEl);  

Mainstream browsers refresh at 60Hz, that is, one frame every 16.6ms (1000ms / 60Hz).

As we know, JS can manipulate the DOM, and GUI rendering threads and JS threads are mutually exclusive. Therefore, JS script execution and browser layout and drawing cannot be executed at the same time.

In every 16.6ms, the following work needs to be done:

JS script execution ----- style layout ----- style painting

When JS execution takes longer than 16.6ms, the current frame has no time left for style layout and painting.

In the demo, because there are so many components (3,000), the JS script takes too long to execute, the page drops frames, and it feels laggy.

As the execution stack flame chart shows, the JS execution time is 73.65ms, far more than one frame.

How to solve this problem?

The answer is: Set aside some time for the JS thread in each frame of the browser. React uses this time to update components (as you can see from the source code, the initial time set aside is 5ms).

When the reserved time is not enough, React returns thread control to the browser so it has time to render the UI, and React resumes the interrupted work in the next frame.

This process of breaking a long task into pieces across frames, carrying a little at a time like an ant moving its load, is known as time slicing.

Therefore, the key to solving the CPU bottleneck is to implement time slicing, and the key to time slicing is to turn synchronous updates into interruptible asynchronous updates.
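The idea can be sketched in plain JavaScript. The following is a toy model, not React's actual Scheduler: runInSlices and its 5ms budget are invented here for illustration, and a real implementation would yield to the browser between slices instead of looping synchronously.

```javascript
// A toy model of time slicing: process units of work in ~5ms slices.
// (Illustrative only — React's Scheduler yields via a macrotask such as
// MessageChannel between slices so the browser can layout and paint.)
function runInSlices(units, budgetMs = 5) {
  let processed = 0;
  let slices = 0;
  while (processed < units.length) {
    slices++; // each outer iteration stands for one frame's slice
    const start = Date.now();
    // Work until the slice budget is exhausted or no work remains.
    while (processed < units.length && Date.now() - start < budgetMs) {
      units[processed]();
      processed++;
    }
    // A real implementation would yield control to the browser here.
  }
  return { processed, slices };
}
```

Because every unit is a separate function call, work can stop at any unit boundary and resume later, which is exactly what makes the update interruptible.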

The IO bottleneck

Network latency is something front-end developers can’t solve. How to reduce users’ perception of network delay when network delay exists objectively?

React’s answer is to incorporate the results of human-computer interaction research into a real-world UI.

Take Apple, an industry leader in human-computer interaction, as an example. In iOS:

Click “General” in the “Settings” panel to enter the “General” interface:

For comparison, click “Siri & Search” in the “Settings” panel again to enter the “Siri & Search” screen:

Can you feel the difference in experience?

In fact, the interaction after clicking “General” is synchronous, showing the subsequent screen directly.

Click “Siri and Search” and the interaction is asynchronous, waiting for the request to return before displaying the subsequent screen.

But in terms of user perception, the difference between the two is minimal.

The trick is that tapping "Siri & Search" keeps you on the current page for a short moment, and that moment is used to request data.

When that moment is short enough, the user doesn't notice it. Only if loading exceeds a certain threshold is a loading indicator shown.

Imagine if a loading indicator appeared the instant we tapped "Siri & Search": even if the request finished quickly, the indicator would flash by, and the user would still notice it.

React implements Suspense and useDeferredValue for this purpose.

Within the source code, to support these features, you also need to make synchronous updates interruptible asynchronous updates.

Concurrent Mode bottom-up

The underlying foundation determines the implementation of the upper API, so let’s take a look at the bottom-up components of Concurrent Mode that enable the functionality mentioned above.

Underlying architecture — Fiber architecture

As we have seen above, the key to solving CPU and IO bottlenecks is to implement asynchronous interruptible updates.

Based on this premise, React spent two years refactoring its core into the Fiber architecture.

The significance of the Fiber mechanism is that it takes individual components as units of work, enabling “asynchronous interruptible updates” at component granularity.

The driving force of the architecture — Scheduler

If the Fiber architecture runs synchronously (via ReactDOM.render), it behaves the same as the architecture before the refactor.

But when we combine time slicing, we can assign each unit of work a runnable time based on host environment performance, enabling “asynchronous interruptible updates.”

This is where the Scheduler comes in.

Scheduler ensures that our long tasks are split into different tasks for each frame.

When we enable Concurrent Mode for the Demo rendering 3000 Li mentioned above:

// Enable Concurrent Mode by using ReactDOM.unstable_createRoot
// ReactDOM.render(<App/>, rootEl);
ReactDOM.unstable_createRoot(rootEl).render(<App/>);

It can be seen that the execution time of each JS script is about 5ms.

This gives the browser time to perform style layout and style drawing, reducing the possibility of dropping frames.

The Fiber architecture, in conjunction with Scheduler, implements the underlying requirement of Concurrent Mode — “asynchronous interruptible updates”.

Architecture operation strategy — The Lane model

So far, React can use the Scheduler to control whether an update in the Fiber architecture runs, is interrupted, or resumes.

In the current architecture, when an update is interrupted mid-run and resumed at a later time, that is an "asynchronous interruptible update".

When an update is interrupted during the run and a new update is restarted, we can say that the later update interrupts the previous update.

This is the concept of priority: the later update has a higher priority, and it interrupts the update already in progress.

How do multiple priorities interrupt each other? Can the priority be changed? What priority should be assigned to this update?

This requires a model to govern the relationships and behavior between different priorities, and so the Lane model was born.

The Lane model represents each priority as a single bit in a 31-bit bitmask and manipulates priorities with bitwise operations.

Here are the definitions of different priorities:

export const NoLanes: Lanes = /*                */ 0b0000000000000000000000000000000;
export const NoLane: Lane = /*                  */ 0b0000000000000000000000000000000;

export const SyncLane: Lane = /*                */ 0b0000000000000000000000000000001;
export const SyncBatchedLane: Lane = /*         */ 0b0000000000000000000000000000010;

export const InputDiscreteHydrationLane: Lane = /* */ 0b0000000000000000000000000000100;
const InputDiscreteLanes: Lanes = /*            */ 0b0000000000000000000000000011000;

// ... omitted
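Because lanes are just bits, the common operations on priorities become cheap bitwise tricks. The sketch below uses simplified lane values for illustration; the helper logic (OR to merge, `lanes & -lanes` to isolate the lowest set bit, AND to test membership) mirrors the kind of bit manipulation React's source relies on.

```javascript
// Simplified lane values (illustrative — not React's actual constants).
const SyncLane = 0b0000001;          // highest priority: lowest bit
const InputDiscreteLane = 0b0001000;
const DefaultLane = 0b0100000;

// Merging two sets of lanes is a bitwise OR.
function mergeLanes(a, b) {
  return a | b;
}

// Lower bits mean higher priority, so the highest-priority lane in a set
// is its lowest set bit, isolated with the classic `x & -x` trick.
function getHighestPriorityLane(lanes) {
  return lanes & -lanes;
}

// Testing whether a set of lanes contains a lane is a bitwise AND.
function includesLane(set, lane) {
  return (set & lane) !== 0;
}
```

This is why a 31-bit integer is enough to track many pending priorities at once: one variable can hold a whole set of lanes, and comparing or combining sets costs a single CPU instruction.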

The upper implementation

Now, we can say:

At the source level, Concurrent Mode is a controlled “multi-priority update architecture.”

So what interesting features can be implemented on top of this architecture? Let’s take a few examples:

batchedUpdates

If we trigger multiple updates in a single event callback, they are merged into a single update for processing.

Executing the following code triggers the update (and therefore the re-render) only once:

onClick() {
  this.setState({stateA: 1});
  this.setState({stateB: false});
  this.setState({stateA: 2});
}

This optimization of merging multiple updates is called batchedUpdates.

batchedUpdates has existed since much earlier versions, but the old implementation was more limited: updates triggered outside the current context (for example, in a setTimeout callback) were not merged.

In Concurrent Mode, updates are combined on a priority basis and are used more widely.
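The mechanics of batching can be sketched with a toy model. This is not React's implementation; createBatcher and its internals are names invented for this sketch, which only demonstrates the queue-then-flush idea behind merging multiple setState calls into one render.

```javascript
// Toy model of batching: setState calls made inside batchedUpdates are
// queued and flushed as a single render (illustrative, not React's code).
function createBatcher(render) {
  let queue = [];
  let isBatching = false;

  function flush() {
    if (queue.length === 0) return;
    const merged = Object.assign({}, ...queue); // later updates win
    queue = [];
    render(merged); // one render for all queued updates
  }

  function setState(partial) {
    queue.push(partial);
    if (!isBatching) flush(); // outside a batch, render immediately
  }

  function batchedUpdates(fn) {
    isBatching = true;
    try {
      fn();
    } finally {
      isBatching = false;
      flush(); // a single flush after the whole callback ran
    }
  }

  return { setState, batchedUpdates };
}
```

Running the three setState calls from the example above inside batchedUpdates produces exactly one render whose state reflects the last write to each key.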

Suspense

Suspense lets a component display a pending state while it requests data, then render the data once the request succeeds.

Component subtrees in Suspense essentially have lower priority than the rest of the component tree.

useDeferredValue

useDeferredValue returns a deferred copy of a value; the copy may lag behind the latest value for at most timeoutMs.

Example:

const deferredValue = useDeferredValue(value, { timeoutMs: 2000 });

Inside useDeferredValue, useState is called and an update is triggered.

That update has a low priority, so it does not delay other updates currently in progress; this is why useDeferredValue can return a lagging ("deferred") value.

If that low-priority update has still not run after timeoutMs (because its priority is too low, it keeps being interrupted), a higher-priority update is triggered to force the deferred value to catch up.
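The two-path behavior described above can be modeled with explicit timestamps instead of real timers. This is a toy model of the idea only, not React's implementation; createDeferredValue and its method names are invented for illustration.

```javascript
// Toy model of the idea behind useDeferredValue, using explicit `now`
// timestamps instead of real timers (illustrative, not React's code).
function createDeferredValue(initial, timeoutMs) {
  let latest = initial;   // the value just produced by a high-priority update
  let deferred = initial; // the lagging copy returned to low-priority UI
  let changedAt = -Infinity;

  return {
    // A high-priority update changes the latest value immediately.
    set(value, now) {
      latest = value;
      changedAt = now;
    },
    // When the scheduler finds idle time, the low-priority update runs
    // and the deferred copy catches up.
    runLowPriorityUpdate() {
      deferred = latest;
    },
    // If the low-priority update kept being interrupted, a read after
    // timeoutMs forces a catch-up — mirroring the timeoutMs fallback.
    read(now) {
      if (deferred !== latest && now - changedAt >= timeoutMs) {
        deferred = latest;
      }
      return deferred;
    },
  };
}
```

Within the timeout window the caller keeps seeing the old value (the UI stays responsive), and after the window expires the value is forced current so it can never be starved indefinitely.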

Conclusion

Beyond the implementations described above, with v17 laying the groundwork for full Concurrent Mode support, we can expect a wave of Concurrent Mode-based libraries to appear in v18.