Introduction

Recently, the React team shared its plans for the React 18 release, which include an optimization called Automatic Batching. Dan Abramov of the React team explained this optimization in a GitHub discussion, and this article expands on that explanation.

What is batching

Batching is an optimization in which React combines multiple state updates into a single re-render for better performance, as in this example (React 17.0.2):

import { useState, useLayoutEffect } from "react";
import * as ReactDOM from "react-dom";

function App() {
  const [count, setCount] = useState(0);
  const [flag, setFlag] = useState(false);

  function handleClick() {
    console.log("=== click ===");
    // Trigger a re-render
    setCount((c) => c + 1);
    setFlag((f) => !f);
  }

  return (
    <div>
      <button onClick={handleClick}>Next</button>
      <h1 style={{ color: flag ? "blue" : "black" }}>{count}</h1>
      <LogEvents />
    </div>
  );
}

function LogEvents(props) {
  useLayoutEffect(() => {
    console.log("Commit");
  });
  console.log("Render");
  return null;
}

const rootElement = document.getElementById("root");
ReactDOM.render(<App />, rootElement);

Clicking the button prints the following:

=== click ===
Render
Commit

Next, let's expand on how React implements this batching in Legacy mode.

How batching is implemented in Legacy mode

Click the button and look at the call stack. We will focus on the two functions marked in red: discreteUpdates and batchedEventUpdates. First, discreteUpdates:

export function discreteUpdates<A, B, C, D, R>(
  fn: (A, B, C) => R,
  a: A,
  b: B,
  c: C,
  d: D,
): R {
  const prevExecutionContext = executionContext;
  executionContext |= DiscreteEventContext;

  if (decoupleUpdatePriorityFromScheduler) {
    ...
  } else {
    // Goes down this branch
    try {
      return runWithPriority(
        UserBlockingSchedulerPriority,
        fn.bind(null, a, b, c, d),
      );
    } finally {
      executionContext = prevExecutionContext;
      if (executionContext === NoContext) {
        // Flush the immediate callbacks that were scheduled during this batch
        resetRenderTimer();
        flushSyncCallbackQueue();
      }
    }
  }
}

Note that DiscreteEventContext (value 4) is OR-ed into the current executionContext here. Then batchedEventUpdates:

export function batchedEventUpdates<A, R>(fn: (A) => R, a: A): R {
  const prevExecutionContext = executionContext;
  executionContext |= EventContext;
  try {
    return fn(a);
  } finally {
    executionContext = prevExecutionContext;
    if (executionContext === NoContext) {
      // Flush the immediate callbacks that were scheduled during this batch
      resetRenderTimer();
      flushSyncCallbackQueue();
    }
  }
}

Here, likewise, EventContext (value 2) is OR-ed into executionContext, so the executionContext value is now 6.
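For reference, the executionContext flags are just bit masks that get OR-ed together. The sketch below uses the constant values from the React 17 source (reproduced here only for illustration) to show how the value becomes 6 during a click:

// Execution-context bit flags (values as in the React 17 source, shown for illustration)
const NoContext              = 0b0000000; // 0
const BatchedContext         = 0b0000001; // 1
const EventContext           = 0b0000010; // 2
const DiscreteEventContext   = 0b0000100; // 4
const LegacyUnbatchedContext = 0b0001000; // 8
const RenderContext          = 0b0010000; // 16
const CommitContext          = 0b0100000; // 32

let executionContext = NoContext;
executionContext |= DiscreteEventContext; // 4, set inside discreteUpdates
executionContext |= EventContext;         // 6, set inside batchedEventUpdates
// Because executionContext !== NoContext, updates scheduled inside the event
// handler are not flushed immediately; they are batched.
console.log(executionContext); // 6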

Then setCount executes, which eventually brings us to scheduleUpdateOnFiber:

export function scheduleUpdateOnFiber(
  fiber: Fiber,
  lane: Lane,
  eventTime: number,
): FiberRoot | null {
  ...
  if (lane === SyncLane) {
    if (
      // Check if we're inside unbatchedUpdates
      (executionContext & LegacyUnbatchedContext) !== NoContext &&
      // Check if we're not already rendering
      (executionContext & (RenderContext | CommitContext)) === NoContext
    ) {
      ...
    } else {
      // Comes here
      ensureRootIsScheduled(root, eventTime);
      schedulePendingInteractions(root, lane);
      if (executionContext === NoContext) {
        // Flush the synchronous work now, unless we're already working or inside
        // a batch. This is intentionally inside scheduleUpdateOnFiber instead of
        // scheduleCallbackForFiber to preserve the ability to schedule a callback
        // without immediately flushing it. We only do this for user-initiated
        // updates, to preserve historical behavior of legacy mode.
        resetRenderTimer();
        flushSyncCallbackQueue();
      }
    }
  } else {
    ...
  }

  // We use this when assigning a lane for a transition inside
  // `requestUpdateLane`. We assume it's the same as the root being updated,
  // since in the common case of a single root app it probably is. If it's not
  // the same root, then it's not a huge deal, we just might batch more stuff
  // together more than necessary.
  mostRecentlyUpdatedRoot = root;

  return root;
}

ensureRootIsScheduled is then called:

function ensureRootIsScheduled(root: FiberRoot, currentTime: number) {
  const existingCallbackNode = root.callbackNode;
  ...
  // Check if there's an existing task. We may be able to reuse it.
  if (existingCallbackNode !== null) {
    const existingCallbackPriority = root.callbackPriority;
    if (existingCallbackPriority === newCallbackPriority) {
      // The priority hasn't changed. We can reuse the existing task. Exit.
      return;
    }
    // The priority changed. Cancel the existing callback. We'll schedule a new
    // one below.
    cancelCallback(existingCallbackNode);
  }

  // Schedule a new callback.
  let newCallbackNode;
  if (newCallbackPriority === SyncLanePriority) {
    // Special case: Sync React callbacks are scheduled on a special
    // internal queue
    // Enters here
    newCallbackNode = scheduleSyncCallback(
      performSyncWorkOnRoot.bind(null, root),
    );
  } else if (newCallbackPriority === SyncBatchedLanePriority) {
    ...
  } else {
    ...
  }

  // console.log(root.callbackNode, newCallbackNode);

  root.callbackPriority = newCallbackPriority;
  root.callbackNode = newCallbackNode;
}

Here a task is scheduled using the React scheduler (see the React source for more information about the scheduler in Concurrent mode):

    newCallbackNode = scheduleSyncCallback(
      performSyncWorkOnRoot.bind(null, root),
    );
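For reference, here is a rough sketch of what scheduleSyncCallback does in the React 17 source (simplified, details trimmed): the callback is pushed onto an internal syncQueue, and only the first call schedules a Scheduler task at immediate priority to flush that queue; the returned fakeCallbackNode is what later ends up on root.callbackNode.

// Simplified sketch of scheduleSyncCallback (React 17), details trimmed
function scheduleSyncCallback(callback) {
  // Push this callback into an internal queue. It will be flushed in the next
  // tick at the earliest, or sooner if something calls flushSyncCallbackQueue.
  if (syncQueue === null) {
    syncQueue = [callback];
    immediateQueueCallbackNode = Scheduler_scheduleCallback(
      Scheduler_ImmediatePriority,
      flushSyncCallbackQueueImpl,
    );
  } else {
    // A flush is already scheduled; just add to the existing queue.
    syncQueue.push(callback);
  }
  return fakeCallbackNode;
}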

Then mount the task to root:

  root.callbackNode = newCallbackNode;

When setFlag executes, scheduleUpdateOnFiber and ensureRootIsScheduled are entered again, but this time root.callbackNode is no longer null, so this branch is taken:

const existingCallbackNode = root.callbackNode;
const nextLanes = getNextLanes(
  root,
  root === workInProgressRoot ? workInProgressRootRenderLanes : NoLanes,
);
// This returns the priority level computed during the `getNextLanes` call.
const newCallbackPriority = returnNextLanesPriority();
...
if (existingCallbackNode !== null) {
  const existingCallbackPriority = root.callbackPriority;
  if (existingCallbackPriority === newCallbackPriority) {
    // The priority hasn't changed. We can reuse the existing task. Exit.
    return;
  }
  // The priority changed. Cancel the existing callback. We'll schedule a new
  // one below.
  cancelCallback(existingCallbackNode);
}

Here, because existingCallbackNode is not null and the priority of the update has not changed, the function returns immediately. So across the whole click only one update task is scheduled, triggered by setCount. That task (performSyncWorkOnRoot) then runs once, in this code path flushed by flushSyncCallbackQueue when executionContext returns to NoContext (see the finally block of discreteUpdates above), processing both updates in a single render.
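Putting the pieces together, the flow of a single click in Legacy mode looks roughly like this (a simplified outline based on the snippets above):

discreteUpdates                      -> executionContext |= DiscreteEventContext (4)
  batchedEventUpdates                -> executionContext |= EventContext (now 6)
    handleClick
      setCount -> scheduleUpdateOnFiber -> ensureRootIsScheduled (schedules performSyncWorkOnRoot)
      setFlag  -> scheduleUpdateOnFiber -> ensureRootIsScheduled (task exists, same priority, returns early)
  (finally) executionContext restored to 4
(finally) executionContext restored to NoContext -> flushSyncCallbackQueue -> performSyncWorkOnRoot
          -> a single Render/Commit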

When batching fails

However, batching sometimes fails, for example when two consecutive updates are issued inside setTimeout:

import { useState, useLayoutEffect } from "react";
import * as ReactDOM from "react-dom";

function App() {
  const [count, setCount] = useState(0);
  const [flag, setFlag] = useState(false);

  function handleClick() {
    setTimeout(() => {
      // A re-render is triggered each time
      setCount((c) => c + 1);
      setFlag((f) => !f);
    });
  }

  return (
    <div>
      <button onClick={handleClick}>Next</button>
      <h1 style={{ color: flag ? "blue" : "black" }}>{count}</h1>
      <LogEvents />
    </div>
  );
}

function LogEvents(props) {
  useLayoutEffect(() => {
    console.log("Commit");
  });
  console.log("Render");
  return null;
}

const rootElement = document.getElementById("root");
ReactDOM.render(<App />, rootElement);

After handleClick returns, discreteUpdates and batchedEventUpdates have already restored executionContext in their finally blocks:

export function discreteUpdates<A, B, C, D, R>(fn: (A, B, C) => R, a: A, b: B, c: C, d: D): R {
  const prevExecutionContext = executionContext;
  executionContext |= DiscreteEventContext;

  if (decoupleUpdatePriorityFromScheduler) {
    ...
  } else {
    try {
      ...
    } finally {
      executionContext = prevExecutionContext;
      ...
    }
  }
}

export function batchedEventUpdates<A, R>(fn: (A) => R, a: A): R {
  const prevExecutionContext = executionContext;
  executionContext |= EventContext;
  try {
    return fn(a);
  } finally {
    executionContext = prevExecutionContext;
    ...
  }
}

So by the time the setTimeout callback runs, executionContext is already back to 0 (NoContext). When setCount executes, we reach scheduleUpdateOnFiber again:

export function scheduleUpdateOnFiber(fiber: Fiber, lane: Lane, eventTime: number): FiberRoot | null {
  ...
  if (lane === SyncLane) {
    if (
      // Check if we're inside unbatchedUpdates
      (executionContext & LegacyUnbatchedContext) !== NoContext &&
      // Check if we're not already rendering
      (executionContext & (RenderContext | CommitContext)) === NoContext
    ) {
      ...
    } else {
      // Comes here
      ensureRootIsScheduled(root, eventTime);
      schedulePendingInteractions(root, lane);
      if (executionContext === NoContext) {
        // Flush the synchronous work now, unless we're already working or inside
        // a batch. This is intentionally inside scheduleUpdateOnFiber instead of
        // scheduleCallbackForFiber to preserve the ability to schedule a callback
        // without immediately flushing it. We only do this for user-initiated
        // updates, to preserve historical behavior of legacy mode.
        resetRenderTimer();
        flushSyncCallbackQueue();
      }
    }
  } else {
    ...
  }
  ...
}

After ensureRootIsScheduled executes, flushSyncCallbackQueue is called because executionContext is NoContext (0). This immediately runs the update task that ensureRootIsScheduled just scheduled.

After that update completes, the subsequent setFlag goes through exactly the same logic, so each setState produces its own render.
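With the LogEvents component from the example, one click therefore prints two Render/Commit pairs in React 17:

Render
Commit
Render
Commit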

To avoid losing batching in asynchronous code such as timers, we can use unstable_batchedUpdates:

import { unstable_batchedUpdates } from 'react-dom';
...
function handleClick() {
  setTimeout(() => {
    unstable_batchedUpdates(() => {
      // Trigger a re-render
      setCount((c) => c + 1);
      setFlag((f) => !f);
    });
  });
}
...

There is nothing magical about unstable_batchedUpdates: it simply adds BatchedContext to executionContext before running the callback and restores it afterwards. Because executionContext is non-zero while the callback runs, scheduleUpdateOnFiber no longer flushes after each setState; the single flush happens in the finally block:

export function batchedUpdates<A, R>(fn: (A) => R, a: A): R {
  const prevExecutionContext = executionContext;
  executionContext |= BatchedContext;
  try {
    return fn(a);
  } finally {
    executionContext = prevExecutionContext;
    if (executionContext === NoContext) {
      // Flush the immediate callbacks that were scheduled during this batch
      resetRenderTimer();
      flushSyncCallbackQueue();
    }
  }
}
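With the two updates wrapped in unstable_batchedUpdates, a click again prints only a single Render/Commit pair:

Render
Commit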

Automatic Batching

Starting with React 18, updates are batched automatically in all situations, but you need to use createRoot to enable Concurrent mode (see the React source for more on time slicing in Concurrent mode). Here is the same timer example:

import { useState, useLayoutEffect } from "react";
import * as ReactDOM from "react-dom";

function App() {
  const [count, setCount] = useState(0);
  const [flag, setFlag] = useState(false);

  function handleClick() {
    setTimeout(() => {
      // React 18 only triggers a single re-render
      setCount((c) => c + 1);
      setFlag((f) => !f);
    });
  }

  return (
    <div>
      <button onClick={handleClick}>Next</button>
      <h1 style={{ color: flag ? "blue" : "black" }}>{count}</h1>
      <LogEvents />
    </div>
  );
}

function LogEvents(props) {
  useLayoutEffect(() => {
    console.log("Commit");
  });
  console.log("Render");
  return null;
}

const rootElement = document.getElementById("root");
// Use concurrent mode
ReactDOM.createRoot(rootElement).render(<App />)
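Running this with the React 18 createRoot entry point, a click prints only one Render/Commit pair even though both updates happen inside setTimeout:

Render
Commit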

So, how does it work?

When setCount executes, we reach scheduleUpdateOnFiber:

export function scheduleUpdateOnFiber(fiber: Fiber, lane: Lane, eventTime: number): FiberRoot | null {
  ...
  if (lane === SyncLane) {
    ...
  } else {
    // Enters this branch
    // Schedule a discrete update but only if it's not Sync.
    if (
      (executionContext & DiscreteEventContext) !== NoContext &&
      // Only updates at user-blocking priority or greater are considered
      // discrete, even inside a discrete event.
      (priorityLevel === UserBlockingSchedulerPriority ||
        priorityLevel === ImmediateSchedulerPriority)
    ) {
      // This is the result of a discrete event. Track the lowest priority
      // discrete update per root so we can flush them early, if needed.
      if (rootsWithPendingDiscreteUpdates === null) {
        rootsWithPendingDiscreteUpdates = new Set([root]);
      } else {
        rootsWithPendingDiscreteUpdates.add(root);
      }
    }

    // Schedule other updates after in case the callback is sync.
    ensureRootIsScheduled(root, eventTime);
    schedulePendingInteractions(root, lane);
  }

  // We use this when assigning a lane for a transition inside
  // `requestUpdateLane`. We assume it's the same as the root being updated,
  // since in the common case of a single root app it probably is. If it's not
  // the same root, then it's not a huge deal, we just might batch more stuff
  // together more than necessary.
  mostRecentlyUpdatedRoot = root;

  return root;
}

ensureRootIsScheduled is still called to schedule an update task, just as before.

When setFlag runs, ensureRootIsScheduled again returns early because a callbackNode already exists on root and the update priority has not changed, so the two updates end up sharing the single task that was already scheduled.

Comparing Legacy mode with Concurrent mode, the key difference is this extra piece of code on the Legacy path:

  if (executionContext === NoContext) {
    // Flush the synchronous work now, unless we're already working or inside
    // a batch. This is intentionally inside scheduleUpdateOnFiber instead of
    // scheduleCallbackForFiber to preserve the ability to schedule a callback
    // without immediately flushing it. We only do this for user-initiated
    // updates, to preserve historical behavior of legacy mode.
    resetRenderTimer();
    flushSyncCallbackQueue();
  }

As the comment explains, this code exists in Legacy mode to preserve its historical behavior: a sync update scheduled outside any batch (executionContext === NoContext) is flushed immediately, which is exactly why batching breaks inside setTimeout in React 17.

Conclusion

This article walked through how React 17 batches updates in Legacy mode and why that batching fails in some scenarios, and compared it with automatic batching in React 18's Concurrent mode. In short, Legacy mode tracks whether it is inside a batch with the executionContext variable and flushes sync work immediately whenever that variable is NoContext, while Concurrent mode decides whether to start a new update task or reuse an already scheduled one by checking whether a task has already been scheduled on root and whether the update priority has changed.

Feel free to follow the official account "front-end tour", and let's journey through the front end together.