Any piece of logic can be broken down into a model like this:

Event -> State -> Event -> State -> Event -> State…

Events cause state changes, which in turn initiate new events.
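The alternation above can be made concrete with a tiny framework-free sketch (all names here are hypothetical): each event mutates state, and each state change may initiate the next event.

```javascript
// Minimal illustrative loop: event -> state -> event -> state ...
function createLoop() {
  const log = [];
  let state = 0;

  function onStateChange(next) {
    log.push(`state:${next}`);
    // a state change can initiate the next event
    if (next < 3) emit('increment');
  }

  function emit(event) {
    log.push(`event:${event}`);
    state += 1; // the event causes a state change
    onStateChange(state);
  }

  return { start: () => emit('increment'), log };
}

const loop = createLoop();
loop.start();
// the log strictly alternates between events and states
console.log(loop.log.join(' -> '));
```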

“When XXXX occurs, the XXXX data changes”: the event comes first and the state comes second. What does that form look like?

Borrowing the diagrams from the cycle.js documentation: the dependency arrows run from event to state, and they sit on the event side.

That is, the event is actively initiated, while the state passively receives the change. Note this active/passive relationship.

But let’s rephrase it:

“When the XXXX data changes, execute XXXX” puts the state first and the event second, and the picture changes to this form:

As shown, the dependencies now live in Bar, the owner of the state, which can actively control how the process executes.

This shift between active and passive is the heart of reactive programming.

The real difficulty of reactive programming is this shift in thinking. Once you switch to a “reactive first” mindset, the learning curve flattens and most tasks become straightforward.

So let’s look at the React APIs that take dependency arrays. Which ones are active, and which are passive?

useEffect and useLayoutEffect are active APIs.

```javascript
useEffect(() => {
  if (a) {
    // ...
  }
}, [a, b, c, d])
```

The control structure lives inside useEffect, so useEffect is active, a reactive API.

This means that no matter where this structure sits, it is independent and autonomous. In other words, it achieves low coupling, plugs into React’s scheduling mechanism, and has a natural advantage in handling asynchrony.
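How does an active API like useEffect decide whether to run at all? A minimal framework-free sketch of the dependency-array check (an assumed, simplified model of React’s behaviour: an element-by-element `Object.is` comparison):

```javascript
// Returns true when the effect should re-run (simplified model).
function depsChanged(prev, next) {
  if (prev === null) return true; // first render always runs
  return next.some((dep, i) => !Object.is(dep, prev[i]));
}

let lastDeps = null;
let runs = 0;
function runEffect(effect, deps) {
  if (depsChanged(lastDeps, deps)) {
    effect();
    lastDeps = deps;
  }
}

runEffect(() => runs++, [1, 'a']); // first render: runs
runEffect(() => runs++, [1, 'a']); // same deps: skipped
runEffect(() => runs++, [2, 'a']); // first dep changed: runs
console.log(runs); // 2
```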

useMemo, useCallback, and even useState are passive APIs.

useCallback cannot decide when things change, useMemo merely follows state, and useState cannot decide how its state changes.

useMemo is harmless here, because its data does not automatically trigger any event.

However, when you use the return value of useCallback in useEffect, a huge trap emerges:

```javascript
const cb = useCallback(() => { /* ... */ }, [/* ... */])
useEffect(() => {
  // cb is not declared in this scope, so it is hard to
  // determine dependencies and control scheduling
}, [cb])
```

This produces a strange loop between effects and callbacks, such as effect (active) -> callback (passive? active?).

The dependency array makes useEffect active: within the effect it acts as the event initiator, and it should act on the setters from useState. Instead, it ends up invoking a passively controlled useCallback. Once the callback’s dependencies become coupled with the effect’s, a loop appears, and an endless loop is born:

```javascript
const cb = useCallback(() => {
  setA('') // a is written here
}, [a, b, c, d])
useEffect(() => {
  cb() // calling cb couples its dependencies into the effect
}, [cb, a]) // a is also depended on here
```

Avoiding such loops is simple:

try not to call a callback that has dependencies inside useEffect.

A dependency-free callback is just a value; it is neither active nor passive.

So how do you guarantee a dependency-free callback?

Very simply: take the whole event -> state -> event -> state flow and transform it into:

Event (useCallback) -> State -> Effect -> State

Isn’t that enough?
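Here is a framework-free sketch of that flow (a hypothetical mini store, not React itself): the callback only writes state, and an effect reacts to the state change and derives further state.

```javascript
// Tiny illustrative store: set() notifies watchers ("effects").
function createStore(initial) {
  let state = initial;
  const effects = [];
  return {
    get: () => state,
    set(next) {
      state = next;
      effects.forEach((fn) => fn(state)); // effects run on state change
    },
    watch: (fn) => effects.push(fn),
  };
}

const store = createStore({ query: '', result: null });

// "effect": reacts to query state and derives new state
store.watch((s) => {
  if (s.query && s.result === null) {
    store.set({ ...s, result: s.query.toUpperCase() });
  }
});

// "event callback": dependency-free, it only writes state
const onInput = (text) => store.set({ query: text, result: null });

onInput('hello');
console.log(store.get().result); // 'HELLO'
```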

MVI model

If we abstract IO to a high degree, it is easy to see that it is just a process like this:

Operation (input) -> Computation -> Output (perception) -> Human -> Operation (input)…

The input-handling part is designed as events, the output part as state, and the computation part as a set of state + effects.

This pattern, known as the MVI (Model-View-Intent) model, can be modeled straightforwardly:

The intent part is where useCallback fires.

We use useCallback to convert events into state.

Does this useCallback need any dependencies?

```javascript
const [a, setA] = useState()
const [b, setB] = useState()

const intentCb = useCallback(() => {
  setA('')
  setB('')
}, [])
```

A useCallback used this way has no dependencies at all, can be passed around safely, and composes well:

```javascript
const refresh = useCallback(() => {}, [])

useEffect(() => {
  refresh()
}, [refresh, a, b, c, d, e])
```

Combined, we get:

DOM SOURCE -> callback -> state -> effect -> state+memo -> DOM SINK

One thing that often gets overlooked is the view layer.

In MVI, DOM SINK describes the view layer. From the perspective of scheduling, not every user operation changes every piece of data, so the view needs to do something like filtering.

However, developers often forget this part of the code; for example, it is easy to get into the habit of returning an element directly:

```javascript
function SomeCompo() {
  // ...
  return <div>
    {/* state is easy to control, but the callback is hard to control,
        because the function is always new */}
    <SomeExpensiveCompo value={[state, someCb]}/>
  </div>
}
```

React’s official solution is to use useReducer, passing dispatch down and writing the state and dispatch into separate contexts:

```javascript
const [state, dispatch] = useReducer(() => {}, {})

return <State.Provider value={state}>
  <Dispatch.Provider value={dispatch}>{/* ... */}</Dispatch.Provider>
</State.Provider>
```

I think it’s stupid! Not least because logic that belongs to a single context must be broken apart and written into nested contexts, complicating the dependency-injection structure.

Most importantly, useReducer struggles to use the third-party Hooks ecosystem (react-use, SWR, and so on) and lacks adequate async support.

Of course, this is true of any useReducer used for modularity, to say nothing of unified state-management libraries such as Redux, with their single global context and no ability to initialize state in separate slices.

(A later note: although I had long been familiar with a variety of state-management libraries when I wrote this, my view of useReducer here is still too one-sided. Later articles explain the use of useReducer in detail; they can be read step by step.)

For this problem, I think the real solution is to treat the pain right where it hurts:

  1. Use dependency-free callbacks in place of dispatch (dispatch is already dependency-free and never changes)

  2. Wrap the JSX in useMemo

Yes: your goal is to control component scheduling, the DOM sink.

You should act directly on the consumption phase (the JSX element); it would be silly to move this logic forward into the model phase.

```javascript
function SomeCompo() {
  return <div>
    {/* refresh the component only when a, b and c change */}
    {useMemo(() => <SomeExpensiveCompo value={[state, someCb]}/>, [a, b, c])}
  </div>
}
```

A simple fix solves the problem. Why reach for useReducer?

useReducer is a contravariant, functional-style state machine (similar in shape to a class wrapper). But JS, and even TS, is not a functional language, and TS’s contravariance support is poor; with TS you should lean toward covariant types and use the self-describing nature of classes to aid development.

Personally, I am strongly opposed to the reducer state mechanism in JS/TS. JS/TS has neither pattern matching nor complete variance support; most of the time the developer polices the code by hand, simulates the types with strings, and fakes pattern matching with if/switch statements. There is really no reason for it to exist.

To be clear, I am opposed to reducers in JS/TS; on ReasonML I think they are great.

That said, I have no experience with Flow + React development, so perhaps a stricter checking type system makes this style pleasant; I just do not hold out much hope for JS/TS-based environments.

You will notice that I drew on the cycle.js documentation for both active/passive programming and the MVI model, and cycle.js is built on (or borrows from) RxJS.

Yes, behind the MVI model and the active/passive idea of reactive development lies:

Reactive streams

  1. An IO is abstracted into intent, model, and view: the intent initiates, the model processes, and the view settles the result

  2. In active mode it is an event-driven stream (Rx, xstream); in passive mode it is a data-driven stream (React’s main mode)

That is, either event -> event -> event or data -> data -> data.

In other words, if your application logic is based on useCallback + State, it is event-driven (fully reactive), and if your application logic is based on useEffect + State, it is data-driven (fully active).
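The contrast can be sketched without any framework (illustrative names only): in the event-driven style the handler does the work itself, while in the data-driven style the handler only records data and a watcher reacts to the change.

```javascript
const results = [];

// Event-driven: the logic lives in the handler itself
function onClickEventDriven(value) {
  results.push(`handled:${value}`);
}

// Data-driven: the handler just writes data...
let pending = null;
function onClickDataDriven(value) {
  pending = value;
  notify();
}
// ...and a separate watcher (an "effect") consumes the data
function notify() {
  if (pending !== null) {
    results.push(`reacted:${pending}`);
    pending = null;
  }
}

onClickEventDriven(1);
onClickDataDriven(2);
console.log(results); // ['handled:1', 'reacted:2']
```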

React is better suited to a data-driven model. Why?

React doesn’t fully proxy events.

In other words, React is not aware of every event. For synthetic events you can simply call a useCallback, but what about non-synthetic ones (setTimeout, promises, sockets, Web Workers, media streams)?

You would probably end up calling a useCallback inside useEffect:

```javascript
const handleTimeout = useCallback(() => {
  // timeout
}, [])
useEffect(() => {
  setTimeout(handleTimeout)
}, [handleTimeout])
```

Each of these events must be handled with great care.

However, there is a special pattern that can be used in these places:

Action mode – execute functions according to React’s scheduling

```javascript
const [action, dispatch] = useState(() => () => '')

useEffect(() => {
  // the function's arguments
  const params = action()
  // the function's logic goes here
}, [action])

// somewhere else; note the double arrow: setState treats a function
// argument as an updater, so the updater must return the new action
dispatch(() => () => 'new param')
```

This useEffect is in effect writing a useCallback, except that React controls the scheduling of this function:

  1. It is called asynchronously
  2. It runs only once per event loop

The second point can be handled by collecting the parameters into a set, while the first is basically a misconception.
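A sketch of the “set of parameters” idea (an assumed approach, not a React API): each dispatch appends to a list, so a reaction that runs only once per tick still sees every parameter.

```javascript
const queue = [];
const handled = [];

// each dispatch only records its parameter
function dispatch(param) {
  queue.push(param);
}

// runs once per "scheduling round", like a batched effect
function flush() {
  while (queue.length) handled.push(queue.shift());
}

dispatch('a');
dispatch('b'); // same tick: a single flush still sees both
flush();
console.log(handled); // ['a', 'b']
```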

With this pattern, asynchronous events like setTimeout become very easy to handle:

```javascript
const [action, dispatch] = useState(() => () => {})

useEffect(() => {
  console.log('timeout')
}, [action])

useEffect(() => {
  // the updater returns a new function, so `action` changes
  // and the effect above re-runs
  setTimeout(() => { dispatch(() => () => {}) })
}, [])
```

Data-driven – a stupid way to solve complex problems

Data-driven code lacks the highly abstract tools of event-driven code, such as switchMap, merge, combine, and debounce (indeed, data-driven code does not even need debounce: one timer plus one flag will do), but it is remarkably efficient and stable.

In data-driven thinking there is effectively no asynchrony: the basic implementation of asynchrony (communication over shared memory) is itself data-driven.

Take setTimeout: treat the trigger as an (almost) empty action, the wait as a waiting state, and the callback firing as an over state, and the asynchrony disappears.
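That description can be sketched as a tiny synchronous state machine (hypothetical state names `waiting` and `over`; in real code the setTimeout callback would call `fire`):

```javascript
// Treat a timer purely as data: a state that moves forward.
function createTimer() {
  let state = 'idle';
  return {
    get state() { return state; },
    start() { state = 'waiting'; }, // trigger: an (almost) empty action
    fire() { state = 'over'; },     // timer callback: a plain state write
  };
}

const timer = createTimer();
timer.start();
const during = timer.state; // 'waiting'
timer.fire();               // real code: setTimeout(() => timer.fire(), ms)
console.log(during, timer.state); // waiting over
```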

It is easy to understand why many hooks tools encapsulate this:

```javascript
/* swr */
const { data, error, loading } = useSWR(key, fetcher)

/* react-use */
const [state, doFetch] = useAsyncFn(async () => {}, [])
```

For simplicity and synchronization, an even nicer shape would be `const [state, waiting, dispatch] = useAsyncFn(...)`.

However, React has a major drawback:

Data-driven is not functional

Yes, data-driven code is a poor fit for functional style:

  • A function call already carries two kinds of state: whether it was called and its return value, plus call counts and errors, and the set of parameters when several calls overlap
  • An asynchronous function call additionally has loading, and, for scheduling control, past loading durations, call times, launch times, and their histories
  • A continuously firing event carries an unbounded amount of data and accompanying descriptions

This makes the number of states very large, and managing them all becomes a genuinely tricky problem.

A function plus a return value? Emulating classes with functions (without inheritance; inheritance itself should be discarded) works poorly here: no extra descriptive information, no self-description.

Wrapped state like this is better managed in a Go-interface or anemic-model style.

Yes, Go’s asynchronous development is also data-driven: with select and for range over a channel, developers easily write lock-free asynchronous code. Partly because of this, Go does not need functional style and has had little motivation to support generics (contravariant types).

React’s support here is very poor, never mind self-description: for a large set of states there is no auto-generated documentation and no auto-generated charts or statistics, and those states only degrade your development experience.

React also lacks full event-driven support, and using event-driven tools like RxJS to develop React is not a great experience either.

This is React’s biggest drawback, but it is also part of why React is React.