
Background

In business development, we often run into situations where every request depends on some prerequisite information, such as a token or a user ID, and that information itself has to be obtained asynchronously.

The solution

There are two things that come to mind immediately:

  1. SSR server rendering
  2. Front-end request interception

SSR is the easier of the two to understand: the server requests the prerequisite information and injects the result into the returned HTML or URL, and the front end simply reads it, so this article won't dwell on it. Let's focus on the second option: how do we solve this purely on the front end?

General plan

I’m sure the smart reader will immediately think:

Simple! Intercept the method that initiates requests. Before each request goes out, fetch the prerequisite information first; once it arrives, cache the data so subsequent requests don't have to fetch it again.

Let's take the token request scenario as an example and write the code.

// fetch.js
let token = null;

const requestToken = () => {
  return new Promise((resolve, reject) => {
    if (token) {
      return resolve(token);
    }
    setTimeout(() => {
      token = 'this is token';
      console.log('Requested token successfully once');
      resolve(token);
    }, 300);
  });
};

const request = (args) => {
  return new Promise(async (resolve, reject) => {
    try {
      const token = await requestToken();
      setTimeout(() => {
        resolve(`${args} request succeeded. The token is ${token}`);
      }, 300);
    } catch (error) {
      reject('Request failed');
    }
  });
};

export const getUserInfo = () => request('getUserInfo');

export const getList = () => request('getList');

export const getDetail = () => request('getDetail');

As you can see, we use setTimeout to simulate asynchronous requests, encapsulate the requestToken and request functions, and expose three request functions: getUserInfo, getList, and getDetail.

After the first render, we fire the three request functions and check whether the token is requested only once.

// homepage.jsx
import React from 'react';
import { getUserInfo, getList, getDetail } from './fetch.js';

const Homepage = () => {
  React.useEffect(() => {
    Promise.all([
      getUserInfo(),
      getList(),
      getDetail(),
    ]).then(res => {
      console.log('Concurrent requests succeeded', res);
    }).catch(e => {
      console.error('Concurrent requests failed', e);
    });

    setTimeout(() => {
      getUserInfo().then(res => {
        console.log('Request after the concurrent batch completed', res);
      }).catch(e => {
        console.error('Second request failed', e);
      });
    }, 2000);
  }, []);

  return (
    <div>This is homepage.</div>
  );
};

export default Homepage;

OK, let's run it and look at the console.

(Screenshot: the problem is that requestToken is requested multiple times)

As you can see, during the first concurrent batch, requestToken was requested 3 times, because none of the calls had returned yet when the others started. By the time the later getUserInfo runs, the token has already been cached, so no further request is made.

What problems does this cause?

  1. Repeated requestToken requests within a single visit are pointless and waste bandwidth
  2. Before HTTP/2, browsers limited how many requests could be in flight at once; redundant requestToken calls occupy those request slots, inevitably delaying other requests and degrading the user experience

As a performance-obsessed front-end developer, I find this unacceptable. So how do we optimize?

Optimized version

The idea

We write a higher-order function, fetchOnce, that wraps the real request function. Each call pushes the real request function onto a request queue, and the queued functions are then executed one by one. As soon as one succeeds, its result is cached and returned; if an attempt fails, we keep trying the remaining queued functions until the queue is empty.

Beyond that, there are a few details to consider.

How do we block the duplicate requests?

This is easy: just add a lock. While a request is in flight the lock is held, so subsequent calls don't proceed; when a request succeeds, or all of them fail, the lock is released.

When a request succeeds, or all requests fail, how do we deliver the outcome to every caller?

Each call to the wrapped function returns a promise, so we push that promise's resolve and reject methods onto a promise queue. When one request succeeds, every resolve in the queue is executed; when all requests fail, every reject in the queue is executed.
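Before the full implementation, here is a minimal sketch of just the lock-plus-queue idea (the name guardedFetch is illustrative; unlike the full fetchOnce, this sketch rejects every waiter on the first failure and does not cache the result):

```javascript
// Minimal sketch: a boolean lock plus a waiter queue.
// Callers that arrive while a request is in flight only enqueue themselves.
let lock = false;
const waiters = [];

const guardedFetch = (doRequest) => new Promise((resolve, reject) => {
  waiters.push({ resolve, reject });
  if (lock) return;          // another call is already requesting
  lock = true;
  doRequest()
    .then(value => {
      lock = false;          // reopen the lock on success
      while (waiters.length) waiters.shift().resolve(value);
    })
    .catch(err => {
      lock = false;          // reopen the lock on failure too
      while (waiters.length) waiters.shift().reject(err);
    });
});
```

Three concurrent guardedFetch calls trigger doRequest only once; the other two simply wait for the shared outcome.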

With this idea in place, we can write the code right away.

// fetchonce.js
// Higher-order function; the argument is the actual request function
const fetchOnce = (fn) => {
  // Queue of real request functions
  const fnQueue = [];
  // Promise queue: { resolve, reject } pairs for every caller
  const promiseQueue = [];
  // Error queue, used to collect each failure; returned when all fail
  const errors = [];

  // Used to cache the successful result
  let result;
  // Request lock
  let lock = false;

  // Consume the promise queue
  const dispatch = (isSuccess, value) => {
    while (promiseQueue.length) {
      const p = promiseQueue.shift();
      p[isSuccess ? 'resolve' : 'reject'](value);
    }
  };

  // Return the wrapped function
  return function (...args) {
    return new Promise(async (resolve, reject) => {
      // When there is a cached result, resolve with it directly
      if (result) {
        return resolve(result);
      }
      // Queue the real request function and the current resolve/reject
      fnQueue.push(fn);
      promiseQueue.push({ resolve, reject });
      // If locked, another call is already draining the queue
      if (lock) {
        return;
      }
      lock = true;
      // Iterate through the queue of request functions, executing one by one
      for (const func of fnQueue) {
        try {
          // If one succeeds: clear the request and error queues, cache the
          // result, resolve every pending promise, and release the lock
          const res = await func.apply(this, args);
          fnQueue.length = 0;
          errors.length = 0;
          result = res;
          dispatch(true, res);
          lock = false;
        } catch (e) {
          errors.push(e);
          // If all fail: reject every pending promise, clear the request
          // and error queues, and release the lock
          if (errors.length && errors.length === promiseQueue.length) {
            dispatch(false, [...errors]);
            fnQueue.length = 0;
            errors.length = 0;
            lock = false;
          }
        }
      }
    });
  };
};

export default fetchOnce;

We then wrap requestToken with this higher-order function and modify the request function to see what happens.

// fetch.js
import fetchOnce from './fetchonce.js';

const requestTokenOnce = fetchOnce(requestToken);

const request = (args) => {
  return new Promise(async (resolve, reject) => {
    try {
      const token = await requestTokenOnce();
      setTimeout(() => {
        resolve(`${args} request succeeded. The token is ${token}`);
      }, 300);
    } catch (error) {
      reject('Request failed');
    }
  });
};

Let's take a look at the console output on the page.

As expected, requestToken only actually requests the token once. Next, let's simulate a scenario where the request can fail.

Let's modify requestToken so that it randomly succeeds or fails.

const requestToken = () => {
  return new Promise((resolve, reject) => {
    if (token) {
      return resolve(token);
    }
    const random = Math.random() * 10;
    setTimeout(() => {
      if (random > 5) {
        token = 'this is token';
        console.log('Requested token successfully once');
        resolve(token);
      } else {
        console.error('Token request failed once');
        reject('Token request failed');
      }
    }, 300);
  });
};

Let's refresh a few times to watch how the token request behaves when failures occur.

(Screenshot: one failure, then success)

(Screenshot: two failures, then success)

(Screenshot: all three concurrent attempts failed, and the request after the batch succeeded)

(Screenshot: all attempts failed)

With fetchOnce, if any one of the queued requests succeeds, every caller succeeds; only when every attempt fails does the whole batch fail.
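To check these semantics without a browser, we can drive fetchOnce with a simulated endpoint that fails twice and then succeeds (a hypothetical test harness, not part of the original repo; it inlines fetchOnce from above so it runs standalone):

```javascript
// fetchOnce, reproduced from above so this snippet is self-contained
const fetchOnce = (fn) => {
  const fnQueue = [];
  const promiseQueue = [];
  const errors = [];
  let result;
  let lock = false;

  const dispatch = (isSuccess, value) => {
    while (promiseQueue.length) {
      const p = promiseQueue.shift();
      p[isSuccess ? 'resolve' : 'reject'](value);
    }
  };

  return function (...args) {
    return new Promise(async (resolve, reject) => {
      if (result) {
        return resolve(result);
      }
      fnQueue.push(fn);
      promiseQueue.push({ resolve, reject });
      if (lock) {
        return;
      }
      lock = true;
      for (const func of fnQueue) {
        try {
          const res = await func.apply(this, args);
          fnQueue.length = 0;
          errors.length = 0;
          result = res;
          dispatch(true, res);
          lock = false;
        } catch (e) {
          errors.push(e);
          if (errors.length && errors.length === promiseQueue.length) {
            dispatch(false, [...errors]);
            fnQueue.length = 0;
            errors.length = 0;
            lock = false;
          }
        }
      }
    });
  };
};

// A simulated token endpoint that fails on the first two attempts
let attempts = 0;
const flaky = () => new Promise((resolve, reject) => {
  attempts += 1;
  const n = attempts;
  setTimeout(() => (n < 3 ? reject(`fail #${n}`) : resolve('this is token')), 10);
});

const flakyOnce = fetchOnce(flaky);

// Three concurrent callers: the first two underlying attempts fail,
// the third succeeds, and every caller receives the successful token.
Promise.all([flakyOnce(), flakyOnce(), flakyOnce()])
  .then(res => console.log(res)); // → ['this is token', 'this is token', 'this is token']
```

A later call to flakyOnce() is served from the cached result, so the underlying endpoint is never hit a fourth time.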

Conclusion

We solved the problem of a prerequisite request being sent multiple times during the first render: fetchOnce queues concurrent callers behind a lock, retries until one attempt succeeds, caches the result, and broadcasts the outcome to every waiting promise.
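As an aside, when retry-until-success isn't needed, a lighter-weight variant of the same deduplication idea caches the in-flight promise itself instead of managing queues (a sketch, not from the original repo; onceByPromise is an illustrative name):

```javascript
// Dedupe by caching the in-flight promise rather than queueing callers.
const onceByPromise = (fn) => {
  let pending = null;
  return (...args) => {
    if (!pending) {
      pending = fn(...args).catch(err => {
        pending = null;    // drop the cache so a later call can retry
        throw err;
      });
    }
    return pending;
  };
};
```

Unlike fetchOnce, concurrent callers here all fail together if the single shared attempt fails; only a later call triggers a retry.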

The complete code can be viewed in the fetchOnce repository; friends who found this useful, please leave a like and a star~