The motivation for organizing this JS knowledge system was inspired by the classic series of "soul-searching" questions.

Faced with so many new technologies that keep iterating, I always feel that I don't have enough time to learn and that what I learn doesn't stick, and I worry that my focus was wrong from the start. My focus should not be on flashy technologies, but on building my own knowledge system.

Although I write code every day, many of the concepts behind what I write sound familiar, yet I can't quite put my finger on them. To clear up these questions in my mind, I started building this knowledge system.

This is the second article in a two-part series on the EventLoop. It extends the topic to macrotasks, microtasks, and async, and then walks through how some common APIs can be implemented.

Just as those soul-searching questions inspired me, I hope this "knowing what" series inspires you. Since my personal knowledge is limited, if anything here is misunderstood, please point it out and correct me.

11 EventLoop

Each renderer process has a main thread, and the main thread is very busy dealing with DOM, styling, layout, JavaScript tasks, and various input events. For so many different types of tasks to run smoothly in the main thread, you need a system for scheduling them, and that’s the message queue and event loop system we’re going to talk about today.

1. Message queues and event loops – how does the browser page main thread work

To gain a better understanding of the event loop mechanism, let’s start with the simplest scenario and walk through how the main thread of the browser page works.

Use a single thread to process scheduled tasks

Let’s start with the simplest scenario, such as the following sequence of tasks:

  • Task 1: 1 + 2
  • Task 2: 20 / 5
  • Task 3: 7 * 8
  • Task 4: print the results of Task 1, Task 2, and Task 3

We write all the task code in order into the main thread, and when the thread executes, these tasks will be executed in order in the thread; When all tasks are completed, the thread exits automatically. You can refer to the following figure to intuitively understand the execution process:

This is the first version of our main thread model: the thread executes once and exits.
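As a rough sketch (not how the browser is actually implemented), version 1 can be illustrated as a plain script whose hard-coded tasks run once, in order, after which nothing is left to do:

// Thread model v1 (sketch): hard-coded tasks executed once, in order
function mainThreadV1() {
  const task1 = 1 + 2
  const task2 = 20 / 5
  const task3 = 7 * 8
  console.log(task1, task2, task3) // task 4: print the results of tasks 1-3
  // nothing left to do: the "thread" exits
}
mainThreadV1()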

New tasks are processed while the thread is running

But not all tasks are uniformly scheduled before execution, and in most cases, new tasks are created while the thread is running. For example, during thread execution, a new task is received to compute “10+2”, then the above method does not handle this situation.

In order to be able to receive and execute new tasks while the thread is running, it needs to adopt the event loop mechanism. We can use a for loop to listen for new tasks.

There are two improvements to threading compared to the first version.

  • First, it introduces a loop: a for loop is added at the end of the thread's statements, so the thread keeps running.
  • Second, it introduces events: the thread can wait for user input while running. While waiting, the thread is suspended; once user input is received, the thread is woken up, performs the addition, and finally outputs the result.

By introducing an event loop, we can keep the thread "alive". Each time two numbers are entered, the thread prints their sum. You can refer to the following image for the improved thread:

This is our main thread model version 2: introducing event loops in threads
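A minimal Node.js sketch of version 2, assuming we simulate "waiting for the user to input two numbers" with the readline module (a real engine uses a blocking loop; the recursion here just keeps the process alive waiting for the next event):

// Thread model v2 (sketch): the thread stays alive, waiting for input events
const readline = require('readline')
const rl = readline.createInterface({ input: process.stdin, output: process.stdout })

function loop() {
  rl.question('Enter two numbers separated by a space: ', (answer) => {
    const [a, b] = answer.split(' ').map(Number)
    console.log('sum =', a + b) // the event wakes the thread up and the task runs
    loop()                      // then go back to waiting for the next event
  })
}
loop()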

Process tasks sent by other threads

In the second version of the threading model, all tasks come from within the thread. If another thread wants the main thread to perform a task, it cannot do so with the second version of the threading model.

So how do you design a thread model so that it can receive messages sent by other threads?

A common pattern is to use message queues. What is a message queue? Please refer to the following figure:

As you can see from the figure, a message queue is a data structure that holds tasks to be executed. It follows the queue's "first in, first out" behavior: tasks are added at the tail of the queue and fetched from the head of the queue.

With queues in place, we can continue to transform the threading model as shown below:

This is our main thread model version 3: queue + loop

As can be seen from the figure above, our transformation is as follows:

  • Add a message queue;
  • New tasks generated in the IO thread are added to the tail of the message queue;
  • The render main thread executes tasks by reading them in a loop from the head of the message queue, as sketched below.
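A minimal sketch of version 3, assuming the "IO thread" is just any code that calls postTask (the names here are made up for illustration):

// Thread model v3 (sketch): producers push tasks onto a queue, the loop drains it
const messageQueue = []

function postTask(task) {
  messageQueue.push(task)             // new tasks go to the tail of the queue
}

function runLoopOnce() {
  while (messageQueue.length > 0) {
    const task = messageQueue.shift() // tasks are fetched from the head (FIFO)
    task()
  }
}

postTask(() => console.log('task 1:', 1 + 2))
postTask(() => console.log('task 2:', 20 / 5))
runLoopOnce()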

Process tasks sent by other processes

By using message queues, we achieve message communication between threads. Cross-process tasks occur frequently in Chrome, so how do you handle tasks sent by other processes? Please refer to the following figure:

As you can see from the diagram, the renderer process has a special IO thread to receive messages from other processes. After receiving the messages, it assembles them into tasks and sends them to the render main thread. The following steps are the same as “processing tasks sent by other threads”.

Message queues and event loops

Through the above step by step analysis, we finally understand how the browser page main thread works. Browser pages are driven by an event loop mechanism. Each rendering process has a message queue. The main thread of the page executes the events in the message queue in order, such as executing JavaScript events, parsing DOM events, calculating layout events, user input events, and so on. The new event will be appended to the end of the event queue. So it’s the message queue and main thread loops that keep the pages running smoothly.

2. Task types in the message queue

Now let’s look at the types of tasks in the message queue. Check out the Official Chromium source code, which includes many types of internal messages such as input events (mouse scroll, click, move), microtasks, file reads and writes, Websockets, JavaScript timers, etc.

In addition, the message queue contains many page-related events, such as JavaScript execution, DOM parsing, style calculation, layout calculation, CSS animation, and so on.

All of these events are executed in the main thread, so when writing a Web application, you need to measure how long these events take and figure out how to solve the problem of individual tasks taking too long on the main thread.

3. How to exit safely

When the page decides to exit, how do we make sure it can exit safely? The main thread sets an exit flag variable when it decides to exit the current page, and at the end of each task it checks whether that flag has been set.
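Continuing the queue/loop sketch above, the exit flag might look like this (keepRunning and requestExit are made-up names for illustration):

// Exit-flag sketch: the flag is checked at the end of every task via the loop condition
const queue = [
  () => console.log('task A'),
  () => requestExit(),            // some task decides the page should exit
  () => console.log('never runs') // no task after the flag is set gets executed
]
let keepRunning = true
function requestExit() { keepRunning = false }

while (keepRunning && queue.length > 0) {
  const task = queue.shift()
  task()
}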

4. Disadvantages of single-threaded pages

Macro and micro Tasks: How to handle high-priority tasks.

A typical scenario, for example, is to monitor DOM node changes (node insertions, modifications, deletions, etc.) and then process the corresponding business logic based on those changes. A common design is to use JavaScript to design a set of listening interfaces that the rendering engine calls synchronously when changes occur, a typical observer pattern.

There is a problem with this pattern, however, because the DOM changes very frequently, and if the corresponding JavaScript interface is called directly every time a change occurs, the current task will take longer to execute, resulting in less efficient execution.

If these DOM changes are made into asynchronous message events and added to the end of the message queue, then the real-time monitoring will be affected because many tasks may be queued before being added to the message queue.

That is to say, if DOM changes and synchronous notification is adopted, the execution efficiency of the current task will be affected. If the macro task mode is adopted, the real-time monitoring will be affected.

So how do you balance efficiency against real-time?

In response to this situation, microtasks are born. Let’s take a look at how microtasks trade off efficiency and real-time performance.

The tasks we put in the message queue are usually referred to as macrotasks, and each macrotask contains its own microtask queue. During the execution of a macrotask, if the DOM changes, the change is added to that microtask queue; the macrotask can keep executing without interruption, which solves the efficiency problem.

After the main function of a macrotask has finished, the rendering engine does not rush to execute the next macrotask; it first executes the microtasks in the current macrotask. Because the DOM change events are stored in the microtask queue, this solves the real-time problem.

Asynchronous: How to solve the problem that a single task takes too long to execute.

Because all tasks are executed in a single thread, only one task can be executed at a time, leaving all other tasks in a wait state. If one of the tasks takes too long to execute, then the next task has to wait a long time. Please refer to the following figure:

As you can see from the figure, if one of the JavaScript tasks takes too long to execute during the animation process and takes up the time of a single frame of animation, this will create a feeling of lag for the user, which is of course a very bad user experience. In this case, JavaScript can circumvent this problem through asynchrony, which means that the JavaScript task to be executed is delayed.

12 Macro and micro tasks

Single-threaded pages introduce microtasks to handle high-priority tasks. So what is the difference between microtasks and macrotasks, and how do they complement each other?

1. The macro task

Most of the tasks in the page are performed on the main thread. These tasks include:

  • Render events (such as parsing the DOM, calculating layout, drawing);
  • User interaction events (such as mouse click, page scrolling, zooming in and out, etc.);
  • JavaScript script execution events;
  • Network request completion, file read and write completion event, timer.

To coordinate the execution of these tasks methodically on the main thread, the page process introduces message queues and event loops, and the renderer maintains multiple message queues internally, such as delayed execution queues (timers) and regular message queues. The main thread then takes a for loop, continually fetching and executing tasks from these task queues. We call these tasks in the message queue macro tasks.

Macrotasks can meet most of our daily needs, but when high timing precision is required, they are not up to the job. Let's analyze why macrotasks struggle to satisfy tasks with high timing-precision demands.

Page rendering events, various IO completion events, JavaScript execution events, user interaction events and so on may be added to the message queue at any time, and adding them is controlled by the system; JavaScript code cannot precisely control where a task is added to the queue. Since there is no control over a task's position in the message queue, it is hard to control when the task starts.

Therefore, the time granularity of macro tasks is relatively large, and the time interval of execution cannot be precisely controlled, which does not meet some high real-time requirements, such as the requirement of monitoring DOM changes to be introduced later.

2. Micro tasks

An asynchronous callback

To understand microtasks, let’s take a look at asynchronous callbacks, which come in two main forms.

  • The first is to encapsulate the asynchronous callback function as a macro task, add it to the end of the message queue, and execute the callback function when the loop executes the task. This is easy to understand, as the setTimeout and XMLHttpRequest callbacks we described earlier are implemented this way.
  • The second method is executed after the main function completes but before the current macro task completes, usually in the form of a microtask.

What are microtasks

So what exactly is microtask?

A microtask is a function that needs to be executed asynchronously, after the completion of the main function but before the completion of the current macro task.

But to understand how the microtask system works, you have to look at the V8 engine.

We know that when JavaScript executes a script, V8 creates a global execution context for it, and at the same time that the global execution context is created, V8 also creates a queue of microtasks internally. As the name implies, this microtask queue is used to store microtasks, because in the current macro task execution process, sometimes there are multiple microtasks, this microtask queue is needed to store these microtasks. However, the microtask queue is for internal use in the V8 engine, so you can’t access it directly through JavaScript.

Timing of microtasks

That is, each macro task is associated with a microtask queue. Then, we need to analyze two important points in time — when the microtask is generated and when the microtask queue is executed.

Let's start by looking at how microtasks are generated. In modern browsers, there are two ways to generate microtasks.

  • The first approach is to use MutationObserver to monitor a DOM node, then modify that node through JavaScript, or add or remove child nodes. When the DOM node changes, a microtask recording the DOM change is generated.
  • The second approach is to use a Promise: a microtask is also produced when Promise.resolve() or Promise.reject() is called, as sketched below.
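A minimal sketch of the second way: calling Promise.resolve() queues the then callback as a microtask, so it runs after the current synchronous code but still within the current macrotask.

console.log('start')
Promise.resolve().then(() => console.log('microtask runs'))
console.log('end')
// prints: start, end, microtask runs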

Timing of microtask execution

Now that you have microtasks in the microtask queue, it’s time to look at when the microtask queue is executed.

Typically, when the JavaScript in the current macro task is about to complete, just as the JavaScript engine is about to exit the global execution context and clear the call stack, the JavaScript engine checks the microtask queue in the global execution context and executes the microtasks in the queue in order. The WHATWG refers to the point at which a microtask is performed as a checkpoint.

If a new microtask is created during the execution of a microtask, it is added to the microtask queue. The V8 engine executes the tasks in the microtask queue repeatedly until the queue is empty. That is, new microtasks generated during the execution of a microtask are not postponed to the next macro task, but continue to be executed in the current macro task.
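A small sketch of this behavior: a microtask queued from inside another microtask still runs before the next macrotask.

setTimeout(() => console.log('macrotask'), 0)

Promise.resolve().then(() => {
  console.log('microtask 1')
  // queued while a microtask is running, yet it still runs within the current macrotask
  Promise.resolve().then(() => console.log('microtask 2'))
})
// prints: microtask 1, microtask 2, macrotask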

3. Monitor DOM change method evolution

Let’s take a look at how microtasks are used in MutationObserver. MutationObserver is a set of methods for listening for DOM changes, which has always been a core requirement for front-end engineers.

For example, many Web applications use HTML and JavaScript to build their own custom controls, which, unlike some built-in controls, are not inherent. To work well with built-in controls, they must be able to adapt to content changes, respond to events, and user interaction. Therefore, Web applications need to monitor DOM changes and respond to them in a timely manner.

Polling detection

Early pages provided no support for listening to DOM changes, so polling was the only way to detect them, for example using setTimeout or setInterval to periodically check whether the DOM had changed. This approach is simple and crude, but it has two problems: if the interval is too long, reactions to DOM changes are not timely enough; if the interval is too short, a lot of useless work is wasted checking the DOM, making the page inefficient.
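A sketch of the polling approach, assuming we only care about the number of li elements on the page (the selector and the 500 ms interval are arbitrary choices for illustration):

// Polling-based DOM change detection (sketch)
let lastCount = document.querySelectorAll('li').length

setInterval(() => {
  const count = document.querySelectorAll('li').length
  if (count !== lastCount) {
    console.log('DOM changed')  // too long an interval reacts slowly,
    lastCount = count           // too short an interval wastes work on useless checks
  }
}, 500)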

Mutation Event

Mutation Events, introduced in 2000, adopted the observer design pattern: whenever the DOM changed, the corresponding event was triggered immediately as a synchronous callback.

The use of Mutation events solves the real-time problem because the JavaScript interface is invoked as soon as the DOM changes. But it is precisely this real-time behavior that causes serious performance problems, because every time the DOM changes, the rendering engine calls JavaScript, which incurs a significant performance overhead. It was because Mutation events caused page performance problems that they were deprecated and gradually removed from the Web standards.

MutationObserver

To address the performance problems caused by Mutation events' synchronous calls into JavaScript, DOM4 recommends using MutationObserver instead of Mutation events. The MutationObserver API can be used to monitor DOM changes, including attribute changes, node additions and removals, content changes, and so on.

Compared with Mutation Events, what improvements did MutationObserver make?

First, MutationObserver changes the response function to an asynchronous call, so that instead of triggering an asynchronous call every time the DOM changes, the asynchronous call can be triggered once after several DOM changes, and a data structure is used to record all DOM changes during that time. This allows frequent DOM manipulation without much impact on performance.

We mitigated the performance problem by making asynchronous calls and reducing the number of triggers, so how do we keep message notifications timely? At this point, microtasks come into play. Each time a DOM node changes, the rendering engine encapsulates the change record as a microtask and adds the microtask to the current microtask queue. This way, when it comes to checkpoints, the V8 engine will perform the microtasks sequentially.

To sum up, MutationObserver adopts an “asynchronous + microtask” strategy.

  • Asynchronous invocation solves the performance problem of synchronous calls.
  • Microtasks solve the real-time problem (see the sketch below).
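A minimal MutationObserver sketch: several synchronous DOM changes are batched into one asynchronous (microtask) callback invocation.

const observer = new MutationObserver((mutations) => {
  // runs once, at the microtask checkpoint, with all recorded changes
  console.log(`observed ${mutations.length} change(s) in one batch`)
})
observer.observe(document.body, { childList: true, subtree: true })

// three synchronous DOM changes -> a single asynchronous callback
for (let i = 0; i < 3; i++) {
  document.body.appendChild(document.createElement('div'))
}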

4. Talk about EventLoop

Let’s take a look at the sequence of EventLoop, macro, and micro tasks from a practical perspective. What does the following code output?

function bar() {
  console.log('bar')
  Promise.resolve().then(
    (str) => console.log('micro-bar')
  )
  setTimeout((str) => console.log('macro-bar'), 0)
}

function foo() {
  console.log('foo')
  Promise.resolve().then(
    (str) => console.log('micro-foo')
  )
  setTimeout((str) => console.log('macro-foo'), 0)

  bar()
}
foo()
console.log('global')
Promise.resolve().then(
  (str) => console.log('micro-global')
)
setTimeout((str) => console.log('macro-global'), 0)

Here’s a closer look at how V8 executes this JavaScript code.

(1) When V8 executes this code, it pushes the global execution context onto the call stack and creates an empty microtask queue in the execution context.

The state diagram of the message queue, main thread and call stack is as follows:

(2) When the call to function foo is executed, V8 first creates the execution context of foo and pushes it onto the stack. Promise.resolve is then executed, which triggers a micro-foo microtask that V8 adds to the microtask queue. The setTimeout method is then executed, which triggers a macro-foo macrotask that V8 adds to the message queue.

(3) foo calls bar. V8 creates the execution context of bar and pushes it onto the stack, then executes Promise.resolve, which triggers a micro-bar microtask that is added to the microtask queue. The setTimeout method is then executed, which triggers a macro-bar macrotask that is also added to the message queue.

The state diagram of the message queue, main thread and call stack is as follows:

(4) When the bar function finishes and exits, its execution context is popped off the stack; the foo function then finishes and exits, and its execution context is popped off the stack as well.

(5) Execution returns to the global code: 'global' is printed, then Promise.resolve is executed. This triggers a micro-global microtask, which V8 adds to the microtask queue. The setTimeout method is then executed, which triggers a macro-global macrotask that V8 adds to the message queue.

The state diagram of the message queue, main thread and call stack is as follows:

(6) When the code is about to finish, V8 destroys the code's environment object, and the destructor of the environment object is called (note: the destructor is a C++ concept). This is the checkpoint at which V8 executes microtasks: V8 checks the microtask queue, and if there are microtasks in it, V8 pulls them out and executes them in order. Since the tasks in the microtask queue are micro-foo, micro-bar, and micro-global, they execute in that order.

(7) After all the microtasks in the microtask queue are completed, the current macrotask is finished, and the main thread continues its loop of fetching and executing tasks. macro-foo, macro-bar, and macro-global are printed in first-in, first-out order.

The state diagram of the message queue, main thread and call stack is as follows:

Through the above analysis, executing this code, we find that the final print order is:

foo
bar
global
micro-foo
micro-bar
micro-global
macro-foo
macro-bar
macro-global

Let’s see if we really understand EventLoop and analyze the printout of this code.

async function foo() {
    console.log('foo')
}

async function bar() {
    console.log('bar start')
    await foo()
    console.log('bar end')
}

console.log('script start')

setTimeout(function () {
    console.log('setTimeout')
}, 0)

bar();

new Promise(function (resolve) {
    console.log('promise executor')
    resolve();
}).then(function () {
    console.log('promise then')
})

console.log('script end')
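For reference, tracing it the same way as above (in a modern engine, where awaiting an already-resolved promise resumes after a single microtask tick), the expected output should be:

script start
bar start
foo
promise executor
script end
bar end
promise then
setTimeout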

13 JS asynchronous solutions

1. What problems should asynchrony solve and the solution

The first thing to remember about asynchrony is that no matter which asynchronous solution you use, the task is still asynchronous. The various asynchronous solutions mainly aim to express asynchronous flow in a more synchronous-looking way, because our brains plan things in a linear, blocking, single-threaded, synchronous manner, while the raw callback form of asynchronous flow is nonlinear and out of order, which makes such code very hard to understand correctly.

What is asynchrony

"Asynchronous" simply means that a task is not completed in one continuous run. You can think of the task as being artificially split into two parts: the first part runs first, then other tasks run, and the second part runs when it is ready.

For example, if you have a task that reads a file for processing, the first step of the task is to make a request to the operating system to read the file. The program then performs other tasks, waits until the operating system returns the file, and then proceeds to the second part of the task (processing the file). This discontinuous execution is called asynchrony.

Accordingly, continuous execution is called synchronization. Because the execution is continuous, no other tasks can be inserted, so the program has to wait while the operating system reads the file from the hard disk.
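A small Node.js sketch of the contrast, reading this script's own file so it is self-contained:

const fs = require('fs')

// Synchronous: the program waits here until the OS returns the file contents
const dataSync = fs.readFileSync(__filename, 'utf-8')
console.log('sync read done, length:', dataSync.length)

// Asynchronous: the request is issued and the program keeps going; the callback runs later
fs.readFile(__filename, 'utf-8', (err, data) => {
  console.log('async read done, length:', data.length)
})
console.log('this line runs before the async read completes')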

Why does JS have asynchrony

Js is a single-threaded language, which can only execute one task at a time. If there are multiple tasks, they must be queued and executed in sequence. Only after the execution of the previous task is completed, the next task will be executed.

The advantage of this mode is that the implementation is relatively simple, the execution environment is relatively simple; The disadvantage is that as long as one task takes a long time, subsequent tasks must wait in line, which will delay the execution of the entire program.

For example, when JS runs in the browser there may be a large number of network requests, and it is unpredictable when a network resource will come back. Should the page just wait, stall, and do nothing? Of course not.

So JS was designed with asynchrony for this scenario: initiate the network request, set it aside, do other things first, and deal with the result when the network request returns. This keeps the page fluid.

Core principles of asynchronous implementation

Let’s take a look at the following code

var ajax = $.ajax({
    url: '/data/data1.json',
    success: function () {
        console.log('success')
    }
})

$.ajax() takes a configuration object with two properties, url and success, where url is the route to request and success is a function. The function passed in does not execute immediately; it waits until the request succeeds. A callback is a function that does not execute immediately, but only once the result is known.

JavaScript can implement asynchrony through callbacks, which allow the execution of a JavaScript task to be deferred. The core principle of asynchrony is to pass a callback as an argument to an asynchronous function and invoke the callback when the result comes back.

Common asynchronous operations

Common asynchronous operations in development are:

  • Network requests, such as ajax
  • I/O operations, such as readFile, readdir
  • Timing functions, such as setTimeout, setInterval

The evolution of asynchronous solutions

Asynchrony can be implemented with callback functions, promises, generators, Async/Await. The history of asynchronous solutions is shown below:

2. Callback functions

The problem with asynchronous programming: Discontinuous code logic

The single-threaded architecture of Web pages dictates asynchronous callbacks, and how do asynchronous callbacks affect how we code?

Suppose you have a download requirement and use XMLHttpRequest to implement it. For details, you can refer to the following code:


// Execution-status callbacks
function onResolve(response) { console.log(response) }
function onReject(error) { console.log(error) }

let xhr = new XMLHttpRequest()
xhr.ontimeout = function (e) { onReject(e) }
xhr.onerror = function (e) { onReject(e) }
xhr.onreadystatechange = function () { onResolve(xhr.response) }

// Set the request method, request URL, and whether the request is asynchronous
let URL = 'https://web.cn'
xhr.open('Get', URL, true);

// Set parameters
xhr.timeout = 3000 // Set the timeout period for the XHR request
xhr.responseType = "text" // Format of the data to be returned in the response
xhr.setRequestHeader("X_TEST", "web.cn")

// Make the request
xhr.send();

If we execute the above code, it outputs the result normally. However, there are five callbacks in this short piece of code. So many callbacks make the code incoherent and non-linear, which is very counter-intuitive. This is how asynchronous callbacks affect our coding style. Is there a way to solve this problem?

Sure, we can encapsulate the messy code and reduce the number of asynchronous callbacks.

Encapsulate asynchronous code to make the process linear

We can wrap up the code for the XMLHttpRequest request process, focusing on the input data and the output.

Let's do this in code. First, we save all the input HTTP request information in a request structure, including the request address, request headers, request method, referrer, whether the request is synchronous or asynchronous, security settings, and other information. The request structure is as follows:

// makeRequest is used to construct a request object
function makeRequest(request_url) {
    let request = {
        method: 'Get',
        url: request_url,
        headers: '',
        body: '',
        credentials: false,
        sync: true,
        responseType: 'text',
        referrer: ''
    }
    return request
}

We can then wrap the request process. Here we wrap all the request details into the XFetch function, which looks like this:

// [in] request: request information - request headers, timeout, response type, etc.
// [out] resolve: called back when the request succeeds
// [out] reject: called back when the request fails
function XFetch(request, resolve, reject) {
    let xhr = new XMLHttpRequest()
    xhr.ontimeout = function (e) { reject(e) }
    xhr.onerror = function (e) { reject(e) }
    xhr.onreadystatechange = function () {
        if (xhr.status === 200)
            resolve(xhr.response)
    }
    xhr.open(request.method, request.url, request.sync);
    xhr.timeout = request.timeout;
    xhr.responseType = request.responseType;
    // Add additional request information
    // ...
    xhr.send();
}

The XFetch function takes a Request as input, and then two callback functions, resolve and Reject, are called when the request succeeds, and Reject when the request fails.

With this in mind, we are ready to implement the business code as follows:

XFetch(makeRequest('https://web.cn'),
    function resolve(data) {
        console.log(data)
    }, function reject(e) {
        console.log(e)
    })

New problem: callback hell

The code above is fairly linear and works well in simple scenarios, but once you get to a more complex project, you’ll find that too many nested callback functions can easily set you up for callback hell. You can refer to the messy code below:

XFetch(makeRequest('https://web.cn/?category'),
    function resolve(response) {
        console.log(response)
        XFetch(makeRequest('https://web.cn/column'),
            function resolve(response) {
                console.log(response)
                XFetch(makeRequest('https://web.cn'),
                    function resolve(response) {
                        console.log(response)
                    }, function reject(e) {
                        console.log(e)
                    })
            }, function reject(e) {
                console.log(e)
            })
    }, function reject(e) {
        console.log(e)
    })

This code looks messy for two reasons:

  • The first is nested calls, where the following tasks rely on the result of the previous task’s request and execute the new business logic inside the callback function of the previous task, so that the code becomes very unreadable as the nesting level increases.
  • The second is the uncertainty of the task. There are two possible results (success or failure) for each task, so it is reflected in the code that the execution result of each task needs to be judged twice. Such an extra error handling for each task significantly increases the level of confusion in the code.

After analyzing the reasons, the solution to the problem is very clear:

The first is to eliminate nested calls; the second is to merge the error handling of multiple tasks.

Promise solves both of these problems for us. Let's take a look at how Promise eliminates nested calls and merges the error handling of multiple tasks.

3. Promise

How does Promise solve callback hell

First, we use Promise to refactor XFetch’s code, as shown in the following example:

function XFetch(request) {
  function executor(resolve, reject) {
      let xhr = new XMLHttpRequest()
      xhr.open('GET', request.url, true)
      xhr.ontimeout = function (e) { reject(e) }
      xhr.onerror = function (e) { reject(e) }
      xhr.onreadystatechange = function () {
          if (this.readyState === 4) {
              if (this.status === 200) {
                  resolve(this.responseText, this)
              } else {
                  let error = {
                      code: this.status,
                      response: this.response
                  }
                  reject(error, this)
              }
          }
      }
      xhr.send()
  }
  return new Promise(executor)
}

Next, we use XFetch to construct the request flow as follows:

var x1 = XFetch(makeRequest('https://web.cn/?category'))
var x2 = x1.then(value => {
    console.log(value)
    return XFetch(makeRequest('https://web.cn/column'))
})
var x3 = x2.then(value => {
    console.log(value)
    return XFetch(makeRequest('https://web.cn/'))
})
x3.catch(error => {
    console.log(error)
})

Promise uses three major techniques to solve callback hell:

  • Delayed binding of callback functions.
  • Return value penetration of callback functions.
  • Error bubbling.
Delayed binding of callback functions.
// Create the Promise object x1 and execute the business logic in the executor function
let x1 = new Promise(function executor(resolve, reject) {
    resolve(100)
})
// x1 defers binding the callback onResolve
x1.then(function onResolve(value) {
    console.log(value)
})

The delayed binding of the callback function is shown in the code by creating the Promise object x1 and executing the business logic in the executor passed to the Promise constructor. Only after the Promise object x1 has been created do we use x1.then to set the callback function. That is, the callback function is not declared up front, but passed in later through the then method.

Return value penetration

let x1 = new Promise(function executor(resolve, reject) {
    resolve(100)
})
let x2 = x1.then(function onResolve(value) {
    // The Promise returned by the callback penetrates to the outer chain: x2 adopts its state
    return new Promise(function executor(resolve, reject) {
        resolve(value + 100)
    })
})
x2.then(function onResolve(value) {
    console.log(value) // 200
})

What kind of Promise gets created depends on the value returned by the callback (onResolve), and the Promise created inside the callback needs to be returned to the outermost layer; this is how we get rid of nesting. The internally returned Promise is what the next chained call hangs off of.

The problem of multi-layer nesting of callback functions can be solved by using callback function delayed binding and return value penetration techniques as follows:

new Promise(function(resolve, reject){
    resolve(100)
}).then(function(value){
    return new Promise(function(resolve, reject){
        resolve(200)
    })
}).then(function(value){
    return new Promise(function(resolve, reject){
        resolve(300)
    })
})

The two techniques combine to produce the effect of chain calls.

Error “bubbling” technique
new Promise(function(resolve, reject){
    resolve(100)
}).then(function(value){
    return new Promise(function(resolve, reject){
        resolve(200)
    })
}).then(function(value){
    return new Promise(function(resolve, reject){
        resolve(300)
    })
}).catch(err => {
    // Error handling
})

Errors on a Promise object "bubble" backwards along the chain until they are handled by an onRejected callback or caught by a catch statement. Thanks to this bubbling behavior, there is no need to catch exceptions individually in each Promise object.

Through delayed callback binding, return value penetration, and error bubbling, we eliminate nested calls and repetitive error handling, making our code more elegant and linear.

Why did Promise introduce microtasks

The executor function in a Promise runs synchronously, but inside it there is usually an asynchronous operation that calls resolve when it finishes or reject when it fails; the callbacks registered for these outcomes enter the EventLoop as microtasks. But have you ever wondered why Promise uses microtasks for its callbacks?

The solution

Back to the problem itself, it’s really a matter of how to handle callbacks. To sum up, there are three ways:

  1. Use synchronous callbacks: wait until the asynchronous task completes before proceeding to subsequent tasks.
  2. Use asynchronous callbacks: place the callback function at the end of the macrotask queue.
  3. Use asynchronous callbacks: place the callback function at the end of the current macrotask.
Advantages and disadvantages compared

The first approach is clearly undesirable: the synchronization problem is obvious. The entire script blocks while waiting for the current task, so the tasks behind it cannot execute, and the waiting time, which could have been used to do other things, is wasted, leading to very low CPU utilization. There is another fatal problem: it cannot achieve delayed binding.

With the second approach, the resolve/reject callback would not be executed until all the macrotasks already in the queue are complete. If the current task queue is very long, the callback has to wait, causing the application to lag.

To address the problems of the approaches above, along with the need for delayed binding, Promise takes the third approach: it introduces microtasks and places the execution of the resolve (reject) callback at the end of the current macrotask.

In this way, the use of microtasks solves two major pain points:

    1. Using asynchronous callbacks instead of synchronous callbacks solves the problem of wasted CPU time.
    2. Executing the callback at the end of the current macrotask solves the timeliness problem of callback execution.

Now that the basic implementation idea of Promise is clear, and I’m sure you know why it’s designed this way, let’s take a step at a time to figure out how it’s designed internally.

How does Promise implement chain calls

From now on, let’s start implementing a fully functional Promise, digging into the details step by step. Let’s start with the chain call.

Simplified version implementation

Write the first version of the code first:

// Define the three states
const PENDING = "pending";
const FULFILLED = "fulfilled";
const REJECTED = "rejected";

function MyPromise(executor) {
  let self = this; // Cache the current Promise instance
  self.value = null;
  self.error = null;
  self.status = PENDING;
  self.onFulfilled = null; // Success callback function
  self.onRejected = null; // Failure callback function

  const resolve = (value) => {
    if (self.status !== PENDING) return;
    setTimeout(() => {
      self.status = FULFILLED;
      self.value = value;
      self.onFulfilled(self.value); // Execute the success callback in resolve
    });
  };

  const reject = (error) => {
    if (self.status !== PENDING) return;
    setTimeout(() => {
      self.status = REJECTED;
      self.error = error;
      self.onRejected(self.error); // Execute the failure callback in reject
    });
  };
  executor(resolve, reject);
}
MyPromise.prototype.then = function(onFulfilled, onRejected) {
  if (this.status === PENDING) {
    this.onFulfilled = onFulfilled;
    this.onRejected = onRejected;
  } else if (this.status === FULFILLED) {
    // If the status is fulfilled, execute the success callback directly and pass in the fulfilled value
    onFulfilled(this.value)
  } else {
    // If the status is rejected, execute the failure callback directly and pass in the failure reason
    onRejected(this.error)
  }
  return this;
}

As can be seen, the essence of Promise is a finite state machine, which has three states:

  • PENDING (waiting)
  • FULFILLED (successful)
  • REJECTED (failure)

For a Promise, the state change is irreversible, that is, after the waiting state changes to another state, it cannot be changed again.
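A tiny example of the irreversible state change, using a native Promise:

const p = new Promise((resolve, reject) => {
  resolve('first')
  reject(new Error('ignored'))  // has no effect: the state has already changed
  resolve('also ignored')       // also ignored for the same reason
})
p.then(value => console.log(value)) // prints: first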

Going back to the current version of Promise, though, there are some problems.

Set the callback array

First, only one callback can be registered; multiple callback bindings are not supported, as in the following example:

const fs = require('fs'); // assuming Node.js, as in the later examples

let promise1 = new MyPromise((resolve, reject) => {
  fs.readFile('./foo.js', 'utf-8', (err, data) => {
    if (!err) {
      resolve(data);
    } else {
      reject(err);
    }
  })
});

let x1 = promise1.then(data => {
  console.log("First demonstration", data);
});

let x2 = promise1.then(data => {
  console.log("Second demonstration", data);
});

let x3 = promise1.then(data => {
  console.log("Third demonstration", data);
});

What if I bind three callbacks that I want to execute together after resolve()?

onFulfilled and onRejected need to be changed into arrays, so that when resolve is called, the stored callbacks can be taken out and executed one by one.

self.onFulfilledCallbacks = [];
self.onRejectedCallbacks = [];
MyPromise.prototype.then = function(onFulfilled, onRejected) {
  if (this.status === PENDING) {
    this.onFulfilledCallbacks.push(onFulfilled);
    this.onRejectedCallbacks.push(onRejected);
  } else if (this.status === FULFILLED) {
    onFulfilled(this.value);
  } else {
    onRejected(this.error);
  }
  return this;
}

Next modify the parts of the resolve and Reject methods that perform callbacks:

// resolve
self.onFulfilledCallbacks.forEach((callback) => callback(self.value));
// reject
self.onRejectedCallbacks.forEach((callback) => callback(self.error));
Completing the chain call

Let’s test with the current code:

let fs = require('fs');
let readFilePromise = (filename) => {
  return new MyPromise((resolve, reject) => {
    fs.readFile(filename, 'utf-8', (err, data) => {
      if (!err) {
        resolve(data);
      } else {
        reject(err);
      }
    })
  })
}
readFilePromise('./foo.js').then(data => {
  console.log(data);
  return readFilePromise('./bar.js');
}).then(data => {
  console.log(data);
})

// contents of foo.js
// contents of foo.js

Huh? The contents of foo.js are printed twice; the second then never read the bar.js file.

Here’s the problem:

MyPromise.prototype.then = function(onFulfilled, onRejected) {
  // ...
  return this;
}

Writing it this way returns the first Promise every time, so the second Promise, the one produced inside the then callback, is simply ignored!

The implementation of then needs to be improved: we now need to pay attention to the return value of then.

MyPromise.prototype.then = function (onFulfilled, onRejected) {
  let bridgePromise;
  let self = this;
  if (self.status === PENDING) {
    return bridgePromise = new MyPromise((resolve, reject) => {
      self.onFulfilledCallbacks.push((value) => {
        try {
          // See? Here we get the result returned by the callback passed to then.
          let x = onFulfilled(value);
          resolve(x);
        } catch (e) {
          reject(e);
        }
      });
      self.onRejectedCallbacks.push((error) => {
        try {
          let x = onRejected(error);
          resolve(x);
        } catch (e) {
          reject(e);
        }
      });
    });
  }
  // ...
}

If the current state is PENDING, add the above function to the callback array, and when the Promise state changes, the corresponding callback array is iterated and executed.

But there are still a couple of problems at this stage:

  1. First, the case where the two parameters of then are not passed is not handled.
  2. Second, if the result returned by the then callback (the x above) is itself a Promise, resolving it directly with resolve(x) is not what we want.

How to solve these two problems?

First, handle the case where the parameters are not passed:

// If the success callback is not passed, give it a default function
onFulfilled = typeof onFulfilled === "function" ? onFulfilled : value => value;
// If the failure callback is not passed, rethrow the error
onRejected = typeof onRejected === "function" ? onRejected : error => { throw error };

Then handle the case where a Promise is returned:

function resolvePromise(bridgePromise, x, resolve, reject) {
  // If x is a Promise
  if (x instanceof MyPromise) {
    // Unwrap the Promise until the value is no longer a Promise
    if (x.status === PENDING) {
      x.then(y => {
        resolvePromise(bridgePromise, y, resolve, reject);
      }, error => {
        reject(error);
      });
    } else {
      x.then(resolve, reject);
    }
  } else {
    // If x is not a Promise, resolve it directly
    resolve(x);
  }
}

Then make the following changes in the then method implementation:

resolve(x)  ->  resolvePromise(bridgePromise, x, resolve, reject);

I want to emphasize the meaning of the resolve and reject parameters that are always passed in the recursive call. In fact, they control the state of the bridgePromise that is originally passed in, which is very important.

Next, we implement the logic when the Promise state is not PENDING.

Successful status then:

if (self.status === FULFILLED) {
  return bridgePromise = new MyPromise((resolve, reject) => {
    try {
      // When the status becomes fulfilled, self.value is already available
      let x = onFulfilled(self.value);
      // Instead of calling resolve(x) directly, run x through resolvePromise
      resolvePromise(bridgePromise, x, resolve, reject);
    } catch (e) {
      reject(e);
    }
  })
}

Failed state then:

if (self.status === REJECTED) {
  return bridgePromise = new MyPromise((resolve, reject) => {
    try {
      // When the status becomes rejected, self.error is already available
      let x = onRejected(self.error);
      resolvePromise(bridgePromise, x, resolve, reject);
    } catch (e) {
      reject(e);
    }
  });
}

As required by the Promise A+ specification, both the success and failure callbacks are microtasks. Since JavaScript in the browser cannot reach the engine's underlying microtask scheduling, we simply simulate it with setTimeout (which belongs to the macrotask category) and wrap the tasks to be executed in setTimeout. The resolve implementation does the same; strictly speaking, it is not a real microtask.

if (self.status === FULFILLED) {
  return bridgePromise = new MyPromise((resolve, reject) => {
    setTimeout(() => {
      // ...
    })
  })
}

if (self.status === REJECTED) {
  return bridgePromise = new MyPromise((resolve, reject) => {
    setTimeout(() => {
      // ...
    })
  })
}

OK, now that the then method is basically implemented, let's run the earlier test code again. It prints:

// contents of foo.js
// contents of bar.js

As you can see, the chain call has been successfully completed.

Error capture and bubbling mechanism analysis

Now implement the catch method:

Promise.prototype.catch = function (onRejected) {
  return this.then(null, onRejected);
}

Okay, so catch is the syntactic sugar for then.

More important than implementation is understanding the error bubbling mechanism, which means that once an error occurs in the middle, it can be caught with a catch at the end.

It’s not hard to look back at how Promise works, with a key line of code:

// in then
onRejected = typeof onRejected === "function" ? onRejected : error => { throw error };

Once an error occurs while a Promise is in the PENDING state, its state inevitably changes to failed, and the onRejected function is executed. The default onRejected simply rethrows the error, so the next Promise in the chain also changes to failed and its onRejected executes in turn... This continues until the error is caught by a catch, at which point the bubbling stops.

This is the Promise error bubble mechanism.
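The same behavior with a native Promise, as a quick check: the rejection skips every then that has no onRejected and lands in the final catch.

Promise.resolve()
  .then(() => { throw new Error('boom') })
  .then(() => console.log('skipped'))      // no onRejected here, so it is skipped
  .then(() => console.log('also skipped'))
  .catch(err => console.log('caught:', err.message)) // prints: caught: boom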

So far we have seen Promise's three key techniques: delayed callback binding, callback return value penetration, and error bubbling.

Implementing Promise's resolve, reject, and finally

Implementing Promise.resolve

There are three key elements to implementing the Resolve static method:

    1. If the parameter is a Promise, return it directly.
    2. If the parameter is a thenable object, the returned Promise follows that object and adopts its final state as its own.
    3. Otherwise, return a Promise that is fulfilled with that value.

The concrete implementation is as follows:

Promise.resolve = (param) => {
  if (param instanceof Promise) return param;
  return new Promise((resolve, reject) => {
    if (param && param.then && typeof param.then === 'function') {
      // When param fulfills, resolve is called and the new Promise fulfills too; likewise for rejection
      param.then(resolve, reject);
    } else {
      resolve(param);
    }
  })
}
Implementing Promise.reject

The argument passed to Promise.reject is passed down as the rejection reason, as follows:

Promise.reject = function (reason) {
    return new Promise((resolve, reject) = > {
        reject(reason);
    });
}
Implementing Promise.prototype.finally

Regardless of whether the current Promise succeeds or fails, a call to finally executes the function passed in finally and passes the value down unchanged.

Promise.prototype.finally = function(callback) {
  return this.then(value => {
    return Promise.resolve(callback()).then(() => {
      return value;
    })
  }, error => {
    return Promise.resolve(callback()).then(() => {
      throw error;
    })
  })
}

Implementing Promise's all and race

Implementing Promise.all

For the all method, the following core functions need to be done:

  1. If the passed argument is an empty iterable, resolve directly.
  2. If any of the passed Promises fails, Promise.all returns a rejected Promise.
  3. On success, the result of Promise.all is an array of all the fulfilled values.

The concrete implementation is as follows:

Promise.all = function(promises) {
  return new Promise((resolve, reject) => {
    let result = [];
    let index = 0;
    let len = promises.length;
    if (len === 0) {
      resolve(result);
      return;
    }

    for (let i = 0; i < len; i++) {
      // Why not just promises[i].then? Because promises[i] may not be a Promise
      Promise.resolve(promises[i]).then(data => {
        result[i] = data;
        index++;
        if (index === len) resolve(result);
      }).catch(err => {
        reject(err);
      })
    }
  })
}
Implementing Promise.race

The implementation of race is relatively simple: once a promise is completed, resolve and stop execution.

Promise.race = function(promises) {
  return new Promise((resolve, reject) => {
    let len = promises.length;
    if (len === 0) return;
    for (let i = 0; i < len; i++) {
      Promise.resolve(promises[i]).then(data => {
        resolve(data);
      }).catch(err => {
        reject(err);
      })
    }
  })
}

At this point, we have implemented a fairly complete Promise, dismantled and built up step by step from the principle to the details. I hope that, knowing the highlights of Promise's design, you can also implement a Promise by hand and take both your understanding and your hands-on skills to the next level!

The problem with Promise

Promise implements chained calls, but each step still needs to return a value and call then on it; if there are many calls, the chain becomes very long. Is there an even more linear, sequential coding style that solves this problem? Of course there is: Generator.

4. Generator

What is a generator

Normally all functions run until completion, in other words, once a function has started, nothing will interrupt it until it finishes.

The Generator function is an asynchronous programming solution provided by ES6. It is called Generator and its syntactic behavior is completely different from that of traditional functions.

A generator is a "function" marked with an asterisk (note: it is not quite an ordinary function) whose execution can be paused and resumed using the yield keyword.

Also, each pause/resume loop during execution provides an opportunity for two-way information transfer.

The execution flow of a generator

function* genDemo() {
    console.log("Executing the first segment")
    const yield1 = yield 'generator 1'

    console.log("Executing the second segment")
    console.log(yield1, 'yield1')
    const yield2 = yield 'generator 2'

    console.log("Execution finished")
    console.log(yield2, 'yield2')
    return 'generator 3'
}

console.log('main 0')
let gen = genDemo()
console.log(gen.next().value)
console.log('main 1')
console.log(gen.next('next 1').value)
console.log('main 2')
console.log(gen.next('next 2').value)
console.log('main 3')

// main 0
// Executing the first segment
// generator 1
// main 1
// Executing the second segment
// next 1 yield1
// generator 2
// main 2
// Execution finished
// next 2 yield2
// generator 3
// main 3

As you can see, there are several key points to the execution of a generator:

  1. After calling genDemo(), none of the statements inside the generator body are executed yet.
  2. After calling gen.next(), the program runs until it hits a yield, where it pauses.
  3. The next method actually returns an object with two properties: value and done. value is the result of the expression after the current yield, and done indicates whether execution is complete; once a return is hit, done changes from false to true.
  4. Note that calling next resumes the execution of gen, and the argument passed to next becomes the value of the yield expression where execution was paused.

How to use generator functions:

  1. Execute a piece of code inside a generator function, and if the yield keyword is encountered, the JavaScript engine returns what follows the keyword externally and suspends execution of the function.
  2. External functions can resume their execution through the next method.

Why can generators pause and resume? First, understand the concept of coroutines

Generator implementation mechanism – coroutine

What is a coroutine

Coroutines are more lightweight than threads. In the environment of threads, a thread can have more than one coroutine, which can be understood as a task in a thread. Just as a process can have multiple threads, a thread can have multiple coroutines. Most importantly, coroutines are not controlled by the operating system, but by program code. The benefit of this is that the performance is greatly improved and the resources are not consumed like thread switching.

The execution flow of coroutines

So you might ask, well, isn’t JS executed in a single thread, and can all these coroutines be executed together?

The answer is no. So what does the execution flow of multiple coroutines look like? A thread can execute only one coroutine at a time. For example, if coroutine A is currently executing and there is also a coroutine B, then to run B's task, control of the JS thread must be handed from coroutine A to coroutine B; B is then executing and A is effectively suspended. Likewise, coroutine B can be started from within coroutine A, in which case we call A the parent coroutine of B.

To better understand how coroutines are executed, use the above code to draw the following “coroutine execution flowchart”, which can be analyzed against the code:

The diagram shows the four rules of coroutines:

  1. A coroutine gen is created by calling the generator function genDemo. After creation, the Gen coroutine is not executed immediately.
  2. To get the Gen coroutine to execute, call Gen.next.
  3. While the coroutine is executing, you can use the yield keyword to suspend the execution of the Gen coroutine and return the main information to the parent coroutine.
  4. If, during execution, the coroutine encounters a return keyword, the JavaScript engine terminates the current coroutine and returns the content following the return to the parent coroutine.

The parent coroutine has its own call stack, and the Gen coroutine has its own call stack. How does V8 switch to the parent coroutine’s call stack when the Gen coroutine yields control to the parent? When the parent coroutine restores the Gen coroutine via Gen.next, how do you switch the call stack of the Gen coroutine?

To figure this out, you need to focus on two things.

  1. First point: the gen coroutine and the parent coroutine execute interleaved on the main thread, not concurrently, and switching between them is done via yield and gen.next.
  2. Second point: When yield is invoked in a Gen coroutine, the JavaScript engine saves the current stack information for the Gen coroutine and restores the stack information for the parent coroutine. Similarly, when gen.next is executed in a parent coroutine, the JavaScript engine saves the parent coroutine’s call stack information and restores the Gen coroutine’s call stack information.

To get a sense of how parent coroutines and gen coroutines switch call stacks, you can look at the following figure:

Now that you’ve figured out how coroutines work, in fact, in JavaScript generators are one way to implement coroutines, and you’ll understand what generators are.

You might also wonder: doesn't this generator just pause and resume, pause and resume? What does it have to do with asynchrony? And if you have to call next manually every time, is there a way to make it run through automatically? We'll take a closer look at these problems next.

Asynchronous application of Generator

There are really two problems:

  1. How does a Generator get connected to asynchronous operations?
  2. How do we make a Generator run in order, automatically?
Thunk function

The Thunk function is a way of automatically executing Generator functions.

In the JavaScript language, the Thunk function replaces a multi-argument function with a single-argument function that takes only a callback function as an argument. Let’s look at the following code:

// a is a parameter, callback is the callback function
function foo(a, callback) {
    callback(a)
}
// We can call foo like this
foo(1, res => {
    console.log(res, 'res1')
})

Let’s rewrite foo as follows

function thunkFoo(a) {
    return function(callback) {
        callback(a)
    }
}

let foo2 = thunkFoo(2)
foo2(res => console.log(res, 'res2'))

We call thunkFoo to generate foo2, and then call foo2. In other words, we have turned foo, a function with two arguments, into a single-argument function that takes only a callback. That is the thunk function. This may look like extra complexity, but the complexity is only superficial: we accept a little complexity here to simplify the much more complicated things that come later.

The Generator + Thunk version of asynchrony

Using file operations (in node.js, same below) as an example, let’s see how asynchronous operations apply to generators.

const fs = require('fs')

// Normal version of readFile (multi-argument version)
// fs.readFile(fileName, options, callback);

// Thunk version of readFile (single-argument version)
var readFileThunk = function (fileName) {
    return function (callback) {
        return fs.readFile(fileName, 'utf-8', callback);
    };
};

readFileThunk is a thunk function. A central part of any asynchronous operation is binding its callback, and the thunk function does exactly that in two steps: the file name is passed in first, producing a function customized for that file; a callback is then passed into that function and becomes the callback of the asynchronous operation. This is how the Generator gets connected to asynchrony.

Then we do the following operations:

const gen = function* () {
    const foo = yield readFileThunk('foo.js')
    console.log(foo)
    const bar = yield readFileThunk('bar.js')
    console.log(bar)
}

Then we let it finish:

let g = gen();
// Step 1: the generator starts out paused, so we call next to start execution.
// next() returns an object whose value is the result of the yield: the thunk-generated
// function, which expects a callback.
g.next().value((err, data1) => {
  // Step 2: we now have the contents of foo.js; call next with it so execution continues.
  // Again, value is a function that takes a callback.
  g.next(data1).value((err, data2) => {
    // When g.next(data1) runs in the parent coroutine, data1 is assigned to foo inside
    // the gen coroutine; likewise data2 is assigned to bar here.
    g.next(data2);
  })
})
// Print the result
// contents of foo.js
// contents of bar.js

If there were more tasks, there would be many more layers of nesting, which is unworkable. So we encapsulate the execution code:

function run(gen) {
  const next = (err, data) => {
    let res = gen.next(data);
    if (res.done) return;
    res.value(next);
  }
  next();
}
// Note: g here should be a freshly created iterator, i.e. let g = gen();
run(g);
// Print the result
// contents of foo.js
// contents of bar.js

Running it again prints the correct result. The run function above is an automatic executor for the Generator function. The inner next function is the Thunk callback: it moves the Generator one step forward (the gen.next call), then checks whether the Generator has finished (the res.done property). If it hasn't, next is passed into the Thunk function (res.value); otherwise the executor exits.

This is how a Thunk-based executor runs a Generator function's asynchronous operations automatically and in sequence.

The Generator + Promise version of asynchrony

Now take a look at Generator + Promise asynchrony:

const fs = require('fs')

const readFilePromise = (filename) => {
    return new Promise((resolve, reject) => {
        fs.readFile(filename, 'utf-8', (err, data) => {
            if (err) {
                reject(err)
            } else {
                resolve(data)
            }
        })
    })
}

const gen = function* () {
    const foo = yield readFilePromise('foo.js')
    console.log(foo)
    const bar = yield readFilePromise('bar.js')
    console.log(bar)
}

The code executed is as follows:

let g = gen()
function getGenPromise(gen, data) {
    return gen.next(data).value
}
getGenPromise(g).then(data1 => {
    return getGenPromise(g, data1)
}).then(data2 => {
    return getGenPromise(g, data2)
})
// Print the result
// contents of foo.js
// contents of bar.js

Similarly, we can encapsulate code that executes Generator:

function run(g) {
    const next = (data) => {
        let res = g.next(data);
        if (res.done) return;
        res.value.then(data => {
            next(data);
        })
    }
    next();
}

run(g)
// Print the result
// contents of foo.js
// contents of bar.js

It also outputs the correct results, and the code is very concise. If you want to follow the recursive calls in detail, trace them against the chained-call example above.

Co library

We have now written executors that drive Generator-based asynchronous operations with both Thunk and Promise, but mature toolkits already exist, such as the well-known co library. Its core principle is exactly what we just wrote (essentially the Promise-based executor), with the source code additionally handling various edge cases. It is very simple to use:

const co = require('co');
let g = gen();
co(g).then(res => {
  console.log(res);
})

With co driving the Generator, all of the Generator's work gets done in just a few lines of code.

The problem of the Generator

With the Generator solution the code looks more linear and sequential, close to synchronous code, but the problem is that we still need an executor to run it. Is it possible to write asynchronous code in a synchronous style without an executor at all? That is exactly what async + await provides.

5. Async+await

What is an async function

The ES2017 standard introduces async functions to make asynchronous operations more convenient.

What is an async function? In short, it is syntactic sugar over Generator functions.

There is a Generator function that reads two files in turn.

const fs = require('fs')

const readFilePromise = (filename) => {
    return new Promise((resolve, reject) => {
        fs.readFile(filename, 'utf-8', (err, data) => {
            if (err) {
                reject(err)
            } else {
                resolve(data)
            }
        })
    })
}

const gen = function* () {
    const foo = yield readFilePromise('foo.js')
    console.log(foo)
    const bar = yield readFilePromise('bar.js')
    console.log(bar)
}

The function gen in the above code can be written as async, as follows.

const asyncReadFile = async function () {
    const foo = await readFilePromise('foo.js')
    console.log(foo)
    const bar = await readFilePromise('bar.js')
    console.log(bar)
}

A comparison shows that an async function simply replaces the asterisk (*) of a Generator function with async, yields with await, and nothing more.

The improvements of async over Generator are shown in the following four aspects.

(1) Built-in executor.

Generator functions must rely on an executor (hence the co module), while async functions come with their own executor. In other words, an async function is executed exactly like an ordinary function, with a single line:

asyncReadFile();

The above code calls the asyncReadFile function, which then automatically executes and outputs the final result. This is not at all like a Generator function where you need to call the next method or use the CO module to actually execute and get the final result.

(2) Better semantics.

async and await are semantically clearer than the asterisk and yield. async indicates that the function contains asynchronous operations, and await indicates that the expression following it needs to be waited for.

(3) wider applicability.

By the co module's convention, yield can only be followed by a Thunk function or a Promise object, while the await command can be followed by a Promise object or a value of a primitive type (number, string, boolean, etc., which is automatically converted into an immediately resolved Promise).

(4) Return the Promise.

Async functions return a Promise object, which is much more convenient than Generator functions returning an Iterator. You can specify what to do next using the then method.

Further, an async function can be seen as multiple asynchronous operations wrapped into a single Promise object, with the await command acting as syntactic sugar for the internal then.
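As a rough illustration of that equivalence (a sketch only; the engine's actual transformation involves the coroutine machinery described below), the asyncReadFile function above behaves much like a hand-written then chain:

// Roughly equivalent to asyncReadFile, written with explicit then calls
function readFileWithThen() {
    return readFilePromise('foo.js').then(foo => {
        console.log(foo)
        return readFilePromise('bar.js')
    }).then(bar => {
        console.log(bar)
    })
}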

async/await was introduced in ES2017 and lets us get rid of executors and hand-written generators entirely, producing more intuitive and concise code. The magic behind async/await is Promises and generators and, at a lower level, microtasks and coroutines. To understand how it works, we need to analyze async and await separately.

Async

According to MDN definition, async is a function that executes asynchronously and implicitly returns a Promise as a result.

To understand async functions, you need to focus on two words: asynchronous execution and implicitly returning promises.

Let’s take a look at how to implicitly return a Promise. You can refer to the following code:

async function foo() {
    return 'foo'
}
console.log(foo())  // Promise {<resolved>: foo}

When we execute this code, we can see that calling foo, declared with async, returns a Promise object in the resolved state. That is what 'implicitly returns a Promise' means.

Await

Now let’s see what await does.


async function foo() {
    console.log(1)
    let a = await 100
    console.log(a)
    console.log(2)
}
console.log(0)
foo()
console.log(3)

Let’s take a look at the overall execution flow chart of this code from the perspective of coroutines:

Combined with the above figure, let’s analyze the execution process of async/await.

First, execute the console.log(0) statement and print out a 0.

This is followed by executing foo, which is flagged by async, so when entering foo, the JavaScript engine saves the current call stack and so on, and then executes console.log(1) in foo, printing out 1.

Next, execution reaches the await 100 statement inside foo. This is the focus of our analysis, because the JavaScript engine does a great deal of work behind the scenes for this single statement. Let's break it down and see what JavaScript actually does.

When execution reaches await 100, a Promise object is created by default, roughly like this:

let promise_ = new Promise((resolve, reject) => {
  resolve(100)
})

During the creation of the promise_ object we can see that resolve is called inside the executor function; when resolve is called, the JavaScript engine pushes that task onto the microtask queue.

The JavaScript engine then suspends execution of the current coroutine, transfers control of the main thread to the parent coroutine, and returns the promise_ object to the parent coroutine.

Now that control of the main thread has been handed to the parent coroutine, one thing the parent coroutine does is call promise_.then to monitor the state of the promise. When the promise's state becomes fulfilled (once resolve(100) has run), the callback registered in promise_.then is added to the microtask queue.

The parent coroutine then continues: it executes console.log(3) and prints 3. After that, the parent coroutine reaches the microtask checkpoint and drains the microtask queue (where resolve(100) is waiting), which triggers the callback registered in promise_.then, as follows:

promise_.then((value) => {
  // When this callback is activated,
  // hand control of the main thread back to the foo coroutine and pass the value to it
})

When the callback is activated, it hands control of the main thread back to the foo coroutine, along with the value. Once the foo coroutine resumes, it assigns the value to variable a, executes its remaining statements (printing 100 and then 2), and finally gives control back to the parent coroutine.

The final output is as follows

//  0
//  1
//  3
//  100
//  2

To sum up, async/await uses coroutines and Promises to let us write asynchronous code in a synchronous style, with Generator being one implementation of coroutines. The syntax is simple, but the engine does a lot of work behind the scenes, and we have now taken that work apart piece by piece. Code written with async/await is also more elegant and readable: compared with chaining then calls its intent is clearer, and compared with co + Generator it performs better and is easier to pick up, which makes it a worthy final form of JS asynchronous programming!

What is the problem with forEach using await? How to solve this problem?

The problem

Problem: For asynchronous code, forEach does not guarantee sequential execution.

Here’s an example:

async function test() {
    let arr = [4, 2, 1]
    arr.forEach(async item => {
        const res = await handle(item)
        console.log(res)
    })
    console.log('the end')
}

function handle(x) {
    return new Promise((resolve, reject) => {
        setTimeout(() => {
            resolve(x)
        }, 1000 * x)
    })
}

test()

The expected results are:

4
2
1
the end

But it actually prints:

the end
1
2
4
The reason

Why is that? I think it’s worth taking a look at the underlying implementation of forEach.

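The core of Array.prototype.forEach behaves roughly like the simplified sketch below (not the spec or engine implementation; sparse arrays and the thisArg parameter are ignored):

// A simplified sketch of forEach
Array.prototype.myForEach = function (callback) {
  for (let i = 0; i < this.length; i++) {
    // The callback is simply invoked and its return value is discarded.
    // An async callback returns a Promise, which forEach never awaits,
    // so all the asynchronous tasks are fired off at once.
    callback(this[i], i, this)
  }
}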

As you can see, forEach simply takes the callback and invokes it directly, so it cannot guarantee the order in which the asynchronous tasks complete. For example, if a later task is shorter, its resolve will push its callback onto the microtask queue earlier, and it may run before an earlier task.

The solution

How to solve this problem?

We can solve it easily with for...of.

async function test() {
  let arr = [4, 2, 1]
  for (const item of arr) {
    const res = await handle(item)
    console.log(res)
  }
  console.log('the end')
}
Solution principle: Iterator

Okay, this seems like a simple problem to solve, but have you ever wondered why this works?

In fact, for...of does not traverse in the crude take-it-and-run-it way that forEach does; it uses a special traversal mechanism: iterators.

First of all, arrays are an iterable data type. So what counts as an iterable data type?

Any data type that natively has the [Symbol.iterator] property is iterable, such as arrays, array-likes (e.g. arguments, NodeList), Set, and Map.

An iterator is an ordered, continuous, pull-based organization for consuming data.

Iterables can be traversed by iterators.

let arr = [4, 2, 1];
// This is the iterator
let iterator = arr[Symbol.iterator]();
console.log(iterator.next());
console.log(iterator.next());
console.log(iterator.next());
console.log(iterator.next());


// {value: 4, done: false}
// {value: 2, done: false}
// {value: 1, done: false}
// {value: undefined, done: true}

Therefore, our code can be organized like this:

async function test() {
  let arr = [4, 2, 1]
  let iterator = arr[Symbol.iterator]();
  let res = iterator.next();
  while (!res.done) {
    let value = res.value;
    console.log(value);
    await handle(value);
    res = iterator.next();
  }
  console.log('the end')
}
// 4
// 2
// 1
// the end

The tasks now run in sequence! In fact, the for...of loop is syntactic sugar for exactly this code.

Relearning generators

Go back to the code for iterator over the array [4,2,1].

let arr = [4, 2, 1];
// The iterator
let iterator = arr[Symbol.iterator]();
console.log(iterator.next());
console.log(iterator.next());
console.log(iterator.next());
console.log(iterator.next());

// {value: 4, done: false}
// {value: 2, done: false}
// {value: 1, done: false}
// {value: undefined, done: true}

The return value has value and done attributes, and a generator's next method can also be called and returns exactly the same data structure!

Yes, the generator object is itself an iterator, and the generator is driven through the iterator protocol.
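A quick check (a small illustrative snippet, not from the original example):

function* numbers() {
  yield 1
  yield 2
}
const it = numbers()
console.log(it[Symbol.iterator]() === it) // true: the generator object is its own iterator
console.log(it.next()) // { value: 1, done: false }
console.log(it.next()) // { value: 2, done: false }
console.log(it.next()) // { value: undefined, done: true }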

In fact, this also explains why we can use generators to perform asynchronous tasks sequentially.

Since the generator object is an iterator (and iterable), can it be traversed with for...of?

Of course. Let's write a simple Fibonacci sequence (values up to 50):

function* fibonacci() {
  let [prev, cur] = [0, 1];
  console.log(cur);
  while (true) {
    [prev, cur] = [cur, prev + cur];
    yield cur;
  }
}

for (let item of fibonacci()) {
  if (item > 50) break;
  console.log(item);
}
// 1
// 1
// 2
// 3
// 5
// 8
// 13
// 21
// 34

That's the beauty of iterators, and along the way we've gained a deeper understanding of our old acquaintance, the Generator.

14 Write call, apply and bind by hand

1. The differences and uses of call, apply and bind

The differences

The first argument to apply specifies what this points to inside the function body. The second argument is an indexed collection, which can be an array or an array-like object; apply passes its elements as arguments to the called function.

The first argument to call also specifies what this points to inside the function body; the remaining arguments are passed to the called function one by one.

The first argument to bind likewise specifies what this points to inside the function body. Its arguments can be passed in the same way as with call, and can even be supplied across multiple calls; bind returns a new function.

  • All three can change what this points to inside a function, letting one object borrow another object's methods
  • call and apply execute the function immediately; bind does not, it returns a function
  • call and bind can receive multiple parameters; apply only accepts two, the second being an array
  • bind's arguments can be supplied in multiple passes

Usage

Changing what this points to inside a function

let obj = {
    c: 'c'
}

function foo(a, b) {
    return a + b + this.c
}

console.log(foo.call(obj, 'a', 'b'))    // 'abc': arguments passed one by one, executes immediately
console.log(foo.apply(obj, ['a', 'b'])) // 'abc': second argument is an array, executes immediately
console.log(foo.bind(obj, 'a')('b'))    // 'abc': bind can take arguments across calls and returns a function to execute later
console.log(foo.bind(obj, 'a', 'b')())  // 'abc'

Borrowing methods from other objects

// Use Array's concat method to convert the array-like arguments into a real array
function bar(a, b) {
    let arr = Array.prototype.concat.apply([], arguments)
    console.log(arr.pop()) // 2
}
bar(1, 2)

2. Write a call

Function.prototype.myCall = function (context) {
    // context is optional; if not provided, default to window
    context = context || window
    const fn = Symbol('fn')

    // Change this: attach the calling function (this refers to the function that invoked myCall)
    // to context under a temporary Symbol key
    context[fn] = this

    // call can pass any number of arguments to the function, so take everything after the first
    const args = [...arguments].slice(1)

    // Execute the function, clean up the temporary property, and return the result
    const result = context[fn](...args)
    delete context[fn]
    return result
}
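A quick check that myCall behaves like the native call (a small usage sketch reusing the obj/foo example from above):

let obj = { c: 'c' }
function foo(a, b) {
    return a + b + this.c
}
console.log(foo.myCall(obj, 'a', 'b')) // 'abc', same as foo.call(obj, 'a', 'b')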

3. Write an apply

Function.prototype.myApply = function (context) {

    context = context || window
    const fn = Symbol('fn')
    // Change the this pointer via a temporary Symbol key
    context[fn] = this

    // Pass the parameters and execute the function
    let result
    // Check whether an array of arguments was provided
    if (arguments[1]) {
        result = context[fn](...arguments[1])
    } else {
        result = context[fn]()
    }
    delete context[fn]
    return result
}
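And the same kind of check for myApply (illustrative usage):

let obj = { c: 'c' }
function foo(a, b) {
    return a + b + this.c
}
console.log(foo.myApply(obj, ['a', 'b'])) // 'abc', same as foo.apply(obj, ['a', 'b'])
console.log(foo.myApply(obj))             // 'undefinedundefinedc' when no argument array is given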

4. Write a bind

Function.prototype.myBind = function (context) {
    if (typeof this !== 'function') {
        throw new TypeError('Error')
    }
    const _this = this
    // Arguments passed to bind (excluding the context)
    const args = [...arguments].slice(1)
    // Return a function
    return function F() {
        // Since we return a function, it can be called with new F(), so we need to check
        if (this instanceof F) {
            // Called with new: ignore the bound context, merge bind's args with the call's arguments
            return new _this(...args, ...arguments)
        }
        // Direct call: merge the args from bind with the arguments of this call
        return _this.apply(context, args.concat(...arguments))
    }
}

Here is an analysis of the implementation:

  • The first few steps are similar to the previous implementations, so I won't repeat them
  • bind returns a function, and a function can be invoked in two ways: called directly, or called with new. Let's look at the direct call first
  • For a direct call we use apply, but the arguments need attention: bind allows code like f.bind(obj, 1)(2), so we have to concatenate the arguments from both calls, hence args.concat(...arguments)
  • Finally, the new case: as we learned earlier, this for new must not be changed in any way, so in that case we ignore the bound context (the sketch below exercises both paths)
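A quick exercise of both call paths of myBind, using a hypothetical example function:

let obj = { c: 'c' }
function foo(a, b) {
    this.sum = a + b + (this.c || '')
    return this.sum
}
// Direct call: this is bound to obj, arguments are merged across the two calls
console.log(foo.myBind(obj, 'a')('b'))           // 'abc'
// Called with new: the bound obj is ignored, this is the newly created instance
const instance = new (foo.myBind(obj, 'a'))('b')
console.log(instance.sum)                        // 'ab' (no c, because this is not obj)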

15. How to simulate the effect of new

What does new do when it is called:

  1. It gives the instance access to the properties on the constructor's prototype (the prototype chain).
  2. It gives the instance the properties assigned inside the constructor (its own properties).
  3. If the constructor returns a result that is not a reference type, the newly created object is returned instead.

So how do you achieve the effect of new? Calling the constructor in this way actually goes through the following four steps:

(1) Create a new object;
(2) Point the new object's prototype to the constructor's prototype object;
(3) Bind the constructor's this to the new object (so this refers to the new object) and execute the constructor's code (adding properties to the new object);
(4) Return the new object.

function myNew(fn, ...args) {
    // Only functions can be called with new
    if (typeof fn !== "function") {
        throw new Error('TypeError')
    }
    // Steps (1)(2): create an object whose prototype is fn.prototype
    const newObj = Object.create(fn.prototype);
    // Step (3): bind fn's this to the new object and run the constructor so the instance gets its own properties
    const result = fn.apply(newObj, args);
    // Step (4): return the constructor's result if it is an object or function, otherwise return the new object
    return result && (typeof result === "object" || typeof result === "function") ? result : newObj;
}

Object.create gives the instance the properties inherited from the constructor's prototype; fn.apply gives the instance the properties assigned inside the constructor.
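A quick check of myNew (Person here is a hypothetical constructor used only for illustration):

function Person(name) {
    this.name = name
}
Person.prototype.sayHi = function () {
    return 'Hi, ' + this.name
}

const p = myNew(Person, 'foo')
console.log(p.name)              // 'foo': property assigned inside the constructor
console.log(p.sayHi())           // 'Hi, foo': method found on the prototype chain
console.log(p instanceof Person) // true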

16 Why is arguments not an array for a function? How do I convert it to an array?

arguments does not have the array methods on its prototype; it is just another kind of object whose property keys run 0, 1, 2, ... plus callee and length at the end. We call such objects array-like (class arrays).
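A quick look at what arguments actually is (an illustrative snippet):

function demo(a, b) {
    console.log(Array.isArray(arguments))       // false: it is only array-like
    console.log(arguments[0], arguments.length) // 1 2
    console.log(typeof arguments.map)           // 'undefined': array methods are not available
}
demo(1, 2)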

Common array-likes include:

    1. The HTMLCollection obtained with getElementsByTagName / getElementsByClassName
    2. The NodeList obtained with querySelectorAll

That means a lot of array methods cannot be used on them. If we need to convert them into real arrays, what are our options?

1. Spread operator

// [...arrayLike]
function foo(a, b) {
    let arr = [...arguments]
    console.log(arr.pop()) // 2
}
foo(1, 2)

2. Array.from

The Array.from() method creates a new, shallow-copied Array instance from an array-like or iterable object.

function foo(a, b) {
    let arr = Array.from(arguments)
    console.log(arr.pop()) // 2
}
foo(1, 2)

3. Array.prototype.slice

The slice() method returns a new array object that is a shallow copy of a portion of the original array selected from begin to end (including begin but not end). The original array is not changed.

function foo(a, b) {
    let arr = Array.prototype.slice.call(arguments)
    console.log(arr.pop()) // 2
}
foo(1, 2)

4. Array.apply

function foo(a, b) {
    let arr = Array.apply(null, arguments)
    console.log(arr.pop()) // 2
}
foo(1, 2)

5. Array.prototype.concat

The concat() method is used to merge two or more arrays. This method does not change an existing array, but returns a new array.

function foo(a, b) {
    let arr = Array.prototype.concat.apply([], arguments)
    console.log(arr.pop()) // 2
}
foo(1, 2)

17 Array flattening

A multidimensional array is converted to a one-dimensional array as follows

let arr = [1, [2, [3, [4, 5]]], 6]; // -> [1, 2, 3, 4, 5, 6]
let str = JSON.stringify(arr);

1. Flat method in ES6

// flat(n) flattens n levels of nesting; pass Infinity to flatten to one dimension no matter how deep
let flatArr1 = arr.flat(1)       // flattens one level only
let flatArr = arr.flat(Infinity) // fully flattened

2. replace+split

// Note: the resulting elements are strings
let flatArr = str.replace(/(\[|\])/g, '').split(',')

3. replace+JSON.parse

// Remove all "[" and "]" from the JSON string, wrap it in a single pair of brackets,
// then parse it back into a one-dimensional array
str = str.replace(/(\[|\])/g, '')
str = '[' + str + ']'
let flatArr = JSON.parse(str)

4. For loop + recursion

let result = []
let arrFlat = function (arr) {
    for (var i = 0; i < arr.length; i++) {
        let item = arr[i]
        if (Array.isArray(arr[i])) {
            arrFlat(item)
        } else {
            result.push(item)
        }
    }
    return result
}
let flatArr = arrFlat(arr)

5. Reduce function iteration + recursion

// Instead of a for loop plus recursion, here we use reduce
// reduce is also a loop in nature: pre is the accumulator, cur is the current element
function arrFlat(arr) {
    return arr.reduce((pre, cur) => {
        return pre.concat(Array.isArray(cur) ? arrFlat(cur) : cur)
    }, [])
}
let flatArr = arrFlat(arr)

6. Spread operator

// Keep spreading one level at a time while any element is still an array
while (arr.some(Array.isArray)) {
    arr = [].concat(...arr)
}
console.log(arr)

18 Shallow copy implementation mode

We know that copying a primitive type copies its value, while copying a reference type copies only the reference; the value itself is an object. So how do we make a real copy of an object?

Take a look at the following sample code

let arr = [1, 2, {val: 3}]
let copyArr = arr.slice()

copyArr[1] = 3     // the original array's 2 does not change
copyArr[2].val = 4 // the original array's val does change
console.log(JSON.stringify(arr), 'arr')         // [1,2,{"val":4}]
console.log(JSON.stringify(copyArr), 'copyArr') // [1,3,{"val":4}]

When we change the element 2 through copyArr, the original arr does not change; but when we change val through copyArr, the val in the original arr changes too. This is what a shallow copy is: it copies only one layer of the object. If objects are nested, a shallow copy cannot help. Fortunately, deep copy is designed to solve exactly this problem: it handles arbitrarily deep nesting and produces a complete copy. Let's first look at how to implement a shallow copy.

1. Shallow copy of array

let arr = [1, 2, {val: 3}]
// 1. concat
// let copyArr = arr.concat()

// 2. spread operator
// let copyArr = [...arr]

// 3. slice
let copyArr = arr.slice()

copyArr[1] = 3
copyArr[2].val = 4

console.log(JSON.stringify(arr), 'arr')         // [1,2,{"val":4}]
console.log(JSON.stringify(copyArr), 'copyArr') // [1,3,{"val":4}]

2. Shallow copy of objects

let obj = {name: 'foo', atr: {age: 18, sex: 'man'}}

// 1. Object.assign
// let copyObj = Object.assign({}, obj, {name: 'bar'})

// 2. spread operator
let copyObj = {...obj}

copyObj.name = 'foo2'
copyObj.atr.age = 28

console.log(JSON.stringify(obj), 'obj')         // {"name":"foo","atr":{"age":28,"sex":"man"}}
console.log(JSON.stringify(copyObj), 'copyObj') // {"name":"foo2","atr":{"age":28,"sex":"man"}}

3. Manual implementation

const shallowClone = (target) => {
  if (typeof target === 'object' && target !== null) {
    const cloneTarget = Array.isArray(target) ? [] : {};
    for (let prop in target) {
      // Copy only the object's own properties
      if (target.hasOwnProperty(prop)) {
        cloneTarget[prop] = target[prop];
      }
    }
    return cloneTarget;
  } else {
    return target;
  }
}
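A quick check of the one-layer behaviour (an illustrative usage sketch of the shallowClone above):

const src = { a: 1, nested: { b: 2 } }
const copy = shallowClone(src)

copy.a = 10
copy.nested.b = 20

console.log(src.a)        // 1: top-level values were truly copied
console.log(src.nested.b) // 20: the nested object is still shared with the copy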

19 Implementation mode of deep copy

1. Simple version and existing problems

JSON.parse(JSON.stringify(target));

This API covers most everyday scenarios, and yes, it is the first thing that comes to mind when deep copy is mentioned. But in stricter scenarios this approach has serious pitfalls. The problems are as follows:

    1. It cannot handle circular references. Here's an example:

let obj = {
    foo: 'foo',
    bar: function () {
        return this.foo
    }
}
// obj.target = obj   // uncomment to create a circular reference

Once obj references itself (uncomment obj.target = obj above), copying it this way fails: JSON.stringify throws on circular structures, and a naive recursive copy would recurse forever and overflow the call stack.

    2. It cannot copy some special objects, such as RegExp, Date, Set, Map, etc.

const set = new Set([1, 2, 3, 4, 5, 5, 5])
const map = new Map([
    ['name', 'Joe'],
    ['title', 'Author']
])
let copySet = JSON.parse(JSON.stringify(set))
let copyMap = JSON.parse(JSON.stringify(map))
console.log(copySet, 'copySet') // {}
console.log(copyMap, 'copyMap') // {}
    3. It cannot copy functions (this is the key point).

let copyObj = JSON.parse(JSON.stringify(obj))
console.log(copyObj, 'copyObj') // {foo: "foo"}: the function bar has been dropped

Let’s start by writing a simple deep copy and work through the three problems step by step from there

const deepClone = (target) => {
  if (typeof target === 'object' && target !== null) {
    const cloneTarget = Array.isArray(target) ? [] : {};
    for (let prop in target) {
      if (target.hasOwnProperty(prop)) {
        cloneTarget[prop] = deepClone(target[prop]);
      }
    }
    return cloneTarget;
  } else {
    return target;
  }
}

2. Solve circular references

Now here’s the question:

let obj = {val: 2};
obj.target = obj;

deepClone(obj); // RangeError: Maximum call stack size exceeded

This is circular reference. How can we solve this problem?

The solution: create a WeakMap to record the objects that have already been copied; if an object has been copied before, return it directly.

As for why WeakMap is used instead of Map: a Map holds strong references to its keys, while WeakMap is a special kind of Map whose keys are weakly referenced (the keys must be objects; the values can be anything). A weakly referenced object can be garbage-collected at any time, whereas an object that is strongly referenced cannot be reclaimed as long as that reference exists. If we used a Map here, the map would keep a strong reference to every copied object, and their memory could not be freed until the program ends.

const isObject = (target) =>
  (typeof target === 'object' || typeof target === 'function') && target !== null;

const deepClone = (target, map = new WeakMap()) => {
  if (map.get(target))
    return target;

  if (isObject(target)) {
    map.set(target, true);
    const cloneTarget = Array.isArray(target) ? [] : {};
    for (let prop in target) {
      if (target.hasOwnProperty(prop)) {
        cloneTarget[prop] = deepClone(target[prop], map);
      }
    }
    return cloneTarget;
  } else {
    return target;
  }
}

Now try it:

const a = {val: 2};
a.target = a;
let newA = deepClone(a);
console.log(newA) // { val: 2, target: { val: 2, target: [Circular] } }

3. Copy special objects

Traversable objects

For special objects, we use the following methods to identify:

Object.prototype.toString.call(obj);

Let's first sort out the traversable objects:

'Map'
'Set'
'Array'
'Object'
'Arguments'

Ok, based on these different strings, we can successfully identify these objects.

const getType = obj => Object.prototype.toString.call(obj).slice(8, -1);

const mapTag = 'Map';
const setTag = 'Set';

const canTraverse = {
  'Map': true,
  'Set': true,
  'Array': true,
  'Object': true,
  'Arguments': true,
};

const deepClone = (target, map = new WeakMap()) => {
  if (!isObject(target)) return target;
  let type = getType(target);
  let cloneTarget;
  if (!canTraverse[type]) {
    // Handle objects that cannot be traversed (covered in the next part)
    return;
  } else {
    // This step is critical: it keeps the clone's prototype from being lost!
    let ctor = target.constructor;
    cloneTarget = new ctor();
  }

  if (map.get(target))
    return target;
  map.set(target, true);

  if (type === mapTag) {
    // Handle Map
    target.forEach((item, key) => {
      cloneTarget.set(key, deepClone(item, map));
    })
  }

  if (type === setTag) {
    // Handle Set
    target.forEach(item => {
      cloneTarget.add(deepClone(item, map));
    })
  }

  // Handle arrays and plain objects
  for (let prop in target) {
    if (target.hasOwnProperty(prop)) {
      cloneTarget[prop] = deepClone(target[prop], map);
    }
  }
  return cloneTarget;
}

Objects that cannot be traversed

const boolTag = 'Boolean';
const numberTag = 'Number';
const stringTag = 'String';
const symbolTag = 'Symbol';
const dateTag = 'Date';
const errorTag = 'Error';
const regexpTag = 'RegExp';
const funcTag = 'Function';

For non-traversable objects, different objects have different processing.

const handleRegExp = (target) => {
  const { source, flags } = target;
  return new target.constructor(source, flags);
}

const handleFunc = (target) => {
  // This is the key part, covered in the next section
}

const handleNotTraverse = (target, tag) => {
  const Ctor = target.constructor;
  switch (tag) {
    case boolTag:
      return new Object(Boolean.prototype.valueOf.call(target));
    case numberTag:
      return new Object(Number.prototype.valueOf.call(target));
    case stringTag:
      return new Object(String.prototype.valueOf.call(target));
    case symbolTag:
      return new Object(Symbol.prototype.valueOf.call(target));
    case errorTag:
    case dateTag:
      return new Ctor(target);
    case regexpTag:
      return handleRegExp(target);
    case funcTag:
      return handleFunc(target);
    default:
      return new Ctor(target);
  }
}

4. Copy functions

Functions are also objects, but they are so special that we separate them out.

Speaking of functions, there are two kinds in JS: ordinary functions and arrow functions. An ordinary function has a prototype and can be used as a constructor, while an arrow function has no prototype and cannot be constructed. So we only need to handle the ordinary-function case; for an arrow function we simply return the function itself.

So how do you distinguish between the two?

The answer is: use prototypes. There is no prototype for arrow functions.
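A quick confirmation (illustrative snippet):

const arrow = () => {}
function normal() {}

console.log(arrow.prototype)         // undefined: arrow functions have no prototype
console.log(typeof normal.prototype) // 'object': ordinary functions do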

The code is as follows:

const handleFunc = (func) => {
  // Arrow functions have no prototype: return them as-is
  if (!func.prototype) return func;
  const bodyReg = /(?<={)(.|\n)+(?=})/m;
  const paramReg = /(?<=\().+(?=\)\s+{)/;
  const funcString = func.toString();
  // Match the function parameters and the function body respectively
  const param = paramReg.exec(funcString);
  const body = bodyReg.exec(funcString);
  if (!body) return null;
  if (param) {
    const paramArr = param[0].split(',');
    return new Function(...paramArr, body[0]);
  } else {
    return new Function(body[0]);
  }
}

By now, our deep copy implementation is fairly complete.

5. Complete code

Deep copy of the full version

const getType = obj => Object.prototype.toString.call(obj).slice(8, -1);

const isObject = (target) =>
    (typeof target === 'object' || typeof target === 'function') && target !== null;

const canTraverse = {
    'Map': true,
    'Set': true,
    'Array': true,
    'Object': true,
    'Arguments': true,
};
const mapTag = 'Map';
const setTag = 'Set';
const boolTag = 'Boolean';
const numberTag = 'Number';
const stringTag = 'String';
const symbolTag = 'Symbol';
const dateTag = 'Date';
const errorTag = 'Error';
const regexpTag = 'RegExp';
const funcTag = 'Function';

const handleRegExp = (target) => {
    const {
        source,
        flags
    } = target;
    return new target.constructor(source, flags);
}

const handleFunc = (func) => {
    // Arrow functions have no prototype: return them as-is
    if (!func.prototype) return func;
    const bodyReg = /(?<={)(.|\n)+(?=})/m;
    const paramReg = /(?<=\().+(?=\)\s+{)/;
    const funcString = func.toString();
    // Match the function parameters and the function body respectively
    const param = paramReg.exec(funcString);
    const body = bodyReg.exec(funcString);
    if (!body) return null;
    if (param) {
        const paramArr = param[0].split(',');
        return new Function(...paramArr, body[0]);
    } else {
        return new Function(body[0]);
    }
}

const handleNotTraverse = (target, tag) => {
    const Ctor = target.constructor;
    switch (tag) {
        case boolTag:
            return new Object(Boolean.prototype.valueOf.call(target));
        case numberTag:
            return new Object(Number.prototype.valueOf.call(target));
        case stringTag:
            return new Object(String.prototype.valueOf.call(target));
        case symbolTag:
            return new Object(Symbol.prototype.valueOf.call(target));
        case errorTag:
        case dateTag:
            return new Ctor(target);
        case regexpTag:
            return handleRegExp(target);
        case funcTag:
            return handleFunc(target);
        default:
            return new Ctor(target);
    }
}

const deepClone = (target, map = new WeakMap()) => {
    if (!isObject(target)) return target;
    let type = getType(target);
    let cloneTarget;
    if (!canTraverse[type]) {
        // Handle objects that cannot be traversed
        return handleNotTraverse(target, type);
    } else {
        // This step is critical: it keeps the clone's prototype from being lost!
        let ctor = target.constructor;
        cloneTarget = new ctor();
    }

    if (map.get(target))
        return target;
    map.set(target, true);

    if (type === mapTag) {
        // Handle Map
        target.forEach((item, key) => {
            cloneTarget.set(key, deepClone(item, map));
        })
    }

    if (type === setTag) {
        // Handle Set
        target.forEach(item => {
            cloneTarget.add(deepClone(item, map));
        })
    }

    // Handle arrays and plain objects
    for (let prop in target) {
        if (target.hasOwnProperty(prop)) {
            cloneTarget[prop] = deepClone(target[prop], map);
        }
    }
    return cloneTarget;
}

The postscript

Of course, JS covers far more than this; I have only combed through part of it. After reviewing this JS series, my biggest takeaway is that knowledge is interconnected: much of it does not exist in isolation but links together piece by piece. Take the event loop: macrotasks, microtasks, and asynchronous techniques are all built on top of it. And if you want to understand a technology, it matters to understand how it evolved, because the iteration of technology is the process of continually solving the problems of the existing technology. By understanding the ins and outs of these techniques, I believe we also come to understand the language in our hands.

Know what it is, and know why it is so. The road of technical improvement is not a smooth one, and the road ahead is long; don't forget why you started, keep moving forward, and let's encourage each other. I hope the Knowing What JS series has inspired you.

References:

  • How Browsers Work and Practice (Geek Time)
  • Illustrated Google V8 (Geek Time)
  • JavaScript Advanced Programming (3rd Edition)
  • You Don't Know JS (middle volume)
