
We know that JavaScript provides a pretty good event loop mechanism, so let’s take a look at how the event loop works.

Event loop

According to MDN, JavaScript has a concurrency model based on an event loop, which is responsible for executing the code, collecting and processing events, and executing queued sub-tasks.

Let's take a closer look at what the event loop is responsible for:

  • Execute the code
  • Collect and process events
  • Execute the subtasks in the queue

That definition gives us three pieces of information, and each one raises a question:

  • How is the code executed, and where does it run?
  • How are events collected and processed? Is there a priority among them, for example, which runs first: a macrotask or a microtask?
  • How are the queued sub-tasks executed?

Let's analyze this using the code below:

```javascript
function foo(b) {
  let a = 10;
  return a + b + 11;
}

function bar(x) {
  let y = 3;
  return foo(x * y);
}

console.log(bar(7)); // returns 42
```

Stack

In JavaScript, function calls form a stack of frames. (These stack frames are unrelated to rendering frames, which on a 60 Hz display arrive roughly every 16.6 ms.)

For example, when bar is called, the first frame is created and pushed. This frame contains the parameters and local variables of bar.

When bar calls foo, a second frame is created and pushed on top of the first frame; it contains foo's arguments and local variables.

When foo completes and returns, the second frame is popped off the stack, leaving only bar's call frame.

When bar completes and returns, the first frame is popped as well and the stack is empty.
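As a small sketch, the snippet below reuses the foo/bar example and adds console.trace(), which prints the frames currently on the call stack at the point where it is called:

```javascript
// A sketch reusing the foo/bar example above: console.trace() prints the
// frames that are on the call stack at the moment it runs.
function foo(b) {
  console.trace('inside foo'); // stack at this point: foo -> bar -> (global)
  let a = 10;
  return a + b + 11;
}

function bar(x) {
  let y = 3;
  return foo(x * y);
}

console.log(bar(7)); // 42 -- by now both frames have been popped again
```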

Heap

A heap is a computer term for a large (usually unstructured) area of memory.

Objects are allocated in the heap. For example, when you store an object in a variable, the object gets a unique identifier in the heap, and the variable on the stack holds that identifier to determine which object it refers to. If multiple variables hold the same identifier, they reference the same object.
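A minimal sketch of this idea: two variables on the stack can hold the same identifier and therefore refer to one heap object.

```javascript
// Two stack variables holding the same reference to one heap object.
const a = { count: 1 }; // the object itself lives on the heap
const b = a;            // copies the reference (identifier), not the object

b.count = 2;
console.log(a.count);   // 2 -- both variables point at the same heap object
console.log(a === b);   // true -- identical references
```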

Queue

A JavaScript runtime contains a message queue of messages waiting to be processed. Each message has an associated callback function that is called to process it.

At some point during the event loop, the runtime starts with the oldest message in the queue. That message is removed from the queue, and its associated function is called with the message as an input parameter. As always, calling the function creates a new stack frame.

Processing of that function continues until the stack is empty again; the event loop then moves on to the next message in the queue.
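As a rough, simplified sketch (not the engine's real implementation), the queue behaviour can be imagined like this: each message carries a callback, and messages are taken in FIFO order and run to completion one at a time.

```javascript
// A toy model of the message queue: FIFO order, one callback at a time.
const messageQueue = [];

function enqueueMessage(callback) {
  messageQueue.push({ callback });
}

function drainQueueOnce() {
  while (messageQueue.length > 0) {
    const message = messageQueue.shift(); // oldest message first
    message.callback();                   // runs to completion before the next one
  }
}

enqueueMessage(() => console.log('first message'));
enqueueMessage(() => console.log('second message'));
drainQueueOnce(); // logs: "first message", then "second message"
```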

Run to completion

Each message is processed completely before any other message is processed.

This offers some nice properties when reasoning about your program, for example:

  • Whenever a function runs, it cannot be preempted; it runs entirely to completion before any other code can run and possibly modify the data the function is working with.

Of course, this model also has disadvantages:

  • When a message takes too long to process, the Web application can’t handle user interactions, such as scrolling/clicking.

Make it a habit to shorten the time it takes to process individual messages.
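A small sketch of why this matters: a long synchronous task delays every queued message (and any user interaction) until it finishes.

```javascript
// A long-running synchronous message delays everything behind it in the queue.
setTimeout(() => console.log('timer callback (queued message)'), 0);

const start = Date.now();
while (Date.now() - start < 2000) {
  // Busy-wait for ~2 seconds: during this time no clicks, timers,
  // or rendering updates can be processed.
}
console.log('long task finished');
// Output order: "long task finished" first, then the timer callback, ~2s late.
```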

Adding messages

In the browser, a message is added to the queue every time an event occurs that has an event listener attached to it. If there is no listener, the event is lost.

For example, clicking an element that has a click listener attached adds a message, just like any other event.
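For illustration (the element id below is hypothetical), attaching a listener is what makes each click produce a message to be processed:

```javascript
// Hypothetical example: '#myButton' is assumed to exist in the page.
// With a listener attached, each click adds a message to the queue;
// without one, the click event is simply dropped.
const button = document.querySelector('#myButton');
button.addEventListener('click', (event) => {
  console.log('processing the click message:', event.type); // "click"
});
```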

The setTimeout function is a bit special. It takes two arguments: a callback to add to the queue as a message, and a time value (optional; defaults to 0).

The time value represents the minimum delay before the message is actually processed. If the queue is empty and the stack is empty, the setTimeout message is processed as soon as the delay has passed. However, if there are other messages in the queue, the setTimeout callback has to wait for them to be processed first, so the delay is only a minimum, not a guarantee.
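A minimal sketch of the minimum-delay behaviour: even with a 0 ms delay, the callback can only run after the currently executing code has finished.

```javascript
// 0 ms is a minimum delay, not a guarantee: the callback waits for the
// current code (and any other queued messages) to finish first.
setTimeout(() => console.log('B: setTimeout callback (0 ms)'), 0);
console.log('A: synchronous code');
// Logs "A: synchronous code" first, then "B: setTimeout callback (0 ms)".
```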

Tasks like these are often categorized as macrotasks and microtasks. setTimeout callbacks are macrotasks: they always run after the microtasks queued by the current task have finished. Scripts themselves are also macrotasks, so a script loaded earlier runs as an earlier macrotask than a script loaded later.
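A short sketch of that ordering: a microtask queued with a promise runs before the setTimeout macrotask, even though both are scheduled "immediately".

```javascript
setTimeout(() => console.log('macrotask: setTimeout'), 0);
Promise.resolve().then(() => console.log('microtask: promise callback'));
console.log('synchronous script');

// Typical output order:
// synchronous script
// microtask: promise callback
// macrotask: setTimeout
```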

Never block

The JavaScript event loop model has a very interesting property that sets it apart from many other languages: it never blocks. For example, while the application is waiting for an XHR request to return, the user can still type into an input box.

There are exceptions, such as alert or synchronous XHR.
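As a sketch (the URL below is hypothetical), an asynchronous request never blocks the loop; the response is handled later as a queued message while the page stays interactive.

```javascript
// Hypothetical endpoint: the request runs in the background and the
// response handler is queued as a message once the data arrives.
fetch('/api/data')
  .then((response) => response.json())
  .then((data) => console.log('handled later:', data))
  .catch((error) => console.error('request failed:', error));

console.log('the page stays responsive while waiting');
```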

Garbage collection

Garbage collection is used in computer programming to describe the process of finding and removing objects that are no longer referenced by other objects.

When a program occupies a portion of memory that is no longer reachable by the program, a garbage collection algorithm reclaims that memory so it can be reused.

Remember the heap from the event loop section: that is where objects are stored and memory is managed. When an object is no longer used and nothing refers to it, it becomes garbage and is reclaimed by the garbage collection algorithm.
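A minimal sketch of "nothing refers to it": once every reference to a heap object is gone, the object becomes unreachable and is eligible for collection.

```javascript
let user = { name: 'Alice' }; // object allocated on the heap
let alias = user;             // a second reference to the same object

user = null;                  // still reachable through `alias`
alias = null;                 // now unreachable -- eligible for garbage collection
```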

How is that done? Let's see.

Principles

There are two basic principles of garbage collection:

  • Identify the objects that will not be accessed in the future.
  • Reclaim the memory occupied by these objects.

Consider the following question:

  • How do we know that an object will not be used in the future?

Tracing collector

A tracing collector works by periodically traversing the memory space it manages.

It starts from a set of root objects and follows their references to find all reachable objects; whatever is left unreached is then marked as garbage, and the memory occupied by those unreachable objects is reclaimed.
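As a toy sketch in JavaScript (not how a real engine is implemented), the tracing idea looks roughly like this: start from the roots, mark everything reachable, and treat whatever is left unmarked as garbage.

```javascript
// Toy mark phase: collect the set of objects reachable from the roots.
function markReachable(roots) {
  const marked = new Set();
  const stack = [...roots];
  while (stack.length > 0) {
    const obj = stack.pop();
    if (obj === null || typeof obj !== 'object' || marked.has(obj)) continue;
    marked.add(obj);
    for (const value of Object.values(obj)) stack.push(value); // follow references
  }
  return marked; // anything allocated but not in this set is unreachable
}

// Usage sketch:
const root = { child: { data: 1 } };
const orphan = { data: 2 };             // not reachable from `root`
const reachable = markReachable([root]);
console.log(reachable.has(root.child)); // true
console.log(reachable.has(orphan));     // false -- its memory would be reclaimed
```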

Now that we know the collector's general flow, let's look at the algorithms it can use.

Mark-sweep

First, all running threads of the program are suspended. A single collector thread then scans the heap and marks the live objects, sweeps (frees) everything that was not marked, and finally the program threads are resumed.

This can leave a large amount of fragmented free space, which is wasteful because it becomes hard to find contiguous memory for large objects.

Let’s take an example:

JavaScript runs on a single thread. When garbage collection happens, execution of the JavaScript code is suspended first; the collector thread then scans the heap and marks the objects that are no longer used. After marking is complete, those objects are swept away and their memory is reclaimed. Finally, JavaScript execution resumes.

Since the reclaimed regions may not be contiguous, the free space becomes fragmented. If a reclaimed region sits between two large objects, there may be no contiguous space big enough for another large object, and the fragment goes to waste.

Why is non-contiguous space wasteful? If a variable needs to allocate a large object, none of the small fragments is big enough to hold it, so the allocator has to look for a different, larger region, and the fragments remain unused.

Mark-compact

Similar to mark-sweep, except that during reclamation the surviving objects are moved into contiguous memory.

This consolidates free space and avoids memory fragmentation.

Copying

The program splits the space it owns into two parts. Objects needed at run time are first stored in one partition (partition 0). As before, all running threads are suspended and the live objects are marked. During collection, the surviving objects are copied to the other partition (partition 1), which completes the reclamation. After this collection, new objects are allocated in partition 1, and on the next collection the roles of the two partitions are swapped.

Incremental collector

The program divides the memory space it owns into several partitions. The objects it needs are distributed among these partitions, and only one partition is collected at a time.

This avoids suspending all running threads for the whole collection: some threads can keep running without affecting the collection, which shortens each pause and improves responsiveness.

Generational

Because the copying algorithm spends a long time moving large, long-lived objects, and because objects have very different lifetimes, the program divides its memory space into partitions marked as young-generation space and old-generation space.

Objects allocated at run time are first stored in the young-generation partition.

The young-generation partition is collected more frequently, and each time a collection completes, the surviving objects' lifetime counters are incremented by 1. When an object's counter reaches a certain threshold, or the space it occupies exceeds a certain threshold, the object is moved to the old-generation partition.
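A toy sketch of the promotion rule just described (the threshold and data structures are made up for illustration; this is not how a real engine stores objects):

```javascript
// Survivors of young-generation collections age by 1; once they reach the
// (made-up) threshold they are promoted to the old generation.
const PROMOTION_AGE = 2;
let youngGen = [];
let oldGen = [];

function allocate(obj) {
  youngGen.push({ obj, age: 0 });
  return obj;
}

function collectYoungGeneration(liveObjects) {
  const survivors = youngGen.filter((entry) => liveObjects.has(entry.obj));
  survivors.forEach((entry) => { entry.age += 1; });
  oldGen = oldGen.concat(survivors.filter((entry) => entry.age >= PROMOTION_AGE));
  youngGen = survivors.filter((entry) => entry.age < PROMOTION_AGE);
}

// Usage sketch: `keep` survives two collections and is promoted.
const keep = allocate({ name: 'long-lived' });
allocate({ name: 'short-lived' });           // never referenced again
collectYoungGeneration(new Set([keep]));     // short-lived object is discarded
collectYoungGeneration(new Set([keep]));     // keep reaches age 2 -> old generation
console.log(oldGen.length, youngGen.length); // 1 0
```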

The old-generation partition is collected far less often.

In general, there is also persistent generation space, which is used to store objects for the entire life of the program, such as running code, data constants, and so on. This space does not run garbage collection operations.

With this generational approach, objects that live in a limited scope, are small, and are short-lived can be reclaimed quickly, while objects that are global, large, and long-lived are collected much less often.

Conclusion

This article has given an overview of how the event loop and garbage collection work.

  • Event loop:
    • Stack: where function call frames are pushed and popped.
    • Heap: the memory where objects are stored; variables on the stack hold references to them.
    • Queue: the messages waiting to be processed, in order.
  • Garbage collection: how the heap stores objects and reclaims them, through the relevant algorithms.
    • Mark-sweep: leaves numerous, non-contiguous fragments.
    • Mark-compact: reduces fragmentation and keeps free space contiguous, reducing waste, but takes longer because the space must be compacted.
    • Copying: moving objects is time-consuming, and objects have very different lifetimes.
    • Incremental collector: avoids suspending all threads, shortens pauses, and improves responsiveness.
    • Generational: young-generation and old-generation partitions; a better-tuned variant of the copying approach.