This is the 16th day of my participation in the August Text Challenge.

This is the fifth article in the Memory Control series, covering memory leaks.

  • Part 1: V8’s memory limits and why they exist
  • Part 2: V8’s garbage collection mechanism
  • Part 3: Using memory efficiently
  • Part 4: Memory metrics

To recap the previous four articles:

Because of how V8 manages memory, Node.js can by default use only part of the system’s memory (about 1.4 GB on 64-bit systems and 0.7 GB on 32-bit systems).

V8’s garbage collection mechanism divides the heap into a new generation and an old generation, each with its own collection algorithm.

For V8’s garbage collection to work well, we need to be careful with global variables and closures, since they can prevent memory from being reclaimed promptly and let objects accumulate in the old generation.

We can check system memory with os.totalmem() and os.freemem(), and process memory with process.memoryUsage(). A Node.js process’s memory consists of the V8 heap plus off-heap allocations such as Buffers; memory used by Buffers is not limited by the heap size.

This article focuses on the causes of memory leaks and their solutions:

With V8’s garbage collection mechanism, memory leaks are rare. However, Node.js is very sensitive to them: once a leak occurs, garbage collection spends more and more time scanning the heap, the application slows down, and eventually the process runs out of memory and crashes.

There are many causes of memory leaks, but the essence is always the same: an object that should have been reclaimed is accidentally kept alive and becomes a permanent resident of the old generation.

There are several possible causes of memory leaks. Let’s take a look at each one:

  • Caches
  • Queues consumed too slowly
  • Scopes that are never released

Be wary of using memory as a cache

Cache limiting policy

If you cache results in a plain JavaScript object, each lookup checks the cache first and, on a miss, fetches the result and stores it. This works, but be careful: it can leak memory.

Why might this be a memory leak problem?

Once an object is used as a cache, it lives in the old generation. The more keys it holds, the more memory it occupies, and the garbage collector keeps scanning these long-lived objects to no effect, wasting time and degrading memory usage.

If you take this approach, you need to put limits on the cache, for example by evicting the least recently used entries. You can use Isaac Z. Schlueter’s LRU cache at github.com/isaacs/node… Use caching wisely.
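The eviction idea can be sketched with a Map, whose insertion order lets us track recency. This is a toy illustration of the LRU principle, not the API of the lru-cache module mentioned above:

```javascript
// Minimal LRU-style bounded cache, using Map's insertion order
// to track which key was used least recently.
class BoundedCache {
  constructor(limit) {
    this.limit = limit;
    this.map = new Map();
  }

  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    // Re-insert to mark this key as most recently used
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.limit) {
      // Evict the least recently used key (first in iteration order)
      const oldest = this.map.keys().next().value;
      this.map.delete(oldest);
    }
  }
}
```

Because the cache can never grow past `limit` entries, it cannot accumulate without bound in the old generation.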

In addition, be careful when designing modules: a module is compiled, executed once, and then cached, so its state lives in the old generation for the lifetime of the process.

In the example below, leakArray grows every time leak() is called; because the module is cached, the variable’s footprint keeps increasing and is never released:

```javascript
// Module-level state: cached along with the module, never released
var leakArray = [];

exports.leak = function () {
  leakArray.push("leak" + Math.random()); // grows on every call
};
```

In this situation, the module should expose a clearing interface so that callers can release the memory.

Caching solutions

When you need a cache, prefer an out-of-process cache. Dedicated cache software has well-tested expiration policies and its own memory management.

The most widely used external caches are Redis and Memcached.

Their main benefits are:

  1. Moving the cache out of the process reduces the number of objects resident in the heap, making garbage collection more efficient
  2. The cache can be shared between processes
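The automatic expiry these external caches provide (for example, Redis’s EXPIRE/SETEX semantics) can be illustrated in-process with a toy time-to-live cache. This is only a sketch of the idea; a real deployment would talk to a Redis or Memcached server through a client library:

```javascript
// Toy illustration of time-based expiry, the policy external
// caches such as Redis apply automatically.
class TtlCache {
  constructor() {
    this.store = new Map(); // key -> { value, expiresAt }
  }

  set(key, value, ttlMs) {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }

  get(key) {
    const entry = this.store.get(key);
    if (!entry) return null;
    if (Date.now() >= entry.expiresAt) {
      this.store.delete(key); // lazy expiry on read
      return null;
    }
    return entry.value;
  }
}
```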

Pay attention to queue state

In the producer-consumer model, we often use queues (array objects) as intermediaries. Once the consumption rate falls below the production rate, a backlog forms.

For example, when collecting logs, we might carelessly store them in a database. A database is built on top of the file system, and its write speed is much slower than writing to files directly, so writes pile up in the queue, memory usage climbs, and the result is a memory leak.

The superficial fix is to write the logs to a file instead, but if the production rate surges for some reason, the problem returns.

The deeper fix has two parts. First, monitor the queue length and raise an alarm when it grows abnormally. Second, give every asynchronous call a timeout: the timer starts the moment the call enters the queue, and if it has not completed within the allotted time, a timeout error is passed through the callback. Alternatively, use rejection mode: when the queue is congested, new calls immediately receive a congestion error.
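Both defensive measures can be sketched as follows; the class and function names are illustrative, not from any particular library:

```javascript
// Rejection mode: a length-limited queue that fails fast
// instead of letting work pile up in memory.
class BoundedQueue {
  constructor(maxLength) {
    this.maxLength = maxLength;
    this.tasks = [];
  }

  push(task) {
    if (this.tasks.length >= this.maxLength) {
      throw new Error('queue congested'); // respond with congestion error
    }
    this.tasks.push(task);
  }

  shift() {
    return this.tasks.shift();
  }

  get length() {
    return this.tasks.length;
  }
}

// Timeout mechanism: apply the deadline from the moment the
// work is handed over, so time spent waiting in the queue counts.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('timeout')), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```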

Those are the main causes of memory leaks and their solutions.