• Implementing an efficient LRU cache in JavaScript
  • Original author: Yomguithereal
  • The Nuggets Translation Project
  • Permanent link to this article: github.com/xitu/gold-m…
  • Translator: hanxiaosss
  • Proofreader: TokenJan, Shixi-Li

Implement an efficient LRU cache using JavaScript

How to use the power of JavaScript typed arrays to design your own low-cost pointer system for fixed-capacity data structures

Suppose we need to work with a very large (say, several hundred gigabytes) CSV file containing URLs to download. To make sure we don’t run out of memory while parsing it, we read the file line by line:

csv.forEachLine(line => {
  download(line.url);
});

Now assume that whoever created this file forgot to remove duplicate URLs.

I know this could easily be solved with sort -u, but that’s beside the point. Deduplicating URLs may not be as simple as it seems anyway: you can look at the ural library in Python, for instance, for some examples of what this can entail.

This is a problem because we don’t want to fetch the same URL multiple times: fetching resources from the web is time-consuming, and we should be considerate and avoid hammering the sites hosting those resources with too many requests.

The obvious solution is to remember the URLs we have already fetched by caching them in a map:

const done = new Map();

csv.forEachLine(line => {
  if (done.has(line.url))
    return done.get(line.url);

  const result = download(line.url);
  done.set(line.url, result);
});

At this point, the savvy reader will notice that we have just defeated the purpose of reading the CSV file line by line, since we now need to keep all of its URLs in memory.

That’s the problem: we want to avoid retrieving the same URL multiple times while making sure we don’t run out of memory. We must find a happy medium.

Interesting fact: if you are dealing with data from the Internet, such as a list of URLs to crawl, you will inevitably run into power laws. They appear because people link to twitter.com exponentially more often than to unknownwebsite.fr.

Fortunately for us, it seems that only a small fraction of the URLs in the file are repeated frequently, while the vast majority occur only once or twice. We can design a strategy that exploits this fact: by evicting from the map the URLs we deem unlikely to reappear, we can keep the map from exceeding a preset amount of memory.

One of the most common eviction policies is called “LRU”:

Least Recently Used.

The LRU cache

So, we now understand that an LRU cache is a fixed-capacity map that binds values to keys with the following twist: if the cache is full and we still need to insert a new element, we make room by evicting the least recently used element.

To do so, the cache needs to store its items in order of last access. Every time someone sets a new key or accesses an existing one, we modify the underlying list to make sure we maintain the order we need.

Note: LFU (Least Frequently Used) is a perfectly valid cache eviction policy, just a less common one. You can read about it here and find implementation notes here.

But why is this order so important? Wouldn’t it be better to keep track of how many times each element was accessed, so we could evict the least frequently used instead?

Not necessarily. Here are some reasons why:

  • LRU is actually a good proxy for LFU, since the more frequently a key-value pair is accessed, the less likely it is to be evicted.
  • With LFU, you need to store an extra integer per item just to track how many times it has been accessed.
  • Keeping elements ordered by last access is straightforward, since the order can be updated alongside each operation on the cache.
  • LFU often forces you to make an arbitrary choice about which element to evict: for example, when every key-value pair has been accessed exactly once. With LRU, there is no such ambiguity: you simply evict the least recently used element.

Implementing an LRU cache

There are many ways to implement a working LRU cache, but I will focus only on the one you are most likely to use when developing in a high-level language.

In general, to implement a suitable LRU cache, we need the following two elements:

  1. A hashmap-like structure able to efficiently retrieve values associated with arbitrary keys, such as strings. In JavaScript, we can use the ES6 Map or any plain object {}: remember that our cache is nothing more than a fixed-capacity key-value store.
  2. A way to store our items in the order of their last access. What’s more, we will need to move items around efficiently, which is why people tend to rely on a doubly linked list for this task.

Our implementation needs to be able to do at least two things:

  • #.set: associates a value with the given key, evicting the least recently used element if the cache is full.
  • #.get: retrieves the value associated with the given key if it exists in the cache, while updating the underlying list to preserve LRU order.

Here’s how we can use such a cache:

// Create a cache able to hold three elements
const cache = new LRUCache(3);

// Add some elements
cache.set(1, 'one');
cache.set(2, 'two');
cache.set(3, 'three');

// So far, no element has been evicted from the cache
cache.has(2);
>>> true

// Oh no! We need to add a new element
cache.set(4, 'four');

// '1' was evicted because it was the least recently used key
cache.has(1);
>>> false

// If we access '2', it is no longer the LRU key
cache.get(2);
>>> 'two'

// Which means that '3' will be evicted instead
cache.set(5, 'five');
cache.has(3);
>>> false

// Hence we never store more than 3 elements
cache.size
>>> 3
cache.items()
>>> ['five', 'two', 'four']

Doubly linked list

Question: why wouldn’t a singly linked list be enough in our case? Because we need to perform the following operations efficiently on our list:

  • Put an element at the beginning of the list
  • Move an element from anywhere in the list to its beginning
  • Remove the last element of the list while keeping a correct pointer to the new last element

To be able to implement an LRU cache, we therefore need a doubly linked list, ensuring we can store all elements in their order of last access: from the most recently used at the front to the least recently used at the end.

Note that the reverse order would work just as well; the direction of the list does not matter.

So how do we represent a doubly linked list in memory? In general, we create a node structure containing:

  1. A payload, i.e. the actual value or element to store. It can be of any type, from string to integer…
  2. A pointer to the previous element in the list.
  3. A pointer to the next element in the list.

We also need to store pointers to both the first and last elements of the list. And that’s all there is to it.

If you’re not sure what pointers are, now might be a good time to brush up on them.

Here it looks like this:

Node structure: (prev | payload | next)

  payload: the stored value, here a string
  prev:    pointer to the previous element
  next:    pointer to the next element
  •:       a pointer
  x:       a null pointer

List structure:

          ┌──────>┐   ┌───────>┐   ┌────────>┐
  head → (x|"one"|•) (•|"two"|•) (•|"three"|x) ← tail
          └<──────┘   └<───────┘   └<────────┘

LRU cache list operations

As long as the cache is not full, maintaining our list of elements in order of last access is very easy: we simply insert new elements at the front of the list:

1. An empty cache with a capacity of 3:

   head → x   x ← tail

2. We insert the key "one":

   head → (x|"one"|x) ← tail

3. We insert the key "two" (notice how "two" is inserted at the front of the list):

           ┌─────>┐   ┌───────>┐
   head → (x|"two"|•) (•|"one"|x) ← tail
           └<─────┘   └<───────┘

4. Finally, we insert the key "three":

           ┌───────>┐   ┌───────>┐   ┌──────>┐
   head → (x|"three"|•) (•|"two"|•) (•|"one"|x) ← tail
           └<───────┘   └<───────┘   └<──────┘

So far so good. Now, to keep our list in LRU order, whenever a key already stored in the cache is accessed, we need to reorder the list by moving the accessed key to the front:

1. Current state of our cache:

           ┌───────>┐   ┌───────>┐   ┌──────>┐
   head → (x|"three"|•) (•|"two"|•) (•|"one"|x) ← tail
           └<───────┘   └<───────┘   └<──────┘

2. We access the key "two": we first extract its node from the list
   and relink its previous and next elements together.

   Extracted: (x|"two"|x)

           ┌───────>┐   ┌──────>┐
   head → (x|"three"|•) (•|"one"|x) ← tail
           └<───────┘   └<──────┘

3. Then we move it to the front:

           ┌─────>┐   ┌────────>┐   ┌──────>┐
   head → (x|"two"|•) (•|"three"|•) (•|"one"|x) ← tail
           └<─────┘   └<────────┘   └<──────┘

Note that the head pointer needs to be updated every time, and that sometimes the tail pointer must be updated as well.

Finally, if the cache is full when an unknown key needs to be inserted, we remove the last element of the list to make room for the new one:

1. Current state of our cache:

           ┌─────>┐   ┌────────>┐   ┌──────>┐
   head → (x|"two"|•) (•|"three"|•) (•|"one"|x) ← tail
           └<─────┘   └<────────┘   └<──────┘

2. We need to insert an element for the key "six", but the cache is full.
   We evict the LRU element, "one":

   Removed: (x|"one"|x)

           ┌─────>┐   ┌────────>┐
   head → (x|"two"|•) (•|"three"|x) ← tail
           └<─────┘   └<────────┘

3. Then we insert the new element at the front:

           ┌─────>┐   ┌───────>┐   ┌────────>┐
   head → (x|"six"|•) (•|"two"|•) (•|"three"|x) ← tail
           └<─────┘   └<───────┘   └<────────┘

That’s all we need to know about doubly linked lists to build a decent LRU cache.

Implementing a doubly linked list in JavaScript

Now, there’s a slight problem: the JavaScript language doesn’t have pointers per se. The closest we can get is passing references around and “dereferencing” pointers by accessing object properties.

This means that people typically implement linked lists in JavaScript by writing classes like the following:

// Class constructor for a node
function Node(value) {
  this.value = value;
  this.previous = null;
  this.next = null;
}

// Class constructor for the list
function DoublyLinkedList() {
  this.head = null;
  this.tail = null;
}

// Performing operations
// (these would normally be wrapped in list methods)
const list = new DoublyLinkedList();

const node1 = new Node('one');
const node2 = new Node('two');

list.head = node1;
list.tail = node2;

node1.next = node2;
node2.previous = node1;

// ...
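To give an idea of what wrapping those operations in methods looks like, here is a sketch of the “move to front” operation our LRU cache will need. The method name and exact shape are mine, for illustration:

// Sketch: moving an existing node to the front of the list.
// Assumes the node is currently part of the list.
DoublyLinkedList.prototype.moveToFront = function(node) {
  if (node === this.head)
    return;

  // Extract the node by relinking its neighbors together
  if (node.previous)
    node.previous.next = node.next;
  if (node.next)
    node.next.previous = node.previous;

  // If the node was the tail, the tail pointer must be updated
  if (node === this.tail)
    this.tail = node.previous;

  // Reinsert the node at the front
  node.previous = null;
  node.next = this.head;
  this.head.previous = node;
  this.head = node;
};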

While there is nothing wrong with this approach, it has some drawbacks that can make implementations like this one underperform:

  1. Every time you instantiate a node, some extra memory is allocated for bookkeeping.
  2. If the list churns quickly, i.e. nodes are added and removed frequently, garbage collection cycles will kick in and slow you down.
  3. Most of the time, the engine will try to optimize your objects into low-level structs, but you have no control over that.
  4. Finally, and related to 3., accessing object properties is not the fastest thing in the JavaScript world.

That’s why you don’t see many people using linked lists in JavaScript code. They are really only used by people who need them in very specific cases, namely where linked lists shine algorithmically. Node.js has such an implementation here, and you can find it applied to timers here.

But we can be a little smarter than that. There is one characteristic of our list we can exploit to improve performance: its capacity is bounded, so it can never hold more than a fixed number of elements.

So, instead of using JavaScript references and properties as pointers, let’s use typed arrays to roll our own pointer system!

Custom pointer system

Typed arrays are very neat JavaScript objects that represent fixed-capacity arrays of typical low-level number types: int32, float64, and so on.

They are fairly fast and consume very little memory because the stored elements are statically typed and not subject to bookkeeping overhead.

Here’s how to use it:

// A typed array of 256 unsigned 16-bit integers
const array = new Uint16Array(256);

// Every index is initialized to 0.
// We can naturally read & set elements
array[10] = 34;
array[10];
>>> 34

// But here's the catch: we cannot push new elements
array.push(45);
>>> throw `TypeError: array.push is not a function`

Also, conceptually, instantiating a typed array in JavaScript is not that far from calling malloc in C; or, at least, you can use them to perform the same kinds of tasks in both languages.

Since we have access to these high-performance arrays, why shouldn’t we use them to implement our own pointer system? After all, pointers are nothing more than addresses mapping chunks of memory.

Let’s use a typed array as our chunk of memory, and its indices as addresses! The only tricky part now is choosing the right integer type, based on our capacity, to avoid overflows.

If you don’t know how to choose the right type in advance, you can refer to the function linked here.
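As an illustration, such a helper could look like the following sketch (my own version, not necessarily the linked function):

// Sketch: picking the smallest typed array able to store
// indices (pointers) for a given capacity.
function getPointerArray(capacity) {
  if (capacity <= 255)
    return Uint8Array;  // up to 2^8 - 1
  if (capacity <= 65535)
    return Uint16Array; // up to 2^16 - 1
  if (capacity <= 4294967295)
    return Uint32Array; // up to 2^32 - 1

  // Doubles can still represent integers exactly up to 2^53
  return Float64Array;
}

const PointerArray = getPointerArray(1000);
const pointers = new PointerArray(1000); // a Uint16Array here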

So, to implement a fixed-capacity doubly linked list in JavaScript for our LRU cache, we need the following typed arrays:

  1. A plain array to store our list’s payloads, i.e. the key-value pairs. Or two arrays, typed or plain depending on the types of the keys and values. For example, if we can guarantee that our values are no larger than 32-bit integers, we can once again use typed arrays here.
  2. A typed array storing a series of indices, representing our “next” pointers.
  3. Another typed array storing a series of indices, representing our “previous” pointers.

Using typed-array pointers has one slightly annoying quirk: you need index 0 to represent the null pointer, which makes slot 0 ambiguous. There are two clever ways to work around this problem:

  • You can offset your values array by keeping its first slot empty, so the 0 index never risks storing anything important (see the sketch just after this list).
  • You can offset the indices by 1, but this requires some light arithmetic on every index, which can make the code harder to read.
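As a minimal sketch, the first option simply means allocating one extra slot and never using slot 0:

// Sketch of the first trick: slot 0 is sacrificed so that
// the 0 index can safely serve as the null pointer.
const capacity = 3;

const keys = new Array(capacity + 1);      // slot 0 stays empty
const next = new Uint8Array(capacity + 1); // 0 now means "null"
const prev = new Uint8Array(capacity + 1);

// Real items live at indices 1 to capacity
keys[1] = 'one';
next[1] = 0; // null pointer: "one" has no next element yet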

Note that some people also use signed rather than unsigned typed arrays (even though an index can obviously never be negative) to add a layer of indirection: the sign of a pointer can then mean one thing or another. For example, a Trie implementation can save memory by storing the ids of leaf nodes as negative indices.

Our code would then look like this:

function FixedCapacityDoublyLinkedList(capacity) {
  this.keys   = new Array(capacity);
  this.values = new Uint8Array(capacity);

  this.next = new Uint8Array(capacity);
  this.previous = new Uint8Array(capacity);

  this.head = 0;
  this.tail = 0;
}

// What we wrote in the previous example:
const list = new DoublyLinkedList();

const node1 = new Node('one');
const node2 = new Node('two');

list.head = node1;
list.tail = node2;

node1.next = node2;
node2.previous = node1;

// now becomes:
const list = new FixedCapacityDoublyLinkedList(2);

// First node
list.keys[0] = 'one';
list.values[0] = 1;

// Second node
list.keys[1] = 'two';
list.values[1] = 2;

// Wiring the nodes' pointers
list.next[0] = 1;
list.previous[1] = 0;

list.head = 0;
list.tail = 1;

This code looks a little harder to read, but now, instead of setting properties, we read and write indices in our typed arrays.

Here is what this looks like in memory:

The following list:

          ┌───────>┐   ┌───────>┐   ┌──────>┐
  head → (x|"three"|•) (•|"two"|•) (•|"one"|x) ← tail
          └<───────┘   └<───────┘   └<──────┘

is stored with the following indices and arrays:

  head     = 2
  tail     = 0
  capacity = 3
  size     = 3

  index       0      1       2
  keys   = ["one", "two", "three"]
  prev   = [    1,     2,       x]
  next   = [    x,     0,       1]

// x (the null pointer) should really be 0 here,
// but we keep x for simplicity's sake

So, if we “replay” the list operations required by our earlier example using this new scheme, we get:

1. We start with an empty list:

  head     = x
  tail     = x
  capacity = 3
  size     = 0

  index       0      1      2
  keys   = [    x,     x,     x]
  prev   = [    x,     x,     x]
  next   = [    x,     x,     x]

2. We insert "one":

  head     = 0
  tail     = 0
  capacity = 3
  size     = 1

  index       0      1      2
  keys   = ["one",     x,     x]
  prev   = [    x,     x,     x]
  next   = [    x,     x,     x]

3. We insert "two":

  head     = 1
  tail     = 0
  capacity = 3
  size     = 2

  index       0      1      2
  keys   = ["one", "two",     x]
  prev   = [    1,     x,     x]
  next   = [    x,     0,     x]

4. We insert "three":

  head     = 2
  tail     = 0
  capacity = 3
  size     = 3

  index       0      1       2
  keys   = ["one", "two", "three"]
  prev   = [    1,     2,       x]
  next   = [    x,     0,       1]

5. We access "two" and move it to the front of the list
   (note that only the pointers change; the values do not move):

  head     = 1
  tail     = 0
  capacity = 3
  size     = 3

  index       0      1       2
  keys   = ["one", "two", "three"]
  prev   = [    2,     x,       1]
  next   = [    x,     2,       0]

6. Finally, we insert "six", evicting "one":

  head     = 0
  tail     = 2
  capacity = 3
  size     = 3

  index       0      1       2
  keys   = ["six", "two", "three"]
  prev   = [    x,     0,       1]
  next   = [    1,     2,       x]

Looks boring, doesn’t it?

But this is actually a good thing: it means we don’t move elements around much and we don’t create anything; we just read and write array indices.
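To make this more concrete, here is a rough sketch of what #.get and #.set can look like on top of this typed-array scheme. It is a simplification for illustration, not mnemonist’s actual code: it ignores the index-0 ambiguity discussed earlier and hardcodes Uint8Array, which caps the capacity at 255:

function LRUCache(capacity) {
  this.capacity = capacity;
  this.keys = new Array(capacity);
  this.values = new Array(capacity);
  this.forward = new Uint8Array(capacity);  // "next" pointers
  this.backward = new Uint8Array(capacity); // "previous" pointers
  this.items = new Map(); // key -> index (our pointers)
  this.head = 0;
  this.tail = 0;
  this.size = 0;
}

// Move an existing item to the front of the list
LRUCache.prototype.splayOnTop = function(pointer) {
  if (this.head === pointer)
    return;

  const previous = this.backward[pointer];
  const next = this.forward[pointer];

  // Extract the item by relinking its neighbors
  if (this.tail === pointer)
    this.tail = previous;
  else
    this.backward[next] = previous;

  this.forward[previous] = next;

  // Reinsert it at the front
  this.backward[this.head] = pointer;
  this.forward[pointer] = this.head;
  this.head = pointer;
};

LRUCache.prototype.set = function(key, value) {
  let pointer = this.items.get(key);

  // Known key: update its value & move it to the front
  if (pointer !== undefined) {
    this.splayOnTop(pointer);
    this.values[pointer] = value;
    return;
  }

  if (this.size < this.capacity) {
    // The cache is not full yet: use the next virgin slot
    pointer = this.size++;
  } else {
    // The cache is full: recycle the LRU slot
    pointer = this.tail;
    this.tail = this.backward[pointer];
    this.items.delete(this.keys[pointer]);
  }

  this.keys[pointer] = key;
  this.values[pointer] = value;
  this.items.set(key, pointer);

  // Insert the new item at the front of the list
  this.forward[pointer] = this.head;
  this.backward[this.head] = pointer;
  this.head = pointer;
};

LRUCache.prototype.get = function(key) {
  const pointer = this.items.get(key);

  if (pointer === undefined)
    return undefined;

  this.splayOnTop(pointer);
  return this.values[pointer];
};

Note how eviction allocates nothing at all: the tail’s slot is simply recycled for the new key.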

So why is this faster than the traditional implementation we discussed earlier?

  • Memory is allocated only once: new objects are never instantiated and old ones are never garbage collected. Yes, you can mitigate allocation and garbage collection by using object pools, but relying on typed arrays remains faster and more memory-efficient. What’s more, memory predictability is a very desirable property of an LRU cache implementation. The only remaining memory concern in our implementation is deleting keys from the map once capacity is reached and items start being evicted.
  • Array index lookups and writes are very fast because the allocated memory is mostly contiguous, especially with typed arrays. You never have to jump very far to find what you need, and CPU cache optimizations can work their magic.
  • Less interpretation is needed: the engine doesn’t have to be clever about anything here and will readily apply very low-level optimizations.

Is it really worth the trouble?

To make sure of this, I implemented and benchmarked the custom pointer system I just described.

You can thus find an implementation of an LRU cache based on typed arrays in the mnemonist library: one, LRUCache, relies on a JavaScript object, while another, LRUMap, relies on an ES6 Map.

You can read their source code here and decide for yourself whether handling indices rather than object properties is unwieldy or not.

Then there’s dominictarr/bench-lru, a commonly used benchmark repository. As with any benchmark, it may not perfectly fit your use case, but it at least avoids some common pitfalls such as unrelated engine optimizations.

Here are some recent benchmark results, in ops/ms (higher is better), for a variety of typical LRU cache implementations, run with Node 12.6.0 on my 2013 MacBook:

name              set    get1   update  get2   evict
mnemonist-object  10793  53191  40486   56497  8217
hashlru *         13860  14981  16340   15385  6959
simple-lru-cache  5875   36697  28818   37453  6866
tiny-lru          4378   36101  34602   40568  5626
lru-fast          4993   38685  38986   47619  5224
quick-lru *       4802   3430   4958    3306   5024
hyperlru-object   3831   12415  13063   13569  3019
mnemonist-map     3533   10020  6072    6475   2606
lru               3072   3929   3811    4654   2489
secondary-cache   2629   8292   4772    9699   2004
js-lru            2903   6202   6305    6114   1661
lru-cache         2158   3882   3857    3993   1350
hyperlru-map      1757   4425   3684    3503   1289
modern-lru        1637   2746   1934    2551   1057
mkc               1589   2192   1283    2092   999

Note that hashlru and quick-lru (marked with a *) are not traditional LRU caches: they both retain very good write performance (especially the first one), but slightly worse read performance, because they must perform two different hashmap lookups. The libraries are ranked here by eviction performance, since it is usually the slowest yet most critical operation of an LRU cache. This makes the ranking less obvious to read, though, so take the time to peruse the results.

You should also run the benchmark on your own hardware: while the ranking is mostly stable, the results tend to vary.

Also note that the benchmarked libraries vary somewhat in the features they offer, so the results don’t tell a completely fair story.

Finally, we might add memory consumption to the benchmark in the future. It is not easy to measure it reliably in JavaScript, but I strongly suspect that relying on typed arrays helps reduce the memory footprint of any implementation.
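For reference, here is a crude way one might compare heap usage between implementations in Node.js. This is only a sketch: reliable numbers would require separate processes, forced garbage collection and many repeated runs:

// Crude heap-usage measurement sketch (run with: node --expose-gc)
global.gc();
const before = process.memoryUsage().heapUsed;

const cache = new LRUCache(100000); // whichever implementation
for (let i = 0; i < 100000; i++)
  cache.set('key-' + i, i);

global.gc();
const after = process.memoryUsage().heapUsed;
console.log(((after - before) / 1024 / 1024).toFixed(2) + ' MB');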

Conclusion

Now you know how to create your own pointer system using JavaScript typed arrays. This technique is not limited to fixed-capacity linked lists: it can be used to implement a variety of other data structures.

For example, many tree data structures can also benefit from this technique.

But as always, and this advice holds for most high-level languages, optimizing JavaScript this way means squinting hard and pretending the language is something it is not.

The typed-array pointer technique is far from applicable in every high-level language. For example, if you try to replicate it in Python using bytearray or np.array, you will find that the performance is quite poor. It pays off in JavaScript because typed arrays:

  1. are statically typed
  2. are low-level

Basically, the fewer choices the engine has to make, the easier it is for it to optimize your code.

Of course, none of this is advisable for everyday application code. This level of optimization should only be pursued for performance-critical things, such as, off the top of my head, an LRU cache.

LRU caches, for example, are essential to many storage implementations; web servers and clients alike rely on them heavily.

Finally, don’t take my word or advice too seriously: optimizing interpreted languages is notoriously tricky, and JavaScript is worse still because you have to account for JIT compilation and multiple engines such as V8 or SpiderMonkey.

So, test your code, because your scenarios may be different.

Wish you a happy life!


Miscellaneous notes

About deletions and splay trees

One problem plaguing the eviction performance of every JavaScript LRU implementation is that deleting a key, whether from an object (using the delete keyword) or from a Map (using the #.delete method), is very expensive.

Once again, the hashlru and quick-lru libraries offer interesting workarounds to this problem, and I strongly encourage you to read them.
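To give the flavor of the workaround, here is a simplified sketch of hashlru’s idea as I understand it (using Maps rather than the library’s actual plain objects): instead of deleting keys one by one, it keeps two maps and throws a whole one away once full, amortizing the cost of deletion:

// Simplified sketch of the hashlru idea, not the library's code
function HashLRU(max) {
  let size = 0;
  let cache = new Map();
  let oldCache = new Map();

  function update(key, value) {
    cache.set(key, value);
    size++;

    if (size >= max) {
      // No per-key deletion: just drop the whole old map
      size = 0;
      oldCache = cache;
      cache = new Map();
    }
  }

  return {
    set(key, value) {
      if (cache.has(key))
        cache.set(key, value);
      else
        update(key, value);
    },
    get(key) {
      if (cache.has(key))
        return cache.get(key);

      if (oldCache.has(key)) {
        // Promote the key back into the fresh map
        const value = oldCache.get(key);
        update(key, value);
        return value;
      }
    }
  };
}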

It seems (for now?) that there is no way around this, because it is nearly impossible to beat the engine’s native hashmaps from interpreted JavaScript code.

It would be quite counter-intuitive anyway, since there is no way to run a hashing algorithm as fast as the engine itself can. I tried my luck with tree-based key-value data structures, such as CritBit trees, but I must admit I could not beat JavaScript objects or Maps. You can still implement such trees so that they are only 2 to 5 times slower than native objects for some specific use cases, while keeping lexicographic order. I guess that’s not so bad, right? Feel free to check out the undocumented code here.

This means you cannot roll your own map in JavaScript, for instance a fixed-capacity map with cheap deletions, and hope to beat what native objects already give you.

One interesting solution that would be worth trying is splay trees.

These trees, a variant of binary search trees, support fairly efficient associative key-value operations and are well suited to LRU caches, since they already work by “splaying” frequently accessed keys to the top of the tree.

About WebAssembly

Wouldn’t it be faster to implement an LRU cache using hot new things like WebAssembly rather than trying to shoehorn low-level concepts into high-level JavaScript code?

Yes. But there’s a catch: if you need to use this implementation from the JavaScript side, you’re out of luck, because it will be slow. Communication between the JavaScript side and the WebAssembly side will slow you down, only slightly, but enough to make such an implementation pointless.

Communication between WASM and JS keeps improving; check out, for instance, this blog post on the matter. Unfortunately, it is still not enough to make it worthwhile to run a data structure’s hot methods on the WASM side while calling them from the JS side.

However, if you can write code that runs only in WebAssembly and does not rely on JavaScript APIs, for instance because you compile Rust to WebAssembly, then implementing your LRU cache there is also a great idea. You are bound to get better results.

Saving one pointer array

For LRU caches, there is a well-known trick to save one level of pointers. It does not mean you can do without a doubly linked list for efficiency, but rather that the hashmap/dictionary structure you use stores, for each key, a pointer to the previous key’s node instead of the key’s own node, which saves memory.

I won’t explain it here, but you can check out this Stack Overflow answer if you want the gist of it.

Note that in JavaScript this is usually not a good idea performance-wise (computation-wise, that is; obviously not memory-wise), since you will need more hashmap lookups to update the pointers, and those are quite costly.

About arbitrary deletions

Note that the LRU cache implementation proposed here would have a hard time handling arbitrary deletions, i.e. letting the user delete keys at will.

Why is that? Because so far we never needed a way to find an “available” slot in memory: we only recycle slots when evicting the LRU key to make room for a new one. If users could delete keys at will, holes would appear at random positions in our arrays, and we would need a way to keep track of them in order to “reallocate” them.

This can be done, for instance, by using an extra array serving as a stack of free pointers, but it obviously has both a memory and a performance cost.
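Here is a small sketch of what such a free-pointer stack could look like (my own illustration):

// Sketch: recycling slots freed by arbitrary deletions
function SlotAllocator(capacity) {
  this.size = 0;                          // virgin slots handed out so far
  this.freed = new Uint16Array(capacity); // stack of freed slots
  this.freedSize = 0;
}

// Get an available slot, preferring recycled ones
SlotAllocator.prototype.allocate = function() {
  if (this.freedSize > 0)
    return this.freed[--this.freedSize];

  return this.size++;
};

// Give a slot back to the pool after a deletion
SlotAllocator.prototype.free = function(slot) {
  this.freed[this.freedSize++] = slot;
};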

A completely dynamic custom pointer system

Here I focused on using typed arrays to implement a fixed-capacity pointer system. But if you are more ambitious, you can very well design a dynamic-capacity pointer system.

To do so, you can use dynamic typed arrays, such as mnemonist’s Vector. When working with numbers, they are sometimes more efficient than plain JavaScript arrays, and they let you do exactly what you need here.

However, I’m not sure if using a custom dynamic pointer system would result in any performance improvement when implementing other data structures.

Links

  • You’ll find Mnemonist very useful: it contains many efficient data structure implementations for JavaScript.
  • I gave a talk (Slides) on implementing data structures in JavaScript at FOSDEM in 2019.

If you find any mistakes in this translation or other areas that need improvement, you are welcome to revise it and open a PR with the Nuggets Translation Project, for which you can earn reward points. The permanent link at the beginning of this article is the Markdown link to this article on GitHub.


The Nuggets Translation Project is a community that translates quality technical articles from across the Internet and shares them on Nuggets. The content covers Android, iOS, front-end, back-end, blockchain, product, design, artificial intelligence and other fields. If you want to see more high-quality translations, please keep an eye on the Nuggets Translation Project, its official Weibo, and its Zhihu column.