Introduction

As the title suggests, front-end developers need to understand the LRU algorithm too!

It is also a genuinely useful front-end skill: when an interviewer asks what algorithms you have encountered in front-end development, you will have a ready answer!

We will work through this section in the following steps:

  • Derive the LRU algorithm principle from the browser cache strategy
  • Then look at its application in Vue's keep-alive
  • Then read the keep-alive source code in Vue to see how the LRU algorithm is implemented
  • Finally, implement an LRU cache ourselves via a LeetCode problem

Follow these steps to fully master the LRU algorithm and light up another front-end skill. Let's start πŸ‘‡

1. The LRU cache eviction strategy

Caching is everywhere in computer networks. For example, the first time we visit a web page it opens slowly, but when we open it again it loads quickly.

This involves one application of caching in the browser: the browser cache. When we open a web page, say https://github.com/sisterAn/JavaScript-Algorithms, the browser first queries its cache, before making a real network request, to see whether it already has the requested files. If it does, the browser intercepts the request, returns the cached files, and simply ends the request without downloading anything from the server. If it does not, the request goes out to the server.

In fact, the browser cache is just a local copy of resources, and its size is limited. When we make too many requests, the cache space fills up, and subsequent network requests force a decision: which cached data should be kept, and which should be evicted? This is the browser's cache eviction strategy. The most common eviction strategies are FIFO (first in, first out), LFU (least frequently used), and LRU (least recently used).

The LRU (Least Recently Used) cache eviction strategy evicts data based on its access history. The core idea: if data has been accessed recently, it is more likely to be accessed again in the future, so the data that has gone longest without being accessed is evicted first.

Let's walk through an example to help us understand: with a cache that holds 3 entries and the access sequence 1, 2, 3, 1, 4, inserting 4 evicts 2 rather than 1, because the second access to 1 refreshed it.
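To make the ordering concrete, here is a minimal sketch of the idea in plain JavaScript (a toy helper written for this article, not the browser's actual cache code):

```javascript
// A minimal LRU sketch: keep keys ordered from least to most recently
// used, and evict the front of the list when capacity is exceeded.
function simulateLRU(capacity, accesses) {
  const keys = []; // keys[0] is always the least recently used
  for (const key of accesses) {
    const i = keys.indexOf(key);
    if (i > -1) {
      keys.splice(i, 1);      // cache hit: remove from its old position...
    } else if (keys.length === capacity) {
      keys.shift();           // cache full: evict the least recently used
    }
    keys.push(key);           // ...and mark it as most recently used
  }
  return keys;
}

// Accessing 1, 2, 3 fills a capacity-3 cache; re-accessing 1 refreshes it,
// so inserting 4 evicts 2 (the least recently used), not 1.
console.log(simulateLRU(3, [1, 2, 3, 1, 4])); // [3, 1, 4]
```

Note how a hit and a miss both end with a `push`: every access, read or write, makes the key the freshest entry.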

2. LRU in keep-alive (Vue)

1. keep-alive

In Vue, keep-alive is used to cache components, so that switching away from a component does not unmount it.

```html
<keep-alive>
  <component :is="view"></component>
</keep-alive>
```

The two most commonly used props, include and exclude, conditionally cache components; each can be a comma-delimited string, a regular expression, or an array.

In version 2.5.0, keep-alive added the max prop, which specifies the maximum number of component instances that can be cached. Once this number is reached, the cached instance that has gone longest without being accessed is destroyed before a new one is created. This is the LRU algorithm at work: when the keep-alive cache reaches max, adding a new cache entry evicts the least recently accessed instance πŸŽ‰πŸŽ‰πŸŽ‰
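As a usage sketch (the component names Foo/Bar and the `view` binding are placeholders for this article, not from the Vue docs):

```html
<!-- Cache at most 10 instances; only components named Foo or Bar are cached.
     Foo, Bar, and the `view` binding are illustrative names. -->
<keep-alive include="Foo,Bar" max="10">
  <component :is="view"></component>
</keep-alive>
```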

Let's dig through the Vue source code to see the concrete implementation πŸ‘‡

2. The keep-alive implementation in the Vue source code

```js
export default {
  name: "keep-alive",
  // Abstract component: ignored when a component instance sets up
  // its parent-child relationship during initLifecycle
  abstract: true,

  props: {
    // Components to cache
    include: patternTypes,
    // Components not to cache
    exclude: patternTypes,
    // Maximum cache size
    max: [String, Number]
  },

  created() {
    // Initialize the cache object used to store cached vnodes
    this.cache = Object.create(null);
    // Initialize the array used to store the vnode keys
    this.keys = [];
  },

  destroyed() {
    for (const key in this.cache) {
      // Delete all cache entries
      pruneCacheEntry(this.cache, key, this.keys);
    }
  },

  mounted() {
    // Watch include/exclude for changes and readjust the cache accordingly.
    // pruneCache traverses the cache; if a cached node's name no longer
    // matches the rule, the node is removed from the cache
    this.$watch("include", val => {
      pruneCache(this, name => matches(val, name));
    });
    this.$watch("exclude", val => {
      pruneCache(this, name => !matches(val, name));
    });
  },

  render() {
    // Get the vnode of the first child component
    const slot = this.$slots.default;
    const vnode: VNode = getFirstComponentChild(slot);
    const componentOptions: ?VNodeComponentOptions = vnode && vnode.componentOptions;
    if (componentOptions) {
      // If the name is not in `include`, or is in `exclude`,
      // return the vnode directly; otherwise continue to the next step
      // check pattern
      const name: ?string = getComponentName(componentOptions);
      const { include, exclude } = this;
      if (
        // not included
        (include && (!name || !matches(include, name))) ||
        // excluded
        (exclude && name && matches(exclude, name))
      ) {
        return vnode;
      }

      const { cache, keys } = this;
      // Use the vnode's key if present, otherwise build one from the
      // constructor cid plus the component tag
      const key: ?string = vnode.key == null
        // same constructor may get registered as different local components
        // so cid alone is not enough (#3269)
        ? componentOptions.Ctor.cid +
          (componentOptions.tag ? `::${componentOptions.tag}` : "")
        : vnode.key;

      // --------------------------------------------------
      // The LRU algorithm:
      // on a cache hit, refresh the entry's position;
      // when the length exceeds max, evict the entry that has
      // gone longest without being accessed
      // --------------------------------------------------
      // If the cache is hit, take the component instance from the
      // cache and move the key to the end of the keys array
      if (cache[key]) {
        vnode.componentInstance = cache[key].componentInstance;
        // make current key freshest
        remove(keys, key);
        keys.push(key);
      }
      // If there is no cache hit, put the vnode into the cache
      else {
        cache[key] = vnode;
        keys.push(key);
        // prune oldest entry
        // If max is configured and the cache length exceeds this.max,
        // remove the first (least recently used) entry from the cache
        if (this.max && keys.length > parseInt(this.max)) {
          pruneCacheEntry(cache, keys[0], keys, this._vnode);
        }
      }

      // keepAlive flag bit
      vnode.data.keepAlive = true;
    }
    return vnode || (slot && slot[0]);
  }
};

// Remove a single cache entry by key
function pruneCacheEntry(
  cache: VNodeCache,
  key: string,
  keys: Array<string>,
  current?: VNode
) {
  const cached = cache[key];
  if (cached && (!current || cached.tag !== current.tag)) {
    cached.componentInstance.$destroy();
  }
  cache[key] = null;
  remove(keys, key);
}

// remove helper (shared/util.js)
/**
 * Remove an item from an array.
 */
export function remove(arr: Array<any>, item: any): Array<any> | void {
  if (arr.length) {
    const index = arr.indexOf(item);
    if (index > -1) {
      return arr.splice(index, 1);
    }
  }
}
```

The keep-alive source lives at src/core/components/keep-alive.js in the Vue repository.

When the keep-alive cache exceeds max, the eviction algorithm is the LRU algorithm. In the implementation, the cache object stores the cached component vnodes keyed by key, and the keys array records those keys in usage order. When keep-alive renders an instance that should be cached:

  • If the instance is already cached, it is reused directly, and its key is refreshed in keys (the key is removed from its current position in keys and pushed to the end of the array)
  • If it is not cached yet, the instance is added to the cache; if keys.length then exceeds max, the least recently used entry, keys[0], is evicted
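The cache + keys bookkeeping described above can be distilled into a few lines of plain JavaScript (a simplified sketch for this article, not the actual Vue code: real entries are vnodes, and pruneCacheEntry also destroys the component instance):

```javascript
// Simplified sketch of keep-alive's bookkeeping: `cache` maps key -> entry,
// `keys` orders the keys from least to most recently used.
function createKeepAliveCache(max) {
  const cache = Object.create(null);
  const keys = [];
  return {
    render(key, entry) {
      if (cache[key]) {
        // Cache hit: refresh the key to the freshest position and
        // reuse the cached entry
        keys.splice(keys.indexOf(key), 1);
        keys.push(key);
        return cache[key];
      }
      // Cache miss: store the entry, then evict keys[0] if over max
      cache[key] = entry;
      keys.push(key);
      if (max && keys.length > max) {
        const oldest = keys.shift();
        delete cache[oldest]; // Vue also calls $destroy() on the instance here
      }
      return entry;
    },
    keys: () => keys.slice()
  };
}
```

For example, with max = 2, rendering A, B, A, C ends with keys ['A', 'C']: B is evicted because the second render of A refreshed it.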

Now let's implement an LRU algorithm ourselves ⛽️⛽️

3. LeetCode: LRU caching mechanism

Using your knowledge of data structures, design and implement an LRU (least recently used) cache. It should support two operations: get to read data and put to write data.

get(key) – get the value of the key if it exists in the cache (the value is always positive); otherwise return -1. put(key, value) – write the data for the key. When the cache reaches its capacity, it should evict the least recently used entry before writing new data, to make room.

Follow-up:

Can you do both in O(1) time?

Example:

```js
LRUCache cache = new LRUCache( 2 /* cache capacity */ );

cache.put(1, 1);
cache.put(2, 2);
cache.get(1);       // returns 1
cache.put(3, 3);    // this operation evicts key 2
cache.get(2);       // returns -1 (not found)
cache.put(4, 4);    // this operation evicts key 1
cache.get(1);       // returns -1 (not found)
cache.get(3);       // returns 3
cache.get(4);       // returns 4
```
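As a starting point, here is one possible sketch in JavaScript (just one approach, not the only answer: the classic alternative is a doubly linked list plus a hash map). It leans on the fact that a Map iterates its keys in insertion order, so deleting a key and re-setting it moves it to the freshest position:

```javascript
class LRUCache {
  constructor(capacity) {
    this.capacity = capacity;
    this.map = new Map(); // Map iterates in insertion order: oldest key first
  }

  get(key) {
    if (!this.map.has(key)) return -1;
    // Refresh: delete and re-insert so this key becomes the newest
    const value = this.map.get(key);
    this.map.delete(key);
    this.map.set(key, value);
    return value;
  }

  put(key, value) {
    if (this.map.has(key)) {
      this.map.delete(key); // updating an existing key also counts as a use
    } else if (this.map.size >= this.capacity) {
      // Evict the least recently used key: the first key in iteration order
      this.map.delete(this.map.keys().next().value);
    }
    this.map.set(key, value);
  }
}
```

Every Map operation used here is O(1) on average, so both get and put meet the follow-up requirement.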

The keep-alive LRU implementation was covered above, so this problem should not feel hard now 😊😊😊. Try to solve it yourself first β›½, then think about how it could be optimized further! More solutions are welcome.

Please submit your answers at github.com/sisterAn/Ja… so that more people can see them; Bottle Jun will post his own solution tomorrow.

4. Previous articles in this series

Bottle Jun's front-end advanced algorithm camp: week one summary

Front-end advanced algorithm 2: JavaScript arrays from the Chrome V8 source code

Front-end advanced algorithm 1: How to analyze the execution efficiency and resource consumption of algorithms?

5. Meet more front-end friends and advance together

The first phase of the front-end algorithm training camp is free πŸŽ‰πŸŽ‰πŸŽ‰, completely free!

Here, you can advance your front-end algorithm skills alongside like-minded friends (600+) and build a complete data structure and algorithm system from 0 to 1.

Here, I not only introduce algorithms, but also combine them with various front-end areas, including browsers, HTTP, V8, React, the Vue source code, and so on.

Here, you can practice one algorithm problem every day (from big-company interviews at Alibaba, Tencent, Baidu, ByteDance, etc., or from LeetCode), and Bottle Jun will post a solution the next day!

More benefits waiting for you to unlock πŸ”“πŸ”“πŸ”“!

Scan the QR code to join the [front-end algorithm exchange group]. If that QR code has reached its member limit, scan the second QR code at the bottom and reply "algorithm" to the public account "front-end Bottle Jun" to be pulled into the group automatically.