Origins in the operating system

The cache replacement mechanism

Many ideas in modern software have their original prototypes in the operating system, and LRU is one of them: its original design can be found in OS memory management.

"Caching is the only important idea in computer science" -Bill JoyCopy the code

Whether it is main memory, a hard disk, or a cache inside one of our applications, capacity is fixed, so space eventually runs out and cached entries have to be replaced (swapped out). The policy that decides which entry to replace is called the cache replacement mechanism.

Today's topic is the least recently used (LRU) algorithm: the entry that has gone unaccessed for the longest time is chosen as the replacement victim.

Page replacement algorithm

In an operating system, memory can be logically extended with overlays or swapping. When the OS uses basic paged memory management, it can add demand paging and page replacement on top of the paging mechanism, and thereby implement virtual memory. The algorithm that chooses which page to swap out is called the page replacement algorithm, and LRU is one of them.

The least recently used (LRU) page replacement algorithm evicts the page that has not been accessed for the longest time, on the assumption that a page unused in the recent past is unlikely to be used in the near future. The algorithm keeps an access field for each page, recording the time elapsed since that page was last accessed, and on eviction it selects the resident page with the largest value.

The reference string is 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1. With three physical blocks, each column below shows memory after one of the 12 page faults (accesses that hit in memory are not shown):

Faulting page      7  0  1  2  3  4  2  3  0  1  0  7
Physical block 1   7  7  7  2  2  4  4  4  0  1  1  1
Physical block 2      0  0  0  0  0  0  3  3  3  0  0
Physical block 3         1  1  3  3  2  2  2  2  2  7

As the table shows, when the process first accesses page 2, the least recently used page 7 is swapped out; when page 3 is later accessed, the least recently used page 1 is swapped out.
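The table above can be reproduced with a short simulation. This is my own sketch (the class and method names are not from any library): an access-ordered LinkedHashMap tracks recency, and we count the faults for the same reference string with three frames:

```java
import java.util.LinkedHashMap;

public class LruPageSim {
    // Count page faults for a reference string with `frames` physical blocks,
    // using an access-ordered LinkedHashMap as the LRU bookkeeping structure.
    static int pageFaults(int[] refs, int frames) {
        LinkedHashMap<Integer, Boolean> memory =
                new LinkedHashMap<>(16, 0.75f, true); // accessOrder = true
        int faults = 0;
        for (int page : refs) {
            if (memory.containsKey(page)) {
                memory.get(page);          // touch: moves page to the tail (most recent)
                continue;
            }
            faults++;
            if (memory.size() == frames) { // head of the map = least recently used
                Integer lru = memory.keySet().iterator().next();
                memory.remove(lru);
            }
            memory.put(page, Boolean.TRUE);
        }
        return faults;
    }

    public static void main(String[] args) {
        int[] refs = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1};
        System.out.println(pageFaults(refs, 3)); // 12, matching the table
    }
}
```

Running it prints 12, the same fault count as the table.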

LruCache algorithm

The LRU algorithm is widely used as the eviction strategy for caches.

LruCache is implemented on top of LinkedHashMap. LinkedHashMap extends HashMap and uses a doubly linked list to maintain the order of the map's entries. For get(), put(), remove(), and so on, LinkedHashMap does everything HashMap does and additionally adjusts the linked list of entries.

LruCache implements an LRU cache by constructing the LinkedHashMap in access order: every call to get() (for example, fetching an image from the memory cache) moves the entry to the tail of the list, and put() also places new entries at the tail. When the cache reaches its maximum size, entries at the head of the list (the least recently used) are evicted.
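The access-order behavior described above can be seen directly with a plain LinkedHashMap (a minimal, self-contained sketch; the class name is mine):

```java
import java.util.LinkedHashMap;

public class AccessOrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: iteration runs from least to most recently accessed
        LinkedHashMap<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        map.get("a"); // "a" moves to the tail of the internal linked list
        System.out.println(map.keySet()); // [b, c, a]
    }
}
```

After the get(), "a" is the most recently used entry, so "b" now sits at the head and would be the first eviction candidate.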

The Android LruCache

LruCache was introduced in Android 3.1 (API 12); on lower versions a compatible implementation is available in the support-v4 package. When the cache exceeds its limit, LruCache evicts the values at the head of the queue, making them eligible for GC. Used as a memory cache, it performs well at avoiding OOM while keeping lookups efficient.

Take a look at the LruCache in the androidx.collection package:

public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;

    private int size;           // current cache size
    private int maxSize;        // maximum cacheable size

    private int putCount;
    private int createCount;
    private int evictionCount;
    private int hitCount;       // number of cache hits
    private int missCount;      // number of cache misses

    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        // initial capacity 0, load factor 0.75, accessOrder = true
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

    // ...
}
    

The maxSize parameter passed to the LruCache constructor is the maximum capacity of the cache. With the default sizeOf() (which counts each entry as 1), maxSize is simply the maximum number of key-value pairs the internal LinkedHashMap may hold.

public final V put(@NonNull K key, @NonNull V value) {
    if (key == null || value == null) {
        throw new NullPointerException("key == null || value == null");
    }

    V previous;
    synchronized (this) {
        putCount++;
        size += safeSizeOf(key, value);
        previous = map.put(key, value);
        if (previous != null) {
            size -= safeSizeOf(key, previous);
        }
    }

    if (previous != null) {
        entryRemoved(false, key, previous, value);
    }

    trimToSize(maxSize);
    return previous;
}

The put() method of LruCache first checks that neither the key nor the value is null.

Before the element is inserted, safeSizeOf() is called; it delegates to sizeOf(), which returns 1 by default and is usually overridden. For example, if the values stored in the LruCache are files, sizeOf() should return the size of the file associated with the key (and no single entry may be larger than maxSize). The new entry's size is added to the running total, and the key-value pair is inserted into the linked map (this is the step previous = map.put(key, value)).
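To make the size accounting concrete, here is a hypothetical plain-Java sketch (the SizedCache class and its character-budget sizeOf() are my own invention, mirroring the roles of safeSizeOf() and trimToSize() described in this article):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SizedCache {
    // Access-ordered map, as in LruCache's constructor.
    private final LinkedHashMap<String, String> map = new LinkedHashMap<>(0, 0.75f, true);
    private final int maxSize; // budget in "cost units" (here: characters)
    private int size;          // current total cost

    SizedCache(int maxSize) { this.maxSize = maxSize; }

    // Plays the role of an overridden sizeOf(): the cost of one entry.
    private int sizeOf(String key, String value) { return value.length(); }

    synchronized void put(String key, String value) {
        size += sizeOf(key, value);
        String previous = map.put(key, value);
        if (previous != null) {
            size -= sizeOf(key, previous); // key existed: refund the old cost
        }
        trimToSize();
    }

    // Evict least recently used entries until we are back under budget.
    private void trimToSize() {
        while (size > maxSize && !map.isEmpty()) {
            Map.Entry<String, String> eldest = map.entrySet().iterator().next();
            map.remove(eldest.getKey());
            size -= sizeOf(eldest.getKey(), eldest.getValue());
        }
    }

    public static void main(String[] args) {
        SizedCache cache = new SizedCache(10);
        cache.put("a", "12345");   // size = 5
        cache.put("b", "12345");   // size = 10
        cache.put("c", "123");     // size = 13 > 10, so "a" is evicted; size = 8
        System.out.println(cache.map.keySet() + " size=" + cache.size);
    }
}
```

Note how inserting "c" pushes the total past the budget, so the least recently used entry "a" is evicted and its cost refunded.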

The step if (previous != null) { size -= safeSizeOf(key, previous); } handles the case where the key already existed in the map: map.put() has replaced the old value with the new one, so the size occupied by the previous value must be subtracted from the total.

To keep size accurate when multiple threads use the cache, the operations above run inside synchronized (this).

If an old value was replaced, LruCache reports it through entryRemoved(). This hook is called whenever a value is evicted or replaced; its default implementation is empty.

The trimToSize() method ensures that the cache does not overflow:

public void trimToSize(int maxSize) {
    while (true) {
        K key;
        V value;
        synchronized (this) {
            if (size < 0 || (map.isEmpty() && size != 0)) {
                throw new IllegalStateException(getClass().getName()
                        + ".sizeOf() is reporting inconsistent results!");
            }

            if (size <= maxSize || map.isEmpty()) {
                break;
            }

            Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
            key = toEvict.getKey();
            value = toEvict.getValue();
            map.remove(key);
            size -= safeSizeOf(key, value);
            evictionCount++;
        }

        entryRemoved(true, key, value, null);
    }
}

The trimToSize() method evicts entries until the cache size is no larger than the maximum capacity. It is called after every insertion and loops until size <= maxSize (or the map is empty); on each iteration it removes the head of the LinkedHashMap's entry set, which in access-order mode is the least recently used entry, and updates size. The LRU behavior of the cache rests entirely on the LinkedHashMap's access-order feature.
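The put()/trimToSize() mechanics can be condensed into a few lines of plain Java. This is not the Android implementation, just a count-based sketch of the same idea: a LinkedHashMap in access-order mode, with removeEldestEntry() standing in for trimToSize():

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal count-based LRU cache (a sketch; the class name is mine, not Android's).
public class MiniLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    public MiniLruCache(int maxSize) {
        super(0, 0.75f, true); // accessOrder = true, as in LruCache's constructor
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called by LinkedHashMap after every put(): evict the head when over capacity.
        return size() > maxSize;
    }

    public static void main(String[] args) {
        MiniLruCache<String, String> cache = new MiniLruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");        // touch "a", so "b" becomes least recently used
        cache.put("c", "3");   // exceeds capacity, evicting "b"
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Because "a" was touched by get(), the eviction falls on "b", exactly the head-of-list behavior described above.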

The get() method for the value:

public final V get(@NonNull K key) {
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    V mapValue;
    synchronized (this) {
        mapValue = map.get(key);
        if (mapValue != null) {
            hitCount++;
            return mapValue;
        }
        missCount++;
    }

    V createdValue = create(key);
    if (createdValue == null) {
        return null;
    }

    synchronized (this) {
        createCount++;
        mapValue = map.put(key, createdValue);

        if (mapValue != null) {
            // There was a conflict so undo that last put
            map.put(key, mapValue);
        } else {
            size += safeSizeOf(key, createdValue);
        }
    }

    if (mapValue != null) {
        entryRemoved(false, key, createdValue, mapValue);
        return mapValue;
    } else {
        trimToSize(maxSize);
        return createdValue;
    }
}

The get() method also rejects a null key. If the map contains a value for the key, that value is returned and the hit count is incremented; because the map is in access-order mode, the lookup also moves the entry to the tail of the list. Otherwise the miss count is incremented and V createdValue = create(key) is called to try to produce a value for that key; create() returns null by default and must be overridden to do anything useful.

Conclusion

LruCache appears as a basic caching strategy in many places. Android has shipped LruCache since Android 3.1, and many open source frameworks, such as Glide's image cache, also build on it.
