First, overview

Android keeps apps on a tight memory budget: careless use of memory leads to OOM and the process being killed, so every app should allocate memory with care.

LRU (Least Recently Used) is a cache-eviction algorithm whose core idea is: when the cache is full, evict the least recently used entries first. Android offers two caches built on it: LruCache for memory caching and DiskLruCache for disk caching. Both share the same LRU core.

Since Android 3.1, LruCache has been part of the Android framework source. For compatibility with earlier releases, the support-v4 package also ships an LruCache; use the support-v4 version if your app must run on Android versions before 3.1, otherwise use the framework class directly. DiskLruCache, however, is not part of the Android source.

Second, hand-writing an LRU cache

Before reaching for the LruCache library, consider how to implement the LRU algorithm by hand; doing so makes LruCache's implementation much easier to follow. Any hand-written LRU in Java starts with LinkedHashMap.

2.1 LinkedHashMap

LinkedHashMap inherits from HashMap, and its operations are built on HashMap's. Unlike HashMap, LinkedHashMap additionally maintains a doubly linked list through its entries, which preserves the order in which entries were inserted (or accessed). The structure looks like this:

After inserting key1, key2, key3 and key4, a doubly linked list (the red line in the diagram) is maintained across the entries.

In short: LinkedHashMap = HashMap + doubly linked list.

LinkedHashMap's properties make it both convenient for implementing LRU and efficient for element access: every time an element is hit, it is moved to the tail of the linked list (removed, then re-added), so the head element is always the least recently used one. Once the cache is full, the head can be evicted. LinkedHashMap also exposes the accessOrder flag and the removeEldestEntry(eldest) hook for implementing LRU:

  • If accessOrder is true, accessed elements are moved to the tail of the list, so iteration order follows access order
  • removeEldestEntry(eldest) is meant to be overridden by the user; returning true removes the eldest (least recently used) element
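The effect of accessOrder is easy to see on a plain LinkedHashMap. A minimal demo (the class name AccessOrderDemo is my own, not from any library):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class AccessOrderDemo {
    static String order() {
        // accessOrder = true: iteration runs from least to most recently accessed
        Map<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("key1", 1);
        map.put("key2", 2);
        map.put("key3", 3);
        map.get("key1"); // hitting key1 moves it to the tail of the list
        return map.keySet().toString();
    }

    public static void main(String[] args) {
        // The head of the list is now the least recently used entry (key2)
        System.out.println(order()); // [key2, key3, key1]
    }
}
```

With accessOrder left at its default of false, the iteration order would have stayed [key1, key2, key3].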

2.2 get() source code

/**
 * After HashMap's getNode fetches the value, maintain the linked list.
 * @param key the key whose associated value is to be returned
 * @return the value, or null if the key is absent
 */
public V get(Object key) {
    Node<K,V> e;
    if ((e = getNode(hash(key), key)) == null)
        return null;
    if (accessOrder)
        afterNodeAccess(e);
    return e.value;
}

afterNodeAccess moves the accessed node to the tail of the list; whether it does anything at all is controlled by accessOrder.

// Use accessOrder to decide whether the list order needs adjusting after a node is accessed
void afterNodeAccess(Node<K,V> e) { // move node to last
    LinkedHashMap.Entry<K,V> last;
    // If accessOrder is false, do nothing
    if (accessOrder && (last = tail) != e) {
        // p points to the accessed node, b to its predecessor, a to its successor
        LinkedHashMap.Entry<K,V> p =
            (LinkedHashMap.Entry<K,V>)e, b = p.before, a = p.after;
        // Unlink p from the list
        p.after = null;
        if (b == null)
            head = a;
        else
            b.after = a;
        if (a != null)
            a.before = b;
        else
            last = b;
        // Put p at the tail
        if (last == null)
            head = p;
        else {
            p.before = last;
            last.after = p;
        }
        tail = p;
        // Bump modCount so concurrent iterators fail fast
        ++modCount;
    }
}

2.3 addEntry() source code

Note that this excerpt comes from an older JDK; since JDK 8, LinkedHashMap does the equivalent work in afterNodeInsertion.

void addEntry(int hash, K key, V value, int bucketIndex) {
    // Create a new Entry and insert it into the LinkedHashMap
    createEntry(hash, key, value, bucketIndex); // overrides createEntry in HashMap
    // The first valid node of the doubly linked list (the one after the header)
    // is the least recently used node, which is what supports the LRU algorithm
    Entry<K,V> eldest = header.after;
    // Remove the least recently used node if required; this depends on the
    // removeEldestEntry override, which returns false by default (do nothing).
    if (removeEldestEntry(eldest)) {
        removeEntryForKey(eldest.key);
    } else {
        // Otherwise grow the table if needed (double its size)
        if (size >= threshold)
            resize(2 * table.length);
    }
}

removeEldestEntry(eldest) controls whether the least recently used node is removed.

2.4 Implement LRU using LinkedHashMap

import java.util.LinkedHashMap;
import java.util.Map;

class LRUCache extends LinkedHashMap<Integer, Integer> {
    private final int capacity;

    public LRUCache(int capacity) {
        // Set accessOrder to true
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    public int get(int key) {
        return super.getOrDefault(key, -1);
    }

    public void put(int key, int value) {
        super.put(key, value);
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
        return size() > capacity;
    }
}
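A quick sanity check of the class above (repeated inside a demo class so the snippet compiles on its own): with capacity 2, touching key 1 before inserting key 3 makes key 2 the eviction victim.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCacheDemo {
    public static class LRUCache extends LinkedHashMap<Integer, Integer> {
        private final int capacity;

        public LRUCache(int capacity) {
            super(capacity, 0.75f, true); // accessOrder = true
            this.capacity = capacity;
        }

        public int get(int key) {
            return super.getOrDefault(key, -1);
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<Integer, Integer> eldest) {
            return size() > capacity; // evict once the capacity is exceeded
        }
    }

    public static void main(String[] args) {
        LRUCache cache = new LRUCache(2);
        cache.put(1, 1);
        cache.put(2, 2);
        cache.get(1);    // touch key 1, so key 2 becomes the eldest
        cache.put(3, 3); // over capacity: key 2 is evicted
        System.out.println(cache.get(2)); // -1, evicted
        System.out.println(cache.get(1)); // 1
    }
}
```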

Third, LruCache library

3.1 introduction

The LruCache library also uses a LinkedHashMap internally to manage its data:

public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;
    /** Size of this cache in units. Not necessarily the number of elements. */
    private int size;          // current size
    private int maxSize;       // maximum capacity
    private int putCount;      // number of puts
    private int createCount;   // number of creates
    private int evictionCount; // number of evictions
    private int hitCount;      // number of hits
    private int missCount;     // number of misses

    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        // accessOrder is true, i.e. entries are ordered by access
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }
}
  • Method that must be overridden:
    • protected int sizeOf(K key, V value): returns the size of each entry, which is how LruCache measures the total size of the cache

When sizeOf is overridden, maxSize is the maximum total of all entry sizes, not a maximum number of entries.

  • Methods that can be overridden:
    • protected void entryRemoved(boolean evicted, K key, V oldValue, V newValue): a hook called when an entry is removed, e.g. to release the resources it held
    • protected V create(K key): creates a value for a key on a cache miss
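On Android the classic sizeOf override is a bitmap cache that returns each bitmap's byte count, so maxSize becomes a byte budget. To keep the idea runnable off-device, here is a hypothetical plain-Java sketch (ByteLruCache is my own name, not Android's class) that accounts size in value bytes the same way:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: maxSize is a byte budget and sizeOf() measures each
// value, mirroring how an overridden sizeOf() drives android.util.LruCache.
public class ByteLruCache {
    private final LinkedHashMap<String, byte[]> map =
            new LinkedHashMap<>(0, 0.75f, true); // access order
    private final int maxSize; // budget in bytes
    private int size;          // current total in bytes

    public ByteLruCache(int maxSize) {
        this.maxSize = maxSize;
    }

    // The equivalent of overriding sizeOf(): size in bytes, not "1 per entry"
    protected int sizeOf(String key, byte[] value) {
        return value.length;
    }

    public synchronized void put(String key, byte[] value) {
        size += sizeOf(key, value);
        byte[] previous = map.put(key, value);
        if (previous != null) {
            size -= sizeOf(key, previous);
        }
        trimToSize(maxSize);
    }

    public synchronized byte[] get(String key) {
        return map.get(key);
    }

    public synchronized int size() {
        return size;
    }

    // Evict least recently used entries until the byte budget is respected
    private void trimToSize(int maxSize) {
        while (size > maxSize && !map.isEmpty()) {
            Map.Entry<String, byte[]> eldest = map.entrySet().iterator().next();
            map.remove(eldest.getKey());
            size -= sizeOf(eldest.getKey(), eldest.getValue());
        }
    }
}
```

With a 10-byte budget, putting two 6-byte values keeps only the second: the first is evicted and size() reports 6.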

3.2 Critical code analysis

3.2.1 PUT () Adds the cache

public final V put(K key, V value) {
    // Neither key nor value may be null
    if (key == null || value == null) {
        throw new NullPointerException("key == null || value == null");
    }
    // Holds the old value, if any
    V previous;
    // Synchronized on this: only one thread may mutate the cache at a time
    synchronized (this) {
        putCount++;
        // Add the new entry's size (computed via safeSizeOf) to the running total
        size += safeSizeOf(key, value);
        // Insert the pair into the LinkedHashMap; a non-null return means the
        // key already existed, and previous receives the old value
        previous = map.put(key, value);
        // If an old value existed, subtract its size from the total
        if (previous != null) {
            size -= safeSizeOf(key, previous);
        }
    }
    // If a value was displaced, call entryRemoved. It is an empty
    // implementation by default; override it to release resources if needed.
    if (previous != null) {
        entryRemoved(false, key, previous, value);
    }
    // The key step: check whether the current size still fits
    trimToSize(maxSize);
    // Return the old value
    return previous;
}

put() ends with a call to trimToSize(maxSize), the core method that checks whether the current size exceeds the configured maximum and, if it does, removes the least recently used elements.

3.2.2 trimToSize() Controls the cache capacity

public void trimToSize(int maxSize) {
    while (true) {
        K key;
        V value;
        // Thread-synchronized, like put()
        synchronized (this) {
            // Sanity check: a negative size, or a non-zero size with an
            // empty map, means sizeOf() is reporting inconsistent results
            if (size < 0 || (map.isEmpty() && size != 0)) {
                throw new IllegalStateException(getClass().getName()
                        + ".sizeOf() is reporting inconsistent results!");
            }
            // If the current size is within the limit, stop evicting
            if (size <= maxSize || map.isEmpty()) {
                break;
            }
            // map.entrySet().iterator().next() yields the head of the
            // LinkedHashMap, i.e. the least recently used entry
            Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
            key = toEvict.getKey();
            value = toEvict.getValue();
            // Remove it via LinkedHashMap's remove
            map.remove(key);
            // Recompute the current size
            size -= safeSizeOf(key, value);
            // Eviction count +1
            evictionCount++;
        }
        // Invoke entryRemoved() in case the user overrode it
        entryRemoved(true, key, value, null);
    }
}

3.2.3 remove() deletes from the cache

public final V remove(K key) {
    // Null keys are not allowed
    if (key == null) {
        throw new NullPointerException("key == null");
    }
    V previous;
    // Synchronized code block to ensure thread safety
    synchronized (this) {
        // Remove the entry and keep the old value in previous
        previous = map.remove(key);
        // If the key had a value, subtract its size
        if (previous != null) {
            size -= safeSizeOf(key, previous);
        }
    }
    // If the key had a value, invoke entryRemoved (useful when overridden)
    if (previous != null) {
        entryRemoved(false, key, previous, null);
    }
    return previous;
}

3.2.4 get() obtains from the cache

public final V get(K key) {
    // Null keys are not allowed
    if (key == null) {
        throw new NullPointerException("key == null");
    }
    V mapValue;
    // Synchronized code block to keep this instance thread-safe
    synchronized (this) {
        // Look the key up via LinkedHashMap's get
        mapValue = map.get(key);
        // Found: hit count +1, return the value
        if (mapValue != null) {
            hitCount++;
            return mapValue;
        }
        // Not found: miss count +1
        missCount++;
    }
    // On a miss, create() may supply a fallback value; it returns null
    // unless the user overrides it
    V createdValue = create(key);
    // Nothing created: return null
    if (createdValue == null) {
        return null;
    }
    synchronized (this) {
        createCount++;
        // Insert createdValue; mapValue receives any value that another
        // thread stored for this key in the meantime
        mapValue = map.put(key, createdValue);
        if (mapValue != null) {
            // There was a conflict, so undo that last put:
            // keep the original value rather than the newly created one
            map.put(key, mapValue);
        } else {
            // Account for the newly created value's size
            size += safeSizeOf(key, createdValue);
        }
    }
    // If the created value was displaced by the original one, report its
    // removal via entryRemoved and return the original value
    if (mapValue != null) {
        entryRemoved(false, key, createdValue, mapValue);
        return mapValue;
    } else {
        // Otherwise check whether old entries must be evicted
        trimToSize(maxSize);
        return createdValue;
    }
}
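The create-on-miss path can be isolated in a small sketch (a hypothetical CreatingCache in plain Java; it keeps the hit/miss/create counters but ignores size accounting and the conflict-undo step the real get() performs):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of LruCache's create() fallback: a subclass supplies
// create(), and get() stores and returns its result on a miss.
public class CreatingCache<K, V> {
    private final Map<K, V> map = new HashMap<>();
    public int hitCount, missCount, createCount;

    // Returns null unless overridden, just like LruCache.create()
    protected V create(K key) {
        return null;
    }

    public V get(K key) {
        V value = map.get(key);
        if (value != null) {
            hitCount++;
            return value;
        }
        missCount++;
        V created = create(key); // miss: ask the subclass for a value
        if (created != null) {
            createCount++;
            map.put(key, created);
        }
        return created;
    }

    public static void main(String[] args) {
        CreatingCache<String, String> cache = new CreatingCache<String, String>() {
            @Override
            protected String create(String key) {
                return "default-" + key; // synthesize a value on a miss
            }
        };
        System.out.println(cache.get("k")); // first call misses and creates
        System.out.println(cache.get("k")); // second call hits the cache
        System.out.println(cache.missCount + "/" + cache.hitCount); // 1/1
    }
}
```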

3.2.5 evictAll() clears all cached data

public final void evictAll() {
    trimToSize(-1); // -1 evicts even 0-sized elements
}

Since size can never be <= -1, the loop inside trimToSize only exits once map.isEmpty() is true:

if (size <= maxSize || map.isEmpty()) {
    break;
}

3.3 Summary

  • LruCache is a memory-level cache built on a Map (for disk caching, use DiskLruCache)
  • LruCache is generic, so it can be configured with values of any type
  • LruCache evicts data with the LRU algorithm
  • Data goes in and out of the cache through just the put() and get() methods