LruCache is a memory-based caching framework provided by Android. LRU is an abbreviation for Least Recently Used. A block of memory is removed from the cache when it has not been used recently. In this article, we will briefly introduce the use of LruCache, and then we will analyze its source code.

1. Basic usage examples

First, let’s take a quick look at how to implement memory caching using LruCache. Here is an example use of LruCache.

Here we implement a full-page screenshot of a RecyclerView. We need to store the Bitmap of each list item, and once all item bitmaps are available, draw them in order at the right positions onto one complete Bitmap. Without LruCache we could, of course, simply put all the item bitmaps into a List. But that approach has a downside: a List holds strong references, so it can cause an OOM when memory runs low.

In the following method, we first take 1/8 of the maximum available memory as the cache size to initialize the LruCache object, then fetch each ViewHolder from the RecyclerView adapter, obtain its corresponding Bitmap, and put it into the LruCache as a key-value pair. Once all the item bitmaps have been collected, we create the final Bitmap and draw each cached Bitmap onto it in turn:

public static Bitmap shotRecyclerView(RecyclerView view) {
    RecyclerView.Adapter adapter = view.getAdapter();
    Bitmap bigBitmap = null;
    if (adapter != null) {
        int size = adapter.getItemCount();
        int height = 0;
        Paint paint = new Paint();
        int iHeight = 0;
        final int maxMemory = (int) (Runtime.getRuntime().maxMemory() / 1024);

        // Use 1/8 of memory as the cache space for the cache framework
        final int cacheSize = maxMemory / 8;
        LruCache<String, Bitmap> bitmaCache = new LruCache<>(cacheSize);
        for (int i = 0; i < size; i++) {
            RecyclerView.ViewHolder holder = adapter.createViewHolder(view, adapter.getItemViewType(i));
            adapter.onBindViewHolder(holder, i);
            holder.itemView.measure(
                    View.MeasureSpec.makeMeasureSpec(view.getWidth(), View.MeasureSpec.EXACTLY),
                    View.MeasureSpec.makeMeasureSpec(0, View.MeasureSpec.UNSPECIFIED));
            holder.itemView.layout(0, 0, holder.itemView.getMeasuredWidth(),
                    holder.itemView.getMeasuredHeight());
            holder.itemView.setDrawingCacheEnabled(true);
            holder.itemView.buildDrawingCache();
            Bitmap drawingCache = holder.itemView.getDrawingCache();
            if (drawingCache != null) {
                bitmaCache.put(String.valueOf(i), drawingCache);
            }
            height += holder.itemView.getMeasuredHeight();
        }

        bigBitmap = Bitmap.createBitmap(view.getMeasuredWidth(), height, Bitmap.Config.ARGB_8888);
        Canvas bigCanvas = new Canvas(bigBitmap);
        Drawable lBackground = view.getBackground();
        if (lBackground instanceof ColorDrawable) {
            ColorDrawable lColorDrawable = (ColorDrawable) lBackground;
            int lColor = lColorDrawable.getColor();
            bigCanvas.drawColor(lColor);
        }

        for (int i = 0; i < size; i++) {
            Bitmap bitmap = bitmaCache.get(String.valueOf(i));
            bigCanvas.drawBitmap(bitmap, 0f, iHeight, paint);
            iHeight += bitmap.getHeight();
            bitmap.recycle();
        }
    }
    return bigBitmap;
}

Therefore, we can summarize the basic usage of LruCache as follows:

First, you declare the size of the cache; here we use 1/8 of the runtime's maximum memory as the cache size:

LruCache<String, Bitmap> bitmaCache = new LruCache<>(cacheSize);

One issue you should be aware of, however, is the unit of the cache space. Since LruCache's key-value pairs can be of any type, it is up to you to tell it how large each value is. The LruCache API provides a sizeOf() method for calculating the size of an incoming value; we just need to override it when instantiating the LruCache. We will come back to this unit problem when analyzing the source code. In the example above we did not override it, so each Bitmap is simply counted as 1 unit, i.e. 1KB against our KB-denominated cache size.
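To make the unit question concrete, here is a minimal plain-Java sketch of a size-aware LRU cache. MiniLruCache and its members are our own illustrative names, not Android's API; the point is that whatever unit sizeOf() reports in must be the unit maxSize is expressed in.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch only; this is not Android's LruCache.
class MiniLruCache<K, V> {
    // accessOrder = true: accessed entries move to the tail of the internal list
    private final LinkedHashMap<K, V> map = new LinkedHashMap<>(16, 0.75f, true);
    private final int maxSize; // expressed in whatever unit sizeOf() reports
    private int size;

    MiniLruCache(int maxSize) { this.maxSize = maxSize; }

    // Default: every entry counts as 1 unit, so maxSize is an entry count.
    protected int sizeOf(K key, V value) { return 1; }

    void put(K key, V value) {
        map.put(key, value);
        size += sizeOf(key, value);
        trimToSize();
    }

    V get(K key) { return map.get(key); }

    int size() { return size; }

    private void trimToSize() {
        while (size > maxSize && !map.isEmpty()) {
            // The head of the access-ordered map is the least recently used entry
            Map.Entry<K, V> eldest = map.entrySet().iterator().next();
            map.remove(eldest.getKey());
            size -= sizeOf(eldest.getKey(), eldest.getValue());
        }
    }
}
```

With maxSize in bytes, override sizeOf() to return the value's byte length; with maxSize as an entry count, keep the default of 1. Mixing units silently breaks eviction.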

We can then call its put() and get() methods to insert and retrieve data from the cache just like with a normal Map:

bitmaCache.put(String.valueOf(i), drawingCache);
Bitmap bitmap = bitmaCache.get(String.valueOf(i));

2. LruCache source code analysis

2.1 Before analysis: What do we need to consider when implementing a LruCache ourselves

Before we analyze the source code of LruCache, let's think about what we would need to consider if we implemented an LruCache ourselves, so that we can read the source code with questions in mind.

Since we need to store data and retrieve it from the cache by a specified key, we need a hash table structure. Alternatively, we could use two arrays, one for keys and one for values, and map between them by index, but that would be less efficient than a hash table.

In addition, we need to keep the inserted elements ordered by how recently they were used, because we need to remove the elements used least recently. We can do this with a linked list: whenever an entry is used, we move it to the tail of the list. Then, when inserting a new element would exceed the maximum cache size, we remove elements from the head of the list, the least recently used end, to free up space in the cache.

Combining the two, we need a data structure that acts as both a hash table and a queue. The Java collections framework already provides one: LinkedHashMap.

In fact, LruCache in Android is implemented using LinkedHashMap. LinkedHashMap extends HashMap, and its source is not hard to read once you understand HashMap: it simply threads the HashMap's nodes onto a doubly linked list. When access ordering is enabled, each time an entry is accessed or inserted, it is moved to the tail of that list. Android's LruCache is built on top of LinkedHashMap, and its implementation has some neat details worth learning from.
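As a quick plain-Java illustration (not Android's class), a count-bounded LRU cache can be built from LinkedHashMap alone, combining its access ordering with the removeEldestEntry() hook; BoundedLruMap is a hypothetical name:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class BoundedLruMap {
    // Returns a map that keeps at most maxEntries entries, evicting the
    // least recently used one (the head of the list) when the bound is exceeded.
    static <K, V> LinkedHashMap<K, V> create(int maxEntries) {
        return new LinkedHashMap<K, V>(16, 0.75f, /* accessOrder = */ true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > maxEntries;
            }
        };
    }

    public static void main(String[] args) {
        LinkedHashMap<String, Integer> cache = create(2);
        cache.put("a", 1);
        cache.put("b", 2);
        cache.get("a");      // "a" becomes the most recently used entry
        cache.put("c", 3);   // bound exceeded: "b" (the eldest) is evicted
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

LruCache does not use removeEldestEntry(); it does its own size accounting in trimToSize(), because it evicts by total measured size rather than by entry count.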

2.2 LruCache source code analysis

From the above analysis, we know why LinkedHashMap was chosen as the underlying data structure. Let's examine some of LruCache's methods; the implementation of this class has many well-considered details worth learning from.

2.2.1 Maximum available cache space

In LruCache there are two fields, size and maxSize. maxSize is assigned in the LruCache constructor and indicates the maximum available space of the cache:

int cacheSize = 4 * 1024 * 1024; // 4MiB; here the unit is bytes, matching sizeOf() below
LruCache<String, Bitmap> bitmapCache = new LruCache<String, Bitmap>(cacheSize) {
    @Override
    protected int sizeOf(String key, Bitmap value) {
        return value.getByteCount();
    }
};

Here we set the cache size to 4MiB. The principle of LruCache is that once the total size of cached entries exceeds the specified limit, the "least recently used" entries are removed to make room for new ones. Since the type of the cached values is not fixed, it is up to the user to calculate the size of each inserted object.

In the code above, we use Bitmap's getByteCount() method to get the size of each Bitmap. Notice that we did not do this in the original example, so there each Bitmap was counted as 1 unit, i.e. 1KB against the KB-denominated cache size.

Here sizeOf() is a protected method that is clearly meant to be overridden by the user with their own measurement logic. It defaults to 1, and the value it returns must be in the same unit as the maxSize used to set the cache size:

protected int sizeOf(K key, V value) {
    return 1;
}

It should also be mentioned that although this method is left for the user to implement, the LruCache source never calls it directly; instead it calls safeSizeOf():

private int safeSizeOf(K key, V value) {
    int result = sizeOf(key, value);
    if (result < 0) {
        throw new IllegalStateException("Negative size: " + key + "=" + value);
    }
    return result;
}

So a check is added to guard against a bad override. This is a very thoughtful touch: if an illegal value were silently accepted, it would cause an unexpected error later that is hard to trace back to its source. When designing an API that lets users override a method, this validate-at-the-boundary design is worth copying.
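The same "validated hook" design can be sketched in isolation (the names below are ours, for illustration): the overridable method is protected, and framework code only ever calls it through a final wrapper that checks the result.

```java
// Sketch of the validated-hook pattern used by safeSizeOf().
abstract class Measured<V> {
    // Subclasses supply the measurement; the framework never calls this directly.
    protected abstract int sizeOf(V value);

    // All internal callers go through this guard instead.
    final int safeSizeOf(V value) {
        int result = sizeOf(value);
        if (result < 0) {
            // Fail fast at the source of the bad value, not later.
            throw new IllegalStateException("Negative size: " + value);
        }
        return result;
    }
}
```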

2.2.2 The get() method of LruCache

Let’s look at its get() method, which retrieves the value corresponding to the specified key from the LruCache:

/**
 * 1). Obtain the element for the specified key; if it does not exist, try to create one with create().
 * 2). When an element is returned, it is moved to the tail (most recently used end) of the list.
 * 3). If it does not exist in the cache and cannot be created, return null.
 */
public final V get(K key) {
    if (key == null) {
        throw new NullPointerException("key == null");
    }

    V mapValue;
    synchronized (this) {
        // If the entry exists, LinkedHashMap moves it to the tail of its list here
        mapValue = map.get(key);
        if (mapValue != null) {
            // Cache hit
            hitCount++;
            return mapValue;
        }
        // Cache miss, possibly because the key-value pair was removed
        missCount++;
    }

    // create() runs without the lock held, so by the time it finishes,
    // the key may already be occupied by another key-value pair
    V createdValue = create(key);
    if (createdValue == null) {
        return null;
    }

    synchronized (this) {
        createCount++;
        // put() returns the previous value for the key, or null if there was none
        mapValue = map.put(key, createdValue);
        if (mapValue != null) {
            // There was a conflict, so undo the insert we just did
            map.put(key, mapValue);
        } else {
            size += safeSizeOf(key, createdValue);
        }
    }

    if (mapValue != null) {
        entryRemoved(false, key, createdValue, mapValue);
        return mapValue;
    } else {
        trimToSize(maxSize);
        return createdValue;
    }
}

Here, the current instance is locked to ensure thread safety while the value is fetched. The create() method comes into play when the map's get() fails to find the key: the key-value pair may never have existed, or it may have been evicted when the cache was full, so LruCache gives the user this hook to handle that situation. By default create() returns null; if the user overrides it and it returns a non-null value, that value is inserted into the hash table.

The insertion is also done inside a synchronized block. This is because create() may take a long time and runs without the lock held: by the time we insert the created value, the key may already have been given a value by another thread. If map.put() returns non-null, the key already had a value, and our insert must be undone. Finally, when mapValue is non-null, entryRemoved() is also invoked; this method is called back every time a key-value pair is removed from the hash table.
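The undo step works because Map.put() returns the previous mapping for the key. Here is a single-threaded sketch of the same check (UndoPutDemo and putUnlessPresent are hypothetical names, not LruCache's API):

```java
import java.util.HashMap;
import java.util.Map;

class UndoPutDemo {
    // If the key was already mapped while we were "creating", keep the old value.
    static <K, V> V putUnlessPresent(Map<K, V> map, K key, V created) {
        V previous = map.put(key, created);
        if (previous != null) {
            map.put(key, previous); // undo: restore the value that won the race
            return previous;
        }
        return created;
    }

    public static void main(String[] args) {
        Map<String, String> map = new HashMap<>();
        map.put("k", "existing");
        String result = putUnlessPresent(map, "k", "created");
        System.out.println(result);        // existing
        System.out.println(map.get("k"));  // existing
    }
}
```

In general concurrent code, ConcurrentHashMap.putIfAbsent() does this atomically; LruCache instead re-takes its own lock and undoes the insert, because it also needs the put to go through LinkedHashMap's ordering logic.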

Finally, the trimToSize() method is called to ensure that the cache size does not exceed our specified value after the new value is inserted. The “least recently used” item is removed from the hash table when it is found that the cache in use exceeds the maximum cache size.

So how do you determine which is the “least recently used” item? Let’s look at the trimToSize() method definition:

public void trimToSize(int maxSize) {
    while (true) {
        K key;
        V value;
        synchronized (this) {
            if (size < 0 || (map.isEmpty() && size != 0)) {
                throw new IllegalStateException(getClass().getName()
                        + ".sizeOf() is reporting inconsistent results!");
            }

            if (size <= maxSize) {
                break;
            }

            // Get the least recently used entry to remove
            Map.Entry<K, V> toEvict = map.eldest();
            if (toEvict == null) {
                break;
            }

            key = toEvict.getKey();
            value = toEvict.getValue();
            map.remove(key);
            size -= safeSizeOf(key, value);
            evictionCount++;
        }

        entryRemoved(true, key, value, null);
    }
}

Obviously, the key here is LinkedHashMap's eldest() method, which returns:

public Map.Entry<K, V> eldest() {
    return head;
}

This is the head of the LinkedHashMap's linked list. So why remove the head — doesn't that violate the LRU principle? Actually, no: the magic happens in get(). LruCache's get() calls LinkedHashMap's get(), which in turn calls the following method whenever a value is retrieved:

void afterNodeAccess(Node<K,V> e) { // move node to last
    LinkedHashMapEntry<K,V> last;
    if (accessOrder && (last = tail) != e) {
        LinkedHashMapEntry<K,V> p = (LinkedHashMapEntry<K,V>)e, b = p.before, a = p.after;
        p.after = null;
        if (b == null)
            head = a;
        else
            b.after = a;
        if (a != null)
            a.before = b;
        else
            last = b;
        if (last == null)
            head = p;
        else {
            p.before = last;
            last.after = p;
        }
        tail = p;
        ++modCount;
    }
}

The logic here moves the node returned by get() to the tail of the doubly linked list. So naturally, the least recently used node ends up at the head.
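This access-order behaviour is easy to observe on a plain LinkedHashMap constructed with accessOrder set to true:

```java
import java.util.LinkedHashMap;

class AccessOrderDemo {
    public static void main(String[] args) {
        // The third constructor argument enables access ordering.
        LinkedHashMap<String, Integer> map = new LinkedHashMap<>(16, 0.75f, true);
        map.put("a", 1);
        map.put("b", 2);
        map.put("c", 3);
        System.out.println(map.keySet()); // [a, b, c]
        map.get("a"); // afterNodeAccess() moves "a" to the tail
        System.out.println(map.keySet()); // [b, c, a] — the head "b" is now the LRU entry
    }
}
```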

3. Summary

The above is a summary of the usage and source code of LruCache. We only analyzed the get() path here, because this method is the heart of LruCache: it contains both the insertion of created values and the moving of recently used entries. The put() and remove() methods essentially delegate to the corresponding LinkedHashMap methods (plus the same size bookkeeping), so we won't analyze them here.