Last night I was reading a few articles on the Juejin (Nuggets) app when I stumbled on a familiar term: the LRU algorithm. Isn't that the "least recently used" thing people always mention? Lying in bed, I tried to recall what the LRU algorithm actually is, but like the articles I'd skimmed on the subway during the day, almost nothing stuck. It seems most of us are like this: the familiar parts we can half-remember, the unfamiliar parts we read and immediately forget. So rather than let it slip away again, I decided to actually understand it first.

The next day at the company, I figured it was time to look at the LruCache source and see what is really going on. So let's get into it. Students, wipe your glasses, haha.

First

Let's look at the source code first and then verify it with a concrete demo. We'll start with the structure and methods of the LruCache class, as shown in the figure below:

The get(K), put(K, V), and remove(K) methods make it look a lot like a map that stores keys and values.

public class LruCache<K, V> {
    private final LinkedHashMap<K, V> map;

    /** Size of this cache in units. Not necessarily the number of elements. */
    private int size;
    private int maxSize;

    private int putCount;
    private int createCount;
    private int evictionCount;
    private int hitCount;
    private int missCount;

    /**
     * @param maxSize for caches that do not override {@link #sizeOf}, this is
     *     the maximum number of entries in the cache. For all other caches,
     *     this is the maximum sum of the sizes of the entries in this cache.
     */
    public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

The LruCache class maintains a LinkedHashMap that holds our cached objects, and its constructor requires us to pass in a maxSize. According to the comment above, for caches that do not override sizeOf, this is the maximum number of entries the cache may hold.

What’s the use?

Following conventional thinking, we would guess that when a new object is put into the cache, some existing cached objects are removed from the map according to the maximum size we set, and then the new object is added.

Second

To verify that guess, let's look at the source code of the put method:

    /**
     * Caches {@code value} for {@code key}. The value is moved to the head of
     * the queue.
     *
     * @return the previous value mapped by {@code key}.
     */
    public final V put(K key, V value) {
        if (key == null || value == null) {
            throw new NullPointerException("key == null || value == null");
        }

        V previous;
        synchronized (this) {
            putCount++;
            size += safeSizeOf(key, value);
            previous = map.put(key, value);
            if (previous != null) {
                size -= safeSizeOf(key, previous);
            }
        }

        if (previous != null) {
            entryRemoved(false, key, previous, value);
        }

        trimToSize(maxSize);
        return previous;
    }

Inside the synchronized block, size is increased by the size of the object being put, and map.put is called on the map, which returns the previous value stored under that key. If previous is not null, the key was already in the cache, so the size that was just added is subtracted back out; the replaced value is then reported to entryRemoved.

Then trimToSize(maxSize) is called. What does trimToSize do?

The source code is as follows:

    /**
     * Remove the eldest entries until the total of remaining entries is at or
     * below the requested size.
     *
     * @param maxSize the maximum size of the cache before returning. May be -1
     *            to evict even 0-sized elements.
     */
    public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                // End the loop as soon as size <= maxSize
                if (size <= maxSize || map.isEmpty()) {
                    break;
                }

                // Get the eldest entry and remove it from the map,
                // shrinking size toward maxSize
                Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
                key = toEvict.getKey();
                value = toEvict.getValue();
                map.remove(key);
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

The javadoc says it all: "Remove the eldest entries until the total of remaining entries is at or below the requested size." In other words, it clears out the oldest cached objects until what remains fits within the configured size. That is exactly the behavior we guessed.

maxSize is the value we pass in through the constructor:

public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

trimToSize is the core method of LruCache.
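To see the whole put-then-trim flow in one place, here is a minimal sketch of the same idea in plain Java (no Android needed). MiniLruCache is a name I made up, and this is a simplified re-implementation for illustration, not the Android source: an access-ordered LinkedHashMap plus a trimToSize-style loop that evicts the eldest entry until the cache fits maxSize.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class MiniLruCache<K, V> {
    // accessOrder = true: get/put move the touched entry to the tail
    private final LinkedHashMap<K, V> map = new LinkedHashMap<>(0, 0.75f, true);
    private int size;           // entry count (as if sizeOf returned 1)
    private final int maxSize;

    public MiniLruCache(int maxSize) {
        if (maxSize <= 0) throw new IllegalArgumentException("maxSize <= 0");
        this.maxSize = maxSize;
    }

    public V get(K key) {
        return map.get(key);    // the access re-orders the linked list
    }

    public V put(K key, V value) {
        size++;
        V previous = map.put(key, value);
        if (previous != null) size--;   // replaced an entry, not added one
        trimToSize(maxSize);
        return previous;
    }

    public void trimToSize(int maxSize) {
        while (size > maxSize && !map.isEmpty()) {
            // head of the iteration order == least recently used entry
            Map.Entry<K, V> eldest = map.entrySet().iterator().next();
            map.remove(eldest.getKey());
            size--;
        }
    }

    public boolean contains(K key) { return map.containsKey(key); }

    public static void main(String[] args) {
        MiniLruCache<String, Integer> cache = new MiniLruCache<>(2);
        cache.put("1", 1);
        cache.put("2", 2);
        cache.put("3", 3);      // evicts "1", the eldest entry
        System.out.println(cache.contains("1")); // false
    }
}
```

With maxSize 2, putting a third entry evicts the first one, mirroring what we will see in the demo below.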

Set the scene

Let's say we set maxSize to 2 and display 3 ImageViews in the layout, one for each of the 3 images we want to show. We'll add all 3 bitmaps to the cache and see whether all 3 images can be displayed.

The XML layout looks like this (I won't paste the code, it's simple):

The activity code is as follows:

public final int MAX_SIZE = 2;
    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.layout_lru);

        ImageView iv1 = (ImageView) findViewById(R.id.iv1);
        ImageView iv2 = (ImageView) findViewById(R.id.iv2);
        ImageView iv3 = (ImageView) findViewById(R.id.iv3);

        Bitmap bitmap1 = BitmapFactory.decodeResource(getResources(),R.drawable.bg);
        Bitmap bitmap2 = BitmapFactory.decodeResource(getResources(),R.drawable.header_img);
        Bitmap bitmap3 = BitmapFactory.decodeResource(getResources(),R.drawable.ic_launcher);

        LruCache<String,Bitmap> lruCache = new LruCache<>(MAX_SIZE);
        lruCache.put("1",bitmap1);
        lruCache.put("2",bitmap2);
        lruCache.put("3",bitmap3);

        Bitmap bitmap = lruCache.get("1");
        iv1.setImageBitmap(bitmap);

        Bitmap b2 = lruCache.get("2");
        iv2.setImageBitmap(b2);

        Bitmap b3 = lruCache.get("3");
        iv3.setImageBitmap(b3);
    }

Graph:

When we put the third Bitmap, size becomes 3 while maxSize is 2, so trimToSize(maxSize) kicks in:

    public void trimToSize(int maxSize) {
        while (true) {
            K key;
            V value;
            synchronized (this) {
                if (size < 0 || (map.isEmpty() && size != 0)) {
                    throw new IllegalStateException(getClass().getName()
                            + ".sizeOf() is reporting inconsistent results!");
                }

                // First iteration: size is 3 and maxSize is 2, so keep going
                // Second iteration: size is 2 and maxSize is 2, the condition
                // holds and we break out of the loop
                if (size <= maxSize || map.isEmpty()) {
                    break;
                }

                // Get the element that was added first
                Map.Entry<K, V> toEvict = map.entrySet().iterator().next();
                key = toEvict.getKey();
                value = toEvict.getValue();
                // Remove that eldest cache object
                map.remove(key);
                // size becomes 2 after subtracting the removed element
                size -= safeSizeOf(key, value);
                evictionCount++;
            }

            entryRemoved(true, key, value, null);
        }
    }

safeSizeOf simply calls sizeOf, which returns 1 by default, so by default size counts entries. Subclasses can override sizeOf (for example, to return a bitmap's byte count) so that maxSize becomes a memory budget instead of an entry count.
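Here is a small sketch of what size-based (rather than count-based) eviction looks like. This is a hypothetical, simplified cache of my own, not the Android class: the "size" of each value is its string length, playing the role of an overridden sizeOf, and maxSize is a character budget.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class CharBudgetCache {
    private final LinkedHashMap<String, String> map = new LinkedHashMap<>(0, 0.75f, true);
    private int size;            // sum of sizeOf over all entries
    private final int maxSize;   // budget in characters, not entries

    public CharBudgetCache(int maxSize) { this.maxSize = maxSize; }

    // plays the role of an overridden LruCache sizeOf(key, value)
    private int sizeOf(String value) { return value.length(); }

    public void put(String key, String value) {
        size += sizeOf(value);
        String previous = map.put(key, value);
        if (previous != null) size -= sizeOf(previous);
        // trimToSize-style loop: evict eldest entries until within budget
        while (size > maxSize && !map.isEmpty()) {
            Map.Entry<String, String> eldest = map.entrySet().iterator().next();
            map.remove(eldest.getKey());
            size -= sizeOf(eldest.getValue());
        }
    }

    public boolean contains(String key) { return map.containsKey(key); }
    public int currentSize() { return size; }
}
```

With a budget of 10 characters, two 5-character values fill the cache, and putting anything more evicts the eldest entry even though the entry count is small. This is exactly how a bitmap cache with sizeOf returning bytes behaves.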

In other words, when we put the third bitmap, LruCache automatically removes the first cached object, because it was added first and has gone unused the longest, while the later bitmaps are the newest and most recently used. So we can predict that iv1 will not display its image, because its bitmap was evicted, while the other two will display fine. Let's see if the result matches our analysis:

Wow! It's exactly what we predicted, which confirms our analysis. Now, why does LruCache use a LinkedHashMap, and why is the LinkedHashMap created differently from the way we usually create one? Here's the source again:

public LruCache(int maxSize) {
        if (maxSize <= 0) {
            throw new IllegalArgumentException("maxSize <= 0");
        }
        this.maxSize = maxSize;
        this.map = new LinkedHashMap<K, V>(0, 0.75f, true);
    }

In new LinkedHashMap<K, V>(0, 0.75f, true), the last parameter is the crucial one: true means entries are ordered by access order, false means insertion order. With access order enabled, any operation on an entry (put or get) moves that entry to the tail of the map.
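To see the effect of that flag in plain Java, here is a small runnable sketch comparing the two modes. AccessOrderDemo and keyOrder are names I made up for this illustration:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AccessOrderDemo {
    // Returns the key iteration order after put 1, put 2, get 1, put 3
    public static List<String> keyOrder(boolean accessOrder) {
        Map<String, Integer> map = new LinkedHashMap<>(0, 0.75f, accessOrder);
        map.put("1", 1);
        map.put("2", 2);
        map.get("1");      // touch "1"
        map.put("3", 3);
        return new ArrayList<>(map.keySet());
    }

    public static void main(String[] args) {
        // insertion order: get() does not reorder anything
        System.out.println(keyOrder(false)); // [1, 2, 3]
        // access order: get("1") moved "1" to the tail, then "3" was appended
        System.out.println(keyOrder(true));  // [2, 1, 3]
    }
}
```

With accessOrder set to true, "1" is no longer the eldest entry after the get, which is exactly what LruCache relies on.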

Indeed, this is true. Let’s look at the source code of LinkedHashMap:

/**
     * Constructs an empty <tt>LinkedHashMap</tt> instance with the
     * specified initial capacity, load factor and ordering mode.
     *
     * @param  initialCapacity the initial capacity
     * @param  loadFactor      the load factor
     * @param  accessOrder     the ordering mode - <tt>true</tt> for
     *         access-order, <tt>false</tt> for insertion-order
     * @throws IllegalArgumentException if the initial capacity is negative
     *         or the load factor is nonpositive
     */
    public LinkedHashMap(int initialCapacity,
                         float loadFactor,
                         boolean accessOrder) {
        super(initialCapacity, loadFactor);
        this.accessOrder = accessOrder;
    }

The accessOrder javadoc is pretty clear: "the ordering mode - true for access-order, false for insertion-order". The default is false. Every time we call get(K) or put(K, V), the map adjusts the element's position in the linked list according to this flag, and the sole purpose is to keep hold of the most recently used cache objects. Here's an example:

We added three elements to this set

LruCache<String, Bitmap> lruCache = new LruCache<>(MAX_SIZE); // MAX_SIZE = 2
lruCache.put("1", bitmap1);
lruCache.put("2", bitmap2);
lruCache.put("3", bitmap3);

Now their order in the set looks like this:

Let's say we use element 1 before putting element 3, i.e. we call get("1"). We know LinkedHashMap will then change the order of the entries in its linked list. The code looks like this:

        lruCache.put("1", bitmap1);
        lruCache.put("2", bitmap2);
        lruCache.get("1");
        lruCache.put("3", bitmap3);
Then the corresponding order in the linked list is:

After put("3", bitmap3), the cache exceeds maxSize, "2" is now the eldest entry and gets evicted, so the map ends up like this:

All that's left in the map are the cached objects for keys 1 and 3.
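That scenario can be reproduced in plain Java with an access-ordered LinkedHashMap. EvictionDemo is my own sketch (using removeEldestEntry to mimic trimToSize, which is not how the real LruCache does it), with strings standing in for the bitmaps:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Set;

public class EvictionDemo {
    public static Set<String> run() {
        final int maxSize = 2;
        // Access-ordered map that drops its eldest entry once it grows
        // past maxSize, mimicking LruCache's trimToSize behavior
        Map<String, String> cache = new LinkedHashMap<String, String>(0, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > maxSize;
            }
        };

        cache.put("1", "bitmap1");
        cache.put("2", "bitmap2");
        cache.get("1");            // touch "1": "2" is now least recently used
        cache.put("3", "bitmap3"); // exceeds maxSize, so "2" is evicted

        return cache.keySet();
    }

    public static void main(String[] args) {
        System.out.println(run()); // [1, 3]
    }
}
```

Because get("1") moved "1" to the tail before the third put, it is "2" that gets evicted, matching the analysis above.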

That wraps up LruCache. If anything is unclear, feel free to leave a comment and let's discuss it together ~