It’s always the attention to detail and the little grace notes that really make something sing. A focus on the details is what separates success from failure.

This article shares two implementations of the LRU cache eviction algorithm. What matters is not the implementation itself, but the idea behind it!

All source code has been uploaded to Github: link

Definition

The LRU (Least Recently Used) policy, as its name suggests, evicts data based on its access history. The idea is: “If data has been accessed recently, it has a higher chance of being accessed in the future; data that has not been used for a long time is less likely to be used in the future; when memory usage reaches a certain threshold, the least recently used data is removed.”

For example, suppose I have a bookcase that holds 10 books (the memory). Two of them are my favorites, which I read often. As I buy more books, the bookcase gradually fills up. At that point I apply the LRU idea: the books I rarely read are taken out of the bookcase and put into a box, making room for the new books.

Redis, for example, is memory-based, but memory is not infinite. When memory usage reaches a threshold, it can apply a caching algorithm such as LRU to evict data, or persist data to disk.
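To make the policy concrete before diving into the hand-rolled implementations, here is a minimal sketch (not from the article’s repository; the class name `SimpleLruCache` is made up for illustration) built on `java.util.LinkedHashMap`, whose access-order mode implements exactly this eviction rule:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal LRU cache on top of LinkedHashMap. With accessOrder=true,
// iteration order follows recency of access, so the "eldest" entry
// is always the least recently used one.
class SimpleLruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    SimpleLruCache(int capacity) {
        super(16, 0.75f, true);   // true => access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity; // evict the least recently used entry
    }
}
```

For instance, with capacity 2: after `put(1)`, `put(2)`, `get(1)`, `put(3)`, key 2 is the least recently used and gets evicted, while keys 1 and 3 remain.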

The specific implementation code is as follows:

Based on a linked list

The constructor initializes the linked list and records the given capacity

    private LRUByLinkedList(int capacity) {
        head = null;
        size = capacity;
        count = 0;
    }

Simulate an LRU access: if the data is not already in the list, insert it at the head

Note: The if-else branches cannot be swapped; otherwise, the case where the list is full and the data already exists cannot be handled correctly.

    private void insert(int data) {
        Node preNode = findNode(data);
        if (null != preNode) {
            delete(preNode);
        } else {
            if (count >= size) { // list is full
                deleteToTail();
            }
        }
        insertToHead(data);
    }

Delete the specified data: a regular linked-list deletion, given the node preceding the target

    private void delete(Node preNode) {
        System.out.println("Delete specified element: " + preNode.next.data);
        preNode.next = preNode.next.next;
        --count;
    }

Delete data at the end of the list

    private void deleteToTail() {
        Node node = head;
        Node curNode = null;
        while (null != node.next) {
            curNode = node;
            node = node.next;
        }
        if (null != curNode) {
            System.out.println("List is full, delete the trailing element: " + curNode.next.data);
            curNode.next = null;
        } else {
            head = null; // capacity of 1: the head itself is the tail
        }
        --count;
    }

Insert data at the head of the list

    private void insertToHead(int data) {
        Node node = new Node(data, head); // the new node points at the old head
        head = node;
        ++count;
    }

The test results are as follows

  1. First, fill the list to capacity
  2. When the (capacity+1)-th element is inserted, the tail element is deleted
  3. When inserted data already exists, it is removed from its current position and re-inserted at the head
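The steps above can be exercised end to end with a compact, self-contained sketch of the linked-list approach (names simplified; `findNode`, which the article does not show, is assumed here to return the node *preceding* the match, as `delete(preNode)` requires, and a `toList` helper is added just for inspection):

```java
import java.util.ArrayList;
import java.util.List;

// Self-contained sketch of the linked-list LRU described above.
class LruByLinkedList {
    private static class Node {
        int data;
        Node next;
        Node(int data, Node next) { this.data = data; this.next = next; }
    }

    private Node head;
    private final int size; // capacity
    private int count;

    LruByLinkedList(int capacity) { this.size = capacity; }

    void insert(int data) {
        if (head != null && head.data == data) return; // already most recent
        Node preNode = findNode(data);
        if (preNode != null) {
            preNode.next = preNode.next.next; // unlink the existing node
            --count;
        } else if (count >= size) {
            deleteToTail();                   // evict least recently used
        }
        head = new Node(data, head);          // insert at the head
        ++count;
    }

    // Returns the node preceding the one holding `data`, or null if absent.
    private Node findNode(int data) {
        for (Node n = head; n != null && n.next != null; n = n.next) {
            if (n.next.data == data) return n;
        }
        return null;
    }

    private void deleteToTail() {
        if (head == null) return;
        if (head.next == null) { head = null; --count; return; }
        Node n = head;
        while (n.next.next != null) n = n.next;
        n.next = null;
        --count;
    }

    List<Integer> toList() { // head first, i.e. most recent first
        List<Integer> out = new ArrayList<>();
        for (Node n = head; n != null; n = n.next) out.add(n.data);
        return out;
    }
}
```

With capacity 3, inserting 1, 2, 3 fills the list (order 3, 2, 1); inserting 4 evicts the tail (order 4, 3, 2); re-inserting 3 moves it to the head (order 3, 4, 2) — exactly the three test steps above.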



Based on an array

The array implementation is simpler than the linked-list one, so it is not elaborated here; the code follows directly.

    private LRUByArray(int capacity) {
        arrays = new int[capacity];
        size = capacity;
        count = 0;
    }

    private void insert(int data) {
        int index = findValue(data); // assumed to return -1 when not found
        if (-1 != index) {
            delete(index);
        } else {
            if (count >= size) { // array is full
                deleteToTail();
            }
        }
        insertToHead(data);
    }

    private void delete(int index) { // delete by shifting later elements left
        for (int i = index + 1; i < count; i++) {
            arrays[i - 1] = arrays[i];
        }
        System.out.println("Delete element...");
        --count;
    }

    private void deleteToTail() {
        --count;
        System.out.println("Delete tail element...");
    }

    private void insertToHead(int data) {
        if (count > 0) { // shift existing elements right to make room at index 0
            for (int i = count - 1; i > -1; --i) {
                arrays[i + 1] = arrays[i];
            }
        }
        arrays[0] = data;
        ++count;
    }

The test results are as follows



Note: An array is not a good fit here, because a large number of frequent accesses leads to frequent data shifting. Consider pairing it with a HashMap to avoid that frequent data migration.
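One common way to get rid of the shifting entirely (a sketch, not from the article’s repository) combines a HashMap with a doubly linked list, so lookup, move-to-head, and eviction are all O(1):

```java
import java.util.HashMap;
import java.util.Map;

// O(1) LRU cache: a HashMap for key lookup plus a doubly linked list
// for recency order. Dummy head/tail nodes avoid edge cases.
class LruCache {
    private static class Node {
        int key, value;
        Node prev, next;
        Node(int key, int value) { this.key = key; this.value = value; }
    }

    private final int capacity;
    private final Map<Integer, Node> map = new HashMap<>();
    private final Node head = new Node(0, 0); // dummy: most recent side
    private final Node tail = new Node(0, 0); // dummy: least recent side

    LruCache(int capacity) {
        this.capacity = capacity;
        head.next = tail;
        tail.prev = head;
    }

    int get(int key) {
        Node node = map.get(key);
        if (node == null) return -1;
        unlink(node);
        linkAfterHead(node); // accessing moves it to the front
        return node.value;
    }

    void put(int key, int value) {
        Node node = map.get(key);
        if (node != null) {
            node.value = value;
            unlink(node);
        } else {
            if (map.size() >= capacity) {
                Node lru = tail.prev; // least recently used
                unlink(lru);
                map.remove(lru.key);
            }
            node = new Node(key, value);
            map.put(key, node);
        }
        linkAfterHead(node);
    }

    private void unlink(Node node) {
        node.prev.next = node.next;
        node.next.prev = node.prev;
    }

    private void linkAfterHead(Node node) {
        node.next = head.next;
        node.prev = head;
        head.next.prev = node;
        head.next = node;
    }
}
```

The HashMap answers “is this key cached, and where?” in O(1), while the doubly linked list makes both “move to head” and “drop the tail” constant-time pointer updates, so no element ever has to migrate.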

End



Your likes and attention are the biggest support for me, thank you!