Unlike NSDictionary, YYMemoryCache retains its keys instead of copying them (keys can be any object because the cache is implemented internally with a CFMutableDictionaryRef). The API it provides is similar to NSCache, and all of YYMemoryCache's methods are thread-safe.

YYMemoryCache has the following features compared with NSCache:

  1. It evicts objects with an LRU (least-recently-used) algorithm (NSCache simply clears all cached objects)
  2. It can trim the cache by cost, count, and age (NSCache only provides the cost and count dimensions)
  3. It clears the cache automatically (e.g. in the background when a memory warning is received)

YYMemoryCache uses a doubly linked list plus a dictionary to implement add, delete, update, and lookup:

  1. When a new entry is added, a node is created and set as the head of the list
  2. When an entry is fetched, the node is looked up by key in YYLinkedMap->_dic and moved to the head of the list
  3. When an entry is deleted, the corresponding node is found by key in YYLinkedMap->_dic and removed from both the dictionary and the doubly linked list
  4. When the cache needs to be trimmed, cleanup starts from the tail node
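The four steps above can be sketched in plain C. This is an illustrative toy, not YYCache code: a tiny open-addressed table stands in for CFMutableDictionaryRef, there is no collision handling, and all the names (`lru_set`, `lru_get`, and so on) are invented:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TABLE_SIZE 64 /* toy hash table standing in for CFMutableDictionaryRef */

typedef struct Node {
    struct Node *prev, *next; /* non-owning, like __unsafe_unretained */
    char key[32];
    int value;
} Node;

typedef struct {
    Node *table[TABLE_SIZE]; /* owning references, like _dic */
    Node *head, *tail;       /* MRU at the head, LRU at the tail */
    size_t count;
} LRUCache;

static size_t slot(const char *key) {
    size_t h = 5381; /* djb2 hash */
    while (*key) h = h * 33 + (unsigned char)*key++;
    return h % TABLE_SIZE;
}

/* Unlink a node from the list, handling head, tail, and middle cases. */
static void unlink_node(LRUCache *c, Node *n) {
    if (n->next) n->next->prev = n->prev;
    if (n->prev) n->prev->next = n->next;
    if (c->head == n) c->head = n->next;
    if (c->tail == n) c->tail = n->prev;
    n->prev = n->next = NULL;
}

static void push_head(LRUCache *c, Node *n) {
    n->next = c->head;
    n->prev = NULL;
    if (c->head) c->head->prev = n;
    c->head = n;
    if (!c->tail) c->tail = n;
}

/* Step 1: adding creates a node and makes it the head. */
void lru_set(LRUCache *c, const char *key, int value) {
    size_t s = slot(key);
    Node *n = c->table[s]; /* no collision handling in this toy */
    if (n && strcmp(n->key, key) == 0) {
        n->value = value;
        unlink_node(c, n);
        push_head(c, n);
        return;
    }
    n = calloc(1, sizeof(Node));
    snprintf(n->key, sizeof n->key, "%s", key);
    n->value = value;
    c->table[s] = n;
    c->count++;
    push_head(c, n);
}

/* Step 2: fetching looks up the dictionary, then moves the node to the head. */
int lru_get(LRUCache *c, const char *key, int *out) {
    Node *n = c->table[slot(key)];
    if (!n || strcmp(n->key, key) != 0) return 0;
    unlink_node(c, n);
    push_head(c, n);
    *out = n->value;
    return 1;
}

/* Step 4: trimming evicts from the tail (the least recently used entry). */
void lru_evict_tail(LRUCache *c) {
    Node *t = c->tail;
    if (!t) return;
    unlink_node(c, t);
    c->table[slot(t->key)] = NULL;
    c->count--;
    free(t);
}
```

As in YYMemoryCache, the table owns the nodes while the `prev`/`next` pointers are non-owning, and eviction always starts from the tail.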

1. _YYLinkedMapNode

_YYLinkedMapNode can be understood as a node in a linked list. YYMemoryCache wraps each incoming value in a node and stores the node in the cache.

@interface _YYLinkedMapNode : NSObject {
    @package
    __unsafe_unretained _YYLinkedMapNode *_prev; // retained by dic
    __unsafe_unretained _YYLinkedMapNode *_next; // retained by dic
    id _key;
    id _value;
    NSUInteger _cost; // the node's cost, e.g. the number of bytes it occupies in memory
    NSTimeInterval _time; // last access time
}
@end

@implementation _YYLinkedMapNode
@end

Note that a node does not hold strong references through _prev and _next; the nodes they point to are strongly retained by _dic. Using __unsafe_unretained pointers here avoids retain cycles between neighboring nodes.

2. YYLinkedMap

YYLinkedMap can be understood as a doubly linked list, which makes operations on the head and tail nodes convenient.

@interface _YYLinkedMap : NSObject {
    @package
    CFMutableDictionaryRef _dic; // do not set the objects directly; maps key -> node
    NSUInteger _totalCost;  // total cost
    NSUInteger _totalCount; // total number of nodes
    _YYLinkedMapNode *_head; // MRU head, do not change it directly
    _YYLinkedMapNode *_tail; // LRU tail, do not change it directly
    BOOL _releaseOnMainThread;
    BOOL _releaseAsynchronously; // release nodes asynchronously
}
@end

Why does YYLinkedMap use a dictionary at all?

The dictionary brings the time complexity of locating a node down to O(1); with only a doubly linked list, finding a node would take O(n).

YYLinkedMap creates its dictionary with CFMutableDictionaryRef instead of NSMutableDictionary:

NSDictionary keys must conform to the NSCopying protocol, while CFMutableDictionaryRef imposes no such requirement; CF is also closer to the underlying implementation and therefore somewhat more efficient. The trade-off is that the memory of the created _dic must be released manually by calling CFRelease.

YYLinkedMap interface:

    // Insert nodes at the head of the list
    - (void)insertNodeAtHead:(_YYLinkedMapNode *)node;
    // Move the node to the head of the list
    - (void)bringNodeToHead:(_YYLinkedMapNode *)node;
    // Remove the node
    - (void)removeNode:(_YYLinkedMapNode *)node;
    // Remove the tail
    - (_YYLinkedMapNode *)removeTailNode;
    // Clear all nodes
    - (void)removeAll;

The operations of YYLinkedMap are mainly insertions into and deletions from the doubly linked list. In the following figure, dotted lines are weak references and solid lines are strong references:

- (void)insertNodeAtHead:(_YYLinkedMapNode *)node {
    // Store node in _dic
    CFDictionarySetValue(_dic, (__bridge const void *)(node->_key), (__bridge const void *)(node));
    // Change the total cost recorded in the map according to the cost of the node
    _totalCost += node->_cost;
    // Record the number of caches
    _totalCount++;
    if (_head) {
        // set node to the head node
        node->_next = _head;
        _head->_prev = node;
        _head = node;
    } else {
        // If the head is nil, the list is empty, so the new node becomes both head and tail
        _head = _tail = node;
    }
}

Moving node:

- (void)bringNodeToHead:(_YYLinkedMapNode *)node {
    // If there is only one node, no need to move
    if (_head == node) return;
    
    if (_tail == node) {
        // If node is the end node, reset the end node
        _tail = node->_prev;
        _tail->_next = nil;
    } else {
        // Node is neither a header nor a tail
        node->_next->_prev = node->_prev;
        node->_prev->_next = node->_next;
    }
    // Set node as a header
    node->_next = _head;
    node->_prev = nil;
    _head->_prev = node;
    _head = node;
}

Delete node:

- (void)removeNode:(_YYLinkedMapNode *)node {
    // Remove the node from the dictionary _dic. Note that this is node reference count minus 1
    CFDictionaryRemoveValue(_dic, (__bridge const void *)(node->_key));
    // Change the total cost
    _totalCost -= node->_cost;
    // The number of caches is reduced by 1
    _totalCount--;
    // Point the next node's prev pointer at node's previous node
    if (node->_next) node->_next->_prev = node->_prev;
    // Point the previous node's next pointer at node's next node
    if (node->_prev) node->_prev->_next = node->_next;
    // If node is the head, make the next node the new head
    if (_head == node) _head = node->_next;
    // If node is the tail, make the previous node the new tail
    if (_tail == node) _tail = node->_prev;
}

Note the order of the pointer operations, and that each case considers whether node is the head, the tail, or an intermediate node. Removing the tail:

- (_YYLinkedMapNode *)removeTailNode {
    if (!_tail) return nil;
    // Get the end node
    _YYLinkedMapNode *tail = _tail;
    // Remove the tail node from the dictionary, which holds the strong reference to it
    CFDictionaryRemoveValue(_dic, (__bridge const void *)(_tail->_key));
    _totalCost -= _tail->_cost;
    _totalCount--;
    if (_head == _tail) {
        // If the header is equal to the tail, the list is empty after removal
        _head = _tail = nil;
    } else {
        // Reset the tail
        _tail = _tail->_prev;
        _tail->_next = nil;
    }
    return tail;
}	

Clear data:

- (void)removeAll {
    // Clear the information
    _totalCost = 0;
    _totalCount = 0;
    _head = nil;
    _tail = nil;
    if (CFDictionaryGetCount(_dic) > 0) {
        // Let holder point at the same dictionary, keeping the old heap memory alive
        CFMutableDictionaryRef holder = _dic;
        // Point _dic at newly allocated memory (the old heap memory is now referenced only by holder)
        _dic = CFDictionaryCreateMutable(CFAllocatorGetDefault(), 0, &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
        if (_releaseAsynchronously) {
            // Release asynchronously so the current thread is not blocked
            dispatch_queue_t queue = _releaseOnMainThread ? dispatch_get_main_queue() : YYMemoryCacheGetReleaseQueue();
            dispatch_async(queue, ^{
                CFRelease(holder); // hold and release in specified queue
            });
        } else if (_releaseOnMainThread && !pthread_main_np()) {
            // Not on the main thread, but the release must happen there
            dispatch_async(dispatch_get_main_queue(), ^{
                CFRelease(holder); // hold and release in specified queue
            });
        } else {
            // Release synchronously on the current thread
            CFRelease(holder);
        }
    }
}

The author's trick here is clever: holder takes over the reference to the old _dic, fresh heap space is allocated for _dic, and the release of the old dictionary is then carried out on the specified queue via holder.
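The same swap-then-release-elsewhere idea can be sketched in C with pthreads. This is a hedged illustration, not the library's code: the `container` type and all names are invented, and a detached thread stands in for the GCD queue:

```c
#include <pthread.h>
#include <stdlib.h>

typedef struct {
    int *items;   /* owned buffer, standing in for _dic */
    size_t count;
} container;

/* Worker that frees the old buffer off the caller's thread,
   like CFRelease(holder) dispatched to another queue. */
static void *release_worker(void *arg) {
    free(arg);
    return NULL;
}

/* Clear the container without paying the free() cost on this thread. */
void container_remove_all(container *c) {
    int *holder = c->items;             /* holder takes over the old memory */
    c->items = calloc(16, sizeof(int)); /* c->items now points at fresh storage */
    c->count = 0;
    pthread_t t;
    if (holder && pthread_create(&t, NULL, release_worker, holder) == 0) {
        pthread_detach(t);              /* old memory is freed asynchronously */
    }
}
```

The caller pays only for the pointer swap; the potentially expensive free happens on another thread.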

3. YYMemoryCache

YYMemoryCache is an externally provided class that provides methods for configuring properties and external access.

1. Properties and methods

@interface YYMemoryCache : NSObject
// Cache name
@property (nullable, copy) NSString *name;

// Total number of caches
@property (readonly) NSUInteger totalCount;

// Total overhead of the cache
@property (readonly) NSUInteger totalCost;

// Cache count limit; the default is NSUIntegerMax, which means no limit
@property NSUInteger countLimit;

// Cache cost limit; the default is NSUIntegerMax, which means no limit. For example, with costLimit set to 200K, the cache is automatically trimmed in the background once the total cost exceeds 200K
@property NSUInteger costLimit;

// Cache time limit, default is DBL_MAX meaning no limit
@property NSTimeInterval ageLimit;

// Set the automatic cleanup time. The default value is 5s, or 5 seconds
@property NSTimeInterval autoTrimInterval;

// Whether to clear all caches when a memory warning is received. The default is YES
@property BOOL shouldRemoveAllObjectsOnMemoryWarning;

// Whether to clear all caches when the App enters the background. The default is YES
@property BOOL shouldRemoveAllObjectsWhenEnteringBackground;

// Block executed when a memory warning is received
@property (nullable, copy) void(^didReceiveMemoryWarningBlock)(YYMemoryCache *cache);

// The block executed when entering the background
@property (nullable, copy) void(^didEnterBackgroundBlock)(YYMemoryCache *cache);

// When a value needs to be removed, whether the release operation is performed on the main thread
@property BOOL releaseOnMainThread;

// Whether release is executed asynchronously when a value needs to be removed. Default is YES
@property BOOL releaseAsynchronously;

// Whether there is a key in the cache
- (BOOL)containsObjectForKey:(id)key;

// Get the value associated with the key from the cache
- (nullable id)objectForKey:(id)key;

// Cache key-value. If value=nil, remove the cache of the key
- (void)setObject:(nullable id)object forKey:(id)key;

// cache key-value, which specifies the overhead
- (void)setObject:(nullable id)object forKey:(id)key withCost:(NSUInteger)cost;

// Remove the key from the cache
- (void)removeObjectForKey:(id)key;

// Clear the cache
- (void)removeAllObjects;

// Trim the cache to the specified count, removing from the tail of the list
- (void)trimToCount:(NSUInteger)count;

// Trim the cache to the specified cost, removing from the tail of the list
- (void)trimToCost:(NSUInteger)cost;

// Trim the cache to the specified age; entries older than that age are removed
- (void)trimToAge:(NSTimeInterval)age;
@end

2. Perform initial configuration

In the initialization method of YYMemoryCache, some properties are configured with default values:

@implementation YYMemoryCache {
    pthread_mutex_t _lock; // mutex
    _YYLinkedMap *_lru; // least recent used
    dispatch_queue_t _queue; // Serial queue
}

- (instancetype)init {
    self = super.init;
    // Initialize the lock
    pthread_mutex_init(&_lock, NULL);
    // Initializes the bidirectional linked list
    _lru = [_YYLinkedMap new];
    // A serial queue is used to perform pruning operations.
    _queue = dispatch_queue_create("com.ibireme.cache.memory", DISPATCH_QUEUE_SERIAL);
    
    _countLimit = NSUIntegerMax;
    _costLimit = NSUIntegerMax;
    _ageLimit = DBL_MAX;
    _autoTrimInterval = 5.0;
    _shouldRemoveAllObjectsOnMemoryWarning = YES;
    _shouldRemoveAllObjectsWhenEnteringBackground = YES;
    
    // Register App notification
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(_appDidReceiveMemoryWarningNotification) name:UIApplicationDidReceiveMemoryWarningNotification object:nil];
    [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(_appDidEnterBackgroundNotification) name:UIApplicationDidEnterBackgroundNotification object:nil];
    
    // Automatic clearing function
    [self _trimRecursively];
    return self;
}

3. Clear the cache automatically

The _trimRecursively method is called from the init method.

- (void)_trimRecursively {
    // Use a weak reference so the block does not strongly capture self
    __weak typeof(self) _self = self;
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(_autoTrimInterval * NSEC_PER_SEC)), dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        // The __strong qualifier keeps self alive for the duration of the block body
        __strong typeof(_self) self = _self;
        if (!self) return;
        [self _trimInBackground];
        [self _trimRecursively];
    });
}

- (void)_trimInBackground {
    // Why a serial queue? Each task in a serial queue waits for the previous one to finish, so the cache is trimmed in the order cost -> count -> age
    dispatch_async(_queue, ^{
        [self _trimToCost:self->_costLimit];
        [self _trimToCount:self->_countLimit];
        [self _trimToAge:self->_ageLimit];
    });
}

Having _trimRecursively reschedule itself via dispatch_after lets the cache be cleared automatically at the interval given by the autoTrimInterval property.

Why are __weak and __strong used to qualify self in _trimRecursively?

Without __weak and __strong qualifying self, a demo looks like this:

@interface MyObject : NSObject
@end
@implementation MyObject
- (void)_trimRecursively {
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(5 * NSEC_PER_SEC)), dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        NSLog(@"1-----");
        if (!self) return; // The block captures self when it is copied, incrementing self's reference count.
        [self _trimRecursively]; // Asynchronous, so the current thread is not blocked (each recursive call copies and later releases self again).
        NSLog(@"2-----"); // When the block finishes, it releases its reference to self, decrementing the count.
    });
}
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    MyObject *obj = [[MyObject alloc] init];
    [obj _trimRecursively];
}

@end

obj should be released when viewDidLoad finishes, but the block scheduled by _trimRecursively holds a strong reference to it, and since each call schedules the next, obj can never be released.

4. Manually clear the cache

The three trimming methods _trimToCost, _trimToCount, and _trimToAge work the same way and differ only in the criterion used for clearing, so only one of them is analyzed here.

// The static qualifier gives the function internal linkage, so the same name can be defined in multiple files
// An inline function is expanded directly at the call site, while a normal function call has to set up a stack frame (push/pop operations)
static inline dispatch_queue_t YYMemoryCacheGetReleaseQueue() {
    return dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
}
- (void)_trimToCost:(NSUInteger)costLimit {
    BOOL finish = NO;
    // Select lock to avoid data corruption in multi-threaded access
    pthread_mutex_lock(&_lock);
    if (costLimit == 0) {
        // If the memory cost is set to 0, the cache is cleared
        [_lru removeAll];
        finish = YES;
    } else if (_lru->_totalCost <= costLimit) {
        // If the current cache cost is less than the specified limit, no cleanup is required
        finish = YES;
    }
    pthread_mutex_unlock(&_lock);
    if (finish) return;
    
    // Create an array of nodes to be released asynchronously or synchronously on the specified thread
    NSMutableArray *holder = [NSMutableArray new];
    // Overall this loop is O(n) in the number of evicted nodes
    while (!finish) {
        // Try to acquire the lock; pthread_mutex_trylock returns 0 on success
        if (pthread_mutex_trylock(&_lock) == 0) {
            // Collect the nodes to be removed into the array
            if (_lru->_totalCost > costLimit) {
                // The node is removed from the dictionary and the linked list,
                // but adding it to the array keeps it from being destroyed here
                _YYLinkedMapNode *node = [_lru removeTailNode];
                if (node) [holder addObject:node];
            } else {
                finish = YES;
            }
            pthread_mutex_unlock(&_lock);
        } else {
            // Sleep 10 ms to avoid contending for the lock
            usleep(10 * 1000); // 10 ms
        }
    }
    // holder now holds the nodes that need to be released
    if (holder.count) {
        // Decide, per configuration, whether to release the nodes on the main thread
        // YYMemoryCacheGetReleaseQueue() is the inline function that returns a global low-priority queue
        dispatch_queue_t queue = _lru->_releaseOnMainThread ? dispatch_get_main_queue() : YYMemoryCacheGetReleaseQueue();
        // Release the nodes asynchronously on that queue
        dispatch_async(queue, ^{
            // When the block completes on that queue, holder's reference count drops,
            // so the nodes are released there
            [holder count]; // release in queue
        });
    }
    // holder is not destroyed here but inside the block, i.e. on the specified queue
}
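The trylock-and-back-off pattern from _trimToCost can be reduced to a small C sketch (illustrative names only; `shared_counter` and `drain_to_limit` are invented for the example):

```c
#include <pthread.h>
#include <unistd.h>

/* Drain a shared counter one step at a time, re-acquiring the lock on each
   iteration so other threads get a chance to run between steps, in the same
   spirit as _trimToCost's while loop. */
typedef struct {
    pthread_mutex_t lock;
    int total;
} shared_counter;

void drain_to_limit(shared_counter *c, int limit) {
    int finished = 0;
    while (!finished) {
        if (pthread_mutex_trylock(&c->lock) == 0) { /* 0 means the lock was acquired */
            if (c->total > limit) {
                c->total--;        /* analogous to removing one tail node */
            } else {
                finished = 1;
            }
            pthread_mutex_unlock(&c->lock);
        } else {
            usleep(10 * 1000);     /* back off 10 ms instead of spinning */
        }
    }
}
```

Re-acquiring the lock on every iteration keeps each critical section tiny, so other threads blocked on the same mutex wait for at most one step.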

5. Getter and setter methods for properties

YYMemoryCache exposes several configuration properties and implements them by overriding getter and setter methods that read and write the underlying YYLinkedMap

- (NSUInteger)totalCount {
    pthread_mutex_lock(&_lock);
    NSUInteger count = _lru->_totalCount;
    pthread_mutex_unlock(&_lock);
    return count;
}

- (NSUInteger)totalCost {
    pthread_mutex_lock(&_lock);
    NSUInteger totalCost = _lru->_totalCost;
    pthread_mutex_unlock(&_lock);
    return totalCost;
}

- (BOOL)releaseOnMainThread {
    pthread_mutex_lock(&_lock);
    BOOL releaseOnMainThread = _lru->_releaseOnMainThread;
    pthread_mutex_unlock(&_lock);
    return releaseOnMainThread;
}

- (void)setReleaseOnMainThread:(BOOL)releaseOnMainThread {
    pthread_mutex_lock(&_lock);
    _lru->_releaseOnMainThread = releaseOnMainThread;
    pthread_mutex_unlock(&_lock);
}

- (BOOL)releaseAsynchronously {
    pthread_mutex_lock(&_lock);
    BOOL releaseAsynchronously = _lru->_releaseAsynchronously;
    pthread_mutex_unlock(&_lock);
    return releaseAsynchronously;
}

6. Add, delete, modify and check

Lookup:

- (BOOL)containsObjectForKey:(id)key {
    if (!key) return NO;
    // Lock the map because you need to manipulate the data
    pthread_mutex_lock(&_lock);
    BOOL contains = CFDictionaryContainsKey(_lru->_dic, (__bridge const void *)(key));
    pthread_mutex_unlock(&_lock);
    return contains;
}

- (id)objectForKey:(id)key {
    if (!key) return nil;
    pthread_mutex_lock(&_lock);
    // Get node from the dictionary
    _YYLinkedMapNode *node = CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key));
    if (node) {
        // Update the node's access time to the current time
        node->_time = CACurrentMediaTime();
        // Move the node to the head of the list
        [_lru bringNodeToHead:node];
    }
    pthread_mutex_unlock(&_lock);
    return node ? node->_value : nil;
}

Add:

- (void)setObject:(id)object forKey:(id)key {
    [self setObject:object forKey:key withCost:0];
}

- (void)setObject:(id)object forKey:(id)key withCost:(NSUInteger)cost {
    if (!key) return;
    // If object is nil, remove the cached value for the key
    if (!object) {
        [self removeObjectForKey:key];
        return;
    }
    pthread_mutex_lock(&_lock);
    // Look up an existing node for the key in the dictionary
    _YYLinkedMapNode *node = CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key));
    // Get the current time, used as the node's access time
    NSTimeInterval now = CACurrentMediaTime();
    if (node) {
        // If node is already in the cache, change node's time and add node to the table header
        _lru->_totalCost -= node->_cost;
        _lru->_totalCost += cost;
        node->_cost = cost;
        node->_time = now;
        node->_value = object;
        [_lru bringNodeToHead:node];
    } else {
        // If node does not exist, create a node and add it to the table header
        node = [_YYLinkedMapNode new];
        node->_cost = cost;
        node->_time = now;
        node->_key = key;
        node->_value = object;
        [_lru insertNodeAtHead:node];
    }
    // After the addition, because _totalCost increases, we need to check whether the total cost exceeds the specified _costLimit
    if (_lru->_totalCost > _costLimit) {
        // Clear the cache asynchronously in serial queues
        dispatch_async(_queue, ^{
            [self trimToCost:_costLimit];
        });
    }
    // Likewise, check whether _totalCount now exceeds _countLimit
    if (_lru->_totalCount > _countLimit) {
        // Since only one node is added at a time, only the last tail needs to be removed
        _YYLinkedMapNode *node = [_lru removeTailNode];
        if (_lru->_releaseAsynchronously) {
            dispatch_queue_t queue = _lru->_releaseOnMainThread ? dispatch_get_main_queue() : YYMemoryCacheGetReleaseQueue();
            dispatch_async(queue, ^{
                [node class]; //hold and release in queue
            });
        } else if (_lru->_releaseOnMainThread && !pthread_main_np()) {
            dispatch_async(dispatch_get_main_queue(), ^{
                [node class]; //hold and release in queue
            });
        }
    }
    pthread_mutex_unlock(&_lock);
}

Delete:

- (void)removeObjectForKey:(id)key {
    if (!key) return;
    pthread_mutex_lock(&_lock);
    // Get the node to be removed
    _YYLinkedMapNode *node = CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key));
    if (node) {
        [_lru removeNode:node];
        if (_lru->_releaseAsynchronously) {
            dispatch_queue_t queue = _lru->_releaseOnMainThread ? dispatch_get_main_queue() : YYMemoryCacheGetReleaseQueue();
            dispatch_async(queue, ^{
                [node class]; //hold and release in queue
            });
        } else if (_lru->_releaseOnMainThread && !pthread_main_np()) {
            dispatch_async(dispatch_get_main_queue(), ^{
                [node class]; //hold and release in queue
            });
        }
    }
    pthread_mutex_unlock(&_lock);
}

- (void)removeAllObjects {
    pthread_mutex_lock(&_lock);
    [_lru removeAll];
    pthread_mutex_unlock(&_lock);
}

7. Other

- (NSString *)description {
    if (_name) return [NSString stringWithFormat:@"<%@: %p> (%@)", self.class, self, _name];
    else return [NSString stringWithFormat:@"<%@: %p>", self.class, self];
}

The description method is overridden so that logging a YYMemoryCache object prints identifying information about it.

4. Thoughts

1. What are the LRU and MRU eviction algorithms?

LRU (least recently used) is an eviction algorithm based on the assumption that recently accessed data has a high probability of being accessed again, while the data untouched for the longest time is the least likely to be needed in the future. To evict the entries that have gone unused the longest, each entry carries a time field recording its last access, and eviction removes the entries whose timestamps are oldest.

MRU (most recently used) is the opposite of LRU: it evicts the most recently used entries first.

2. How does YYMemoryCache keep threads safe?

YYMemoryCache takes a lock whenever shared resources are operated on, using a pthread_mutex.

In YYMemoryCache, access to the data in _YYLinkedMap, the getter and setter methods, and the cache-clearing paths are all protected by the lock to avoid data races under multi-threaded access.

3. In what ways does YYMemoryCache improve its performance?

  • YYMemoryCache combines a doubly linked list with a dictionary. The linked list makes operations on the head and tail nodes convenient and lets a node's previous and next neighbors be relinked in O(1), but searching a linked list alone is O(n). A dictionary lookup is O(1), so with the two structures combined, the add, delete, update, and lookup operations all run in O(1).
  • Cache clearing is performed asynchronously, and releasing the evicted objects is also done asynchronously

4. How does dispatch_semaphore compare with pthread_mutex in performance?

// dispatch_semaphore, created with a value of 1 so it behaves as a lock
dispatch_semaphore_t lock = dispatch_semaphore_create(1);
dispatch_semaphore_wait(lock, DISPATCH_TIME_FOREVER);
// ... critical section ...
dispatch_semaphore_signal(lock);

// pthread_mutex
pthread_mutex_t lock;
pthread_mutex_init(&lock, NULL);
pthread_mutex_lock(&lock);
// ... critical section ...
pthread_mutex_unlock(&lock);

When there is no waiting, dispatch_semaphore performs better than pthread_mutex, but once threads have to wait its performance degrades considerably.