Source: YYCache (design ideas)

Read the code: github.com/wenghengcon…

YYCache

YYCache is a two-level cache composed of the memory cache YYMemoryCache and the disk cache YYDiskCache. The memory cache offers small capacity but high-speed access; the disk cache offers large capacity but slower, persistent storage.

When a cache object is requested through the YYCache interface, the lookup proceeds as follows:

  • First, the memory cache YYMemoryCache is consulted.
    • Hit: the object was accessed before and is still in memory; it is returned directly.
    • Miss → YYDiskCache: the object was never cached under this key, or it was evicted from memory by one of the limits.
      • Hit: the object is returned and also promoted back into the memory cache.
      • Miss: the object is not in the cache and nil is returned.
@interface YYCache : NSObject

@property (copy, readonly) NSString *name;
@property (strong, readonly) YYMemoryCache *memoryCache;
@property (strong, readonly) YYDiskCache *diskCache;

- (nullable instancetype)initWithName:(NSString *)name;
- (nullable instancetype)initWithPath:(NSString *)path NS_DESIGNATED_INITIALIZER;
+ (nullable instancetype)cacheWithName:(NSString *)name;
+ (nullable instancetype)cacheWithPath:(NSString *)path;

- (instancetype)init UNAVAILABLE_ATTRIBUTE;
+ (instancetype)new UNAVAILABLE_ATTRIBUTE;

- (BOOL)containsObjectForKey:(NSString *)key;
- (void)containsObjectForKey:(NSString *)key withBlock:(nullable void(^)(NSString *key, BOOL contains))block;
- (nullable id<NSCoding>)objectForKey:(NSString *)key;
- (void)setObject:(nullable id<NSCoding>)object forKey:(NSString *)key;
- (void)removeObjectForKey:(NSString *)key;
- (void)removeAllObjects;
- (void)removeAllObjectsWithBlock:(void(^)(void))block;

@end
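The lookup flow can be modeled in a few lines. This is a minimal, hypothetical sketch in which plain dicts stand in for YYMemoryCache and YYDiskCache; it is not YYCache's real implementation:

```python
# Minimal sketch of YYCache's two-level lookup. On a disk hit the object
# is promoted back into the memory cache so later reads are fast again.

class TwoLevelCache:
    def __init__(self):
        self.memory = {}   # stands in for YYMemoryCache
        self.disk = {}     # stands in for YYDiskCache

    def get(self, key):
        if key in self.memory:       # memory hit: fastest path
            return self.memory[key]
        if key in self.disk:         # disk hit: promote to memory
            value = self.disk[key]
            self.memory[key] = value
            return value
        return None                  # miss in both levels

    def set(self, key, value):
        # Writes go to both levels so later reads hit memory first.
        self.memory[key] = value
        self.disk[key] = value

cache = TwoLevelCache()
cache.set("avatar", b"...bytes...")
del cache.memory["avatar"]                     # simulate memory eviction
assert cache.get("avatar") == b"...bytes..."   # disk hit
assert "avatar" in cache.memory                # promoted back to memory
```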

Caching strategies

The LRU (least recently used) algorithm evicts data based on its access history. The core idea: "if data has been accessed recently, it has a higher chance of being accessed again in the future".

In YYCache, there are two implementations:

In the memory cache, it is implemented with a doubly linked list:

  1. Insert new data into the linked list header;
  2. Whenever a cache hit (that is, cached data is accessed), the data is moved to the head of the linked list;
  3. When the cache needs to be cleared, the data at the end of the list is discarded.

In the disk cache, it is implemented with SQLite:

  1. Insert new data into the cache table;
  2. Whenever the cache is hit (that is, cached data is accessed), the access time of the corresponding row in the table is updated;
  3. When the cache needs to be trimmed, the rows with the oldest access time are deleted.
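As a hedged illustration (Python's sqlite3; the column names are illustrative, not YYCache's real schema), the three steps look like this:

```python
import sqlite3

# Sketch of an SQLite-backed LRU table. A monotonically increasing counter
# stands in for the wall-clock access time to keep the example deterministic.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cache (key TEXT PRIMARY KEY, value BLOB, last_access_time INTEGER)")

_clock = 0
def _now():
    global _clock
    _clock += 1
    return _clock

def put(key, value):
    # 1. Insert new data into the cache table
    db.execute("INSERT OR REPLACE INTO cache VALUES (?, ?, ?)", (key, value, _now()))

def get(key):
    # 2. A hit updates the row's access time
    row = db.execute("SELECT value FROM cache WHERE key = ?", (key,)).fetchone()
    if row is None:
        return None
    db.execute("UPDATE cache SET last_access_time = ? WHERE key = ?", (_now(), key))
    return row[0]

def trim_to_count(max_count):
    # 3. Delete the entries with the oldest access time
    db.execute("""DELETE FROM cache WHERE key IN (
                      SELECT key FROM cache ORDER BY last_access_time ASC
                      LIMIT max(0, (SELECT COUNT(*) FROM cache) - ?))""", (max_count,))

put("a", b"1"); put("b", b"2"); put("c", b"3")
get("a")             # "a" becomes the most recently used entry
trim_to_count(2)     # evicts "b", the least recently used
assert get("b") is None and get("a") == b"1" and get("c") == b"3"
```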

Memory cache

YYMemoryCache

YYMemoryCache is a fast in-memory cache for storing key-value pairs. Unlike NSDictionary, keys are retained rather than copied. Its API and performance are similar to NSCache, and all methods are thread-safe.

It implements the LRU algorithm with a doubly linked list, achieves fast lookup with **CFMutableDictionaryRef**, and guarantees thread safety with **pthread_mutex**.

The differences from NSCache:

          YYMemoryCache                                        NSCache
Eviction  LRU (least recently used) algorithm                  unspecified
Limits    age, cost, count (enforced precisely)                imprecise
Events    clears on memory warning / entering the background   none

API

Elegant and concise API:

@interface YYMemoryCache : NSObject

#pragma mark - Attribute
@property (nullable, copy) NSString *name;
@property (readonly) NSUInteger totalCount;
@property (readonly) NSUInteger totalCost;

#pragma mark - Limit
@property NSUInteger countLimit;
@property NSUInteger costLimit;
@property NSTimeInterval ageLimit;
@property NSTimeInterval autoTrimInterval;

@property BOOL shouldRemoveAllObjectsOnMemoryWarning;
@property BOOL shouldRemoveAllObjectsWhenEnteringBackground;
@property (nullable, copy) void(^didReceiveMemoryWarningBlock)(YYMemoryCache *cache);
@property (nullable, copy) void(^didEnterBackgroundBlock)(YYMemoryCache *cache);

@property BOOL releaseOnMainThread;
@property BOOL releaseAsynchronously;

#pragma mark - Access Methods
- (BOOL)containsObjectForKey:(id)key;
- (nullable id)objectForKey:(id)key;
- (void)setObject:(nullable id)object forKey:(id)key;
- (void)setObject:(nullable id)object forKey:(id)key withCost:(NSUInteger)cost;
- (void)removeObjectForKey:(id)key;
- (void)removeAllObjects;


#pragma mark - Trim
- (void)trimToCount:(NSUInteger)count;
- (void)trimToCost:(NSUInteger)cost;
- (void)trimToAge:(NSTimeInterval)age;

@end

@implementation YYMemoryCache {
    pthread_mutex_t _lock;  // Mutex, thread-safe, for all properties and methods
    _YYLinkedMap *_lru;     //_YYLinkedMap processing layer class. Handles linked list operations that manipulate cached objects
    dispatch_queue_t _queue;    // Serial queue, used for YYMemoryCache trim operation
}

LRU cache policy

The LRU cache policy is implemented with the doubly linked list _YYLinkedMap, whose nodes are _YYLinkedMapNode instances. The strategy:

  1. Insert new data into the linked list header;
  2. Whenever a cache hit (that is, cached data is accessed), the data is moved to the head of the linked list;
  3. When the cache needs to be cleared, the data at the end of the list is discarded.
@interface _YYLinkedMapNode : NSObject {
    @package
    // __unsafe_unretained is like __weak except that the pointer is NOT set to
    // nil when the object is destroyed; a dangling pointer would crash with
    // EXC_BAD_ACCESS if used. It is safe here purely as a performance
    // optimization: every node is strongly referenced by _YYLinkedMap's _dic,
    // so _prev/_next can never dangle while the node is in the map.
    __unsafe_unretained _YYLinkedMapNode *_prev; // retained by dic
    __unsafe_unretained _YYLinkedMapNode *_next; // retained by dic
    id _key;
    id _value;
    NSUInteger _cost;
    NSTimeInterval _time;
}
@end

@interface _YYLinkedMap : NSObject {
    @package
    CFMutableDictionaryRef _dic; // do not set object directly
    NSUInteger _totalCost;
    NSUInteger _totalCount;
    _YYLinkedMapNode *_head; // MRU, do not change it directly
    _YYLinkedMapNode *_tail; // LRU, do not change it directly
    BOOL _releaseOnMainThread;
    BOOL _releaseAsynchronously;
}
@end
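The three rules above can be sketched with a dict plus a doubly linked list. This is a simplified, hypothetical stand-in for _YYLinkedMap/_YYLinkedMapNode (no locking, no cost accounting), written in Python for brevity:

```python
class Node:
    __slots__ = ("key", "value", "prev", "next")
    def __init__(self, key, value):
        self.key, self.value = key, value
        self.prev = self.next = None

class LRUMap:
    def __init__(self):
        self.dic = {}                  # fast lookup, like _dic
        self.head = self.tail = None   # head = MRU, tail = LRU

    def _unlink(self, node):
        if node.prev: node.prev.next = node.next
        else:         self.head = node.next
        if node.next: node.next.prev = node.prev
        else:         self.tail = node.prev
        node.prev = node.next = None

    def _push_head(self, node):
        node.next = self.head
        if self.head: self.head.prev = node
        self.head = node
        if self.tail is None: self.tail = node

    def set(self, key, value):         # rule 1: insert at the head
        node = self.dic.get(key)
        if node:
            node.value = value
            self._unlink(node)
        else:
            node = Node(key, value)
            self.dic[key] = node
        self._push_head(node)

    def get(self, key):                # rule 2: a hit moves the node to the head
        node = self.dic.get(key)
        if node is None: return None
        self._unlink(node)
        self._push_head(node)
        return node.value

    def remove_tail(self):             # rule 3: evict from the tail
        node = self.tail
        if node:
            self._unlink(node)
            del self.dic[node.key]
        return node

m = LRUMap()
m.set("a", 1); m.set("b", 2); m.set("c", 3)
m.get("a")                   # "a" becomes MRU
evicted = m.remove_tail()    # "b" is now the LRU entry
assert evicted.key == "b"
assert m.head.key == "a" and m.tail.key == "c"
```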

Linked list operations

- (void)setObject:(id)object forKey:(id)key withCost:(NSUInteger)cost {
    if (!key) return;
    if (!object) { // Setting nil removes the corresponding node
        [self removeObjectForKey:key];
        return;
    }
    pthread_mutex_lock(&_lock);
    _YYLinkedMapNode *node = CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key));
    NSTimeInterval now = CACurrentMediaTime();
    if (node) {
        _lru->_totalCost -= node->_cost;    // Remove the old cost
        _lru->_totalCost += cost;           // Add a new cost
        node->_cost = cost;
        node->_time = now;
        node->_value = object;
        [_lru bringNodeToHead:node];        // Move the accessed node to the head of the list
    } else {
        node = [_YYLinkedMapNode new];
        node->_cost = cost;
        node->_time = now;
        node->_key = key;
        node->_value = object;
        [_lru insertNodeAtHead:node];
    }
    if (_lru->_totalCost > _costLimit) {
        dispatch_async(_queue, ^{
            [self trimToCost:_costLimit];
        });
    }
    if (_lru->_totalCount > _countLimit) {
        _YYLinkedMapNode *node = [_lru removeTailNode];
        if (_lru->_releaseAsynchronously) {
            dispatch_queue_t queue = _lru->_releaseOnMainThread ? dispatch_get_main_queue() : YYMemoryCacheGetReleaseQueue();
            dispatch_async(queue, ^{
                // Hold node, waiting to be released on the queue
                [node class]; //hold and release in queue
            });
        } else if (_lru->_releaseOnMainThread && !pthread_main_np()) {
            dispatch_async(dispatch_get_main_queue(), ^{
                [node class]; //hold and release in queue
            });
        }
    }
    pthread_mutex_unlock(&_lock);
}

- (void)removeObjectForKey:(id)key {
    if (!key) return;
    pthread_mutex_lock(&_lock);
    
    // Node to be removed
    _YYLinkedMapNode *node = CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key));
    
    if (node) {
        // Remove the node
        [_lru removeNode:node];
        if (_lru->_releaseAsynchronously) {
            dispatch_queue_t queue = _lru->_releaseOnMainThread ? dispatch_get_main_queue() : YYMemoryCacheGetReleaseQueue();
            dispatch_async(queue, ^{
                // Hold node, waiting to be released on the queue
                [node class]; //hold and release in queue
            });
        } else if (_lru->_releaseOnMainThread && !pthread_main_np()) {
            dispatch_async(dispatch_get_main_queue(), ^{
                [node class]; // Release asynchronously on the main queue
            });
        }
    }
    pthread_mutex_unlock(&_lock);
}
...

Evicting cache objects

Eviction works along three dimensions: cost, count, and age, which represent the cost of each object, the total number of objects, and the objects' expiration time respectively.

The trim methods:

#pragma mark - Trim
- (void)trimToCount:(NSUInteger)count;
- (void)trimToCost:(NSUInteger)cost;
- (void)trimToAge:(NSTimeInterval)age;

Take Count as an example:

- (void)_trimToCount:(NSUInteger)countLimit {
    BOOL finish = NO;
    pthread_mutex_lock(&_lock);
    if (countLimit == 0) {
        [_lru removeAll];
        finish = YES;
    } else if (_lru->_totalCount <= countLimit) {
        finish = YES;
    }
    pthread_mutex_unlock(&_lock);
    if (finish) return;
    
    NSMutableArray *holder = [NSMutableArray new];
    while (!finish) {
        if (pthread_mutex_trylock(&_lock) == 0) {
            if (_lru->_totalCount > countLimit) {
                // Clear the cache from the end of the list (LRU) to free resources
                _YYLinkedMapNode *node = [_lru removeTailNode];
                if (node) [holder addObject:node];
            } else {
                finish = YES;
            }
            pthread_mutex_unlock(&_lock);
        } else {
            usleep(10 * 1000); // 10 ms
        }
    }
    if (holder.count) {
        // Determine whether to free the cache object in the main thread
        dispatch_queue_t queue = _lru->_releaseOnMainThread ? dispatch_get_main_queue() : YYMemoryCacheGetReleaseQueue();
        dispatch_async(queue, ^{
            [holder count]; // Release asynchronously
        });
    }
}

Timed trimming

- (void)_trimRecursively {
    __weak typeof(self) _self = self;
    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(_autoTrimInterval * NSEC_PER_SEC)), dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
        __strong typeof(_self) self = _self;
        if (!self) return;
        [self _trimInBackground];
        [self _trimRecursively];
    });
}

In the code above, every _autoTrimInterval seconds the cache trims itself on a background queue and then schedules itself again, producing a timer-like behavior without an explicit timer.

Asynchronous release

Note about the code for asynchronously releasing cache objects above:

dispatch_queue_t queue = _lru->_releaseOnMainThread ? dispatch_get_main_queue() : YYMemoryCacheGetReleaseQueue();
dispatch_async(queue, ^{
    [holder count]; // release in queue
});

Ibireme mentioned this technique in his other post on how to keep the iOS interface smooth:

Note: Object destruction is not a huge resource, but it adds up. Usually when a container class holds a large number of objects, the resource cost of their destruction becomes apparent. Similarly, if an object can be released in a background thread, move it to the background thread. Here’s a little Tip: Capture an object in a block, throw it on a background queue, send a random message to avoid compiler warnings, and let the object be destroyed in the background thread.

The YYMemoryCacheGetReleaseQueue used in the code above is:

static inline dispatch_queue_t YYMemoryCacheGetReleaseQueue() {
    return dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0);
}

As the source shows, YYMemoryCacheGetReleaseQueue is a low-priority (DISPATCH_QUEUE_PRIORITY_LOW) global queue, which lets iOS release the cached objects asynchronously when the system is relatively idle.

  1. Why does the block retain the object?

Because the block captures node (or holder), the object's reference count stays above zero until the block runs on the target queue; `[node class]` is just a cheap message that stops the compiler from warning about an unused capture. When the block is destroyed after running, the captured object's reference count drops to zero and it is deallocated on that queue's thread.

  2. Why destroy objects on a specified queue?

Destroying objects on another thread relieves the current thread, which is usually the main thread.

Fast lookup

YYMemoryCache can hit the cache as fast as NSDictionary, but it uses CFMutableDictionaryRef instead of NSMutableDictionary for two reasons:

  • CFMutableDictionaryRef performs better;
  • Keys in an NSMutableDictionary are copied and must be immutable; if a key were mutated after being used to store a value, the value would become unreachable. An interesting detail: although NSDictionary copies its keys, the toll-free bridged CFMutableDictionaryRef API only retains them.

Using CFMutableDictionaryRef:

//create
_dic = CFDictionaryCreateMutable(CFAllocatorGetDefault(), 0, 
                                 &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
//setter
CFDictionarySetValue(_dic, (__bridge const void *)(node->_key), (__bridge const void *)(node));
//getter
_YYLinkedMapNode *node = CFDictionaryGetValue(_lru->_dic, (__bridge const void *)(key));
//contains
BOOL contains = CFDictionaryContainsKey(_lru->_dic, (__bridge const void *)(key));
//remove
CFDictionaryRemoveValue(_dic, (__bridge const void *)(node->_key));

Thread safety

In the memory cache, pthread_mutex_t is used to guarantee thread safety:

When reading or writing cache, lock first, then unlock:

- (void)setObject:(id)object forKey:(id)key withCost:(NSUInteger)cost { 
    pthread_mutex_lock(&_lock); 
    // Write cached data
    pthread_mutex_unlock(&_lock); 
} 
- (id)objectForKey:(id)key { 
    pthread_mutex_lock(&_lock); 
    // Read the cached data
    pthread_mutex_unlock(&_lock); 
}

However, the trim operations use pthread_mutex_trylock instead. The difference:

pthread_mutex_trylock behaves identically to pthread_mutex_lock, except that it does not block the calling thread if the mutex is already locked by another thread (or by the calling thread in the case of a "fast" mutex). Instead, pthread_mutex_trylock returns immediately with the error code EBUSY. From www.skrenta.com/rt/man/pthr…

// Lock (blocking)
pthread_mutex_lock(pthread_mutex_t *mutex);
// Try to lock (non-blocking): acquires the mutex when it is free; otherwise,
// unlike pthread_mutex_lock, it returns EBUSY immediately instead of waiting.
pthread_mutex_trylock(pthread_mutex_t *mutex);

Why do this?

Cache reads and writes must be thread-safe, so they have to block on the lock. Trimming, however, is a low-priority periodic operation, so it merely attempts the lock and backs off if the lock is busy, rather than competing with cache reads and writes for it.
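A language-neutral sketch of this "try the lock, back off if busy" pattern, using Python's non-blocking Lock.acquire in place of pthread_mutex_trylock (the 10 ms sleep mirrors the usleep in _trimToCount:):

```python
import threading, time

lock = threading.Lock()
cache = {f"k{i}": i for i in range(100)}

def trim_to_count(count_limit):
    # Low-priority trim: try the lock instead of blocking on it, so readers
    # and writers holding the lock never have to wait behind trimming.
    while True:
        if lock.acquire(blocking=False):      # analogous to pthread_mutex_trylock
            try:
                if len(cache) <= count_limit:
                    return
                cache.pop(next(iter(cache)))  # evict one entry per lock hold
            finally:
                lock.release()
        else:
            time.sleep(0.01)                  # 10 ms back-off, like usleep(10 * 1000)

trim_to_count(10)
assert len(cache) == 10
```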

Disk cache

YYDiskCache

YYDiskCache is a thread-safe disk cache for storing key-value pairs supported by SQLite and the file system.

An SQLite table implements the LRU policy, and dispatch_semaphore guarantees thread safety. Fast access comes from SQLite's own performance, from the SQL statement cache _dbStmtCache (a CFMutableDictionaryRef), and from a size threshold that decides whether each value is stored in the file system or in SQLite.

The storage backend is chosen by data size:

  • Sqlite: Access efficiency for small data (such as NSNumber) is significantly higher than file.
  • File: More efficient than SQLite for accessing large data, such as high-quality images.

Initialization method

//.h
- (instancetype)init UNAVAILABLE_ATTRIBUTE;
+ (instancetype)new UNAVAILABLE_ATTRIBUTE;
- (nullable instancetype)initWithPath:(NSString *)path
                      inlineThreshold:(NSUInteger)threshold NS_DESIGNATED_INITIALIZER;

//.m
- (instancetype)init {
    @throw [NSException exceptionWithName:@"YYDiskCache init error" reason:@"YYDiskCache must be initialized with a path. Use 'initWithPath:' or 'initWithPath:inlineThreshold:' instead." userInfo:nil];
    return [self initWithPath:@"" inlineThreshold:0];
}

LRU cache policy

LRU implementation:

  1. Insert new data into the cache table;
  2. Whenever the cache is hit (that is, cached data is accessed), the access time of the corresponding row in the table is updated;
  3. When the cache needs to be trimmed, the rows with the oldest access time are deleted.

YYKVStorage

YYKVStorage is a key-value storage based on SQLite and file system. In general, we should not use this class directly. Instances of this class are not thread-safe, and you need to ensure that only one thread can access the instance at the same time. If you really need to process large amounts of data in multiple threads, you should split the data into multiple KVStorage instances (sharding).

/**
 Storage type, indicating where YYKVStorageItem.value is stored.

 @discussion In general, writing data to SQLite is faster than to external
 files, but read performance depends on data size. In my tests (iPhone 6 64G),
 reading from external files was faster than from SQLite when the data was
 large (over 20KB).
 */
typedef NS_ENUM(NSUInteger, YYKVStorageType) {
    YYKVStorageTypeFile = 0,   // Value is stored as a file in the file system
    YYKVStorageTypeSQLite = 1, // Value is stored as binary data in SQLite
    YYKVStorageTypeMixed = 2,  // Value is stored in either form, chosen per item
};

@interface YYKVStorage : NSObject

#pragma mark - Attribute
@property (nonatomic, readonly) NSString *path;        ///< The path of this storage.
@property (nonatomic, readonly) YYKVStorageType type;  ///< The type of this storage.
@property (nonatomic) BOOL errorLogsEnabled;           ///< Set `YES` to enable error logs for debug.

#pragma mark - Initializer
- (nullable instancetype)initWithPath:(NSString *)path type:(YYKVStorageType)type NS_DESIGNATED_INITIALIZER;

#pragma mark - Save Items
- (BOOL)saveItem:(YYKVStorageItem *)item;
- (BOOL)saveItemWithKey:(NSString *)key value:(NSData *)value;
- (BOOL)saveItemWithKey:(NSString *)key value:(NSData *)value filename:(nullable NSString *)filename extendedData:(nullable NSData *)extendedData;

#pragma mark - Remove Items
- (BOOL)removeItemForKey:(NSString *)key;
- (BOOL)removeItemsLargerThanSize:(int)size;
...
- (BOOL)removeAllItems;
- (void)removeAllItemsWithProgressBlock:(nullable void(^)(int removedCount, int totalCount))progress
                               endBlock:(nullable void(^)(BOOL error))end;

#pragma mark - Get Items
- (nullable YYKVStorageItem *)getItemForKey:(NSString *)key;
- (nullable NSData *)getItemValueForKey:(NSString *)key;
...

#pragma mark - Get Storage Status
- (BOOL)itemExistsForKey:(NSString *)key;
- (int)getItemsCount;
- (int)getItemsSize;

@end

YYKVStorageItem

This class corresponds to the data stored in the SQLite table:

Description                                YYKVStorageItem  SQLite column
Key                                        key              key
Value stored as binary data                value            inline_data
File name when stored in the file system   filename         filename
Size                                       size             size
Modification time                          modTime          modification_time
Access time                                accessTime       last_access_time
Extended data                              extendedData     extended_data

Store to file or SQLite

YYDiskCache uses a threshold of 20KB to decide whether a value is stored in the file system or in SQLite. The threshold comes from the author's benchmarks.

  • If the value is smaller than this threshold, SQLite access is faster.
  • If the value is greater than the threshold, the access speed is faster using the file system.

This threshold is specified at initialization time:

- (instancetype)initWithPath:(NSString *)path {
    return [self initWithPath:path inlineThreshold:1024 * 20]; // 20KB
}

/**
 threshold == 0: everything goes to files; threshold == NSUIntegerMax:
 everything goes to SQLite; anything in between: mixed storage by value size.
 */
- (instancetype)initWithPath:(NSString *)path
             inlineThreshold:(NSUInteger)threshold {
    YYKVStorageType type;
    if (threshold == 0) {
        type = YYKVStorageTypeFile;
    } else if (threshold == NSUIntegerMax) {
        type = YYKVStorageTypeSQLite;
    } else {
        type = YYKVStorageTypeMixed;
    }
    ...
}

Therefore, if the threshold is not specified or is between 0 and NSUIntegerMax, the storage type of YYKVStorage is YYKVStorageTypeMixed. The hybrid storage is based on the threshold value.

But the source code shows that even when the storage type is YYKVStorageTypeFile, the corresponding entry is still stored in SQLite, because SQLite is needed to implement LRU eviction of cached objects.

If the storage type of YYKVStorage is YYKVStorageTypeSQLite, the corresponding objects are not stored in the file system.

                       save                  read                   delete
YYKVStorageTypeSQLite  SQLite                SQLite                 SQLite
YYKVStorageTypeFile    file system + SQLite  SQLite + file system   SQLite + file system
YYKVStorageTypeMixed   file system + SQLite  SQLite + file system   SQLite + file system
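The size-based routing can be sketched as follows; `files` and `sqlite_rows` are hypothetical stand-ins for the file system and the SQLite table, not YYKVStorage's real helpers:

```python
INLINE_THRESHOLD = 20 * 1024   # 20KB, the default YYDiskCache threshold

files, sqlite_rows = {}, {}    # stand-ins for the file system and SQLite

def save(key, value):
    # The metadata row always goes to SQLite so LRU eviction works;
    # only the value's location depends on its size.
    if len(value) > INLINE_THRESHOLD:
        files[key] = value                                    # big value -> file system
        sqlite_rows[key] = {"filename": key, "inline_data": None}
    else:
        sqlite_rows[key] = {"filename": None, "inline_data": value}

save("small", b"x" * 100)
save("big", b"x" * (64 * 1024))
assert sqlite_rows["small"]["inline_data"] is not None   # inlined in SQLite
assert sqlite_rows["big"]["filename"] == "big" and "big" in files
```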

save

/**
 Store data.
 @param key          key
 @param value        value
 @param filename     file name used when the value is stored in the file system
 @param extendedData extended data
 */
- (BOOL)saveItemWithKey:(NSString *)key value:(NSData *)value filename:(NSString *)filename extendedData:(NSData *)extendedData {
    if (key.length == 0 || value.length == 0) return NO;
    if (_type == YYKVStorageTypeFile && filename.length == 0) {
        return NO;
    }
    
    if (filename.length) {
        // Write the value to the file system first
        if (![self _fileWriteWithName:filename data:value]) {
            return NO;
        }
        // If the SQLite save fails, delete the file just written and bail out
        if (![self _dbSaveWithKey:key value:value fileName:filename extendedData:extendedData]) {
            [self _fileDeleteWithName:filename];
            return NO;
        }
        return YES;
    } else {
        // No file name: the value goes into SQLite.
        if (_type != YYKVStorageTypeSQLite) {
            // If the storage type is not SQLite-only, an old file may exist
            // for this key; find and delete it before saving to SQLite.
            NSString *filename = [self _dbGetFilenameWithKey:key];
            if (filename) {
                [self _fileDeleteWithName:filename];
            }
        }
        return [self _dbSaveWithKey:key value:value fileName:nil extendedData:extendedData];
    }
}

delete

- (BOOL)removeItemsToFitCount:(int)maxCount {
    if (maxCount == INT_MAX) return YES;
    if (maxCount <= 0) return [self removeAllItems];
    
    int total = [self _dbGetTotalItemCount];
    if (total < 0) return NO;
    if (total <= maxCount) return YES;
    
    NSArray *items = nil;
    BOOL suc = NO;
    do {
        int perCount = 16;
        // Fetch a batch of items ordered by access time, oldest first (LRU)
        items = [self _dbGetItemSizeInfoOrderByTimeAscWithLimit:perCount];
        for (YYKVStorageItem *item in items) {
            if (total > maxCount) {
                if (item.filename) {
                    [self _fileDeleteWithName:item.filename];
                }
                suc = [self _dbDeleteItemWithKey:item.key];
                total--;
            } else {
                break;
            }
            if (!suc) break;
        }
    } while (total > maxCount && items.count > 0 && suc);
    if (suc) [self _dbCheckpoint];
    return suc;
}

read

/**
 Gets all data for the key, including the value (from SQLite, or from the
 file system if the value was stored as a file).
 */
- (YYKVStorageItem *)getItemForKey:(NSString *)key {
    if (key.length == 0) return nil;
    // excludeInlineData controls whether the inline data (the value) is fetched
    YYKVStorageItem *item = [self _dbGetItemWithKey:key excludeInlineData:NO];
    if (item) {
        // Update the access time, which LRU eviction uses later
        [self _dbUpdateAccessTimeWithKey:key];
        if (item.filename) {
            // The value was stored as a file; read it from the file system
            item.value = [self _fileReadWithName:item.filename];
            if (!item.value) {
                // The file is gone; delete the stale SQLite row
                [self _dbDeleteItemWithKey:key];
                item = nil;
            }
        }
    }
    return item;
}

NSMapTable

NSMapTable is a dictionary-like collection, but with broader semantics for how memory is held. Introduced in iOS 6, it is modeled on NSDictionary, with the following differences:

  • Keys and/or values can be held weakly, so an entry is removed when its object is deallocated.
  • It can contain arbitrary pointers (its contents are not constrained to objects).
  • You can configure an NSMapTable instance to operate on arbitrary pointers, not just objects.

Note: When configuring the mapping table, please Note that only the options listed in NSMapTableOptions will make the rest of the API work, including copy, archive, and fast enumeration. While other NSPointerFunctions options are used for some configurations, such as holding arbitrary Pointers, not all combinations of options are valid. With some combinations, NSMapTableOptions may not work properly and may not even be properly initialized.

For more information, see the official NSMapTable document.
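The same idea exists in other runtimes; for example, Python's weakref.WeakValueDictionary behaves like an NSMapTable configured with strong keys and weak values, which is exactly the configuration _globalInstances uses below:

```python
import weakref

class DiskCache:            # hypothetical stand-in for YYDiskCache
    def __init__(self, path):
        self.path = path

# Strong keys, weak values: entries vanish once the value is deallocated.
instances = weakref.WeakValueDictionary()

cache = DiskCache("/tmp/cache")
instances[cache.path] = cache
assert instances.get("/tmp/cache") is cache   # retrievable while alive

del cache                                     # last strong reference gone
assert instances.get("/tmp/cache") is None    # entry removed automatically
```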

Each YYDiskCache instance is registered in _globalInstances, a global NSMapTable, so that an existing instance for a path can be reused quickly:

- (instancetype)initWithPath:(NSString *)path
             inlineThreshold:(NSUInteger)threshold {
    self = [super init];
    if (!self) return nil;
    
    // If there is already a cache object in the path, return it directly
    YYDiskCache *globalCache = _YYDiskCacheGetGlobal(path);
    if (globalCache) return globalCache;
    
    // Otherwise, create a cache object
    YYKVStorage *kv = [[YYKVStorage alloc] initWithPath:path type:type];
    if (!kv) return nil;

    _YYDiskCacheSetGlobal(self);
    
    return self;
}

/// weak reference for all instances
// Store all weakly referenced Cache objects and fetch them by key
static NSMapTable *_globalInstances;
static dispatch_semaphore_t _globalInstancesLock;

static void _YYDiskCacheInitGlobal() {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        _globalInstancesLock = dispatch_semaphore_create(1);
        // An NSMapTable is essentially a dictionary.
        // The difference: NSDictionary keys have copy semantics and values
        // retain semantics, while NSMapTable lets you customize both.
        _globalInstances = [[NSMapTable alloc] initWithKeyOptions:NSPointerFunctionsStrongMemory valueOptions:NSPointerFunctionsWeakMemory capacity:0];
    });
}

static YYDiskCache *_YYDiskCacheGetGlobal(NSString *path) {
    if (path.length == 0) return nil;
    _YYDiskCacheInitGlobal();
    // Semaphore access
    dispatch_semaphore_wait(_globalInstancesLock, DISPATCH_TIME_FOREVER);
    id cache = [_globalInstances objectForKey:path];
    dispatch_semaphore_signal(_globalInstancesLock);
    return cache;
}

static void _YYDiskCacheSetGlobal(YYDiskCache *cache) {
    if (cache.path.length == 0) return;
    _YYDiskCacheInitGlobal();
    dispatch_semaphore_wait(_globalInstancesLock, DISPATCH_TIME_FOREVER);
    [_globalInstances setObject:cache forKey:cache.path];
    dispatch_semaphore_signal(_globalInstancesLock);
}

Thread safety

YYKVStorage is not thread-safe, so you are advised not to use it directly.

However, YYDiskCache supports thread safety, which implements mutex via dispatch_semaphore.

Dispatch_semaphore can also be used as a lock if the total number of signals is set to 1. It performs even better than pthread_mutex when there are no wait conditions, but much worse when there are wait conditions. Its advantage over OSSpinLock is that it does not consume CPU resources while waiting. It is suitable for disk caching.
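A semaphore whose initial value is 1 acts as a mutex. A minimal sketch with Python's threading.Semaphore, mirroring the dispatch_semaphore_wait/dispatch_semaphore_signal pairing:

```python
import threading

sem = threading.Semaphore(1)   # initial value 1 -> behaves as a mutex
counter = 0

def bump(n):
    global counter
    for _ in range(n):
        sem.acquire()          # like dispatch_semaphore_wait(lock, DISPATCH_TIME_FOREVER)
        counter += 1           # critical section
        sem.release()          # like dispatch_semaphore_signal(lock)

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
assert counter == 40_000       # no lost updates under contention
```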

Two semaphore mutex uses in the disk cache:

1. Protecting the global _globalInstances map that stores every YYDiskCache instance:

dispatch_semaphore_wait(_globalInstancesLock, DISPATCH_TIME_FOREVER);
id cache = [_globalInstances objectForKey:path];
// or [_globalInstances setObject:cache forKey:cache.path];
dispatch_semaphore_signal(_globalInstancesLock);

2. Protecting every item operation on _kv (read, write, delete, contains). Two macros make this convenient:

#define Lock() dispatch_semaphore_wait(self->_lock, DISPATCH_TIME_FOREVER)
#define Unlock() dispatch_semaphore_signal(self->_lock)

Architecture

Layering

YYCache has an excellent layered design.

Performance

In fact, performance is invisible and ubiquitous (laughter). It was brought in from the very beginning when we were designing a cache architecture, all the way through to the specifics of our implementation, and ultimately became the determining factor for the quality of the cache we were designing.

The implementation details of YYCache’s performance improvement have been dissected above:

  • Asynchronously release the cache object
  • The choice of the lock
  • Managing YYDiskCache instances through a global NSMapTable
  • The SQL statement cache _dbStmtCache in YYKVStorage
  • Using CoreFoundation in exchange for small performance gains

The choice of the lock

  • The disk cache uses dispatch_semaphore, the memory cache uses pthread_mutex

Dispatch_semaphore is a semaphore, but can also be used as a lock if the total number of signals is set to 1. It performs even better than pthread_mutex when there are no wait conditions, but much worse when there are wait conditions. Its advantage over OSSpinLock is that it does not consume CPU resources while waiting. It is suitable for disk caching.

  • OSSpinLock is no longer used

OSSpinLock is a spin lock, the highest-performance lock. The principle is simple: it just busy-waits in a loop. The downside is that it burns CPU while waiting, so it is unsuitable for long tasks, but otherwise it would be ideal for memory cache access. After OSSpinLock was declared unsafe, third-party libraries replaced it with pthread_mutex. Ibireme, having confirmed OSSpinLock was no longer safe, ran a simple benchmark comparing alternative locks; in the comments of his post "OSSpinLock is no longer safe", he explains why he settled on pthread_mutex.

ibireme: Apple staff said that libobjc's spinlock uses some proprietary method (mach_thread_switch) to donate a high thread priority and avoid priority inversion, but I couldn't find any such logic in the libdispatch source; maybe I missed something. In some of my tests, neither OSSpinLock nor dispatch_semaphore produced particularly noticeable deadlocks, so I wasn't sure whether replacing OSSpinLock with dispatch_semaphore was right. What is certain is that pthread_mutex is safe.

References

  1. Cache replacement policies
  2. Learning from the YYCache source how to design a good cache