PINCache was created a long time ago to deal with threading problems in TMCache, so today I want to see how PINCache handles threads.

After reading NSCache and YYCache a while ago, today I'm reading PINCache's code. At a glance, PINCache and YYCache expose similar interfaces, but PINCache stores every key-value pair in its own file and builds various internal processing mechanisms around that. Today I also go through the PINOperationGroup and PINOperationQueue classes and roughly record how the author uses GCD to reimplement some of NSOperation's functionality.

PINCache directory structure

PINCache conforms to two protocols

The PINCaching protocol declares the exposed interface. Both the memory cache and the disk cache have corresponding interfaces, so the PINCaching protocol unifies them.

The PINCacheObjectSubscripting protocol has two required methods, mainly for convenient subscript access.

- (nullable id)objectForKeyedSubscript:(NSString *)key;

This is what makes reading with id object = cache[@"key"]; work.

- (void)setObject:(nullable id)object forKeyedSubscript:(NSString *)key;

This is what makes writing with cache[@"key"] = object; work.

These details are easy to overlook; knowing they exist is usually enough, and a breakpoint makes them obvious. Just look at the two pictures below.

PINCaching PINCacheObjectSubscripting

As I said, these are the two protocols that are important in PINCache, and all the major classes follow them.


There are also some runtime macros; check them out on GitHub. For a deeper understanding of LLVM, see the official documents: compilation principles, the front end and back end, and dyld's many environment variables and system configuration are all covered there. Most of the configuration items can also be found in Xcode.


The class for disk caching is described below


The memory cache class is described below

PINCache interface design


// 1. the number of bytes on disk
@property (readonly) NSUInteger diskByteCount;
// 2. the disk cache
@property (readonly) PINDiskCache *diskCache;
// 3. the memory cache
@property (readonly) PINMemoryCache *memoryCache;
// the shared singleton
@property (class, strong, readonly) PINCache *sharedCache;

Constructors

- (instancetype)initWithName:(nonnull NSString *)name;
- (instancetype)initWithName:(nonnull NSString *)name rootPath:(nonnull NSString *)rootPath;
// ... the intermediate constructors are omitted
- (instancetype)initWithName:(nonnull NSString *)name
                    rootPath:(nonnull NSString *)rootPath
                  serializer:(nullable PINDiskCacheSerializerBlock)serializer
                deserializer:(nullable PINDiskCacheDeserializerBlock)deserializer
                  keyEncoder:(nullable PINDiskCacheKeyEncoderBlock)keyEncoder
                  keyDecoder:(nullable PINDiskCacheKeyDecoderBlock)keyDecoder
                    ttlCache:(BOOL)ttlCache NS_DESIGNATED_INITIALIZER;

Deprecated methods

All methods in the PINCache (Deprecated) category are deprecated.

Methods used by the official demo

They are all declared in the PINCaching protocol.


The first thing to note is that PINMemoryCache also conforms to the two big protocols, PINCaching and PINCacheObjectSubscripting.

PINMemoryCache's interface design is quite different from YYMemoryCache's, covered earlier.

I call these the three cache metrics: total cost, cost limit, and age limit.

@property (readonly) NSUInteger totalCost;
@property (assign) NSUInteger costLimit;
@property (assign) NSTimeInterval ageLimit;

The general idea of the next property: if set to YES, object lifetimes strictly follow ageLimit; -setObject:forKey:withCost:ageLimit: can override the limit per object, but the value must be greater than zero.

@property (nonatomic, readonly, getter=isTTLCache) BOOL ttlCache;

Two properties that mean exactly what their names say:

@property (assign) BOOL removeAllObjectsOnMemoryWarning;
@property (assign) BOOL removeAllObjectsOnEnteringBackground;

Next comes a bunch of lifecycle blocks, which I'll skip, and a singleton meant for external use that is not used internally:

@property (class, strong, readonly) PINMemoryCache *sharedCache;

What follows is a dozen constructors, plus a method that takes a PINOperationQueue and asynchronously trims memory; we'll look at the internal implementation in a second. Below those are a bunch of deprecated methods you can ignore.

In the implementation file, there is

@interface PINMemoryCache ()
@property (copy, nonatomic) NSString *name;
@property (strong, nonatomic) PINOperationQueue *operationQueue;
@property (assign, nonatomic) pthread_mutex_t mutex;
@property (strong, nonatomic) NSMutableDictionary *dictionary;
@property (strong, nonatomic) NSMutableDictionary *createdDates;
@property (strong, nonatomic) NSMutableDictionary *accessDates;
@property (strong, nonatomic) NSMutableDictionary *costs;
@property (strong, nonatomic) NSMutableDictionary *ageLimits;
@end

@implementation PINMemoryCache

@synthesize name = _name;
@synthesize ageLimit = _ageLimit;
@synthesize costLimit = _costLimit;
@synthesize totalCost = _totalCost;
@synthesize ttlCache = _ttlCache;

Seeing @synthesize here is worth a note. A while back a colleague and I declared a block property in a protocol and found that no setter or getter was generated. Not knowing about @synthesize, in every class that adopted the protocol we overrode a setter and a getter by hand and wired up a corresponding block to receive the value. It's kind of funny in hindsight.

@synthesize and @dynamic are opposites: one asks the compiler to synthesize the setter and getter, the other promises they'll be provided elsewhere and suppresses synthesis.

Now, the thing to notice with @synthesize is that if you override both the setter and the getter, the compiler will not generate a backing instance variable. If you write @synthesize name; without an alias, the ivar gets exactly the same name as the property; the auto-synthesis default is equivalent to @synthesize name = _name, where _name is the automatically generated ivar.

The main logic

Each dictionary stores a different attribute for the same key: _dictionary holds the values, _createdDates the creation dates, _accessDates the access dates, _costs the memory cost, and _ageLimits the per-key age limits. Let's look at the following method.

  • (void)setObject:(id)object forKey:(NSString *)key withCost:(NSUInteger)cost ageLimit:(NSTimeInterval)ageLimit;
  1. The method asserts that either ageLimit <= 0, or the cache is a TTL cache and ageLimit > 0; an object-level age limit only takes effect in TTL mode.
  2. The lock is a pthread mutex, initialized with pthread_mutex_init(&_mutex, NULL);.
  3. The willAddObjectBlock callback fires first; then the method locks, writes the value, dates, cost and age limit into the five dictionaries (different information for the same key), updates totalCost, and unlocks.
  4. After unlocking, it calls back didAddObjectBlock.
  5. If costLimit > 0, it trims the cache.

Note: costLimit is 0 by default and must be set for trimming to take effect.

- (void)setObject:(id)object forKey:(NSString *)key withCost:(NSUInteger)cost ageLimit:(NSTimeInterval)ageLimit
{
    NSAssert(ageLimit <= 0.0 || (ageLimit > 0.0 && _ttlCache), @"ttlCache must be set to YES if setting an object-level age limit.");

    if (!key || !object)
        return;

    [self lock];
        PINCacheObjectBlock willAddObjectBlock = _willAddObjectBlock;
        PINCacheObjectBlock didAddObjectBlock = _didAddObjectBlock;
        NSUInteger costLimit = _costLimit;
    [self unlock];

    if (willAddObjectBlock)
        willAddObjectBlock(self, key, object);

    [self lock];
        NSNumber *oldCost = _costs[key];
        if (oldCost)
            _totalCost -= [oldCost unsignedIntegerValue];

        NSDate *now = [NSDate date];
        _dictionary[key] = object;
        _createdDates[key] = now;
        _accessDates[key] = now;
        _costs[key] = @(cost);

        if (ageLimit > 0.0) {
            _ageLimits[key] = @(ageLimit);
        } else {
            [_ageLimits removeObjectForKey:key];
        }

        _totalCost += cost;
    [self unlock];

    if (didAddObjectBlock)
        didAddObjectBlock(self, key, object);

    if (costLimit > 0)
        [self trimToCostByDate:costLimit];
}

Trimming details

- (void)trimToCostLimitByDate:(NSUInteger)limit
{
    if (self.isTTLCache) {
        [self removeExpiredObjects];
    }

    NSUInteger totalCost = 0;
    [self lock];
        totalCost = _totalCost;
        NSArray *keysSortedByAccessDate = [_accessDates keysSortedByValueUsingSelector:@selector(compare:)];
    [self unlock];

    if (totalCost <= limit)
        return;

    for (NSString *key in keysSortedByAccessDate) { // oldest objects first
        [self removeObjectAndExecuteBlocksForKey:key];

        [self lock];
            totalCost = _totalCost;
        [self unlock];
        if (totalCost <= limit)
            break;
    }
}

Note this line in the removal policy: NSArray *keysSortedByAccessDate = [_accessDates keysSortedByValueUsingSelector:@selector(compare:)];. compare: is the standard comparison method implemented by Foundation value classes such as NSString, NSNumber and NSDate; like an NSComparator block, its return value determines ascending or descending order, and it is often used together with the collections' block-based enumeration methods. The call here is worth learning from: comparators are usually created for, and passed, the specific type being compared.
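The same comparator idea exists in most languages: the sign of the comparator's return value decides the order. A Python sketch (made-up data, not PINCache code) of sorting keys by their access date, the way keysSortedByValueUsingSelector: does:

```python
from functools import cmp_to_key

# Hypothetical access-date table: key -> timestamp, standing in for _accessDates.
access_dates = {"a": 300, "b": 100, "c": 200}

def compare(lhs, rhs):
    # Negative result puts lhs first (ascending); swap the operands for descending.
    return (lhs > rhs) - (lhs < rhs)

# Keys sorted by their values, oldest access first -- the trim loop's iteration order.
keys_sorted_by_access_date = sorted(
    access_dates,
    key=cmp_to_key(lambda a, b: compare(access_dates[a], access_dates[b])))
print(keys_sorted_by_access_date)  # ['b', 'c', 'a']
```

The trim loop then walks this list front to back, removing least recently accessed keys first.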

Asynchronous policy for PINMemoryCache

All of this goes through the PINOperationQueue object. For now you can think of it as a serial queue that asynchronous tasks are scheduled onto; the PINOperationQueue section below takes a closer look.

Memory Cache Summary

  1. The memory cache uses five different dictionaries to store the value and attribute state of the same key. The cache strategy is basically the same as seen before.
  2. The trimming policy executes according to costLimit, ageLimit, and so on. Both default to 0, in which case nothing is removed.
  3. The removal policies all sort with the compare: method (an NSComparator) and then remove in a loop, so the complexity is O(n log n) from the sort.
  4. A TTL switch, ttlCache, defaults to NO; once enabled, object lifetimes are governed entirely by ageLimit.
  5. The lock in the memory cache is a pthread_mutex.
  6. Asynchronous operations in the memory cache go through PINOperationQueue, currently a serial queue by default; more on it later. For now, all you need to know about the asynchronous methods is that each one calls the synchronous method from within a block.
  7. There is also removeAllObjectsOnMemoryWarning for memory warnings and removeAllObjectsOnEnteringBackground for entering the background, both defaulting to YES.
  8. What's left is detail: a few less common methods, nothing hard to understand.


PINDiskCache must also conform to the PINCaching and PINCacheObjectSubscripting protocols. Its interface falls into three groups: initializers, asynchronous methods, and synchronous methods.

PINDiskCache Interface design

The designated initializer takes 9 parameters, 11 counting the hidden self and _cmd. Arguments beyond what fits in registers (8 integer argument registers on arm64) are passed on the stack instead, which is why many frameworks try to keep interfaces to about 6 parameters.

// the final initializer
- (instancetype)initWithName:(nonnull NSString *)name
                      prefix:(nonnull NSString *)prefix
                    rootPath:(nonnull NSString *)rootPath
                  serializer:(nullable PINDiskCacheSerializerBlock)serializer
                deserializer:(nullable PINDiskCacheDeserializerBlock)deserializer
                  keyEncoder:(nullable PINDiskCacheKeyEncoderBlock)keyEncoder
                  keyDecoder:(nullable PINDiskCacheKeyDecoderBlock)keyDecoder
              operationQueue:(nonnull PINOperationQueue *)operationQueue
                    ttlCache:(BOOL)ttlCache NS_DESIGNATED_INITIALIZER;

1. name: the cache name.
2. prefix: the prefix of the folder name.
3. rootPath: the root directory.
4. serializer: serializes data for local storage.
5. deserializer: the reverse.
6. keyEncoder: "encryption" (not real encryption, just a URL-style percent transform of the key).
7. keyDecoder: "decryption" (same idea, reversed).
8. operationQueue: the queue all asynchronous work runs on.
9. ttlCache: same as in the memory cache; the lifecycle is handled purely by ageLimit.

The default value for each parameter lives in the intermediate constructors; each layer is designed slightly differently and is easy to read on your own.

Asynchronous methods

It’s a few, basically a custom _queue that runs through the whole world, calls a synchronized method in a block, and then throws the entire task into the execution list, and it’s always going to happen. This is clear when I write PINOperationQueue. For now, I’m only interested in the synchronization implementation

- (void)setObjectAsync:(id <NSCoding>)object forKey:(NSString *)key withCost:(NSUInteger)cost ageLimit:(NSTimeInterval)ageLimit completion:(nullable PINCacheObjectBlock)block;

- (void)removeObjectForKeyAsync:(NSString *)key completion:(nullable PINDiskCacheObjectBlock)block;


Synchronous methods

Put simply: each asynchronous method dispatches onto the queue, performs the operation asynchronously, and calls back; each synchronous method internally locks, operates, and unlocks to keep the data safe. That's probably the simplest way to think about it.
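As a sketch of that pattern (hypothetical names, not the PINDiskCache API): the async method only queues a block; the block calls the locking synchronous method and then fires the completion.

```python
import threading
import queue

class TinyDiskCache:
    """Sketch: sync methods lock/operate/unlock; async methods queue the sync call."""
    def __init__(self):
        self._lock = threading.Lock()
        self._store = {}
        self._operation_queue = queue.Queue()   # stands in for PINOperationQueue
        threading.Thread(target=self._drain, daemon=True).start()

    def _drain(self):
        while True:
            self._operation_queue.get()()       # run queued blocks serially

    # synchronous method: lock, operate, unlock
    def set_object(self, obj, key):
        with self._lock:
            self._store[key] = obj

    # asynchronous method: wrap the sync call plus the completion in a block
    def set_object_async(self, obj, key, completion=None):
        def block():
            self.set_object(obj, key)
            if completion:
                completion(self, key, obj)
        self._operation_queue.put(block)

done = threading.Event()
cache = TinyDiskCache()
cache.set_object_async("value", "key", lambda c, k, o: done.set())
done.wait(timeout=2)
print(cache._store["key"])  # value
```

The point is that the asynchronous layer adds no logic of its own; data safety lives entirely in the synchronous method's lock.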

- (void)synchronouslyLockFileAccessWhileExecutingBlock:(PIN_NOESCAPE PINCacheBlock)block;
// fetch
- (nullable id <NSCoding>)objectForKey:(NSString *)key;
- (nullable NSURL *)fileURLForKey:(nullable NSString *)key;
// add
- (void)setObject:(nullable id <NSCoding>)object forKey:(NSString *)key;
// trim by size
- (void)trimToSize:(NSUInteger)byteCount;
// trim by date
- (void)trimToSizeByDate:(NSUInteger)byteCount;
// enumerate the data
- (void)enumerateObjectsWithBlock:(PIN_NOESCAPE PINDiskCacheFileURLEnumerationBlock)block;

The interface definitions are mainly worth studying for the design thinking; read in isolation there is nothing very special about them.

PINDiskCache some default properties and parameter parsing

The following are the default settings. This is not the literal body of the initializer; it records the core defaults of initialization rather than the exact objects being initialized.

- (instancetype)initWithName:(NSString *)name prefix:(NSString *)prefix rootPath:(NSString *)rootPath serializer:(PINDiskCacheSerializerBlock)serializer deserializer:(PINDiskCacheDeserializerBlock)deserializer keyEncoder:(PINDiskCacheKeyEncoderBlock)keyEncoder keyDecoder:(PINDiskCacheKeyDecoderBlock)keyDecoder operationQueue:(PINOperationQueue *)operationQueue ttlCache:(BOOL)ttlCache
{
    // the suffix of the folder name, after the prefix
    name = @"PINDiskCacheShared";
    // the front part of the folder name
    prefix = @"com.pinterest.PINDiskCache";
    // the file path
    rootPath = [NSSearchPathForDirectoriesInDomains(NSCachesDirectory, NSUserDomainMask, YES) objectAtIndex:0];
    // serialization
    serializer: [NSKeyedArchiver archivedDataWithRootObject:object requiringSecureCoding:NO error:&error];
    // deserialization
    deserializer: [NSKeyedUnarchiver unarchiveObjectWithData:data];
    // "encryption": really just percent-escaping the characters .:/%
    keyEncoder: [decodedKey stringByAddingPercentEncodingWithAllowedCharacters:[[NSCharacterSet characterSetWithCharactersInString:@".:/%"] invertedSet]];
    // "decryption": removing the corresponding percent encoding
    keyDecoder: [encodedKey stringByRemovingPercentEncoding];
    // byte count; initializeDiskProperties later sets it to the on-disk size
    _byteCount = 0;
    // 50 MB by default
    _byteLimit = 50 * 1024 * 1024;
    // 30 days by default
    _ageLimit = 60 * 60 * 24 * 30;
    // _metadata defaults to an empty dictionary
    // _cacheURL, the file path, defaults to rootPath + _prefix + name
    // two condition locks; the conditions are initialized to YES, so the
    // conditional waits below are effectively skipped.
    // "While this may not be true if success is false, it's better than deadlocking later."
    _diskWritableCondition
    _diskStateKnownCondition
}

The condition locks _diskWritableCondition and _diskStateKnownCondition are paired with the flags diskWritable and diskStateKnown, both YES by default, so the conditional waits are effectively disabled. Apart from the pthread_cond_broadcast call in the _locked_createCacheDirectory file-creation method, which activates the conditions, nothing in the code signals them. The idea is equivalent to a semaphore: after taking the lock, if the condition is open, wait for the condition's signal before continuing. Since they are barely exercised in the code at the moment, a quick read is enough.

PINDiskCache trimming policy

  • (void)trimDiskToSize:(NSUInteger)trimByteCount;

Again, the comparator is used. The code is as follows.

- (void)trimDiskToSize:(NSUInteger)trimByteCount
{
    NSMutableArray *keysToRemove = nil;

    // lockForWriting is the mutex lock plus a wait until the disk is writable
    [self lockForWriting];
        if (_byteCount > trimByteCount) {
            keysToRemove = [[NSMutableArray alloc] init];

            // order keys by object size, ascending
            NSArray *keysSortedBySize = [_metadata keysSortedByValueUsingComparator:^NSComparisonResult(PINDiskCacheMetadata * _Nonnull obj1, PINDiskCacheMetadata * _Nonnull obj2) {
                return [obj1.size compare:obj2.size];
            }];
            // collect keys to delete, largest objects first;
            // returning [obj2.size compare:obj1.size] instead would yield the descending array directly

            NSUInteger bytesSaved = 0;
            for (NSString *key in [keysSortedBySize reverseObjectEnumerator]) { // largest objects first
                [keysToRemove addObject:key];
                NSNumber *byteSize = _metadata[key].size;
                if (byteSize) {
                    bytesSaved += [byteSize unsignedIntegerValue];
                }
                if (_byteCount - bytesSaved <= trimByteCount) {
                    break;
                }
            }
        }
    [self unlock];

    for (NSString *key in keysToRemove) {
        [self removeFileAndExecuteBlocksForKey:key];
    }
}

The core of trimming is size: keys are sorted by the size attribute of the metadata objects and removed from largest to smallest in a loop until the size requirement is met. The overall complexity is dominated by the comparator sort, and even a system-implemented comparison sort is basically O(n log n).
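The trim loop reduces to: sort ascending by size, then walk from the back so the largest objects go first, stopping once enough bytes have been saved. A Python sketch with made-up metadata:

```python
# Hypothetical metadata: key -> byte size, standing in for _metadata[key].size.
metadata = {"a": 10, "b": 40, "c": 25, "d": 5}
byte_count = sum(metadata.values())  # 80, like _byteCount

def keys_to_remove(trim_byte_count):
    if byte_count <= trim_byte_count:
        return []
    # sort ascending by size, then walk in reverse: largest objects first
    keys_sorted_by_size = sorted(metadata, key=lambda k: metadata[k])
    removed, saved = [], 0
    for key in reversed(keys_sorted_by_size):
        removed.append(key)
        saved += metadata[key]
        if byte_count - saved <= trim_byte_count:
            break
    return removed

print(keys_to_remove(30))  # ['b', 'c']: saves 65 bytes, 80 - 65 = 15 <= 30
```

Swapping the sort key for a last-modified date gives the date-based trim in the next method, with the oldest objects removed first instead of the largest.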

Trimming by date works just like trimming by size, comparing the lastModifiedDate attribute instead; the result is an array ordered from oldest to newest, so the least recently used data is removed first each time. The method follows.

  • (void)trimDiskToSizeByDate:(NSUInteger)trimByteCount

- (void)trimDiskToSizeByDate:(NSUInteger)trimByteCount
{
    if (self.isTTLCache) {
        [self removeExpiredObjects];
    }

    NSMutableArray *keysToRemove = nil;

    [self lockForWriting];
        if (_byteCount > trimByteCount) {
            keysToRemove = [[NSMutableArray alloc] init];

            // last modified date stands in for last access
            NSArray *keysSortedByLastModifiedDate = [_metadata keysSortedByValueUsingComparator:^NSComparisonResult(PINDiskCacheMetadata * _Nonnull obj1, PINDiskCacheMetadata * _Nonnull obj2) {
                return [obj1.lastModifiedDate compare:obj2.lastModifiedDate];
            }];

            NSUInteger bytesSaved = 0;
            // least recently accessed objects first
            for (NSString *key in keysSortedByLastModifiedDate) {
                [keysToRemove addObject:key];
                NSNumber *byteSize = _metadata[key].size;
                if (byteSize) {
                    bytesSaved += [byteSize unsignedIntegerValue];
                }
                if (_byteCount - bytesSaved <= trimByteCount) {
                    break;
                }
            }
        }
    [self unlock];

    for (NSString *key in keysToRemove) {
        [self removeFileAndExecuteBlocksForKey:key];
    }
}

Note that the trim loops above do not touch byteSize or _metadata themselves; that bookkeeping happens in the method below, which also invokes the will/did-remove blocks with a nil value.

  • (BOOL)removeFileAndExecuteBlocksForKey:(NSString *)key

- (BOOL)removeFileAndExecuteBlocksForKey:(NSString *)key
{
    NSURL *fileURL = [self encodedFileURLForKey:key];
    if (!fileURL) {
        return NO;
    }

    // We only need to lock until writable at the top because once writable, always writable
    [self lockForWriting];
        if (![[NSFileManager defaultManager] fileExistsAtPath:[fileURL path]]) {
            [self unlock];
            return NO;
        }

        PINDiskCacheObjectBlock willRemoveObjectBlock = _willRemoveObjectBlock;
        if (willRemoveObjectBlock) {
            [self unlock];
            willRemoveObjectBlock(self, key, nil);
            [self lock];
        }

        // move the file to the trash folder
        BOOL trashed = [PINDiskCache moveItemAtURLToTrash:fileURL];
        if (!trashed) {
            [self unlock];
            return NO;
        }

        // empty the trash folder
        [PINDiskCache emptyTrash];

        NSNumber *byteSize = _metadata[key].size;
        if (byteSize)
            self.byteCount = _byteCount - [byteSize unsignedIntegerValue]; // atomic

        [_metadata removeObjectForKey:key];

        PINDiskCacheObjectBlock didRemoveObjectBlock = _didRemoveObjectBlock;
        if (didRemoveObjectBlock) {
            [self unlock];
            _didRemoveObjectBlock(self, key, nil);
            [self lock];
        }
    [self unlock];

    return YES;
}

So much for the disk caching strategies; later we can come back and compare the removal strategies of the different caching frameworks.

Add and delete

set stores directly and remove deletes directly; create, read, update and delete are all constant-time dictionary operations. The code speaks for itself.

_metadata stores each key and its corresponding PINDiskCacheMetadata object (mainly attributes such as access time and size). The actual file name accessed on disk is the percent-encoded form of the key.

Lock in the disk cache

A pthread mutex is used:

@property (assign, nonatomic) pthread_mutex_t mutex;

- (void)lock
{
    __unused int result = pthread_mutex_lock(&_mutex);
    NSAssert(result == 0, @"Failed to lock PINDiskCache %@. Code: %d", self, result);
}

- (void)unlock
{
    __unused int result = pthread_mutex_unlock(&_mutex);
    NSAssert(result == 0, @"Failed to unlock PINDiskCache %@. Code: %d", self, result);
}

In use, the pattern is lock, operate, unlock. Note that a plain mutex, unlike a recursive lock, cannot be locked again while held. A mutex can, however, be paired with condition variables for cross-thread synchronization, as in the two (little-used) condition locks shown above.
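A sketch of the mutex-plus-condition pairing in Python (threading.Condition wraps exactly this pthread combination; names are made up): one thread waits under the lock for a "disk writable" flag, another sets the flag and broadcasts, mirroring the _diskWritableCondition idea.

```python
import threading

lock = threading.Condition()       # a mutex paired with a condition variable
disk_writable = False
log = []

def writer():
    # lock, then wait for the condition to be signalled before continuing
    with lock:
        while not disk_writable:   # re-check the flag to guard against spurious wakeups
            lock.wait()
        log.append("write")        # safe: the disk is now writable

def create_cache_directory():
    global disk_writable
    with lock:
        disk_writable = True
        lock.notify_all()          # like pthread_cond_broadcast

t = threading.Thread(target=writer)
t.start()
create_cache_directory()
t.join(timeout=2)
print(log)  # ['write']
```

Because the flag starts at its "open" value in PINDiskCache, the waits there are effectively skipped, which is why the conditions barely show up in practice.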

Threads in the disk cache


What is noteworthy is the following method

+ (dispatch_queue_t)sharedTrashQueue
{
    static dispatch_queue_t trashQueue;
    static dispatch_once_t predicate;
    dispatch_once(&predicate, ^{
        NSString *queueName = [[NSString alloc] initWithFormat:@"%@.trash", PINDiskCachePrefix];
        trashQueue = dispatch_queue_create([queueName UTF8String], DISPATCH_QUEUE_SERIAL);
        dispatch_set_target_queue(trashQueue, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0));
    });
    return trashQueue;
}

It uses the GCD function dispatch_set_target_queue on a serial queue. There are basically two kinds of use:

  1. dispatch_set_target_queue(q1, targetQueue); makes q1 execute at the priority of the target queue.
  2. Funneling several queues into one target queue serializes their execution.

dispatch_queue_t q1 = dispatch_queue_create("q1", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t q2 = dispatch_queue_create("q2", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t q3 = dispatch_queue_create("q3", DISPATCH_QUEUE_SERIAL);
dispatch_queue_t targetQueue = dispatch_queue_create("serial", DISPATCH_QUEUE_SERIAL);

dispatch_set_target_queue(q1, targetQueue);
dispatch_set_target_queue(q2, targetQueue);
dispatch_set_target_queue(q3, targetQueue);

dispatch_async(q1, ^{ NSLog(@"1"); });
dispatch_async(q2, ^{ NSLog(@"2"); });
dispatch_async(q3, ^{ NSLog(@"3"); });

// prints 1 2 3 in order; without the shared target queue the order is not guaranteed

sharedTrashQueue is therefore a serial queue targeted at the very-low-priority DISPATCH_QUEUE_PRIORITY_BACKGROUND global queue. sharedLock is used in combination with sharedTrashURL.

sharedLock is an NSLock, which wraps a pthread_mutex of the mutex type. sharedTrashURL lives under tmp; garbage files are moved into it and then cleaned up automatically.

Some less commonly used classes


As the name says, NSProcessInfo is a class for getting information about the current process. It is a singleton that exposes process info, environment variables, the number of active processors, and so on; very useful for monitoring. See the official documentation for details; knowing it exists is enough until you need it.

[[NSProcessInfo processInfo] globallyUniqueString] is used to name the trash folder created under tmp; on removal, files are first moved into that folder and then the corresponding data is deleted.


NSCharacterSet is a character-set class for string processing and is well worth a look.

NSString *encodedString = [decodedKey stringByAddingPercentEncodingWithAllowedCharacters:[[NSCharacterSet characterSetWithCharactersInString:@".:/%"] invertedSet]];

Roughly: characters outside the allowed set are percent-encoded. Using it makes this clearer. Take @"7:sjf7.8sf990s//.:12356756576":

With the set @".:/%" passed directly as the allowed characters, everything else is encoded:
%37:%73%6A%66%37.%38%73%66%39%39%30%73//.:%31%32%33%35%36%37%35%36%35%37%36

With invertedSet, only the characters in .:/% are encoded:
7%3Asjf7%2E8sf990s%2F%2F%2E%3A12356756576

So in PINDiskCache a file whose original name decodes to com.abc is stored under the "encrypted" name com%2Eabc, and each access first "decrypts" the stored name back; that is roughly how it works. The % sign itself is included in @".:/%", so percent signs in keys are escaped too. PINDiskCache "encrypts" and "decrypts" file names with this NSURL-style percent conversion, while YYCache instead MD5-hashes the file name when storing, using the 128-bit digest as the name.
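The transform is easy to sketch: percent-escape just the characters in .:/% and reverse it with ordinary percent decoding (Python's urllib.parse.unquote behaves like stringByRemovingPercentEncoding):

```python
from urllib.parse import unquote

UNSAFE = set(".:/%")  # the characters PINDiskCache escapes in keys

def encode_key(decoded_key):
    # mirror stringByAddingPercentEncodingWithAllowedCharacters:[... invertedSet]:
    # only the characters in UNSAFE become %XX escapes
    return "".join("%%%02X" % ord(ch) if ch in UNSAFE else ch
                   for ch in decoded_key)

print(encode_key("com.abc"))  # com%2Eabc
print(unquote("com%2Eabc"))   # com.abc
```

Escaping % itself keeps the mapping reversible: any literal percent in a key becomes %25 rather than being mistaken for an escape on the way back.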

Disk Cache Summary

  1. Adds, deletes, updates and lookups in the disk cache go through a dictionary, so they are constant time.
  2. The trimming strategy sorts by attributes such as size and lastModifiedDate (and removes expired objects by ageLimit), then removes in a loop until the limit is met. It's O(n log n) because of the sort.
  3. The lock for the disk cache is a pthread mutex; two condition locks were written into the project but are barely used.
  4. To remove data, it is first moved into a trash folder under tmp named with a globally unique string obtained from NSProcessInfo, then removed on a serial low-priority queue, with the NSLock guarding the operation.
  5. Plenty of good details remain that are worth a closer look in the source.

Directory structure for PINOperation

PINOperationQueue and PINOperationGroup are the author's wrappers around GCD. Before reading PINCache I assumed they were wrappers around the native NSOperation. The important one is PINOperationQueue; in my opinion the group is not much different from GCD's own dispatch groups.

For those unfamiliar with GCD itself, PINOperation may be difficult to read…


Looking through PINOperationGroup's calls and internal implementation, tasks added to the group are still executed as operations on the _operationQueue object, which is not very different from using GCD's dispatch groups directly.

PINGroupOperationReference is a custom protocol that inherits from the NSObject protocol; internally a category makes NSNumber adopt it.

NSNumber's inheritance chain is NSNumber > NSValue > NSObject, which already conforms to the NSObject protocol, so plain NSNumber would do basically the same thing; the category doesn't extend it much.

@interface NSNumber (PINGroupOperationQueue) <PINGroupOperationReference>

External calls

- (void)setObjectAsync:(nonnull id)object forKey:(nonnull NSString *)key withCost:(NSUInteger)cost ageLimit:(NSTimeInterval)ageLimit completion:(nullable PINCacheObjectBlock)block
{
    if (!key || !object)
        return;

    PINOperationGroup *group = [PINOperationGroup asyncOperationGroupWithQueue:_operationQueue];

    [group addOperation:^{
        [self->_memoryCache setObject:object forKey:key withCost:cost ageLimit:ageLimit];
    }];
    [group addOperation:^{
        [self->_diskCache setObject:object forKey:key withAgeLimit:ageLimit];
    }];

    if (block) {
        [group setCompletion:^{
            block(self, key, object);
        }];
    }

    [group start];
}

It feels like it’s a little bit of code, but it’s easy to get caught up in it without looking at the custom queue. Pictured above,

The main thing is to look at the logic of the external call

The core is

  1. Operations are added into the group's _operations.
  2. When the group starts, each task is taken out,
  3. wrapped into an operation,
  4. and thrown onto the _operationQueue that runs through the whole show.
  5. Leaving the group and cancelling tasks are easy to follow in the source.

PINOperationQueue (the key class)

There are a few important things to look at first:

  1. PINOperationReference is a protocol conforming to the NSObject protocol; references are all NSNumber objects, the same trick mentioned above.
  2. PINOperation is defined in PINOperationQueue.m. Its block and completions store the code that needs to be executed. Think of a PINOperation object as a node: a wrapper around a task plus some extra bookkeeping information.

PINOperation is the encapsulation of a task, so get a sense of the structure

  1. block: the task to perform.
  2. reference: the task's sequence number.
  3. priority: the task's priority.
  4. identifier: the task's identifier.
  5. data: extra data (of id type) attached to the task.
  6. completions: a task may have many completion blocks, all collected here; on execution the block runs first, then _completions is looped over. By default this is not actually used.

Let’s first look at the default call in PINCache:

- (instancetype)initWithName:(NSString *)name rootPath:(NSString *)rootPath serializer:(PINDiskCacheSerializerBlock)serializer deserializer:(PINDiskCacheDeserializerBlock)deserializer keyEncoder:(PINDiskCacheKeyEncoderBlock)keyEncoder keyDecoder:(PINDiskCacheKeyDecoderBlock)keyDecoder ttlCache:(BOOL)ttlCache
{
    if (!name)
        return nil;

    if (self = [super init]) {
        _name = [name copy];

        //10 may actually be a bit high, but currently much of our threads are blocked on empyting the trash. Until we can resolve that, lets bump this up.
        // the key object
        _operationQueue = [[PINOperationQueue alloc] initWithMaxConcurrentOperations:10];
        _diskCache = [[PINDiskCache alloc] initWithName:_name prefix:PINDiskCachePrefix rootPath:rootPath serializer:serializer deserializer:deserializer keyEncoder:keyEncoder keyDecoder:keyDecoder operationQueue:_operationQueue ttlCache:ttlCache];
        _memoryCache = [[PINMemoryCache alloc] initWithName:_name operationQueue:_operationQueue ttlCache:ttlCache];
    }
    return self;
}

The code creates a queue with a maximum concurrency of 10. The comment admits 10 may be a bit high, but since many threads were blocked emptying the trash, the limit was bumped up to work around it.

The core

From this code we can see that the foundation of all of PINCache's asynchronous safety and implementation is the single _operationQueue object that runs through the entire project.

PINCache, PINMemoryCache, and PINDiskCache all use the same _operationQueue, and PINOperationGroup is handed that same queue, which ensures that all tasks across PINCache are handled and sequenced by the same object.

Starting with

  1. Start from the initWithMaxConcurrentOperations: constructor, examining the properties and variables to understand the structure of the class.
  2. Then start from - (id<PINOperationReference>)scheduleOperation:(dispatch_block_t)block withPriority:(PINOperationQueuePriority)priority; — all external calls funnel into this method.

Digging in via the constructor:
- (instancetype)initWithMaxConcurrentOperations:(NSUInteger)maxConcurrentOperations
{
    return [self initWithMaxConcurrentOperations:maxConcurrentOperations concurrentQueue:dispatch_queue_create("PINOperationQueue Concurrent Queue", DISPATCH_QUEUE_CONCURRENT)];
}

- (instancetype)initWithMaxConcurrentOperations:(NSUInteger)maxConcurrentOperations concurrentQueue:(dispatch_queue_t)concurrentQueue
{
    if (self = [super init]) {
        NSAssert(maxConcurrentOperations > 0, @"Max concurrent operations must be greater than 0.");
        _maxConcurrentOperations = maxConcurrentOperations;
        _operationReferenceCount = 0;

        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        // Mutex must be recursive to allow scheduling of operations from within operations
        pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
        pthread_mutex_init(&_lock, &attr);

        _group = dispatch_group_create();

        _serialQueue = dispatch_queue_create("PINOperationQueue Serial Queue", DISPATCH_QUEUE_SERIAL);
        _concurrentQueue = concurrentQueue;

        //Create a queue with max - 1 because this plus the serial queue add up to max.
        _concurrentSemaphore = dispatch_semaphore_create(_maxConcurrentOperations - 1);
        _semaphoreQueue = dispatch_queue_create("PINOperationQueue Serial Semaphore Queue", DISPATCH_QUEUE_SERIAL);

        _queuedOperations = [[NSMutableOrderedSet alloc] init];
        _lowPriorityOperations = [[NSMutableOrderedSet alloc] init];
        _defaultPriorityOperations = [[NSMutableOrderedSet alloc] init];
        _highPriorityOperations = [[NSMutableOrderedSet alloc] init];

        _referenceToOperations = [NSMapTable weakToWeakObjectsMapTable];
        _identifierToOperations = [NSMapTable weakToWeakObjectsMapTable];
    }
    return self;
}

The initializer is mostly routine setup. Two things stand out: the shared instance sizes its concurrency from the number of active processor cores currently available on the machine, and unlike the plain mutexes used elsewhere, _lock has to be a recursive lock.

When I first read this, a few questions came up:

    1. Why is the maximum concurrency set to 10 on initialization?
    2. Why a recursive lock here? At first glance a plain mutex looks sufficient if only the serial queue executed tasks asynchronously.
    3. Why maintain three queues: one concurrent queue and two serial queues?

_group — the dispatch group

Why keep a dispatch group at all? With the semaphore in place, the group looks unnecessary at first. So let's search for every use of dispatch_group_enter, dispatch_group_leave, dispatch_group_wait, and dispatch_group_notify.

dispatch_group_enter appears in exactly one place:

```objc
- (void)locked_addOperation:(PINOperation *)operation
{
  NSMutableOrderedSet *queue = [self operationQueueWithPriority:operation.priority];

  /***** enter *****/
  dispatch_group_enter(_group);
  [queue addObject:operation];
  [_queuedOperations addObject:operation];

  [_referenceToOperations setObject:operation forKey:operation.reference];
  if (operation.identifier != nil) {
    [_identifierToOperations setObject:operation forKey:operation.identifier];
  }
}
```

dispatch_group_wait is used only here:

```objc
- (void)waitUntilAllOperationsAreFinished
{
  [self scheduleNextOperations:NO];
  dispatch_group_wait(_group, DISPATCH_TIME_FOREVER);
}
```

dispatch_group_leave is called from - (void)scheduleNextOperations:(BOOL)onlyCheckSerial when an operation finishes, and when a task is canceled:

```objc
- (BOOL)locked_cancelOperation:(id<PINOperationReference>)operationReference
{
  PINOperation *operation = [_referenceToOperations objectForKey:operationReference];
  BOOL success = [self locked_removeOperation:operation];
  if (success) {
    dispatch_group_leave(_group);
  }
  return success;
}
```

From these call sites: enter fires when a task is added, leave fires when a task finishes or is canceled, and wait blocks until every enter has been balanced by a leave. So the group exists purely to implement waitUntilAllOperationsAreFinished.

_serialQueue — the serial queue

Tasks from _queuedOperations are processed on it one at a time, recursively.

The only place this queue is used is in the scheduleNextOperations: method, where each task object's block is executed. The pattern is a serial queue plus dispatch_async — in plain terms, one background thread dedicated to running each task's block property, in order.

_semaphoreQueue — the serial queue that gates the semaphore

The only things that happen on _semaphoreQueue are the semaphore's wait and signal.

It is touched in setMaxConcurrentOperations: and in the second half of scheduleNextOperations:. The semaphore caps how many tasks can run on child threads at once — with the limit of 10 set here, at most 10 task bodies run simultaneously. The shared singleton derives its limit from the number of active processors, which should be >= 2.

_concurrentQueue — the only concurrent queue

On each pass, exactly one task — the highest-priority one available — is pulled from the priority queues for processing.

The semaphore handling wraps the whole thing: the wait runs on the serial _semaphoreQueue, and the task itself is dispatched asynchronously onto _concurrentQueue.

Core method scheduleNextOperations:

```objc
- (void)scheduleNextOperations:(BOOL)onlyCheckSerial
{
  [self lock];

  // Get next available operation in order, ignoring priority, and run it on the serial queue
  if (_serialQueueBusy == NO) {
    // Pull the next operation out of _queuedOperations
    PINOperation *operation = [self locked_nextOperationByQueue];
    if (operation) {
      _serialQueueBusy = YES;
      dispatch_async(_serialQueue, ^{
        operation.block();
        for (dispatch_block_t completion in operation.completions) {
          completion();
        }
        dispatch_group_leave(self->_group);

        [self lock];
        self->_serialQueueBusy = NO;
        [self unlock];

        // See if there are any other operations and, if so, schedule them
        [self scheduleNextOperations:YES];
      });
    }
  }

  NSInteger maxConcurrentOperations = _maxConcurrentOperations;

  [self unlock];

  if (onlyCheckSerial) {
    return;
  }

  // If only one concurrent operation is allowed, just use the serial queue for everything
  if (maxConcurrentOperations < 2) {
    return;
  }

  // On the serial semaphore queue, wait on _concurrentSemaphore, then run the task asynchronously
  dispatch_async(_semaphoreQueue, ^{
    dispatch_semaphore_wait(self->_concurrentSemaphore, DISPATCH_TIME_FOREVER);
    [self lock];
    PINOperation *operation = [self locked_nextOperationByPriority];
    [self unlock];

    if (operation) {
      dispatch_async(self->_concurrentQueue, ^{
        operation.block();
        for (dispatch_block_t completion in operation.completions) {
          completion();
        }
        dispatch_group_leave(self->_group);
        dispatch_semaphore_signal(self->_concurrentSemaphore);
      });
    } else {
      dispatch_semaphore_signal(self->_concurrentSemaphore);
    }
  });
}
```

The core of this method is that tasks are scheduled recursively — scheduleNextOperations: re-enters itself, and operations can schedule further operations from inside their own blocks — which is exactly why the lock must be recursive; an ordinary mutex would deadlock.


  • On the first call, onlyCheckSerial is NO: a task from _queuedOperations is put on _serialQueue and executed, and subsequent tasks are then pulled recursively, one after another.
  • When onlyCheckSerial is YES, only that serial part repeats.


  • After the serial part, the second half runs: if the maximum concurrency is less than 2 it bails out immediately, because the serial queue above is already the one thread processing tasks, and nothing more is needed.
  • Otherwise, on the serial _semaphoreQueue, the semaphore caps the number of simultaneously running tasks at the maximum concurrency (initialized to max - 1, because the serial queue above accounts for one).
  • locked_nextOperationByPriority then pulls a task from the corresponding priority queue, highest to lowest, and removes it from _queuedOperations.
  • Finally, still from the serial queue, the task is dispatched asynchronously onto _concurrentQueue, and that round of execution is done.

Summing up the queues

  1. Work out what each queue is for.
  2. The core is the whole logic of scheduleNextOperations:, including its locking — the lock must protect every mutation of shared state. There isn't much code in this framework, but it's classic, and its way of wrapping GCD is excellent and well worth learning. Code like this is easy to read and hard to write — don't let your eyes outrun your hands.

Tracing a normal call through the code

The default, synchronous path

The most common usage:

[[PINCache sharedCache] setObject:@"123" forKey:@"name"]; — let's see what happens inside.

  1. [PINCache sharedCache] calls initWithName:, which sets up the default values.

Both the memory cache and the disk cache get their initial configuration here, consistent with PINCache's own settings.

  2. Once initialization is complete, look at the set method.

By default, cost and ageLimit are both 0.

The set method is essentially a mutex guarding five internal dictionaries for add, delete, and lookup. Since cost and ageLimit default to 0 and ttlCache defaults to NO, by default nothing is evicted from memory except on memory warnings or when the app enters the background.

On the disk side, each entry is bound to a custom PINDiskCacheMetadata object in _metadata. The value is written to a file at a path derived from the key, and the corresponding _metadata entry is then updated with the file's access time and other attributes.

The end result:


  • For asynchronous calls, every task is added to _operationQueue, and how it executes depends on the maximum concurrency:

    • 2 or more: queued tasks are first drained one at a time on the serial queue, then further tasks are pulled by priority on the semaphore queue and executed asynchronously on the concurrent queue, with the semaphore capping how many run at once.
    • Less than 2: everything executes asynchronously on the single serial queue.

Locking: synchronous or asynchronous, one global mutex protects the shared state, and it is a recursive mutex because the scheduling path re-enters after a task finishes on a queue.


Having read through PINCache, the part that impressed me most is the encapsulation of GCD: how it maximizes concurrency, how it supports cancellation, and so on. I don't use the system's NSOperation much, but after reading PIN it must be similar in spirit — it would be hard for an open-source author to invent all of this from scratch. One takeaway: a task that has started has already been removed from the queues, so cancel only affects tasks that haven't begun executing; whatever is already running can't be stopped.

There's no need to write much more. I've tried to record just the core parts, so next time I look at this I won't need much time — a quick skim should bring it all back.

There are bound to be some mistakes; corrections and discussion are welcome.