Preface

This article walks through the runtime source code behind reference counting and weak references, both of which were briefly introduced in the previous article.

The underlying principle of weak

Creating a weak reference

There are two ways a weak reference gets created:

  • A property declared with the weak attribute
@property (nonatomic,weak) NSObject *referent;
Copy the code
id
objc_storeWeak(id *location, id newObj)
{
    return storeWeak<DoHaveOld, DoHaveNew, DoCrashIfDeallocating>
        (location, (objc_object *)newObj);
}
Copy the code
  • A local variable declared with __weak
NSObject *obj1 = [[NSObject alloc] init];
id __weak obj2 = obj1;
Copy the code
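
Under ARC, the compiler lowers the __weak assignment above into runtime calls. Roughly (a sketch based on clang's ARC documentation, not verbatim compiler output; the explicit declarations are only here to make the sketch self-contained):

id objc_initWeak(id *location, id newObj);   // exported by the Objective-C runtime
void objc_destroyWeak(id *location);         // exported by the Objective-C runtime

void example(void) {
    NSObject *obj1 = [[NSObject alloc] init];
    id obj2;                        // id __weak obj2 = obj1; becomes roughly:
    objc_initWeak(&obj2, obj1);     //   register &obj2 as a weak reference to obj1
    // ... reads of obj2 go through objc_loadWeakRetained + objc_release ...
    objc_destroyWeak(&obj2);        //   emitted when obj2 goes out of scope
}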
id
objc_initWeak(id *location, id newObj)
{
    if (!newObj) {
        *location = nil;
        return nil;
    }

    return storeWeak<DontHaveOld, DoHaveNew, DoCrashIfDeallocating>
        (location, (objc_object*)newObj);
}
Copy the code

As you can see, both end up calling storeWeak. The difference is the first template parameter, haveOld: DoHaveOld when the weak variable may already hold an old value (objc_storeWeak), DontHaveOld when the variable is being initialized (objc_initWeak).

Go to the storeWeak function

template <HaveOld haveOld, HaveNew haveNew,
          CrashIfDeallocating crashIfDeallocating>
static id 
storeWeak(id *location, objc_object *newObj)
{
    assert(haveOld  ||  haveNew);
    if (!haveNew) assert(newObj == nil);

    Class previouslyInitializedClass = nil;
    id oldObj;
    SideTable *oldTable;
    SideTable *newTable;

    // Acquire locks for old and new values.
    // Order by lock address to prevent lock ordering problems. 
    // Retry if the old value changes underneath us.
 retry:
    // If the weak pointer already referenced some object,
    // fetch that object's SideTable into oldTable
    if (haveOld) {
        oldObj = *location;
        oldTable = &SideTables()[oldObj];
    } else {
        oldTable = nil;
    }
    // If the weak pointer is to reference a new object,
    // fetch that object's SideTable into newTable
    if (haveNew) {
        newTable = &SideTables()[newObj];
    } else {
        newTable = nil;
    }

    SideTable::lockTwo<haveOld, haveNew>(oldTable, newTable);

    // *location should still be oldObj; if it isn't, another thread
    // modified the weak variable while we were acquiring the locks
    if (haveOld  &&  *location != oldObj) {
        SideTable::unlockTwo<haveOld, haveNew>(oldTable, newTable);
        goto retry;
    }

    // Prevent a deadlock between the weak reference machinery
    // and the +initialize machinery by ensuring that no 
    // weakly-referenced object has an un-+initialized isa.
    if (haveNew  &&  newObj) {
        Class cls = newObj->getIsa();
        // If cls has not been +initialized yet, initialize it first,
        // then retry the weak store
        if (cls != previouslyInitializedClass  &&  
            !((objc_class *)cls)->isInitialized()) 
        {
            SideTable::unlockTwo<haveOld, haveNew>(oldTable, newTable);
            _class_initialize(_class_getNonMetaClass(cls, (id)newObj));

            // Remember the class we just initialized so the retry
            // recognizes it even if it is still running +initialize
            previouslyInitializedClass = cls;

            goto retry;
        }
    }

    // Clean up old value, if any: weak_unregister_no_lock removes this
    // weak variable's address from oldObj's weak_entry_t
    if (haveOld) {
        weak_unregister_no_lock(&oldTable->weak_table, oldObj, location);
    }

    // Assign new value, if any: register the weak variable's address
    // in newObj's weak_entry_t
    if (haveNew) {
        newObj = (objc_object *)
            weak_register_no_lock(&newTable->weak_table, (id)newObj, location, 
                                  crashIfDeallocating);
        // weak_register_no_lock returns nil if weak store should be rejected

        // Set is-weakly-referenced bit in refcount table,
        // i.e. the weakly_referenced flag in newObj's isa
        if (newObj  &&  !newObj->isTaggedPointer()) {
            newObj->setWeaklyReferenced_nolock();
        }

        // Do not set *location anywhere else. That would introduce a race.
        *location = (id)newObj;
    }
    else {
        // No new value. The storage is not changed.
    }
    
    SideTable::unlockTwo<haveOld, haveNew>(oldTable, newTable);

    return (id)newObj;
}
Copy the code

Because this is the first time the variable weakly references anything, there is no old value: only the haveNew branch runs. storeWeak fetches the new object's SideTable and registers the weak reference in its weak_table with weak_register_no_lock. Next, let's look at how weak_register_no_lock registers a weak reference.
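
Before reading it, it helps to know the shape of the data structures it manipulates. Abridged from objc-weak.h in objc4 (some fields and helper methods omitted; the exact layout can differ between versions):

// The address of a __weak variable, stored disguised so leak tools don't see it
typedef DisguisedPtr<objc_object *> weak_referrer_t;

#define WEAK_INLINE_COUNT 4

struct weak_entry_t {
    DisguisedPtr<objc_object> referent;       // the weakly referenced object
    union {
        struct {
            weak_referrer_t *referrers;       // out-of-line hash array of weak variable addresses
            uintptr_t        out_of_line_ness : 2;
            uintptr_t        num_refs : PTR_MINUS_2;
            uintptr_t        mask;
            uintptr_t        max_hash_displacement;
        };
        struct {
            // out_of_line_ness field is low bits of inline_referrers[1]
            weak_referrer_t  inline_referrers[WEAK_INLINE_COUNT];  // used while there are at most 4 weak refs
        };
    };
};

struct weak_table_t {
    weak_entry_t *weak_entries;        // hash array of entries, keyed by referent
    size_t num_entries;
    uintptr_t mask;                    // capacity - 1
    uintptr_t max_hash_displacement;
};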

id 
weak_register_no_lock(weak_table_t *weak_table, id referent_id, 
                      id *referrer_id, bool crashIfDeallocating)
{
    // The object being weakly referenced
    objc_object *referent = (objc_object *)referent_id;
    // The address of the weak variable
    objc_object **referrer = (objc_object **)referrer_id;

    // nil and tagged pointers are not registered in the weak table
    if (!referent  ||  referent->isTaggedPointer()) return referent_id;

    // ensure that the referenced object is viable
    // (not deallocating, and it must allow weak references)
    bool deallocating;
    if (!referent->ISA()->hasCustomRR()) {
        deallocating = referent->rootIsDeallocating();
    }
    else {
        BOOL (*allowsWeakReference)(objc_object *, SEL) = 
            (BOOL(*)(objc_object *, SEL))
            object_getMethodImplementation((id)referent, 
                                           SEL_allowsWeakReference);
        if ((IMP)allowsWeakReference == _objc_msgForward) {
            return nil;
        }
        deallocating =
            ! (*allowsWeakReference)(referent, SEL_allowsWeakReference);
    }

    // An object that is already deallocating cannot be weakly referenced
    if (deallocating) {
        if (crashIfDeallocating) {
            _objc_fatal("Cannot form weak reference to instance (%p) of "
                        "class %s. It is possible that this object was "
                        "over-released, or is in the process of deallocation.",
                        (void*)referent, object_getClassName((id)referent));
        } else {
            return nil;
        }
    }

    // now remember it and where it is being stored
    // Look up the weak_entry_t for referent in the weak table
    // and append the referrer to it
    weak_entry_t *entry;
    if ((entry = weak_entry_for_referent(weak_table, referent))) {
        // An entry already exists: append the referrer to it
        append_referrer(entry, referrer);
    } 
    else {
        // No entry yet: create one, grow the table if needed, and insert it
        weak_entry_t new_entry(referent, referrer);
        weak_grow_maybe(weak_table);
        weak_entry_insert(weak_table, &new_entry);
    }

    // Do not set *referrer. objc_storeWeak() requires that the 
    // value not change.

    return referent_id;
}
Copy the code

weak_register_no_lock looks up the weak_entry_t for the referenced object in the weak_table's hash array; if no entry exists yet, a new weak_entry_t is created. The referrer (the address of the weak variable) is then appended to that entry's referrer array by the append_referrer function.

static void append_referrer(weak_entry_t *entry, objc_object **new_referrer)
{
    // If the entry is still using the fixed inline array inline_referrers
    if (! entry->out_of_line()) {
        // Try to insert inline.
        for (size_t i = 0; i < WEAK_INLINE_COUNT; i++) {
            if (entry->inline_referrers[i] == nil) {
                entry->inline_referrers[i] = new_referrer;
                return;
            }
        }

        // Couldn't insert inline. Allocate out of line.
        // The inline array is full: switch to the dynamically sized referrers array
        weak_referrer_t *new_referrers = (weak_referrer_t *)
            calloc(WEAK_INLINE_COUNT, sizeof(weak_referrer_t));
        // This constructed table is invalid, but grow_refs_and_insert
        // will fix it and rehash it.
        for (size_t i = 0; i < WEAK_INLINE_COUNT; i++) {
            new_referrers[i] = entry->inline_referrers[i];
        }
        entry->referrers = new_referrers;
        entry->num_refs = WEAK_INLINE_COUNT;
        entry->out_of_line_ness = REFERRERS_OUT_OF_LINE;
        entry->mask = WEAK_INLINE_COUNT-1;
        entry->max_hash_displacement = 0;
    }

    assert(entry->out_of_line());

    // If the array is at least 3/4 full, double its size and insert there
    if (entry->num_refs >= TABLE_SIZE(entry) * 3/4) {
        return grow_refs_and_insert(entry, new_referrer);
    }

    // Otherwise insert directly, using open addressing with linear probing
    size_t begin = w_hash_pointer(new_referrer) & (entry->mask);
    size_t index = begin;
    size_t hash_displacement = 0;
    while (entry->referrers[index] != nil) {
        hash_displacement++;
        index = (index+1) & entry->mask;
        if (index == begin) bad_weak_table(entry);
    }
    if (hash_displacement > entry->max_hash_displacement) {
        entry->max_hash_displacement = hash_displacement;
    }
    weak_referrer_t &ref = entry->referrers[index];
    ref = new_referrer;
    entry->num_refs++;
}
Copy the code

In our example, though, the object is being weakly referenced for the first time, so there is no existing entry; let's focus on the other branch:

weak_entry_t new_entry(referent, referrer);
weak_grow_maybe(weak_table);
weak_entry_insert(weak_table, &new_entry);
Copy the code
static void weak_grow_maybe(weak_table_t *weak_table)
{
    size_t old_size = TABLE_SIZE(weak_table);

    // Grow if at least 3/4 full.
    if (weak_table->num_entries >= old_size * 3 / 4) {
        weak_resize(weak_table, old_size ? old_size*2 : 64);
    }
}

static void weak_resize(weak_table_t *weak_table, size_t new_size)
{
    size_t old_size = TABLE_SIZE(weak_table);

    weak_entry_t *old_entries = weak_table->weak_entries;
    weak_entry_t *new_entries = (weak_entry_t *)
        calloc(new_size, sizeof(weak_entry_t));

    weak_table->mask = new_size - 1;
    weak_table->weak_entries = new_entries;
    weak_table->max_hash_displacement = 0;
    weak_table->num_entries = 0;  // restored below by weak_entry_insert
    
    // Re-insert the old entries into the newly allocated storage
    if (old_entries) {
        weak_entry_t *entry;
        weak_entry_t *end = old_entries + old_size;
        for (entry = old_entries; entry < end; entry++) {
            if (entry->referent) {
                weak_entry_insert(weak_table, entry);
            }
        }
        free(old_entries);
    }
}
Copy the code

When num_entries reaches 3/4 of the table's capacity, weak_resize doubles the table (the initial capacity is 64, so the first resize happens at 48 entries and grows the table to 128 slots). The new weak_entry_t is then inserted by weak_entry_insert:

static void weak_entry_insert(weak_table_t *weak_table, weak_entry_t *new_entry)
{
    weak_entry_t *weak_entries = weak_table->weak_entries;
    assert(weak_entries != nil);

    size_t begin = hash_pointer(new_entry->referent) & (weak_table->mask);
    size_t index = begin;
    size_t hash_displacement = 0;
    // Open addressing: probe forward until a free slot is found
    while (weak_entries[index].referent != nil) {
        index = (index+1) & weak_table->mask;
        if (index == begin) bad_weak_table(weak_entries);
        hash_displacement++;
    }

    weak_entries[index] = *new_entry;
    weak_table->num_entries++;

    if (hash_displacement > weak_table->max_hash_displacement) {
        weak_table->max_hash_displacement = hash_displacement;
    }
}
Copy the code
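
The insert above is plain open addressing with linear probing. For intuition, here is a minimal self-contained model of the same idea (ordinary C with a placeholder hash function; this is not the runtime's code):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define CAP 8                        // power of two, so CAP - 1 works as a mask
static const void *slots[CAP];       // NULL means the slot is empty

static size_t hash_ptr(const void *p) {
    return (size_t)(((uintptr_t)p >> 4) * 0x9E3779B97F4A7C15ULL);  // placeholder hash
}

// Insert with linear probing, like weak_entry_insert above.
static void insert(const void *p) {
    size_t begin = hash_ptr(p) & (CAP - 1);
    size_t index = begin;
    while (slots[index] != NULL) {
        index = (index + 1) & (CAP - 1);            // probe the next slot, wrapping around
        if (index == begin) { puts("table full"); return; }
    }
    slots[index] = p;
}

int main(void) {
    int a, b, c;
    insert(&a); insert(&b); insert(&c);
    for (size_t i = 0; i < CAP; i++)
        printf("slot %zu: %p\n", i, slots[i]);
    return 0;
}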

Destroying a weak reference

A weak reference is cleared when the object it references is destroyed, i.e. when dealloc runs. The dealloc path is analyzed below.

- (void)dealloc {
    _objc_rootDealloc(self);
}

void
_objc_rootDealloc(id obj)
{
    assert(obj);   // obj must not be nil

    obj->rootDealloc();
}
Copy the code

Enter the rootDealloc internal function

inline void
objc_object::rootDealloc()
{
    // Tagged pointers are never deallocated
    if (isTaggedPointer()) return;  // fixme necessary?

    if (fastpath(isa.nonpointer               &&  // optimized (non-pointer) isa
                 !isa.weakly_referenced       &&  // never weakly referenced
                 !isa.has_assoc               &&  // no associated objects
                 !isa.has_cxx_dtor            &&  // no C++ destructor
                 !isa.has_sidetable_rc))          // no side-table reference count
    {
        assert(!sidetable_present());
        // Nothing else to clean up: free the memory directly
        free(this);
    } 
    else {
        // Otherwise go through the full destruction path
        object_dispose((id)this);
    }
}
Copy the code

Keep going down

id 
object_dispose(id obj)
{
    if(! obj)return nil;

    objc_destructInstance(obj);    
    free(obj);

    return nil;
}
Copy the code
void *objc_destructInstance(id obj) 
{
    if (obj) {
        // Read all of the flags at once for performance.
        bool cxx = obj->hasCxxDtor();
        bool assoc = obj->hasAssociatedObjects();

        // This order is important.
        // Run the C++ destructors, if any
        if (cxx) object_cxxDestruct(obj);
        // Remove all associated objects, if any
        if (assoc) _object_remove_assocations(obj);
        // Clean up the remaining references (weak table, side table)
        obj->clearDeallocating();
    }

    return obj;
}
Copy the code
inline void 
objc_object::clearDeallocating()
{
    if (slowpath(!isa.nonpointer)) {
        // Slow path for raw pointer isa:
        // the reference count is not kept in isa at all
        sidetable_clearDeallocating();
    }
    else if (slowpath(isa.weakly_referenced  ||  isa.has_sidetable_rc)) {
        // Slow path for non-pointer isa with weak refs and/or side table data:
        // the object uses an optimized isa but is weakly referenced
        // or has overflowed part of its count into the side table
        clearDeallocating_slow();
    }

    assert(!sidetable_present());
}
Copy the code

There are two cases here

  • The freed object does not have an optimized ISA reference count
void 
objc_object::sidetable_clearDeallocating()
{
    SideTable& table = SideTables()[this];

    // clear any weak table items
    // clear extra retain count and deallocating bit
    // (fixme warn or abort if extra retain count == 0 ?)
    table.lock();
    // Find this object's entry in its SideTable's reference count map
    RefcountMap::iterator it = table.refcnts.find(this);
    if (it != table.refcnts.end()) {
        // If the object has been weakly referenced, weak_clear_no_lock
        // sets every weak pointer to it to nil
        if (it->second & SIDE_TABLE_WEAKLY_REFERENCED) {
            weak_clear_no_lock(&table.weak_table, (id)this);
        }
        // Erase the reference count entry
        table.refcnts.erase(it);
    }
    table.unlock();
}
Copy the code
  • The freed object has an optimized isa reference count
NEVER_INLINE void
objc_object::clearDeallocating_slow()
{
    assert(isa.nonpointer  &&  (isa.weakly_referenced || isa.has_sidetable_rc));

    SideTable& table = SideTables()[this];
    table.lock();
    if (isa.weakly_referenced) {
        // The object has been weakly referenced:
        // set every weak pointer to it to nil
        weak_clear_no_lock(&table.weak_table, (id)this);
    }
    if (isa.has_sidetable_rc) {
        // Part of the reference count lives in the side table: erase it
        table.refcnts.erase(this);
    }
    table.unlock();
}
Copy the code

Both paths call weak_clear_no_lock, which sets every weak pointer that references the object to nil:

void 
weak_clear_no_lock(weak_table_t *weak_table, id referent_id) 
{
    objc_object *referent = (objc_object *)referent_id;

    // Find the weak_entry_t for the object being deallocated
    weak_entry_t *entry = weak_entry_for_referent(weak_table, referent);
    if (entry == nil) {
        /// XXX shouldn't happen, but does with mismatched CF/objc
        //printf("XXX no entry for clear deallocating %p\n", referent);
        return;
    }

    // zero out references
    weak_referrer_t *referrers;
    size_t count;
    
    // Get the array of weak pointer addresses that reference this object
    if (entry->out_of_line()) {
        referrers = entry->referrers;
        count = TABLE_SIZE(entry);
    } 
    else {
        referrers = entry->inline_referrers;
        count = WEAK_INLINE_COUNT;
    }
    
    // Walk every stored weak pointer address
    for (size_t i = 0; i < count; ++i) {
        objc_object **referrer = referrers[i];
        if (referrer) {
            if (*referrer == referent) {
                // The weak pointer really references this object: nil it out
                *referrer = nil;
            }
            else if (*referrer) {
                // The stored weak pointer points somewhere else:
                // this usually indicates misuse of the weak API
                _objc_inform("__weak variable at %p holds %p instead of %p. "
                             "This is probably incorrect use of "
                             "objc_storeWeak() and objc_loadWeak(). "
                             "Break on objc_weak_error to debug.\n", 
                             referrer, (void*)*referrer, (void*)referent);
                objc_weak_error();
            }
        }
    }
    
    // Finally remove the entry itself from the weak table
    weak_entry_remove(weak_table, entry);
}
Copy the code

To summarize, the dealloc release path is: dealloc → _objc_rootDealloc → rootDealloc → (fast path: free directly, otherwise) object_dispose → objc_destructInstance → clearDeallocating → weak_clear_no_lock and side-table cleanup → free.
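
The observable effect, in a small ARC test (assumes ARC is enabled; purely illustrative):

#import <Foundation/Foundation.h>

int main(void) {
    NSObject __weak *weakRef = nil;
    @autoreleasepool {
        NSObject *obj = [[NSObject alloc] init];
        weakRef = obj;                     // objc_storeWeak -> weak_register_no_lock
        NSLog(@"inside:  %@", weakRef);    // prints the object's description
    }                                      // obj released -> dealloc -> weak_clear_no_lock
    NSLog(@"outside: %@", weakRef);        // prints (null): the weak pointer was zeroed
    return 0;
}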

Reference counting

The following test code is compiled under MRC (ARC disabled for the file, for example with the -fno-objc-arc compiler flag):

#import <Foundation/Foundation.h>

NS_ASSUME_NONNULL_BEGIN

@interface NHText : NSObject

@end

NS_ASSUME_NONNULL_END
Copy the code
#import "NHText.h"

@implementation NHText


- (void)dealloc{
    NSLog(@"NHText dealloc");
}

@end
Copy the code
#import <Foundation/Foundation.h>
#import "NHText.h"

int main(int argc, const char * argv[]) {
    @autoreleasepool {
    
        NHText *t = [NHText alloc];
        NSLog(@"retainCount = %lu",(unsigned long)[t retainCount]);
        [t retain];
        NSLog(@"retainCount = %lu",(unsigned long)[t retainCount]);
        [t release];
        NSLog(@"retainCount = %lu",(unsigned long)[t retainCount]);
        [t release];
        NSLog(@"retainCount = %lu",(unsigned long)[t retainCount]);
        
     }
    return 0;
}
Copy the code

Print:

2020-02-17 11:58:11 NHText[1819:45917] retainCount = 1
2020-02-17 11:58:11 NHText[1819:45917] retainCount = 2
2020-02-17 11:58:11.020382+0800 NHText[1819:45917] retainCount = 1
2020-02-17 11:58:11.020391+0800 NHText[1819:45917] NHText dealloc
2020-02-17 11:58:11.020402+0800 NHText[1819:45917] retainCount = 1
Program ended with exit code: 0
Copy the code

alloc

Let’s look at the flow of the alloc method

+ (id)alloc {
    return _objc_rootAlloc(self);
}
Copy the code
id
_objc_rootAlloc(Class cls)
{
    return callAlloc(cls, false/*checkNil*/, true/*allocWithZone*/);
}
Copy the code
static ALWAYS_INLINE id
callAlloc(Class cls, bool checkNil, bool allocWithZone=false)
{
    // Return nil if a nil class was passed and checking was requested
    if (slowpath(checkNil && !cls)) return nil;

#if __OBJC2__
    if (fastpath(!cls->ISA()->hasCustomAWZ())) {
        // The class has no custom allocWithZone: implementation.
        if (fastpath(cls->canAllocFast())) {
            // The class supports fast alloc.
            bool dtor = cls->hasCxxDtor();                        // does cls have a C++ destructor?
            id obj = (id)calloc(1, cls->bits.fastInstanceSize()); // allocate fastInstanceSize bytes with calloc
            if (slowpath(!obj)) return callBadAllocHandler(cls);  // allocation failed
            obj->initInstanceIsa(cls, dtor);                      // initialize isa
            return obj;
        }
        else {
            // Has ctor or raw isa or something. Use the slower path.
            id obj = class_createInstance(cls, 0);                // create the object the ordinary way
            if (slowpath(!obj)) return callBadAllocHandler(cls);
            return obj;
        }
    }
#endif

    // No shortcuts available.
    if (allocWithZone) return [cls allocWithZone:nil];  // forward to allocWithZone:
    return [cls alloc];                                  // otherwise forward to alloc
}
Copy the code

So let’s break down the function

  • hasCustomAWZ

It reads the class's data()->flags and checks the RW_HAS_DEFAULT_AWZ bit to decide whether the class still uses the default allocWithZone: implementation.

bool hasCustomAWZ() {
    return ! bits.hasDefaultAWZ();
}
Copy the code
bool hasDefaultAWZ() {
    return data()->flags & RW_HAS_DEFAULT_AWZ;
}
Copy the code
  • canAllocFast

Similar to the hasCustomAWZ process, fast Alloc is supported or not based on the bitmask FAST_ALLOC

bool canAllocFast() {
    assert(!isFuture());
    return bits.canAllocFast();
}
Copy the code
bool canAllocFast() {
    return bits & FAST_ALLOC;
}
Copy the code
  • initInstanceIsa

Straightforward initialization work; the flow is clear from the code itself.

inline void 
objc_object::initInstanceIsa(Class cls, bool hasCxxDtor)
{
    assert(!cls->instancesRequireRawIsa());
    assert(hasCxxDtor == cls->hasCxxDtor());

    initIsa(cls, true, hasCxxDtor);
}
Copy the code
inline void 
objc_object::initIsa(Class cls, bool nonpointer, bool hasCxxDtor) 
{ 
    assert(!isTaggedPointer()); 
    
    if (!nonpointer) {
        // Raw pointer isa: just store the class pointer
        isa.cls = cls;
    } else {
        assert(!DisableNonpointerIsa);
        assert(!cls->instancesRequireRawIsa());

        isa_t newisa(0);   // build a fresh isa

#if SUPPORT_INDEXED_ISA
        assert(cls->classArrayIndex() > 0);
        newisa.bits = ISA_INDEX_MAGIC_VALUE;
        // isa.magic is part of ISA_MAGIC_VALUE
        // isa.nonpointer is part of ISA_MAGIC_VALUE
        newisa.has_cxx_dtor = hasCxxDtor;
        newisa.indexcls = (uintptr_t)cls->classArrayIndex();
#else
        newisa.bits = ISA_MAGIC_VALUE;
        // isa.magic is part of ISA_MAGIC_VALUE
        // isa.nonpointer is part of ISA_MAGIC_VALUE
        newisa.has_cxx_dtor = hasCxxDtor;
        newisa.shiftcls = (uintptr_t)cls >> 3;
#endif

        isa = newisa;   // publish the new isa
    }
}
Copy the code
  • class_createInstance

If the class does not support fast alloc, the object is created with class_createInstance instead. Either way, a freshly allocated object stores a reference count of 0 in extra_rc (retainCount still reports 1, as shown later).

id 
class_createInstance(Class cls, size_t extraBytes)
{
    return _class_createInstanceFromZone(cls, extraBytes, nil);
}
Copy the code
static __attribute__((always_inline)) 
id
_class_createInstanceFromZone(Class cls, size_t extraBytes, void *zone, 
                              bool cxxConstruct = true, 
                              size_t *outAllocatedSize = nil)
{
    if (!cls) return nil;

    assert(cls->isRealized());

    // Read the class's flags all at once for performance
    bool hasCxxCtor = cls->hasCxxCtor();         // does cls have a C++ constructor?
    bool hasCxxDtor = cls->hasCxxDtor();         // does cls have a C++ destructor?
    bool fast = cls->canAllocNonpointer();       // can instances use an optimized (non-pointer) isa?

    size_t size = cls->instanceSize(extraBytes); // required instance size
    if (outAllocatedSize) *outAllocatedSize = size;

    id obj;
    if (!zone  &&  fast) {
        // No explicit zone and non-pointer isa supported:
        // allocate with calloc and initialize the optimized isa
        obj = (id)calloc(1, size);
        if (!obj) return nil;
        obj->initInstanceIsa(cls, hasCxxDtor);
    } 
    else {
        if (zone) {
            // Allocate from the given malloc zone
            obj = (id)malloc_zone_calloc((malloc_zone_t *)zone, 1, size);
        } else {
            obj = (id)calloc(1, size);
        }
        if (!obj) return nil;

        // Use raw pointer isa on the assumption that they might be 
        // doing something weird with the zone or RR.
        obj->initIsa(cls);
    }

    if (cxxConstruct && hasCxxCtor) {
        // Run the C++ constructors, if any
        obj = _objc_constructOrFree(obj, cls);
    }

    return obj;
}
Copy the code

The overall alloc flow: alloc → _objc_rootAlloc → callAlloc → if the class has no custom allocWithZone:, either the fast path (calloc + initInstanceIsa) or class_createInstance → _class_createInstanceFromZone; otherwise the call is forwarded to allocWithZone: / alloc.

retainCount

retainCount returns the current reference count of the object.

- (NSUInteger)retainCount {
    return ((id)self)->rootRetainCount();
}
Copy the code
inline uintptr_t 
objc_object::rootRetainCount()
{
    // Tagged pointers have no reference count; return the pointer itself
    if (isTaggedPointer()) return (uintptr_t)this;

    sidetable_lock();
    isa_t bits = LoadExclusive(&isa.bits);   // load isa.bits
    ClearExclusive(&isa.bits);
    if (bits.nonpointer) {
        // Optimized isa: start from 1 plus the inline extra_rc count
        uintptr_t rc = 1 + bits.extra_rc;
        if (bits.has_sidetable_rc) {
            // Part of the count has overflowed into the side table; add it
            rc += sidetable_getExtraRC_nolock();
        }
        sidetable_unlock();
        return rc;
    }

    // Raw pointer isa: the whole count lives in the side table
    sidetable_unlock();
    return sidetable_retainCount();
}
Copy the code
  • extra_rc

extra_rc is a bit field inside isa_t (8 bits on x86_64, 19 bits on arm64) that stores the reference count beyond the object's own 1, which is why rootRetainCount adds 1 to it. When extra_rc can no longer hold the count, the overflow is moved into the side table (see the sketch after this list).

  • has_sidetable_rc

Has_sidetable_rc is the flag used to indicate whether reference counts are stored through sideTable.

  • The sidetable_getExtraRC_nolock function is a method for getting reference count information from sideTable
size_t 
objc_object::sidetable_getExtraRC_nolock()
{
    assert(isa.nonpointer);
    SideTable& table = SideTables()[this];
    // Look up this object's entry in the reference count map
    RefcountMap::iterator it = table.refcnts.find(this);
    if (it == table.refcnts.end()) return 0;           // not found: nothing stored here
    else return it->second >> SIDE_TABLE_RC_SHIFT;     // found: shift out the flag bits to get the count
}
Copy the code
  • sidetable_retainCount
uintptr_t
objc_object::sidetable_retainCount()
{
    SideTable& table = SideTables()[this];

    size_t refcnt_result = 1;   // start from the object's own 1
    
    table.lock();
    RefcountMap::iterator it = table.refcnts.find(this);
    if (it != table.refcnts.end()) {
        // this is valid for SIDE_TABLE_RC_PINNED too
        refcnt_result += it->second >> SIDE_TABLE_RC_SHIFT;  // 1 + count stored in the side table
    }
    table.unlock();
    return refcnt_result;
}
Copy the code
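
For reference, the per-object value stored in the side table's RefcountMap packs a few flag bits below the actual count. Abridged from NSObject.mm in objc4 (check your objc4 version for the exact definitions):

// The order of these bits is important.
#define SIDE_TABLE_WEAKLY_REFERENCED (1UL<<0)   // the object has been weakly referenced
#define SIDE_TABLE_DEALLOCATING      (1UL<<1)   // the object is deallocating
#define SIDE_TABLE_RC_ONE            (1UL<<2)   // one unit of reference count
#define SIDE_TABLE_RC_PINNED         (1UL<<(WORD_BITS-1))  // the count overflowed even the side table

#define SIDE_TABLE_RC_SHIFT 2                   // shift right by 2 to read the count itself
#define SIDE_TABLE_FLAG_MASK (SIDE_TABLE_RC_ONE-1)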

If isa.nonpointer is false, the object still uses a raw class-pointer isa and its entire reference count lives in the SideTable; otherwise the count is 1 + extra_rc plus whatever has overflowed into the side table.
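
Putting it together, a small model of how rootRetainCount arrives at its result (a sketch for illustration, not runtime code):

#include <stdint.h>
#include <stdio.h>

// Toy model: total = 1 + extra_rc (+ side table overflow).
static uintptr_t model_retain_count(int nonpointer, uintptr_t extra_rc,
                                    int has_sidetable_rc, uintptr_t side_table_count) {
    if (!nonpointer) {
        return 1 + side_table_count;        // raw isa: everything is in the side table
    }
    uintptr_t rc = 1 + extra_rc;            // the object's own 1 plus the inline count
    if (has_sidetable_rc) rc += side_table_count;
    return rc;
}

int main(void) {
    // Freshly alloc'ed object: extra_rc == 0, so retainCount reports 1.
    printf("%lu\n", (unsigned long)model_retain_count(1, 0, 0, 0));  // 1
    // After one retain: extra_rc == 1, so retainCount reports 2.
    printf("%lu\n", (unsigned long)model_retain_count(1, 1, 0, 0));  // 2
    return 0;
}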

retain

The main work of retain is to increase the reference count.

- (id)retain {
    return ((id)self)->rootRetain();
}
Copy the code
ALWAYS_INLINE id 
objc_object::rootRetain()
{
    return rootRetain(false, false);
}
Copy the code
ALWAYS_INLINE id 
objc_object::rootRetain(bool tryRetain, bool handleOverflow)
{
    // Tagged pointers have no reference count; return the object itself
    if (isTaggedPointer()) return (id)this;

    bool sideTableLocked = false;
    bool transcribeToSideTable = false;

    isa_t oldisa;
    isa_t newisa;

    do {
        transcribeToSideTable = false;
        oldisa = LoadExclusive(&isa.bits);
        newisa = oldisa;
        // nonpointer: is isa pointer optimization enabled?
        // 0 means a raw class pointer; 1 means isa also packs the class,
        // the object's reference count and other flags.
        if (slowpath(!newisa.nonpointer)) {
            // Raw pointer isa: the count lives entirely in the side table
            ClearExclusive(&isa.bits);
            if (!tryRetain && sideTableLocked) sidetable_unlock();
            if (tryRetain) return sidetable_tryRetain() ? (id)this : nil;
            else return sidetable_retain();
        }
        if (slowpath(tryRetain && newisa.deallocating)) {
            // tryRetain on an object that is already deallocating fails
            ClearExclusive(&isa.bits);
            if (!tryRetain && sideTableLocked) sidetable_unlock();
            return nil;
        }
        uintptr_t carry;
        newisa.bits = addc(newisa.bits, RC_ONE, 0, &carry);  // extra_rc++

        if (slowpath(carry)) {
            // newisa.extra_rc++ overflowed
            if (!handleOverflow) {
                // Re-enter with handleOverflow == true
                ClearExclusive(&isa.bits);
                return rootRetain_overflow(tryRetain);
            }
            // Leave half of the retain counts inline and 
            // prepare to copy the other half to the side table.
            if (!tryRetain && !sideTableLocked) sidetable_lock();
            sideTableLocked = true;
            transcribeToSideTable = true;
            newisa.extra_rc = RC_HALF;        // keep half of the count inline
            newisa.has_sidetable_rc = true;   // mark that the side table holds the rest
        }
    } while (slowpath(!StoreExclusive(&isa.bits, oldisa.bits, newisa.bits)));

    if (slowpath(transcribeToSideTable)) {
        // Copy the other half of the retain counts to the side table.
        sidetable_addExtraRC_nolock(RC_HALF);
    }

    if (slowpath(!tryRetain && sideTableLocked)) sidetable_unlock();
    return (id)this;
}
Copy the code

The comments above cover each step; now let's break down the helper functions it calls.
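
But first, a small model of the extra_rc overflow handoff that rootRetain performs. The assumptions here are that extra_rc is 8 bits wide and RC_HALF is 2^7 = 128, which is the x86_64 layout (on arm64 the field is 19 bits and RC_HALF is 2^18); this is a sketch, not runtime code:

#include <stdint.h>
#include <stdio.h>

#define EXTRA_RC_BITS 8                       // assumed width (x86_64)
#define RC_HALF (1u << (EXTRA_RC_BITS - 1))   // 128

// One retain against the inline extra_rc field,
// spilling half of the count into the side table on overflow.
static void retain_model(uint32_t *extra_rc, uint64_t *side_table_rc) {
    if (*extra_rc == (1u << EXTRA_RC_BITS) - 1) {   // extra_rc++ would overflow
        *extra_rc = RC_HALF;                        // keep half of the count inline
        *side_table_rc += RC_HALF;                  // move the other half to the side table
    } else {
        (*extra_rc)++;
    }
}

int main(void) {
    uint32_t extra_rc = (1u << EXTRA_RC_BITS) - 1;  // 255: the inline field is saturated
    uint64_t side = 0;                              // total count so far: 1 + 255 = 256
    retain_model(&extra_rc, &side);
    printf("extra_rc = %u, side table = %llu, total = %llu\n",
           extra_rc, (unsigned long long)side,
           (unsigned long long)(1 + extra_rc + side));  // 128, 128, 257
    return 0;
}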

  • sidetable_tryRetain
bool
objc_object::sidetable_tryRetain()
{
#if SUPPORT_NONPOINTER_ISA
    assert(!isa.nonpointer);
#endif
    SideTable& table = SideTables()[this];
    
    bool result = true;
    RefcountMap::iterator it = table.refcnts.find(this);
    if (it == table.refcnts.end()) {
        // No entry yet: store a count of one
        table.refcnts[this] = SIDE_TABLE_RC_ONE;
    } else if (it->second & SIDE_TABLE_DEALLOCATING) {
        // The object is already deallocating: the try-retain fails
        result = false;
    } else if (! (it->second & SIDE_TABLE_RC_PINNED)) {
        // Found and not pinned (no overflow): add one
        it->second += SIDE_TABLE_RC_ONE;
    }

    return result;
}
Copy the code
  • sidetable_retain
id
objc_object::sidetable_retain()
{
#if SUPPORT_NONPOINTER_ISA
    assert(!isa.nonpointer);
#endif
    SideTable& table = SideTables()[this];
    
    table.lock();
    size_t& refcntStorage = table.refcnts[this];
    if (! (refcntStorage & SIDE_TABLE_RC_PINNED)) {
        // Not pinned (the stored count has not overflowed): add one
        refcntStorage += SIDE_TABLE_RC_ONE;
    }
    table.unlock();

    return (id)this;
}
Copy the code
  • addc

addc adds RC_ONE to the isa bits, i.e. it increments extra_rc by one, and reports through carryout whether the field overflowed.

static ALWAYS_INLINE uintptr_t 
addc(uintptr_t lhs, uintptr_t rhs, uintptr_t carryin, uintptr_t *carryout)
{
    return __builtin_addcl(lhs, rhs, carryin, carryout);
}
Copy the code
  • rootRetain_overflow

When addc reports an overflow, rootRetain_overflow is called; it simply re-enters rootRetain with handleOverflow set to true.

NEVER_INLINE id 
objc_object::rootRetain_overflow(bool tryRetain)
{
    return rootRetain(tryRetain, true);
}
Copy the code
  • sidetable_addExtraRC_nolock

The sidetable_addExtraRC_nolock function adds the reference count to the hash table

bool 
objc_object::sidetable_addExtraRC_nolock(size_t delta_rc)
{
    assert(isa.nonpointer);
    SideTable& table = SideTables()[this];

    size_t& refcntStorage = table.refcnts[this];
    size_t oldRefcnt = refcntStorage;
    // isa-side bits should not be set here
    assert((oldRefcnt & SIDE_TABLE_DEALLOCATING) == 0);
    assert((oldRefcnt & SIDE_TABLE_WEAKLY_REFERENCED) == 0);

    if (oldRefcnt & SIDE_TABLE_RC_PINNED) return true;
    
    uintptr_t carry;
    size_t newRefcnt = 
        addc(oldRefcnt, delta_rc << SIDE_TABLE_RC_SHIFT, 0, &carry);
    
    if (carry) {
        refcntStorage =
            SIDE_TABLE_RC_PINNED | (oldRefcnt & SIDE_TABLE_FLAG_MASK);
        return true;
    }
    else {
        refcntStorage = newRefcnt;
        return false;
    }
}
Copy the code

release

Release does the opposite of retain, subtracting the object’s reference count by one.

- (oneway void)release {
    ((id)self)->rootRelease();
}
Copy the code
ALWAYS_INLINE bool 
objc_object::rootRelease()
{
    return rootRelease(true, false);
}
Copy the code
ALWAYS_INLINE bool 
objc_object::rootRelease(bool performDealloc, bool handleUnderflow)
{
    // Tagged pointers have no reference count and are never deallocated
    if (isTaggedPointer()) return false;

    bool sideTableLocked = false;

    isa_t oldisa;
    isa_t newisa;

 retry:
    do {
        oldisa = LoadExclusive(&isa.bits);   // load isa.bits
        newisa = oldisa;
        if (slowpath(!newisa.nonpointer)) {
            // Raw pointer isa: the count lives entirely in the side table
            ClearExclusive(&isa.bits);
            if (sideTableLocked) sidetable_unlock();
            return sidetable_release(performDealloc);
        }
        uintptr_t carry;
        newisa.bits = subc(newisa.bits, RC_ONE, 0, &carry);  // extra_rc--
        if (slowpath(carry)) {
            // extra_rc was already 0: the decrement underflowed
            // don't ClearExclusive()
            goto underflow;
        }
    } while (slowpath(!StoreReleaseExclusive(&isa.bits, 
                                             oldisa.bits, newisa.bits)));

    if (slowpath(sideTableLocked)) sidetable_unlock();
    return false;

 underflow:
    // newisa.extra_rc-- underflowed: borrow from side table or deallocate

    // abandon newisa to undo the decrement
    newisa = oldisa;

    // Is part of the count stored in the side table?
    if (slowpath(newisa.has_sidetable_rc)) {
        if (!handleUnderflow) {
            // Re-enter with handleUnderflow == true
            ClearExclusive(&isa.bits);
            return rootRelease_underflow(performDealloc);
        }

        // Transfer retain count from side table to inline storage.

        if (!sideTableLocked) {
            ClearExclusive(&isa.bits);
            sidetable_lock();
            sideTableLocked = true;
            // Need to start over to avoid a race against 
            // the nonpointer -> raw pointer transition.
            goto retry;
        }

        // Try to remove some retain counts from the side table
        // (at most RC_HALF at a time).
        size_t borrowed = sidetable_subExtraRC_nolock(RC_HALF);

        // To avoid races, has_sidetable_rc must remain set 
        // even if the side table count is now zero.

        if (borrowed > 0) {
            // Side table retain count decreased.
            // Try to add them to the inline count.
            newisa.extra_rc = borrowed - 1;  // redo the original decrement too
            bool stored = StoreReleaseExclusive(&isa.bits, 
                                                oldisa.bits, newisa.bits);
            if (!stored) {
                // Inline update failed. 
                // Try it again right now. This prevents livelock on LL/SC 
                // architectures where the side table access itself may have 
                // dropped the reservation.
                isa_t oldisa2 = LoadExclusive(&isa.bits);
                isa_t newisa2 = oldisa2;
                if (newisa2.nonpointer) {
                    uintptr_t overflow;
                    newisa2.bits = 
                        addc(newisa2.bits, RC_ONE * (borrowed-1), 0, &overflow);
                    if (!overflow) {
                        stored = StoreReleaseExclusive(&isa.bits, oldisa2.bits, 
                                                       newisa2.bits);
                    }
                }
            }

            if (!stored) {
                // Inline update failed again.
                // Put the retains back in the side table and retry.
                sidetable_addExtraRC_nolock(borrowed);
                goto retry;
            }

            // Decrement successful after borrowing from side table.
            // This decrement cannot be the deallocating decrement - the side 
            // table lock and has_sidetable_rc bit ensure that if everyone 
            // else tried to -release while we worked, the last one would block.
            sidetable_unlock();
            return false;
        }
        else {
            // Side table is empty after all.
            // Fall through to the dealloc path below.
        }
    }

    // Really deallocate.

    if (slowpath(newisa.deallocating)) {
        // The object is already deallocating: this is an over-release
        ClearExclusive(&isa.bits);
        if (sideTableLocked) sidetable_unlock();
        return overrelease_error();
        // does not actually return
    }
    newisa.deallocating = true;
    if (!StoreExclusive(&isa.bits, oldisa.bits, newisa.bits)) goto retry;

    if (slowpath(sideTableLocked)) sidetable_unlock();

    __sync_synchronize();
    if (performDealloc) {
        // Send the dealloc message
        ((void(*)(objc_object *, SEL))objc_msgSend)(this, SEL_dealloc);
    }
    return true;
}
Copy the code
  • sidetable_release

If isa is a raw pointer, i.e. no reference count is kept in isa, the count in the side table is manipulated directly:

uintptr_t
objc_object::sidetable_release(bool performDealloc)
{
#if SUPPORT_NONPOINTER_ISA
    assert(!isa.nonpointer);
#endif
    SideTable& table = SideTables()[this];

    bool do_dealloc = false;   // should dealloc be sent?

    table.lock();
    RefcountMap::iterator it = table.refcnts.find(this);
    if (it == table.refcnts.end()) {
        // No entry: the stored count is already 0, so deallocate
        do_dealloc = true;
        table.refcnts[this] = SIDE_TABLE_DEALLOCATING;
    } else if (it->second < SIDE_TABLE_DEALLOCATING) {
        // SIDE_TABLE_WEAKLY_REFERENCED may be set. Don't change it.
        // The stored count is 0 (only the weak bit may be set): deallocate
        do_dealloc = true;
        it->second |= SIDE_TABLE_DEALLOCATING;
    } else if (! (it->second & SIDE_TABLE_RC_PINNED)) {
        // Otherwise just decrement the stored count
        it->second -= SIDE_TABLE_RC_ONE;
    }
    table.unlock();
    if (do_dealloc  &&  performDealloc) {
        // Send the dealloc message
        ((void(*)(objc_object *, SEL))objc_msgSend)(this, SEL_dealloc);
    }
    return do_dealloc;
}
Copy the code
  • subc

subc subtracts RC_ONE from the isa bits, i.e. it decrements extra_rc by one, and reports through carryout whether the subtraction underflowed.

static ALWAYS_INLINE uintptr_t 
subc(uintptr_t lhs, uintptr_t rhs, uintptr_t carryin, uintptr_t *carryout)
{
    return __builtin_subcl(lhs, rhs, carryin, carryout);
}
Copy the code
  • rootRelease_underflow

If the decrement underflows, rootRelease_underflow is executed; it re-enters rootRelease with handleUnderflow set to true.

NEVER_INLINE bool 
objc_object::rootRelease_underflow(bool performDealloc)
{
    return rootRelease(performDealloc, true);
}
Copy the code
  • sidetable_subExtraRC_nolock

sidetable_subExtraRC_nolock removes up to delta_rc from the side-table count so the caller can move that amount back into isa.extra_rc.

size_t 
objc_object::sidetable_subExtraRC_nolock(size_t delta_rc)
{
    assert(isa.nonpointer);
    SideTable& table = SideTables()[this];

    RefcountMap::iterator it = table.refcnts.find(this);
    if (it == table.refcnts.end()  ||  it->second == 0) {
        return 0;
    }
    size_t oldRefcnt = it->second;

    // isa-side bits should not be set here
    assert((oldRefcnt & SIDE_TABLE_DEALLOCATING) == 0);     // the object must not be deallocating
    assert((oldRefcnt & SIDE_TABLE_WEAKLY_REFERENCED) == 0);

    // Subtract delta_rc from the count stored in the side table
    size_t newRefcnt = oldRefcnt - (delta_rc << SIDE_TABLE_RC_SHIFT);
    assert(oldRefcnt > newRefcnt);  // must not underflow
    it->second = newRefcnt;         // store the new side-table count
    return delta_rc;
}
Copy the code
  • overrelease_error

This method is triggered if release is called again while the object is already being deallocated:

NEVER_INLINE
bool 
objc_object::overrelease_error()
{
    _objc_inform_now_and_on_crash("%s object %p overreleased while already deallocating; break on objc_overrelease_during_dealloc_error to debug", object_getClassName((id)this), this);
    objc_overrelease_during_dealloc_error();
    return false;  // allow rootRelease() to tail-call this
}
Copy the code
  • __sync_synchronize

__sync_synchronize() issues a full memory barrier: no memory operand is moved across the operation in either direction, and instructions are emitted as necessary to keep the processor from speculating loads across it or queuing stores after it. Here it guarantees that all prior memory operations on the object are complete before dealloc is sent.
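
A minimal illustration of the same builtin (illustrative only; not the runtime's code, and real concurrent code would normally use C11/C++11 atomics):

#include <pthread.h>
#include <stdio.h>

static int payload = 0;
static volatile int ready = 0;

static void *producer(void *arg) {
    (void)arg;
    payload = 42;             // write the data first
    __sync_synchronize();     // full barrier: the write above is visible before the flag below
    ready = 1;
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, producer, NULL);
    while (!ready) { /* spin until the producer publishes */ }
    __sync_synchronize();     // matching barrier on the consumer side
    printf("payload = %d\n", payload);   // 42
    pthread_join(t, NULL);
    return 0;
}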


Refer to the article

The underlying implementation of alloc, new, copy and mutableCopy
The underlying implementation of retainCount, retain and release