@synchronized, also known as the object lock, is probably the most frequently used lock in iOS development because it is so easy to use.

The usage is as follows:

@synchronized (obj) {
    // code to protect
}

How it works

So how does @synchronized implement the lock function? Let’s look at an example:

- (void)synchronizedTest {
    @synchronized (self) {
        NSLog(@"====synchronized====");
    }
}

Set a breakpoint in the method and step into the assembly. We can see that two calls are wrapped around the block:

objc_sync_enter
objc_sync_exit

Alternatively, rewrite the source with clang:

clang -x objective-c -rewrite-objc -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator.sdk main.m

Either way, the analysis should focus on these two functions:

objc_sync_enter
objc_sync_exit

Setting symbolic breakpoints on these functions lets us locate the corresponding code in the objc runtime source.

// Allocates recursive mutex associated with 'obj' if needed.
int objc_sync_enter(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, ACQUIRE);
        assert(data);
        data->mutex.lock();
    } else {
        // @synchronized(nil) does nothing
        if (DebugNilSync) {
            _objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
        }
        objc_sync_nil();
    }

    return result;
}

BREAKPOINT_FUNCTION(
    void objc_sync_nil(void)
);

The following conclusions can be drawn from the code:

  • @synchronized uses recursive locking (a recursive mutex)
  • @synchronized(nil) does nothing; objc_sync_nil exists only as a debugging breakpoint for this case

Let’s look at what @synchronized does when obj is non-nil.

SyncData* data = id2data(obj, ACQUIRE);

From this line of code, you can see that obj's lock information is stored in a SyncData structure, defined as follows:

typedef struct alignas(CacheLineSize) SyncData {
    struct SyncData* nextData;
    DisguisedPtr<objc_object> object;
    int32_t threadCount; 
    recursive_mutex_t mutex;
} SyncData;
  • struct SyncData* nextData: pointer to the next SyncData node in the list
  • DisguisedPtr<objc_object> object: the locked object
  • int32_t threadCount: the number of threads using this SyncData
  • recursive_mutex_t mutex: the recursive lock itself

What is the process of getting data from the SyncData structure?

  1. If TLS is supported, look up obj's information in the TLS cache first. This checks the per-thread single-entry cache for a matching object.
SyncData *data = (SyncData *)tls_get_direct(SYNC_DATA_DIRECT_KEY);

Thread Local Storage (TLS) is private per-thread storage provided by the operating system, usually with limited capacity.

result = data;
lockCount = (uintptr_t)tls_get_direct(SYNC_COUNT_DIRECT_KEY);
lockCount++;
tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);

Hitting this path more than once means the same thread re-entered the block (recursion), so only the lockCount is incremented.

If obj's information is found in TLS, the lock count is incremented by 1 and the data is returned. If not, go to step 2.
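This TLS fast path can be sketched in plain C with POSIX thread-specific keys. This is a hypothetical illustration, not the runtime's implementation; the key names mirror the runtime's SYNC_DATA_DIRECT_KEY / SYNC_COUNT_DIRECT_KEY slots but everything else is made up.

```c
#include <pthread.h>

// Each thread gets its own private slot for the lock count; one thread's
// writes are invisible to every other thread.
static pthread_key_t sync_count_key;

static void tls_init(void) {
    // Create the per-thread slot (no destructor needed for a plain counter).
    pthread_key_create(&sync_count_key, NULL);
}

static void tls_set_count(unsigned long n) {
    pthread_setspecific(sync_count_key, (void *)n);
}

static unsigned long tls_get_count(void) {
    // Reads back whatever THIS thread stored; other threads see their own value.
    return (unsigned long)pthread_getspecific(sync_count_key);
}
```

The real runtime additionally uses a faster direct-slot variant (tls_get_direct / tls_set_direct), but the per-thread semantics are the same.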

  2. Look for obj's data in the thread cache SyncCache. This checks the cache of every lock the current thread already owns for a matching object.
typedef struct {
    SyncData *data; 
    unsigned int lockCount;  // number of times THIS THREAD locked this block
} SyncCacheItem;

typedef struct SyncCache {
    unsigned int allocated;
    unsigned int used;
    SyncCacheItem list[0];
} SyncCache;

SyncCache *cache = fetch_cache(NO);
SyncCacheItem *item = &cache->list[i];
item->lockCount++;

If the current obj's data is present, increment its lock count in the SyncCache by one and return the data. If not, go to step 3.

  3. Look up the object in the global lists sDataLists

Lookups in sDataLists must themselves be locked to prevent races when multiple threads search concurrently. In sDataLists, SyncData is further wrapped in another structure, SyncList.

spinlock_t *lockp = &LOCK_FOR_OBJ(object);
SyncData **listp = &LIST_FOR_OBJ(object);

using spinlock_t = mutex_tt<LOCKDEBUG>;

#define LOCK_FOR_OBJ(obj) sDataLists[obj].lock
#define LIST_FOR_OBJ(obj) sDataLists[obj].data

struct SyncList {
    SyncData *data;
    spinlock_t lock;

    constexpr SyncList() : data(nil), lock(fork_unsafe_lock) { }
};

static StripedMap<SyncList> sDataLists;
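StripedMap is what lets sDataLists scale: the object's address is hashed to one of a fixed number of "stripes", and each stripe owns its own lock plus SyncData list, so unrelated objects rarely contend on the same lock. A rough C sketch of the bucketing idea (the hash below is illustrative; the runtime uses its own pointer hash and stripe count):

```c
#include <stdint.h>

// Hypothetical stripe count; the runtime picks a platform-dependent value.
#define STRIPE_COUNT 64

// Map an object's address to a stripe index.
static unsigned stripe_index(const void *obj) {
    uintptr_t addr = (uintptr_t)obj;
    // Mix the address bits a little before reducing modulo the count,
    // so that nearby allocations spread across different stripes.
    return (unsigned)((addr >> 4) ^ (addr >> 9)) % STRIPE_COUNT;
}
```

Two threads synchronizing on different objects usually land in different stripes and never touch each other's spinlock; only objects that hash to the same stripe share one.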

The list is then traversed looking for a match:

SyncData* p;
SyncData* firstUnused = NULL;
for (p = *listp; p != NULL; p = p->nextData) {
    if ( p->object == object ) {
        result = p;
        OSAtomicIncrement32Barrier(&result->threadCount);
        goto done;
    }
    if ( (firstUnused == NULL) && (p->threadCount == 0) )
        firstUnused = p;
}

If found, the data is written to the TLS cache and the thread cache SyncCache and returned.

// Write to the TLS cache
tls_set_direct(SYNC_DATA_DIRECT_KEY, result);
tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)1);

// Write to the thread cache
if (!cache) cache = fetch_cache(YES);
cache->list[cache->used].data = result;
cache->list[cache->used].lockCount = 1;
cache->used++;
  4. Otherwise, create a new SyncData, insert it into sDataLists, save it to the TLS cache and thread cache, and return it.
posix_memalign((void **)&result, alignof(SyncData), sizeof(SyncData));
result->object = (objc_object *)object;
result->threadCount = 1;
new (&result->mutex) recursive_mutex_t(fork_unsafe_lock);
result->nextData = *listp;
*listp = result;

Having looked at acquiring the lock, let's look at releasing it. The release path mirrors the acquisition path. If the object passed in is nil, nothing is done.

// End synchronizing on 'obj'.
// Returns OBJC_SYNC_SUCCESS or OBJC_SYNC_NOT_OWNING_THREAD_ERROR
int objc_sync_exit(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, RELEASE);
        if (!data) {
            result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
        } else {
            bool okay = data->mutex.tryUnlock();
            if (!okay) {
                result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
            }
        }
    } else {
        // @synchronized(nil) does nothing
    }

    return result;
}

If the object passed has a value:

  1. First look in the TLS cache. If found, decrement the lock count by 1 and update the cached value; if the lock count for this object drops to 0, delete it from the TLS cache.
lockCount--;
tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)lockCount);
if (lockCount == 0) {
    tls_set_direct(SYNC_DATA_DIRECT_KEY, NULL);
    OSAtomicDecrement32Barrier(&result->threadCount);
}
  2. Then look in the thread cache SyncCache. If found, decrement the lock count; if the lock count for this object drops to 0, remove the entry from the SyncCache.
item->lockCount--;
if (item->lockCount == 0) {
    cache->list[i] = cache->list[--cache->used];
    OSAtomicDecrement32Barrier(&result->threadCount);
}
  3. Finally, look it up in sDataLists; if found, clear the corresponding entry.

In short, @synchronized is a recursive lock. It maintains an internal table mapping objects to their lock information, and acquiring or releasing the lock is largely a matter of adjusting the lock count.
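The counting logic above can be modeled with a toy C structure. The names here are illustrative, not the runtime's, but the bookkeeping is the same: enter bumps lockCount, exit drops it, and a count of 0 means the cache entry can be removed.

```c
// Toy model of one per-thread cache entry for @synchronized(object).
typedef struct {
    const void *object;   // the @synchronized argument
    unsigned lockCount;   // times THIS thread entered the block
} CacheItem;

static void sync_enter_item(CacheItem *item) {
    item->lockCount++;    // recursive re-entry just increments the count
}

static int sync_exit_item(CacheItem *item) {
    item->lockCount--;
    return item->lockCount == 0;   // 1 means: drop this entry from the cache
}
```

Two nested entries therefore need two exits before the entry is discarded, which is exactly the balanced behavior the recursive mutex requires.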

Points to note

One caveat for users of @synchronized:

for (int i = 0; i < 200000; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        self.mArray = [NSMutableArray array];
    });
}

This code will crash. We keep creating new arrays, so mArray is constantly being assigned a new value and releasing its old one. With multiple threads, one thread may release the old value while another thread is still using it, and that crashes. So we need a lock here. Change the code above to the following:

@synchronized (self.mArray) {
    self.mArray = [NSMutableArray array];
}

The program still crashes, because @synchronized does nothing when its argument is nil: once self.mArray momentarily becomes nil during a release, the block stops locking anything. So what should we do instead?

The first way is to use semaphore locking:

for (int i = 0; i < 200000; i++) {
    dispatch_semaphore_wait(self.semp, DISPATCH_TIME_FOREVER);
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        self.mArray = [NSMutableArray array];
        dispatch_semaphore_signal(self.semp);
    });
}

The second uses NSLock directly:

NSLock *lock = [[NSLock alloc] init];
for (int i = 0; i < 200000; i++) {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        [lock lock];
        self.mArray = [NSMutableArray array];
        [lock unlock];
    });
}

Use @synchronized(self) with caution in day-to-day development. Passing self directly is simple and crude, but it can lead to deadlocks: self is likely to be accessed by external code and used as the key for other locks, and when two locks share the same key in this way, deadlocks become easy to trigger.

Conclusion

@synchronized is a recursive lock that wraps recursive_mutex_t underneath with some extra bookkeeping.

lockCount is what gives @synchronized its recursion support, and threadCount is what gives it its multithreading support.

The entry to the code block is objc_sync_enter(id obj) and the exit is objc_sync_exit(id obj).

The core processing is as follows:

  • If TLS caching is supported, look up the object's SyncData in the TLS cache and adjust its lockCount accordingly
  • If TLS caching is unsupported, or nothing is found there, look in the thread cache SyncCache, again finding the right lockCount and adjusting it
  • If neither cache hits, search the sDataLists linked lists; on a match, do the corresponding work and write the result into the TLS cache and the thread cache SyncCache
  • If still not found, create a new SyncData node for the object, insert it into sDataLists, and write it to the caches

Releasing an object is similar.

Note that @synchronized costs more performance than other locks, so heavy use is not recommended. Also, as shown above, @synchronized may silently fail to lock when its argument becomes nil during multithreaded operations.