Xiaogu's low-level exploration collection

  • Most developers have heard of locks. Locks were created for safety: to protect shared data from concurrent access

  • @synchronized is probably the lock we use most often in development. Today, a little chat about it ~

1. Simple use of @synchronized

    1. First, let's write a little ticket-selling demo
```objc
#import "ViewController.h"

@interface ViewController ()
@property (nonatomic, assign) NSInteger ticketCounts;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view.

    self.ticketCounts = 30;
    [self xg_doPay];
}

// buy tickets
- (void)xg_doPay {
    // buy tickets asynchronously on several concurrent queues
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 10; i++) {
            [self saleTickets];
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 15; i++) {
            [self saleTickets];
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 10; i++) {
            [self saleTickets];
        }
    });
}

// sell a ticket
- (void)saleTickets {
    if (self.ticketCounts > 0) {
        NSLog(@"Current number of tickets: %ld", (long)self.ticketCounts);
        self.ticketCounts--;
    } else {
        NSLog(@"No tickets left ~~~");
    }
}

@end
```

Since the tickets are sold asynchronously on multiple threads, the reads and writes interleave out of order, the count goes wrong, and that is not thread-safe.

    1. At this point, we usually do the obvious thing: add a @synchronized lock
```objc
// sell a ticket
- (void)saleTickets {
    @synchronized (self) {
        if (self.ticketCounts > 0) {
            NSLog(@"Current number of tickets: %ld", (long)self.ticketCounts);
            self.ticketCounts--;
        } else {
            NSLog(@"No tickets left ~~~");
        }
    }
}
```

Run it again at this point, and the ticket count decrements in order. No more problem.

    1. Today we're going to look at what this @synchronized actually does, and why it works ~

2. Underlying analysis of @synchronized

2.1. Locating the source

For low-level analysis, there are just a few tricks I use. 😆

    1. First, write a simple example using @synchronized, then look at the compiled structure

    1. Then drop into assembly and set a breakpoint

The assembly shows calls to objc_sync_enter and objc_sync_exit, so we have our source location in the objc4 runtime.

2.2. Source code analysis

    1. The comments in the source code below also confirm that what we found is correct.

objc_sync_enter

```cpp
// Begin synchronizing on 'obj'.
// Allocates recursive mutex associated with 'obj' if needed.
// Returns OBJC_SYNC_SUCCESS once lock is acquired.
int objc_sync_enter(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, ACQUIRE);
        ASSERT(data);
        data->mutex.lock();
    } else {
        // @synchronized(nil) does nothing
        if (DebugNilSync) {
            _objc_inform("NIL SYNC DEBUG: @synchronized(nil); set a breakpoint on objc_sync_nil to debug");
        }
        objc_sync_nil();
    }

    return result;
}
```

objc_sync_exit

```cpp
// End synchronizing on 'obj'.
// Returns OBJC_SYNC_SUCCESS or OBJC_SYNC_NOT_OWNING_THREAD_ERROR
int objc_sync_exit(id obj)
{
    int result = OBJC_SYNC_SUCCESS;

    if (obj) {
        SyncData* data = id2data(obj, RELEASE);
        if (!data) {
            result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
        } else {
            bool okay = data->mutex.tryUnlock();
            if (!okay) {
                result = OBJC_SYNC_NOT_OWNING_THREAD_ERROR;
            }
        }
    } else {
        // @synchronized(nil) does nothing
    }

    return result;
}
```

Both of these methods call id2data and work with the SyncData it returns, so I suspect that's where the interesting work happens.

    1. Now take a look at the SyncData structure
```cpp
typedef struct alignas(CacheLineSize) SyncData {
    struct SyncData* nextData;
    DisguisedPtr<objc_object> object;
    int32_t threadCount;  // number of THREADS using this block
    recursive_mutex_t mutex;
} SyncData;
```
    1. Looking at this structure for the first time, without thinking too hard, the naming alone gives a certain impression

Of course, this is just a first impression, and I don't yet know whether it's right:

It feels like a singly linked list, where each node stores the locked object, a thread count, and a recursive mutex

    1. Next, let's dissect the id2data function (it's a long one, but I'll use my usual trick: fold the big blocks and go through them one by one ~)

Here's a preview of the whole function first

    1. The definition of SUPPORT_DIRECT_THREAD_KEYS

    1. Next, let's examine that branch

    1. The cache operation

    1. Then the locking operation inside

    1. Then at done, it unlocks
```cpp
done:
    lockp->unlock();
    if (result) {
        // Only new ACQUIRE should get here.
        // All RELEASE and CHECK and recursive ACQUIRE are
        // handled by the per-thread caches above.
        if (why == RELEASE) {
            // Probably some thread is incorrectly exiting
            // while the object is held by another thread.
            return nil;
        }
        if (why != ACQUIRE) _objc_fatal("id2data is buggy");
        if (result->object != object) _objc_fatal("id2data is buggy");

#if SUPPORT_DIRECT_THREAD_KEYS
        if (!fastCacheOccupied) {
            // Save in fast thread cache
            tls_set_direct(SYNC_DATA_DIRECT_KEY, result);
            tls_set_direct(SYNC_COUNT_DIRECT_KEY, (void*)1);
        } else
#endif
        {
            // Save in thread cache
            if (!cache) cache = fetch_cache(YES);
            cache->list[cache->used].data = result;
            cache->list[cache->used].lockCount = 1;
            cache->used++;
        }
    }

    return result;
}
```

This part reads much better; it has plenty of comments.

    1. Finally, I borrowed a schematic diagram of the structure

The above covers the use and underlying structure of @synchronized.

To be continued ~~~