Summary of basic principles of iOS

The purpose of this article is to explain the basic concepts and principles of processes, threads, multithreading, thread pools, and related topics.

Threads and processes

Definitions of threads and processes

thread

  • A thread is the basic execution unit of a process; all of a process's tasks are executed on threads
  • For a process to perform any task it must have threads; a process contains at least one thread
  • A program starts with one thread by default, called the main thread (or UI thread)

process

  • A process is an application that is currently running on the system
  • Processes are independent of one another; each process runs in its own dedicated and protected memory space
  • On the Mac, Activity Monitor lets you view the processes (and the threads they have started) currently running on the system

A process is a container for the threads that perform its tasks. iOS development is single-process: one process is one app, and processes are independent of each other. Alipay, WeChat, and QQ, for example, are all separate processes.

The relationship between processes and threads

The relationship between processes and threads mainly involves two aspects:

  • Address space

    • Threads of the same process share that process's address space
    • whereas each process has its own separate address space
  • Resource ownership

    • Threads within the same process share the process's resources, such as memory, I/O, and CPU time
    • whereas resources are independent between processes

The relationship between the two is like that between a factory and its assembly lines: factories are independent of each other, while the assembly lines inside a factory share the factory's resources. The process is the factory, and a thread is one assembly line inside it.

For processes and threads, there are a few more notes:

  • 1: Multiple processes are more robust than multiple threads

    • In protected mode, a crash in one process does not affect other processes
    • whereas if one thread crashes, the whole process dies
  • 2: Application scenarios: frequent switching versus concurrency with shared data

    • Switching between processes consumes a lot of resources and is less efficient, so when frequent switching is required, threads are better than processes
    • If concurrent operations need to share variables, only threads can be used; processes cannot
  • 3: Execution process

    • Each independent process has a program entry point, a sequential execution order, and a program exit
    • but threads cannot execute independently; they must live inside an application, and the application provides execution control for its multiple threads
  • 4: A thread is the basic unit of processor scheduling, but a process is not

  • 5: Threads have no address space of their own; threads are contained in the process's address space

Thread and Runloop relationship

  • 1: Runloops and threads correspond one to one: one runloop corresponds to one core thread. Runloops can be nested, but each thread has only one core runloop, and the mapping is stored in a global dictionary.
  • 2: A runloop is used to manage its thread: when the thread's runloop is enabled, the thread goes to sleep after finishing its work and is woken up when a new task arrives.
  • 3: A runloop is created on first access and destroyed when its thread exits.
  • 4: For the main thread, the runloop is created by default as soon as the program starts.
  • 5: For child threads, the runloop is lazily loaded and is created only when we first ask for it. So when using a timer on a child thread, make sure the child thread's runloop has been created and is running, otherwise the timer will never call back (see the sketch below).
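
A minimal sketch of point 5, assuming a hypothetical timerTest callback: the timer is scheduled on a child thread, and fetching and running the child thread's runloop is what allows it to fire.

// Entry point executed on a child thread; timerTest is a hypothetical timer callback
- (void)childThreadEntry {
    [NSTimer scheduledTimerWithTimeInterval:1.0
                                     target:self
                                   selector:@selector(timerTest)
                                   userInfo:nil
                                    repeats:YES];
    // Asking for currentRunLoop lazily creates the child thread's runloop;
    // without the run call below, the timer callback never fires
    [[NSRunLoop currentRunLoop] run];
}

// Elsewhere: start the child thread
[NSThread detachNewThreadSelector:@selector(childThreadEntry) toTarget:self withObject:nil];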

multithreading

Principle of multithreading

  • On a single-core CPU, the CPU can execute only one thread at any given moment; that is, only one thread is actually working at a time
  • What looks like simultaneous execution of multiple threads in iOS is really the CPU switching quickly between multiple tasks. Because the time the CPU spends on each thread before switching is short enough, it produces the effect of multiple threads running "simultaneously". The interval between switches is called a time slice

Meaning of multithreading

advantages

  • Can appropriately improve the efficiency of program execution
  • Can appropriately improve resource utilization, such as CPU and memory
  • When the task on a thread is complete, the thread is automatically destroyed

disadvantages

  • Starting a thread occupies a certain amount of memory; by default each thread occupies 512 KB
  • Opening a large number of threads occupies a large amount of memory and reduces the program's performance
  • The more threads there are, the more overhead the CPU spends on scheduling them
  • Programming becomes more complex, e.g. communication between threads and data sharing among multiple threads

Multithreaded life cycle

The life cycle of a thread has five stages: new, ready, running, blocked, and dead.

  • New: the thread object is instantiated

  • Ready: the thread object calls the start method, which adds it to the schedulable thread pool to wait for CPU scheduling. start does not execute the task immediately; the thread enters the ready state and must wait for the CPU to schedule it, i.e. to move it from ready to running

  • Running: the CPU schedules and executes threads from the schedulable thread pool. Before its task finishes, a thread's state may switch back and forth between ready and running; the CPU drives these transitions and the developer cannot interfere

  • Blocked: when a predetermined condition is met, sleep or a synchronization lock blocks the thread's execution; when sleep ends, the thread is put back into the ready queue. The sleep settings below belong to NSThread (see the short sketch after this list)

    • sleepUntilDate: blocks the current thread until the specified time, i.e. sleep until a given date
    • sleepForTimeInterval: sleeps the thread for the given time interval, i.e. sleep for a given duration
    • Synchronization lock: @synchronized(self)
  • Dead: falls into two categories

    • Natural death, i.e. the thread finishes executing its task
    • Unnatural death: when some condition is met, execution is terminated from inside the thread (or from the main thread), e.g. by calling exit
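
A minimal sketch of the blocking APIs named above (the 2-second values are arbitrary):

// Sleep until a specific date
[NSThread sleepUntilDate:[NSDate dateWithTimeIntervalSinceNow:2.0]];

// Sleep for a given duration
[NSThread sleepForTimeInterval:2.0];

// Synchronization lock: a thread that cannot acquire the lock blocks here
@synchronized (self) {
    // critical section
}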

A running thread is given a period of time (called a time slice) during which it may execute.

  • If the time slice is exhausted, the thread re-enters the ready queue
  • If the time slice is not used up but the thread needs to wait for some event, it enters the blocked queue
  • After the awaited event occurs, the thread re-enters the ready queue
  • Whenever a thread leaves the running state, i.e. finishes executing or is forced to exit, the system selects another thread from the ready queue to continue executing

Exit and cancel instructions for the thread

  • 'exit': once the thread is forcibly terminated, none of the subsequent code is executed
  • 'cancel': marks the current thread as cancelled, but does not stop a thread that is already executing

Does a higher priority of a thread mean faster execution of a task?

No. How fast a thread finishes its task depends not only on its priority but also on the size of the resource (i.e. the complexity of the task) and on CPU scheduling. In NSThread, threadPriority has been superseded by qualityOfService, whose relevant enumeration values are listed below.
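
For reference, this is the NSQualityOfService enumeration declared in Foundation (raw values as they appear in recent SDK headers):

typedef NS_ENUM(NSInteger, NSQualityOfService) {
    NSQualityOfServiceUserInteractive = 0x21, // UI work; highest priority
    NSQualityOfServiceUserInitiated   = 0x19, // work the user is actively waiting for
    NSQualityOfServiceUtility         = 0x11, // long-running work with visible progress
    NSQualityOfServiceBackground      = 0x09, // maintenance work the user is unaware of
    NSQualityOfServiceDefault         = -1    // no explicit QoS; inherited from the environment
};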

Thread Pool Principle

  • [Step 1] Determine whether the core thread pool is all executing tasks

    • Return NO and create a new worker thread to execute
    • Return YES to enter [Step 2]
  • [Step 2] Determine whether the thread pool work queue is full

    • Returns NO and stores the task to a work queue waiting for CPU scheduling
    • Return YES to enter [Step 3]
  • [Step 3] Check whether all threads in the thread pool are in the executing state

    • Returns NO to schedule free threads from the schedulable thread pool to execute the task
    • Return YES to enter [Step 4]
  • [Step 4] Apply a saturation policy. There are four main policies (none of them exist in iOS; the names come from Java's ThreadPoolExecutor). A hedged sketch of the whole flow follows this list.

    • AbortPolicy: directly throws a RejectedExecutionException, preventing the system from running normally
    • CallerRunsPolicy: rolls the task back to the caller, which runs it itself
    • DiscardOldestPolicy: discards the task that has been waiting the longest
    • DiscardPolicy: silently discards the task
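
The flow above written out as illustrative Objective-C pseudocode; this is not a real iOS API, and every name in it (submitTask:, coreThreadCount, workQueue, and so on) is hypothetical, simply mirroring the behavior described in the four steps:

// Hypothetical sketch of a thread pool's dispatch flow; not an actual iOS API
- (void)submitTask:(dispatch_block_t)task {
    if (self.busyCoreThreadCount < self.coreThreadCount) {      // Step 1: core pool not full
        [self startWorkerThreadWithTask:task];
    } else if (!self.isWorkQueueFull) {                         // Step 2: work queue has room
        [self.workQueue addObject:task];                        // wait here for scheduling
    } else if (self.busyThreadCount < self.maxThreadCount) {    // Step 3: pool has idle capacity
        [self startWorkerThreadWithTask:task];
    } else {                                                    // Step 4: saturated
        [self applySaturationPolicyToTask:task];                // abort / caller-runs / discard-oldest / discard
    }
}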

IOS multithreading implementation scheme

There are four ways to implement multithreading in iOS: pthread, NSThread, GCD, and NSOperation.

Here are simple examples of each of the four approaches:

// Forward declaration of the pthread entry point used below
void *pthreadTest(void *para);

// *********1: pthread*********
pthread_t threadId = NULL;
// C string
char *cString = "HelloCode";
/*
 pthread_create parameters:
 1. pthread_t *: pointer to the thread to create. C structure types usually end in _t/Ref and are not written with a *
 2. Thread attributes: nil (OC null object) / NULL (C null pointer, 0)
 3. void *(*)(void *): the function to run on the new thread; void * plays the role of OC's id
 4. void *: the argument passed to that function
 */
int result = pthread_create(&threadId, NULL, pthreadTest, cString);
if (result == 0) {
    NSLog(@"success");
} else {
    NSLog(@"failure");
}

// *********2: NSThread*********
[NSThread detachNewThreadSelector:@selector(threadTest) toTarget:self withObject:nil];

// *********3: GCD*********
dispatch_async(dispatch_get_global_queue(0, 0), ^{
    [self threadTest];
});

// *********4: NSOperation*********
[[[NSOperationQueue alloc] init] addOperationWithBlock:^{
    [self threadTest];
}];

- (void)threadTest {
    NSLog(@"begin");
    NSInteger count = 1000 * 100;
    for (NSInteger i = 0; i < count; i++) {
        NSInteger num = i;
        NSString *name = @"zhang";
        NSString *myName = [NSString stringWithFormat:@"%@ - %zd", name, num];
        NSLog(@"%@", myName);
    }
    NSLog(@"over");
}

// pthread entry point; para is the C string passed to pthread_create
void *pthreadTest(void *para) {
    NSLog(@"===> %@ %s", [NSThread currentThread], (char *)para);
    return NULL;
}

Bridge between C and OC

Bridging between C and Objective-C objects is involved here, as described below:

  • __bridge only performs the type conversion; it does not transfer ownership (memory management) of the object
  • __bridge_retained (or CFBridgingRetain) converts an Objective-C object into a Core Foundation object and hands its ownership over to us; we must later call CFRelease (or a related function) to release the object
  • __bridge_transfer (or CFBridgingRelease) converts a Core Foundation object into an Objective-C object and hands its ownership (memory management) over to ARC
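
A minimal sketch of the three casts under ARC (CFStringCreateCopy is used only to obtain a CF object that we own):

NSString *ocString = @"HelloCode";

// __bridge: type conversion only, ownership unchanged
CFStringRef cf1 = (__bridge CFStringRef)ocString;

// __bridge_retained / CFBridgingRetain: we now own the CF object and must release it
CFStringRef cf2 = (__bridge_retained CFStringRef)ocString;
CFRelease(cf2);

// __bridge_transfer / CFBridgingRelease: hand a CF object we own back to ARC; no CFRelease needed
CFStringRef cf3 = CFStringCreateCopy(kCFAllocatorDefault, (__bridge CFStringRef)ocString);
NSString *back = (__bridge_transfer NSString *)cf3;
NSLog(@"%@ %@", cf1, back);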

Thread safety

When multiple threads access the same resource at the same time, data corruption and data-safety problems can occur. There are two common solutions:

  • Mutex (i.e. synchronization lock): @synchronized
  • Spin lock

The mutex

  • Used to protect a critical section, ensuring that only one thread executes it at a time
  • If there is only one place in the code that needs locking, most code simply uses self as the lock to avoid creating a separate lock object
  • When a new thread reaches mutex-protected code and finds another thread already executing it, the new thread goes to sleep

There are a few other things to note about mutexes (a short example follows this list):

  • Lock as small a range as possible; the larger the locked range, the worse the efficiency
  • Any NSObject can be used as a lock
  • The lock object must be accessible to all of the threads involved
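
A minimal sketch of @synchronized with self as the lock, assuming a hypothetical ticketCount property:

// ticketCount is a hypothetical property used only for illustration
- (void)saleTicket {
    @synchronized (self) {              // self as the lock object
        if (self.ticketCount > 0) {
            self.ticketCount -= 1;      // only one thread at a time reaches this line
            NSLog(@"sold one ticket, %ld left", (long)self.ticketCount);
        }
    }
}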

spinlocks

  • A spin lock is similar to a mutex, but instead of blocking the thread by putting it to sleep, it blocks by busy-waiting (spinning in place, hence the name) until the lock is acquired
  • Usage scenario: when the lock will only be held for a short time and the thread should not pay the cost of being rescheduled. The property modifier atomic contains a spin lock (see the note after this list)
  • With a spin lock, when a new thread finds the code locked by another thread, it loops endlessly, repeatedly checking whether the locked code has finished, which costs more CPU
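
For reference, the classic spin lock on Apple platforms was OSSpinLock (from libkern/OSAtomic.h); it was deprecated in iOS 10 / macOS 10.12 because of priority-inversion problems, and os_unfair_lock is the recommended replacement. A minimal sketch of the replacement:

#import <os/lock.h>

// typically stored as an instance variable shared by the threads involved
os_unfair_lock lock = OS_UNFAIR_LOCK_INIT;

os_unfair_lock_lock(&lock);
// critical section: only one thread at a time
os_unfair_lock_unlock(&lock);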

[Interview question] : Spinlocks vs mutex

  • Same: at any moment, only one thread is allowed to execute the protected task; both provide synchronization

  • Different:

    • Mutex: if another thread is found executing, the current thread goes to sleep (ready state) and waits, i.e. it is suspended; it is woken up and resumes once the other thread releases the lock
    • Spin lock: if another thread is found executing, the current thread keeps polling (busy-waiting) the whole time, which costs more in performance
  • Scenario: choose the lock according to the complexity of the task, i.e. how long the lock will be held

    • If the task in the critical section is relatively short, use a spin lock
    • Otherwise, use a mutex

Atomic atomic locks & Nonatomic nonatomic locks

atomic and nonatomic are mainly used to modify properties. Some related notes:

  • atomic is an atomic property, designed for multithreading, and is the default!

    • It adds a lock (a spin lock) only in the property's setter, guaranteeing that only one thread at a time performs a write on the property
    • It is a thread-handling technique of "one (thread) writes while many (threads) read" at the same time
    • Commonly used in Mac development
  • nonatomic is a nonatomic property

    • No lock! Higher performance!
    • Commonly used in mobile development

What is the difference between atomic and nonatomic

  • nonatomic

    • Non-atomic property
    • Not thread-safe; suitable for mobile devices with limited memory
  • atomic

    • Atomic property (with thread protection), designed for multithreading, and the default
    • Guarantees that only one thread writes at a time (but multiple threads may read at the same time)
    • atomic carries its own lock (a spin lock); single-thread write, multi-thread read
    • Thread protection, which consumes a lot of resources

iOS development advice

  • Declare all properties as nonatomic
  • Avoid having multiple threads contend for the same resource as far as possible; hand the locking and resource-contention logic over to the server side to reduce the pressure on the mobile client

Interthread communication

In the Threading Programming Guide, Apple lists the following ways threads can communicate:

  • Direct messaging: through the performSelector family of methods, one thread can ask a specified target thread to execute a task. Because the task is executed in the context of the target thread, messages sent this way are automatically serialized (see the sketch after this list)

  • Global variables, shared memory blocks, and objects: another simple way to pass information between two threads is to use global variables, shared objects, or shared memory blocks. Although shared variables are fast and easy, they are more fragile than direct messaging. Shared variables must be carefully protected with locks or other synchronization mechanisms to ensure correct code; failure to do so may result in race conditions, data corruption, or crashes

  • Conditional execution: A condition is a synchronization tool that can be used to control when a thread executes a particular part of code. You can treat a condition as a lock and let the thread run only when the specified condition is met.

  • Runloop Sources: A custom Runloop source configuration allows specific application messages to be received on a thread. Because Runloop sources are event-driven, threads automatically go to sleep when there is nothing to do, increasing thread efficiency

  • Ports and Sockets: Port-based communication is a more sophisticated way to communicate between two threads, but it’s also a very reliable technique. More importantly, ports and sockets can be used to communicate with external entities, such as other processes and services. To improve efficiency, the port is implemented using a Runloop source, so the thread goes to sleep when there is no data waiting on the port. Note that port communication needs to add the port to the main thread Runloop, otherwise it will not go to the port callback method

  • Message queues: Traditional multiprocessing services define a first-in, first-out (FIFO) queue abstraction for managing incoming and outgoing data. Although message queues are simple and convenient, they are not as efficient as some other communication technologies

  • Cocoa Distributed Objects: Distributed objects are a Cocoa technology that provides a high-level implementation of port-based communication. Although it is possible to use this technique for interthread communication, it is strongly recommended not to do so because of the overhead involved. Distributed objects are better suited for communicating with other processes, although transactions between these processes are also expensive
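
A minimal sketch of the first technique, direct messaging through the performSelector family (updateUI: and someWorkerThread are hypothetical):

// Run a task on the main thread from a background thread
[self performSelectorOnMainThread:@selector(updateUI:)
                       withObject:@"result"
                    waitUntilDone:NO];

// Run a task on an arbitrary NSThread (its runloop must be running for the selector to fire)
[self performSelector:@selector(updateUI:)
             onThread:someWorkerThread
           withObject:@"result"
        waitUntilDone:NO];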

atomic

  • How is atomic implemented under the hood?
  • Is atomic absolutely thread-safe?

With these questions in mind, let's look at atomic in the objc runtime source. Since atomic is a property modifier, it must be tied to the property's setter and getter, and indeed we find the implementations of the related functions:

static inline void reallySetProperty(id self, SEL _cmd, id newValue, ptrdiff_t offset,
                                     bool atomic, bool copy, bool mutableCopy)
{
    if (offset == 0) {
        object_setClass(self, newValue);
        return;
    }

    id oldValue;
    id *slot = (id*) ((char*)self + offset);

    if (copy) {
        newValue = [newValue copyWithZone:nil];
    } else if (mutableCopy) {
        newValue = [newValue mutableCopyWithZone:nil];
    } else {
        if (*slot == newValue) return;
        newValue = objc_retain(newValue);
    }

    if (!atomic) {
        oldValue = *slot;
        *slot = newValue;
    } else {
        spinlock_t& slotlock = PropertyLocks[slot];
        slotlock.lock();
        oldValue = *slot;
        *slot = newValue;
        slotlock.unlock();
    }

    objc_release(oldValue);
}

As you can see, when the setter is atomic, the write is wrapped in lock and unlock operations.

Now look at the getter:

id objc_getProperty(id self, SEL _cmd, ptrdiff_t offset, BOOL atomic) {
    if (offset == 0) {
        return object_getClass(self);
    }

    // Retain release world
    id *slot = (id*) ((char*)self + offset);
    if (!atomic) return *slot;

    // Atomic retain release world
    spinlock_t& slotlock = PropertyLocks[slot];
    slotlock.lock();
    id value = objc_retain(*slot);
    slotlock.unlock();

    // for performance, we (safely) issue the autorelease OUTSIDE of the spinlock.
    return objc_autoreleaseReturnValue(value);
}

Obviously, the atomic getter locks as well.

We notice that all of the locking code is declared with the type spinlock_t, which is why, when asked what atomic is built on, most of us answer "a spin lock". But if you follow the definition, spinlock_t is nowadays just an alias for mutex_tt, a mutex whose lock and unlock calls are implemented with os_unfair_lock:

// Property setter (atomic branch uses spinlock_t)
if (!atomic) {
    oldValue = *slot;
    *slot = newValue;
} else {
    spinlock_t& slotlock = PropertyLocks[slot];
    slotlock.lock();
    oldValue = *slot;
    *slot = newValue;
    slotlock.unlock();
}

// spinlock_t is only an alias for the mutex type
using spinlock_t = mutex_tt<LOCKDEBUG>;

class mutex_tt : nocopy_t {
    os_unfair_lock mLock;
public:
    constexpr mutex_tt() : mLock(OS_UNFAIR_LOCK_INIT) {
        lockdebug_remember_mutex(this);
    }

    constexpr mutex_tt(const fork_unsafe_lock_t unsafe) : mLock(OS_UNFAIR_LOCK_INIT) { }

    void lock() {
        lockdebug_mutex_lock(this);
        os_unfair_lock_lock_with_options_inline(&mLock, OS_UNFAIR_LOCK_DATA_SYNCHRONIZATION);
    }

    void unlock() {
        lockdebug_mutex_unlock(this);
        os_unfair_lock_unlock_inline(&mLock);
    }
};

// the debug hook called from lock()
void lockdebug_mutex_lock(mutex_t *lock) {
    auto& locks = ownedLocks();
    if (hasLock(locks, lock, MUTEX)) {
        _objc_fatal("deadlock: relocking mutex");
    }
    setLock(locks, lock, MUTEX);
}

So the essence of atomic today is not a spin lock, at least not in the current runtime. Looking at older objc source, the implementation of atomic in earlier versions was indeed a genuine spin lock:

typedef uintptr_t spin_lock_t;
OBJC_EXTERN void _spin_lock(spin_lock_t *lockp);
OBJC_EXTERN int  _spin_lock_try(spin_lock_t *lockp);
OBJC_EXTERN void _spin_unlock(spin_lock_t *lockp);

It follows that:

atomic simply wraps the setter and getter methods in a lock.

That brings us to the second question: is atomic absolutely thread-safe? Look at the following code first: what will number end up being? 20000?

@property (atomic, assign) NSInteger number;

- (void)atomicTest {
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 10000; i++) {
            self.number = self.number + 1;
            NSLog(@"A-self.number is %ld", self.number);
        }
    });
    dispatch_async(dispatch_get_global_queue(0, 0), ^{
        for (int i = 0; i < 10000; i++) {
            self.number = self.number + 1;
            NSLog(@"B-self.number is %ld", self.number);
        }
    });
}

No, it will not be 20000. Why not? number is protected by atomic, so why is there still a thread-safety problem? The answer was given above, but it is easy to miss: atomic only locks the setter and the getter. In the code above, two asynchronous threads run at the same time. Thread A may call the getter and read number, then the CPU switches to thread B, which calls the getter and reads the same value; both threads then add 1 and call the setter, writing the same result. The read-modify-write (+1) as a whole is not atomic, so increments are lost, duplicate values appear, and the final count falls short of 20000.
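
A minimal sketch of one way to make the increment safe: protect the whole read-modify-write, not just the accessors (here with @synchronized; a serial queue or another lock would work just as well):

dispatch_async(dispatch_get_global_queue(0, 0), ^{
    for (int i = 0; i < 10000; i++) {
        @synchronized (self) {
            // the whole read-modify-write is now one critical section
            self.number = self.number + 1;
        }
    }
});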

  • The old underlying implementation of atomic was a spin lock; the current version uses a mutex (os_unfair_lock).
  • atomic is not absolutely thread-safe: it only guarantees that the getter and setter themselves are safe to enter. Once execution is outside the getter and setter, multithreaded access is no longer guaranteed to be safe and must be controlled by the programmer, so an atomic property is not necessarily thread-safe.