preface

We finally come to the chapter on multithreading, which I personally consider an important part of Objective-C, because it is widely used from the bottom layer (libdispatch) all the way up to the application layer. This article first gives a simple overview of how multithreading works, and then goes further using the most commonly used tool, GCD.

What is a thread? What is a process?

  • A thread is the basic execution unit of a process, in which all tasks of a process are executed.

  • In order to perform tasks, a process must have threads, at least one thread

  • By default, a program starts with one thread, called the main thread or UI thread

  • A process is an application that is running in the system

  • Each process is independent of the other, and each process runs in its own dedicated and protected memory space

  • A process owns its own memory space; a thread has no independent memory space of its own

  • Threads within the same process share resources of the same process, but resources between processes are independent.

  • The crash of a process does not affect other processes, but the crash of a thread directly causes the crash of its own process

  • Switching between processes costs more resources; switching between threads is cheaper

  • A thread is the basic unit of processor scheduling; a process is the basic unit of resource allocation

Since an iOS app runs as a single process, doing everything on a single thread would slow it down, so we need multiple threads.
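The resource-sharing point above can be sketched with POSIX threads (GCD itself is layered over threads). A minimal C example, assuming a POSIX environment: two threads in the same process increment one shared global, something two separate processes could not do without IPC.

```c
#include <pthread.h>

/* Two threads of the same process share its address space:
   both workers increment the same global counter. */
static int shared_counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000; i++) {
        pthread_mutex_lock(&counter_lock);   /* shared memory needs a lock */
        shared_counter++;
        pthread_mutex_unlock(&counter_lock);
    }
    return NULL;
}

int shared_memory_demo(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return shared_counter;  /* 1000 increments from each thread: 2000 */
}
```

With the mutex in place the result is deterministic; drop the lock and the two threads race, which is exactly the problem the lock section below addresses.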

Significance

advantages

  • Can improve the execution efficiency of the program

  • Increase resource utilization (CPU, memory)

  • A thread is destroyed automatically after the task on it finishes executing

disadvantages

  • Starting threads takes up a certain amount of memory (512 KB for each thread by default)

  • If a large number of threads are enabled, a large amount of memory space will be occupied and the performance of the program will be reduced

Principle of multithreading

The CPU schedules threads by time slicing. When it assigns a task to a thread, it does not wait for that thread to finish; it switches to other threads and schedules them to run their own tasks. Because the switching happens very quickly, we do not perceive it, yet at any given moment only one thread is executing a task on a core. True concurrency, in the full sense, requires a multi-core CPU.

Thread state

Thread state: when a thread is created, it does not execute immediately. Instead it enters the Runnable state and waits for the CPU to schedule it. Once the CPU schedules it, it enters the Running state; after running it either dies or blocks, and a blocked thread re-enters the Runnable state to wait for scheduling again.

Schedulable thread pool: threads are not simply created one after another. When a new task arrives, the pool first checks whether it contains an idle thread; if so, that thread is assigned the task. If not, it checks whether the waiting queue has reached its threshold; if the queue is full and every thread is busy executing, the task is handed to the saturation policy.

Saturation policy: 1. Throw an exception. 2. Roll the task back to the caller. 3. Drop the task that has been waiting the longest.
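The pool decision just described can be sketched as a plain function. This is an illustrative model only, not GCD's actual implementation; the names and parameters are made up for the sketch.

```c
/* Illustrative model of the thread-pool dispatch decision:
   idle thread -> run now; queue below threshold -> enqueue;
   otherwise hand off to the saturation policy. */
typedef enum { RUN_NOW, ENQUEUE, SATURATE } decision_t;

decision_t schedule_task(int idle_threads, int queued, int queue_capacity) {
    if (idle_threads > 0)
        return RUN_NOW;     /* reuse an idle thread from the pool */
    if (queued < queue_capacity)
        return ENQUEUE;     /* below the threshold: let the task wait */
    return SATURATE;        /* full and all busy: throw, roll back to the
                               caller, or drop the longest-waiting task */
}
```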

Task execution

Factors affecting the speed of task execution

1. CPU 2. Task complexity 3. Priority 4. Thread state

Priority inversion

IO intensive: threads that wait frequently.

CPU intensive: threads that rarely wait.

For a thread that waits frequently, the scheduler may raise that thread's priority. However, a raised priority does not mean the thread executes immediately; it only improves its chance of being scheduled.

Priority factors: 1. User-specified 2. Frequency of waiting 3.

The lock

In multithreading, because resources are shared within the same process, threads inevitably compete for those resources, so the concept of a lock is introduced.

Spinlock: when it finds the lock already held by another thread, it busy-waits, repeatedly polling until the lock is free, which consumes more CPU; `atomic` is the typical example. Good for short, quick tasks.

Mutex: when it finds the lock already held by another thread, the thread goes to sleep and waits to be woken up, which consumes fewer resources.
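A minimal C sketch of the two strategies, using a hand-rolled spinlock built on a C11 atomic flag and a POSIX mutex. This is an illustration under those assumptions, not Apple's OSSpinLock or os_unfair_lock.

```c
#include <stdatomic.h>
#include <pthread.h>

/* Spinlock: busy-waits ("keeps asking"), burning CPU while it polls,
   but avoids a sleep/wake transition -- good for very short sections. */
static atomic_flag spin = ATOMIC_FLAG_INIT;

void spin_lock(void)   { while (atomic_flag_test_and_set(&spin)) { /* poll */ } }
void spin_unlock(void) { atomic_flag_clear(&spin); }

/* Mutex: a blocked thread is put to sleep by the kernel and woken later,
   trading wake-up latency for idle CPU. */
static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

long guarded_add(int use_spin, int n) {
    for (int i = 0; i < n; i++) {
        if (use_spin) spin_lock(); else pthread_mutex_lock(&mtx);
        counter++;                       /* the protected shared resource */
        if (use_spin) spin_unlock(); else pthread_mutex_unlock(&mtx);
    }
    return counter;
}
```

Both locks guard the same counter; the difference is only in how a contending thread waits.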

atomic

`atomic` is the default for iOS properties. An atomic property guarantees single-writer/multiple-reader safety on its setter and getter, at a higher resource cost.
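Note that an atomic property only makes the setter and getter indivisible, not a whole `++`, which is a read-modify-write of separate atomic operations. A C11 sketch of a genuinely atomic increment, where `fetch_add` is one indivisible read-modify-write so no update is lost even across threads:

```c
#include <stdatomic.h>
#include <pthread.h>

static _Atomic int atomic_num = 0;

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 10000; i++)
        atomic_fetch_add(&atomic_num, 1);  /* indivisible ++ */
    return NULL;
}

int atomic_demo(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return atomic_num;  /* always 20000: no increment is lost */
}
```

This distinction is exactly why `self.num++` in the interview questions below can misbehave even though `num` is an atomic property.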

Introduction to GCD

GCD is a set of C APIs.

GCD is Apple's solution to multi-core parallel computing.

GCD automatically utilizes more CPU cores (such as dual-core, quad-core)

GCD automatically manages thread lifecycles (thread creation, task scheduling, thread destruction)

The programmer only needs to tell GCD what task to perform, without writing any thread-management code

Serial and concurrent queues

Serial queue: tasks are queued and executed one by one, in order.

Concurrent queue: tasks are dispatched concurrently, and the order in which they run is not guaranteed. Tasks 1-2-3-4 may be dispatched in order, but a long task that cannot finish right away may be set aside while other, quicker tasks complete first; being dispatched first does not mean finishing first. How tasks actually execute depends on the threads available.

Deadlock simple case

    dispatch_queue_t queue = dispatch_queue_create("cooci", NULL);

    NSLog(@"1");

    // Asynchronous functions

    dispatch_async(queue, ^{

        NSLog(@"2");

        // Synchronous function

        dispatch_sync(queue, ^{

            NSLog(@"3");

        });

        NSLog(@"4");

    });
    NSLog(@"5");

Analysis: the queue is serial, so tasks execute in order. 1 prints first; the asynchronous function does not need to be waited for, so 5 prints; then the asynchronous block runs and prints 2. Next comes a synchronous function on the same serial queue: dispatch_sync blocks the current block until task 3 finishes, so 4 cannot run until after 3. But task 3 was appended to the serial queue after the current block, so it cannot start until the current block finishes. Each waits for the other: deadlock, and 3 and 4 never print.

    dispatch_queue_t queue = dispatch_queue_create("cooci", NULL);

    NSLog(@"1");

    // Asynchronous functions

    dispatch_async(queue, ^{

        NSLog(@"2");

        // Synchronous function

        dispatch_sync(queue, ^{

            NSLog(@"3");

        });

    });

    NSLog(@"5");

Analysis: as in the previous example, even without task 4 the deadlock remains. The synchronous task 3 must wait for the currently running asynchronous block to finish, because they share one serial queue; but that block cannot finish until the dispatch_sync call returns. They wait on each other, so it still deadlocks.

WB interview questions

dispatch_queue_t queue = dispatch_queue_create("com.lg.cooci.cn", DISPATCH_QUEUE_SERIAL);

    dispatch_async(queue, ^{

        NSLog(@"1");

    });

    dispatch_async(queue, ^{

        NSLog(@"2");

    });

    dispatch_sync(queue, ^{ NSLog(@"3"); });

    

    NSLog(@"0");

    dispatch_async(queue, ^{

        NSLog(@"7");

    });

    dispatch_async(queue, ^{

        NSLog(@"8");

    });

    dispatch_async(queue, ^{

        NSLog(@"9");

    });

Output: 1230789

Analysis: this is a serial queue, so tasks execute in order; even though the asynchronous functions may start a new thread, each task waits for the previous one to complete before the next begins. dispatch_sync additionally blocks the calling thread until task 3 finishes, which is why 0 prints only after 3. You can put a sleep inside task 2 or 3 to observe the blocking: if it is in 2, then 2 prints only after the sleep finishes (and after 1 has printed), showing that the serial queue governs execution order.

MT interview questions


    while (self.num < 5) {

        dispatch_async(dispatch_get_global_queue(0, 0), ^{

            self.num++;

        });

    }

    NSLog(@"end : %d",self.num);

The point of the problem: the asynchronous blocks run concurrently, so num is incremented on multiple threads. The while loop cannot exit until num reaches 5, but because the increments happen asynchronously on a global concurrent queue, extra blocks may already be in flight when the loop exits, so the final value is num >= 5 rather than exactly 5. We ran another experiment with a serial queue: the result can also exceed 5, the difference being that the serial queue lands closer to 5 than the concurrent queue does.

KS interview questions

     for (int i= 0; i<10000; i++) {

        dispatch_async(dispatch_get_global_queue(0, 0), ^{

            self.num++;

        });

    }

A for loop is not the same as the while loop above: it dispatches a fixed 10,000 times and then prints, regardless of how far the value has gotten. Because self.num++ is a read-modify-write rather than a single atomic operation, concurrent increments can overwrite each other, so by the time the print runs the value is usually less than 10,000.