Hi 👋

  • Wechat: RyukieW
  • 📦 Archive of technical articles

1. What is a thread

A thread, also known as a lightweight process, is the smallest unit of a program's execution flow. A standard thread consists of a thread ID, the current instruction pointer (PC), a register set, and a stack.

Typically, a process consists of one or more threads that share program memory (including code segments, data segments, heaps, etc.) and process resources (such as open files and signals).

In most software, there is more than one thread. Multiple threads can execute concurrently without interfering with each other and share global variables and heap data of the process. So what are the advantages of multithreading over single-threaded processes?

1.1 Reasons for using multithreading

  • An operation may get stuck in a long wait; a single waiting thread would sleep and be unable to continue. Multithreading makes good use of this wait time. A typical example is waiting for a network response, which can take seconds or even tens of seconds.
  • An operation (usually a computation) consumes a lot of time; with only one thread, interaction between the program and the user breaks down. With multithreading, one thread can handle interaction while another does the computation.
  • The program logic itself requires concurrent operation, as in multi-connection download software.
  • Multi-CPU or multi-core machines are inherently capable of executing multiple threads at the same time, so a single-threaded program cannot exploit the computer's full computing power.
  • Compared with a multi-process design, multithreading is far more efficient at sharing data.

2. Thread access permissions

Threads have very liberal access: a thread can access all the data in the process's memory, even the stacks of other threads (if it knows their addresses, though usually it does not). Each thread also has its own private storage, which includes the following:

  • The stack (not strictly inaccessible to other threads, but still generally considered private data).
  • Thread Local Storage (TLS): private space that some operating systems provide for each thread individually, usually with very limited capacity.
  • Registers (including the PC register): registers are part of the basic structure of the execution flow and are therefore thread-private.

From a C programmer's point of view, data divides between thread-private and thread-shared as follows:

  • Thread-private
    • Local variables
    • Function parameters
    • TLS data
  • Shared between threads (owned by the process)
    • Global variables
    • Data on the heap
    • Static variables inside functions
    • Program code: any thread has the right to read and execute any code
    • Open files: a file opened by thread A can be read and written by thread B

3. Thread scheduling

Whether a computer is multi-core or single-core, threads always execute "concurrently." When the number of threads is less than or equal to the number of processor cores (and the operating system supports multiple processors), the concurrency is genuine: different threads run on different processors without interfering with each other. When the number of threads exceeds the number of processors, concurrency is somewhat compromised, because at least one processor has to run more than one thread.

In that case, concurrency is a simulated state of affairs. The operating system runs the threads in turn, executing each for only a short time (typically tens to hundreds of milliseconds), so that every thread "appears" to be executing simultaneously. This behavior of constantly switching threads on a processor is called thread scheduling (Thread Schedule), and under a scheduler a thread has three main states:

  • Running
    • The thread is currently executing.
  • Ready
    • The thread could execute immediately, but the CPU is occupied.
  • Waiting
    • The thread is waiting for some event (usually I/O or synchronization) to occur and cannot execute.

A running thread has a stretch of time during which it may execute, called a time slice.

  • When its time slice runs out, the thread enters the ready state.
  • If a thread begins waiting for an event before its time slice runs out, it enters the waiting state.
  • Whenever a thread leaves the running state, the scheduler selects another ready thread to run.
  • When the event a waiting thread is blocked on occurs, that thread enters the ready state.

4. Priority

Although today's mainstream schedulers differ in the details, they all show traces of priority scheduling and round robin.

  • Priority scheduling
    • Determines the order in which threads take turns executing.
    • Threads with higher priority execute earlier.
    • A low-priority thread often cannot run until the system has no runnable higher-priority threads left.
  • Round robin
    • Lets each thread execute in turn for a short period of time.
    • This is what gives threads their interleaved execution.

A thread's priority can be set manually, and the system also adjusts priorities automatically based on thread behavior to make scheduling more effective. In general, threads that frequently enter the wait state (thereby giving up the rest of their time slice, as I/O-handling threads do) are favored by the scheduler over threads that compute so heavily that they exhaust their entire time slice every time. The reason is simple: frequently waiting threads occupy only a small amount of CPU time, so boosting their priority costs little.

We generally call threads that wait frequently IO-intensive threads, and threads that rarely wait CPU-intensive threads. IO-intensive threads always receive priority boosts more easily than CPU-intensive ones.

4.1 How a thread's priority changes

  • The user specifies the priority.
  • The system raises or lowers the priority according to how frequently the thread enters the wait state.
  • A thread that has gone unexecuted for too long has its priority raised (to prevent starvation).

5. The essence of atomic

atomic is a keyword we commonly use to make a property thread-safe. It ensures that only one thread writes the property at a time, while multiple threads may read it.

You have probably heard that it is implemented with a spin lock. But how would we verify that? Let's look at the source code (based on objc4-818.2).

5.1 Interpretation of source code

reallySetProperty is the source of the property setter's implementation. Among its parameters we can spot some familiar faces: atomic and copy.

static inline void reallySetProperty(id self, SEL _cmd, id newValue, ptrdiff_t offset, bool atomic, bool copy, bool mutableCopy)
{
    if (offset == 0) {
        object_setClass(self, newValue);
        return;
    }

    id oldValue;
    id *slot = (id*) ((char*)self + offset);

    // Copy keyword processing
    if (copy) {
        newValue = [newValue copyWithZone:nil];
    } else if (mutableCopy) {
        newValue = [newValue mutableCopyWithZone:nil];
    } else {
        if (*slot == newValue) return;
        newValue = objc_retain(newValue);
    }

    // Atomic keyword processing
    if (!atomic) {
        oldValue = *slot;
        *slot = newValue;
    } else {
        spinlock_t& slotlock = PropertyLocks[slot];
        slotlock.lock();
        oldValue = *slot;
        *slot = newValue;
        slotlock.unlock();
    }

    objc_release(oldValue);
}

We see that atomic uses a spinlock_t. So is it a spin lock?

Despite the name, no: in objc4-818.2, spinlock_t is just an alias for the C++ mutex class mutex_tt, so atomic properties are actually protected by a mutex rather than a spin lock.

6. Food for thought

We know that atomic only guards a property's setter (and getter). So is it still thread-safe to write the underlying member variable directly, or through KVC?

Feel free to leave your comments

References

Programmer's Self-Cultivation: Linking, Loading and Libraries (《程序员的自我修养》)