Multithreading series chapter plan: iOS Multithreaded Programming (1) Multithreading Basics; (2) Pthread; (3) NSThread; (4) GCD; (5) GCD Underlying Principles; (6) NSOperation; (7) Synchronization Mechanisms and Locks; (8) RunLoop

Pthreads is an operating-system-level threading standard.

Pthreads is short for POSIX Threads, where POSIX stands for Portable Operating System Interface.

It defines a set of APIs for creating and manipulating threads. The API is C-based, relatively hard to use, and requires manual management of the thread life cycle.

Function calls in the Pthreads API all start with pthread_ and can be divided into four categories:

  • Thread management

For example, creating threads, waiting for threads (join), and querying thread status.

  • Mutexes

Creating, destroying, locking, and unlocking mutexes, and setting their attributes.

  • Condition variables

Creating, destroying, waiting on, and notifying condition variables, and setting and querying their attributes.

  • Synchronization management

Synchronization management among threads that use mutexes.

POSIX's semaphore API can be used together with Pthreads, but it is not part of the Pthreads standard, so these functions start with "sem_" rather than "pthread_".

1. Pthreads data types

1.1 pthread_t

Thread handle. Represents the thread ID.

For portability, it should not be treated as an integer; on some systems it is a structure. Use pthread_equal() to compare two thread IDs and pthread_self() to obtain the calling thread's own ID.
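
For illustration, a minimal sketch of comparing thread IDs the portable way (the g_main_tid global and the report() helper are made up for this example):

#include <pthread.h>
#include <stdio.h>

pthread_t g_main_tid;   // holds the main thread's handle (illustrative global)

void report(const char *who)
{
    // pthread_t may be a structure, so compare with pthread_equal(), never with ==
    if (pthread_equal(pthread_self(), g_main_tid))
        printf("%s: running on the main thread\n", who);
    else
        printf("%s: running on another thread\n", who);
}

int main(void)
{
    g_main_tid = pthread_self();
    report("main");
    return 0;
}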

1.2 pthread_attr_t

Thread properties.

These include scope properties, detach properties, stack address, stack size, priority, etc.

1.3 pthread_barrier_t

Synchronization barrier data type

1.4 pthread_mutex_t

Mutex data type

1.5 pthread_cond_t

Condition variable data type

2. Pthreads functions and usage

You need to import the header file before using these functions:

#import <pthread.h>

2.1 Creating a Thread

The pthread_create() function is used to create a thread

int pthread_create(pthread_t *restrict newthread,
                   const pthread_attr_t *restrict attr,
                   void *(*start_routine)(void *),
                   void *restrict arg);

Parameter description:

  • Parameter 1: thread handle. When the new thread is created successfully, its handle is returned through this pointer so the caller can manage the thread.
  • Parameter 2: thread attributes. Used to set the properties of the new thread. This parameter is optional; when it is NULL, the default attributes are used.
  • Parameter 3: entry function. The routine the new thread starts executing.
  • Parameter 4: entry function argument. Passed to the entry function, it lets a thread be handed some private data before it starts running; in C++ the this pointer is commonly passed here.

pthread_create() returns 0 if the thread was created successfully.

Example: Creating a thread

#import <Foundation/Foundation.h>
#import <pthread.h>
#import <unistd.h>

// Prints the process ID and thread ID
void printids(const char *s)
{
  pid_t pid;
  pthread_t tid;
 
  pid = getpid();        // Get the process ID
  tid = pthread_self();  // Get the thread ID
  printf("%s pid:%lu, tid:%lu\n", s, (long unsigned)pid, (long unsigned)tid);
}
 
// Thread entry function
void *threadRoutine(void *arg) {
    printf("This is a thread start routine and arg = %d.\n", *(int *)arg);
    printids("new~~~");
    *(int *)arg = 0; // Change the argument passed in from 10 to 0
    return arg;
}

int main(int argc, const char * argv[]) {
    @autoreleasepool {

        // Receives the ID of the newly created thread
        pthread_t tidp;
        // Attributes for the new thread; NULL means use the default attributes
        const pthread_attr_t *restrict attr = NULL;
        int arg = 10;  // entry function argument
        int err = pthread_create(&tidp, attr, threadRoutine, &arg);
        if (err != 0) { // 0 means the creation succeeded; otherwise an error number is returned
            NSLog(@"create thread fail");
        }
        int *thread_ret = NULL;
        pthread_join(tidp, (void **)&thread_ret); // Wait for thread tidp to complete
        printf("thread_ret = %d.\n", *thread_ret);
        printids("main~~~");

    }
    return 0;
}

The output is:

This is a thread start routine and arg = 10.
 new~~~ pid:22258, tid:123145305821184
thread_ret = 0 
main~~~ pid:22258, tid:4614421952

Description:

  • ① Parameter 4 can be used as a means of passing data between threads. In the example above, the main thread passes the value 10 to the newly created child thread, which reads it through the argument. Note that the argument is of type void * so that any kind of data can be passed; the thread then casts it back to its real type.
  • ② pthread_join() blocks the current thread until the joined thread completes. Its first argument is the thread handle and its second receives the thread's return value. In the sample code, the main thread waits for the child thread to finish before continuing; without this line, the program may print main~~~ before new~~~. Also, because the thread entry function sets the argument to 0 before returning, the printed thread_ret is 0.

2.2 Joining and Detaching Threads

So what does the pthread_join() interface do?

2.2.1 Joining threads

The first thing to clarify is what joining (merging) a thread means. As the previous sections showed, pthread_create() creates a thread. A thread is a system resource, just like memory, and a thread itself occupies a certain amount of memory.

A well-known rule in C/C++ programming is that memory allocated with malloc() or new must be reclaimed with free() or delete, or you get the famous memory leak problem.

Threads are no different: whatever is created must eventually be reclaimed, or you get an equally serious resource leak. Thread joining is one way of reclaiming thread resources.

Joining reclaims thread resources actively. A join happens when one thread calls pthread_join() on another. The call blocks the caller until the joined thread terminates; at that point pthread_join() reclaims the thread's resources and hands its return value back to the caller.

2.2.2 Detaching threads

The other reclamation mechanism, the counterpart of joining, is thread detaching, done through pthread_detach().

With detaching, the system reclaims the thread's resources automatically when the detached thread ends. Because reclamation is automatic, the program cannot obtain the return value of a detached thread, which is why pthread_detach() takes only one parameter, the handle of the thread to detach. A minimal sketch is shown below.
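
A minimal sketch of detaching a thread right after creating it (the worker entry function and the one-second sleep are made up for this example); once detached, the thread must never be joined:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *worker(void *arg)            // hypothetical entry function
{
    printf("detached worker running\n");
    return NULL;                   // its resources are reclaimed automatically when it exits
}

int main(void)
{
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, NULL) == 0)
        pthread_detach(tid);       // hand reclamation over to the system; no pthread_join()
    sleep(1);                      // crude wait so the process does not exit before the worker runs
    return 0;
}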

Joining and detaching both reclaim thread resources, and which to use depends on the scenario. Either way, you must pick one of the two, or you will end up with a resource leak that is just as nasty as a memory leak.

2.3 Attributes of threads

Threads have attributes, which are described by a thread attribute object. A thread attribute object is initialized with pthread_attr_init() and destroyed with pthread_attr_destroy(). Their full definitions are:

int pthread_attr_init(pthread_attr_t *attr);
int pthread_attr_destroy(pthread_attr_t *attr);

What attributes does a thread have? In general, threads under Linux have the following attributes: the binding (scope) attribute, the detach attribute, scheduling attributes, the stack size attribute, and the stack guard area size attribute.

2.3.1 Binding Properties (Scope)

When it comes to the binding attribute, another concept comes up: the lightweight process (LWP).

On Linux, a lightweight process is the same concept as a kernel thread and is the entity the kernel schedules; one lightweight process can host one or more threads. In operating systems generally, the lightweight process is one approach to multitasking: compared with an ordinary process, an LWP shares all (or most) of its logical address space and system resources with other LWPs; compared with a thread, an LWP has its own process identifier and parent-child relationships with other processes, much like a process created with the Unix-like system call vfork(). In addition, threads can be managed either by the application or by the kernel, whereas an LWP can only be managed by the kernel and is scheduled like an ordinary process. The Linux kernel is a typical example of an LWP implementation.

By default, for a program with n threads, how many lightweight processes are started and which of them host which threads is decided by the operating system; this is called unbound. Binding is then easy to understand: if a thread is tied to a particular lightweight process, it is bound.

A bound thread has better responsiveness, because the operating system schedules lightweight processes, and binding guarantees the thread always has a lightweight process available when it needs one. That is what the binding attribute is for.

The interface for setting binding properties is pthread_attr_setscope(), which is fully defined as:

int pthread_attr_setscope(pthread_attr_t *attr, int scope);

It takes two parameters, the first is a pointer to the thread property object, and the second is the binding type, which has two values: PTHREAD_SCOPE_SYSTEM (bound) and PTHREAD_SCOPE_PROCESS (unbound). The following code demonstrates the use of this property.

Example: Set thread binding properties

...
int main( int argc, char *argv[] )
{
    pthread_attr_t attr;
    pthread_t th;
    ...
    pthread_attr_init( &attr );
    pthread_attr_setscope( &attr, PTHREAD_SCOPE_SYSTEM );
    pthread_create( &th, &attr, thread, NULL );
    ...
}

It is worth noting:

Linux threads are always bound, so PTHREAD_SCOPE_PROCESS doesn’t work in Linux and returns an ENOTSUP error. If you’re just writing multithreaded programs under Linux, you can ignore this property entirely.

2.3.2 Detach Attributes (Detach)

Indicates whether the new thread is detached from the other threads in the process. A thread can either be joined or detached, and the detach attribute lets you decide, before the thread is created, that it should start out detached. If it is set to PTHREAD_CREATE_DETACHED, there is no need to call pthread_join() or pthread_detach() afterwards; the thread's resources are freed automatically when it exits.

The interface for setting the detach attribute is pthread_attr_setdetachstate(), which is fully defined as:

int pthread_attr_setdetachstate(pthread_attr_t *attr, int detachstate);

The second parameter has two values: PTHREAD_CREATE_DETACHED (detached) and PTHREAD_CREATE_JOINABLE (joinable, the default). The following code demonstrates the use of this attribute.

Example: Set thread separation properties

...
int main( int argc, char *argv[] )
{
    pthread_attr_t attr;
    pthread_t th;
    ...
    pthread_attr_init( &attr );
    pthread_attr_setdetachstate( &attr, PTHREAD_CREATE_DETACHED );
    pthread_create( &th, &attr, thread, NULL );
    ...
}

2.3.3 Scheduling Properties

The scheduling attributes of a thread are the scheduling algorithm (policy), the priority, and inheritance.

Algorithm (schedpolicy)

Linux provides three thread scheduling algorithms: round-robin, first-in first-out (FIFO), and "other".

Round-robin and FIFO scheduling are specified by the POSIX standard, while "other" stands for whatever algorithm Linux considers more appropriate, and it is the default. Round-robin and FIFO are real-time scheduling algorithms.

  • Round-robin means time-slice rotation. When a thread's time slice runs out, the system assigns it a new time slice and puts it at the end of the ready queue, which guarantees that round-robin tasks of the same priority get a fair share of CPU time.

  • First-in first-out means first come, first served: a thread keeps running until a higher-priority thread appears or it gives up the CPU itself.

The interface for setting the thread scheduling algorithm is pthread_attr_setschedpolicy(), which is fully defined as:

int pthread_attr_setschedpolicy(pthread_attr_t *attr, int policy);

Its second parameter has three values: SCHED_RR (real-time, round-robin), SCHED_FIFO (real-time, first-in first-out), and SCHED_OTHER (normal, non-real-time).

Priority (schedparam)

Linux thread priorities are not the same as process priorities.

Linux thread priorities range from 1 to 99, with a higher number indicating a higher priority. Note that priorities only take effect with the SCHED_RR or SCHED_FIFO scheduling algorithms; for threads scheduled with SCHED_OTHER, the priority is always 0.

The interface for setting thread priority is pthread_attr_setschedparam(), which is fully defined as:

struct sched_param {
    int sched_priority;
};
int pthread_attr_setschedparam(pthread_attr_t *attr, const struct sched_param *param);

The sched_priority field of the sched_param structure is the thread's priority.

Moreover, even with the SCHED_RR or SCHED_FIFO algorithms, thread priority cannot be set arbitrarily. First, the process must run as root; second, the thread must give up scheduling inheritance. What is inheritance?

Inheritance (inheritsched)

Inheritance means that a newly created thread inherits the scheduling attributes of its parent (the creating thread). If you do not want the new thread to inherit them, you must explicitly give up inheritance. The interface for setting thread inheritance is pthread_attr_setinheritsched(), which is fully defined as:

int pthread_attr_setinheritsched(pthread_attr_t *attr, int inheritsched);  

Its second argument takes one of two values: PTHREAD_EXPLICIT_SCHED (give up inheritance) and PTHREAD_INHERIT_SCHED (inherit). The former means the new thread uses the explicitly specified scheduling policy and parameters (the values in attr), while the latter means it inherits them from the creating thread. New threads inherit by default.

The following code can demonstrate the behavior of threads with different scheduling algorithms and priorities, as well as how to modify the scheduling properties of a thread.

Example: Set thread scheduling properties

#include <stdio.h>  
#include <unistd.h>  
#include <stdlib.h>  
#include <pthread.h>

#define THREAD_COUNT 12

void show_thread_policy( long threadno )
{
    int policy;
    struct sched_param param;
    pthread_getschedparam( pthread_self(), &policy, &param );
    switch( policy ) {
    case SCHED_OTHER:
        printf( "SCHED_OTHER %ld\n", threadno );
        break;
    case SCHED_RR:
        printf( "SCHED_RR %ld\n", threadno );
        break;
    case SCHED_FIFO:
        printf( "SCHED_FIFO %ld\n", threadno );
        break;
    default:
        printf( "UNKNOWN\n" );
    }
}

void* thread( void *arg )
{
    int i, j;
    long threadno = (long)arg;
    printf( "thread %ld start\n", threadno );
    sleep(1);
    show_thread_policy( threadno );
    for( i = 0; i < 10; ++i ) {
        for( j = 0; j < 100000000; ++j ){}    // busy loop to keep the CPU occupied
        printf( "thread %ld\n", threadno );
    }
    printf( "thread %ld exit\n", threadno );
    return NULL;
}

int main( int argc, char *argv[] )
{
    long i;
    pthread_attr_t attr[THREAD_COUNT];
    pthread_t pth[THREAD_COUNT];
    struct sched_param param;

    for( i = 0; i < THREAD_COUNT; ++i )
        pthread_attr_init( &attr[i] );

    // First half of the threads: SCHED_FIFO with priority 10
    for( i = 0; i < THREAD_COUNT / 2; ++i ) {
        param.sched_priority = 10;
        pthread_attr_setschedpolicy( &attr[i], SCHED_FIFO );
        pthread_attr_setschedparam( &attr[i], &param );
        pthread_attr_setinheritsched( &attr[i], PTHREAD_EXPLICIT_SCHED );
    }
    // Second half of the threads: SCHED_FIFO with priority 20
    for( i = THREAD_COUNT / 2; i < THREAD_COUNT; ++i ) {
        param.sched_priority = 20;
        pthread_attr_setschedpolicy( &attr[i], SCHED_FIFO );
        pthread_attr_setschedparam( &attr[i], &param );
        pthread_attr_setinheritsched( &attr[i], PTHREAD_EXPLICIT_SCHED );
    }

    for( i = 0; i < THREAD_COUNT; ++i )
        pthread_create( &pth[i], &attr[i], thread, (void*)i );
    for( i = 0; i < THREAD_COUNT; ++i )
        pthread_join( pth[i], NULL );
    for( i = 0; i < THREAD_COUNT; ++i )
        pthread_attr_destroy( &attr[i] );
    return 0;
}

2.3.4 Stack size properties

A thread's entry function, like a program's main() function, can have local variables. Although threads of the same process share the address space, their local variables are not shared, because local variables live on the stack and each thread has its own stack. Linux allocates 8MB of stack space per thread by default; if that is not enough, you can increase it by modifying the thread's stack size attribute.

The interface for modifying the thread stack size attribute is pthread_attr_setstacksize(), which is fully defined as:

int pthread_attr_setstacksize(pthread_attr_t *attr, size_t stacksize);  

Its second argument is the stack size in bytes. Note that a thread stack must not be smaller than 16KB, and it should be allocated in integer multiples of the memory page size: 4KB on 32-bit systems or 2MB on 64-bit systems. Also, changing the thread stack size is risky; if you are not sure what you are doing, it is best left alone.

2.3.5 Stack guard area attribute

Since a thread's stack has a size limit, it can overflow. A stack overflow is very dangerous, because it can corrupt memory beyond the stack, and if exploited the consequences can be disastrous. To guard against this, Linux places a guard area at the end of each thread stack. It is typically one page, an extension of the thread stack; as soon as code touches it, a SIGSEGV signal is raised.

While the guard area improves safety, it also wastes memory and slows things down on memory-tight systems, so there is sometimes a need to turn it off. In addition, if you change the thread stack size, the system assumes you are managing the stack yourself and disables the guard area; turn it back on when you need it. The interface for modifying the guard area attribute is pthread_attr_setguardsize(), which is fully defined as:

int pthread_attr_setguardsize(pthread_attr_t *attr, size_t guardsize);  

Its second parameter is the size of the guard area in bytes. As with the stack size attribute, it should be allocated in integer multiples of 4KB or 2MB where possible. Setting the guard size to 0 turns the guard area off. Keeping it on wastes a little memory but greatly improves safety, so the cost is worth it. And whenever you change the thread stack size, be sure to set the guard area at the same time, as in the sketch below.
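
A minimal sketch that sets both attributes together, as advised above (the 1MB stack and one-page guard values are arbitrary choices for illustration, and the worker entry function is made up):

#include <pthread.h>
#include <stdio.h>

void *worker(void *arg)                             // hypothetical entry function
{
    printf("worker running on a custom-sized stack\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_attr_t attr;

    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1024 * 1024);  // 1MB stack (>= 16KB, page-aligned)
    pthread_attr_setguardsize(&attr, 4096);         // re-enable a one-page guard area explicitly

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}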

2.4 Termination of a thread

2.4.1 Thread exit conditions

Thread exit conditions can be any of the following:

  • 1. The thread's entry function returns.
  • 2. The thread calls the pthread_exit function to exit:
void pthread_exit(void *rval_ptr); // rval_ptr points to the return value
  • 3. The thread is cancelled by another thread in the same process via pthread_cancel:
int pthread_cancel(pthread_t tid);

pthread_cancel does not wait for the target thread to terminate; it merely issues a request. The effect is as if the target thread had called pthread_exit(PTHREAD_CANCELED), but the target thread may choose to ignore the request or handle it in its own way. The call does not block the caller; it simply issues the cancel request.

  • 4. The process that created the thread calls exec() or exit().
  • 5. main() finishes first without explicitly calling pthread_exit().

If main() returns without explicitly calling pthread_exit(), it finishes before the threads it spawned and all threads are terminated with it. If main() does explicitly call pthread_exit(), the process waits for all other threads to finish before terminating.

Example: Thread termination

// Prints the process ID and thread ID
void printids(const char *s)
{
  pid_t pid;
  pthread_t tid;
 
  pid = getpid();
  tid = pthread_self();
  printf("%s pid:%lu, tid:%lu\n", s, (long unsigned)pid, (long unsigned)tid);
}
 
// Thread that terminates by returning
void *return_thread(void * arg)
{
  printids("thread returning~~~");
  return (void*)0;
}

// Thread that terminates with pthread_exit
void *exit_thread(void * arg)
{
  printids("thread exiting~~~");
  pthread_exit((void*) 2); // The argument can point to a structure, but the structure must remain valid after the thread exits (so it must not be allocated on the thread's stack).
}

int main(int argc, const char * argv[]) {
    @autoreleasepool {

        int error;
        pthread_t tidp1, tidp2;
        void * rVal;
        error = pthread_create(&tidp1, NULL, return_thread, NULL);
        if (error != 0) {
            NSLog(@"create thread - tidp1 fail");
        }
        error = pthread_create(&tidp2, NULL, exit_thread, NULL);
        if (error != 0) {
            NSLog(@"create thread - tidp2 fail");
        }
        pthread_join(tidp1, &rVal);
        printf("return_thread return:%ld\n", (long)rVal);
         
        pthread_join(tidp2, &rVal);
        printf("exit_thread return:%ld\n", (long)rVal);
        printids("main~~~");
 
    }
    return 0;
}

The output is:

thread returning~~~ pid:23857, tid:123145461620736
thread exiting~~~   pid:23857, tid:123145462157312
return_thread       return:0
exit_thread         return:2
main~~~             pid:23857, tid:4382375360

2.4.2 Clearing Threads

A thread can arrange for functions to be called automatically when it exits, similar to what atexit() does for a process. This is done with the following functions:

void pthread_cleanup_push(void (*rtn)(void *), void *arg); 
void pthread_cleanup_pop(int execute);

These two functions maintain a stack of function pointers (each pushed together with its argument value). Handlers run from the top of the stack down, that is, in the reverse order of pushing.

The thread cleanup handlers registered with pthread_cleanup_push are called in the following cases:

  • A. The thread calls pthread_exit
  • B. The thread responds to a cancel request
  • C. pthread_cleanup_pop() is called with a non-zero argument. (If pthread_cleanup_pop() is called with 0, the handler is not called; it is only removed from the stack.)
void *thread_func(void *arg)
{
    pthread_cleanup_push(cleanup, "handler");  // "cleanup" is the handler function being registered
    // do something
    pthread_cleanup_pop(0);
    return (void *)0;
}

2.5 Thread-Local Storage

Threads within a process share the address space, so exchanging data between threads is very fast; this is the most significant advantage of threads. However, when multiple threads access shared data they need expensive synchronization, which can introduce synchronization bugs, and, more awkwardly, some data is not meant to be shared at all.

errno in the C library is the most typical example. errno is a global variable that holds the error code of the last system call. In a single-threaded environment that is fine, but in a multithreaded environment, because errno can be modified by every thread, it becomes hard to tell which thread's system call the value refers to. This is what "not thread-safe" means.

Moreover, nowadays the purpose of multithreading is often not parallel processing of shared data at all; with the arrival of multi-core CPUs it is mostly about making full use of CPU resources and computing in parallel without interference. In other words, much of the time each thread only cares about its own data and does not need to synchronize with anyone.

There are several ways to address this, such as using global variables with different names, but that does not work for a global with a fixed name like errno. As mentioned earlier, local variables allocated on the thread stack are not shared between threads, but they have the drawback that other functions inside the same thread cannot easily access them.

The simplest solution is Thread Local Storage, or TLS. With TLS, errno reflects the error code of the last system call made by the current thread, which makes it thread-safe. Linux fully supports TLS through the following interfaces:

int pthread_key_create(pthread_key_t *key, void (*destructor)(void*));  
int pthread_key_delete(pthread_key_t key);  
void* pthread_getspecific(pthread_key_t key);  
int pthread_setspecific(pthread_key_t key, const void *value); 
  • pthread_key_create() creates a thread-local storage area.
  • Its first parameter returns a handle (key) to the storage area, which usually needs to be kept in a global variable so that all threads can reach it.
  • Its second argument is a destructor callback for the thread-local data; pass NULL if you want to manage the lifetime of the thread-local data yourself.
  • pthread_key_delete() reclaims a thread-local storage area. Its only argument is the handle of the area to reclaim.
  • pthread_getspecific() and pthread_setspecific() get and set the data in thread-local storage, respectively. The same key yields different results in different threads (and the same result within one thread), which is the essence of thread-local storage.

The following code shows how to use thread-local storage under Linux; run it and study the output to see the behavior of thread-local storage and when its memory is reclaimed.

Example: Using thread local storage

#include <stdio.h>  
#include <stdlib.h>  
#include <pthread.h>  
#define THREAD_COUNT 10

pthread_key_t g_key;

typedef struct thread_data {
    int thread_no;
} thread_data_t;

void show_thread_data(void)
{
    thread_data_t *data = pthread_getspecific( g_key );
    printf( "Thread %d\n", data->thread_no );
}

void* thread( void *arg )
{
    thread_data_t *data = (thread_data_t *)arg;
    printf( "Start thread %d\n", data->thread_no );
    pthread_setspecific( g_key, data );   // bind this thread's private data to the key
    show_thread_data();                   // any function running in this thread can now reach it
    printf( "Thread %d exit\n", data->thread_no );
    return NULL;
}

void free_thread_data( void *arg )
{
    thread_data_t *data = (thread_data_t *)arg;
    printf( "Free thread %d data\n", data->thread_no );
    free( data );
}

int main( int argc, char *argv[] )
{
    int i;
    pthread_t pth[THREAD_COUNT];
    thread_data_t *data = NULL;
    pthread_key_create( &g_key, free_thread_data );  // the destructor runs when each thread exits
    for( i = 0; i < THREAD_COUNT; ++i ) {
        data = malloc( sizeof( thread_data_t ) );
        data->thread_no = i;
        pthread_create( &pth[i], NULL, thread, data );
    }
    for( i = 0; i < THREAD_COUNT; ++i )
        pthread_join( pth[i], NULL );
    pthread_key_delete( g_key );
    return 0;
}

2.6 Thread Synchronization

Thread-local storage lets threads keep data private, but most data still has to be shared between threads. Whenever shared data is read and written, synchronization is required; otherwise threads will trample each other's updates and the data will end up in a mess.

The main thread synchronization mechanisms provided by Linux are mutexes and condition variables.

2.6.1 Mutexes

First, mutexes. Mutual exclusion means that the thread that has acquired a resource excludes the threads that have not. Linux implements this mechanism with the mutex.

Since it is called a lock, there are lock and unlock operations. Once a thread holds the lock it owns it exclusively, and any other thread that tries to take it is put to sleep by the system. When the owner unlocks and gives the lock up, the sleeping threads are woken and compete for it again; which one wins is anyone's guess, but exactly one of them gets it, and the rest are put back to sleep, and so on.

Given this behavior, the code between a lock and its unlock is like a single-log bridge that only one thread can cross at a time. Globally, this is the place where threads that otherwise run in parallel must queue up. The technical term is synchronous (serialized) execution, and this region of code is called a critical section. Serialized execution defeats the purpose of running threads in parallel, and the larger the critical section, the worse it gets. So in practice, avoid critical sections where possible, and where they are unavoidable, keep them as small as possible; if a critical section cannot even be kept small, it is worth asking why multiple threads are being used at all.

The interfaces for initializing and destroying a mutex are pthread_mutex_init() and pthread_mutex_destroy(),

For locking and unlocking there are pthread_mutex_lock(), pthread_mutex_trylock(), and pthread_mutex_unlock().

The full definition of these interfaces is as follows:

int pthread_mutex_init(pthread_mutex_t *restrict mutex, const pthread_mutexattr_t *restrict attr);
int pthread_mutex_destroy(pthread_mutex_t *mutex);
int pthread_mutex_lock(pthread_mutex_t *mutex);
int pthread_mutex_trylock(pthread_mutex_t *mutex);
int pthread_mutex_unlock(pthread_mutex_t *mutex);

As these definitions show, a mutex also has attributes. In most cases they do not need to be changed, so the default attributes are used by passing NULL.

pthread_mutex_trylock() is special in that a thread trying to lock with it is never put to sleep by the system; instead the call returns EBUSY to tell the programmer that the lock is already held, and it is up to the programmer to decide whether to keep trying to enter the critical section. The purpose of this interface is not to barge into the critical section; once again, the underlying purpose is to improve parallelism by letting the thread go do something else meaningful. Of course, if you are lucky and nobody holds the lock at that moment, you get the critical section right away.

The following code demonstrates how to use mutex under Linux.

Example: Using a mutex

#include <stdio.h>  
#include <stdlib.h>  
#include <errno.h>  
#include <unistd.h>  
#include <time.h>  
#include <pthread.h>  
pthread_mutex_t g_mutex;  
int g_lock_var = 0;  
void* thread1( void *arg )  
{  
    int i, ret;  
    time_t end_time;  
    end_time = time(NULL) + 10;  
    while( time(NULL) < end_time ) {  
        ret = pthread_mutex_trylock( &g_mutex );  
        if( EBUSY == ret ) {  
            printf( "thread1: the varible is locked by thread2.\n" );  
        } else {  
            printf( "thread1: lock the variable! \n" );  
            ++g_lock_var;  
            pthread_mutex_unlock( &g_mutex );  
        }  
        sleep(1);  
    }  
    return NULL;  
}  
void* thread2( void *arg )  
{  
    int i;  
    time_t end_time;  
    end_time = time(NULL) + 10;  
    while( time(NULL) < end_time ) {  
        pthread_mutex_lock( &g_mutex );  
        printf( "thread2: lock the variable! \n" );  
        ++g_lock_var;  
        sleep(1);  
        pthread_mutex_unlock( &g_mutex );  
    }  
    return NULL;  
}  
int main( int argc, char *argv[] )  
{  
    int i;  
    pthread_t pth1,pth2;  
    pthread_mutex_init( &g_mutex, NULL );  
    pthread_create( &pth1, NULL, thread1, NULL );  
    pthread_create( &pth2, NULL, thread2, NULL );  
    pthread_join( pth1, NULL );  
    pthread_join( pth2, NULL );  
    pthread_mutex_destroy( &g_mutex );  
    printf( "g_lock_var = %d\n", g_lock_var );  
    return 0;                              
}  

Finally, within a single thread a mutex does not provide mutual exclusion; that is, a thread cannot use a mutex to have the system put itself to sleep. The reason is simple: if the thread that owns the lock knocked itself out, who would ever unlock it again? There is, however, another situation to avoid: two threads each already hold a lock and each wants the lock the other is holding; both get put to sleep, neither can ever acquire the other lock, and the result is what is known as a deadlock. Deadlocks must be avoided at all costs, because the damage they cause is severe.
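
If a thread genuinely needs to take a lock it already holds (for example, a locked function calling another function that takes the same lock), the mutex attributes mentioned above can be used to request the recursive mutex type. A minimal sketch, assuming the POSIX pthread_mutexattr_settype() interface and the PTHREAD_MUTEX_RECURSIVE type:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t g_rmutex;

void inner(void)
{
    pthread_mutex_lock(&g_rmutex);      // same thread locks again: allowed for a recursive mutex
    printf("inner holds the lock too\n");
    pthread_mutex_unlock(&g_rmutex);
}

int main(void)
{
    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&g_rmutex, &attr);
    pthread_mutexattr_destroy(&attr);

    pthread_mutex_lock(&g_rmutex);
    inner();                            // re-enters the lock held by this same thread
    pthread_mutex_unlock(&g_rmutex);

    pthread_mutex_destroy(&g_rmutex);
    return 0;
}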

2.6.2 Condition Variables

The key point of the condition variable is the "condition". Unlike with a lock, when a thread encounters it, it is not unconditionally put to sleep by the system; instead it decides, based on the "condition", whether to wait there or not. Wait for what? For the "signal" that allows it to pass. Is this signal controlled by the system? Clearly not: it is controlled by another thread.

If a mutex can be likened to a single-log bridge, the condition variable can be likened to a traffic light on a road. A vehicle arriving at the light decides whether to pass based on its color. Who controls the color? The traffic police, at least neither you nor I would dare touch it (some will say the light is automatic, but the switching intervals are still set by the police). The "vehicles" and the "traffic police" are then two kinds of threads on this road, and in most cases there are many "vehicles" and few "traffic police".

To put it more precisely, a condition variable is an event mechanism: one class of threads controls when an "event" occurs, and another class waits for it. For this mechanism to work, the condition variable must be a global variable shared between the threads, and it must be used together with a mutex.

The interfaces for initializing and destroying condition variables are pthread_cond_init() and pthread_cond_destroy(); the interfaces for triggering the "event" are pthread_cond_signal() and pthread_cond_broadcast(); the interfaces for waiting for the "event" are pthread_cond_wait() and pthread_cond_timedwait(). Their full definitions are as follows:

int pthread_cond_init(pthread_cond_t *cond, const pthread_condattr_t *attr);
int pthread_cond_destroy(pthread_cond_t *cond);
int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex, const struct timespec *abstime);
int pthread_cond_signal(pthread_cond_t *cond);
int pthread_cond_broadcast(pthread_cond_t *cond);

As the names of the waiting interfaces suggest, one waits indefinitely and the other waits for a limited time. The latter is similar in spirit to pthread_mutex_trylock(): if the awaited "event" has not happened within a certain time, the thread can go do something else meaningful.

The interfaces that trigger the "event" come in "unicast" and "broadcast" forms. Unicast notifies only one waiting thread that the event has occurred, whereas broadcast notifies all of them. Even in the broadcast case, the notified threads still cross the mutex's single-log bridge one at a time.

For the use of condition variables, see the following code, which implements a producer-consumer thread synchronization scheme.

Example: Use condition variables

#include <stdio.h>  
#include <stdlib.h>  
#include <pthread.h>  
#define BUFFER_SIZE 5  
pthread_mutex_t g_mutex;  
pthread_cond_t g_cond;  
typedef struct {  
    char buf[BUFFER_SIZE];  
    int count;  
} buffer_t;  
buffer_t g_share = {"", 0};  
char g_ch = 'A';  
void* producer( void *arg )  
{  
    printf( "Producer starting.\n" );  
    while( g_ch != 'Z' ) {  
        pthread_mutex_lock( &g_mutex );  
        if( g_share.count < BUFFER_SIZE ) {  
            g_share.buf[g_share.count++] = g_ch++;  
            printf( "Prodcuer got char[%c]\n", g_ch - 1 );  
            if( BUFFER_SIZE == g_share.count ) {  
                printf( "Producer signaling full.\n" );  
                pthread_cond_signal( &g_cond );  
            }  
        }  
        pthread_mutex_unlock( &g_mutex );  
    }  
    printf( "Producer exit.\n" );  
    return NULL;  
}  
void* consumer( void *arg )  
{  
    int i;  
    printf( "Consumer starting.\n" );  
    while( g_ch != 'Z' ) {  
        pthread_mutex_lock( &g_mutex );  
        printf( "Consumer waiting\n" );  
        pthread_cond_wait( &g_cond, &g_mutex );  
        printf( "Consumer writing buffer\n" );  
        for( i = 0; g_share.buf[i] && g_share.count; ++i ) {  
            putchar( g_share.buf[i] );  
            --g_share.count;  
        }  
        putchar('\n');  
        pthread_mutex_unlock( &g_mutex );  
    }  
    printf( "Consumer exit.\n" );  
    return NULL;  
}  
int main( int argc, char *argv[] )  
{  
    pthread_t ppth, cpth;  
    pthread_mutex_init( &g_mutex, NULL );  
    pthread_cond_init( &g_cond, NULL );  
    pthread_create( &cpth, NULL, consumer, NULL );  
    pthread_create( &ppth, NULL, producer, NULL );  
    pthread_join( ppth, NULL );  
    pthread_join( cpth, NULL );  
    pthread_mutex_destroy( &g_mutex );  
    pthread_cond_destroy( &g_cond );  
    return 0;  
}  

There is a potential problem with this code: if the producer runs ahead of the consumer, it may take the lock and signal before the consumer has reached pthread_cond_wait(); the signal is lost, the consumer then waits forever, and the program hangs. A crude workaround is to insert a delay such as usleep(100) between pthread_create(&cpth, NULL, consumer, NULL) and pthread_create(&ppth, NULL, producer, NULL) so that the consumer is sure to reach pthread_cond_wait() first.

As the code shows, any interface that waits for the "event" must be handed a mutex, and the mutex is locked before the call and unlocked after it. Not only that, the lock is also taken before calling the interface that triggers the "event" and released after it. At first sight this looks like a problem: "triggering the event" and "waiting for the event" are then inside each other's critical sections, so a thread waiting for the "event" would seem to prevent the "event" from ever happening, and the "producer" and "consumer" would simply block each other. Yet what actually happens is that the "consumer" is notified when the buffer is full, then prints the characters one by one and clears the buffer, and only after all the characters have been printed does the "producer" continue.

Why? Because of what pthread_cond_wait() does with the mutex: it unlocks it. pthread_cond_wait() first unlocks the mutex and then waits. That lets the "producer" enter the critical section and signal the "consumer" once the condition is met.

When pthread_cond_wait() is woken by the notification, it re-locks the mutex, which keeps the "producer" from immediately continuing and refilling the buffer. It is also necessary to restrict the "producer" to adding data only while the buffer is not full, because by the time pthread_cond_wait() returns with the lock, the producer may already have re-entered the critical section. This interplay is exactly why a condition variable must be used together with a mutex. A common pattern that makes the waiting side robust is sketched below.
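
A minimal sketch of that pattern, reduced to a single flag: the waiting side re-checks the condition in a while loop around pthread_cond_wait(), so it behaves correctly no matter which thread runs first (which also sidesteps the lost-signal problem mentioned above). The g_ready flag and the function names are made up for this example:

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t g_mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  g_cond  = PTHREAD_COND_INITIALIZER;
int g_ready = 0;                               // the "condition" itself lives in shared data

void *producer(void *arg)
{
    pthread_mutex_lock(&g_mutex);
    g_ready = 1;                               // change the shared state first...
    pthread_cond_signal(&g_cond);              // ...then signal, still holding the lock
    pthread_mutex_unlock(&g_mutex);
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, producer, NULL);

    pthread_mutex_lock(&g_mutex);
    while (!g_ready)                           // re-check the predicate after every wakeup
        pthread_cond_wait(&g_cond, &g_mutex);  // atomically unlocks, waits, re-locks on return
    printf("consumer saw the condition\n");
    pthread_mutex_unlock(&g_mutex);

    pthread_join(tid, NULL);
    return 0;
}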

In addition, many other types of thread synchronization mechanisms can be modeled using condition variables and mutexes, such as Event and Semaphore.

3. Common Pthreads function reference

Thread manipulation functions

  • pthread_create(): creates a thread.

  • pthread_exit(): terminates the current thread.

  • pthread_cancel(): requests cancellation of another thread. The target thread keeps running until it reaches a cancellation point, which is a place where the thread checks whether it has been cancelled and acts on the request. POSIX defines two cancellation types: deferred cancellation (PTHREAD_CANCEL_DEFERRED), the default, and asynchronous cancellation (PTHREAD_CANCEL_ASYNCHRONOUS), under which a thread can be cancelled at any moment. For a system call, the cancellation point is effectively the window during which the cancellation type is switched to asynchronous and then back to deferred. Almost any library function that can suspend a thread responds to a cancel request and terminates the thread, including delay functions such as sleep.

  • pthread_join(): blocks the current thread until another thread finishes running.

  • pthread_kill(): sends a signal to the thread with the specified ID. If the thread does not handle the signal, the signal's default action applies. The signal value 0 is reserved and is used to check, from the function's return value, whether the thread is still alive (see the sketch after this list).

  • pthread_cleanup_push(): registers a function to be called when the thread exits abnormally. Such functions are called thread cleanup handlers, and a thread can register several of them. Their entry addresses are stored on a stack, so they run in last-in, first-out order. Thread termination caused by pthread_cancel or pthread_exit executes the handlers pushed by pthread_cleanup_push in that order; a thread function that terminates with a return statement does not run them.

  • pthread_cleanup_pop(): when called with a non-zero argument, runs the cleanup handler it pops.

  • pthread_setcancelstate(): enables or disables cancellation of the calling thread.

  • pthread_setcanceltype(): sets the cancellation type to deferred or asynchronous.
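
A minimal sketch of the "signal 0" liveness probe described for pthread_kill() above (the worker thread and the one-second sleep are made up for this example):

#include <pthread.h>
#include <signal.h>
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

void *worker(void *arg)
{
    sleep(1);                               // stay alive for a moment
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);

    int err = pthread_kill(tid, 0);         // signal 0: probe only, nothing is actually delivered
    if (err == 0)
        printf("thread is still alive\n");
    else if (err == ESRCH)
        printf("no such thread\n");

    pthread_join(tid, NULL);
    return 0;
}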

Thread attribute functions

  • pthread_attr_init(): initializes a thread attribute object. After it runs, the pthread_attr_t structure contains the operating system's default values for all supported thread attributes.

  • pthread_attr_setdetachstate(): sets the detachstate attribute (which determines whether the thread is joinable when it terminates).

  • pthread_attr_getdetachstate(): gets the detachstate attribute.

  • pthread_attr_setscope(): sets the scope attribute.

  • pthread_attr_setschedparam(): sets the schedparam attribute, i.e. the scheduling priority.

  • pthread_attr_getschedparam(): gets the schedparam attribute, i.e. the scheduling priority.

  • pthread_attr_destroy(): destroys the thread attribute object, overwriting it with invalid values.

Mutex functions:

  • pthread_mutex_init(): initializes a mutex.

  • pthread_mutex_destroy(): destroys a mutex.

  • pthread_mutex_lock(): acquires the mutex (blocking).

  • pthread_mutex_trylock(): tries to acquire the mutex (non-blocking): if the mutex is free it is locked, otherwise the call returns immediately.

  • pthread_mutex_unlock(): releases the mutex.

  • pthread_mutexattr_*(): functions related to mutex attributes.

Condition variable functions

  • pthread_cond_init(): initializes a condition variable.

  • pthread_cond_destroy(): destroys a condition variable.

  • pthread_cond_signal(): sends a signal to a thread that is blocked waiting on the condition variable's queue, releasing it from the blocked state so it wakes up and resumes execution. pthread_cond_signal also returns successfully if no thread is blocked waiting. Normally only one blocked thread is signalled; if several threads are blocked on the condition variable, their priorities determine which one receives the signal, and with equal priorities the length of waiting time decides. On multiprocessor systems, however, pthread_cond_signal may wake more than one thread at once. When only one thread should handle the task, the other awakened threads must go back to waiting. The POSIX specification only requires pthread_cond_signal to wake at least one thread blocked in pthread_cond_wait, and for simplicity some implementations wake several even on a single processor. For this reason it is best to wrap pthread_cond_wait() in a while loop that re-checks the condition.

  • pthread_cond_wait(): waits for the condition variable's condition to occur; it must be used together with a pthread mutex. The call actually does three things in sequence: it unlocks the current mutex, suspends the calling thread on the condition variable's wait queue, and, after being woken by another thread's signal, re-locks the mutex. If the thread is woken by a signal, it is re-locked with the matching mutex, and pthread_cond_wait() does not return until that mutex has been acquired. Note that one condition variable should not be used with more than one mutex.

  • pthread_cond_broadcast(): in some applications, such as thread pools, pthread_cond_broadcast wakes all waiting threads, but usually only some of them are needed to run the task, so the rest must go back to waiting.

  • pthread_condattr_*(): functions related to condition variable attributes.

Thread-local storage (thread-private data) functions:

  • pthread_key_create(): allocates a key of type pthread_key_t used to identify thread-specific data within the process.

  • pthread_key_delete(): destroys an existing thread-specific data key.

  • pthread_setspecific(): sets the value bound to the given key for the calling thread.

  • pthread_getspecific(): gets the value bound to the given key for the calling thread.

Synchronization barrier functions

  • pthread_barrier_init(): initializes a synchronization barrier with the number of threads that must reach it.

  • pthread_barrier_wait(): blocks the calling thread until the configured number of threads have all reached the barrier (see the sketch below).

  • pthread_barrier_destroy(): destroys a synchronization barrier.
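
A minimal sketch of a barrier used as a rendezvous point (note that pthread_barrier_* is an optional POSIX feature: it is available on Linux but not in Apple's pthread implementation; the worker function and the count of 3 are made up for this example):

#include <pthread.h>
#include <stdio.h>

#define WORKERS 3
pthread_barrier_t g_barrier;

void *worker(void *arg)
{
    long no = (long)arg;
    printf("worker %ld: phase 1 done\n", no);
    pthread_barrier_wait(&g_barrier);        // block until all WORKERS threads arrive here
    printf("worker %ld: phase 2 starts\n", no);
    return NULL;
}

int main(void)
{
    pthread_t tid[WORKERS];
    pthread_barrier_init(&g_barrier, NULL, WORKERS);   // count = number of participating threads

    for (long i = 0; i < WORKERS; ++i)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < WORKERS; ++i)
        pthread_join(tid[i], NULL);

    pthread_barrier_destroy(&g_barrier);
    return 0;
}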

Other multithreaded synchronization functions:

  • pthread_rwlock_*(): read/write locks

Utility functions:

  • pthread_equal(): compares the thread IDs of two threads.

  • pthread_detach(): detaches a thread.

  • pthread_self(): returns the calling thread's own thread ID.

  • pthread_once(): runs an initialization function exactly once. Its first parameter, of type pthread_once_t, is essentially an internally implemented lock that guarantees the function is executed only once in the whole program (see the sketch below).
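
A minimal sketch of pthread_once() (init_shared_state and run_task are made-up names): however many threads race through run_task(), the initializer runs exactly once:

#include <pthread.h>
#include <stdio.h>

pthread_once_t g_once = PTHREAD_ONCE_INIT;

void init_shared_state(void)
{
    // runs exactly once, no matter how many threads call run_task()
    printf("one-time initialization\n");
}

void *run_task(void *arg)
{
    pthread_once(&g_once, init_shared_state);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, run_task, NULL);
    pthread_create(&b, NULL, run_task, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}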

Semaphore functions

Included in semaphore.h:

  • sem_open: creates or opens an existing named semaphore. Semaphores come in binary and counting flavors; named semaphores can be shared between processes.

  • sem_close: closes a semaphore without removing it from the system. Named semaphores have kernel persistence: their value is kept even when no process currently has them open.

  • sem_unlink: removes a named semaphore from the system.

  • sem_getvalue: returns the current value of the given semaphore. If the semaphore is currently locked, the result is either 0 or a negative number whose absolute value is the number of threads waiting for it.

  • sem_wait: requests the shared resource. If the semaphore's value is greater than 0, it is decremented by 1 and the call returns immediately; if it is 0, the calling thread sleeps until the value becomes greater than 0, then decrements it by 1 and returns. The sem_wait operation must be atomic. (See the sketch after this list.)

  • sem_trywait: requests the shared resource without sleeping; if the semaphore is already 0, it returns an EAGAIN error instead.

  • sem_post: releases the shared resource; the counterpart of sem_wait.

  • sem_init: initializes an unnamed (memory-based) semaphore.

  • sem_destroy: destroys an unnamed semaphore.
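
A minimal sketch using an unnamed semaphore as a counter of available "slots" (Linux-oriented: Apple platforms do not implement unnamed semaphores via sem_init, so there a named semaphore created with sem_open would be used instead; the slot count of 2 and the worker function are made up for this example):

#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

sem_t g_slots;                              // counts the available "resources"

void *worker(void *arg)
{
    sem_wait(&g_slots);                     // acquire: decrement, or sleep until the value is > 0
    printf("worker %ld got a slot\n", (long)arg);
    sem_post(&g_slots);                     // release: increment, waking a waiter if there is one
    return NULL;
}

int main(void)
{
    pthread_t tid[4];
    sem_init(&g_slots, 0, 2);               // 0 = shared between threads of this process, initial value 2

    for (long i = 0; i < 4; ++i)
        pthread_create(&tid[i], NULL, worker, (void *)i);
    for (int i = 0; i < 4; ++i)
        pthread_join(tid[i], NULL);

    sem_destroy(&g_slots);
    return 0;
}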

Shared memory functions

Declared in sys/mman.h; link with the rt library:

  • mmap: maps a file or a POSIX shared memory object into the address space of the calling process. It is used for three purposes: 1. memory-mapped I/O over an ordinary file; 2. anonymous memory mappings via special files; 3. POSIX shared memory between unrelated processes via shm_open (see the sketch after this list).

  • munmap: removes a mapping.

  • msync: synchronizes a mapped region with its underlying file.

  • shm_open: creates or opens a shared memory object.

  • shm_unlink: removes the name of a shared memory object. Removing the name only prevents subsequent shm_open, mq_open or sem_open calls from succeeding with it.

  • ftruncate: adjusts the size of a file or shared memory object.

  • fstat: obtains information about the object.
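
A minimal sketch that ties these calls together within one process ("/demo_shm" is a placeholder object name; on Linux, link with -lrt as noted above):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *name = "/demo_shm";                   // placeholder object name
    int fd = shm_open(name, O_CREAT | O_RDWR, 0600);  // create or open the shared memory object
    ftruncate(fd, 4096);                              // size it to one page

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    strcpy(p, "hello from shared memory");            // write through the mapping
    printf("%s\n", p);

    munmap(p, 4096);                                  // remove the mapping
    close(fd);
    shm_unlink(name);                                 // remove the name from the system
    return 0;
}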

Reference: blog.csdn.net/jiajun2001/…

Further reading: www.yolinux.com/TUTORIALS/L… Randu.org/tutorials/t…