Anyone who has written C is probably familiar with the linked list, one of the language's most common data structures, and with how it is usually implemented. What is less widely known is how linked lists are implemented inside the Linux kernel. If you have not spent much time in the kernel source, you may never have seen the kernel's linked list, or realized how clever its implementation is. Let me walk through what makes it special.

Compared with the usual implementation, the Linux kernel's list can fairly be called unique. A traditional list is built by adding a pointer to the next node inside the data itself. For example, consider a cat structure describing a member of the feline family.

```c
struct cat {
    unsigned long tail_length;  /* tail length */
    unsigned long weight;       /* weight */
    bool is_fantastic;          /* is this cat wonderful? */
};
```

The way to store this structure in a list is usually to embed a list pointer in the data structure, such as:

```c
struct cat {
    unsigned long tail_length;  /* tail length */
    unsigned long weight;       /* weight */
    bool is_fantastic;          /* is this cat wonderful? */
    struct cat *next;           /* points to the next cat */
    struct cat *prev;           /* points to the previous cat */
};
```

However, if the Linux kernel adopted this approach, then because each kind of device in the kernel has its own data type, the kernel would be unable to manage all devices in a unified way, and duplicating list code for every type would greatly increase repetition, which clearly goes against the kernel's design philosophy. Therefore, a kernel linked list implementation was officially introduced in kernel 2.1: a simple, efficient list that unifies all the others. Its data structure is as follows:

```c
struct list_head {
    struct list_head *next, *prev;
};
```

Notice that a node of this list carries no data fields at all: no matter how complex the enclosing data structure is, its presence in the list is just a pair of forward and backward pointers. When a data structure (for example, a structure describing a device) wants to be managed by this generic list, it simply embeds a list_head field in the structure body:

```c
struct cat {
    unsigned long tail_length;  /* tail length */
    unsigned long weight;       /* weight */
    bool is_fantastic;          /* is this cat wonderful? */
    struct list_head list;      /* the cat's embedded list node */
};
```
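Outside the kernel tree, `<linux/list.h>` is not available, so the sketch below re-creates a minimal `struct list_head` in userspace to show what embedding and initializing a node looks like. This is an illustrative simplification, not the kernel's actual header; `list_head_demo` is a name invented here.

```c
#include <stdbool.h>

/* Minimal userspace re-creation of the kernel's list_head (illustrative
 * only -- the real definitions live in <linux/list.h>). */
struct list_head {
    struct list_head *next, *prev;
};

/* An empty list points back at itself in both directions; that self-loop
 * is what makes the list circular. */
static inline void INIT_LIST_HEAD(struct list_head *list)
{
    list->next = list;
    list->prev = list;
}

struct cat {
    unsigned long tail_length;
    unsigned long weight;
    bool is_fantastic;
    struct list_head list;   /* the embedded node -- no data fields of its own */
};

/* Returns 1 if a freshly initialized head is empty (next == prev == head). */
int list_head_demo(void)
{
    struct cat felix = { .tail_length = 30, .weight = 4, .is_fantastic = true };
    INIT_LIST_HEAD(&felix.list);
    return felix.list.next == &felix.list && felix.list.prev == &felix.list;
}
```

Note that the emptiness check `head->next == head` is exactly how the kernel's own `list_empty()` is expressed, which falls out naturally from the circular design.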

Because the list is doubly linked, the entire list can be traversed forward or backward starting from any node, which makes traversal very convenient. And because it is circular, nodes can be iterated over endlessly. Process scheduling is a good example: the operating system keeps ready processes on a generic linked list, the ready queue, and cycles around it again and again, handing out time slices and the CPU to each process in turn. A bare list structure alone is not convenient enough, though, so the kernel also provides a set of list operation routines, such as the list_add() method that adds a new node to the list. These methods share one trait: they accept only list_head pointers as arguments.

```c
static inline void __list_add(struct list_head *new,
                              struct list_head *prev,
                              struct list_head *next)
{
    next->prev = new;
    new->next = next;
    new->prev = prev;
    prev->next = new;
}
```
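The kernel wraps this internal helper in two public routines: list_add() inserts right after the head (stack-like), and list_add_tail() inserts just before the head (queue-like). The sketch below is a simplified userspace re-creation under that assumption, with a hypothetical demo function that builds a list and reads it back in order.

```c
#include <stddef.h>

/* Userspace sketch of list_add()/list_add_tail(), built on the __list_add()
 * shown in the text. Simplified re-creation, not the kernel header. */
struct list_head { struct list_head *next, *prev; };

static inline void __list_add(struct list_head *new,
                              struct list_head *prev,
                              struct list_head *next)
{
    next->prev = new;
    new->next = next;
    new->prev = prev;
    prev->next = new;
}

/* Insert right after the head -- stack-like behaviour. */
static inline void list_add(struct list_head *new, struct list_head *head)
{
    __list_add(new, head, head->next);
}

/* Insert just before the head -- queue-like behaviour. */
static inline void list_add_tail(struct list_head *new, struct list_head *head)
{
    __list_add(new, head->prev, head);
}

struct item { int id; struct list_head list; };

/* Append ids 1, 2, 3 with list_add_tail() and fold the forward traversal
 * into a three-digit number, showing insertion order is preserved. */
int list_add_demo(void)
{
    struct list_head head = { &head, &head };  /* empty circular list */
    struct item a = { 1 }, b = { 2 }, c = { 3 };
    struct list_head *pos;
    int result = 0;

    list_add_tail(&a.list, &head);
    list_add_tail(&b.list, &head);
    list_add_tail(&c.list, &head);

    for (pos = head.next; pos != &head; pos = pos->next) {
        /* Recover the containing item from its embedded node by hand. */
        struct item *it =
            (struct item *)((char *)pos - offsetof(struct item, list));
        result = result * 10 + it->id;
    }
    return result;   /* 123 when order is a, b, c */
}
```

The by-hand pointer arithmetic in the loop is exactly what the container_of macro discussed next packages up for us.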

With this nodes-only linked list, device structures in the kernel can be managed uniformly. One problem remains: how do we get back from a list node to the first address of the device structure that contains it? This is the cleverest part of the kernel list implementation.

```c
#define container_of(ptr, type, member) ({                  \
    const typeof( ((type *)0)->member ) *__mptr = (ptr);    \
    (type *)( (char *)__mptr - offsetof(type, member) ); })
```

The kernel obtains the starting address of the containing structure with the container_of macro. Its three parameters are ptr, type, and member: given the address of a member variable inside a structure, the member's name, and the structure's type, the macro finds the address of the structure variable itself. The trick is done at compile time: first find the member's offset within the structure, then subtract that offset from the member's address to get the address of the enclosing structure. A few details: typeof returns the type of a variable; it is a GCC compiler extension, not part of the C standard. ((type *)0) casts address 0 to a pointer to a structure of type type, and ((type *)0)->member then refers to the member member of that structure.

```c
const typeof( ((type *)0)->member ) *__mptr = (ptr);
```

typeof() yields the type of the member field in the structure, and a temporary pointer variable __mptr of that type is defined and assigned the member address that ptr points to, so that ptr and its contents are left undisturbed. Next, offsetof(type, member):

```c
#define offsetof(TYPE, MEMBER) ((size_t) &((TYPE *)0)->MEMBER)
```

This computes the offset of MEMBER relative to address 0, which is the member's offset in bytes within the structure.
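As a quick sanity check, the standard offsetof from `<stddef.h>` can be probed in userspace. This sketch assumes the common case where two consecutive members of the same type have no padding between them; `weight_offset` is a name invented for illustration.

```c
#include <stddef.h>

struct cat {
    unsigned long tail_length;
    unsigned long weight;
};

/* weight sits directly after tail_length; between two members of the same
 * type there is no padding on common ABIs (an assumption of this sketch),
 * so the offset is simply sizeof(unsigned long). */
size_t weight_offset(void)
{
    return offsetof(struct cat, weight);
}
```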

```c
(type *)( (char *)__mptr - offsetof(type, member) );
```

__mptr is cast to char * because offsetof yields an offset in bytes. Subtracting the offset gives the starting position of the structure, which is then cast to type *. From this we obtain the first address of the device structure. Building on the container_of macro, the kernel defines a simple macro, list_entry, that returns the containing structure of a list_head. With list_entry, the kernel can provide creation, manipulation, and other list management methods without ever needing to know which object's data structure the list_head is embedded in.
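A simplified userspace re-creation of list_entry, and of a traversal macro in the spirit of the kernel's list_for_each_entry, might look like the following. Note one deliberate simplification: the real kernel macro infers the container type with typeof, whereas this sketch passes the type explicitly; the struct and function names are illustrative.

```c
#include <stddef.h>

struct list_head { struct list_head *next, *prev; };

/* Simplified list_entry: container_of specialized for list nodes. */
#define list_entry(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

/* Simplified traversal macro; the kernel's version infers `type` via
 * typeof(*pos) instead of taking it as a parameter. */
#define list_for_each_entry(pos, head, type, member)                \
    for (pos = list_entry((head)->next, type, member);              \
         &pos->member != (head);                                    \
         pos = list_entry(pos->member.next, type, member))

struct cat { unsigned long weight; struct list_head list; };

/* Sum the weights of every cat on the list -- the caller never touches
 * raw pointers, only whole cat structures. */
unsigned long total_weight(struct list_head *head)
{
    struct cat *pos;
    unsigned long sum = 0;
    list_for_each_entry(pos, head, struct cat, list)
        sum += pos->weight;
    return sum;
}

unsigned long list_entry_demo(void)
{
    struct list_head head = { &head, &head };
    struct cat a = { 3 }, b = { 4 };

    /* Splice the nodes in by hand: head <-> a <-> b <-> head. */
    head.next = &a.list;   a.list.prev = &head;
    a.list.next = &b.list; b.list.prev = &a.list;
    b.list.next = &head;   head.prev = &b.list;

    return total_weight(&head);   /* 3 + 4 */
}
```

This is what makes the design so ergonomic: traversal code iterates over plain list_head nodes, yet hands the caller back fully typed containing structures.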

As you can see, the Linux kernel's linked list implementation is nothing if not clever: through a transparent understanding of memory addresses, it achieves a general-purpose list structure that lets the kernel's many complex devices be managed conveniently and uniformly. The kernel is full of other clever ideas like this; reading widely and learning the implementation ideas and techniques of master programmers can significantly improve our daily development efficiency, and is a way of sharpening our thinking as well.

  • End –

With the rapid development of technology, AMu Lab will keep pace and continue to recommend the latest technology and hardware in the robotics industry to everyone. The greatest value of our training is seeing our trainees make rapid technical progress. If you work in the robotics industry, please follow our official account; we will keep publishing the most valuable news and technology in the field. AMu Lab is committed to providing open-source software and hardware tools and course services for robot R&D, to make R&D more efficient!