Notice to readers:

While learning from the experts I kept following along in the source code, and for the blogs I mainly draw on I will say so clearly at the beginning of the article. Due to my limited ability, I found during the write-up that many things were not as simple as I had imagined, so for now this article is in an unfinished state…

The restart is tentatively scheduled for a year from now. Binder goes too deep (personally, I cannot summarize it all in a short time).

If you want to learn about Binder, you can refer to the following three articles:

Design principle of Binder

Binder principle analysis for Android application engineers

Binder source code analysis

And of course, to understand the analogies that appear in the study diagrams, I also went back over the principles of Internet protocols and of DNS.

Binder Study Notes 1

Preface

In the Android system, each application is composed of one or more of the four types of Android components: Activity, Service, BroadcastReceiver, and ContentProvider. The underlying cross-process communication involved in all four components relies on the Binder IPC mechanism. (See the article "Design principle of Binder" above.)

Android is an open platform with many developers, and applications come from a wide range of sources, so the security of the device is very important. End users do not want a program downloaded from the web to secretly read private data, connect to a wireless network, or keep low-level devices busy and drain the battery. Traditional IPC has no security measures and relies entirely on upper-layer protocols. First, with traditional IPC the receiver cannot obtain a reliable UID/PID (user ID/process ID) for the sending process and therefore cannot verify its identity. Android assigns every installed application its own UID, so a process's UID is an important mark of its identity. With traditional IPC, the sender can only fill the UID/PID into the data packet itself, which is unreliable and easily exploited by malicious programs; a reliable identity mark can only be added inside the kernel by the IPC mechanism itself. Second, traditional IPC access points are open and cannot be made into private channels. For example, the name of a named pipe, the key of a System V object, and the IP address or file name of a socket are all public; any program that knows one of these access points can establish a connection with the peer.

For the above reasons, Android needed a new IPC mechanism that satisfies the system's requirements for communication style, transmission performance, and security: Binder. Binder supports both real-name and anonymous Binders, provides high security, and requires only one data copy during transmission.

Binder uses client-server communication. A process acting as the Server provides services such as video/audio decoding, video capture, address-book queries, and network connections; multiple processes acting as Clients initiate service requests to the Server to obtain the services they need. To achieve client-server communication, two things are required: first, the Server must have a certain access point, or address, at which it accepts Client requests, and the Client must be able to obtain that address somehow; second, a command-reply protocol must be defined to transmit the data. For example, in network communication, the Server's access point is the IP address and port number of the Server host, and the transport protocol is TCP. For Binder, the Binder itself can be regarded as the access point the Server provides to implement a particular service; the Client sends requests to this "address" to use the service. To a Client, a Binder can be seen as a conduit leading into a Server; to communicate with the Server, the Client must first obtain this conduit.

Unlike other IPC mechanisms, Binder uses an object-oriented approach to describe the access point and its entry points in the Clients: a Binder is an object whose entity resides in the Server, providing a set of methods that implement requests for the service, just like the member functions of a class. The entry points in the Clients can be thought of as "pointers" to that Binder object; once a Client obtains such a pointer, it can invoke the object's methods to access the Server. From the Client's point of view, calling a method through a Binder "pointer" is no different from calling a method through a pointer to any local object, even though the former's entity lives in a remote Server while the latter's lives in local memory. "Pointer" is a C++ term; the more general term is "reference": a Client accesses a Server through a reference to a Binder. Another software term, "handle", is also used to describe how a Binder exists in a Client. From the perspective of communication, the Binder in a Client can also be regarded as an "agent" (proxy) of the Server-side Binder, providing the service to the Client locally on behalf of the remote Server.

A brief summary of IPC

Inter-process communication (IPC) refers to the techniques by which two or more processes (or threads) exchange data or signals.

Each Android process can run only within its own virtual address space. Taking a 4GB virtual address space as an example, 3GB is user space and 1GB is kernel space (the size of the kernel space can be adjusted through configuration parameters). User space is not shared between processes, while kernel space is shared by all of them. The Client process and the Server process communicate precisely by using this shared kernel space, typically interacting with the kernel-space driver through ioctl() and similar calls.
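As a concrete illustration, here is a minimal sketch of user space talking to the kernel-space binder driver through ioctl(). BINDER_VERSION is a real command from the binder UAPI header, though the header path varies across kernel versions; treat the program itself as an illustration, not production code.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h> // UAPI header; the path varies by kernel version

int main(void)
{
    struct binder_version vers;

    int fd = open("/dev/binder", O_RDWR); // traps into the driver's binder_open()
    if (fd < 0) {
        perror("open /dev/binder");
        return 1;
    }
    // One round trip into kernel space: ask the driver for its protocol version
    if (ioctl(fd, BINDER_VERSION, &vers) == -1) {
        perror("ioctl BINDER_VERSION");
        return 1;
    }
    printf("binder protocol version: %d\n", vers.protocol_version);
    return 0;
}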

Principle of Binder

SDK ver: Android 6.0

Gityuan's blog (study address): the Binder sections here are my notes and summary from studying his blog.

Binder communication uses the C/S architecture and consists of Client, Server, ServiceManager, and Binder drivers from a component perspective. The ServiceManager manages various services in the system. The architecture diagram is shown above.

It can be seen that both registering a service and obtaining a service go through the ServiceManager. Note that the ServiceManager here refers to the Native-layer ServiceManager (C++). The ServiceManager is the manager of the Binder communication mechanism and the daemon of Android's inter-process communication mechanism Binder, responsible for service registration and lookup. Similar to DNS, the ServiceManager translates a Binder name in the form of a character string into a reference to that Binder in the Client, enabling the Client to obtain a reference to the Binder entity in the Server.

In the figure, the communication among Client, Server, and ServiceManager is itself based on the Binder mechanism. Since that communication is based on Binder and Binder is a C/S architecture, each of the three steps in the figure has its own client side and server side:

  1. Registering a service (addService): the Server process first registers its service with the ServiceManager. In this step the Server is the client and the ServiceManager is the server.
  2. Obtaining a service (getService): before using a service, the Client process obtains it from the ServiceManager. In this step the Client is the client and the ServiceManager is the server (a sketch of this lookup follows this list).
  3. Using a service: the Client establishes a communication path with the Server process that hosts the service based on the obtained service information, and then interacts with that service directly. In this step the Client is the client and the Server is the server.
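To make step 2 concrete, below is a hedged sketch of how a native Client asks the ServiceManager for a service handle. It is modeled on the svcmgr_lookup() helper in AOSP's frameworks/native/cmds/servicemanager/bctest.c; the bio_* helpers and binder_call() come from the ServiceManager's own binder.c, and SVC_MGR_NAME / SVC_MGR_CHECK_SERVICE are its protocol constants, so treat this as an illustration rather than a drop-in implementation.

// Sketch: look up a service by name via the ServiceManager, whose handle is 0.
uint32_t svcmgr_lookup(struct binder_state *bs, uint32_t target, const char *name)
{
    uint32_t handle;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);               // strict-mode policy header
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);        // e.g. "media.player"

    // target is 0: the well-known reference to the ServiceManager
    if (binder_call(bs, &msg, &reply, target, SVC_MGR_CHECK_SERVICE))
        return 0;

    handle = bio_get_ref(&reply);          // reference to the server's binder_node
    if (handle)
        binder_acquire(bs, handle);        // take a strong reference on it
    binder_done(bs, &msg, &reply);
    return handle;
}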

The interactions among Client, Server, and ServiceManager in the figure above are drawn as dotted lines because they do not interact with each other directly; each of them interacts with the Binder driver to achieve IPC. The Binder driver sits in kernel space, while Client, Server, and ServiceManager sit in user space. The Binder driver and the ServiceManager can be regarded as the infrastructure of the Android platform, while Client and Server belong to the Android application layer: developers only need to implement their own Client and Server, and the underlying platform lets them carry out IPC directly.

Binder Learning Notes 2

How the ServiceManager starts in Binder

SMgr (the ServiceManager) is one process and a Server is another; a Server registering its Binder with SMgr necessarily involves inter-process communication. But here inter-process communication is being used to bootstrap inter-process communication, which is like the egg that can hatch into a chicken when you first need a chicken to lay the egg. Binder's implementation is clever: it pre-creates the chicken that hatches the eggs. The ServiceManager and the other processes also communicate using Binder: the ServiceManager is the Server, with its own Binder object (entity), and every other process is a Client that needs a reference to this Binder in order to register, query, and obtain Binders. The ServiceManager's Binder is special: it has no name and needs no registration. When a process registers itself as the ServiceManager with the BINDER_SET_CONTEXT_MGR command, the Binder driver automatically creates a Binder entity for it (this is the pre-built chicken). Furthermore, this Binder's reference is fixed at 0 in all Clients, and no other means of obtaining it is needed. That is, for a Server to register its Binder with the ServiceManager, it must communicate with the ServiceManager's Binder through reference number 0.
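Registration through reference number 0 can be sketched the same way as the lookup shown earlier. The following is modeled on svcmgr_publish() in AOSP's bctest.c (again an illustration, reusing the ServiceManager's own bio_* helpers and protocol constants):

// Sketch: register a service with the ServiceManager through reference 0.
int svcmgr_publish(struct binder_state *bs, uint32_t target, const char *name, void *ptr)
{
    int status;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);               // strict-mode policy header
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);        // the name to register under
    bio_put_obj(&msg, ptr);                // flat_binder_object carrying the Binder entity

    // target is 0: every Server reaches the ServiceManager via reference 0
    if (binder_call(bs, &msg, &reply, target, SVC_MGR_ADD_SERVICE))
        return -1;

    status = bio_get_uint32(&reply);       // 0 on success
    binder_done(bs, &msg, &reply);
    return status;
}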

↓↓↓↓↓ ServiceManager startup process

The ServiceManager is the daemon process of Binder IPC. It is itself a Binder service, but it does not use libbinder's multithreaded model to talk to the Binder driver; instead it has its own binder.c that talks to the driver directly, and there is only one loop, binder_loop(), to read and process transactions. Let's look at how the ServiceManager works from the source code.

From the references above, the ServiceManager is created by the init process while parsing the init.rc file. Its executable is /system/bin/servicemanager and the corresponding source file is service_manager.c. Using the main() function in service_manager.c as the entry point, we can walk through the entire ServiceManager startup process.

The main steps all appear in the main() function; afterwards we take an in-depth look at some of them.

// service_manager.c: the main() function
int main(int argc, char **argv) {
    struct binder_state *bs;
    // Step 1: open the binder driver and map 128KB of memory
    bs = binder_open(128*1024);
    ...
    // Step 2: become the context manager
    if (binder_become_context_manager(bs)) {
        return -1;
    }

    // selinux permission check: does this process have the right to register or look up services
    selinux_enabled = is_selinux_enabled();
    sehandle = selinux_android_service_context_handle();
    selinux_status_open(true);

    if (selinux_enabled > 0) {
        if (sehandle == NULL) {
            abort(); // Cannot get sehandle
        }
        if (getcon(&service_manager_context) != 0) {
            abort(); // Cannot get the service_manager context
        }
    }
    ...
    // Step 3: enter an infinite loop to process client requests; svcmgr_handler provides service registration and lookup
    binder_loop(bs, svcmgr_handler);
    return 0;
}


Step 1: open the Binder driver

This corresponds to steps 1 through 4 in the figure above.

  1. The binder device is opened by calling open(). open() enters the binder driver through a system call, which invokes the driver's binder_open(); this creates a binder_proc object in the driver layer, assigns it to filp->private_data, and adds it to the global linked list binder_procs.

  2. Using ioctl(), verify that the binder version used in user space is consistent with the binder driver's version.

  3. mmap() is called to map memory; the mmap() system call correspondingly enters the driver's binder_mmap(), which creates a binder_buffer object in the Binder driver and puts it on the proc->buffers list of the current binder_proc.
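For reference, here is a sketch of the user-space side of these three steps, modeled on binder_open() in the ServiceManager's frameworks/native/cmds/servicemanager/binder.c (simplified; treat it as an illustration rather than the exact AOSP source):

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/android/binder.h> // UAPI header; the path varies by kernel version

struct binder_state {
    int fd;          // file descriptor of /dev/binder
    void *mapped;    // start address of the mmap'ed receive buffer
    size_t mapsize;  // size of the mapping (128KB for the ServiceManager)
};

struct binder_state *binder_open(size_t mapsize)
{
    struct binder_state *bs = malloc(sizeof(*bs));
    struct binder_version vers;

    if (!bs)
        return NULL;

    // Step 1: open() traps into the driver's binder_open(), creating a binder_proc
    bs->fd = open("/dev/binder", O_RDWR);
    if (bs->fd < 0)
        goto fail_open;

    // Step 2: check that user space and the driver speak the same protocol version
    if ((ioctl(bs->fd, BINDER_VERSION, &vers) == -1) ||
        (vers.protocol_version != BINDER_CURRENT_PROTOCOL_VERSION))
        goto fail_map;

    // Step 3: mmap() traps into binder_mmap(), setting up the receive buffer pool
    bs->mapsize = mapsize;
    bs->mapped = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, bs->fd, 0);
    if (bs->mapped == MAP_FAILED)
        goto fail_map;

    return bs;

fail_map:
    close(bs->fd);
fail_open:
    free(bs);
    return NULL;
}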

Step 2: register as the manager of Binder services

This corresponds to steps 5 through 7 in the figure above.

The call chain is binder_become_context_manager(bs) -> ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0) -> (in the driver) binder_ioctl() -> binder_ioctl_set_ctx_mgr(filp).

The global singleton binder_node object binder_context_mgr_node is created in the binder_ioctl_set_ctx_mgr() method, and its strong and weak reference counts are each incremented by one. Part of the code is as follows:

static int binder_ioctl_set_ctx_mgr(struct file *filp)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    kuid_t curr_euid = current_euid();
    // Ensure that the mgr_node object is created only once
    if (binder_context_mgr_node != NULL) {
        ret = -EBUSY;
        goto out;
    }
    if (uid_valid(binder_context_mgr_uid)) {
        ...
    } else {
        // Record the current thread's EUID as the uid of the ServiceManager
        binder_context_mgr_uid = curr_euid;
    }
    // Create the ServiceManager entity (here it is at last)
    binder_context_mgr_node = binder_new_node(proc, 0, 0);
    ...
    binder_context_mgr_node->local_weak_refs++;
    binder_context_mgr_node->local_strong_refs++;
    binder_context_mgr_node->has_strong_ref = 1;
    binder_context_mgr_node->has_weak_ref = 1;
out:
    return ret;
}

The binder_new_node() code is as follows:


static struct binder_node *binder_new_node(struct binder_proc *proc,
                       binder_uintptr_t ptr,
                       binder_uintptr_t cookie)
{
    struct rb_node **p = &proc->nodes.rb_node;
    struct rb_node *parent = NULL;
    struct binder_node *node;
    // On first entry the tree is empty, so the loop body is skipped
    while (*p) {
        parent = *p;
        node = rb_entry(parent, struct binder_node, rb_node);

        if (ptr < node->ptr)
            p = &(*p)->rb_left;
        else if (ptr > node->ptr)
            p = &(*p)->rb_right;
        else
            return NULL;
    }

    // Allocate kernel space to the newly created binder_node
    node = kzalloc(sizeof(*node), GFP_KERNEL);
    if (node == NULL)
        return NULL;
    binder_stats_created(BINDER_STAT_NODE);
    // Add the newly created Node object to the Proc red-black tree;
    rb_link_node(&node->rb_node, parent, p);
    rb_insert_color(&node->rb_node, &proc->nodes);
    node->debug_id = ++binder_last_id;
    node->proc = proc;
    node->ptr = ptr;
    node->cookie = cookie;
    node->work.type = BINDER_WORK_NODE; // Set the type of binder_work
    INIT_LIST_HEAD(&node->work.entry);
    INIT_LIST_HEAD(&node->async_todo);
    return node;
}


This creates a binder_node structure in the Binder driver layer, records the current binder_proc in node->proc, and initializes the binder_node's work.entry and async_todo queues.

Step 3: an infinite loop that processes requests from clients

This corresponds to steps 9 through 13 in the figure above.

void binder_loop(struct binder_state *bs, binder_handler func) {
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];
    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;
    readbuf[0] = BC_ENTER_LOOPER;
    
    // Send BC_ENTER_LOOPER to the binder driver to put the ServiceManager into looper state
    binder_write(bs, readbuf, sizeof(uint32_t));
    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;
        // Enter the loop, repeatedly performing binder read and write operations
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); 
        if (res < 0) {
            break;
        }
        // Parse binder information
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            break;
        }
        if (res < 0) {
            break;
        }
    }
}

The loop reads and writes, and the func argument passed by the main() method points to svcmgr_handler.

binder_write() sends BC_ENTER_LOOPER to the binder driver through ioctl(); at this point bwr carries only write_buffer data, so the driver enters binder_thread_write(). Then the for loop runs ioctl() again; this time bwr carries only read_buffer data, so the driver enters binder_thread_read().

binder_write() -> ioctl(bs->fd, BINDER_WRITE_READ, &bwr) -> binder_ioctl() -> binder_ioctl_write_read() -> binder_thread_write()
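A sketch of binder_write() itself, modeled on the ServiceManager's binder.c, shows why only the write path runs: the command is packed into the write half of a binder_write_read while the read half is left empty.

// Sketch: a write-only BINDER_WRITE_READ round trip.
int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;

    bwr.write_size = len;                 // only the write half carries data
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;  // e.g. the BC_ENTER_LOOPER command
    bwr.read_size = 0;                    // empty read half: binder_thread_read() is not entered
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;

    return ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // dispatches to binder_ioctl_write_read()
}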

The binder_thread_write() method:

static int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread, binder_uintptr_t binder_buffer, size_t size, binder_size_t *consumed) {
  uint32_t cmd;
  void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
  void __user *ptr = buffer + *consumed;
  void __user *end = buffer + size;
  
  while (ptr < end && thread->return_error == BR_OK) {
    get_user(cmd, (uint32_t __user *)ptr); // Get the command
    switch (cmd) {
      case BC_ENTER_LOOPER:
          // Sets the thread's looper state
          thread->looper |= BINDER_LOOPER_STATE_ENTERED;
          break;
      // ... other BC_* commands ...
    }
  }
  return 0;
}

The cmd is taken from bwr.write_buffer, in this case BC_ENTER_LOOPER, and the looper state of the current thread is set to BINDER_LOOPER_STATE_ENTERED.

Step 4: Binder message processing


int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
        switch(cmd) {
        case BR_NOOP:  // No operation; nothing to do
            break;
        case BR_TRANSACTION_COMPLETE:
            break;
        case BR_INCREFS:
        case BR_ACQUIRE:
        case BR_RELEASE:
        case BR_DECREFS:
            ptr += sizeof(struct binder_ptr_cookie);
            break;
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            ...
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;
                // Initialize binder_io
                bio_init(&reply, rdata, sizeof(rdata), 4);
                // Parse and copy the information from txn into binder_io
                bio_init_from_txn(&msg, txn);
                // Hand the transaction to svcmgr_handler()
                res = func(bs, txn, &msg, &reply);
                // Send the reply back through the binder driver
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
        case BR_REPLY: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            ...
            binder_dump_txn(txn);
            if (bio) {
                bio_init_from_txn(bio, txn);
                bio = 0;
            }
            ptr += sizeof(*txn);
            r = 0;
            break;
        }
        case BR_DEAD_BINDER: {
            struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *) ptr;
            ptr += sizeof(binder_uintptr_t);
            // Binder death notification
            death->func(bs, death->ptr);
            break;
        }
        case BR_FAILED_REPLY:
            r = -1;
            break;
        case BR_DEAD_REPLY:
            r = -1;
            break;
        default:
            return -1;
        }
    }
    return r;
}


For a BR_TRANSACTION, binder_parse() first calls svcmgr_handler() and then executes binder_send_reply(). That method calls binder_write() to enter the binder driver, sending the BC_FREE_BUFFER and BC_REPLY command protocols so the driver can deliver the reply to the client. The data area of the reply holds objects of type HANDLE.
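A sketch of binder_send_reply(), modeled on the ServiceManager's binder.c (slightly simplified), shows how BC_FREE_BUFFER and BC_REPLY are chained into one write so that a single ioctl() both frees the finished transaction's buffer and delivers the reply:

// Sketch: free the received buffer and send the reply in one driver write.
void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       binder_uintptr_t buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;                  // BC_FREE_BUFFER
        binder_uintptr_t buffer;            // buffer of the finished transaction
        uint32_t cmd_reply;                 // BC_REPLY
        struct binder_transaction_data txn; // description of the reply payload
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY;
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        // Error path: report a status code instead of data
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offsets_size = 0;
        data.txn.data.ptr.buffer = (uintptr_t) &status;
        data.txn.data.ptr.offsets = 0;
    } else {
        // Normal path: point the driver at the binder_io reply data
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offsets_size = ((char *) reply->offs) - ((char *) reply->offs0);
        data.txn.data.ptr.buffer = (uintptr_t) reply->data0;
        data.txn.data.ptr.offsets = (uintptr_t) reply->offs0;
    }
    binder_write(bs, &data, sizeof(data));
}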

ServiceManager Startup summary:

  • Open the binder driver and map 128KB of memory by calling binder_open() in binder.c, which uses mmap();
  • Notify the binder driver to make this process its manager (daemon): binder_become_context_manager(); inside the driver, the ServiceManager's entity is created by binder_new_node();
  • Verify selinux permissions, which determine whether a process has the right to register or look up a given service;
  • Enter the loop state, waiting for Client requests: binder_loop();
  • Service registration is keyed by the service name; if the same name is already registered, the previous registration is removed before re-registering;
  • Death notification: when a process that uses binder dies, binder_release() is called, which leads to binder_node_release(); this process issues the death notification callbacks.

---------------------------------------- section line ----------------------------------------

Some understanding of Binder

Advantages of Binder

After looking at the Binder driver work above, let’s take a look at why Binder is a mainstay of Android process interaction.

Setting Binder aside for a moment, consider how data gets from the sender to the receiver in traditional IPC. Typically, the sender stores the prepared data in a buffer and enters the kernel through a system call. The kernel service routine allocates memory in kernel space and copies the data from the sender's buffer into the kernel buffer. When reading the data, the receiver also provides a buffer, and the kernel copies the data from the kernel buffer into the receiver's buffer and wakes up the receiving thread, completing one data transfer. This store-and-forward mechanism has two drawbacks. First, it is inefficient: it requires two copies, user space -> kernel space -> user space. Linux implements these two cross-space copies with copy_from_user() and copy_to_user(). If high memory is involved, the copy also requires temporarily setting up and tearing down page mappings, causing a performance penalty. Second, the buffer for the received data must be provided by the receiver, but the receiver does not know how large a buffer is enough, so it can only open up as much space as possible, or first call an API to receive the message header to obtain the message body size and then allocate an appropriately sized buffer for the body. Both approaches waste either space or time.
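The two copies are easy to see in code. Below is a hedged, purely illustrative pseudo-kernel sketch of that store-and-forward path; the names ipc_send/ipc_recv and the staging buffer are hypothetical, while copy_from_user()/copy_to_user() are the real kernel helpers:

#include <linux/errno.h>
#include <linux/uaccess.h>

/* Hypothetical kernel staging buffer for one message */
static char kbuf[4096];
static size_t klen;

/* Copy 1: sender's user-space buffer -> kernel buffer */
long ipc_send(const char __user *ubuf, size_t len)
{
    if (len > sizeof(kbuf))
        return -EINVAL;
    if (copy_from_user(kbuf, ubuf, len))
        return -EFAULT;
    klen = len;
    /* ...wake up the receiving thread... */
    return 0;
}

/* Copy 2: kernel buffer -> receiver's user-space buffer */
long ipc_recv(char __user *ubuf, size_t len)
{
    size_t n = len < klen ? len : klen;
    if (copy_to_user(ubuf, kbuf, n))
        return -EFAULT;
    return n;
}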

Binder takes a new approach: the Binder driver manages the data-receive buffer. Note that the Binder driver implements the mmap() system call, which is unusual for a character device: mmap() is normally used on file systems with physical storage media, and a character device like Binder, which communicates without physical media, would not necessarily support mmap(). Of course, the Binder driver is not using it to map between physical media and user space, but to create a buffer space for receiving data. Let's see how mmap() is used:

fd = open("/dev/binder", O_RDWR);

mmap(NULL, MAP_SIZE, PROT_READ, MAP_PRIVATE, fd, 0);

After this, the receiver of Binder data has a receive buffer of MAP_SIZE. The return value of mmap() is the address of the mapped memory in user space, but this space is managed by the driver, and the user does not need to, and cannot, write to it directly (the mapping type is PROT_READ, a read-only mapping).

Once mapped, the receive buffer can be used as a buffer pool to receive and store data. As mentioned earlier, the structure of a received packet is binder_transaction_data, but this is just the message header; the real payload lives in the memory pointed to by data.buffer. This memory does not need to be provided by the receiver: it comes from the mmap()-mapped buffer pool. When data is copied from the sender's buffer to the receiver's, the driver uses a best-match algorithm to find an appropriately sized slot in the buffer pool according to the size of the sent packet and copies the data into it from the sender's buffer. Note that the memory for the binder_transaction_data structure itself, and for the messages in Table 4, is still provided by the receiver, but those are fixed in size and small, so they cause the receiver no inconvenience. The mapped buffer pool needs to be large enough, because the receiver's thread pool may be processing several concurrent interactions at once, each of which takes destination storage from the pool; if the pool is exhausted, the consequences are unpredictable.

Where there is allocation, there must be release. After processing a packet, the receiver notifies the driver to free the memory area pointed to by data.buffer. As mentioned in the introduction to the Binder protocol, this is done with the BC_FREE_BUFFER command.

As you can see from the above, the driver does the most tedious work for the receiver: allocating and releasing payload buffers of varying and unpredictable sizes, while the receiver only needs to provide a buffer for the fixed-size message headers whose maximum space is predictable. In terms of efficiency, since the memory allocated by mmap() is mapped into the receiver's user space, the overall effect is equivalent to a direct copy of the payload from the sender's user space to the receiver's user space, eliminating the staging step in the kernel and roughly doubling performance. By the way, the Linux kernel actually has no function to copy directly from one user space to another; you must first copy_from_user() into kernel space and then copy_to_user() into the other user space. To achieve this user-space-to-user-space copy, the memory allocated by mmap() is mapped not only into the receiver process but also into the kernel space. So calling copy_from_user() to copy the data into that kernel space simultaneously places it in the receiver's user space. This is the "secret" of Binder's single copy.

Thread management in Binder

Binder communication is actually communication between threads located in different processes. Suppose process S is the Server and provides a Binder entity, and thread T1 in Client process C1 sends a request to process S through a reference to that Binder. S needs to start a thread T2 to process the request while thread T1 waits to receive the returned data. After T2 processes the request, it returns the result to T1, and T1 wakes up to collect it. In this process T2 acts as T1's proxy in process S, performing the remote task on T1's behalf, giving T1 the illusion of having crossed into S to execute a piece of code and then returned to C1. To make this crossing more realistic, the driver assigns some attributes of T1 to T2, in particular T1's priority (nice value), so that T2 finishes the task in a similar amount of time. Many sources use the misleading term "thread migration" to describe this phenomenon. First, a thread cannot possibly jump between processes; second, T2 has nothing in common with T1 apart from the priority, including identity, open files, stack size, signal handling, and private data.

For a Server process S, many Clients may initiate requests at the same time, so a thread pool is usually created to process the received requests concurrently. How is concurrent processing with a thread pool implemented? This depends on the specific IPC mechanism. Take sockets as an example: the Server-side socket is set to listening mode, and a dedicated thread uses it to listen for connection requests from Clients, blocking in accept(). That socket is like a hen that lays eggs: as soon as a request arrives from a Client it lays an egg, that is, creates a new socket returned by accept(). The listening thread then starts a worker thread from the pool and hands it the freshly laid egg; subsequent business is handled by that worker thread interacting with the Client over the new socket.

With no listening mode and no egg laying, how does Binder manage a thread pool? One simple way would be to create a bunch of threads, each of which reads Binder data with the BINDER_WRITE_READ command. These threads block on the wait queue the driver sets up for the Binder, and whenever data arrives from a Client the driver wakes one thread from the queue. This is simple and intuitive and dispenses with thread pool management, but creating a bunch of threads up front is somewhat wasteful. Instead, the Binder protocol introduces special commands and messages to help users manage the thread pool, including:

· BINDER_SET_MAX_THREADS // set the maximum number of threads in the pool

· BC_REGISTER_LOOPER // a driver-requested thread registers and enters the loop

· BC_ENTER_LOOPER // a thread enters the loop

· BC_EXIT_LOOPER // a thread exits the loop

· BR_SPAWN_LOOPER // the driver notifies that threads are running low and a new one should be created

Managing a thread pool first requires knowing how large the pool is; the application tells the driver through BINDER_SET_MAX_THREADS the maximum number of threads it may create. Thereafter, BC_REGISTER_LOOPER, BC_ENTER_LOOPER, and BC_EXIT_LOOPER inform the driver whenever a thread is created or enters or exits the main loop, so the driver can keep track of the current thread pool state. Whenever the driver returns a packet to a thread reading the Binder, it checks whether there are any idle threads left; if there are none, and the total number of threads has not exceeded the pool's maximum, it appends a BR_SPAWN_LOOPER message to the packet being read, telling the application that threads are running low and it should start some more, or the next request may not be answered in time. As soon as a new thread is started, the driver is told of the updated state through another BC_xxx_LOOPER command. As long as threads have not run out, there is always an idle thread waiting in the queue to handle requests in time.
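The following hedged sketch shows what such a loop might look like in a user-space process (modeled loosely on what libbinder's IPCThreadState/ProcessState do; binder_fd, binder_thread_loop, and spawn_binder_thread are hypothetical names, and real code walks every command in the read buffer rather than just the first):

#include <pthread.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

extern int binder_fd;  /* an already-opened /dev/binder fd (assumed) */
static void *binder_thread_loop(void *arg);

static void spawn_binder_thread(void)
{
    pthread_t t;
    /* threads requested by the driver register with BC_REGISTER_LOOPER
     * instead of BC_ENTER_LOOPER */
    pthread_create(&t, NULL, binder_thread_loop, (void *) (intptr_t) 1);
}

static void *binder_thread_loop(void *arg)
{
    int driver_spawned = (int) (intptr_t) arg;
    uint32_t cmd = driver_spawned ? BC_REGISTER_LOOPER : BC_ENTER_LOOPER;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    /* tell the driver this thread has joined the pool */
    memset(&bwr, 0, sizeof(bwr));
    bwr.write_buffer = (uintptr_t) &cmd;
    bwr.write_size = sizeof(cmd);
    ioctl(binder_fd, BINDER_WRITE_READ, &bwr);

    for (;;) {
        memset(&bwr, 0, sizeof(bwr));
        bwr.read_buffer = (uintptr_t) readbuf;
        bwr.read_size = sizeof(readbuf);
        if (ioctl(binder_fd, BINDER_WRITE_READ, &bwr) < 0)
            break;
        if (readbuf[0] == BR_SPAWN_LOOPER)
            spawn_binder_thread();  /* driver says threads are running low */
        /* ...dispatch BR_TRANSACTION etc. to the service handler... */
    }
    return NULL;
}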

The Binder driver also makes a small optimization for starting worker threads. When thread T1 of process P1 sends a request to process P2, the driver first checks whether T1 is itself currently handling a request from some thread of P2 that it has not yet finished (no reply sent). This usually happens when both processes have Binder entities and send requests to each other. If the driver finds such a thread in P2, say T2, it asks T2 to handle T1's request. Since T2 sent a request to T1 and has not yet received a reply, T2 must be (or soon will be) blocked reading the reply packet; it is better to have T2 do something useful than sit idle. Moreover, if T2 is not a thread in the thread pool, it even shares some of the thread pool's workload, reducing the pressure on the pool.

↓↓↓↓↓ The part below is unfinished

Binder Learning Notes 5

Service registration process in Binder

addService creates a binder_node in the process where the service resides and a binder_ref in the servicemanager process, where the desc of a binder_ref is unique within a given process:

  • the handle values of the binder_refs recorded in each process's binder_proc increment from 1;
  • the binder_ref with handle = 0 recorded in every process's binder_proc points to the service manager;
  • the binder_node of the same service can have different handle values in the binder_refs of different processes.

The media service registration process involves MediaPlayerService (as the Client process) and the ServiceManager (as the Server process). The communication flow is as follows:

Process analysis:

  1. The MediaPlayerService process calls ioctl() to send IPC data to the Binder driver. This can be understood as one binder_transaction (T1); the thread performing the current operation is a binder_thread (thread1), with T1->from_parent = NULL, T1->from = thread1, thread1->transaction_stack = T1. The IPC data contains:
  • the Binder protocol command, BC_TRANSACTION;
  • the handle, which equals 0;
  • the RPC code, ADD_SERVICE;
  • the RPC data, "media.player".
  2. The Binder driver receives the request, generates the BR_TRANSACTION command, and selects the target thread to process it, namely the ServiceManager's binder thread (thread2), with T1->to_parent = NULL, T1->to_thread = thread2. The entire binder_transaction data (named T2) is inserted into the todo queue of the target thread;

  3. When thread2 of the ServiceManager receives T2, it calls the service registration function to register the service "media.player" in the service directory. Once the service is registered, the IPC reply data (BC_REPLY) is generated, with T2->from_parent = T1, T2->from = thread2, thread2->transaction_stack = T2.

  4. The Binder driver receives the reply and generates the BR_REPLY command, with T2->to_parent = T1, T2->to_thread = thread1, thread1->transaction_stack = T2. Once MediaPlayerService receives this command, the registration is complete and the service can be used normally.

In the whole process, BC_TRANSACTION and BR_TRANSACTION together form one complete transaction, and BC_REPLY and BR_REPLY form another complete transaction.