Preface

As we all know, Binder is the main interprocess communication (IPC) mechanism in Android. More specifically, many articles call it the Binder driver. Why is it a driver, and what is a driver in the first place? The goal of this article is to give the reader a broad understanding of the Binder system, so it stays at a high level; the details will be analyzed in subsequent articles.

What exactly is a Binder

The Android kernel is Linux, and each process has its own virtual address space. On a 32-bit system the address space is at most 4 GB, of which 3 GB is user space and 1 GB is kernel space. Each process's user space is independent of the others, while the kernel space is the same for all processes and can be shared, as shown below

Linux drivers run in kernel space. In the narrow sense they are the intermediate programs the system uses to control hardware, but ultimately a driver is just a piece of code, so its implementation does not have to be hardware related. Binder registers itself as a misc driver, operates no hardware, and runs in the kernel, so it can act as a bridge between different processes and implement IPC.

A defining feature of Linux is that everything is a file, and drivers are no exception: every driver is exposed as a node under the /dev directory of the file system. Binder's node is /dev/binder, so user space can use Binder through ordinary file-related system calls. Let's take a cursory look at the code.

device_initcall registers the driver's initialization function, binder_init, which the kernel calls during startup.

binder_init calls misc_register to register a misc driver named "binder" and supplies a function table (file_operations) that maps, for example, binder_open to the open system call; binder_open can then be reached from user space with open("/dev/binder").

drivers/android/binder.c

// Driver function mapping
static const struct file_operations binder_fops = {
	.owner = THIS_MODULE,
	.poll = binder_poll,
	.unlocked_ioctl = binder_ioctl,
	.compat_ioctl = binder_ioctl,
	.mmap = binder_mmap,
	.open = binder_open,
	.flush = binder_flush,
	.release = binder_release,
};

// Register the driver parameter structure
static struct miscdevice binder_miscdev = {
	.minor = MISC_DYNAMIC_MINOR,
    // Driver name
	.name = "binder",
	.fops = &binder_fops
};

static int binder_open(struct inode *nodp, struct file *filp) { ... }

static int binder_mmap(struct file *filp, struct vm_area_struct *vma) { ... }

static int __init binder_init(void)
{
    int ret;
    // Create a single-threaded workqueue named "binder"
    binder_deferred_workqueue = create_singlethread_workqueue("binder");
    if (!binder_deferred_workqueue)
        return -ENOMEM;
    ...
    // Register the driver; a misc device is really just a special character device
    ret = misc_register(&binder_miscdev);
    ...
    return ret;
}
// Driver registration
device_initcall(binder_init);
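To make the file_operations mapping above concrete, here is a minimal user-space sketch (my own illustration, not AOSP code; error handling and the real transaction protocol are omitted, and the uapi header path may differ between kernels). It only shows how open, ioctl and mmap on /dev/binder land in binder_open, binder_ioctl and binder_mmap:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/android/binder.h>   // uapi header; exact path is platform dependent

int main(void)
{
    // open() reaches the driver's binder_open
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0)
        return 1;

    // ioctl() reaches binder_ioctl; BINDER_VERSION is the simplest command
    struct binder_version vers;
    if (ioctl(fd, BINDER_VERSION, &vers) == 0)
        printf("binder protocol version: %d\n", vers.protocol_version);

    // mmap() reaches binder_mmap and sets up the receive buffer mapping
    void *map = mmap(NULL, 128 * 1024, PROT_READ, MAP_PRIVATE, fd, 0);
    return map == MAP_FAILED ? 1 : 0;
}

Real clients never call these directly; as described later, ProcessState and IPCThreadState do it for them.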

A brief look at Binder's communication process

How does one process communicate with another through Binder? The simplest flow is as follows

  1. The receiving process starts a dedicated thread and registers it with the Binder driver (the kernel) through a system call (the driver creates and saves a binder_proc); the driver also creates a task queue for the receiving process (binder_proc.todo).
  2. The receiving thread enters an infinite loop and accesses the Binder driver through a system call: if there is a task on the corresponding queue it returns and handles it, otherwise it blocks until a new task is enqueued (see the sketch after this list).
  3. The sender also accesses the target process through a system call, drops its task onto the target process's queue, and then wakes up a dormant thread of the target process to handle it; with that, the communication is complete.
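Step 2 boils down to a loop like the following (a simplified sketch modeled on how servicemanager and IPCThreadState drive the BINDER_WRITE_READ ioctl; the real code handles many more BR_* return commands and buffer management):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h>

// Simplified receiving loop: block in the driver until another process
// queues work for us, then handle whatever came back in read_buf.
static void binder_loop_sketch(int fd)
{
    struct binder_write_read bwr;
    uint32_t enter = BC_ENTER_LOOPER;   // tell the driver this thread is a looper
    char read_buf[256];

    memset(&bwr, 0, sizeof(bwr));
    bwr.write_buffer = (uintptr_t)&enter;
    bwr.write_size = sizeof(enter);
    ioctl(fd, BINDER_WRITE_READ, &bwr); // register the looper thread

    for (;;) {
        memset(&bwr, 0, sizeof(bwr));
        bwr.read_buffer = (uintptr_t)read_buf;
        bwr.read_size = sizeof(read_buf);
        // Blocks inside the driver until a task shows up on binder_proc.todo
        if (ioctl(fd, BINDER_WRITE_READ, &bwr) < 0)
            break;
        // read_buf now starts with a BR_* command (e.g. BR_TRANSACTION)
        // followed by a binder_transaction_data describing the request.
    }
}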

The binder_proc structure in the Binder driver represents a process, binder_thread represents a thread, and binder_proc.todo is the queue of tasks from other processes that a process needs to process.

struct binder_proc {
	// Node in the linked list that stores all binder_procs
	struct hlist_node proc_node;
	// Red-black tree of the binder_threads in this process
	struct rb_root threads;
	// Red-black tree of the Binder entities (binder_nodes) in this process
	struct rb_root nodes;
	...
}

Binder's one copy

Binder's efficiency comes from the fact that each transfer needs only one copy. Many blogs have already said this; what the one copy is, how it is achieved, and where it happens is explained here as simply as possible.

As mentioned above, different processes communicate through the Binder driver in the kernel, but user space and kernel space are isolated and cannot access each other directly. Data is passed between them with the kernel functions copy_from_user and copy_to_user, which copy data from user/kernel memory into kernel/user memory. With this conventional approach, a single one-way transfer between two processes requires two copies, as shown in the figure below.
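For contrast, a conventional kernel-relayed transfer looks roughly like this (a conceptual sketch only, not code from any real IPC mechanism): the payload crosses the user/kernel boundary twice.

#include <linux/slab.h>      /* kmalloc / kfree */
#include <linux/uaccess.h>   /* copy_from_user / copy_to_user */

// Conceptual sketch only: the data is copied twice,
// once into a kernel buffer and once back out to the receiver.
static int two_copy_send_sketch(const void __user *src, size_t size,
                                void __user *dst)
{
    void *kbuf = kmalloc(size, GFP_KERNEL);
    if (kbuf == NULL)
        return -ENOMEM;
    if (copy_from_user(kbuf, src, size)) {  /* copy 1: sender -> kernel */
        kfree(kbuf);
        return -EFAULT;
    }
    if (copy_to_user(dst, kbuf, size)) {    /* copy 2: kernel -> receiver */
        kfree(kbuf);
        return -EFAULT;
    }
    kfree(kbuf);
    return 0;
}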

Binder needs only one copy per transfer because it uses memory mapping: a piece of physical memory (a number of physical pages) is mapped into both the user space and the kernel space of the receiving end, so the two share the same data.

When the sender wants to send data to the receiver, the kernel copies the data with copy_from_user directly into the receiver's kernel-space mapping area. Because the underlying physical memory is shared, the receiving process's user-space mapping sees the same data, as shown in the figure below.
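The Binder version of the same transfer can be sketched like this (again only a conceptual sketch that would live inside drivers/android/binder.c; the real logic is in binder_transaction and binder_alloc_buf and is far more involved). There is a single copy_from_user, and its destination is a buffer the receiver has already mapped:

#include <linux/uaccess.h>   /* copy_from_user */

// Conceptual sketch only -- not the actual driver code.
// 'target' is the receiving process; its mmap'ed area is shared between
// the kernel mapping (proc->buffer) and the receiver's user-space mapping.
static int one_copy_send_sketch(struct binder_proc *target,
                                const void __user *user_data, size_t size)
{
    // Carve a binder_buffer for this transaction out of the target's
    // mapped area (more physical pages are allocated here if needed)
    struct binder_buffer *buf = binder_alloc_buf(target, size, 0, 0);
    if (buf == NULL)
        return -ENOMEM;

    // The single copy: sender's user space -> kernel mapping of the target
    if (copy_from_user(buf->data, user_data, size))
        return -EFAULT;

    // No copy_to_user is needed: the receiver already sees the same physical
    // pages at buf->data + target->user_buffer_offset.
    return 0;
}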

Code implementation part

When user space calls the mmap system call on the Binder file descriptor, the driver's binder_mmap function is invoked to set up the mapping. This part of the code is fairly hard to read.

drivers/android/binder.c

binder_mmap records the process's memory-mapping information (the user-space mapped address, the kernel-space mapped address, and so on) and creates the first binder_buffer.

static int binder_mmap(struct file *filp, struct vm_area_struct *vma)
{
    int ret;
    // Kernel virtual space
    struct vm_struct *area;
    struct binder_proc *proc = filp->private_data;
    const char *failure_string;
    // Each time a Binder transfers data, a binder_buffer is allocated from the Binder's memory cache to store the transferred data
    struct binder_buffer *buffer;

    if (proc->tsk != current)
            return -EINVAL;
    // Ensure that the memory map size does not exceed 4 MB
    if ((vma->vm_end - vma->vm_start) > SZ_4M)
            vma->vm_end = vma->vm_start + SZ_4M;
    ...
    // Use VM_IOREMAP to reserve a contiguous kernel virtual range the same size as the user process's virtual range
    // vma is the virtual memory area structure passed in from user space
    area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);
    if (area == NULL) {
            ret = -ENOMEM;
            failure_string = "get_vm_area";
            goto err_get_vm_area_failed;
    }
    // The address pointing to the kernel virtual space
    proc->buffer = area->addr;
    // User virtual space start address - Kernel virtual space start address
    proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer;
    ...
    // Allocate an array of pointers to physical pages, one entry per page covered by the vma
    proc->pages = kzalloc(sizeof(proc->pages[0]) * ((vma->vm_end - vma->vm_start) / PAGE_SIZE), GFP_KERNEL);
    if (proc->pages == NULL) {
            ret = -ENOMEM;
            failure_string = "alloc page array";
            goto err_alloc_pages_failed;
    }
    proc->buffer_size = vma->vm_end - vma->vm_start;

    vma->vm_ops = &binder_vm_ops;
    vma->vm_private_data = proc;
    // Allocate physical pages that map to both kernel space and process space. Allocate 1 physical page first
    if (binder_update_page_range(proc, 1, proc->buffer, proc->buffer + PAGE_SIZE, vma)) {
            ret = -ENOMEM;
            failure_string = "alloc small buf";
            goto err_alloc_small_buf_failed;
    }
    buffer = proc->buffer;
    // Insert the linked list
    INIT_LIST_HEAD(&proc->buffers);
    list_add(&buffer->entry, &proc->buffers);
    buffer->free = 1;
    binder_insert_free_buffer(proc, buffer);
    // Oneway asynchronous available size is half of the total space
    proc->free_async_space = proc->buffer_size / 2;
    barrier();
    proc->files = get_files_struct(current);
    proc->vma = vma;
    proc->vma_vm_mm = vma->vm_mm;

    /* pr_info("binder_mmap: %d %lx-%lx maps %p\n", proc->pid, vma->vm_start, vma->vm_end, proc->buffer); */
    return 0;
}

The binder_update_page_range function allocates physical pages for the mapped addresses. Here a single physical page (4 KB) is allocated first and then mapped to both the user-space address and the kernel-space address.

static int binder_update_page_range(struct binder_proc *proc, int allocate,
				    void *start, void *end,
				    struct vm_area_struct *vma)
{
    // Start address of the kernel mapping area
    void *page_addr;
    // Start address of user mapping area
    unsigned long user_page_addr;
    struct page **page;
    // Memory structure
    struct mm_struct *mm;
    
    if (end <= start)
        return 0;
    ...
    // Loop over the range: allocate a physical page for each address and map it into both user space and kernel space
    for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
        int ret;
        page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];

        BUG_ON(*page);
        // Allocate one page of physical memory
        *page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
        if (*page == NULL) {
                pr_err("%d: binder_alloc_buf failed for page at %p\n",
                        proc->pid, page_addr);
                goto err_alloc_page_failed;
        }
        // Physical memory is mapped to the kernel virtual space
        ret = map_kernel_range_noflush((unsigned long)page_addr,
                                PAGE_SIZE, PAGE_KERNEL, page);
        flush_cache_vmap((unsigned long)page_addr,
                         (unsigned long)page_addr + PAGE_SIZE);
        // User-space address = kernel address + offset
        user_page_addr =
                (uintptr_t)page_addr + proc->user_buffer_offset;
        // The physical page is mapped into the user virtual space
        ret = vm_insert_page(vma, user_page_addr, page[0]);
    }
}

The binder_update_page_range call in binder_mmap maps only one physical page up front. When Binder actually starts communicating, more physical pages are allocated on demand through binder_alloc_buf.

Binder Kit Architecture

The Binder driver in the kernel already provides the IPC capability. However, it is convenient to wrap the driver calls in the framework native layer so that framework developers can use them more easily; this wrapper is the native Binder. Since the framework native layer is implemented in C/C++, application developers also need a more convenient Java-layer wrapper, which is the Java Binder. Finally, to reduce boilerplate and standardize interfaces, AIDL was built on top of the Java Binder. With these layers of encapsulation, Binder is virtually invisible to AIDL users.

Here’s an architecture diagram.

Native layer

BpBinder represents a proxy for the server-side Binder. It has an internal member mHandle, which is the driver-layer handle of the server Binder. The client passes this handle when calling BpBinder::transact, the driver layer routes the call to the server's BBinder, and the server ultimately executes BBinder::onTransact. The meaning of each transaction is identified by a code agreed on by both sides.

As mentioned earlier, a process that wants to communicate through Binder must register itself with the driver, and each communication needs a thread that reads and writes the Binder driver in an infinite loop. In the driver layer a process corresponds to a binder_proc and a thread to a binder_thread; in the framework native layer a process corresponds to a ProcessState and a thread to an IPCThreadState, and BpBinder::transact initiates communication through IPCThreadState::transact, which talks to the driver.

In fact, every application process in Android opens the Binder driver (registers with it). When Zygote initializes a process, onZygoteInit in app_main.cpp is called, which creates the process's ProcessState instance, opens the Binder driver, and sets up the memory mapping; the driver in turn creates and stores a binder_proc instance for the process. I'll borrow a picture to illustrate it.

Java layer

The Java layer wraps the corresponding native-layer classes: BBinder corresponds to Binder and BpBinder corresponds to BinderProxy. The Java layer ultimately calls down into the corresponding native functions.

AIDL

The code generated by AIDL further encapsulates Binder. <Interface>.Stub corresponds to the server-side Binder, while <Interface>.Stub.Proxy identifies the client and internally holds an mRemote instance (a BinderProxy). AIDL generates a TRANSACTION_<method name> code constant for each interface method, and both sides use these codes to dispatch calls and parse parameters. In other words, AIDL wraps BinderProxy.transact and Binder.onTransact so that users do not have to define codes and parameter parsing for every communication themselves.

Afterword

This article is intended to give readers who are not familiar with Binder a general overview of the system. The following articles will analyze the entire IPC process starting from an AIDL remote service, so this article serves as the groundwork for a deeper understanding of the Binder system.