1 Overview

When a user presses or taps the screen, the touch screen's driver raises an interrupt. The Linux kernel packages the touch events generated by the hardware as input events and stores them in device nodes under the /dev/input/ directory. From there, the Android system needs to solve the following problems to implement the dispatch and handling of the whole touch event:

  1. How are touch events read from the device?
  2. How are touch events dispatched after they are read?
  3. When dispatching an event, how is the target application window that should receive it found?
  4. Once the target window is found, how is the event delivered to it?
  5. How is the event handled inside the target application window?

Below, we analyze and answer these questions one by one to build a full picture of Android's touch event handling mechanism.

2 Reading touch events

All Android input devices generate device nodes in the /dev/input directory. Whenever an input event is generated, the Linux kernel writes the event to the corresponding node. The insertion and removal of external input devices (mouse, keyboard, etc.) also causes these nodes to be created and deleted. The system encapsulates an object called EventHub, which uses Linux's inotify and epoll mechanisms to listen for the device event nodes in the /dev/input directory; EventHub's getEvents interface is used to wait for and retrieve these events.
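To make this concrete, here is a minimal standalone sketch of the inotify + epoll pattern EventHub is built on. It is an illustration of the underlying Linux APIs only, not AOSP code; the real EventHub adds device scanning, input_event parsing and much more:

// Minimal sketch of the inotify + epoll pattern used by EventHub (illustration
// only, not AOSP code): watch /dev/input for device nodes coming and going, and
// wait on the inotify fd (plus, in a real implementation, each device fd) with
// a single epoll instance.
#include <sys/epoll.h>
#include <sys/inotify.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int epollFd = epoll_create1(EPOLL_CLOEXEC);

    // inotify reports device nodes being created/deleted under /dev/input
    int inotifyFd = inotify_init1(IN_CLOEXEC);
    inotify_add_watch(inotifyFd, "/dev/input", IN_CREATE | IN_DELETE);

    epoll_event item = {};
    item.events = EPOLLIN;
    item.data.fd = inotifyFd;
    epoll_ctl(epollFd, EPOLL_CTL_ADD, inotifyFd, &item);

    // A real EventHub also adds an fd for every /dev/input/eventN node and reads
    // struct input_event records from whichever fd epoll reports as readable.
    epoll_event pending[16];
    for (;;) {
        int n = epoll_wait(epollFd, pending, 16, -1); // block until something happens
        for (int i = 0; i < n; i++) {
            if (pending[i].data.fd == inotifyFd) {
                char buf[4096];
                read(inotifyFd, buf, sizeof(buf)); // drain the inotify records
                printf("device node added or removed under /dev/input\n");
                // ... rescan the directory, open/close device fds ...
            }
        }
    }
}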

When the system starts the system_server process, the InputManagerService core service is created and started. A native InputManager object is created through a JNI call, which in turn creates an InputReader object. The start method then creates a looping InputThread worker thread for the InputReader, whose job is to read input events through EventHub's getEvents interface. The detailed process is shown in the figure below:

The brief code flow is as follows:

/*frameworks/base/services/java/com/android/server/SystemServer.java*/
private void startOtherServices(@NonNull TimingsTraceAndSlog t) {
    ...
    // 1. Create an InputManagerService object
    inputManager = new InputManagerService(context);
    ...
    // 2. start() starts the related input threads
    inputManager.start();
    ...
}

/*frameworks/base/services/core/java/com/android/server/input/InputManagerService.java*/
public InputManagerService(Context context) {
    ...
    // 1. JNI call into the native layer to initialize the InputManager
    mPtr = nativeInit(this, mContext, mHandler.getLooper().getQueue());
    ...
}

public void start() {
    // 2. JNI call into the native layer to start the InputManager worker threads
    nativeStart(mPtr);
    ...
}

/*frameworks/base/services/core/jni/com_android_server_input_InputManagerService.cpp*/
static jlong nativeInit(JNIEnv* env, jclass /* clazz */,
        jobject serviceObj, jobject contextObj, jobject messageQueueObj) {
    ...
    // Create the NativeInputManager object
    NativeInputManager* im = new NativeInputManager(contextObj, serviceObj,
            messageQueue->getLooper());
    ...
}

NativeInputManager::NativeInputManager(jobject contextObj,
        jobject serviceObj, const sp<Looper>& looper) :
        mLooper(looper), mInteractive(true) {
    ...
    // 1. Create an InputManager object
    mInputManager = new InputManager(this, this);
    ...
}

static void nativeStart(JNIEnv* env, jclass /* clazz */, jlong ptr) {
    ...
    // 2. Call InputManager's start function
    status_t result = im->getInputManager()->start();
    ...
}

/*frameworks/native/services/inputflinger/InputManager.cpp*/
InputManager::InputManager(
        const sp<InputReaderPolicyInterface>& readerPolicy,
        const sp<InputDispatcherPolicyInterface>& dispatcherPolicy) {
    // Create the InputDispatcher touch event dispatch object
    mDispatcher = createInputDispatcher(dispatcherPolicy);
    mClassifier = new InputClassifier(mDispatcher);
    // createInputReader creates the InputReader event reader object, which holds a
    // reference to the InputDispatcher (via the classifier) so it can hand events
    // over for dispatch after they are read
    mReader = createInputReader(readerPolicy, mClassifier);
}

status_t InputManager::start() {
    // 1. Start the InputDispatcher worker thread
    status_t result = mDispatcher->start();
    ...
    // 2. Start the InputReader worker thread
    result = mReader->start();
    ...
    return OK;
}

/*frameworks/native/services/inputflinger/reader/InputReader.cpp*/
InputReader::InputReader(std::shared_ptr<EventHubInterface> eventHub,
                         const sp<InputReaderPolicyInterface>& policy,
                         const sp<InputListenerInterface>& listener)
      : mContext(this),
        // Initialize the EventHub object
        mEventHub(eventHub),
        mPolicy(policy),
        mGlobalMetaState(0),
        mGeneration(1),
        mNextInputDeviceId(END_RESERVED_ID),
        mDisableVirtualKeysTimeout(LLONG_MIN),
        mNextTimeout(LLONG_MAX),
        mConfigurationChangesToRefresh(0) {
    // Hold a reference to InputDispatcher
    mQueuedListener = new QueuedInputListener(listener);
    ...
}

status_t InputReader::start() {
    if (mThread) {
        return ALREADY_EXISTS;
    }
    // Create an InputThread named "InputReader" whose loop worker thread executes loopOnce()
    mThread = std::make_unique<InputThread>(
            "InputReader", [this]() { loopOnce(); }, [this]() { mEventHub->wake(); });
  return OK;
}

void InputReader::loopOnce() {
    int32_t oldGeneration;
    int32_t timeoutMillis;
    bool inputDevicesChanged = false;
    std::vector<InputDeviceInfo> inputDevices;
    { // acquire lock
        ...
        // 1. Read touch events from EventHub via getEvents (blocks until events arrive or timeout)
        size_t count = mEventHub->getEvents(timeoutMillis, mEventBuffer, EVENT_BUFFER_SIZE);
        ...
        if (count) {
            // 2. Process the events that were read
            processEventsLocked(mEventBuffer, count);
        }
        ...
    } // release lock
    ...
    // 3. mQueuedListener wraps the InputDispatcher; flush() notifies it to dispatch the events
    mQueuedListener->flush();
}

Through the above process, input events are read and preliminarily packaged as RawEvent structures in InputReader::processEventsLocked, and the InputDispatcher is finally notified to dispatch them. The next section analyzes event dispatch.
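For reference, the RawEvent produced here is a small structure mirroring the kernel's struct input_event plus a device id. A simplified sketch of its shape, based on AOSP's EventHub.h (the exact field set varies between Android versions):

// Simplified sketch of the RawEvent read from EventHub (based on EventHub.h;
// details vary by Android version).
#include <cstdint>

typedef int64_t nsecs_t;   // nanosecond timestamp, as in AOSP's utils/Timers.h

struct RawEvent {
    nsecs_t when;      // kernel timestamp of the event
    int32_t deviceId;  // which /dev/input device produced it
    int32_t type;      // e.g. EV_ABS, EV_KEY, EV_SYN
    int32_t code;      // e.g. ABS_MT_POSITION_X
    int32_t value;     // axis value or key state
};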

3 Dispatching touch events

As the code flow analysis in the previous section shows, creating an InputManager object creates not only the event reader object InputReader but also the event dispatcher object InputDispatcher. After reading events, InputReader hands them over to InputDispatcher to run the event dispatch logic, and InputDispatcher has its own independent worker thread as well. The advantage of this design is that it follows the single-responsibility principle: events are read and dispatched each on their own thread, which is more efficient and avoids losing events. The overall architecture model is shown in the diagram below:
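At its core this is a producer/consumer model: the reader thread pushes events into an inbound queue and wakes the dispatcher thread, which drains the queue in its own loop. A toy sketch of the pattern (illustration only, not AOSP code):

// Toy model of the InputReader -> InputDispatcher threading design (illustration
// only): the reader thread produces events into an inbound queue and wakes the
// dispatcher thread, which consumes them on its own loop.
#include <condition_variable>
#include <deque>
#include <mutex>
#include <thread>
#include <cstdio>

struct Event { int code; };

std::mutex gLock;
std::condition_variable gWake;
std::deque<Event> gInboundQueue;

void readerLoop() {
    for (int i = 0; i < 3; i++) {          // pretend we read 3 events from EventHub
        {
            std::lock_guard<std::mutex> l(gLock);
            gInboundQueue.push_back({i});  // enqueueInboundEventLocked analogue
        }
        gWake.notify_one();                // mLooper->wake() analogue
    }
}

void dispatcherLoop() {
    for (int handled = 0; handled < 3;) {  // dispatchOnce analogue
        std::unique_lock<std::mutex> l(gLock);
        gWake.wait(l, [] { return !gInboundQueue.empty(); });
        Event e = gInboundQueue.front();
        gInboundQueue.pop_front();
        l.unlock();
        printf("dispatching event %d\n", e.code);
        handled++;
    }
}

int main() {
    std::thread dispatcher(dispatcherLoop);
    std::thread reader(readerLoop);
    reader.join();
    dispatcher.join();
}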

Following the code: after the touch events have been read, the last line of InputReader::loopOnce calls mQueuedListener->flush(). The mQueuedListener essentially stands for the InputDispatcher, so calling flush() notifies the InputDispatcher to dispatch the events. Like InputReader, InputDispatcher internally has a looping InputThread worker thread, named "InputDispatcher", which is woken up to handle touch events when they arrive. The process is as follows:

The simplified code flow is as follows:

/*frameworks/native/services/inputflinger/InputListener.cpp*/
void QueuedInputListener::flush() {
    size_t count = mArgsQueue.size();
    for (size_t i = 0; i < count; i++) {
        NotifyArgs* args = mArgsQueue[i];
        // Trigger the notify call
        args->notify(mInnerListener);
        delete args;
    }
    mArgsQueue.clear();
}

void NotifyMotionArgs::notify(const sp<InputListenerInterface>& listener) const {
    // The listener here is the InputDispatcher; this invokes InputDispatcher::notifyMotion
    listener->notifyMotion(this);
}

/*frameworks/native/services/inputflinger/dispatcher/InputDispatcher.cpp*/
void InputDispatcher::notifyMotion(const NotifyMotionArgs* args) {
    ...
    // 1. Business logic can be added here to do something before the event is dispatched
    mPolicy->interceptMotionBeforeQueueing(args->displayId, args->eventTime,
            /*byref*/ policyFlags);
    ...
    bool needWake;
    { // acquire lock
        ...
        if (shouldSendMotionToInputFilterLocked(args)) {
            ...
            // 2. filterInputEvent can filter out touch events so they are not dispatched
            if (!mPolicy->filterInputEvent(&event, policyFlags)) {
                return; // event was consumed by the filter
            }
        }
        ...
        // 3. Put the touch event into the inbound queue ("iq") for dispatch
        needWake = enqueueInboundEventLocked(newEntry);
    } // release lock

    if (needWake) {
        // 4. Wake up the looper worker thread to trigger a round of dispatchOnce
        mLooper->wake();
    }
}

bool InputDispatcher::enqueueInboundEventLocked(EventEntry* entry) {
    bool needWake = mInboundQueue.empty();
    mInboundQueue.push_back(entry);
    // Update the "iq" systrace counter for mInboundQueue
    traceInboundQueueLengthLocked();
    ...
    return needWake;
}

void InputDispatcher::traceInboundQueueLengthLocked() {
    if (ATRACE_ENABLED()) {
        // mInboundQueue is traced under the systrace tag "iq"
        ATRACE_INT("iq", mInboundQueue.size());
    }
}

void InputDispatcher::dispatchOnce() {
    nsecs_t nextWakeupTime = LONG_LONG_MAX;
    { // acquire lock
        ...
        if (!haveCommandsLocked()) {
            // 1. The event dispatch logic is encapsulated in dispatchOnceInnerLocked
            dispatchOnceInnerLocked(&nextWakeupTime);
        }
        ...
    } // release lock
    ...
    // 2. After the dispatch logic has run, the looper worker thread goes back to sleep
    mLooper->pollOnce(timeoutMillis);
}

void InputDispatcher::dispatchOnceInnerLocked(nsecs_t* nextWakeupTime) {
    ...
    switch (mPendingEvent->type) {
        ...
        case EventEntry::Type::MOTION: {
            ...
            // Taking motion events as the example: the dispatch logic is in dispatchMotionLocked
            done = dispatchMotionLocked(currentTime, typedEntry, &dropReason, nextWakeupTime);
            break;
        }
    }
    ...
}

bool InputDispatcher::dispatchMotionLocked(nsecs_t currentTime, MotionEntry* entry,
        DropReason* dropReason, nsecs_t* nextWakeupTime) {
    ATRACE_CALL();
    ...
    if (isPointerEvent) {
        // 1. findTouchedWindowTargetsLocked finds the target window that should
        // receive and handle the touch event
        injectionResult =
                findTouchedWindowTargetsLocked(currentTime, *entry, inputTargets, nextWakeupTime,
                                               &conflictingPointerActions);
    } else {
        ...
    }
    ...
    // 2. Dispatch the touch event to the target application window
    dispatchEventLocked(currentTime, entry, inputTargets);
    return true;
}

As can be seen from the above code, the core logic of handling touch events is divided into two steps:

  1. First, find the target application window for the touch event via findTouchedWindowTargetsLocked;
  2. Then send the touch event to that target window via dispatchEventLocked.

The following two sections will analyze each of these steps.

4 Finding the target window of a touch event

On the Android system's primary display there are always multiple windows visible at the same time, such as the status bar, the navigation bar, the application window, and an application Dialog window. They can be listed with adb shell dumpsys SurfaceFlinger:

Display 19260578257609346 HWC layers:
-----------------------------------------------------------------------------------------------------------------------------------
 Layer name
           Z |  Window Type | Layer Class |  Comp Type |  Transform |   Disp Frame (LTRB) |          Source Crop (LTRB) | Frame Rate (Explicit) [Focused]
-----------------------------------------------------------------------------------------------------------------------------------
 com.android.systemui.ImageWallpaper#0
  rel      0 |         2013 |           0 |     DEVICE |          0 |    0    0 1080 2400 |    0.0    0.0 1080.0 2400.0 | []
 cn.nubia.launcher/com.android.launcher3.Launcher#0
  rel      0 |            1 |           0 |     DEVICE |          0 |    0    0 1080 2400 |    0.0    0.0 1080.0 2400.0 | [*]
 FloatBar#0
  rel      0 |         2038 |           0 |     DEVICE |          0 | 1065  423 1080  623 |    0.0    0.0   15.0  200.0 | []
 StatusBar#0
  rel      0 |         2000 |           0 |     DEVICE |          0 |    0    0 1080  111 |    0.0    0.0 1080.0  111.0 | []
 NavigationBar0#0
  rel      0 |         2019 |           0 |     DEVICE |          0 |    0 2280 1080 2400 |    0.0    0.0 1080.0  120.0 | []
-----------------------------------------------------------------------------------------------------------------------------------

So how is the target window that should handle the touch event found among all these windows? Continuing from the previous section, let's look at the simplified logic of InputDispatcher::findTouchedWindowTargetsLocked:

/*frameworks/native/services/inputflinger/dispatcher/InputDispatcher.cpp*/
int32_t InputDispatcher::findTouchedWindowTargetsLocked(nsecs_t currentTime,
                                                        const MotionEntry& entry,
                                                        std::vector<InputTarget>& inputTargets,
                                                        nsecs_t* nextWakeupTime,
                                                        bool* outConflictingPointerActions) {
    ATRACE_CALL();
    ...
    // Read the id of the display the touch event belongs to
    int32_t displayId = entry.displayId;
    ...
    // Only ACTION_DOWN marks the start of a new touch gesture and requires finding a new
    // target window; subsequent ACTION_MOVE or ACTION_UP events go straight to the window
    // that was already found
    bool newGesture = (maskedAction == AMOTION_EVENT_ACTION_DOWN ||
                       maskedAction == AMOTION_EVENT_ACTION_SCROLL || isHoverAction);
    ...
    if (newGesture || (isSplit && maskedAction == AMOTION_EVENT_ACTION_POINTER_DOWN)) {
        // Read the x and y positions of the touch events
        int32_t x;
        int32_t y;
        if (isFromMouse) {
            ...
        } else {
            x = int32_t(entry.pointerCoords[pointerIndex].getAxisValue(AMOTION_EVENT_AXIS_X));
            y = int32_t(entry.pointerCoords[pointerIndex].getAxisValue(AMOTION_EVENT_AXIS_Y));
        }
        // Based on the touch event's displayId, x/y coordinates and other attributes,
        // call findTouchedWindowAtLocked to find the target window
        sp<InputWindowHandle> newTouchedWindowHandle =
                findTouchedWindowAtLocked(displayId, x, y, &tempTouchState,
                                          isDown /*addOutsideTargets*/, true /*addPortalWindows*/);
        ...
    } else {
        ...
    }
    ...
}

sp<InputWindowHandle> InputDispatcher::findTouchedWindowAtLocked(int32_t displayId, int32_t x,
                                                                 int32_t y, TouchState* touchState,
																 bool isMirrorInject,/*Nubia add for mirror*/
                                                                 bool addOutsideTargets,
                                                                 bool addPortalWindows) {
    ...
    // 1. Based on the displayId passed in, call getWindowHandlesLocked to get the
    // windowHandles collection of all windows on that display
    const std::vector<sp<InputWindowHandle>> windowHandles = getWindowHandlesLocked(displayId);
    // 2. Iterate over windowHandles to find the target window
    for (const sp<InputWindowHandle>& windowHandle : windowHandles) {
        const InputWindowInfo* windowInfo = windowHandle->getInfo();
        if (windowInfo->displayId == displayId) {
            int32_t flags = windowInfo->layoutParamsFlags;
            if (windowInfo->visible) {
                if (!(flags & InputWindowInfo::FLAG_NOT_TOUCHABLE)) {
                    ...
                    // 3. Match the event's x/y coordinates against the window's
                    // visible touchable region to find the target window
                    if (isTouchModal || windowInfo->touchableRegionContainsPoint(x, y)) {
                        ...
                        return windowHandle;
                    }
                }
                ...
            }
        }
    }
    return nullptr;
}

std::vector<sp<InputWindowHandle>> InputDispatcher::getWindowHandlesLocked(
        int32_t displayId) const {
    // Retrieve the corresponding set of visible Windows from the set mWindowHandlesByDisplay based on the displayId passed in
    return getValueByKey(mWindowHandlesByDisplay, displayId);
}
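Before moving on, the hit test in findTouchedWindowAtLocked above can be condensed into a toy sketch: walk the windows front-to-back (highest Z first) and return the first visible, touchable window whose touchable region contains the touch point. This is an illustration only; the real code uses Region objects, touch-modal flags and more:

// Simplified model of the hit test (illustration only, not AOSP code).
#include <string>
#include <vector>
#include <cstdio>

struct Rect {
    int left, top, right, bottom;
    bool contains(int x, int y) const {
        return x >= left && x < right && y >= top && y < bottom;
    }
};

struct WindowInfo {
    std::string name;
    bool visible;
    bool touchable;
    Rect touchableRegion;   // the real code uses a Region that may span several rects
};

const WindowInfo* findTouchedWindowAt(const std::vector<WindowInfo>& windows, int x, int y) {
    for (const WindowInfo& w : windows) {   // ordered front (top Z) to back
        if (w.visible && w.touchable && w.touchableRegion.contains(x, y)) {
            return &w;
        }
    }
    return nullptr;   // touch landed outside every touchable region
}

int main() {
    // Shapes loosely based on the dumpsys output above (bars on top,
    // launcher covering the full screen behind them).
    std::vector<WindowInfo> windows = {
        {"StatusBar",     true, true, {0, 0, 1080, 111}},
        {"NavigationBar", true, true, {0, 2280, 1080, 2400}},
        {"Launcher",      true, true, {0, 0, 1080, 2400}},
    };
    const WindowInfo* target = findTouchedWindowAt(windows, 540, 1200);
    printf("touch at (540,1200) hits: %s\n", target ? target->name.c_str() : "none");
}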

To summarize, findTouchedWindowTargetsLocked finds the target window roughly by taking the touch event's displayId, x/y coordinates and other attributes, and iterating over mWindowHandlesByDisplay until a match is found. So where does mWindowHandlesByDisplay, the collection of all visible window information, come from? Let's look at the code:

/*frameworks/native/services/inputflinger/dispatcher/InputDispatcher.cpp*/
void InputDispatcher::updateWindowHandlesForDisplayLocked(
        const std::vector<sp<InputWindowHandle>>& inputWindowHandles, int32_t displayId) {
    ...
    // Insert or replace.
    // This is the only place the mWindowHandlesByDisplay collection is assigned
    mWindowHandlesByDisplay[displayId] = newHandles;
}

void InputDispatcher::setInputWindowsLocked(
        const std::vector<sp<InputWindowHandle>>& inputWindowHandles, int32_t displayId) {
    ...
    updateWindowHandlesForDisplayLocked(inputWindowHandles, displayId);
    ...
}

// mWindowHandlesByDisplay is assigned from the outside through the setInputWindows interface
void InputDispatcher::setInputWindows(
        const std::unordered_map<int32_t, std::vector<sp<InputWindowHandle>>>& handlesPerDisplay) {
    { // acquire lock
        std::scoped_lock _l(mLock);
        for (auto const& i : handlesPerDisplay) {
            setInputWindowsLocked(i.second, i.first);
        }
    }
    ...
}

As the above code shows, mWindowHandlesByDisplay is assigned from the outside by calling InputDispatcher's setInputWindows interface. So who calls this function? On earlier Android versions it was the window "butler" of the system framework, WindowManagerService, whose InputMonitor indirectly invoked InputDispatcher::setInputWindows through JNI to synchronize visible window information to the InputDispatcher. On Android 11, however, this logic has moved to SurfaceFlinger: during each frame composition task, SurfaceFlinger::onMessageInvalidate gathers the information of every visible window Layer and calls InputDispatcher::setInputWindows through Binder to synchronize the visible window information to the InputDispatcher. The detailed process is shown in the figure below:

Personally, I see two likely reasons for Google's engineers to make this change: 1. Touch event dispatch only cares about the information of visible windows, and SurfaceFlinger already knows the touch region, transparency, Z-order and other information of every visible window's Layer. 2. This way WindowManagerService only needs to synchronize window information to SurfaceFlinger, and does not have to maintain separate logic for synchronizing visible window information to the InputDispatcher at the same time. Finally, let's look at the simplified code:

/*frameworks/native/services/surfaceflinger/SurfaceFlinger.cpp*/
void SurfaceFlinger::onMessageInvalidate(nsecs_t expectedVSyncTime) {
    ATRACE_CALL();
    ...
    updateInputFlinger();
    ...
}

void SurfaceFlinger::updateInputFlinger() {
    ATRACE_CALL();
    if (!mInputFlinger) {
        return;
    }
    // When a window Layer is dirty or a Layer's touch-related properties have changed,
    // call updateInputWindowInfo to synchronize window information to the InputDispatcher
    if (mVisibleRegionsDirty || mInputInfoChanged) {
        mInputInfoChanged = false;
        updateInputWindowInfo();
    } else if (mInputWindowCommands.syncInputWindows) {
        ...
    }
    ...
}

void SurfaceFlinger::updateInputWindowInfo() {
    std::vector<InputWindowInfo> inputHandles;
    // 1. Iterate over all visible window layers in mDrawingState
    mDrawingState.traverseInReverseZOrder([&](Layer* layer) {
        if (layer->needsInputInfo()) {
            // When calculating the screen bounds we ignore the transparent region since it may
            // result in an unwanted offset.
            // 2. Call Layer::fillInputInfo to read the visible region, transparency,
            // touch region and other information of each Layer, wrap it into an
            // InputWindowInfo object and add it to the inputHandles collection
            inputHandles.push_back(layer->fillInputInfo());
        }
    });
    // 3. setInputWindows invokes the InputManager service through Binder, indirectly
    // synchronizing the gathered visible window touch information to the InputDispatcher
    mInputFlinger->setInputWindows(inputHandles,
                                   mInputWindowCommands.syncInputWindows ? mSetInputWindowsListener
                                                                         : nullptr);
}

/*frameworks/native/services/surfaceflinger/Layer.cpp*/
InputWindowInfo Layer::fillInputInfo() {
    if (!hasInputInfo()) {
        mDrawingState.inputInfo.name = getName();
        mDrawingState.inputInfo.ownerUid = mCallingUid;
        mDrawingState.inputInfo.ownerPid = mCallingPid;
        mDrawingState.inputInfo.inputFeatures = InputWindowInfo::INPUT_FEATURE_NO_INPUT_CHANNEL;
        mDrawingState.inputInfo.layoutParamsFlags = InputWindowInfo::FLAG_NOT_TOUCH_MODAL;
        mDrawingState.inputInfo.displayId = getLayerStack();
    }
    InputWindowInfo info = mDrawingState.inputInfo;
    ...
    // Synchronize the Layer's touchable region information
    info.touchableRegion = info.touchableRegion.translate(info.frameLeft, info.frameTop);
    ...
    return info;
}


5 Sending the touch event to the target window

5.1 Process of sending touch events to the target window

So far, we have read the touch event and found the target window corresponding to it. The next question is: how is the touch event delivered to that target window? According to the previous analysis, InputReader and InputDispatcher both live in the system_server process, while the target window belongs to an app process, so cross-process communication is involved. From the source code we can see that the current Android system uses a socket mechanism to deliver touch events across processes to the target application window. Continuing from the previous analysis, we start from the InputDispatcher::dispatchEventLocked function and analyze step by step how the system sends touch events to the target window. The whole process is shown in the figure below:

The simplified code flow is as follows:

/*frameworks/native/services/inputflinger/dispatcher/InputDispatcher.cpp*/
void InputDispatcher::dispatchEventLocked(nsecs_t currentTime, EventEntry* eventEntry,
                                          const std::vector<InputTarget>& inputTargets) {
    ATRACE_CALL();
    ...
    // Iterate over the inputTargets (target windows) the touch event must be sent to
    for (const InputTarget& inputTarget : inputTargets) {
        // Get the connection to the target window
        sp<Connection> connection =
                getConnectionLocked(inputTarget.inputChannel->getConnectionToken());
        if (connection != nullptr) {
            // prepareDispatchCycleLocked performs the actual touch event sending
            prepareDispatchCycleLocked(currentTime, connection, eventEntry, inputTarget);
        } else {
            ...
        }
    }
}

void InputDispatcher::prepareDispatchCycleLocked(nsecs_t currentTime,
                                                 const sp<Connection>& connection,
                                                 EventEntry* eventEntry,
                                                 const InputTarget& inputTarget) {
    ...
    // Skip this event if the connection status is not normal.
    // We don't want to enqueue additional outbound events if the connection is broken.
    if (connection->status != Connection::STATUS_NORMAL) {
        // The current connection status is abnormal
        return;
    }
    ...
    // Call enqueueDispatchEntriesLocked to continue
    enqueueDispatchEntriesLocked(currentTime, connection, eventEntry, inputTarget);
}

void InputDispatcher::enqueueDispatchEntriesLocked(nsecs_t currentTime,
                                                   const sp<Connection>& connection,
                                                   EventEntry* eventEntry,
                                                   const InputTarget& inputTarget) {
    ...
    // Check whether the target window connection's outboundQueue ("oq") is empty
    bool wasEmpty = connection->outboundQueue.empty();
    // Put the touch event into the target window connection's outboundQueue
    enqueueDispatchEntryLocked(connection, eventEntry, inputTarget,
                               InputTarget::FLAG_DISPATCH_AS_HOVER_EXIT);
    ...
    // If the outbound queue was previously empty, start the dispatch cycle going.
    if (wasEmpty && !connection->outboundQueue.empty()) {
        // Call startDispatchCycleLocked for further processing
        startDispatchCycleLocked(currentTime, connection);
    }
}

void InputDispatcher::startDispatchCycleLocked(nsecs_t currentTime,
                                               const sp<Connection>& connection) {
    ...
    // 1. Loop while the target window's connection status is normal and the
    // connection's outboundQueue is not empty
    while (connection->status == Connection::STATUS_NORMAL &&
           !connection->outboundQueue.empty()) {
        ...
        // Publish the event.
        status_t status;
        EventEntry* eventEntry = dispatchEntry->eventEntry;
        switch (eventEntry->type) {
            ...
            case EventEntry::Type::MOTION: {
                ...
                // 2. For a motion (touch) event, the type this analysis focuses on,
                // call publishMotionEvent to publish it
                status = connection->inputPublisher
                                 .publishMotionEvent(dispatchEntry->seq,
                                                     dispatchEntry->resolvedEventId,
                                                     motionEntry->deviceId, motionEntry->source,
                                                     motionEntry->displayId, std::move(hmac),
                                                     dispatchEntry->resolvedAction,
                                                     motionEntry->actionButton,
                                                     dispatchEntry->resolvedFlags,
                                                     motionEntry->edgeFlags, motionEntry->metaState,
                                                     motionEntry->buttonState,
                                                     motionEntry->classification,
                                                     xScale, yScale, xOffset, yOffset,
                                                     motionEntry->xPrecision, motionEntry->yPrecision,
                                                     motionEntry->xCursorPosition,
                                                     motionEntry->yCursorPosition,
                                                     motionEntry->downTime, motionEntry->eventTime,
                                                     motionEntry->pointerCount,
                                                     motionEntry->pointerProperties, usingCoords);
                ...
            }
            ...
        }
        ...
        // 3. Remove the dispatchEntry from the connection's outboundQueue ("oq")
        // Re-enqueue the event on the wait queue.
        connection->outboundQueue.erase(std::remove(connection->outboundQueue.begin(),
                                                    connection->outboundQueue.end(),
                                                    dispatchEntry));
        // Update the "oq" systrace counter
        traceOutboundQueueLength(connection);
        // Move the dispatchEntry from the outboundQueue to the connection's waitQueue
        // ("wq"), where it waits for the application to finish processing it
        connection->waitQueue.push_back(dispatchEntry);
        if (connection->responsive) {
            // 4. Arm ANR tracking for this touch event
            mAnrTracker.insert(dispatchEntry->timeoutTime,
                               connection->inputChannel->getConnectionToken());
        }
        // Update the "wq" systrace counter
        traceWaitQueueLength(connection);
    }
}
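The queue choreography above can be condensed into a toy model (illustration only): an event moves from the per-connection outbound queue ("oq") to the wait queue ("wq") once it has been written to the socket, and leaves "wq" only when the app's finished signal comes back; if that signal does not arrive before the timeout, the ANR machinery fires:

// Toy model of the per-connection queue lifecycle (illustration only, not AOSP code).
#include <deque>
#include <cstdio>

struct DispatchEntry { int seq; };

struct Connection {
    std::deque<DispatchEntry> outboundQueue;  // "oq": queued for sending
    std::deque<DispatchEntry> waitQueue;      // "wq": sent, awaiting the app's finish
};

void startDispatchCycle(Connection& c) {
    while (!c.outboundQueue.empty()) {
        DispatchEntry e = c.outboundQueue.front();
        c.outboundQueue.pop_front();
        printf("publish seq=%d over the socket\n", e.seq);  // publishMotionEvent analogue
        c.waitQueue.push_back(e);   // real code also arms the ANR timer here
    }
}

void onFinishedSignal(Connection& c, int seq) {
    // The app processed the event: drop it from "wq" and reset the ANR timeout.
    if (!c.waitQueue.empty() && c.waitQueue.front().seq == seq) {
        c.waitQueue.pop_front();
        printf("seq=%d finished; ANR timer cleared\n", seq);
    }
}

int main() {
    Connection c;
    c.outboundQueue.push_back({1});
    startDispatchCycle(c);
    onFinishedSignal(c, 1);
}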

Continue with the publishMotionEvent function:

/*frameworks/native/libs/input/InputTransport.cpp*/
status_t InputPublisher::publishMotionEvent(
        uint32_t seq, int32_t eventId, int32_t deviceId, int32_t source, int32_t displayId,
        std::array<uint8_t, 32> hmac, int32_t action, int32_t actionButton, int32_t flags,
        int32_t edgeFlags, int32_t metaState, int32_t buttonState,
        MotionClassification classification, float xScale, float yScale, float xOffset,
        float yOffset, float xPrecision, float yPrecision, float xCursorPosition,
        float yCursorPosition, nsecs_t downTime, nsecs_t eventTime, uint32_t pointerCount,
        const PointerProperties* pointerProperties, const PointerCoords* pointerCoords) {
    ...
    // 1. Pack the various attributes of the input event into an InputMessage
    InputMessage msg;
    msg.header.type = InputMessage::Type::MOTION;
    msg.body.motion.seq = seq;
    msg.body.motion.eventId = eventId;
    msg.body.motion.deviceId = deviceId;
    msg.body.motion.source = source;
    msg.body.motion.displayId = displayId;
    msg.body.motion.hmac = std::move(hmac);
    msg.body.motion.action = action;
    msg.body.motion.actionButton = actionButton;
    msg.body.motion.flags = flags;
    msg.body.motion.edgeFlags = edgeFlags;
    msg.body.motion.metaState = metaState;
    msg.body.motion.buttonState = buttonState;
    msg.body.motion.classification = classification;
    msg.body.motion.xScale = xScale;
    msg.body.motion.yScale = yScale;
    msg.body.motion.xOffset = xOffset;
    msg.body.motion.yOffset = yOffset;
    msg.body.motion.xPrecision = xPrecision;
    msg.body.motion.yPrecision = yPrecision;
    msg.body.motion.xCursorPosition = xCursorPosition;
    msg.body.motion.yCursorPosition = yCursorPosition;
    msg.body.motion.downTime = downTime;
    msg.body.motion.eventTime = eventTime;
    msg.body.motion.pointerCount = pointerCount;
    for (uint32_t i = 0; i < pointerCount; i++) {
        msg.body.motion.pointers[i].properties.copyFrom(pointerProperties[i]);
        msg.body.motion.pointers[i].coords.copyFrom(pointerCoords[i]);
    }
    // 2. Finally, send the InputMessage to the target window through the InputChannel
    return mChannel->sendMessage(&msg);
}

status_t InputChannel::sendMessage(const InputMessage* msg) {
    const size_t msgLength = msg->size();
    InputMessage cleanMsg;
    msg->getSanitizedCopy(&cleanMsg);
    ssize_t nWrite;
    do {
        // Write data to mFd through socket mechanism and send touch events to peer target window process
        nWrite = ::send(mFd.get(), &cleanMsg, msgLength, MSG_DONTWAIT | MSG_NOSIGNAL);
    } while (nWrite == -1 && errno == EINTR);
    ...
    return OK;
}

Finally, the core of sendMessage is writing data to mFd. What is mFd? An InputChannel wraps one end of a socket pair: one end represents the "client" side and the other the "server" side. The server end is registered with the InputDispatcher, the client end is returned to the app, and both the InputDispatcher and the app listen on their own sockets; that is how the app and the InputDispatcher communicate. How this socket pair is created and registered is discussed in the next section.

5.2 How the app and the InputDispatcher register and listen on the InputChannel

As we know from the previous section, touch events are sent by the InputDispatcher through the socket wrapped by the InputChannel to the app process that owns the target window. So when is this InputChannel created and registered? In other words, how is the InputChannel connection between the InputDispatcher and the app established? To answer this, we need to start from the process of creating the application window when the app is launched:

Generally, when an app starts, it calls the WindowManagerGlobal::addView interface to have the WMS service add the application window to the system. As usual, let's start with a flow chart of how this works:

Take a look at the flow further in the code:

/*frameworks/base/core/java/android/view/WindowManagerGlobal.java*/
public void addView(View view, ViewGroup.LayoutParams params, Display display,
        Window parentWindow, int userId) {
    ...
    synchronized (mLock) {
        ...
        // Create a ViewRootImpl instance and call setView to do the actual window adding
        root = new ViewRootImpl(view.getContext(), display);
        ...
        // do this last because it fires off messages to start doing things
        try {
            root.setView(view, wparams, panelParentView, userId);
        } catch (RuntimeException e) {
            ...
        }
    }
}

/*frameworks/base/core/java/android/view/ViewRootImpl.java*/
public void setView(View view, WindowManager.LayoutParams attrs, View panelParentView,
        int userId) {
    synchronized (this) {
        if (mView == null) {
            ...
            InputChannel inputChannel = null;
            if ((mWindowAttributes.inputFeatures
                    & WindowManager.LayoutParams.INPUT_FEATURE_NO_INPUT_CHANNEL) == 0) {
                // 1. Create an empty InputChannel
                inputChannel = new InputChannel();
            }
            try {
                ...
                // 2. Binder-call WMS's addToDisplayAsUser interface, passing the
                // InputChannel as a parameter so it gets filled in
                res = mWindowSession.addToDisplayAsUser(mWindow, mSeq, mWindowAttributes,
                        getHostVisibility(), mDisplay.getDisplayId(), userId, mTmpFrame,
                        mAttachInfo.mContentInsets, mAttachInfo.mStableInsets,
                        mAttachInfo.mDisplayCutout, inputChannel,
                        mTempInsets, mTempControls);
            } catch (RemoteException e) {
                ...
            }
            ...
            if (inputChannel != null) {
                ...
                // 3. Wrap the filled-in InputChannel in a WindowInputEventReceiver
                // object for receiving subsequent input events
                mInputEventReceiver = new WindowInputEventReceiver(inputChannel,
                        Looper.myLooper());
            }
            ...
        }
    }
}

During app startup, while the application window is added to WMS through addView, an empty InputChannel is created and passed as a parameter in the Binder call to WMS's addToDisplayAsUser interface for adding the window. The InputChannel is therefore filled in inside WMS. Let's continue with the processing in WMS:

 /*frameworks/base/services/core/java/com/android/wm/Session.java*/
  @Override
  public int addToDisplayAsUser(IWindow window, int seq, WindowManager.LayoutParams attrs,
            int viewVisibility, int displayId, int userId, Rect outFrame,
            Rect outContentInsets, Rect outStableInsets,
            DisplayCutout.ParcelableWrapper outDisplayCutout, InputChannel outInputChannel,
            InsetsState outInsetsState, InsetsSourceControl[] outActiveControls) {
        // Call WMS's addWindow to run the window-adding logic
        return mService.addWindow(this, window, seq, attrs, viewVisibility, displayId, outFrame,
                outContentInsets, outStableInsets, outDisplayCutout, outInputChannel,
                outInsetsState, outActiveControls, userId);
    }

  /*frameworks/base/services/core/java/com/android/wm/WindowManagerService.java*/
  public int addWindow(Session session, IWindow client, int seq, LayoutParams attrs,
            int viewVisibility, int displayId, Rect outFrame, Rect outContentInsets,
            Rect outStableInsets, DisplayCutout.ParcelableWrapper outDisplayCutout,
            InputChannel outInputChannel, InsetsState outInsetsState,
            InsetsSourceControl[] outActiveControls, int requestUserId) {
            ...
            final boolean openInputChannels = (outInputChannel != null
                    && (attrs.inputFeatures & INPUT_FEATURE_NO_INPUT_CHANNEL) == 0);
            if (openInputChannels) {
                // Call WindowState's openInputChannel to fill in the InputChannel
                win.openInputChannel(outInputChannel);
            }
            ...
            return res;
    }
   
   /*frameworks/base/services/core/java/com/android/wm/WindowState.java*/
   void openInputChannel(InputChannel outInputChannel) {
        ...
        // Application window name
        String name = getName();
        // 1. Create InputChannelPair to create a socketPair full-duplex communication channel
        InputChannel[] inputChannels = InputChannel.openInputChannelPair(name);
        
        mInputChannel = inputChannels[0];
        mClientChannel = inputChannels[1];
        
        // 2. Register server InputChannel with InputDispatcher
        mWmService.mInputManager.registerInputChannel(mInputChannel);
        // This token uniquely identifies the app window that receives input events.
        // Note that the token is kept in WMS, synchronized to SurfaceFlinger and from
        // there to the InputDispatcher, and is eventually used to match the target
        // window of a touch event: the Connection to the app is looked up by this token.
        mInputWindowHandle.token = mInputChannel.getToken();
        if (outInputChannel != null) {
            // 3. Copy the client InputChannel into outInputChannel, which travels back
            // over Binder to the ViewRootImpl in the application process
            mClientChannel.transferTo(outInputChannel);
            mClientChannel.dispose();
            mClientChannel = null;
        } else {
            ...
        }
        ...
    }

In fact, the core logic of the whole InputChannel connection process is:

  1. First, InputChannel::openInputChannelPair creates a socketpair full-duplex channel and fills in a "server" InputChannel and a "client" InputChannel.

  2. Then, InputManager::registerInputChannel registers the "server" InputChannel with the InputDispatcher.

  3. Finally, via InputChannel::transferTo, the "client" InputChannel is copied into outInputChannel and sent back to the app over Binder, so the app can set up its logic for listening to input events.

The model of the entire InputChannel channel is shown in the figure below:

Here’s a closer look at the three processes with code:

5.2.1 Creating the client and server InputChannels

We continue the analysis from InputChannel.openInputChannelPair:

   /*frameworks/base/core/java/android/view/InputChannel.java*/
   public static InputChannel[] openInputChannelPair(String name) {
        ...
        // As the method name suggests, this is a typical JNI call into the
        // corresponding native implementation
        return nativeOpenInputChannelPair(name);
   }
   
   /*frameworks/base/core/jni/android_view_InputChannel.cpp*/
   static const JNINativeMethod gInputChannelMethods[] = {
    /* name, signature, funcPtr */
    { "nativeOpenInputChannelPair"."(Ljava/lang/String;) [Landroid/view/InputChannel;",
            (void*)android_view_InputChannel_nativeOpenInputChannelPair },
        ...
    }
    
    static jobjectArray android_view_InputChannel_nativeOpenInputChannelPair(JNIEnv* env,
            jclass clazz, jstring nameObj) {
    ...
    // 1. Build a pair of native layer InputChannels
    status_t result = InputChannel::openInputChannelPair(name, serverChannel, clientChannel);
    ...
    // Construct a Java layer array of InputChannel type
    jobjectArray channelPair = env->NewObjectArray(2, gInputChannelClassInfo.clazz, nullptr);
    ...
    // 2. Convert the native InputChannels to Java InputChannels
    jobject serverChannelObj = android_view_InputChannel_createInputChannel(env, serverChannel);
    jobject clientChannelObj = android_view_InputChannel_createInputChannel(env, clientChannel);
    ...
    // Store the converted Java layer InputChannels in the Java array constructed above
    env->SetObjectArrayElement(channelPair, 0, serverChannelObj);
    env->SetObjectArrayElement(channelPair, 1, clientChannelObj);
    // 3. Return to the Java layer
    return channelPair;
}

The main logic of InputChannel::openInputChannelPair is to construct a pair of native layer InputChannels through a JNI call, create Java layer InputChannels from them, and finally return those to the Java layer. Let's move on to the logic for creating the native InputChannels:

/*frameworks/native/libs/input/InputTransport.cpp*/
status_t InputChannel::openInputChannelPair(const std::string& name,
        sp<InputChannel>& outServerChannel, sp<InputChannel>& outClientChannel) {
    int sockets[2];
    // 1. Create a Full-duplex Socket channel for a Linux-based Socketpair
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sockets)) {
        ...
        return result;
    }
    // The current buffer size SOCKET_BUFFER_SIZE default value is 32*1024 (32K)
    int bufferSize = SOCKET_BUFFER_SIZE;
    // 2. Set the size of the socket send and receive buffer to bufferSize
    setsockopt(sockets[0], SOL_SOCKET, SO_SNDBUF, &bufferSize, sizeof(bufferSize));
    setsockopt(sockets[0], SOL_SOCKET, SO_RCVBUF, &bufferSize, sizeof(bufferSize));
    setsockopt(sockets[1], SOL_SOCKET, SO_SNDBUF, &bufferSize, sizeof(bufferSize));
    setsockopt(sockets[1], SOL_SOCKET, SO_RCVBUF, &bufferSize, sizeof(bufferSize));
    
    // 3. Create a BBinder token that uniquely identifies this application window in the app process
    sp<IBinder> token = new BBinder();

    std::string serverChannelName = name + " (server)";
    android::base::unique_fd serverFd(sockets[0]);
     // The server InputChannel stores the FD of the server socket. Note that the InputChannel is created with a token unique identifier
    outServerChannel = InputChannel::create(serverChannelName, std::move(serverFd), token);

    std::string clientChannelName = name + " (client)";
    android::base::unique_fd clientFd(sockets[1]);
    // Client InputChannel stores the FD of the client socket. Note that the InputChannel is created with a token unique identifier
    outClientChannel = InputChannel::create(clientChannelName, std::move(clientFd), token);
    return OK;
}

First, the standard Linux interface socketpair is called to create a full-duplex communication channel (socketpair creation and access are based on file descriptors), and the size of the send and receive buffers is set. The client InputChannel is created holding the client socket fd, and the server InputChannel is created holding the server socket fd. Eventually both InputChannels are converted into Java objects and returned to WindowState.
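A standalone demo of the socketpair mechanism InputChannel is built on (illustration only; the real InputChannel adds InputMessage framing and fd ownership management on top):

// Demo of the socketpair mechanism underlying InputChannel (illustration only):
// two connected fds, full duplex, message-boundary-preserving.
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int sockets[2];
    // The same call InputChannel::openInputChannelPair makes: AF_UNIX +
    // SOCK_SEQPACKET gives a reliable, message-oriented, full-duplex pair.
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sockets)) {
        perror("socketpair");
        return 1;
    }
    int serverFd = sockets[0];  // stays in system_server (InputDispatcher)
    int clientFd = sockets[1];  // sent to the app process over Binder

    // "Server" writes an event, "client" reads it -- and vice versa, since the
    // channel is full duplex (the app sends finished signals back the same way).
    const char msg[] = "MOTION_EVENT";
    send(serverFd, msg, sizeof(msg), MSG_DONTWAIT | MSG_NOSIGNAL);

    char buf[64] = {};
    recv(clientFd, buf, sizeof(buf), 0);
    printf("client received: %s\n", buf);

    close(serverFd);
    close(clientFd);
}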

5.2.2 Registering the server InputChannel with the InputDispatcher

Going back to WindowState::openInputChannel, let's continue down into the registerInputChannel logic:

 /*frameworks/base/services/core/java/com/android/server/input/InputManagerService.java*/
 public void registerInputChannel(InputChannel inputChannel) {
        ...
        // As the method name suggests, this is a typical JNI call into the
        // corresponding native implementation
        nativeRegisterInputChannel(mPtr, inputChannel);
 }
 
 /*frameworks/base/services/core/jni/com_android_server_input_InputManagerService.cpp*/
 static void nativeRegisterInputChannel(JNIEnv* env, jclass /* clazz */,
        jlong ptr, jobject inputChannelObj) {
    NativeInputManager* im = reinterpret_cast<NativeInputManager*>(ptr);
    // 1. Get the native layer InputChannel
    sp<InputChannel> inputChannel = android_view_InputChannel_getInputChannel(env, inputChannelObj);
    ...
    // 2. Invoke NativeInputManager's registerInputChannel interface
    status_t status = im->registerInputChannel(env, inputChannel);
    ...
 }

 status_t NativeInputManager::registerInputChannel(JNIEnv* /* env */,
        const sp<InputChannel>& inputChannel) {
    ATRACE_CALL();
    // Call InputDispatcher::registerInputChannel to implement the registration logic
    return mInputManager->getDispatcher()->registerInputChannel(inputChannel);
}

/*frameworks/native/services/inputflinger/dispatcher/InputDispatcher.cpp*/
status_t InputDispatcher::registerInputChannel(const sp<InputChannel>& inputChannel) {
    ...
    { // acquire lock
        std::scoped_lock _l(mLock);
        // If a Connection already exists, there is no need to register again
        sp<Connection> existingConnection = getConnectionLocked(inputChannel->getConnectionToken());
        if (existingConnection != nullptr) {
            ...
            return BAD_VALUE;
        }
        // 1. Create a Connection wrapping the server inputChannel
        sp<Connection> connection = new Connection(inputChannel, false /*monitor*/, mIdGenerator);
        // Read the socket fd of server inputChannel
        int fd = inputChannel->getFd();
        
        // 2. Save the socket fd as key and connection as value to map mConnectionsByFd
        mConnectionsByFd[fd] = connection;
        
        // 3. Use the token as the key and inputChannel as the value, which are saved in the Map mInputChannelsByToken
        mInputChannelsByToken[inputChannel->getConnectionToken()] = inputChannel;
        
        // 4. Add the socket fd of the server to the Looper listener. After the client sends the event, the handleReceiveCallback function will be triggered
        mLooper->addFd(fd, 0, ALOOPER_EVENT_INPUT, handleReceiveCallback, this);
    } // release lock

    // Wake the looper because some connections have changed.
    mLooper->wake();
    return OK;
}


The core logic is:

1. If the InputChannel is being registered for the first time, a Connection is created and stored in the InputDispatcher's mConnectionsByFd map, with the InputChannel's socket fd as the key and the Connection as the value. The InputChannel's token object guarantees the uniqueness of the window registering the InputChannel; the token is the key and the InputChannel the value stored in the InputDispatcher's mInputChannelsByToken map.

2. The server socket's fd is added to the InputDispatcher's Looper for listening. When the app's client socket writes data, the handleReceiveCallback function is triggered to process it.

(Important) Recall the logic of sending touch events to the target window analyzed in section 5.1: the first step of InputDispatcher::dispatchEventLocked is to iterate over the target windows and fetch the corresponding Connection directly by each target window's token. These target windows completed the registration process described above when they were created, and their information was added to the mConnectionsByFd and mInputChannelsByToken collections, so that when an event is dispatched the connection can be fetched directly from the collection by its unique token.

5.2.3 The app client listens for touch events

Going back to WindowState::openInputChannel: after the server InputChannel registration completes, mClientChannel.transferTo(outInputChannel) assigns the client InputChannel to outInputChannel, which travels back over Binder to the ViewRootImpl in the app process. After that, the app process needs to register its touch event listening logic on the returned client InputChannel, so the application can respond promptly once the framework's InputDispatcher delivers touch events. Let's start with a flow chart:

Let's go back to the application-side ViewRootImpl's setView and analyze the process together with the code:

/*frameworks/base/core/java/android/view/ViewRootImpl.java*/
 public void setView(View view, WindowManager.LayoutParams attrs, View panelParentView,
         int userId) {
        synchronized (this) {
            if (mView == null) {
                ...
                InputChannel inputChannel = null;
                if ((mWindowAttributes.inputFeatures
                        & WindowManager.LayoutParams.INPUT_FEATURE_NO_INPUT_CHANNEL) == 0) {
                    // 1. Create an empty InputChannel
                    inputChannel = new InputChannel();
                }
                try {
                    ...
                    // 2. Binder-call WMS's addToDisplayAsUser interface, passing the
                    // InputChannel as a parameter so it gets filled in
                    res = mWindowSession.addToDisplayAsUser(mWindow, mSeq, mWindowAttributes,
                            getHostVisibility(), mDisplay.getDisplayId(), userId, mTmpFrame,
                            mAttachInfo.mContentInsets, mAttachInfo.mStableInsets,
                            mAttachInfo.mDisplayCutout, inputChannel,
                            mTempInsets, mTempControls);
                } catch (RemoteException e) {
                    ...
                }
                ...
                if (inputChannel != null) {
                    ...
                    // 3. Wrap the filled-in InputChannel in a WindowInputEventReceiver
                    // object for receiving subsequent input events
                    mInputEventReceiver = new WindowInputEventReceiver(inputChannel,
                            Looper.myLooper());
                }
                ...
            }
        }
    }

After addToDisplayAsUser returns with the client InputChannel filled in by WMS, a WindowInputEventReceiver is created; its constructor receives the client InputChannel and the Looper object of the application's UI main thread.

  /*frameworks/base/core/java/android/view/ViewRootImpl.java*/  
  final class WindowInputEventReceiver extends InputEventReceiver {
        public WindowInputEventReceiver(InputChannel inputChannel, Looper looper) {
            // Call the constructor of the parent class InputEventReceiver
            super(inputChannel, looper);
        }
  }

  /*frameworks/base/core/java/android/view/InputEventReceiver.java*/
  public InputEventReceiver(InputChannel inputChannel, Looper looper) {
        ...
        // Initialization is done through a JNI call to nativeInit
        mReceiverPtr = nativeInit(new WeakReference<InputEventReceiver>(this), inputChannel,
                mMessageQueue);
        ...
  }

  /*frameworks/base/core/jni/android_view_InputEventReceiver.cpp*/
  static jlong nativeInit(JNIEnv* env, jclass clazz, jobject receiverWeak,
        jobject inputChannelObj, jobject messageQueueObj) {
    // 1. Convert the Java layer InputChannel to the native layer InputChannel
    sp<InputChannel> inputChannel = android_view_InputChannel_getInputChannel(env, inputChannelObj);
    ...
    // 2. The native layer MessageQueue corresponding to the app UI thread
    sp<MessageQueue> messageQueue = android_os_MessageQueue_getMessageQueue(env, messageQueueObj);
    ...
    // 3. Create a NativeInputEventReceiver, which saves inputChannel and messageQueue
    // in its member variables
    sp<NativeInputEventReceiver> receiver =
            new NativeInputEventReceiver(env, receiverWeak, inputChannel, messageQueue);
    status_t status = receiver->initialize();
    ...
 }

 status_t NativeInputEventReceiver::initialize() {
    setFdEvents(ALOOPER_EVENT_INPUT);
    return OK;
 }

 void NativeInputEventReceiver::setFdEvents(int events) {
    if (mFdEvents != events) {
        mFdEvents = events;
        // 1. mInputConsumer holds the client InputChannel; fd is the client socket
        // inside that InputChannel
        int fd = mInputConsumer.getChannel()->getFd();
        if (events) {
            // 2. Add the client socket fd to the Looper for listening; the listened
            // event type is ALOOPER_EVENT_INPUT
            mMessageQueue->getLooper()->addFd(fd, 0, events, this, nullptr);
        } else {
            mMessageQueue->getLooper()->removeFd(fd);
        }
    }
 }

The core logic is: take the client InputChannel's socket fd and add it to the Looper of the app process's UI thread, listening for events of type ALOOPER_EVENT_INPUT. When an event arrives (that is, when the server socket writes data, which is the InputChannel::sendMessage call analyzed earlier when the InputDispatcher sends a touch event to the target window), the NativeInputEventReceiver::handleEvent callback runs on the UI thread to process it. Finally, let's look at the handling in NativeInputEventReceiver::handleEvent:

/*frameworks/base/core/jni/android_view_InputEventReceiver.cpp*/  
int NativeInputEventReceiver::handleEvent(int receiveFd, int events, void* data) {
    ...
    if (events & ALOOPER_EVENT_INPUT) {
        JNIEnv* env = AndroidRuntime::getJNIEnv();
        // Call consumeEvents to consume the ALOOPER_EVENT_INPUT event
        status_t status = consumeEvents(env, false /*consumeBatches*/, -1, nullptr);
        mMessageQueue->raiseAndClearException(env, "handleReceiveCallback");
        return status == OK || status == NO_MEMORY ? 1 : 0;
    }
    ...
    return 1;
}

status_t NativeInputEventReceiver::consumeEvents(JNIEnv* env, bool consumeBatches,
        nsecs_t frameTime, bool* outConsumedBatch) {
    ...
    for (;;) {
        ...
        InputEvent* inputEvent;
        // 1. Read the input event
        status_t status = mInputConsumer.consume(&mInputEventFactory, consumeBatches,
                frameTime, &seq, &inputEvent, &motionEventType, &touchMoveNum, &flag);
        ...
        if (!skipCallbacks) {
            ...
            if (inputEventObj) {
                if (kDebugDispatchCycle) {
                    ALOGD("channel '%s' ~ Dispatching input event.", getInputChannelName().c_str());
                }
                // JNI call into the Java layer InputEventReceiver's dispatchInputEvent function
                env->CallVoidMethod(receiverObj.get(),
                        gInputEventReceiverClassInfo.dispatchInputEvent, seq, inputEventObj);
                ...
            } else {
                ...
            }
        }
        ...
    }
}

  /*frameworks/base/core/java/android/view/InputEventReceiver.java*/
  // Called from native code.
    @SuppressWarnings("unused")
    @UnsupportedAppUsage
    private void dispatchInputEvent(int seq, InputEvent event) {
        mSeqMap.put(event.getSequenceNumber(), seq);
        // Call onInputEvent to handle the input (touch) event
        onInputEvent(event);
    }

When the app UI thread receives a message on the client socket, it calls back into the handleEvent function of the native NativeInputEventReceiver object that corresponds to the Java InputEventReceiver. The event is then passed up through JNI to the app's Java layer, where ViewRootImpl's WindowInputEventReceiver::onInputEvent function processes it further. What follows is the event delivery inside the app's target window, which the next section analyzes in detail.

6 The event delivery mechanism inside the target window

This brings us to the part app developers are most familiar with and care about most: the input touch event has now entered the app process and the world of the Java layer. In this section, continuing from the previous one, we start from the InputEventReceiver::onInputEvent function and analyze the delivery of input touch events inside the target application window.

6.1 Distribution and processing of touch events by the UI thread

Starting from ViewRootImpl::WindowInputEventReceiver::onInputEvent, the UI thread handles touch events as shown in the figure below:

As the flow chart above shows, the input event is first put into the "aq" pending queue by the enqueueInputEvent function. The simplified code is as follows:

/*frameworks/base/core/java/android/view/ViewRootImpl.java*/
  @UnsupportedAppUsage
    void enqueueInputEvent(InputEvent event, InputEventReceiver receiver, int flags, boolean processImmediately) {
        // Create a QueuedInputEvent object, use obtainQueuedInputEvent to construct the object, store the recycled object as a pool of objects, and use the linked list data structure internally
        QueuedInputEvent q = obtainQueuedInputEvent(event, receiver, flags);
        
        QueuedInputEvent last = mPendingInputEventTail;
        if (last == null) {
            mPendingInputEventHead = q;
            mPendingInputEventTail = q;
        } else {
            last.mNext = q;
            mPendingInputEventTail = q;
        }
        mPendingInputEventCount += 1;
        // The pending-queue length is traced under the systrace counter "aq:pending:XXX"
        Trace.traceCounter(Trace.TRACE_TAG_INPUT, mPendingInputEventQueueLengthCounterName,
                mPendingInputEventCount);

        if (processImmediately) {
            // doProcessInputEvents Handles this input event specifically
            doProcessInputEvents();
        } else {
            scheduleProcessInputEvents();
        }
    }

The specific processing logic for input touch events is encapsulated in InputStage. When ViewRootImpl creates a window, its setView logic creates several InputStage objects of different types for event processing in turn. The design uses the chain-of-responsibility pattern: each InputStage implementation class is linked to the next through the mNext variable. The main logic of each InputStage is to process the touch event in its onProcess function and then decide whether processing is complete; if not, onDeliverToNext passes the event to the next InputStage to continue processing, otherwise finishInputEvent is called to end the event. The code flow is as follows:

/*frameworks/base/core/java/android/view/ViewRootImpl.java*/

    public void setView(View view, WindowManager.LayoutParams attrs, View panelParentView, int userId) {
        ...
        // Set up the input pipeline.
        CharSequence counterSuffix = attrs.getTitle();
        // Create several InputStages of different types to handle different kinds of input events
        mSyntheticInputStage = new SyntheticInputStage();
        // ViewPostImeInputStage encapsulates the app's general touch event processing logic (important)
        InputStage viewPostImeStage = new ViewPostImeInputStage(mSyntheticInputStage);
        InputStage nativePostImeStage = new NativePostImeInputStage(viewPostImeStage,
                "aq:native-post-ime:" + counterSuffix);
        ...
    }

    abstract class InputStage {
        ...
        /**
         * Delivers an event to be processed.
         */
        public final void deliver(QueuedInputEvent q) {
            // If the event is already flagged FLAG_FINISHED, just forward it to the next stage
            if ((q.mFlags & QueuedInputEvent.FLAG_FINISHED) != 0) {
                forward(q);
             // Whether events should be discarded
            } else if (shouldDropInputEvent(q)) {
                finish(q, false);
            } else {
                traceEvent(q, Trace.TRACE_TAG_VIEW);
                final int result;
                try {
                    // 1. Actually handle the event
                    result = onProcess(q);
                } finally {
                    Trace.traceEnd(Trace.TRACE_TAG_VIEW);
                }
                // 2. Based on the processing result, decide what further action to take
                apply(q, result);
            }
        }
        
        protected void apply(QueuedInputEvent q, int result) {
            if (result == FORWARD) {
                forward(q);
            } else if (result == FINISH_HANDLED) {
                finish(q, true);
            } else if (result == FINISH_NOT_HANDLED) {
                finish(q, false);
            }
        }
        
        protected void finish(QueuedInputEvent q, boolean handled) {
            q.mFlags |= QueuedInputEvent.FLAG_FINISHED;
            if (handled) {
                q.mFlags |= QueuedInputEvent.FLAG_FINISHED_HANDLED;
            }
            // Pass to the next InputStage to continue processing
            forward(q);
        }
        
        protected void forward(QueuedInputEvent q) {
            // Pass to the next InputStage to continue processing
            onDeliverToNext(q);
        }
        
        protected void onDeliverToNext(QueuedInputEvent q) {
            if (DEBUG_INPUT_STAGES) {
                Log.v(mTag, "Done with " + getClass().getSimpleName() + "." + q);
            }
            if (mNext != null) {
                // 1. Pass to the next InputStage to continue processing
                mNext.deliver(q);
            } else {
                // 2. After the last InputStage has processed the event, call finishInputEvent to end it
                finishInputEvent(q);
            }
        }
    }

Note that once the touch event has been processed, finishInputEvent is called to end the application's handling of it. Through JNI this calls sendFinishedSignal on the native InputConsumer, which finally notifies the InputDispatcher via sendMessage on the client-side InputChannel. On receiving that message, the server-side InputDispatcher mainly removes the touch event from waitQueue and resets the ANR timer. This is why a long-running or blocking operation on the UI thread is dangerous during application development: the main thread cannot process the touch event in time and send the finishInputEvent signal back to the framework's InputDispatcher, the event sits in waitQueue until the timeout expires, and an ANR occurs.

6.2 View’s touch event distribution mechanism

Following the analysis in the previous section, we know that touch events are first dispatched to the dispatchTouchEvent function of DecorView, the root node of the View tree. This is also where touch event handling in the application window's View hierarchy begins.

/*frameworks/base/core/java/com/android/internal/policy/DecorView.java*/
 @Override
    public boolean dispatchTouchEvent(MotionEvent ev) {
        final Window.Callback cb = mWindow.getCallback();
        // Let the Window.Callback (the application's Activity) dispatch the touch event
        return cb != null && !mWindow.isDestroyed() && mFeatureId < 0
                ? cb.dispatchTouchEvent(ev) : super.dispatchTouchEvent(ev);
    }

The Callback here is the current Activity: when the Activity starts, its attach method passes the Activity itself to the Window as the Callback. The event therefore continues in the Activity's dispatchTouchEvent:

/*frameworks/base/core/java/android/app/Activity.java*/
public boolean dispatchTouchEvent(MotionEvent ev) {
        ...
        // First hand the event to the PhoneWindow
        if (getWindow().superDispatchTouchEvent(ev)) {
            return true;
        }
        // If the PhoneWindow (i.e. the View tree) did not consume the event,
        // fall back to the Activity's own onTouchEvent
        return onTouchEvent(ev);
}
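
As a small, hypothetical illustration of this entry point (the class and tag names are ours, not the framework's), an app can observe every event before the normal dispatch by overriding dispatchTouchEvent in its Activity:

import android.app.Activity;
import android.util.Log;
import android.view.MotionEvent;

// Hypothetical example: log each touch action before the normal PhoneWindow/DecorView dispatch.
public class LoggingActivity extends Activity {
    @Override
    public boolean dispatchTouchEvent(MotionEvent ev) {
        Log.d("TouchDemo", "action=" + MotionEvent.actionToString(ev.getActionMasked()));
        // super.dispatchTouchEvent runs the flow shown above:
        // PhoneWindow -> DecorView -> View tree, then this Activity's onTouchEvent as fallback
        return super.dispatchTouchEvent(ev);
    }
}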

The Activity first passes the event to the PhoneWindow, which in fact forwards it to the DecorView; the DecorView simply delegates to the dispatch logic of its superclass ViewGroup:

/*frameworks/base/core/java/com/android/internal/policy/DecorView.java*/
    public boolean superDispatchTouchEvent(MotionEvent event) {
        // Delegate to the superclass ViewGroup's dispatchTouchEvent
        return super.dispatchTouchEvent(event);
    }

Moving on to processing in ViewGroup:

 /*frameworks/base/core/java/android/view/ViewGroup.java*/
    @Override
    public boolean dispatchTouchEvent(MotionEvent ev) {
        ...
        if (onFilterTouchEventForSecurity(ev)) {
            ...
            if (actionMasked == MotionEvent.ACTION_DOWN || mFirstTouchTarget != null) {
                // 1. Check whether a child View has forbidden this ViewGroup from intercepting
                // touch events (the key to resolving sliding conflicts)
                final boolean disallowIntercept = (mGroupFlags & FLAG_DISALLOW_INTERCEPT) != 0;
                if (!disallowIntercept) {
                    // onInterceptTouchEvent decides whether this ViewGroup intercepts the event
                    intercepted = onInterceptTouchEvent(ev);
                }
            }
            ...
            if (!canceled && !intercepted) {
                if (actionMasked == MotionEvent.ACTION_DOWN
                        || (split && actionMasked == MotionEvent.ACTION_POINTER_DOWN)
                        || actionMasked == MotionEvent.ACTION_HOVER_MOVE) {
                    ...
                    if (newTouchTarget == null && childrenCount != 0) {
                        ...
                        final View[] children = mChildren;
                        // 2. Iterate over all child Views, topmost first
                        for (int i = childrenCount - 1; i >= 0; i--) {
                            ...
                            if (!child.canReceivePointerEvents()
                                    || !isTransformedTouchPointInView(x, y, child, null)) {
                                // 3. Skip this child if it cannot receive pointer events
                                // or the touch point lies outside its bounds
                                continue;
                            }
                            ...
                            // 4. Dispatch the event to this child View
                            if (dispatchTransformedTouchEvent(ev, false, child, idBitsToAssign)) {
                                ...
                                break;
                            }
                            ...
                        }
                    }
                    ...
                }
            }
            ...
        }
        ...
    }

    public boolean onInterceptTouchEvent(MotionEvent ev) {
        if (ev.isFromSource(InputDevice.SOURCE_MOUSE)
                && ev.getAction() == MotionEvent.ACTION_DOWN
                && ev.isButtonPressed(MotionEvent.BUTTON_PRIMARY)
                && isOnScrollbarThumb(ev.getX(), ev.getY())) {
            return true;
        }
        // A ViewGroup does not intercept events by default
        return false;
    }

    private boolean dispatchTransformedTouchEvent(MotionEvent event, boolean cancel,
            View child, int desiredPointerIdBits) {
        ...
        // Perform any necessary transformations and dispatch.
        if (child == null) {
            handled = super.dispatchTouchEvent(transformedEvent);
        } else {
            ...
            // Call the child's dispatchTouchEvent to continue the distribution
            handled = child.dispatchTouchEvent(transformedEvent);
        }
        ...
    }

The main child-dispatch logic in a ViewGroup is: first check whether a child View has forbidden the parent ViewGroup from intercepting touch events (calling requestDisallowInterceptTouchEvent on the parent is the main way a child View resolves sliding conflicts); if interception is not forbidden, onInterceptTouchEvent decides whether the ViewGroup intercepts the event itself. If it does not intercept, the ViewGroup iterates over its children to find a matching child View, then passes the event to that child's dispatchTouchEvent for further processing.
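
As a concrete, hypothetical example of resolving such a sliding conflict (the class name is ours), a child View can ask its parent not to intercept while it handles a drag:

import android.content.Context;
import android.util.AttributeSet;
import android.view.MotionEvent;
import android.view.View;

// Hypothetical child view that keeps its parent (e.g. a ScrollView) from stealing the gesture.
public class DraggableView extends View {
    public DraggableView(Context context, AttributeSet attrs) {
        super(context, attrs);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getActionMasked()) {
            case MotionEvent.ACTION_DOWN:
                // Forbid the parent chain from intercepting the following MOVE events
                getParent().requestDisallowInterceptTouchEvent(true);
                break;
            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_CANCEL:
                // Restore normal interception once the gesture ends
                getParent().requestDisallowInterceptTouchEvent(false);
                break;
        }
        return true; // consume the gesture so MOVE/UP keep arriving at this view
    }
}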

Let’s move on to View’s handling logic for touch events:

 /*frameworks/base/core/java/android/view/View.java*/
    public boolean dispatchTouchEvent(MotionEvent event) {
        ...
        boolean result = false;
        if (onFilterTouchEventForSecurity(event)) {
            ...
            // 1. If setOnTouchListener was called, pass the event to the listener's
            // onTouch method first and use its return value
            if (li != null && li.mOnTouchListener != null
                    && (mViewFlags & ENABLED_MASK) == ENABLED
                    && li.mOnTouchListener.onTouch(this, event)) {
                result = true;
            }
            // 2. Otherwise let onTouchEvent decide whether to consume the event
            if (!result && onTouchEvent(event)) {
                result = true;
            }
        }
        ...
        return result;
    }

First, the View checks whether a touch listener is set, i.e. whether setOnTouchListener was called; if so, the event is passed to that listener's onTouch method and its return value is used. If onTouch returns false (or no listener is set), the View's own onTouchEvent method is called, which in turn calls onClick when the right conditions are met. The processing order is therefore: onTouch, then onTouchEvent, then onClick.
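
A quick way to verify this order is to attach both listeners to a view and watch the logs. A hypothetical snippet (assuming a Button already inflated as button, inside onCreate, with android.util.Log and android.view.MotionEvent imported):

// onTouch fires first; returning true from it consumes the event,
// so onTouchEvent never runs and onClick is never invoked.
button.setOnTouchListener((v, event) -> {
    Log.d("TouchDemo", "onTouch: " + MotionEvent.actionToString(event.getActionMasked()));
    return false; // change this to true and the onClick log below disappears
});
button.setOnClickListener(v -> Log.d("TouchDemo", "onClick"));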

In general, the processing logic of View for touch events is as follows:

7 Debugging mechanisms for touch problems

Based on years of experience, the author summarizes some common ways to debug touch problems at the system level:

1. View the supported touch input device nodes:
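
For example (assuming a device reachable over adb), the following commands list the nodes under /dev/input and each device's supported event types, so the touchscreen can be identified:

adb shell ls /dev/input
adb shell getevent -p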

2. Use the adb getevent command to view the touch reports from the screen: adb shell getevent -ltr. You can then slide a finger across the screen and check whether the reported touch points are normal and evenly spaced:

3. Display touches as a small white dot:

Settings -> Developer Options -> Show taps

After this is turned on, touching the screen shows the small white dot in real time, which gives a subjective feel for how smoothly the screen tracks a sliding finger.
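
The same switch can also be toggled from the command line by writing the system settings key behind this developer option:

adb shell settings put system show_touches 1   # show the touch dot
adb shell settings put system show_touches 0   # hide it again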

4. Check the whole process of touch event distribution on Systrace:

As the earlier analysis showed, after InputReader reads a touch event and wakes up the InputDispatcher, the event is placed in the "iq" queue. During dispatch, each target window has a corresponding outbound queue "oq" and a queue "wq" of events waiting to be handled by that window. Finally, once the application has received the touch event, it sits in the corresponding "aq" queue. All of these queues, as well as the InputStage Systrace tags on the target App's UI thread, can be observed in the trace.

5. Use adb shell dumpsys input to dump the current state of the input system (EventHub devices, InputReader and InputDispatcher, including the focused window and recently dispatched events).

8 Summary

To conclude, a diagram by an industry expert is used to depict the overall picture of the Android touch event handling mechanism.

References

10 minutes to learn about Android touch events: juejin.cn/post/684490…

Android R Input subsystem: blog.csdn.net/qq_34211365…