This article looks at how iOS finds the view that should respond to a touch, and how the responder then processes it.

When the user's finger presses a point on the screen, the screen receives the touch signal and converts the touch position into concrete coordinates, and the touch is then packaged as a UIEvent. Finally, some view responds to and processes the event. The process of finding the responding view for a UIEvent is called response chain lookup, and two classes are crucial throughout: UIResponder and UIView.

The responder

A responder is a concrete object that can handle events; it must be an instance of UIResponder or one of its subclasses. By design, UIResponder provides three main types of interfaces:

  • The interface for querying up the responder chain, namely the single nextResponder property
  • The interfaces for handling user interaction, covering the three event types: touch, press, and remote
  • The ability to decide whether an action can be handled, and to find a target for it

In short, UIResponder supplies the processing capability for the entire event flow.
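For reference, here is an abridged sketch of the UIKit declarations behind those three groups (only representative methods are listed; this is not the complete UIResponder interface):

// 1. The single interface for querying up the chain
@property (nonatomic, readonly, nullable) UIResponder *nextResponder;

// 2. Handling user interaction: touch / press / remote
//    (only the Began variants are shown; Moved/Ended/Cancelled exist as well)
- (void)touchesBegan:(NSSet<UITouch *> *)touches withEvent:(nullable UIEvent *)event;
- (void)pressesBegan:(NSSet<UIPress *> *)presses withEvent:(nullable UIPressesEvent *)event;
- (void)remoteControlReceivedWithEvent:(nullable UIEvent *)event;

// 3. Deciding whether an action can be handled, and finding a target for it
- (BOOL)canPerformAction:(SEL)action withSender:(nullable id)sender;
- (id)targetForAction:(SEL)action withSender:(nullable id)sender;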

The view

A view is a visual element displayed on the interface, including but not limited to text, buttons, and images. Although UIResponder gives an object the ability to respond to events, a bare responder cannot be observed by the user; in other words, there is nothing for the user to click on. UIView therefore provides the visual carrier. From an interface perspective, UIView provides three capabilities:

  • The view tree structure. Responders form the same tree structure, but it has to be expressed through a visual carrier
  • Visual content. Through properties such as frame, a view's visible region on screen can be determined, which makes it possible to associate click coordinates with the visible object that should respond
  • Content layout and redrawing. Rendering a view to the screen is complex, but the various layout methods expose hooks at different stages of the redraw, giving subclasses strong customization

The view tree is structured as follows. Since UIView is a subclass of UIResponder, you can reach the parent view via nextResponder; but since not every responder has a visual carrier, walking up through nextResponder alone may fail to locate a responder by position calculation.
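To see a concrete chain, a minimal sketch can walk up nextResponder and print every node (SLDumpResponderChain is a hypothetical helper, not part of the original demo):

#import <UIKit/UIKit.h>

static void SLDumpResponderChain(UIResponder *responder) {
    // Collect the class name of every node from here to the top of the chain.
    NSMutableArray<NSString *> *names = [NSMutableArray array];
    while (responder) {
        [names addObject: NSStringFromClass(responder.class)];
        responder = responder.nextResponder;
    }
    NSLog(@"%@", [names componentsJoinedByString: @" -> "]);
}

Called on a deeply nested view, this typically prints something like BView -> AView -> UIView -> UIViewController -> UIWindow -> UIApplication.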

Find responders

With that groundwork laid, it's time to talk about the process of finding responders. UIResponder determines whether an object has the ability to handle an event, and UIView associates a visual carrier with the click coordinates. In other words, finding a responder means finding an object that can handle the event and whose visual range contains the click point. Since the responder must be found before any further processing can happen, UIView exposes two APIs for exactly this lookup:

- (BOOL)pointInside:(CGPoint)point withEvent:(nullable UIEvent *)event;
- (nullable UIView *)hitTest:(CGPoint)point withEvent:(nullable UIEvent *)event;

It's easy to work out the order of the responder search by swizzling (exchanging) the first method:

- (BOOL)sl_pointInside: (CGPoint)point withEvent: (UIEvent *)event {
    // After swizzling, this call goes to the original pointInside:withEvent:
    BOOL res = [self sl_pointInside: point withEvent: event];
    if (res) {
        NSLog(@"[%@ can answer]", self.class);
    } else {
        NSLog(@"non answer in %@", self.class);
    }
    return res;
}
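The article doesn't show the method exchange itself; a minimal sketch of the swizzling that puts the hook above in place might look like this (the SLHitLog category name is assumed):

#import <objc/runtime.h>

@implementation UIView (SLHitLog)

+ (void)load {
    static dispatch_once_t onceToken;
    dispatch_once(&onceToken, ^{
        // Exchange pointInside:withEvent: with the logging hook, so calling
        // sl_pointInside: inside the hook reaches the original implementation.
        Method original = class_getInstanceMethod(self, @selector(pointInside:withEvent:));
        Method swizzled = class_getInstanceMethod(self, @selector(sl_pointInside:withEvent:));
        method_exchangeImplementations(original, swizzled);
    });
}

@end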

Create the same layout structure as shown in the picture, then click BView to get the log:

[UIStatusBarWindow can answer]
non answer in UIStatusBar
non answer in UIStatusBarForegroundView
non answer in UIStatusBarServiceItemView
non answer in UIStatusBarDataNetworkItemView
non answer in UIStatusBarBatteryItemView
non answer in UIStatusBarTimeItemView
[UIWindow can answer]
[UIView can answer]
non answer in CView
[AView can answer]
[BView can answer]

The log output shows that the search follows two priorities:

  1. Windows with a higher priority are matched first
  2. Within the same window, the search proceeds from parent view to child view

pointInside: only determines which views contain the click coordinates; another method, hitTest:withEvent:, is then called to find the actual responder. Hooking this method shows in the log that the parent view calls down into its child views, with the output taking a recursive form:

- (UIView *)sl_hitTest: (CGPoint)point withEvent: (UIEvent *)event {
    UIView *res = [self sl_hitTest: point withEvent: event];
    NSLog(@"hit view is: %@ and self is: %@", res.class, self.class);
    return res;
}

hit view is: (null) and self is: CView
hit view is: BView and self is: BView
hit view is: BView and self is: AView
hit view is: BView and self is: UIView
hit view is: BView and self is: UIWindow
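The recursion in the log matches the documented behavior of the default implementation. A rough sketch of what hitTest:withEvent: is documented to do (sl_hitTestSketch is an illustrative name; this is not the actual UIKit source):

- (UIView *)sl_hitTestSketch: (CGPoint)point withEvent: (UIEvent *)event {
    // Hidden, non-interactive, or nearly transparent views are skipped entirely.
    if (self.hidden || !self.userInteractionEnabled || self.alpha < 0.01) {
        return nil;
    }
    if (![self pointInside: point withEvent: event]) {
        return nil;
    }
    // Subviews are checked front to back, i.e. in reverse order of the subviews array.
    for (UIView *subview in self.subviews.reverseObjectEnumerator) {
        CGPoint converted = [subview convertPoint: point fromView: self];
        UIView *hit = [subview hitTest: converted withEvent: event];
        if (hit) {
            return hit;
        }
    }
    // No subview claimed the point, so this view itself is the hit view.
    return self;
}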

Once the final hit view is determined, the search for the event's responder starts from that view and moves upward. This search proceeds responder by responder, each linked to the next, which is why it is called the response chain. In addition, because the view hierarchy is a tree, holding the root node is enough to traverse the whole tree, which is why the search starts from the window.

Responder processing

The pointInside: and hitTest: methods identify the topmost visual responder at the click position, but that does not mean this responder will handle the event. Implement the touches methods in AView from the demo above:

@implementation AView

- (void)touchesBegan: (NSSet<UITouch *> *)touches withEvent: (UIEvent *)event {
    NSLog(@"A began");
}

- (void)touchesCancelled: (NSSet<UITouch *> *)touches withEvent: (UIEvent *)event {
    NSLog(@"A canceled");
}

- (void)touchesEnded: (NSSet<UITouch *> *)touches withEvent: (UIEvent *)event {
    NSLog(@"A ended");
}

@end

UIResponder declares the interfaces for handling user interaction, but BView clearly does not implement the touches methods, so even though it is the first node in the response chain, it cannot handle the click event. Instead, a responder is sought along the chain, roughly like this:

// Schematic only: the real forwarding is done inside UIKit's default touches implementations.
void handleTouch(UIResponder *responder, NSSet<UITouch *> *touches, UIEvent *event) {
    if (!responder) {
        return;
    }
    if ([responder respondsToSelector: @selector(touchesBegan:withEvent:)]) {
        [responder touchesBegan: touches withEvent: event];
    } else {
        handleTouch(responder.nextResponder, touches, event);
    }
}
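In the real framework the default touches implementations already forward the message to the next responder, so an override can observe a touch and still keep the chain alive by calling super, as in this small sketch:

- (void)touchesBegan: (NSSet<UITouch *> *)touches withEvent: (UIEvent *)event {
    NSLog(@"observed in %@", self.class);
    // Calling super lets the event keep travelling up the responder chain.
    [super touchesBegan: touches withEvent: event];
}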

Another interesting aspect is gesture interception. In addition to implementing the touches methods to make a view respond, we can also attach a gesture recognizer to the view and receive a callback:

- (void)viewDidLoad {
    [super viewDidLoad];
    [_a addGestureRecognizer: [[UITapGestureRecognizer alloc] initWithTarget: self action: @selector(clickedA:)]];
}

- (void)clickedA: (UITapGestureRecognizer *)tap {
    NSLog(@"gesture clicked A");
}

/// log:
/// A began
/// gesture clicked A
/// A canceled

As the log shows, the gesture is handled after touchesBegan, and the touches sequence is then cancelled rather than ended. So even with gestures attached, the event is still delivered through the UIResponder machinery first.
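If the touches callbacks should survive the gesture, the recognizer can be told not to swallow them via UIGestureRecognizer's cancelsTouchesInView property, for example:

UITapGestureRecognizer *tap = [[UITapGestureRecognizer alloc] initWithTarget: self action: @selector(clickedA:)];
// With cancellation off, the view should receive touchesEnded
// instead of touchesCancelled after the gesture fires.
tap.cancelsTouchesInView = NO;
[_a addGestureRecognizer: tap];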

Summary

The processing of touch events falls into two stages, finding the responder and responder processing, supported functionally by UIView and UIResponder respectively. Moreover, because the two key interfaces pointInside: and hitTest: are public, modifying them through hooking or subclassing can make a view's response range larger than its display range.
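A common instance of that last point is enlarging a control's hit area. A sketch as a subclass (SLExpandedHitView is a hypothetical name):

@interface SLExpandedHitView : UIView
@end

@implementation SLExpandedHitView

- (BOOL)pointInside: (CGPoint)point withEvent: (UIEvent *)event {
    // A negative inset grows the hit area 20pt beyond the visible bounds.
    CGRect expanded = CGRectInset(self.bounds, -20, -20);
    return CGRectContainsPoint(expanded, point);
}

@end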

Application

A recent requirement called for popping up a bubble above the tabbar and letting the user tap the bubble for an interaction. At the view level, the tabbar is nested inside multiple layers of views, each only the size of the menu bar:

If you want to pop bubbles from an item and make them interactive, there are two possible solutions:

  1. Add the bubble to the ViewController's view
  2. Modify the tabbar's response chain lookup interfaces so that clicks outside its display range can be handled

Since the same pop-up requirement may recur elsewhere in the project and would otherwise produce a pile of repetitive code, the pop-up action and the touch detection are encapsulated together, and automatic touch detection for pop-up bubbles is implemented through hooks.

Interface design

Following the principle of exposing as few interfaces as possible, only two pop-up interfaces are exposed:

/*!
 * SLViewDirection: the pop-up directions
 */
typedef NS_ENUM(NSInteger, SLViewDirection) {
    SLViewDirectionCenter,
    SLViewDirectionTop,
    SLViewDirectionLeft,
    SLViewDirectionBottom,
    SLViewDirectionRight
};

/*!
 * @category UIView+SLFreedomPop
 */
@interface UIView (SLFreedomPop)

/*!
 * @method sl_popView:
 * pops the view at the center
 * @param view the view to pop up
 */
- (void)sl_popView: (UIView *)view;

/*!
 * @method sl_popView:withDirection:
 * pops the view in a given direction
 * @param view the view to pop up
 * @param direction the pop-up direction
 */
- (void)sl_popView: (UIView *)view withDirection: (SLViewDirection)direction;

@end
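Hypothetical usage of the category, with itemView standing in for whatever view anchors the bubble:

UIView *bubble = [[UIView alloc] initWithFrame: CGRectMake(0, 0, 120, 40)];
bubble.backgroundColor = [UIColor greenColor];
// Pop the bubble above the anchoring view.
[itemView sl_popView: bubble withDirection: SLViewDirectionTop];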

Touch detection

Two problems had to be considered:

  1. A view may have more than one child view that extends beyond its display range
  2. A child view's overflow may extend beyond not just its direct parent but further ancestors as well

The solutions to these two problems are as follows:

  1. Keyed by view, store a list of extraRect values (see the storage sketch below)
  2. When the pop-up view exceeds its parent's range, propagate the rect to the parent's superview in turn, so that every ancestor can handle the response
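The detection code below calls extraHitRects and _sl_expandExtraRects:forKey:, which the article does not show; here is a minimal sketch of that storage using associated objects (only the names are taken from the calls below, the implementation is assumed):

#import <objc/runtime.h>

static const void *SLExtraHitRectsKey = &SLExtraHitRectsKey;

- (NSMutableDictionary<NSValue *, NSMutableSet<NSString *> *> *)extraHitRects {
    // Lazily attach a dictionary of { view key : set of rect strings } to the view.
    NSMutableDictionary *rects = objc_getAssociatedObject(self, SLExtraHitRectsKey);
    if (!rects) {
        rects = [NSMutableDictionary dictionary];
        objc_setAssociatedObject(self, SLExtraHitRectsKey, rects, OBJC_ASSOCIATION_RETAIN_NONATOMIC);
    }
    return rects;
}

- (void)_sl_expandExtraRects: (CGRect)rect forKey: (NSValue *)key {
    NSMutableSet<NSString *> *rects = [self extraHitRects][key];
    if (!rects) {
        rects = [NSMutableSet set];
        [self extraHitRects][key] = rects;
    }
    // Rects are stored as strings so CGRectFromString can read them back.
    [rects addObject: NSStringFromCGRect(rect)];
}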

The final detection code is as follows:

#define SLRectOverflow(subrect, rect) \
    subrect.origin.x < 0 || \
    subrect.origin.y < 0 || \
    CGRectGetMaxX(subrect) > CGRectGetWidth(rect) || \
    CGRectGetMaxY(subrect) > CGRectGetHeight(rect)

#pragma mark - Private
- (BOOL)_sl_pointInsideExtraRects: (CGPoint)point {
    NSArray *extraRects = [self extraHitRects].allValues;
    if (extraRects.count == 0) {
        return NO;
    }
    for (NSSet *rects in extraRects) {
        for (NSString *rectStr in rects) {
            if (CGRectContainsPoint(CGRectFromString(rectStr), point)) {
                return YES;
            }
        }
    }
    return NO;
}

#pragma mark - Rects
- (void)_sl_addExtraRect: (CGRect)extraRect inSubview: (UIView *)subview {
    CGRect curRect = [subview convertRect: extraRect toView: self];
    if (SLRectOverflow(curRect, self.frame)) {
        [self _sl_expandExtraRects: curRect forKey: [NSValue valueWithBytes: &subview objCType: @encode(typeof(subview))]];
        [self.superview _sl_addExtraRect: curRect inSubview: self];
    }
}

#pragma mark - Hook
- (BOOL)sl_pointInside: (CGPoint)point withEvent: (UIEvent *)event {
    BOOL res = [self sl_pointInside: point withEvent: event];
    if (!res) {
        return [self _sl_pointInsideExtraRects: point];
    }
    return res;
}

- (UIView *)sl_hitTest: (CGPoint)point withEvent: (UIEvent *)event {
    UIView *res = [self sl_hitTest: point withEvent: event];
    if (!res) {
        if ([self _sl_pointInsideExtraRects: point]) {
            return self;
        }
    }
    return res;
}

Running effect

The red view pops up the green view, which extends beyond the display range of both the red view and its parent view.

Custom improvement

Since the core function is event handling, the code currently provides only simple pop-up interfaces. If the pop-up capability needs further expansion, two improvements can be considered (one possible shape is sketched after this list):

  • Add a configuration parameter to customize popover styles
  • Provide an animation hook to support custom animations
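One hypothetical shape for such an extended interface, strictly illustrative:

/*!
 * @method sl_popView:withDirection:configuration:animation:
 * pop up with a custom style and animation (hypothetical extension)
 * @param view the view to pop up
 * @param direction the pop-up direction
 * @param configuration called before display to style the popover
 * @param animation called to drive a custom presentation animation
 */
- (void)sl_popView: (UIView *)view
     withDirection: (SLViewDirection)direction
     configuration: (void (^)(UIView *popView))configuration
         animation: (void (^)(UIView *popView))animation;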

The source address of the article