GestureDetector components

GestureDetector is Flutter's general-purpose gesture-handling component and supports common gestures such as tap, double tap, long press, drag, and scale. GestureDetector itself is a stateless StatelessWidget. Depending on the type of gesture being recognized, it relies on eight gesture recognizers: TapGestureRecognizer, DoubleTapGestureRecognizer, LongPressGestureRecognizer, VerticalDragGestureRecognizer, HorizontalDragGestureRecognizer, PanGestureRecognizer, ScaleGestureRecognizer, and ForcePressGestureRecognizer.

Source code analysis

In the build method of GestureDetector, the callbacks for the eight gesture types above are packaged into recognizer factories and handed to a RawGestureDetector, which performs the actual gesture handling.

The build method

return RawGestureDetector(
  gestures: gestures,
  behavior: behavior,
  excludeFromSemantics: excludeFromSemantics,
  child: child,
);
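
The gestures map passed here is assembled earlier in the same build method: for each group of non-null callbacks, a recognizer factory is registered. A simplified sketch of the tap entry, based on the Flutter framework source (details vary by version):

final Map<Type, GestureRecognizerFactory> gestures = <Type, GestureRecognizerFactory>{};
if (onTap != null || onTapDown != null || onTapUp != null || onTapCancel != null) {
  gestures[TapGestureRecognizer] =
      GestureRecognizerFactoryWithHandlers<TapGestureRecognizer>(
    // Constructor: creates the recognizer the first time it is needed.
    () => TapGestureRecognizer(debugOwner: this),
    // Initializer: wires the widget's callbacks onto the recognizer instance.
    (TapGestureRecognizer instance) {
      instance
        ..onTapDown = onTapDown
        ..onTapUp = onTapUp
        ..onTap = onTap
        ..onTapCancel = onTapCancel;
    },
  );
}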

RawGestureDetector

RawGestureDetector is a stateful component whose logic lives in RawGestureDetectorState. In initState, the _syncAll method initializes the gesture recognizers.

@override
void initState() {
  super.initState();
  _semantics = widget.semantics ?? _DefaultSemanticsGestureDelegate(this);
  _syncAll(widget.gestures);
}
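
For reference, _syncAll creates a recognizer for each entry in the gestures map (reusing an existing instance where possible), runs the factory's initializer to wire up the callbacks, and disposes recognizers that are no longer needed. A simplified sketch based on the framework source:

void _syncAll(Map<Type, GestureRecognizerFactory> gestures) {
  final Map<Type, GestureRecognizer> oldRecognizers = _recognizers;
  _recognizers = <Type, GestureRecognizer>{};
  for (Type type in gestures.keys) {
    // Reuse the old recognizer if one exists, otherwise construct a new one.
    _recognizers[type] = oldRecognizers[type] ?? gestures[type].constructor();
    // Let the factory attach the callbacks to the recognizer instance.
    gestures[type].initializer(_recognizers[type]);
  }
  for (Type type in oldRecognizers.keys) {
    if (!_recognizers.containsKey(type)) {
      oldRecognizers[type].dispose();
    }
  }
}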

Looking at the build implementation, gesture handling is driven primarily by the Listener's onPointerDown callback. Listener can be understood as the lower-level component that GestureDetector is built on.

@override
Widget build(BuildContext context) {
  Widget result = Listener(
    onPointerDown: _handlePointerDown,
    behavior: widget.behavior ?? _defaultBehavior,
    child: widget.child,
  );
  if (!widget.excludeFromSemantics)
    result = _GestureSemantics(
      child: result,
      assignSemantics: _updateSemanticsForRenderObject,
    );
  return result;
}

The _handlePointerDown method forwards the pointer down event to each gesture recognizer:

void _handlePointerDown(PointerDownEvent event) {
  assert(_recognizers != null);
  for (GestureRecognizer recognizer in _recognizers.values)
    recognizer.addPointer(event);
}

_handlePointerDown calls each recognizer's addPointer, which in turn calls addAllowedPointer. addAllowedPointer is implemented differently by each recognizer, but BaseTapGestureRecognizer is easy to follow: it stores the event in _down, then waits for the _deadline timer; when the timer fires, _checkDown is called to confirm whether a tap down should be reported.

void addPointer(PointerDownEvent event) {
  _pointerToKind[event.pointer] = event.kind;
  if (isPointerAllowed(event)) {
    addAllowedPointer(event);
  } else {
    handleNonAllowedPointer(event);
  }
}

/// BaseTapGestureRecognizer's addAllowedPointer implementation
@override
void addAllowedPointer(PointerDownEvent event) {
  if (state == GestureRecognizerState.ready) {
    // `_down` must be assigned in this method instead of `handlePrimaryPointer`,
    // because `acceptGesture` might be called before `handlePrimaryPointer`,
    // which relies on `_down` to call `handleTapDown`.
    _down = event;
  }
  super.addAllowedPointer(event);
}
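
The _deadline handling mentioned above lives in the superclass: PrimaryPointerGestureRecognizer starts a timer when the pointer is added, and when it fires, BaseTapGestureRecognizer reports the tap down through _checkDown. A simplified sketch based on the framework source (details vary by Flutter version):

// PrimaryPointerGestureRecognizer.addAllowedPointer (simplified)
@override
void addAllowedPointer(PointerDownEvent event) {
  startTrackingPointer(event.pointer, event.transform);
  if (state == GestureRecognizerState.ready) {
    state = GestureRecognizerState.possible;
    primaryPointer = event.pointer;
    initialPosition = OffsetPair(local: event.localPosition, global: event.position);
    // Schedule the deadline; when it expires the recognizer decides
    // whether a tap down should be reported.
    if (deadline != null)
      _timer = Timer(deadline, () => didExceedDeadline());
  }
}

// BaseTapGestureRecognizer (simplified)
@override
void didExceedDeadline() {
  _checkDown();
}

void _checkDown() {
  if (_sentTapDown) {
    return;
  }
  handleTapDown(down: _down);
  _sentTapDown = true;
}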

handleTapDown is then called, which invokes the onTapDown or onSecondaryTapDown callback registered by the upper layer.

@protected
@override
void handleTapDown({PointerDownEvent down}) {
  final TapDownDetails details = TapDownDetails(
    globalPosition: down.position,
    localPosition: down.localPosition,
    kind: getKindForPointer(down.pointer),
  );
  switch (down.buttons) {
    case kPrimaryButton:
      if (onTapDown != null)
        invokeCallback<void>('onTapDown', () => onTapDown(details));
      break;
    case kSecondaryButton:
      if (onSecondaryTapDown != null)
        invokeCallback<void>('onSecondaryTapDown',
          () => onSecondaryTapDown(details));
      break;
    default:
  }
}

Gestures

Tap & DoubleTap & LongPress

Common gestures such as tap, double tap, and long press only require passing a callback function. These gestures also provide callbacks for the intermediate states of the interaction, such as press down, lift up, and cancel.

GestureDetector(
  onTap: () {
    showSnack(context, "OnTap");
  },
  onDoubleTap: () {
    showSnack(context, "onDoubleTap");
  },
  onLongPress: () {
    showSnack(context, "onLongPress");
  },
  child: Transform.translate(
    offset: offset,
    child: Text(
      "Transform",
      style: TextStyle(fontSize: 25),
    ),
  ),
);
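
The intermediate states mentioned above (press down, lift up, cancel) can be observed in the same way; a minimal sketch reusing the showSnack helper from the example above:

GestureDetector(
  onTapDown: (TapDownDetails details) {
    // Fired when the pointer first contacts the screen.
    showSnack(context, "onTapDown at ${details.localPosition}");
  },
  onTapUp: (TapUpDetails details) {
    // Fired when the pointer is lifted and the tap is confirmed.
    showSnack(context, "onTapUp");
  },
  onTapCancel: () {
    // Fired when the tap is abandoned, e.g. another recognizer wins.
    showSnack(context, "onTapCancel");
  },
  child: Text(
    "Tap states",
    style: TextStyle(fontSize: 25),
  ),
);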

Drag

GestureDetector supports three kinds of drag: horizontal drag, vertical drag, and pan.

Drag vertically

GestureDetector(
  child: Container(
    child: Center(
      child: Text(verticalDragEvent),
    ),
    color: Colors.red,
  ),
  onVerticalDragStart: (dragStart) {},
  onVerticalDragUpdate: (dragUpdate) {
    verticalDragEvent = dragUpdate.delta.toString();
    setState(() {});
  },
  onVerticalDragDown: (dragDown) {},
  onVerticalDragCancel: () {},
  onVerticalDragEnd: (dragEnd) {},
)

Drag horizontally

GestureDetector(
  child: Container(
    child: Center(
      child: Text(horizontalDragEvent),
    ),
    color: Colors.blue,
  ),
  onHorizontalDragStart: (dragStart) {},
  onHorizontalDragUpdate: (dragUpdate) {
    horizontalDragEvent = dragUpdate.delta.toString();
    setState(() {});
  },
  onHorizontalDragDown: (dragDown) {},
  onHorizontalDragCancel: () {},
  onHorizontalDragEnd: (dragEnd) {},
),

Drag in any direction (Pan)

GestureDetector(
  child: Container(
    child: Center(
      child: Text(panDragEvent),
    ),
    color: Colors.green,
  ),
  onPanStart: (panStart) {},
  onPanDown: (panDown) {},
  onPanEnd: (panEnd) {},
  onPanUpdate: (panUpdate) {
    panDragEvent = panUpdate.delta.toString();
    setState(() {});
  },
  onPanCancel: () {},
),
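
In practice, the per-event delta is usually accumulated to actually move a widget rather than just displayed. A minimal sketch (the dragOffset field and the sizes are my own, for illustration):

Offset dragOffset = Offset.zero;

// Inside build():
GestureDetector(
  onPanUpdate: (panUpdate) {
    setState(() {
      // Accumulate the movement reported since the last update.
      dragOffset += panUpdate.delta;
    });
  },
  child: Transform.translate(
    offset: dragOffset,
    child: Container(width: 100, height: 100, color: Colors.green),
  ),
);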

Offset (globalPosition vs. localPosition)

There are two important properties in GestureDetector's callback detail objects: globalPosition and localPosition, both of which are Offset objects. As the names suggest, globalPosition is the offset (dx, dy) of the current touch point in the global (screen) coordinate system, while localPosition is the offset (dx, dy) of the touch point in the coordinate system of the component that receives the gesture, that is, relative to the component's own top-left corner.

For example, in the following code a GestureDetector listens on the Container, the update detail object is obtained in the onPanUpdate callback, and the two offsets are displayed in the Text. The dx values of globalPosition and localPosition are the same, but the dy values differ, because the Scaffold has an AppBar, so the body's own coordinate system does not start at the global origin. If the AppBar is removed and the body fills the whole Scaffold, the globalPosition and localPosition offsets reported by the gesture callbacks become the same.

Note that globalPosition on Android also includes the height of the phone’s status bar.

MaterialApp(
      theme: AppTheme.themes[store.state.appThemeState.themeType],
      home: Scaffold(
        appBar: AppBar(),
        body: GestureDetector(
          onPanStart: (detail) {
            showLog(detail.runtimeType, detail.localPosition,
                detail.globalPosition);
          },
          onPanUpdate: (detail) {
            showLog(detail.runtimeType, detail.localPosition,
                detail.globalPosition);
            setState(() {
              offsetText = "globalPosition: ${Offset(detail.globalPosition.dx, detail.globalPosition.dy).toString()} \n"
                  "localPosition: ${Offset(detail.localPosition.dx, detail.localPosition.dy).toString()}";
            });
          },
          onPanEnd: (detail) {
            setState(() {
              offsetText = "end";
            });
          },
          child: Container(
            color: Colors.red,
            width: double.infinity,
            height: double.infinity,
            child: Center(
              child: Text(
                offsetText,
                style: TextStyle(
                  color: Colors.white,
                  fontSize: 25,
                ),
              ),
            ),
          ),
        ),
      ),
);

Running the example with and without the AppBar in the Scaffold, you can see the difference between the two offsets in the text displayed inside the Container.
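
If you need to convert between the two coordinate systems yourself, the widget's RenderBox can do it. A minimal sketch (it assumes the callback runs in a State whose context belongs to the widget receiving the gesture):

onPanUpdate: (detail) {
  final RenderBox box = context.findRenderObject() as RenderBox;
  // Converts a global offset into this widget's local coordinate system;
  // the result should match detail.localPosition for this widget.
  final Offset local = box.globalToLocal(detail.globalPosition);
  print(local);
},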

Scale

Scaling requires two pointers (touch points) working together. When there is only one pointer on the screen, the scale is 1 and the angle is 0. When a second pointer appears, the initial positions of the two pointers become the reference: the scale is reset to 1 and the angle to 0, and as the fingers move, the distance and the angle between the two points are used to compute the new scale and rotation values.

GestureDetector(
  child: Transform(
    origin: Offset(size.width / 2, size.height / 2),
    transform: Matrix4.rotationZ(rotation).scaled(scale),
    child: Image.asset("res/img/jay.jpg"),
  ),
  onScaleStart: (scaleStartDetail) {},
  onScaleUpdate: (scaleUpdateDetail) {
    scale = scaleUpdateDetail.scale;
    rotation = scaleUpdateDetail.rotation;
    setState(() {});
  },
  onScaleEnd: (scaleEndDetail) {},
),

The ScaleUpdateDetails object received in onScaleUpdate carries the information computed from the two touch points: scale, rotation, and focalPoint. Combining these values with Transform, as in the code above, lets a gesture scale and rotate the component.
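
One practical note: the scale reported by ScaleUpdateDetails restarts at 1.0 every time a new two-finger gesture begins, so the current scale is usually saved in onScaleStart and multiplied by the incoming value. A minimal sketch (the baseScale and scale fields are my own):

double baseScale = 1.0;
double scale = 1.0;

// Inside build():
GestureDetector(
  onScaleStart: (scaleStartDetail) {
    // Remember the scale the widget already has when a new gesture begins.
    baseScale = scale;
  },
  onScaleUpdate: (scaleUpdateDetail) {
    setState(() {
      scale = baseScale * scaleUpdateDetail.scale;
    });
  },
  child: Transform.scale(
    scale: scale,
    child: Image.asset("res/img/jay.jpg"),
  ),
);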

🚀 See here 🚀 for the full code

In closing

Simple gesture handling is not particularly difficult; the real difficulty lies in combining multiple gestures and resolving gesture conflicts. Components such as SingleChildScrollView and Draggable are all built on gestures. Reading the source code is probably the best way to learn gesture handling and to build custom gesture interactions.