Xianyu Technology – Mirror Space

Introduction

After an app is released, the biggest headache for developers is how to reproduce and locate user-side problems reported after delivery. This remains a blank field in the industry, with no complete, systematic solution. Based on its own business pain points, the Xianyu technology team proposed a new set of technical ideas to solve this problem on Flutter.

We capture the flow of UI events and business data at the underlying system level, and use the captured data to reproduce online problems through an event playback mechanism. This article first introduces how Flutter touch gesture events work, then explains how to record Flutter UI gesture events, then how to restore and play them back, and finally presents the overall framework, including native recording and playback. To better understand this article, readers may first read my previous article on the native recording and playback technology behind Thousands of Online Problems.

Background

Nowadays, apps generally provide an entrance for users to report problems. There are typically two ways for users to give feedback:

  • Typing a text description, possibly with screenshots
  • Recording a video of the problem directly

These two types of feedback often lead to the following complaints:

  • User: Typing out the problem is time-consuming and laborious
  • Developer 1: I can't understand what the user's feedback is saying
  • Developer 2: I roughly understand what the user said, but I can't reproduce it offline
  • Developer 3: I watched the video the user recorded, but I still can't reproduce it offline or locate the problem

Therefore, to solve the above problems, we designed an online problem replay system based on a new set of ideas.

The basics of Flutter gestures

To record and play back Flutter UI events, we must first understand the fundamentals of Flutter UI gestures.

1. Flutter UI raw touch data: Pointer

The gesture system in Flutter involves two levels of concepts. The first level is the raw touch data (pointer), which describes the time, type, position, and movement of pointers (for example, touch, mouse, and stylus) on the screen. The second level is gestures, which describe semantic actions composed of one or more pieces of raw pointer data. In general, raw touch data alone doesn't mean anything. The raw touch data is passed from the system to the native layer, which then forwards it to Flutter through the FlutterView channel. The interface through which Flutter receives raw data from native is as follows:

  void _handlePointerDataPacket(ui.PointerDataPacket packet) {
    // We convert pointer data to logical pixels so that e.g. the touch slop can be
    // defined in a device-independent manner.
    _pendingPointerEvents.addAll(
        PointerEventConverter.expand(packet.data, ui.window.devicePixelRatio));
    if (!locked)
      _flushPointerEventQueue();
  }

2. Flutter UI hit testing

When the screen receives a touch, the Dart framework performs a hit test on your application to determine which views (RenderObjects) exist at the position where the touch hits the screen. The touch event is then dispatched to the innermost RenderObject. From there, the event bubbles up through the RenderObject tree, visiting every RenderObject along the path. The last entry in the resulting hit-test list is the WidgetsFlutterBinding (strictly speaking, not a RenderObject), which we will cover later.

  void _handlePointerEvent(PointerEvent event) {
    assert(!locked);
    HitTestResult result;
    if (event is PointerDownEvent) {
      assert(!_hitTests.containsKey(event.pointer));
      result = HitTestResult();
      hitTest(result, event.position);
      _hitTests[event.pointer] = result;
      assert(() {
        if (debugPrintHitTestResults)
          debugPrint('$event: $result');
        return true;
      }());
    } else if (event is PointerUpEvent || event is PointerCancelEvent) {
      result = _hitTests.remove(event.pointer);
    } else if (event.down) {
      result = _hitTests[event.pointer];
    } else {
      return; // We currently ignore add, remove, and hover move events.
    }
    if (result != null)
      dispatchEvent(event, result);
  }

The code above uses hitTest() to detect which views are involved in the current touch pointer event. Finally, the event is handled by dispatchEvent(event, result).

  void dispatchEvent(PointerEvent event, HitTestResult result) {
    assert(!locked);
    assert(result != null);
    for (HitTestEntry entry in result.path) {
      try {
        entry.target.handleEvent(event, entry);
      } catch (exception, stack) {
        // Error reporting omitted.
      }
    }
  }

The code above calls each view's gesture recognizer in turn to handle the current touch event (that is, to decide whether to accept it). Every RenderObject is required to implement the HitTestTarget interface, so every RenderObject must implement handleEvent, which is where gesture recognition happens.

  abstract class RenderObject extends AbstractNode with DiagnosticableTreeMixin implements HitTestTarget
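For illustration, here is a simplified sketch of our own (not actual Flutter framework code) of how a render object that owns a recognizer typically implements handleEvent: forwarding the PointerDownEvent to the recognizer is what enters it into the gesture arena.

  import 'package:flutter/gestures.dart';
  import 'package:flutter/rendering.dart';

  // A simplified sketch: a render object owning a TapGestureRecognizer
  // forwards down events to it, entering it into the gesture arena.
  class RenderTapTarget extends RenderProxyBox {
    final TapGestureRecognizer _tap = TapGestureRecognizer()
      ..onTap = () => print('tap recognized');

    @override
    bool hitTestSelf(Offset position) => true; // participate in hit testing

    @override
    void handleEvent(PointerEvent event, BoxHitTestEntry entry) {
      if (event is PointerDownEvent) {
        _tap.addPointer(event); // the recognizer joins the arena for this pointer
      }
    }
  }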

With the exception of the final WidgetsFlutterBinding entry, each RenderObject calls its own handleEvent to run gesture recognition and decide whether to give up on the current gesture. If a recognizer does not give up, it registers itself with the pointer router and joins the gesture arena. WidgetsFlutterBinding's handleEvent is then called to route the event and let the arena decide which recognizer wins. WidgetsFlutterBinding.handleEvent is thus the unified entry point; its code is as follows:

  void handleEvent(PointerEvent event, HitTestEntry entry) {
    pointerRouter.route(event);
    if (event is PointerDownEvent) {
      gestureArena.close(event.pointer);
    } else if (event is PointerUpEvent) {
      gestureArena.sweep(event.pointer);
    }
  }

3. Flutter UI gesture resolution

It follows from the above that a single touch event may trigger more than one gesture recognizer. The framework decides which gesture the user intended by having each recognizer join a "gesture arena". The arena uses the following rules to decide which gesture wins, and they are quite simple (a minimal example follows the list):

  1. At any time, a recognizer can declare defeat and voluntarily leave the arena. If only one recognizer then remains, that recognizer is the winner, and it alone receives and responds to the touch event
  2. At any time, a recognizer can declare victory; it immediately wins, and all remaining recognizers lose
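To make the arena concrete, here is a minimal sketch of our own (not code from the article): two nested GestureDetectors both enter a tap recognizer into the arena, and by default the innermost one wins, so only its callback fires.

  import 'package:flutter/material.dart';

  // A minimal sketch: both detectors enter a TapGestureRecognizer into the
  // arena for the same pointer; the innermost recognizer wins by default,
  // so tapping the red box prints only 'inner tap'.
  class ArenaDemo extends StatelessWidget {
    @override
    Widget build(BuildContext context) {
      return GestureDetector(
        onTap: () => print('outer tap'), // loses the arena
        child: Container(
          color: Colors.blue,
          padding: const EdgeInsets.all(48.0),
          child: GestureDetector(
            onTap: () => print('inner tap'), // wins the arena
            child: Container(color: Colors.red, width: 100.0, height: 100.0),
          ),
        ),
      );
    }
  }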

4. Flutter UI gesture example

In the following example, the screen window consists of views A, B, C, D, E, F, K, and G, where view A is the root (bottom-most) view. The red circle represents the touch position; the touch lands in the middle of view G.

Based on the hit test, the view path that responds to this touch event is: WidgetsFlutterBinding <- A <- C <- K <- G (where G, K, C, and A are RenderObjects).

Traversing this path list, each view's entry.target.handleEvent is called in turn (G, K, C, A), putting its recognizer into the arena to compete, although some views may voluntarily give up on recognizing the touch event based on their own logic. This process is shown below.

The handleEvent() methods are called in the order G -> K -> C -> A -> WidgetsFlutterBinding, and finally WidgetsFlutterBinding's own handleEvent() lets the arena decide which gesture recognizer wins.

The winning gesture recognizer then delivers the result to upper-level business code through a callback method.
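For instance, here is a minimal sketch of our own (using TapGestureRecognizer directly; this is not the article's code) showing the callback through which business code is notified:

  import 'package:flutter/gestures.dart';

  // A minimal sketch: business code registers a callback on the recognizer;
  // when the recognizer wins the arena, the framework invokes the callback.
  final TapGestureRecognizer tapRecognizer = TapGestureRecognizer()
    ..onTap = () {
      print('tap won the arena: business code runs here');
    };

  // Each PointerDownEvent on the owning view is fed to the recognizer,
  // which enters it into the arena:
  //   tapRecognizer.addPointer(downEvent);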

Flutter UI recording

Given the Flutter gesture handling described above, we can intercept gesture callbacks by wrapping the gesture recognizer's callback. During interception we can read the view tree along the WidgetsFlutterBinding <- A <- C <- K <- G chain; we only need to record the tree, the relevant attributes of its nodes, and the gesture type. During playback, this information is matched against the views of the current interface. Below is the recording code for tap events; recording other gestures works the same way and is skipped here.

  static GestureTapCallback onTapWithRecord(
      GestureTapCallback orgOnTap, BuildContext context) {
    if (null != orgOnTap && null != context) {
      final GestureTapCallback onTapWithRecord = () {
        if (bStartRecord) {
          saveTapInfo(context, TouchEventUIType.OnTap, null);
        }
        if (null != orgOnTap) {
          orgOnTap();
        }
      };
      return onTapWithRecord;
    }
    return orgOnTap;
  }
  
  static void saveTapInfo(BuildContext context, TouchEventUIType type, Offset point) {
    if (null == point && null != pointerPacketList && pointerPacketList.isNotEmpty) {
      final ui.PointerDataPacket last = pointerPacketList.last;
      if (null != last && null != last.data && last.data.isNotEmpty) {
        final ui.Rect rect = QueReplayTool.getWindowRect(context);
        point = new Offset(
            last.data.last.physicalX / ui.window.devicePixelRatio - rect.left,
            last.data.last.physicalY / ui.window.devicePixelRatio - rect.top);
      }
    }
    final RecordInfo record = createTapRecordInfo(context, type, point);
    if (null != record) {
      FlutterQuestionReplayPlugin.saveRecordDataToNative(record);
    }
    clearPointerPacketList();
  }
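A hypothetical usage sketch follows (RecordHook is our stand-in name for the class hosting onTapWithRecord, which the article does not show): the wrapper is applied where the GestureDetector is built, so recording stays transparent to business code.

  import 'package:flutter/material.dart';

  // A hypothetical sketch: RecordHook stands in for the recording class that
  // hosts onTapWithRecord; its real name is not shown in the article.
  Widget buildRecordableButton(BuildContext context, GestureTapCallback onTap) {
    return GestureDetector(
      // Wrap the business onTap so the tap is recorded before it runs.
      onTap: RecordHook.onTapWithRecord(onTap, context),
      child: Container(width: 120.0, height: 44.0, color: Colors.orange),
    );
  }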

The recording flowchart is as follows:

Flutter UI playback

UI playback is divided into two parts. The first part matches the recorded information to the corresponding view of the current interface. The second part simulates the recorded gesture on that view. If this information is set up incorrectly, playback can crash; we also have to handle cases such as the actual scroll distance falling short of the recorded one and how to compensate for it. Below is the scroll-event playback flowchart; the same principle applies to other gestures.

The "pre-processing, recognition consumption" above refers to the distance a scroll gesture recognizer consumes at the start of a scroll while deciding whether the input really is a scroll (the touch slop). So before the control can scroll, we first need to generate enough touch point data for the recognizer to classify the gesture as a scroll; only then does the actual scrolling happen. Here is the code for the scroll processing logic:

  void verticalScroll(double dstPoint, double moveDis) {
    preReplayPacket = null;
    if (0.0 != moveDis) {
      // Calculate the scroll direction and the per-step pixel offset
      // (some code omitted here for brevity).
      int count =
          ((ui.window.devicePixelRatio * moveDis) / (unit.abs())).round() * 2;
      if (count < minCount) {
        // If count is too small, the whole move is consumed by the scroll
        // view's touch slop and nothing scrolls; the trailing up event
        // (ui.PointerChange.up) may then be treated as a tap on a cell and
        // trigger an unwanted click/jump.
        count = minCount;
      }
      final double physicalX = rect.center.dx * ui.window.devicePixelRatio;
      double physicalY;
      final double needOffset = (count * unit).abs();
      final double targetHeight = rect.size.height * ui.window.devicePixelRatio;
      final int scrollPadding = rect.height ~/ 4;
      if (needOffset <= targetHeight / 2) {
        physicalY = rect.center.dy * ui.window.devicePixelRatio;
      } else if (needOffset > targetHeight / 2 && needOffset < targetHeight) {
        physicalY = (orgMoveDis > 0)
            ? (rect.bottom - scrollPadding) * ui.window.devicePixelRatio
            : (rect.top + scrollPadding) * ui.window.devicePixelRatio;
      } else {
        physicalY = (orgMoveDis > 0)
            ? (rect.bottom - scrollPadding) * ui.window.devicePixelRatio
            : (rect.top + scrollPadding) * ui.window.devicePixelRatio;
        count = ((rect.height - 2 * scrollPadding) *
                ui.window.devicePixelRatio /
                unit.abs())
            .round();
      }
      final List<ui.PointerDataPacket> packetList =
          createTouchDataList(count, unit, physicalY, physicalX);
      exeScroolTouch(packetList, dstPoint);
    } else {
      new Timer(const Duration(microseconds: fpsInterval), () {
        replayScrollEvent();
      });
    }
  }

The logic above works in four steps:

  1. Calculate the scroll direction and the pixel offset unit for each generated touch data point
  2. Calculate the starting position of the scroll
  3. Generate the list of raw scroll touch data
  4. Emit the raw touch data in a loop, checking each time whether the target position has been reached

The code that generates the scrolling raw touch data list is shown below. The first item is down touch data and the rest are move touch data; up data is not generated here. The up touch data is generated only when the scroll distance reaches the target position. Why is it designed this way? I'll leave that for you to think about!

  List<ui.PointerDataPacket> createTouchDataList(
      int count, double unit, double physicalY, double physicalX) {
    final List<ui.PointerDataPacket> packetList = <ui.PointerDataPacket>[];
    int uptime = 0;
    for (int i = 0; i < count; i++) {
      ui.PointerChange change;
      if (0 == i) {
        change = ui.PointerChange.down;
      } else {
        change = ui.PointerChange.move;
        physicalY += unit;
      }
      uptime += replayOnePointDuration;
      final ui.PointerData pointer = new ui.PointerData(
          timeStamp: new Duration(microseconds: uptime),
          change: change,
          kind: ui.PointerDeviceKind.touch,
          device: 1,
          physicalX: physicalX,
          physicalY: physicalY,
          buttons: 0,
          pressure: 0.0,
          pressureMin: 0.0,
          pressureMax: touchPressureMax,
          distance: 0.0,
          distanceMax: 0.0,
          radiusMajor: downRadiusMajor,
          radiusMinor: 0.0,
          radiusMin: downRadiusMin,
          radiusMax: downRadiusMax,
          orientation: orientation,
          tilt: 0.0);
      final List<ui.PointerData> pointerList = <ui.PointerData>[];
      pointerList.add(pointer);
      final ui.PointerDataPacket packet =
          new ui.PointerDataPacket(data: pointerList);
      packetList.add(packet);
    }
    return packetList;
  }

The raw touch data is emitted in a loop, together with the compensation logic, as follows: we continuously send touch data to the system with a timer, and before each send we check whether the target position has already been reached.

  void exeScroolTouch(List<ui.PointerDataPacket> packetList, double dstPoint) {
    Timer.periodic(const Duration(microseconds: fpsInterval), (Timer timer) {
      final ScrollableState state = element.state;
      final double curPoint = state.position.pixels;
      // ui.window.physicalSize.height * state.position.pixels / RecordInfo.recordedWindowH;
      final double offset = (dstPoint - curPoint).abs();
      final bool existOffset = offset > 1;
      if (packetList.isNotEmpty && existOffset) {
        // Not at the target yet and packets remain: keep sending touch data.
        sendTouchData(packetList, offset);
      } else if (packetList.isNotEmpty) {
        // Target reached with packets left over: finish with an up event.
        record.succ = true;
        timer.cancel();
        packetList.clear();
        if (null != preReplayPacket) {
          final ui.PointerDataPacket packet = createUpTouchPointPacket();
          if (null != packet) {
            ui.window.onPointerDataPacket(packet);
          }
        }
        new Timer(const Duration(microseconds: fpsInterval), () {
          replayScrollEvent();
        });
      } else if (existOffset) {
        // Packets exhausted but target not reached: send an up event, then
        // compensate with another scroll over the remaining distance.
        record.succ = true;
        timer.cancel();
        packetList.clear();
        final ui.PointerDataPacket packet = createUpTouchPointPacket();
        if (null != packet) {
          ui.window.onPointerDataPacket(packet);
        }
        verticalScroll(dstPoint, dstPoint - curPoint);
      } else {
        finishReplay();
      }
    });
  }
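For comparison, replaying a tap is much simpler. Here is a minimal sketch of our own (not the article's code) that injects a down/up pair through ui.window.onPointerDataPacket, the same entry point the scroll playback above uses to feed packets back into the engine:

  import 'dart:ui' as ui;

  // A minimal sketch: replay a tap by feeding a down/up pair of PointerData
  // into the engine's normal input path at the matched physical position.
  void simulateTap(double physicalX, double physicalY) {
    final ui.PointerDataPacket downPacket =
        ui.PointerDataPacket(data: <ui.PointerData>[
      ui.PointerData(
          change: ui.PointerChange.down,
          kind: ui.PointerDeviceKind.touch,
          device: 1,
          physicalX: physicalX,
          physicalY: physicalY),
    ]);
    final ui.PointerDataPacket upPacket =
        ui.PointerDataPacket(data: <ui.PointerData>[
      ui.PointerData(
          change: ui.PointerChange.up,
          kind: ui.PointerDeviceKind.touch,
          device: 1,
          physicalX: physicalX,
          physicalY: physicalY),
    ]);
    ui.window.onPointerDataPacket(downPacket); // touch down
    ui.window.onPointerDataPacket(upPacket); // touch up -> recognized as a tap
  }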

Overall framework diagram of problem replay

The following diagram covers both Native and Flutter, and both UI and data.

Conclusion

  • The core of this article consists of four parts: first, the principles of Flutter gestures; second, Flutter UI recording; third, Flutter UI playback; and fourth, the overall framework diagram. Due to limited space, all four parts are relatively high-level rather than detailed; we ask for your understanding. There is a lot of code behind Flutter recording and playback; only some of the important and easy-to-follow code is included here, and code that is less important or hard to read is omitted.
  • If you are interested in the technical points covered here, you can follow our official account. We will publish detailed, in-depth analyses of these points later.
  • If you think something here is wrong, please point it out. Thank you!

Follow-up work

So far, our Flutter UI recording and playback works end to end, but we need to keep optimizing it. We will focus on two points (a sketch of one possible approach to the first follows below):

  1. Make the simulated touch events during playback more realistic, for example scroll acceleration: a real scroll is actually a curve that changes over time.
  2. Resolve inconsistencies between recording and playback. For example, during keyboard input we recorded the gestures for typing "123", but due to a bug in the upper-level business code the "3" got no response at the time, so the input box showed only "12". During playback we simulate the gestures for "123", and afterwards the input box shows "123", so recording and playback disagree. How do we solve this? It is a tricky problem; we do have a solution, and we will cover it later.
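As a sketch of one possible approach to the first point (our assumption, not the article's implementation), the per-step offsets could follow an ease-out curve instead of a constant unit, so the simulated scroll decelerates like a real fling:

  import 'dart:math' as math;

  // A sketch of one possible approach (not the article's implementation):
  // distribute the total scroll distance over `count` steps along a cubic
  // ease-out curve, so the simulated scroll slows down like a real fling.
  List<double> easedOffsets(double totalDistance, int count) {
    final List<double> offsets = <double>[];
    double previous = 0.0;
    for (int i = 1; i <= count; i++) {
      final double t = i / count; // normalized progress, 0..1
      final double eased = 1 - math.pow(1 - t, 3).toDouble(); // cubic ease-out
      offsets.add(totalDistance * (eased - previous)); // per-step delta
      previous = eased;
    }
    return offsets;
  }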

Yq.aliyun.com/articles/69…