Unlike many of its built-in controls, SwiftUI does not wrap UIGestureRecognizer (or NSGestureRecognizer); it builds its own gesture architecture from scratch. SwiftUI gestures lower the barrier to entry somewhat, but the lack of an API exposing the underlying touch data severely limits how deeply developers can customize them. In SwiftUI we cannot build a brand-new gesture the way we would subclass UIGestureRecognizer; a so-called custom gesture is really just a recombination of the system's preset gestures. This article shows, through several examples, how to build the gestures you need using only native SwiftUI tools.

This post was originally published on my blog www.fatbobman.com

Welcome to subscribe to my WeChat public account: [Elbow’s Swift Notepad]

Basics

Preset gestures

SwiftUI currently offers five preset gestures: tap, long press, drag, magnification (zoom), and rotation. Convenience calls like onTapGesture are in fact view extensions built on top of these gestures.

  • Tap (TapGesture)

    You can set the number of taps required (single tap, double tap). It is one of the most frequently used gestures.

  • Long press (LongPressGesture)

    Triggers the specified closure once the press has lasted for the specified duration.

  • Drag (DragGesture)

    SwiftUI combines Pan and Swipe into a single gesture that provides drag data as the position changes.

  • Zoom (MagnificationGesture)

    Two-finger pinch to zoom.

  • Rotate (RotationGesture)

    Two-finger rotation.

Tap, long press, and drag support only a single finger; SwiftUI does not offer a way to specify the number of fingers (touch count).
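
As mentioned above, conveniences like onTapGesture are just view extensions over the preset gestures. Below is a minimal sketch (the view and property names are my own) showing the rough equivalence between the extension and attaching TapGesture by hand:

import SwiftUI

struct TapEquivalenceDemo: View {
    @State private var message = "Waiting"

    var body: some View {
        VStack(spacing: 20) {
            // Convenience extension
            Text("Extension")
                .onTapGesture(count: 2) { message = "Double tap via onTapGesture" }

            // The same behavior, attaching TapGesture explicitly
            Text("Explicit")
                .gesture(
                    TapGesture(count: 2)
                        .onEnded { message = "Double tap via TapGesture" }
                )

            Text(message)
        }
    }
}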

In addition to the public gestures mentioned above, SwiftUI also has a number of internal (non-public) gestures used by system controls, such as ScrollGesture, _ButtonGesture, and so on.

Button's built-in gesture is implemented in a more sophisticated way than TapGesture: in addition to providing more callback points, it intelligently handles the size of the pressable area, improving the hit rate of finger taps.

Value

SwiftUI provides a different value type depending on the kind of gesture.

  • Tap: the value type is Void
  • Long press: the value type is Bool; the value becomes true once the press begins
  • Drag: provides the most comprehensive data, including current location, offset (translation), event time, predicted end location, predicted offset, and so on
  • Zoom: the value type is CGFloat, the magnification amount
  • Rotation: the value type is Angle, the rotation angle

Using the map method, you can convert the value provided by a gesture into another type for later use.
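
For example, here is a minimal sketch (the names are my own) that uses map to reduce DragGesture's rich value to a single CGFloat before the view consumes it:

import SwiftUI

struct MapGestureDemo: View {
    @State private var offsetX: CGFloat = 0

    var body: some View {
        Circle()
            .frame(width: 60, height: 60)
            .offset(x: offsetX)
            .gesture(
                DragGesture()
                    // Value changes from DragGesture.Value to CGFloat
                    .map { $0.translation.width }
                    .onChanged { offsetX = $0 }
                    .onEnded { _ in offsetX = 0 }
            )
    }
}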

Timing

SwiftUI gestures do not expose an internal state machine. Instead, you attach closures for specific moments, and the gesture invokes them automatically at the appropriate time.

  • onEnded

    The action performed when the gesture ends.

  • onChanged

    The action performed whenever the value provided by the gesture changes. Available only when Value conforms to Equatable, so TapGesture does not support it.

  • updating

    Fires at the same time as onChanged. It places no special requirement on Value; compared with onChanged, it adds the ability to update a gesture property (GestureState) and to access the transaction.

Different gestures care about different moments. Taps usually only need onEnded; onChanged (or updating) is more useful for drag, zoom, and rotation; a long press calls onEnded only when the configured duration has been reached.
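
A minimal sketch (the names are my own) putting the three callbacks on one drag gesture: updating drives a GestureState flag that resets automatically, onChanged tracks the live offset, and onEnded runs once when the finger lifts:

import SwiftUI

struct TimingDemo: View {
    @GestureState private var isDragging = false
    @State private var current: CGSize = .zero
    @State private var last: CGSize = .zero

    var body: some View {
        Circle()
            .fill(isDragging ? Color.orange : Color.blue)
            .frame(width: 80, height: 80)
            .offset(current)
            .gesture(
                DragGesture()
                    .updating($isDragging) { _, state, _ in
                        state = true // snaps back to false when the gesture ends
                    }
                    .onChanged { value in
                        current = CGSize(width: last.width + value.translation.width,
                                         height: last.height + value.translation.height)
                    }
                    .onEnded { _ in
                        last = current // executed once, at the end of the drag
                    }
            )
    }
}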

GestureState

A property wrapper designed specifically for SwiftUI gestures. Like State, views can depend on it and are refreshed when it updates. Compared with State, it differs in the following ways:

  • It can only be modified inside a gesture's updating method and is read-only everywhere else in the view
  • When the gesture ends, the property associated with it (via updating) is automatically restored to its initial value
  • resetTransaction lets you set the animation used when the data is restored to its initial value
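
A minimal sketch (the names are my own): the offset is writable only inside updating, snaps back to .zero automatically when the drag ends, and the snap-back uses the animation supplied via resetTransaction:

import SwiftUI

struct GestureStateDemo: View {
    @GestureState(resetTransaction: Transaction(animation: .spring()))
    private var dragOffset: CGSize = .zero

    var body: some View {
        Circle()
            .frame(width: 80, height: 80)
            .offset(dragOffset)
            .gesture(
                DragGesture()
                    .updating($dragOffset) { value, state, _ in
                        state = value.translation // read-only everywhere else in the view
                    }
            )
    }
}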

Combining gestures

SwiftUI provides several methods for combining gestures, which can be chained together to form gestures for other purposes.

  • simultaneously (simultaneous recognition)

    Combines one gesture with another to create a new gesture that recognizes both at the same time. For example, combining the zoom and rotation gestures lets you zoom and rotate a picture at the same time.

  • sequenced (sequential recognition)

    Chains two gestures so that the second one runs only after the first succeeds. For example, chaining a long press and a drag allows dragging only after the press has lasted the required time.

  • exclusively (exclusive recognition)

    Combines two gestures, but only one of them can be recognized, with priority given to the first gesture.

The Value type of the combined gesture changes accordingly; you can still use map to convert it into a more convenient data type.
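
For instance, here is a minimal sketch (the names are my own) of sequenced: the drag becomes active only after a 0.5-second long press succeeds, and map collapses the combined value into a plain CGSize:

import SwiftUI

struct SequencedDemo: View {
    @GestureState private var offset: CGSize = .zero

    var body: some View {
        RoundedRectangle(cornerRadius: 12)
            .frame(width: 100, height: 100)
            .offset(offset)
            .gesture(
                LongPressGesture(minimumDuration: 0.5)
                    .sequenced(before: DragGesture())
                    .map { value -> CGSize in
                        // Drag data is only available in the .second(true, drag) phase
                        if case .second(true, let drag?) = value {
                            return drag.translation
                        }
                        return .zero
                    }
                    .updating($offset) { value, state, _ in
                        state = value
                    }
            )
    }
}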

Ways to define a custom gesture

Typically, developers create a custom gesture inside the view, which takes less code and makes it easier to combine with the view's other data. For example, the following code creates a gesture inside the view that supports zooming and rotating at the same time:

struct GestureDemo: View {
    @GestureState(resetTransaction: .init(animation: .easeInOut)) var gestureValue = RotateAndMagnify()

    var body: some View {
        let rotateAndMagnifyGesture = MagnificationGesture()
            .simultaneously(with: RotationGesture())
            .updating($gestureValue) { value, state, _ in
                state.angle = value.second ?? .zero
                state.scale = value.first ?? 1 // fall back to the identity scale when magnification data is absent
            }

        return Rectangle()
            .fill(LinearGradient(colors: [.blue, .green, .pink], startPoint: .top, endPoint: .bottom))
            .frame(width: 100, height: 100)
            .shadow(radius: 8)
            .rotationEffect(gestureValue.angle)
            .scaleEffect(gestureValue.scale)
            .gesture(rotateAndMagnifyGesture)
    }

    struct RotateAndMagnify {
        var scale: CGFloat = 1.0
        var angle: Angle = .zero
    }
}

Alternatively, you can define the gesture as a structure that conforms to the Gesture protocol, which makes it easy to reuse.

This can be further simplified by encapsulating gestures or gesture processing logic as view extensions.

To highlight certain aspects of the functionality, the demo code below may look more elaborate than necessary; in practice, you can simplify it as needed.

Example 1: Swipe

1.1 Goal

Create a swipe gesture, highlighting how to build a structure that conforms to the Gesture protocol and how to transform the gesture's data.

1.2 Approach

Among SwiftUI's preset gestures, only DragGesture provides data that can be used to determine the direction of movement. The swipe direction is determined from the offset, and map is used to convert the rich drag data into simple direction data.

1.3 Implementation

public struct SwipeGesture: Gesture {
    public enum Direction: String {
        case left, right, up, down
    }

    public typealias Value = Direction

    private let minimumDistance: CGFloat
    private let coordinateSpace: CoordinateSpace

    public init(minimumDistance: CGFloat = 10, coordinateSpace: CoordinateSpace = .local) {
        self.minimumDistance = minimumDistance
        self.coordinateSpace = coordinateSpace
    }

    public var body: AnyGesture<Value> {
        AnyGesture(
            DragGesture(minimumDistance: minimumDistance, coordinateSpace: coordinateSpace)
                .map { value in
                    let horizontalAmount = value.translation.width
                    let verticalAmount = value.translation.height

                    if abs(horizontalAmount) > abs(verticalAmount) {
                        if horizontalAmount < 0 { return .left } else { return .right }
                    } else {
                        if verticalAmount < 0 { return .up } else { return .down }
                    }
                }
        )
    }
}

public extension View {
    func onSwipe(minimumDistance: CGFloat = 10, coordinateSpace: CoordinateSpace = .local,
                 perform: @escaping (SwipeGesture.Direction) -> Void) -> some View {
        gesture(
            SwipeGesture(minimumDistance: minimumDistance, coordinateSpace: coordinateSpace)
                .onEnded(perform)
        )
    }
}

1.4 Demo

struct SwipeTestView: View {
    @State var direction = ""
    var body: some View {
        Rectangle()
            .fill(.blue)
            .frame(width: 200, height: 200)
            .overlay(Text(direction))
            .onSwipe { direction in
                self.direction = direction.rawValue
            }
    }
}

1.5 Notes

  • Why use AnyGesture

    The Gesture protocol requires implementing a hidden type method, _makeGesture. Apple provides no documentation on how to implement it, but SwiftUI supplies a default implementation with constraints. When a structure does not use a custom Value type, SwiftUI can infer Self.Body.Value and the body can be declared as some Gesture. Because this example does use a custom Value type, the body must be declared as AnyGesture<Value> to satisfy the condition that enables the default implementation of _makeGesture:

  extension Gesture where Self.Value == Self.Body.Value {
      public static func _makeGesture(gesture: SwiftUI._GraphValue<Self>, inputs: SwiftUI._GestureInputs) -> SwiftUI._GestureOutputs<Self.Body.Value>
  }

1.6 Limitations and improvements

This example does not take factors such as gesture duration and movement speed into account, so the current implementation is not a swipe in the strict sense. To implement a stricter swipe, you could do the following (a rough sketch follows this list):

  • As in Example 2, wrap the DragGesture in a ViewModifier instead of using a Gesture structure
  • Record the time the drag started with State
  • In onEnded, call the user's closure and pass the direction only if the requirements on speed, distance, deviation, and so on are met
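
A rough sketch of that approach (all names here are hypothetical), reusing the SwipeGesture.Direction type defined earlier:

import SwiftUI

public struct StrictSwipeModifier: ViewModifier {
    @State private var startTime: Date?
    private let minimumDistance: CGFloat
    private let maximumDuration: TimeInterval
    private let perform: (SwipeGesture.Direction) -> Void

    public init(minimumDistance: CGFloat = 30,
                maximumDuration: TimeInterval = 0.5,
                perform: @escaping (SwipeGesture.Direction) -> Void) {
        self.minimumDistance = minimumDistance
        self.maximumDuration = maximumDuration
        self.perform = perform
    }

    public func body(content: Content) -> some View {
        content.gesture(
            DragGesture(minimumDistance: 0)
                .onChanged { _ in
                    // Record the moment the drag begins
                    if startTime == nil { startTime = Date() }
                }
                .onEnded { value in
                    defer { startTime = nil }
                    guard let start = startTime,
                          Date().timeIntervalSince(start) <= maximumDuration else { return }
                    let w = value.translation.width
                    let h = value.translation.height
                    guard max(abs(w), abs(h)) >= minimumDistance else { return }
                    if abs(w) > abs(h) {
                        perform(w < 0 ? .left : .right)
                    } else {
                        perform(h < 0 ? .up : .down)
                    }
                }
        )
    }
}

You could then expose it through a small view extension, mirroring the onSwipe helper above.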

Example 2: Timed press

2.1 Goal

Implement a press gesture that records how long the press has lasted. During the press, the gesture performs an onChanged-like callback at a specified time interval. This example focuses on wrapping a gesture in a view modifier and on the use of GestureState.

2.2 Approach

A timer passes the duration of the current press to the closure at the specified interval. GestureState stores the time the press began; when the press ends, the gesture automatically clears the stored start time.

2.3 Implementation

public struct PressGestureViewModifier: ViewModifier {
    @GestureState private var startTimestamp: Date?
    @State private var timePublisher: Publishers.Autoconnect<Timer.TimerPublisher>
    private var onPressing: (TimeInterval) -> Void
    private var onEnded: () -> Void

    public init(interval: TimeInterval = 0.016, onPressing: @escaping (TimeInterval) -> Void, onEnded: @escaping () -> Void) {
        _timePublisher = State(wrappedValue: Timer.publish(every: interval, tolerance: nil, on: .current, in: .common).autoconnect())
        self.onPressing = onPressing
        self.onEnded = onEnded
    }

    public func body(content: Content) -> some View {
        content
            .gesture(
                DragGesture(minimumDistance: 0, coordinateSpace: .local)
                    .updating($startTimestamp, body: { _, current, _ in
                        if current == nil {
                            current = Date()
                        }
                    })
                    .onEnded { _ in
                        onEnded()
                    }
            )
            .onReceive(timePublisher, perform: { timer in
                if let startTimestamp = startTimestamp {
                    let duration = timer.timeIntervalSince(startTimestamp)
                    onPressing(duration)
                }
            })
    }
}

public extension View {
    func onPress(interval: TimeInterval = 0.016, onPressing: @escaping (TimeInterval) -> Void, onEnded: @escaping () -> Void) -> some View {
        modifier(PressGestureViewModifier(interval: interval, onPressing: onPressing, onEnded: onEnded))
    }
}

2.4 Demo

struct PressGestureView: View {
    @State var scale: CGFloat = 1
    @State var duration: TimeInterval = 0
    var body: some View {
        VStack {
            Circle()
                .fill(scale == 1 ? .blue : .orange)
                .frame(width: 50, height: 50)
                .scaleEffect(scale)
                .overlay(Text(duration, format: .number.precision(.fractionLength(1))))
                .onPress { duration in
                    self.duration = duration
                    scale = 1 + duration * 2
                } onEnded: {
                    if duration > 1 {
                        withAnimation(.easeInOut(duration: 2)) {
                            scale = 1
                        }
                    } else {
                        withAnimation(.easeInOut) {
                            scale = 1
                        }
                    }
                    duration = 0
                }
        }
    }
}

2.5 Notes

  • GestureState data is reset before onEnded runs, so startTimestamp has already been restored to nil inside onEnded
  • DragGesture is still the best vehicle for the implementation. TapGesture and LongPressGesture both end the gesture automatically once their trigger conditions are met, so they cannot support a press of arbitrary duration

2.6 Limitations and improvements

The current solution offers neither a maximum-offset setting like LongPressGesture's maximumDistance nor the total press duration in onEnded. Possible improvements (a compact sketch follows this list):

  • In updating, check the offset and stop timing if the press point drifts outside the allowed range; call the user-supplied onEnded closure there and set a flag
  • In the gesture's onEnded, skip the user-supplied onEnded closure if it has already been called
  • Replace GestureState with State so that the total duration can be provided in the gesture's onEnded; you must then reset the State yourself
  • Once GestureState is replaced with State, the logic can move from updating to onChanged
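
A compact sketch of the State-based variant (all names here are hypothetical): onChanged records the start time and cancels if the press point drifts too far, and because State is not reset automatically, the total duration can be reported in the gesture's onEnded:

import SwiftUI

public struct PressDurationModifier: ViewModifier {
    @State private var startTimestamp: Date?
    @State private var cancelled = false
    private let maximumDrift: CGFloat
    private let onEnded: (TimeInterval) -> Void

    public init(maximumDrift: CGFloat = 10, onEnded: @escaping (TimeInterval) -> Void) {
        self.maximumDrift = maximumDrift
        self.onEnded = onEnded
    }

    public func body(content: Content) -> some View {
        content.gesture(
            DragGesture(minimumDistance: 0, coordinateSpace: .local)
                .onChanged { value in
                    if startTimestamp == nil {
                        startTimestamp = Date()
                        cancelled = false
                    }
                    // Stop timing if the press point drifts outside the allowed range
                    if max(abs(value.translation.width), abs(value.translation.height)) > maximumDrift {
                        cancelled = true
                    }
                }
                .onEnded { _ in
                    defer { startTimestamp = nil } // reset the State manually
                    guard !cancelled, let start = startTimestamp else { return }
                    onEnded(Date().timeIntervalSince(start)) // total press duration
                }
        )
    }
}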

Example 3: Tap with location information

3.1 Goal

Implement a tap gesture that provides the touch location (and supports setting the number of taps). This example demonstrates the use of simultaneously and how to choose the appropriate callback moment (onEnded).

3.2 Approach

The gesture should feel exactly like TapGesture. Use simultaneously to combine the two gestures: the position data comes from DragGesture, and the completion comes from TapGesture.

3.3 Implementation

public struct TapWithLocation: ViewModifier {
    @State private var locations: CGPoint?
    private let count: Int
    private let coordinateSpace: CoordinateSpace
    private var perform: (CGPoint) -> Void

    init(count: Int = 1, coordinateSpace: CoordinateSpace = .local, perform: @escaping (CGPoint) -> Void) {
        self.count = count
        self.coordinateSpace = coordinateSpace
        self.perform = perform
    }

    public func body(content: Content) -> some View {
        content
            .gesture(
                DragGesture(minimumDistance: 0, coordinateSpace: coordinateSpace)
                    .onChanged { value in
                        locations = value.location
                    }
                    .simultaneously(with:
                        TapGesture(count: count)
                            .onEnded {
                                perform(locations ?? .zero)
                                locations = nil
                            }
                    )
            )
    }
}

public extension View {
    func onTapGesture(count: Int = 1, coordinateSpace: CoordinateSpace = .local, perform: @escaping (CGPoint) -> Void) -> some View {
        modifier(TapWithLocation(count: count, coordinateSpace: coordinateSpace, perform: perform))
    }
}


3.4 Demo

struct TapWithLocationView: View {
    @State var unitPoint: UnitPoint = .center
    var body: some View {
        Rectangle()
            .fill(RadialGradient(colors: [.yellow, .orange, .red, .pink], center: unitPoint, startRadius: 10, endRadius: 170))
            .frame(width: 300, height: 300)
            .onTapGesture(count:2) { point in
                withAnimation(.easeInOut) {
                    unitPoint = UnitPoint(x: point.x / 300, y: point.y / 300)
                }
            }
    }
}

3.5 Notes

  • When DragGesture's minimumDistance is set to 0, it is guaranteed to produce its first value before TapGesture(count: 1) fires
  • A simultaneously combination has three onEnded moments: gesture 1's onEnded, gesture 2's onEnded, and the combined gesture's onEnded. Here we choose to call the user's closure back in TapGesture's onEnded

Conclusion

At present, SwiftUI gestures have a low barrier to entry but a limited ceiling; very complex gesture logic cannot be implemented with SwiftUI's native means alone. In future articles we will look at priorities between gestures, selective failure using GestureMask, and how to create complex gestures in cooperation with UIGestureRecognizer.

I hope this article has been helpful to you.

This post was originally published on my blog www.fatbobman.com

Welcome to subscribe to my WeChat public account: [Elbow’s Swift Notepad]