PinchImageView uses a GestureDetector in onTouchEvent to handle long press, click, double click and fling (inertial slide) events, plus two-finger zoom and single-finger movement. Internally there are two matrices: the external transformation matrix (mOuterMatrix), which records the result of gesture operations, and the internal transformation matrix (getInnerMatrix(Matrix)), which is the initial matrix produced by scaling and translating according to fitCenter or another scale type. Separating the two matrices is presumably inspired by PhotoView: gestures and the original scaling do not affect each other, and the final transformation is simply the product of the two. The code below is not always pasted verbatim from the source; some of it has been slightly modified.
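The final matrix used for display is essentially the inner matrix post-concatenated with the outer one; a minimal sketch of the idea (not a verbatim copy of the library's getCurrentImageMatrix):

// Sketch: the matrix actually applied to the image is the inner (fit-center)
// matrix post-concatenated with the outer (gesture) matrix.
Matrix current = new Matrix();
getInnerMatrix(current);          // fit-center base transform
current.postConcat(mOuterMatrix); // gesture transform applied on top
setImageMatrix(current);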

1. Double click and inertial sliding

Long presses and clicks just invoke their callbacks, so the focus here is on double clicks and inertial sliding.

1.1 Double click

PinchImageView has only one zoom level: it can only toggle between the maximum scale and the initial scale. The basic principle: capture the double-click event, take the x and y coordinates of the tapped point, scale the image around that point and move it toward the middle of the view. The code is fairly long, so let's break it into pieces, starting with PinchImageView's ObjectsPool. ObjectsPool maintains a queue of objects that can be reused within its capacity. The general usage flow is as follows:

  1. Take the innerMatrix object from the queue (take()); if the queue is empty a new object is created, otherwise an object is polled from the queue, reset and returned.
  2. Take the targetMatrix object from the queue in the same way.
  3. Return targetMatrix to the pool (given(obj)).
  4. Return innerMatrix to the pool.

The order of return doesn’t matter.
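In code form, the flow above looks roughly like this (illustrative usage only; mMatrixPool is the Matrix pool used inside MathUtils, as seen below):

// Illustrative usage of the pool described above
Matrix innerMatrix = mMatrixPool.take();   // new instance or a reset one from the queue
Matrix targetMatrix = mMatrixPool.take();
// ... use both matrices ...
mMatrixPool.given(targetMatrix);           // the order of returning does not matter
mMatrixPool.given(innerMatrix);

The ObjectsPool implementation itself: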

/**
 * Object pool.
 *
 * Prevents memory jitter from frequently creating new objects.
 * Because the pool has a maximum size, jitter can still occur if the
 * throughput exceeds the pool capacity; in that case the capacity needs
 * to be increased, at the cost of more memory.
 *
 * @param <T> the object type held by the pool
 */
private static abstract class ObjectsPool<T> {

    /** Maximum capacity of the object pool */
    private int mSize;

    /** Object pool queue */
    private Queue<T> mQueue;

    public ObjectsPool(int size) {
        mSize = size;
        mQueue = new LinkedList<T>();
    }

    public T take() {
        // Create a new instance if the pool is empty
        if (mQueue.size() == 0) {
            return newInstance();
        } else {
            // Otherwise reset an instance from the head of the queue and return it
            return resetInstance(mQueue.poll());
        }
    }

    public void given(T obj) {
        // Put the object back only if there is room in the pool
        if (obj != null && mQueue.size() < mSize) {
            mQueue.offer(obj);
        }
    }

    abstract protected T newInstance();

    abstract protected T resetInstance(T obj);
}

Moving on to the processing of the double-click event.

private void doubleTap(float x, float y) {
    // Get the inner (first-layer) transformation matrix
    Matrix innerMatrix = MathUtils.matrixTake();
    getInnerMatrix(innerMatrix);
    ...
    MathUtils.matrixGiven(innerMatrix);
}

The first step is to get the internal transformation matrix. MathUtils.matrixTake() fetches a Matrix object from the Matrix object pool.

public static Matrix matrixTake() {
    return mMatrixPool.take();
}

/**
 * Gets a copy of a matrix.
 */
public static Matrix matrixTake(Matrix matrix) {
    Matrix result = mMatrixPool.take();
    if (matrix != null) {
        result.set(matrix);
    }
    return result;
}

Then we get the inner transformation matrix and store it in innerMatrix.

public Matrix getInnerMatrix(Matrix matrix) {
    ...
    // The original drawable size
    RectF tempSrc = MathUtils.rectFTake(0, 0, getDrawable().getIntrinsicWidth(), getDrawable().getIntrinsicHeight());
    // The control size
    RectF tempDst = MathUtils.rectFTake(0, 0, getWidth(), getHeight());
    // Calculate the fit-center matrix
    matrix.setRectToRect(tempSrc, tempDst, Matrix.ScaleToFit.CENTER);
    ...
    return matrix;
}

MathUtils.rectFTake works the same way as matrixTake, except that it takes RectF objects from the pool. The key call is Matrix.setRectToRect, which was introduced above.
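For intuition, here is a quick standalone check of what that fit-center matrix does (assumed sizes for illustration, plain android.graphics, not from the library):

// Assumed sizes: a 2000x1000 drawable inside a 1080x1920 control.
RectF src = new RectF(0, 0, 2000, 1000);
RectF dst = new RectF(0, 0, 1080, 1920);
Matrix fitCenter = new Matrix();
fitCenter.setRectToRect(src, dst, Matrix.ScaleToFit.CENTER);
// The drawable is scaled by 1080 / 2000 = 0.54 and centered vertically,
// which is exactly the "inner" matrix described above.

Back in doubleTap, the total current scale is computed next: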

// Total current scaling
float innerScale = MathUtils.getMatrixScale(innerMatrix)[0];
float outerScale = MathUtils.getMatrixScale(mOuterMatrix)[0];
float currentScale = innerScale * outerScale;

Multiplying the inner matrix's scale by the outer matrix's scale gives the final scale; it is quite nice that the inner and outer parts stay completely independent.
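getMatrixScale itself is not pasted in the post; one plausible implementation (a sketch, assuming the matrix only contains scale and translation) simply reads the scale components out of Matrix.getValues():

// Sketch: read the x/y scale factors from a matrix that only scales and translates.
public static float[] getMatrixScale(Matrix matrix) {
    float[] values = new float[9];
    matrix.getValues(values);
    return new float[]{values[Matrix.MSCALE_X], values[Matrix.MSCALE_Y]};
}

With the current total scale in hand, the next step computes the target scale: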

float nextScale = currentScale < MAX_SCALE ? MAX_SCALE : innerScale;
// If the subsequent magnification is greater than the maximum value or less than the fit center value, then the boundary is taken
if (nextScale > maxScale) {
    nextScale = maxScale;
}
if (nextScale < innerScale) {
    nextScale = innerScale;
}
// Start calculating the resulting matrix of the zoom animation
Matrix animEnd = MathUtils.matrixTake(mOuterMatrix);
// Calculate the multiples that need to be scaled
animEnd.postScale(nextScale / currentScale, nextScale / currentScale, x, y);
// Move the zoom point to the center of the control
animEnd.postTranslate(displayWidth / 2f - x, displayHeight / 2f - y);
...
// Start the matrix animation
mScaleAnimator = new ScaleAnimator(mOuterMatrix, animEnd);
mScaleAnimator.start();

This code is quite slick. Let's first walk through the idea of the zoom: a double click must be handled as an animation, so the start of the animation is naturally the current transform, and the factor needed to reach the target scale nextScale is nextScale / currentScale. Following the principle that gesture operations are recorded in the external matrix mOuterMatrix, the animation's start matrix is copied from mOuterMatrix. There is a problem here, though. Suppose innerScale is 0.2f, maxScale is 2, and there has been no gesture operation, so outerScale is 1:


$$
\begin{aligned}
currentScale &= innerScale \times outerScale = 0.2 \times 1 = 0.2 \\
nextScale &= (0.2 < 2)\ ?\ 2 : 0.2 = 2 \\
\frac{nextScale}{currentScale} &= \frac{2}{0.2} = 10
\end{aligned}
$$

So you double click and the image is suddenly magnified 10 times… keep in mind that many images are already larger than the phone screen in width and height. All ScaleAnimator does is keep updating mOuterMatrix, then invalidate so that onDraw refreshes the view.

@Override
public void onAnimationUpdate(ValueAnimator animation) {
    // Get the animation progress
    float value = (Float) animation.getAnimatedValue();
    // Calculate the intermediate interpolation according to the animation progress
    for (int i = 0; i < 9; i++) {
        mResult[i] = mStart[i] + (mEnd[i] - mStart[i]) * value;
    }
    // Set the matrix and trigger a redraw
    mOuterMatrix.setValues(mResult);
    ...
    invalidate();
}

@Override
protected void onDraw(Canvas canvas) {
    // Set the transformation matrix before drawing
    setImageMatrix(getCurrentImageMatrix(matrix));
    ...
    super.onDraw(canvas);
}

After this zoom and translation, the edge of the picture may end up inside the control, so the position needs to be corrected. Comparing the image boundary under the final transform against the control boundary is enough to do the correction.

Matrix testMatrix = MathUtils.matrixTake(innerMatrix);
testMatrix.postConcat(animEnd);
RectF testBound = MathUtils.rectFTake(0, 0, getDrawable().getIntrinsicWidth(), getDrawable().getIntrinsicHeight());
testMatrix.mapRect(testBound);

As already noted, animEnd is the outer matrix with the double-click transform applied. Post-concatenating it onto a copy of innerMatrix yields testMatrix, which maps the drawable bounds to the final testBound.

// Fix the position
float postX = 0;
float postY = 0;
if (testBound.right - testBound.left < displayWidth) {
    postX = displayWidth / 2f - (testBound.right + testBound.left) / 2f;
} else if (testBound.left > 0) {
    postX = -testBound.left;
} else if (testBound.right < displayWidth) {
    postX = displayWidth - testBound.right;
}
...
// Apply the correction position
animEnd.postTranslate(postX, postY);

Look at the line postX = displayWidth / 2f - (testBound.right + testBound.left) / 2f: (testBound.right + testBound.left) / 2f is the horizontal center of the transformed image, so when the image is narrower than the control this translation moves the image center to the horizontal center of the control. The postY branch, not pasted here, works the same way vertically.
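A quick numeric check with assumed values, displayWidth = 1000 and testBound spanning [100, 300] horizontally:

$$
postX = \frac{1000}{2} - \frac{300 + 100}{2} = 500 - 200 = 300
$$

Translating by 300 moves the image's horizontal center from 200 to 500, exactly the center of the control.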

1.2 Inertial sliding

PinchImageView handles the inertial-slide decay on its own… the decay factor is the same on every frame and there is no interpolator, which is a bit crude compared with PhotoView's use of OverScroller. GestureDetector's onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) provides the x- and y-axis velocities in pixels per second. Assuming roughly 60 frames per second, that converts to pixels per frame as velocityX / 60 and velocityY / 60. PinchImageView animates the fling with FlingAnimator, which moves the image by the current per-frame step (initially velocityX / 60 and velocityY / 60) and then multiplies the step by the decay factor (FLING_DAMPING_FACTOR, 0.9) for the next update.

// Move the image and record whether it actually moved
boolean result = scrollBy(mVector[0], mVector[1], null);
mVector[0] *= FLING_DAMPING_FACTOR;
mVector[1] *= FLING_DAMPING_FACTOR;
// Stop when the image can no longer move or the speed has become too small
if (!result || MathUtils.getDistance(0, 0, mVector[0], mVector[1]) < 1f) {
    animation.cancel();
}
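The snippet above runs inside FlingAnimator's per-frame update. How the fling gets started from onFling can be sketched roughly like this (a sketch only; the mFlingAnimator field and FlingAnimator constructor signature are assumed here, not taken from the post):

@Override
public boolean onFling(MotionEvent e1, MotionEvent e2, float velocityX, float velocityY) {
    // Convert px/second to px/frame assuming ~60 fps; FlingAnimator then applies this
    // step each frame and multiplies it by FLING_DAMPING_FACTOR.
    mFlingAnimator = new FlingAnimator(velocityX / 60f, velocityY / 60f);
    mFlingAnimator.start();
    return true;
}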

The scrollBy(float xDiff, float yDiff, MotionEvent motionEvent) method does the scrolling. It mainly handles the interplay between the image boundary and the control boundary, on the same principle as the position correction during scaling, and the image boundary is obtained in the same way:

// Get the inner transformation matrix
matrix = getInnerMatrix(matrix);
// Multiply in the outer transformation matrix
matrix.postConcat(mOuterMatrix);
rectF.set(0, 0, getDrawable().getIntrinsicWidth(), getDrawable().getIntrinsicHeight());
matrix.mapRect(rectF);

Finally, postTranslate is applied to mOuterMatrix and invalidate triggers onDraw, which sets the new matrix on the image.
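The clamping itself is not pasted in the post; the idea can be sketched like this (a simplified sketch under the names used above, not the library's exact code; rectF is the image boundary just computed, displayWidth is the control width):

// Simplified sketch of the boundary handling in scrollBy:
// if the image is narrower than the control, don't move horizontally;
// otherwise don't let a gap open up at the left or right edge.
if (rectF.width() <= displayWidth) {
    xDiff = 0;
} else if (rectF.left + xDiff > 0) {
    xDiff = -rectF.left;
} else if (rectF.right + xDiff < displayWidth) {
    xDiff = displayWidth - rectF.right;
}
// the same idea applies to yDiff, then:
mOuterMatrix.postTranslate(xDiff, yDiff);
invalidate();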

3.2 Two-finger zooming and single-finger movement

Two-finger zooming and single-finger movement are both handled in onTouchEvent.

3.2.1 Two-finger scaling

Principle: when the two fingers go down, record the distance between them; the external matrix's current scale divided by that distance is the scale per unit of distance. Multiplying this per-unit value by the new distance after the fingers slide gives the new scale, and applying that scale to the external matrix yields the final external matrix.


$$
\begin{aligned}
mScaleBase &= \frac{mOuterMatrix.scale}{initialDistance} \\
nextScale &= mScaleBase \times newDistance \\
y\,(nextScale) &= k\,(mScaleBase) \times x\,(newDistance)
\end{aligned}
$$

Obviously mScaleBase, the scale per unit of distance, is the slope k, and it determines how fast a two-finger gesture zooms. The factors that decide the zoom speed are therefore the current scale of the external matrix and the initial distance between the two fingers: the larger the external matrix's scale and the smaller the initial finger distance, the faster the pinch zooms. Another thing to note is the zoom center of the image. In PinchImageView the two-finger zoom transform is built from an identity matrix, and the point to record is where the midpoint of the two fingers was, before the external matrix transform, at the moment they went down. The member variable mScaleCenter records this point in the source (PS: it is best to ignore the comments around this variable in the source code, they will only confuse you). Take a quick look at the relevant code:

private PointF mScaleCenter = new PointF();
private float mScaleBase = 0;
...
public boolean onTouchEvent(MotionEvent event) {
    ...
    int action = event.getAction() & MotionEvent.ACTION_MASK;
    if (action == MotionEvent.ACTION_POINTER_DOWN) {
        // Switch to zoom mode
        mPinchMode = PINCH_MODE_SCALE;
        // Save the context of the two zoom fingers
        saveScaleContext(event.getX(0), event.getY(0), event.getX(1), event.getY(1));
    } else if (action == MotionEvent.ACTION_MOVE) {
        ...
        // The distance between the two zoom points
        float distance = MathUtils.getDistance(event.getX(0), event.getY(0), event.getX(1), event.getY(1));
        // Save the midpoint of the two zoom points
        float[] lineCenter = MathUtils.getCenterPoint(event.getX(0), event.getY(0), event.getX(1), event.getY(1));
        mLastMovePoint.set(lineCenter[0], lineCenter[1]);
        // Handle the scaling
        scale(mScaleCenter, mScaleBase, distance, mLastMovePoint);
        ...
    }
}

When a second finger goes down, the view switches to two-finger scaling mode and saveScaleContext() records the mScaleBase and mScaleCenter mentioned above; the scaling itself is handled in MotionEvent.ACTION_MOVE. Look at saveScaleContext first.

private void saveScaleContext(float x1, float y1, float x2, float y2) {
    mScaleBase = MathUtils.getMatrixScale(mOuterMatrix)[0] / MathUtils.getDistance(x1, y1, x2, y2);
    float[] center = MathUtils.inverseMatrixPoint(MathUtils.getCenterPoint(x1, y1, x2, y2), mOuterMatrix);
    mScaleCenter.set(center[0], center[1]);
}

mScaleBase has been covered above; the main thing to note here is inverseMatrixPoint. Look at the method definition:

public static float[] inverseMatrixPoint(float[] point, Matrix matrix) {
    if (point != null && matrix != null) {
        float[] dst = new float[2];
        // Compute the inverse of matrix
        Matrix inverse = matrixTake();
        matrix.invert(inverse);
        // Map point through the inverse matrix into dst; dst is the result
        inverse.mapPoints(dst, point);
        // Return the temporary matrix to the pool
        matrixGiven(inverse);
        return dst;
    } else {
        return new float[2];
    }
}

srcMatrix.invert(targetMatrix) stores the inverse of srcMatrix in targetMatrix, and matrix.mapPoints(targetPoint, srcPoint) applies the matrix transform to srcPoint and writes the result into targetPoint. So what this method does is find where a point was before the matrix transform. What mScaleCenter stores is therefore exactly the position of the finger midpoint before the external matrix transform.
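A tiny standalone check of the inverse mapping (assumed values for illustration):

// The outer matrix scales by 2 around the origin, so a finger midpoint at (200, 200)
// corresponds to (100, 100) before the outer transformation.
Matrix outer = new Matrix();
outer.postScale(2f, 2f);
float[] before = MathUtils.inverseMatrixPoint(new float[]{200f, 200f}, outer);
// before[0] == 100f, before[1] == 100f

Now let's look at the scale() method.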

private void scale(PointF scaleCenter, float scaleBase, float distance, PointF lineCenter) {
    ...
    // Calculate the scale of the image from fit-center state to target state
    float scale = scaleBase * distance;
    Matrix matrix = MathUtils.matrixTake();
    // Scale the image around the zoom center point
    matrix.postScale(scale, scale, scaleCenter.x, scaleCenter.y);
    // Make the image zoom center follow the finger midpoint
    matrix.postTranslate(lineCenter.x - scaleCenter.x, lineCenter.y - scaleCenter.y);
    mOuterMatrix.set(matrix);
    ...
}

It reads easily and everything in it has been covered. A small aside: if mOuterMatrix were ever subjected to skew, rotation or perspective transforms, would this approach break down? There is also the case where one of several fingers is lifted; the comments below have been adjusted to make them easier to read.

if (action == MotionEvent.ACTION_POINTER_UP) {
    if (mPinchMode == PINCH_MODE_SCALE) {
        // event.getPointerCount() is the number of pointers at the moment of the lift, including the one being lifted
        if (event.getPointerCount() > 2) {
            // event.getAction() >> 8 is the index of the pointer being lifted.
            // The first point was lifted, so make the second and third points the scaling control points
            if (event.getAction() >> 8 == 0) {
                saveScaleContext(event.getX(1), event.getY(1), event.getX(2), event.getY(2));
            // The second point was lifted, so make the first and third points the scaling control points
            } else if (event.getAction() >> 8 == 1) {
                saveScaleContext(event.getX(0), event.getY(0), event.getX(2), event.getY(2));
            }
        }
        // If only two pointers were down, only one remains after the lift; single-finger mode is not
        // entered here because the image may not be in the correct position yet
    }
}

Finally, when the fingers are released, the scale and boundary need to be corrected. Go to the scaleEnd() method. Most of this code has already been analyzed; the only new variable is scalePost.

private void scaleEnd() {
    ...
    getCurrentImageMatrix(currentMatrix);
    float currentScale = MathUtils.getMatrixScale(currentMatrix)[0];
    float outerScale = MathUtils.getMatrixScale(mOuterMatrix)[0];
    // Scale correction factor
    float scalePost = 1f;
    // If the overall scale is greater than the maximum scale, correct it
    if (currentScale > maxScale) {
        scalePost = maxScale / currentScale;
    }
    // If that correction would push the outer matrix scale below 1 (its initial value), correct again
    if (outerScale * scalePost < 1f) {
        scalePost = 1f / outerScale;
    }
}

The comments here were adjusted as well.
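How the correction is then applied is not pasted in the post; roughly, it follows the same animation pattern as doubleTap (a sketch with details assumed, not the exact source):

// Sketch: apply the corrective scale around the last pinch midpoint and animate to it.
Matrix animEnd = MathUtils.matrixTake(mOuterMatrix);
animEnd.postScale(scalePost, scalePost, mLastMovePoint.x, mLastMovePoint.y);
// ... boundary translation correction as in doubleTap ...
mScaleAnimator = new ScaleAnimator(mOuterMatrix, animEnd);
mScaleAnimator.start();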

3.2.2 Single-finger movement

One-finger movement is primarily a call to scrollBy, as analyzed previously.
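In onTouchEvent the single-finger branch looks roughly like this (a sketch; field names follow the snippets above, and PINCH_MODE_SCROLL is assumed as the one-finger mode constant):

// Sketch: single-finger drag inside ACTION_MOVE (PINCH_MODE_SCROLL assumed as the one-finger mode).
if (mPinchMode == PINCH_MODE_SCROLL) {
    // Scroll by the finger's delta since the last event, then remember the new position
    scrollBy(event.getX() - mLastMovePoint.x, event.getY() - mLastMovePoint.y, event);
    mLastMovePoint.set(event.getX(), event.getY());
}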

So that’s the end of the analysis.