Reference for the concepts in this article: www.cnblogs.com/gcczhongdua…

1. Scaling

resize(src, dsize, dst=None, fx=None, fy=None, interpolation=None)

parameter meaning
src the input image
dsize the size of the output image, as (width, height); if None, it is computed from fx and fy
dst optional output image
fx the scale factor along the x axis
fy the scale factor along the y axis
interpolation the interpolation method
interpolation
values meaning
INTER_NEAREST nearest-neighbor interpolation
INTER_LINEAR bilinear interpolation (the default; recommended for enlarging)
INTER_AREA resampling using the pixel area relation (recommended for shrinking)
INTER_CUBIC bicubic interpolation over a 4×4 pixel neighborhood
INTER_LANCZOS4 Lanczos interpolation over an 8×8 pixel neighborhood
# coding:utf-8

import cv2
import numpy as np

def show(img):
    # cv2.namedWindow('aa', cv2.WINDOW_NORMAL)
    cv2.imshow('aa', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()


if __name__ == '__main__':

    img = cv2.imread('./001_720x1080.jpg', cv2.IMREAD_UNCHANGED)
    height, width = img.shape[:2]
    # when dsize is given, fx and fy are ignored
    out = cv2.resize(img, dsize=(width * 2, height * 2), interpolation=cv2.INTER_LINEAR)
    show(out)
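To make the fx/fy mapping concrete, here is a minimal nearest-neighbor resize written in plain NumPy. This is only a sketch of what cv2.resize does with INTER_NEAREST, using a small synthetic array instead of an image file:

```python
import numpy as np

def resize_nearest(img, fx, fy):
    """Nearest-neighbor resize: each output pixel copies the
    closest source pixel, i.e. dst(x, y) = src(x / fx, y / fy)."""
    h, w = img.shape[:2]
    new_h, new_w = int(h * fy), int(w * fx)
    # Map every output coordinate back to a source coordinate,
    # clamped to the valid index range
    ys = np.minimum((np.arange(new_h) / fy).astype(int), h - 1)
    xs = np.minimum((np.arange(new_w) / fx).astype(int), w - 1)
    return img[ys[:, None], xs]

img = np.arange(16).reshape(4, 4)
small = resize_nearest(img, 0.5, 0.5)   # shrink 4x4 -> 2x2
big = resize_nearest(img, 2, 2)         # enlarge 4x4 -> 8x8
print(small.shape, big.shape)
```

Real code should prefer cv2.resize, which also offers the smoother interpolation methods listed above.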

2. Translation

warpAffine(src, M, dsize, dst=None, flags=None, borderMode=None, borderValue=None)

parameter meaning
src the input image
M the 2×3 transformation matrix
dsize the size of the output image
dst optional output image
flags the interpolation method
borderMode how pixels outside the source image are filled (border mode)
borderValue the value used with a constant border (defaults to black)
flags
values meaning
WARP_FILL_OUTLIERS fill all pixels of the output image; pixels that fall outside the input image are set to borderValue
WARP_INVERSE_MAP treat M as the inverse transformation (from output to input), so it can be used directly for pixel interpolation; otherwise the function first computes the inverse of M

Building the translation matrix

To translate by t_x along the x axis and t_y along the y axis:

M =\begin{bmatrix}
    1 & 0 & t_x \\
   0 & 1 & t_y
\end{bmatrix}
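The matrix can be sanity-checked with plain NumPy: warpAffine maps each coordinate (x, y) through M · [x, y, 1]ᵀ. A hand-rolled sketch of that mapping (not the OpenCV call itself):

```python
import numpy as np

# Translation by tx = 100 along x and ty = 50 along y
M = np.array([[1, 0, 100],
              [0, 1, 50]], np.float32)

point = np.array([30, 40, 1])   # homogeneous pixel coordinate (x, y, 1)
moved = M @ point               # affine transform: 2x3 matrix times 3-vector
print(moved)                    # the translated point (130, 90)
```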
# coding:utf-8

import cv2
import numpy as np

def show(img):
    # cv2.namedWindow('aa', cv2.WINDOW_NORMAL)
    cv2.imshow('aa', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()


if __name__ == '__main__':

    img = cv2.imread('./001_720x1080.jpg', cv2.IMREAD_UNCHANGED)
    M = np.array([[1, 0, 100], [0, 1, 50]], np.float32)
    height, width = img.shape[:2]
    out = cv2.warpAffine(img, M, (width, height))
    show(out)

3. Rotation

getRotationMatrix2D(center, angle, scale)

Computes the rotation matrix.

parameter meaning
center the center of rotation
angle the rotation angle in degrees (positive values rotate counterclockwise, so use a negative angle for a clockwise rotation)
scale the isotropic scale factor (1 means no scaling)

The rotation matrix is a bit involved to build by hand, so OpenCV simplifies the process with cv2.getRotationMatrix2D.

Rotate the image 45° clockwise (i.e. angle = -45) around its center, without scaling:
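For reference, the 2×3 matrix that cv2.getRotationMatrix2D returns can be built by hand with α = scale·cos(angle) and β = scale·sin(angle). A NumPy sketch of that formula:

```python
import numpy as np

def rotation_matrix_2d(center, angle_deg, scale):
    """Build the same 2x3 matrix cv2.getRotationMatrix2D returns:
    rotate by angle_deg around center, then scale."""
    cx, cy = center
    a = scale * np.cos(np.radians(angle_deg))
    b = scale * np.sin(np.radians(angle_deg))
    # The third column shifts the rotation center back into place
    return np.array([[a,  b, (1 - a) * cx - b * cy],
                     [-b, a, b * cx + (1 - a) * cy]])

M = rotation_matrix_2d((360, 540), -45, 1)
# The rotation center maps to itself:
print(M @ np.array([360, 540, 1]))
```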

# coding:utf-8

import cv2
import numpy as np

def show(img):
    # cv2.namedWindow('aa', cv2.WINDOW_NORMAL)
    cv2.imshow('aa', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()


if __name__ == '__main__':

    img = cv2.imread('./001_720x1080.jpg', cv2.IMREAD_UNCHANGED)
    height, width = img.shape[:2]
    M = cv2.getRotationMatrix2D((width/2, height/2), -45, 1)
    out = cv2.warpAffine(img, M, (width, height))
    show(out)

4. Affine transformation

getAffineTransform(src, dst)

Computes the matrix of an affine transformation.

parameter meaning
src three points in the original image
dst the corresponding three points after the transformation

In an affine transformation, all lines that are parallel in the original image remain parallel in the output image.
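getAffineTransform amounts to solving a small linear system: the six unknowns of the 2×3 matrix are fixed by how three points move. A NumPy sketch of that solve, for illustration only (real code should call the OpenCV function):

```python
import numpy as np

def affine_from_points(src, dst):
    """Solve for the 2x3 matrix M with M @ [x, y, 1] = [x', y']
    for each of the three point pairs (six equations, six unknowns)."""
    A = np.hstack([src, np.ones((3, 1))])   # 3x3: rows are [x, y, 1]
    # Solve A @ X = dst column-wise; X.T is the 2x3 affine matrix
    return np.linalg.solve(A, dst).T

pts1 = np.float32([[50, 50], [200, 50], [50, 200]])
pts2 = np.float32([[10, 100], [200, 50], [100, 250]])
M = affine_from_points(pts1, pts2)
print(M @ np.array([50, 50, 1]))   # the first point lands on its target
```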

# coding:utf-8

import cv2
import numpy as np

def show(img):
    # cv2.namedWindow('aa', cv2.WINDOW_NORMAL)
    cv2.imshow('aa', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()


if __name__ == '__main__':

    img = cv2.imread('./001_720x1080.jpg', cv2.IMREAD_UNCHANGED)
    height, width = img.shape[:2]
    pts1 = np.float32([[50, 50], [200, 50], [50, 200]])
    pts2 = np.float32([[10, 100], [200, 50], [100, 250]])
    M = cv2.getAffineTransform(pts1, pts2)
    out = cv2.warpAffine(img, M, (width, height))
    show(out)

5. Perspective transformation

getPerspectiveTransform(src, dst)

Computes the matrix of a perspective transformation.

parameter meaning
src four points in the original image
dst the corresponding four points after the transformation

warpPerspective(src, M, dsize, dst=None, flags=None, borderMode=None, borderValue=None)

Applies a perspective transformation to an image.

parameter meaning
src the input image
M the 3×3 transformation matrix
dsize the size of the output image
dst optional output image
flags the interpolation method
borderMode how pixels outside the source image are filled (border mode)
borderValue the value used with a constant border (defaults to black)

Suppose we have taken a photo of an ID card and want to straighten it:

# coding:utf-8

import cv2
import numpy as np


def show(img):
    # cv2.namedWindow('aa', cv2.WINDOW_NORMAL)
    cv2.imshow('aa', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

if __name__ == '__main__':
    # Take points from (upper left, lower left, lower right, upper right) once
    img = cv2.imread('./sfz.jpg', cv2.IMREAD_UNCHANGED)
    height, width = img.shape[:2]
    print("%s, %s" % (width, height))
    pts1 = np.float32([[50, 65], [78, 292], [457, 238], [414, 17]])
    pts2 = np.float32([[0, 0], [0, 540], [856, 540], [856, 0]])
    M = cv2.getPerspectiveTransform(pts1, pts2)
    out = cv2.warpPerspective(img, M, (856, 540))
    show(out)
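Behind getPerspectiveTransform is a 3×3 homography with eight unknowns (the ninth entry is fixed to 1), pinned down by the four point pairs. A NumPy sketch of the solve, and of how a point is mapped (note the division by the third homogeneous coordinate):

```python
import numpy as np

def perspective_from_points(src, dst):
    """Solve for H (3x3, H[2,2] = 1) so that mapping src[i] through H,
    with division by the third coordinate, gives dst[i]."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h0*x + h1*y + h2) / (h6*x + h7*y + 1), likewise for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1).reshape(3, 3)

def warp_point(H, x, y):
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w   # perspective divide

# Same point pairs as the ID-card example above
src = [(50, 65), (78, 292), (457, 238), (414, 17)]
dst = [(0, 0), (0, 540), (856, 540), (856, 0)]
H = perspective_from_points(src, dst)
print(warp_point(H, 50, 65))   # the top-left corner maps to the origin
```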