Machine learning 049- Extracting SIFT feature points from images

(Python libraries and versions used in this article: Python 3.6, Numpy 1.14, Scikit-learn 0.19, matplotlib 2.2)

Feature points are the key points that distinguish an image from other images. When detecting these key points, we need to consider several requirements: 1) no matter how the target is rotated, its feature points should remain the same (rotation invariance); 2) whether the target becomes larger or smaller, its feature points should remain the same (scale invariance); and there are also requirements such as illumination invariance.

At present, there are many methods and operators for describing feature points; the common ones are SIFT, SURF, ORB, HOG, LBP and Haar. For the differences between these operators and their feature descriptions, you can refer to the blog post: Image feature detection and description (1): an overview of SIFT, SURF, ORB, HOG and LBP feature principles with OpenCV code implementations.
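As a quick, hedged illustration (the image path 'test.jpg' below is only a placeholder, and this snippet is not part of the original tutorial), the patent-free ORB detector ships with the core opencv-python package and is created and used in much the same way as the SIFT detector shown later in this article:

import cv2

img = cv2.imread('test.jpg')              # placeholder path, replace with your own image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

orb = cv2.ORB_create()                     # ORB lives in core OpenCV, no contrib module needed
orb_keypoints = orb.detect(gray, None)     # same detect() interface as SIFT
print('ORB detected {} keypoints'.format(len(orb_keypoints)))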

SIFT, the Scale-Invariant Feature Transform, is a feature descriptor used in the field of image processing. SIFT feature points play an important role in image processing and computer vision.

SIFT feature points have many advantages:

1. SIFT features are local features of the image; they are invariant to rotation, scale changes and brightness changes, and remain stable to a certain degree under viewpoint changes, affine transformations and noise;

2. Good distinctiveness: the information is rich, which makes them suitable for fast and accurate matching in massive feature databases;

3. Quantity: even a few objects can produce a large number of SIFT feature vectors;

4. High speed: an optimized SIFT matching algorithm can even meet real-time requirements;

5. Extensibility: they can easily be combined with other forms of feature vectors.

The extraction of SIFT feature points mainly includes the following four steps:

1. Scale-space extremum detection: search for image positions across all scales. Potential points of interest that are invariant to scale and rotation are identified using difference-of-Gaussian functions (see the sketch after this list).

2. Keypoint localization: at each candidate location, a finely fitted model is used to determine the position and scale. Keypoints are selected based on their stability.

3. Orientation assignment: one or more orientations are assigned to each keypoint based on the local gradient directions of the image. All subsequent operations on the image data are performed relative to the orientation, scale and position of the keypoints, which provides invariance to these transformations.

4. Keypoint description: in the neighborhood around each keypoint, the local gradients of the image are measured at the selected scale. These gradients are transformed into a representation that tolerates relatively large local shape deformations and illumination changes.
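To make step 1 a bit more concrete, here is a rough sketch of the difference-of-Gaussians idea (this is not OpenCV's internal SIFT implementation, and the image path is only a placeholder): the image is blurred at a series of increasing scales, adjacent blurred versions are subtracted, and candidate keypoints are taken from the local extrema of these difference images:

import cv2
import numpy as np

gray = cv2.imread('test.jpg', cv2.IMREAD_GRAYSCALE).astype(np.float32)  # placeholder path

k = 2 ** 0.5                                              # scale step between adjacent levels
sigmas = [1.6 * k ** i for i in range(5)]                 # a small stack of increasing scales
blurred = [cv2.GaussianBlur(gray, (0, 0), s) for s in sigmas]
dog = [blurred[i + 1] - blurred[i] for i in range(len(blurred) - 1)]   # difference-of-Gaussian images
print('Built {} difference-of-Gaussian images'.format(len(dog)))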

For the mathematical derivation and specific meaning of SIFT, you can refer to this blog post: SIFT features in detail


1. Extract SIFT feature points

1.1 Installing the opencv-contrib-python module

We usually use the opencv-python module, but the xfeatures2d module is not included in it, because the SIFT algorithm is patented and has therefore been removed from opencv-python.

There are several versions of this module available online, and I found that the following method works: first uninstall the original opencv-python module (if it is already version 3.4.2.16, there is no need to uninstall it), then install opencv-python and opencv-contrib-python version 3.4.2.16.

Installation method:

pip install opencv-python==3.4.2.16

pip install opencv-contrib-python==3.4.2.16
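After installation, a quick sanity check (an optional step, not part of the original tutorial) is to confirm that the contrib build is active and that the xfeatures2d module can actually create a SIFT object:

import cv2

print(cv2.__version__)                      # should print 3.4.2
sift = cv2.xfeatures2d.SIFT_create()        # raises an error if the contrib module is missing
print(type(sift))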

1.2 Extracting SIFT feature points

First, construct a SIFT feature point detector object, then use this detector object to detect feature points in the gray-scale image:

import cv2
import numpy as np
import matplotlib.pyplot as plt

img = cv2.imread('test.jpg')                   # load the image to analyze (path is an example)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # SIFT detection works on the gray-scale image

sift = cv2.xfeatures2d.SIFT_create()           # construct the SIFT feature point detector object
keypoints = sift.detect(gray, None)            # detect feature points in the gray-scale image

# Draw the keypoints onto a copy of the original image
img_sift = np.copy(img)
cv2.drawKeypoints(img, keypoints, img_sift, flags=cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)

# Display the original image and the image drawn with feature points side by side
plt.figure(12, figsize=(15, 30))
plt.subplot(121)
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # matplotlib expects RGB, OpenCV uses BGR
plt.imshow(img_rgb)
plt.title('Raw Img')

plt.subplot(122)
img_sift_rgb = cv2.cvtColor(img_sift, cv2.COLOR_BGR2RGB)
plt.imshow(img_sift_rgb)
plt.title('Img with SIFT features')
plt.show()
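Each detected keypoint is a cv2.KeyPoint object carrying its position, scale and orientation, which is exactly where the rotation and scale invariance discussed above comes from. A short follow-up snippet (an optional addition to the tutorial code) makes this concrete:

# Inspect the first few keypoints: pt is the (x, y) position, size is the scale,
# and angle is the dominant gradient orientation in degrees
for kp in keypoints[:5]:
    print('pt={}, size={:.1f}, angle={:.1f}'.format(kp.pt, kp.size, kp.angle))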

######################## Summary ########################

1. SIFT feature points can be extracted with the cv2.xfeatures2d.SIFT_create().detect() function, but the opencv-contrib-python module must be installed in advance.

##########################################################


Note: The code for this part has been uploaded to (my GitHub); you are welcome to download it.

References:

1. Classic Examples of Python Machine Learning, by Prateek Joshi, translated by Tao Junjie and Chen Xiaoli