Goal

In this chapter,

  • We’ll see how to match features in one image with features in other images.
  • We will use the Brute-Force matcher and FLANN matcher in OpenCV.

Basics of the Brute-Force matcher

The Brute-Force matcher is simple. It takes the descriptor of one feature in the first set and matches it with all the other features in the second set using some distance calculation, and the closest one is returned.

For the BF matcher, first we have to create the BFMatcher object using cv.BFMatcher(). It takes two optional arguments. The first one is normType, which specifies the distance measure to be used. By default it is cv.NORM_L2, which is good for SIFT, SURF, etc. (cv.NORM_L1 is also available). For descriptors based on binary strings, such as ORB, BRIEF, and BRISK, cv.NORM_HAMMING should be used, which uses Hamming distance as the measure. If ORB is using WTA_K == 3 or 4, cv.NORM_HAMMING2 should be used.

The second argument is the boolean variable crossCheck, which is false by default. If it is true, the matcher returns only those matches (i, j) such that the i-th descriptor in set A has the j-th descriptor in set B as the best match, and vice versa. That is, the two features in the two sets should match each other. It provides consistent results and is a good alternative to the ratio test proposed by D. Lowe in the SIFT paper.

Once it is created, the two important methods are bf.match() and bf.knnMatch(). The first returns the best match. The second returns the k best matches, where k is specified by the user. It is useful when we need to do additional work on the matches.
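
As a quick sketch of the difference between the two calls (this is not from the tutorial; the random arrays below merely stand in for real binary descriptors):

import numpy as np
import cv2 as cv

# Synthetic binary descriptors standing in for ORB output (32 bytes each)
des1 = np.random.randint(0, 256, (50, 32), dtype=np.uint8)
des2 = np.random.randint(0, 256, (70, 32), dtype=np.uint8)

bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=False)  # crossCheck must be False for k > 1

matches = bf.match(des1, des2)       # one best match per query descriptor
knn = bf.knnMatch(des1, des2, k=2)   # two best matches per query descriptor

print(len(matches), len(knn), len(knn[0]))  # 50 50 2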

Just as we used cv.drawKeypoints() to draw keypoints, cv.drawMatches() helps us draw the matches. It stacks the two images horizontally and draws lines from the first image to the second showing the best matches. There is also cv.drawMatchesKnn, which draws all the k best matches. If k=2, it will draw two match lines for each keypoint. So we have to pass a mask if we want to draw selectively.

Let’s look at one example each with SIFT and ORB (the two use different distance measures).

Brute-Force matching with ORB descriptors

Here, we’ll see a simple example of how to match features between two images. In this case, I have a queryImage and a trainImage. We will try to find the queryImage in the trainImage using feature matching. (The images are samples/data/box.png and samples/data/box_in_scene.png.)

We are using ORB descriptors to match features. So let’s start by loading images, finding descriptors, and so on.

import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

img1 = cv.imread('box.png', cv.IMREAD_GRAYSCALE)           # queryImage
img2 = cv.imread('box_in_scene.png', cv.IMREAD_GRAYSCALE)  # trainImage

# Initiate ORB detector
orb = cv.ORB_create()

# Find the keypoints and descriptors with ORB
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

Next, we create a BFMatcher object with the distance measure cv.NORM_HAMMING (since we are using ORB) and crossCheck switched on for better results. We then use the bf.match() method to get the best matches between the two images. We sort them in ascending order of distance so that the best matches (with low distance) come first. Then we draw only the first 10 matches (just for visibility; you can increase it as you like).

# Create BFMatcher object
bf = cv.BFMatcher(cv.NORM_HAMMING, crossCheck=True)

# Match descriptors
matches = bf.match(des1, des2)

# Sort them in ascending order of distance
matches = sorted(matches, key=lambda x: x.distance)

# Draw the first 10 matches
img3 = cv.drawMatches(img1, kp1, img2, kp2, matches[:10], None, flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.imshow(img3), plt.show()

The following results are obtained:

What is a Matcher object?

The result of the matches = bf.match(des1, des2) line is a list of DMatch objects. A DMatch object has the following attributes (a short sketch follows the list):

  • DMatch.distance - Distance between descriptors. The lower, the better.
  • DMatch.trainIdx - Index of the descriptor in the train descriptors
  • DMatch.queryIdx - Index of the descriptor in the query descriptors
  • DMatch.imgIdx - Index of the train image.
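
For instance, continuing from the ORB example above (a minimal sketch; matches, kp1, and kp2 are the variables from that example):

# Inspect the best (lowest-distance) match after sorting
best = matches[0]
print(best.distance)   # Hamming distance between the matched descriptors
print(best.queryIdx)   # index into kp1/des1 (queryImage)
print(best.trainIdx)   # index into kp2/des2 (trainImage)
print(best.imgIdx)     # index of the train image (0 here, since there is only one)

# Coordinates of the matched keypoints in each image
pt1 = kp1[best.queryIdx].pt
pt2 = kp2[best.trainIdx].pt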

Brute-Force matching with SIFT descriptors and ratio test

This time, we’ll use bf.knnMatch() to get the k best matches. In this example, we take k = 2 so that we can apply the ratio test explained by D. Lowe in his paper.

import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

img1 = cv.imread('box.png', cv.IMREAD_GRAYSCALE)           # queryImage
img2 = cv.imread('box_in_scene.png', cv.IMREAD_GRAYSCALE)  # trainImage

# Initiate SIFT detector
sift = cv.xfeatures2d.SIFT_create()

# Find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# BFMatcher with default params
bf = cv.BFMatcher()
matches = bf.knnMatch(des1, des2, k=2)

# Apply the ratio test
good = []
for m, n in matches:
    if m.distance < 0.75 * n.distance:
        good.append([m])

# cv.drawMatchesKnn expects a list of lists as matches
img3 = cv.drawMatchesKnn(img1, kp1, img2, kp2, good, None, flags=cv.DrawMatchesFlags_NOT_DRAW_SINGLE_POINTS)
plt.imshow(img3), plt.show()

See the result below:

FLANN-based matcher

FLANN stands for Fast Library for Approximate Nearest Neighbors. It contains a collection of algorithms optimized for fast nearest-neighbor search in large datasets and for high-dimensional features. It works faster than BFMatcher for large datasets. We’ll see the second example with the FLANN-based matcher.

For the FLANN-based matcher, we need to pass two dictionaries which specify the algorithm to be used, its related parameters, and so on. The first one is IndexParams. The information to be passed for each algorithm is explained in the FLANN docs. As a summary, for algorithms like SIFT and SURF you can pass the following:

FLANN_INDEX_KDTREE = 1 
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)

When using ORB, you can pass the following. The commented values are recommended as per the docs, but they did not provide the required results in some cases; other values worked fine:

FLANN_INDEX_LSH = 6
index_params= dict(algorithm = FLANN_INDEX_LSH,
                   table_number = 6, # 12
                   key_size = 12,     # 20
                   multi_probe_level = 1) # 2

The second dictionary is SearchParams. It specifies the number of times the trees in the index should be recursively traversed. Higher values give better precision but also take more time. If you want to change the value, pass search_params = dict(checks = 100).
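
The worked example below uses SIFT with the KD-tree index, but the ORB parameters above plug into the same pattern. Here is a minimal sketch under that assumption (note that with the LSH index, knnMatch may return fewer than k matches for some descriptors, so the ratio test is guarded):

import cv2 as cv

img1 = cv.imread('box.png', cv.IMREAD_GRAYSCALE)           # queryImage
img2 = cv.imread('box_in_scene.png', cv.IMREAD_GRAYSCALE)  # trainImage

orb = cv.ORB_create()
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

FLANN_INDEX_LSH = 6
index_params = dict(algorithm = FLANN_INDEX_LSH,
                    table_number = 6,
                    key_size = 12,
                    multi_probe_level = 1)
flann = cv.FlannBasedMatcher(index_params, dict(checks = 50))
matches = flann.knnMatch(des1, des2, k=2)

# Guard against entries with fewer than 2 neighbours before the ratio test
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
        good.append(pair[0])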

With this information, we are ready to go.

import numpy as np
import cv2 as cv
import matplotlib.pyplot as plt

img1 = cv.imread('box.png', cv.IMREAD_GRAYSCALE)           # queryImage
img2 = cv.imread('box_in_scene.png', cv.IMREAD_GRAYSCALE)  # trainImage

# Initiate SIFT detector
sift = cv.xfeatures2d.SIFT_create()

# Find the keypoints and descriptors with SIFT
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# FLANN parameters
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
search_params = dict(checks = 50)  # or pass an empty dictionary

flann = cv.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# We need to draw only good matches, so create a mask
matchesMask = [[0, 0] for i in range(len(matches))]

# Ratio test as per Lowe's paper
for i, (m, n) in enumerate(matches):
    if m.distance < 0.7 * n.distance:
        matchesMask[i] = [1, 0]

draw_params = dict(matchColor = (0, 255, 0),
                   singlePointColor = (255, 0, 0),
                   matchesMask = matchesMask,
                   flags = cv.DrawMatchesFlags_DEFAULT)

img3 = cv.drawMatchesKnn(img1, kp1, img2, kp2, matches, None, **draw_params)
plt.imshow(img3), plt.show()

See the result below:
