This is the second day of my participation in the First Challenge 2022

The basic idea

So let's talk about what today's code does. First, it finds good feature points in the image, such as corners, that are invariant to rotation, translation, and scale, and we want these points to be evenly distributed across the frame. Then we need to compute descriptors for those feature points. What is a descriptor? It describes a feature point through a series of features or indicators, just as an article can be described by its category, author, and length. These descriptions are used to measure how closely two feature points correspond. Once we have the descriptors, we can match feature points between consecutive frames, that is, between the previous frame and the current one. That's what we're going to build.

First, how do we get good feature points? Last time, feature points were extracted with goodFeaturesToTrack for matching, and they turned out better than the ones extracted with ORB before. We will pull the feature extraction out into a class of its own, in a separate file. In the FeatureExtractor class, the __init__ function initializes an ORB feature extractor; an ORB object is created with ORB_create, and the maximum number of feature points can be specified with the nfeatures parameter when creating the object.
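For illustration, this is how that cap would be set when creating the extractor (the value 3000 here is just an example, not what the class below uses):

import cv2

# nfeatures caps how many keypoints ORB will return; 3000 is an illustrative value
orb = cv2.ORB_create(nfeatures=3000)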


import cv2
import numpy as np


class FeatureExtractor(object):

    def __init__(self) -> None:
        # ORB is used to compute descriptors for the keypoints we feed it
        self.orb = cv2.ORB_create()
        # brute-force matcher for comparing descriptors across frames
        self.bf = cv2.BFMatcher()
        self.last = None

    def extract(self, img):
        # detect up to 3000 good corners on a grayscale version of the image
        feats = cv2.goodFeaturesToTrack(np.mean(img, axis=2).astype(np.uint8),
            3000, qualityLevel=0.01, minDistance=3)
        # wrap the corners as cv2.KeyPoint objects so ORB can describe them
        kps = [cv2.KeyPoint(f[0][0], f[0][1], 20) for f in feats]
        kps, des = self.orb.compute(img, kps)

        if self.last is not None:
            matches = self.bf.match(des, self.last['des'])
            print(matches)

        self.last = {'kps': kps, 'des': des}
        return kps, des
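To see the class in action, here is a minimal driver sketch; the video filename test.mp4 and the frame loop are my assumptions for illustration, not part of the original code:

import cv2

fe = FeatureExtractor()

cap = cv2.VideoCapture("test.mp4")  # hypothetical input video
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    kps, des = fe.extract(frame)
    print("keypoints:", len(kps))
cap.release()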

The distribution of features extracted by goodFeaturesToTrack is better than what we got from ORB before. With the feature points in hand, we use orb.compute to calculate a descriptor (des) for each of them; the descriptor reflects some characteristics of the feature point, and it is what we match feature points on. ORB provides compute for this: it accepts the image and the list of keypoints and returns the keypoints along with their descriptors.

import cv2
import numpy as np


class Extractor(object):

    def __init__(self) -> None:
        self.orb = cv2.ORB_create()
        # brute-force descriptor matcher
        self.bf = cv2.BFMatcher()
        self.last = None

    def extract(self, img):
        # detection
        feats = cv2.goodFeaturesToTrack(np.mean(img, axis=2).astype(np.uint8),
            3000, qualityLevel=0.01, minDistance=3)

        # extraction
        kps = [cv2.KeyPoint(f[0][0], f[0][1], 20) for f in feats]
        kps, des = self.orb.compute(img, kps)

        # matching against the previous frame
        matches = None
        if self.last is not None:
            matches = self.bf.match(des, self.last['des'])

        self.last = {'kps': kps, 'des': des}
        return kps, des, matches

self.bf = cv2.BFMatcher() creates a brute-force matcher: it takes a keypoint in the current frame of the image, measures the descriptor distance to every keypoint in the previous frame, and returns the nearest one.
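As a minimal standalone sketch of what bf.match does, using random stand-in descriptors for two frames (the arrays here are illustrative, not real ORB output):

import cv2
import numpy as np

# stand-in descriptors for the previous and current frame (random, illustrative)
des_prev = np.random.randint(0, 256, (500, 32), dtype=np.uint8)
des_cur = np.random.randint(0, 256, (500, 32), dtype=np.uint8)

bf = cv2.BFMatcher()
# one DMatch per row of des_cur: its nearest neighbour among the rows of des_prev
matches = bf.match(des_cur, des_prev)
# a smaller distance means a more similar pair of descriptors
matches = sorted(matches, key=lambda m: m.distance)
print(matches[0].queryIdx, matches[0].trainIdx, matches[0].distance)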

    def extract(self, img):
        # detection
        feats = cv2.goodFeaturesToTrack(np.mean(img, axis=2).astype(np.uint8),
            3000, qualityLevel=0.01, minDistance=3)

        # extraction
        kps = [cv2.KeyPoint(f[0][0], f[0][1], 20) for f in feats]
        kps, des = self.orb.compute(img, kps)

        # matching: pair each current keypoint with its nearest previous keypoint
        matches = None
        if self.last is not None:
            matches = self.bf.match(des, self.last['des'])
            matches = zip([kps[m.queryIdx] for m in matches],
                          [self.last['kps'][m.trainIdx] for m in matches])

        self.last = {'kps': kps, 'des': des}
        return matches
        

In each match, queryIdx indexes into kps, the keypoints of the current frame, while trainIdx indexes into self.last['kps'], the keypoints of the previous frame.
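One way to consume the returned pairs is to draw them on the image; this sketch assumes frame is the current image and matches is what extract returned (both names are illustrative):

import cv2

for kp1, kp2 in matches:
    # kp.pt holds the (x, y) pixel coordinates of a keypoint
    u1, v1 = map(int, kp1.pt)
    u2, v2 = map(int, kp2.pt)
    cv2.circle(frame, (u1, v1), 3, (0, 255, 0))          # point in the current frame
    cv2.line(frame, (u1, v1), (u2, v2), (255, 0, 0), 1)  # link to the previous frame
cv2.imshow("matches", frame)
cv2.waitKey(1)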

Here I would like to add that what we learn from masters is mainly their coding habits and how they solve the problems they run into while programming, which helps broaden our own thinking.

class Extractor(object):

    def __init__(self) -> None:
        self.orb = cv2.ORB_create(100)
        # Hamming distance is the appropriate metric for ORB's binary descriptors
        self.bf = cv2.BFMatcher(cv2.NORM_HAMMING)
        self.last = None

    def extract(self, img):
        # detection
        feats = cv2.goodFeaturesToTrack(np.mean(img, axis=2).astype(np.uint8),
            3000, qualityLevel=0.01, minDistance=3)

        # extraction
        kps = [cv2.KeyPoint(f[0][0], f[0][1], 20) for f in feats]
        kps, des = self.orb.compute(img, kps)

        # matching: keep only pairs that pass Lowe's ratio test
        ret = []
        if self.last is not None:
            matches = self.bf.knnMatch(des, self.last['des'], k=2)
            for m, n in matches:
                if m.distance < 0.75 * n.distance:
                    ret.append((kps[m.queryIdx], self.last['kps'][m.trainIdx]))

        self.last = {'kps': kps, 'des': des}
        return ret

In the above code, we use the matcher's knnMatch to find the k nearest neighbours of each feature point. Here k=2 in knnMatch means two candidates are returned for each query descriptor, and we screen them with the ratio test m.distance < 0.75 * n.distance. The result is much better than last time, with only a few stray connections, and in cv2.BFMatcher(cv2.NORM_HAMMING) we set the distance metric to NORM_HAMMING, which suits ORB's binary descriptors.
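Putting it all together, here is a hedged end-to-end sketch; the video path, window handling, and drawing are my additions for illustration, not part of the original post:

import cv2

ex = Extractor()

cap = cv2.VideoCapture("test.mp4")  # hypothetical input video
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    for kp1, kp2 in ex.extract(frame):
        p1 = tuple(map(int, kp1.pt))  # keypoint in the current frame
        p2 = tuple(map(int, kp2.pt))  # matched keypoint in the previous frame
        cv2.line(frame, p1, p2, (0, 255, 0), 1)
    cv2.imshow("slam", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()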