Feature matching

Once we have extracted features and their descriptors from two (or more) images, we can start asking whether some of these features show up in both (or all) images. For example, if we have descriptors for both our object of interest (self.desc_train) and the current video frame (desc_query), we can try to find regions of the current frame that look like our object of interest. This is done by the following method, which makes use of the Fast Library for Approximate Nearest Neighbors (FLANN):

good_matches = self._match_features(desc_query)

The process of finding frame-to-frame correspondences can be formulated as a nearest-neighbor search: for every descriptor in one set, we look for the descriptor that is closest to it in the other set.

The first set of descriptors ...
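
As an illustration, the following is a minimal sketch of how such a FLANN-based matcher could be wired up. The FeatureMatching class layout, the use of a SIFT detector, and the 0.7 ratio threshold are assumptions made for this sketch; they are not necessarily identical to the implementation discussed in the book.

import cv2

FLANN_INDEX_KDTREE = 1  # kd-tree index, suited to float descriptors such as SIFT


class FeatureMatching:
    def __init__(self, train_image):
        # Detect keypoints and compute descriptors for the object of interest
        # (the "train" image); SIFT is an assumption for this sketch
        self.detector = cv2.SIFT_create()
        self.key_train, self.desc_train = self.detector.detectAndCompute(
            train_image, None)

        # Set up the FLANN matcher for approximate nearest-neighbor search
        index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
        search_params = dict(checks=50)
        self.flann = cv2.FlannBasedMatcher(index_params, search_params)

    def _match_features(self, desc_query):
        # For every descriptor of the current frame, find its two nearest
        # neighbors among the descriptors of the object of interest
        matches = self.flann.knnMatch(desc_query, self.desc_train, k=2)

        # Ratio test: keep a match only if its best neighbor is clearly
        # closer than the second-best one, discarding ambiguous matches
        good_matches = [pair[0] for pair in matches
                        if len(pair) == 2
                        and pair[0].distance < 0.7 * pair[1].distance]
        return good_matches

Note that in OpenCV's knnMatch the first argument is the query set and the second the train set, so the order of desc_query and self.desc_train determines which descriptors are being searched for in which.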
