Chapter 8. Basic Feature Detection

The human brain does a lot of pattern recognition to make sense of raw visual inputs. After the eye focuses on an object, the brain identifies the object’s characteristics—such as its shape, color, or texture—and then compares these to the characteristics of familiar objects in order to recognize it. In computer vision, that process of deciding what to focus on is called feature detection. A feature in this sense can be formally defined as “one or more measurements of some quantifiable property of an object, computed so that it quantifies some significant characteristics of the object” (Kenneth R. Castleman, Digital Image Processing, Prentice Hall, 1996).

An easier way to think of it, though, is that a feature is an “interesting” part of an image. What makes it interesting? Consider a photograph of a red ball on a gray sidewalk. The sidewalk itself probably isn’t that interesting, but the ball is, because it differs significantly from the rest of the photograph. Similarly, when a computer analyzes the photograph, the gray pixels representing the sidewalk can be treated as background, while the pixels that represent the ball convey more information, such as how big the ball is and where on the sidewalk it lies.
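To see what this looks like in practice, the following is a minimal sketch using SimpleCV’s blob detector, which finds contiguous regions of pixels that stand out from their surroundings. The filename ball.jpg is a hypothetical stand-in for the photograph described above:

from SimpleCV import Image

# Load the photograph (ball.jpg is a hypothetical example file)
img = Image("ball.jpg")

# Segment contiguous regions that differ from the background
blobs = img.findBlobs()

if blobs is not None:
    # sortArea() orders the blobs from smallest to largest,
    # so the last one should be the ball
    ball = blobs.sortArea()[-1]
    print "Area:", ball.area()
    print "Centroid:", ball.centroid()

    # Outline the detected blob and display the result
    ball.draw()
    img.show()

Because the red ball contrasts sharply with the gray sidewalk, even this simple search can isolate it, and properties such as the ball’s size and position fall out of the detected blob.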

A good vision system should not waste time—or processing power—analyzing the unimportant or uninteresting parts of an image, so feature detection helps determine which pixels to focus on. ...
