9
Background Subtraction
Overview of Background Subtraction
Because of its simplicity and because camera locations are fixed in many contexts, background subtraction
(aka background differencing) is a fundamental image processing operation for video security applications.
Toyama, Krumm, Brumitt, and Meyers give a good overview and comparison of many techniques
[Toyama99]. To perform background subtraction, we must first “learn” a model of the background.
Once learned, this background model is compared against the current image and then the known
background parts are subtracted away. The objects left after subtraction are presumably new foreground
objects.
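The learn-compare-subtract cycle just described can be sketched in a few lines. This is a minimal illustration, not any of the specific methods discussed later in the chapter; the running-average model, the learning rate `alpha`, and the difference `threshold` are all illustrative choices.

```python
import numpy as np

def learn_background(frames, alpha=0.05):
    """Learn a per-pixel background model as a running average.
    The learning rate `alpha` is an illustrative choice."""
    model = frames[0].astype(np.float64)
    for frame in frames[1:]:
        model = (1.0 - alpha) * model + alpha * frame
    return model

def foreground_mask(frame, model, threshold=30.0):
    """Flag as foreground any pixel that differs from the learned
    background model by more than `threshold`."""
    diff = np.abs(frame.astype(np.float64) - model)
    return diff > threshold

# A static 4x4 background; learn it from 20 identical frames.
background = np.full((4, 4), 100, dtype=np.uint8)
model = learn_background([background] * 20)

# A new object enters the scene as a bright 2x2 patch.
frame = background.copy()
frame[1:3, 1:3] = 200
mask = foreground_mask(frame, model)
print(mask.sum())  # → 4 (only the 2x2 patch is foreground)
```

Everything the model has already explained is subtracted away; what remains above threshold is the presumed foreground.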
Of course, “background” is an ill-defined concept that varies by application. For example, if you are
watching a highway, perhaps average traffic flow should be considered background. Normally, background
is considered to be any static or periodically moving parts of a scene that remain static or periodic over the
period of interest. The whole ensemble may have time-varying components, such as trees waving in
morning and evening wind but standing still at noon. Two common but substantially distinct environment
categories that are likely to be encountered are indoor and outdoor scenes. We are interested in tools that
will help us in both of these environments. First we will discuss the weaknesses of typical background
models and then will move on to discuss higher-level scene models. In that context, we present a quick
method that is mostly good for indoor static background scenes whose lighting doesn’t change much. We
then follow this by a “codebook” method that is slightly slower but can work in both outdoor and indoor
scenes; it allows for periodic movements (such as the trees waving in the wind) and for lighting to change
slowly or periodically. This method is also tolerant to learning the background even when there are
occasional foreground objects moving by. We’ll top this off with another discussion of connected
components (first seen in Chapter 5) in the context of cleaning up foreground object detection. We will
then compare the quick background method against the codebook background method. This chapter will
conclude with a discussion of the implementations available in the OpenCV library of two modern
algorithms for background subtraction. These algorithms use the principles discussed in the chapter, but
also include extensions and implementation details that make them more suitable for real-world
applications.
Weaknesses of Background Subtraction
Although the background modeling methods mentioned here work fairly well for simple scenes, they suffer
from an assumption that is often violated: namely, that the behavior of all of the pixels in the image is
statistically independent of the behavior of all of the others. Notably, the methods we describe here learn
