5 Computational Models for Top-down Visual Attention

The computational models of visual attention introduced in Chapters 3 and 4 mainly simulate pure bottom-up attention. In practice, however, the human visual system hardly works without top-down attention, especially when searching for a target in a scene. Suppose, for example, that your five-year-old son vanishes from view in a public park. You search for him anxiously using the prior knowledge held in your brain, such as his clothes and the way he walks, and it is this knowledge that top-down attention draws on. If his clothes (e.g., a red jacket) are conspicuous against the environment (a green lawn or shrubs), you only need to examine the salient areas produced by the bottom-up attention mechanism (candidate regions in red pop out from the green background) and then identify your son using your top-down knowledge. This speeds up the search, since you do not need to scan every place in the scene. However, when your son does not pop out from the environment, in other words, when the candidate salient locations from the bottom-up mechanism do not indicate him, top-down attention becomes even more critical once these candidate regions have been quickly scanned. In human behaviour, bottom-up and top-down attention are intertwined; that is, overall visual attention is the interaction of both. Hence, all existing top-down computational models are combined with bottom-up activation ...
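
The idea in the search example above can be illustrated with a minimal sketch, which is not any particular model from this book: a bottom-up saliency map (centre-surround colour contrast) is modulated by a top-down prior encoding the remembered target colour (the red jacket), and the combined map points to the candidate location worth inspecting first. The toy scene, the centre-surround window size, and the variable names (scene, target_colour, attention) are illustrative assumptions; the code uses NumPy and SciPy.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# --- Toy scene: noisy green lawn with a small red patch (the "red jacket") ---
rng = np.random.default_rng(0)
H, W = 64, 64
scene = np.zeros((H, W, 3))
scene[..., 1] = 0.6 + 0.1 * rng.standard_normal((H, W))  # green background
scene[20:26, 40:46, 0] = 0.9                              # red patch (target)
scene[20:26, 40:46, 1] = 0.1
scene = scene.clip(0.0, 1.0)

# --- Bottom-up saliency: centre-surround contrast per colour channel ---
# Each pixel is compared with the local mean of a larger surround; pixels that
# differ strongly from their surround pop out, regardless of what the target is.
surround = np.stack(
    [uniform_filter(scene[..., c], size=15) for c in range(3)], axis=-1
)
bottom_up = np.abs(scene - surround).sum(axis=-1)
bottom_up /= bottom_up.max()

# --- Top-down prior: similarity to the remembered target colour (red) ---
# This is the hypothetical prior knowledge ("red jacket") biasing the search.
target_colour = np.array([0.9, 0.1, 0.1])
top_down = 1.0 - np.linalg.norm(scene - target_colour, axis=-1) / np.sqrt(3)
top_down /= top_down.max()

# --- Combined attention map: bottom-up activation weighted by top-down bias ---
attention = bottom_up * top_down
y, x = np.unravel_index(attention.argmax(), attention.shape)
print(f"Most attended location: row={y}, col={x}")  # falls inside the red patch
```

In this sketch the top-down prior simply reweights the bottom-up map, so a conspicuous target is found almost immediately; when the target does not pop out, the same top-down term is what keeps the search focused on target-like regions.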
