Selective Visual Attention: Computational Models and Applications

Book Description

Visual attention is a relatively new area of study combining a number of disciplines: artificial neural networks, artificial intelligence, vision science and psychology. The aim is to build computational models similar to human vision in order to solve tough problems in many potential applications, including object recognition, unmanned vehicle navigation, and image and video coding and processing. In this book, the authors provide an up-to-date and highly applied introduction to the topic of visual attention, aiding researchers in creating powerful computer vision systems. Areas covered include the significance of vision research, psychology and computer vision, existing computational visual attention models, the authors' own contributions to visual attention modelling, and applications in various image and video processing tasks.

This book is geared towards graduate students and researchers in neural networks, image processing, machine learning, computer vision, and other areas of biologically inspired model building and applications. It can also be used by practising engineers looking for techniques for applying image coding, video processing, machine vision and brain-like robots to real-world systems. Other students and researchers with interdisciplinary interests will also find this book appealing.

  • Provides a key knowledge boost to developers of image processing applications

  • Is unique in emphasizing the practical utility of attention mechanisms

  • Includes a number of real-world examples that readers can implement in their own work:

      • robot navigation and object selection

      • image and video quality assessment

      • image and video coding

  • Provides code that users can apply in practical attention models and mechanisms

Table of Contents

  1. Cover
  2. Title Page
  3. Copyright
  4. Preface
  5. Part I: Basic Concepts and Theory
    1. Chapter 1: Introduction to Visual Attention
      1. 1.1 The Concept of Visual Attention
      2. 1.2 Types of Selective Visual Attention
      3. 1.3 Change Blindness and Inhibition of Return
      4. 1.4 Visual Attention Model Development
      5. 1.5 Scope of This Book
      6. References
    2. Chapter 2: Background of Visual Attention – Theory and Experiments
      1. 2.1 Human Visual System (HVS)
      2. 2.2 Feature Integration Theory (FIT) of Visual Attention
      3. 2.3 Guided Search Theory
      4. 2.4 Binding Theory Based on Oscillatory Synchrony
      5. 2.5 Competition, Normalization and Whitening
      6. 2.6 Statistical Signal Processing
      7. References
  6. Part II: Computational Attention Models
    1. Chapter 3: Computational Models in the Spatial Domain
      1. 3.1 Baseline Saliency Model for Images
      2. 3.2 Modelling for Videos
      3. 3.3 Variations and More Details of BS Model
      4. 3.4 Graph-based Visual Saliency
      5. 3.5 Attention Modelling Based on Information Maximizing
      6. 3.6 Discriminant Saliency Based on Centre–Surround
      7. 3.7 Saliency Using More Comprehensive Statistics
      8. 3.8 Saliency Based on Bayesian Surprise
      9. 3.9 Summary
      10. References
    2. Chapter 4: Fast Bottom-up Computational Models in the Spectral Domain
      1. 4.1 Frequency Spectrum of Images
      2. 4.2 Spectral Residual Approach
      3. 4.3 Phase Fourier Transform Approach
      4. 4.4 Phase Spectrum of the Quaternion Fourier Transform Approach
      5. 4.5 Pulsed Discrete Cosine Transform Approach
      6. 4.6 Divisive Normalization Model in the Frequency Domain
      7. 4.7 Amplitude Spectrum of Quaternion Fourier Transform (AQFT) Approach
      8. 4.8 Modelling from a Bit-stream
      9. 4.9 Further Discussions of Frequency Domain Approach
      10. References
    3. Chapter 5: Computational Models for Top-down Visual Attention
      1. 5.1 Attention of Population-based Inference
      2. 5.2 Hierarchical Object Search with Top-down Instructions
      3. 5.3 Computational Model under Top-down Influence
      4. 5.4 Attention with Memory of Learning and Amnesic Function
      5. 5.5 Top-down Computation in the Visual Attention System: VOCUS
      6. 5.6 Hybrid Model of Bottom-up Saliency with Top-down Attention Process
      7. 5.7 Top-down Modelling in the Bayesian Framework
      8. 5.8 Summary
      9. References
    4. Chapter 6: Validation and Evaluation for Visual Attention Models
      1. 6.1 Simple Man-made Visual Patterns
      2. 6.2 Human-labelled Images
      3. 6.3 Eye-tracking Data
      4. 6.4 Quantitative Evaluation
      5. 6.5 Quantifying the Performance of a Saliency Model to Human Eye Movement in Static and Dynamic Scenes
      6. 6.6 Spearman's Rank Order Correlation with Visual Conspicuity
      7. References
  7. Part III: Applications of Attention Selection Models
    1. Chapter 7: Applications in Computer Vision, Image Retrieval and Robotics
      1. 7.1 Object Detection and Recognition in Computer Vision
      2. 7.2 Attention Based Object Detection and Recognition in a Natural Scene
      3. 7.3 Object Detection and Recognition in Satellite Imagery
      4. 7.4 Image Retrieval via Visual Attention
      5. 7.5 Applications of Visual Attention in Robots
      6. 7.6 Summary
      7. References
    2. Chapter 8: Application of Attention Models in Image Processing
      1. 8.1 Attention-modulated Just Noticeable Difference
      2. 8.2 Use of Visual Attention in Quality Assessment
      3. 8.3 Applications in Image/Video Coding
      4. 8.4 Visual Attention for Image Retargeting
      5. 8.5 Application in Compressive Sampling
      6. 8.6 Summary
      7. References
  8. Part IV: Summary
    1. Chapter 9: Summary, Further Discussions and Conclusions
      1. 9.1 Summary
      2. 9.2 Further Discussions
      3. 9.3 Conclusions
      4. References
  9. Index