5.8 Summary

In summary, computational attention models tuned by top-down attention differ widely in how they learn and store knowledge and in how they modulate bottom-up attention. For knowledge learning, decision trees (Sections 5.3 and 5.4), neural networks (ART, Section 5.6) and support vector machines (SVM, Section 5.7) are commonly employed to learn and store prior knowledge during the training stage, while the required target's features are typically held in working memory and in short- and long-term memory; the trained decision trees or neural networks themselves may therefore be regarded as working memory or long-term memory. For top-down adjustment, the approaches include the biologically plausible models based on cell population inference (Section 5.1), the models that weight feature maps or feature channels according to the required target (Sections 5.3–5.5), the model that takes top-down instructions directly from the human brain to realize hierarchical search (Section 5.2), and the conditional probability computational model (Section 5.7).
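The weighted feature-map idea (Sections 5.3–5.5) can be illustrated with a minimal sketch: bottom-up feature maps are combined into a single saliency map, with each channel scaled by a target-dependent weight. The channel names and weights below are hypothetical placeholders, not values from the models in this chapter.

```python
import numpy as np

def topdown_saliency(feature_maps, target_weights):
    """Combine bottom-up feature maps into one saliency map,
    weighting each channel by its learned relevance to the target.

    feature_maps: dict mapping channel name -> 2-D array (same shape)
    target_weights: dict mapping channel name -> float weight
    """
    shape = next(iter(feature_maps.values())).shape
    saliency = np.zeros(shape)
    for name, fmap in feature_maps.items():
        # channels absent from the weight table contribute nothing
        saliency += target_weights.get(name, 0.0) * fmap
    # normalize to [0, 1] so maps remain comparable across frames
    rng = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / rng if rng > 0 else saliency

# Hypothetical example: searching for a red target boosts the
# "red" channel relative to the "blue" channel.
maps = {"red": np.array([[0.0, 1.0], [0.0, 0.0]]),
        "blue": np.array([[1.0, 0.0], [0.0, 0.0]])}
weights = {"red": 1.0, "blue": 0.1}
sal = topdown_saliency(maps, weights)
```

In a full model, the weights would come from the stored target knowledge (e.g., a trained decision tree or network), so the same bottom-up maps yield different saliency depending on the current search goal.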

This chapter has discussed these different types of computational models with top-down tuning in order to present readers with the existing ways to implement top-down computation, ranging from complex computational models that simulate the human brain to simply realizable models for engineering applications.
