The Selective Tuning Model (STM) is a proposal for modelling visual attention in primates, including humans. Although supported by significant biological evidence, it has weaknesses; the main one addressed by this paper is that the levels of representation on which it was previously demonstrated (spatial Gaussian pyramids) were not biologically plausible. The motion domain was chosen because enough is known about motion processing to permit a reasonable attempt at defining the feedforward pyramid. The effort is unique in that, to our knowledge, no previous model combines a motion hierarchy with attention to motion. We propose a neurally inspired model of the primate visual motion system that explains how a hierarchical feedforward network, with layers representing cortical areas V1, MT, MST, and 7a, detects and classifies different kinds of motion patterns. STM is then integrated into this hierarchy, demonstrating that successfully attending to motion patterns results in the localization and labelling of those patterns.
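The overall scheme the abstract describes, a feedforward hierarchy (V1 → MT → MST → 7a) followed by a top-down winner-take-all pass that localizes the attended pattern back down to the input layer, can be sketched as follows. This is a minimal illustration only: the 2×2 max pooling stands in for the paper's motion-selective filtering, the layer sizes are placeholders, and the function names are ours, not the authors'.

```python
import numpy as np

def feedforward(v1):
    """Build the hierarchy bottom-up. Each layer pools the one below
    (placeholder 2x2 max pooling in place of real motion filters)."""
    layers = {"V1": v1}
    x = v1
    for name in ("MT", "MST", "7a"):
        h, w = x.shape
        x = x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
        layers[name] = x
    return layers

def attend(layers):
    """Top-down winner-take-all: pick the global winner at the top
    layer, then restrict each lower layer's competition to the
    winner's receptive field -- the core Selective Tuning idea of
    pruning the pyramid from the top down to localize the stimulus."""
    top = layers["7a"]
    r, c = np.unravel_index(np.argmax(top), top.shape)
    path = {"7a": (r, c)}
    for name in ("MST", "MT", "V1"):
        x = layers[name]
        # With 2x2 pooling, the winner's receptive field one layer
        # down is the corresponding 2x2 patch.
        sub = x[2 * r:2 * r + 2, 2 * c:2 * c + 2]
        dr, dc = np.unravel_index(np.argmax(sub), sub.shape)
        r, c = 2 * r + dr, 2 * c + dc
        path[name] = (r, c)
    return path
```

Feeding in an 8×8 "V1" response map with a single strong peak, `feedforward` produces 4×4, 2×2, and 1×1 layers, and `attend` traces the top-level winner back to the peak's V1 location, yielding both a label (the winning top-layer unit) and a localization, as in the abstract's description.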