Robot vision systems inspired by human vision need to employ mechanisms similar to those that have proven crucial to human visual performance. One such mechanism is attentive perception. Findings from vision science suggest that attentive perception requires several properties: a retina with a fovea-periphery distinction; an attention mechanism that can be shifted both mechanically and internally; an extensive set of visual primitives that enable different representation modes; an integration mechanism that can infer the appropriate visual information despite eye, head, body, and target motion; and, finally, memory for guiding eye movements and modeling the environment. In this paper we present an attentively "perceiving" robot called APES. The novelty of the system is that it incorporates all of these properties simultaneously. As we explain, original approaches must be taken to realize each property so that all of them can be integrated in a single attentive perception framework.
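The abstract does not specify how the fovea-periphery distinction is implemented in APES. As a rough illustration only, a retina-like sensor of this kind is often approximated by space-variant (log-polar) sampling: dense samples near the gaze center, exponentially sparser samples toward the periphery. The sketch below assumes a plain NumPy intensity image; all function names and parameters (`log_polar_samples`, `n_rings`, `n_wedges`, `r_min`) are hypothetical and not taken from the paper.

```python
import numpy as np

def log_polar_samples(h, w, n_rings=16, n_wedges=32, r_min=2.0):
    """Retina-like sample coordinates: ring radii grow geometrically,
    so sampling is dense near the center (fovea) and sparse in the
    periphery."""
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx)
    # geometric progression of radii -> fovea-periphery distinction
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    angles = 2 * np.pi * np.arange(n_wedges) / n_wedges
    ys = (cy + radii[:, None] * np.sin(angles)[None, :]).round().astype(int)
    xs = (cx + radii[:, None] * np.cos(angles)[None, :]).round().astype(int)
    return np.clip(ys, 0, h - 1), np.clip(xs, 0, w - 1)

def foveate(img, **kw):
    """Sample an intensity image at the retina-like locations,
    returning an (n_rings, n_wedges) log-polar 'cortical' map."""
    ys, xs = log_polar_samples(*img.shape, **kw)
    return img[ys, xs]

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
cortical = foveate(img)
print(cortical.shape)  # (16, 32): 512 samples stand in for 4096 pixels
```

Shifting the attention mechanism "internally", in this simplified view, amounts to recentering `cy, cx` on the attended location before resampling, while a mechanical shift would move the camera itself.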