We propose a biologically motivated computational model for learning task-driven, object-based visual attention control in interactive environments. In this model, top-down attention is learned interactively and is used to search for a desired object in the scene by biasing the bottom-up attention, forming a need-based, object-driven state representation of the environment. The model consists of three layers. First, in the early visual processing layer, the most salient location in the scene is derived using a biased saliency-based bottom-up model of visual attention. A cognitive component in the higher visual processing layer then performs an application-specific operation, such as object recognition, at the focus of attention. From this information, a state is derived in the decision-making and learning layer. Top-down attention is learned by the U-TREE algorithm, which successively grows an object-based binary tree. Internal nodes in this tree check for the existence of a specific object in the scene by biasing the early vision and object recognition components. Its leaves point to states in the action-value table, and motor actions are associated with the leaves. After performing a motor action, the agent receives a reinforcement signal from the critic; this signal is alternately used to modify the tree or to update the action-selection policy. The proposed model is evaluated on visual navigation tasks, where the obtained results lend support to the applicability and usefulness of the method for robotics.
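The decision-making and learning layer described above can be sketched in code. The following is a minimal illustrative sketch, not the paper's implementation: the names (`Node`, `classify`, `q_update`), the set-of-objects scene representation, and the one-step Q-learning update with epsilon-greedy selection are all assumptions made for clarity. Internal nodes test for the presence of a specific object (standing in for the biased attention/recognition query), and leaves hold entries of the action-value table.

```python
import random

class Node:
    """Node of an object-based U-TREE-style state tree (illustrative sketch).

    Internal nodes check for one object; leaves hold per-action Q-values.
    """
    def __init__(self, obj=None):
        self.obj = obj       # object tested by an internal node (None for a leaf)
        self.present = None  # child followed when the object is found in the scene
        self.absent = None   # child followed when it is not
        self.q = {}          # leaf only: action -> Q-value (action-value table entry)

    def is_leaf(self):
        return self.obj is None

def classify(node, scene_objects):
    """Descend the tree to the leaf (state) matching the current scene.

    `scene_objects` stands in for the objects the attention/recognition
    layers report as present.
    """
    while not node.is_leaf():
        node = node.present if node.obj in scene_objects else node.absent
    return node

def select_action(leaf, actions, epsilon=0.1):
    """Epsilon-greedy action selection over the leaf's action values."""
    if random.random() < epsilon or not leaf.q:
        return random.choice(actions)
    return max(leaf.q, key=leaf.q.get)

def q_update(leaf, action, reward, next_leaf, alpha=0.1, gamma=0.9):
    """One-step Q-learning update using the critic's reinforcement signal."""
    best_next = max(next_leaf.q.values()) if next_leaf.q else 0.0
    old = leaf.q.get(action, 0.0)
    leaf.q[action] = old + alpha * (reward + gamma * best_next - old)
```

For example, a tree with a single internal node testing for a (hypothetical) "door" object routes a scene containing a door to one leaf and all other scenes to the other; a reward of 1.0 for moving forward at the door state then nudges that leaf's Q-value toward the reward. Growing the tree itself (adding internal nodes when a leaf's state proves ambiguous) is the part U-TREE handles and is omitted here.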