The rapid increase in the size and complexity of sensory systems demands attention control in real-world robotic tasks. However, attention control and the task are often highly interlaced, which calls for interactive learning. In this paper, a framework called METAL (Mixture-of-Experts Task and Attention Learning) is proposed to cope with this complex learning problem. METAL consists of three consecutive learning phases: the first two phases provide initial knowledge about the task, while in the third phase attention control is learned concurrently with the task. The mind of the robot is composed of a set of tiny agents learning and acting in parallel, in addition to an attention control learning (ACL) agent. Each tiny agent provides the ACL agent with partial knowledge about the task in the form of its decision preference, i.e., its policy. In the third phase, the ACL agent learns how to make the final decision by attending to the smallest possible number of tiny agents. It acts on a continuous decision space, which gives METAL the ability to integrate different sources of knowledge with ease. A Bayesian continuous reinforcement learning method is utilized at both levels of learning, on the perceptual and decision spaces. Implementation of METAL on an E-puck robot in a miniature highway driving task, along with further simulation studies in the Webots™ environment, verifies the applicability and effectiveness of the proposed framework: a smooth driving behavior is shaped. It is also shown that even though the robot learns to discard some sensory data, the probability of perceptual aliasing arising in the decision space is very low, which means the robot can learn the task and attention control simultaneously.
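The decision-aggregation idea described above — tiny agents reporting continuous action preferences, and an ACL agent attending to as few of them as possible — can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the class names, the fixed attention weights, and the top-k selection and averaging rules are all assumptions standing in for the learned Bayesian continuous RL components.

```python
from dataclasses import dataclass

@dataclass
class TinyAgent:
    """Hypothetical per-channel expert: maps its local observation to a
    continuous action preference (its 'policy' output) in [-1, 1]."""
    name: str

    def preference(self, observation: float) -> float:
        # Stand-in for a learned per-channel policy; here just a clamp.
        return max(-1.0, min(1.0, observation))

class ACLAgent:
    """Hypothetical attention-control agent: queries only the top-k tiny
    agents (by attention weight) and fuses their preferences into one
    continuous decision."""

    def __init__(self, agents, attention_weights, k):
        self.agents = agents
        self.weights = attention_weights  # learned in METAL; fixed here
        self.k = k                        # attend to at most k tiny agents

    def decide(self, observations):
        # Rank tiny agents by attention weight and query only the top-k,
        # mimicking the goal of attending the fewest agents possible.
        ranked = sorted(range(len(self.agents)),
                        key=lambda i: self.weights[i], reverse=True)
        attended = ranked[:self.k]
        prefs = [self.agents[i].preference(observations[i]) for i in attended]
        return sum(prefs) / len(prefs)  # continuous final decision

# Usage: three sensory-channel agents; the ACL agent consults only two.
agents = [TinyAgent("left"), TinyAgent("center"), TinyAgent("right")]
acl = ACLAgent(agents, attention_weights=[0.2, 0.9, 0.5], k=2)
action = acl.decide([0.3, -0.4, 1.5])  # "left" channel is never queried
```

Because the fused output lives on a continuous decision space, preferences from heterogeneous sources can be combined by simple weighted averaging, which is the ease-of-integration property the abstract refers to.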