In this paper we describe a biologically constrained architecture for developmental learning of eye-head gaze control on an iCub robot. In contrast to other computational implementations, the developmental approach aims to acquire sensorimotor competence through growth processes modelled on data and theory from infant psychology. Constraints help shape learning in infancy by limiting the complexity of interactions between the body and environment, and we use this idea to produce efficient, effective learning in autonomous robots. Our architecture is based on current thinking surrounding the gaze mechanism, and experimentally derived models of stereotypical eye-head gaze contributions. It is built using our proven constraint-based field-mapping approach. We identify stages in the development of infant gaze control, and propose a framework of artificial constraints to shape learning on the robot in a similar manner. We demonstrate the impact these constraints have on learning, and the resulting ability of the robot to make controlled gaze shifts.
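The staged, constraint-driven idea in the abstract can be sketched in miniature: learn a discretised field map from retinal target offset to an eye command, with a developmental constraint that freezes the head joint until eye-only gaze is accurate, then releases it. Everything below (class name, bin counts, thresholds, the 0.3 head-contribution factor) is an illustrative assumption, not the authors' actual architecture.

```python
import random


class StagedGazeLearner:
    """Toy sketch of constraint-staged learning of a 1-D gaze map.

    Learns a mapping from retinal target offset to an eye motor
    command, discretised into bins (a crude 'field map').  A
    developmental constraint first locks the head; once eye-only
    accuracy passes a threshold, the head joint is released and
    the map re-adapts to share the gaze shift between eye and head.
    All names and numbers are illustrative, not the paper's model.
    """

    def __init__(self, n_bins=20, max_offset=1.0):
        self.n_bins = n_bins
        self.max_offset = max_offset
        self.eye_map = [0.0] * n_bins   # learned eye command per bin
        self.head_unlocked = False      # developmental constraint flag

    def _bin(self, offset):
        # map an offset in [-max_offset, max_offset] to a bin index
        frac = (offset + self.max_offset) / (2 * self.max_offset)
        return min(self.n_bins - 1, max(0, int(frac * self.n_bins)))

    def act(self, offset):
        eye = self.eye_map[self._bin(offset)]
        # head contributes a fixed fraction only after the constraint lifts
        head = 0.3 * offset if self.head_unlocked else 0.0
        return eye, head

    def learn(self, offset):
        # execute a gaze shift, observe residual error, correct the map
        eye, head = self.act(offset)
        residual = offset - (eye + head)
        self.eye_map[self._bin(offset)] += 0.5 * residual  # delta rule
        return abs(residual)

    def train(self, trials=2000, release_error=0.05, seed=0):
        rng = random.Random(seed)
        recent = []
        for _ in range(trials):
            err = self.learn(rng.uniform(-self.max_offset, self.max_offset))
            recent.append(err)
            if len(recent) > 100:
                recent.pop(0)
            # lift the head constraint once eye-only gaze is accurate
            if (not self.head_unlocked and len(recent) == 100
                    and sum(recent) / 100 < release_error):
                self.head_unlocked = True
        return sum(recent) / len(recent)
```

Running `train()` shows the developmental trajectory: error falls during the eye-only stage, the head is released, error transiently rises as head motion perturbs the learned map, then falls again as the field map reconverges to the shared eye-head contribution.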