We present a system that models perception-action coupling through imitation and attention. Our interest is in imitation and, more generally, in social learning. Through social learning, the experience of an agent is governed by the actions of an expert, and the structures that develop within the agent's "brain" are shaped by its social situatedness. We are inspired by biological findings in primates on the existence of mirror neurons, which are believed to be involved in imitation. The visual and motor properties of these neurons suggest a tight perception-action coupling in which affordances could be expressed. Our system is designed to model the functional properties of mirror neurons and thereby express the functionality of objects. The system builds up perceptual and motoric structures from experience using temporal attention, and forms perceptual-motor connections between them. The experience arises through imitation, in which an agent perceives objects and the interactions performed upon them. We have successfully applied our system to three different platforms: two in simulation, and a third on a real robot learning from a human. In each experiment, the system segments the perceptual-motor experience into distinct structures that can be used to recognize and reproduce the task. Some unexpected results showed that the motoric complexity in these experiments was not high enough to expose the full potential of our system, and we suggest future work to address this limitation.
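The core idea of forming perceptual-motor connections through temporal attention can be sketched as a toy associative memory: perceptual and motor events that co-occur within an attention window become linked, and a percept later recalls its associated action. This is only an illustrative sketch, not the authors' implementation; the class, its window parameter, and the Hebbian-style counting scheme are all assumptions introduced here for illustration.

```python
from collections import defaultdict

class PerceptualMotorMemory:
    """Toy associative memory (hypothetical): links percepts to motor
    events that co-occur within a temporal attention window."""

    def __init__(self, window=1.0):
        self.window = window  # temporal attention span, in seconds (assumed)
        # links[percept][motor_event] -> co-occurrence count
        self.links = defaultdict(lambda: defaultdict(int))

    def observe(self, percepts, motor_events):
        """percepts and motor_events are lists of (timestamp, label) pairs."""
        for tp, p in percepts:
            for tm, m in motor_events:
                # attend only to events that fall inside the temporal window
                if abs(tp - tm) <= self.window:
                    self.links[p][m] += 1  # Hebbian-style strengthening

    def recall(self, percept):
        """Return the motor event most strongly associated with a percept."""
        candidates = self.links.get(percept)
        if not candidates:
            return None
        return max(candidates, key=candidates.get)

# Demonstration: seeing a cup co-occurs with a grasp; a ball with a push.
mem = PerceptualMotorMemory(window=0.5)
mem.observe(percepts=[(0.0, "cup"), (3.0, "ball")],
            motor_events=[(0.2, "grasp"), (3.1, "push")])
print(mem.recall("cup"))   # -> grasp
print(mem.recall("ball"))  # -> push
```

In this sketch, temporal proximity alone segments the continuous event stream into distinct percept-action pairings, echoing how the described system carves its perceptual-motor experience into reusable structures for recognition and reproduction.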