Humans maintain a body image of themselves, which plays a central role in controlling bodily movement, planning action, recognising and naming actions performed by others, and requesting or executing commands. This paper explores, through experiments with autonomous humanoid robots, how such a body image could form. Robots play a situated embodied language game called the Action Game, in which they ask each other to perform bodily actions. They start without any prior inventory of names, without categories for visually recognising the body movements of others, and without knowing the relation between visual images of motor behaviours carried out by others and their own motor behaviours. Through diagnostic and repair strategies carried out within the context of action games, they progressively self-organise an effective lexicon as well as bi-directional mappings between the visual and the motor domain. The agents thus establish and continuously adapt networks linking perception, body representation, action and language.
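To make the alignment dynamics concrete, the sketch below shows a minimal language-game loop in the spirit of the abstract: agents map action identifiers to competing names with scores, a speaker names an action, a hearer interprets it, and success or failure triggers reinforcement or a repair (adoption). All names here (`Agent`, `play_game`, the score values 0.5/0.1/0.2) are illustrative assumptions, not the paper's actual mechanism, which additionally grounds actions in visual and motor categories.

```python
import random

class Agent:
    """Minimal language-game agent: maps action ids to competing names with scores."""

    def __init__(self):
        # lexicon[action] = {name: score}
        self.lexicon = {}

    def name_for(self, action):
        """Speaker strategy: best-scoring name for an action; invent one if needed."""
        names = self.lexicon.setdefault(action, {})
        if not names:
            names["w%d" % random.randrange(10**6)] = 0.5  # invented word
        return max(names, key=names.get)

    def interpret(self, name):
        """Hearer strategy: action whose entry for `name` scores highest, or None."""
        best, best_score = None, 0.0
        for action, names in self.lexicon.items():
            if names.get(name, 0.0) > best_score:
                best, best_score = action, names[name]
        return best

    def reward(self, action, name):
        """Alignment after success: boost the used pair, laterally inhibit rivals."""
        names = self.lexicon.setdefault(action, {})
        names[name] = min(1.0, names.get(name, 0.0) + 0.1)
        for other in list(names):
            if other != name:
                names[other] -= 0.2
                if names[other] <= 0.0:
                    del names[other]  # competitor pruned from the lexicon

    def adopt(self, action, name):
        """Repair after failure: hearer stores the pair the speaker pointed out."""
        self.lexicon.setdefault(action, {}).setdefault(name, 0.5)

def play_game(speaker, hearer, actions):
    """One Action-Game round; returns True on communicative success."""
    action = random.choice(actions)
    name = speaker.name_for(action)
    if hearer.interpret(name) == action:
        speaker.reward(action, name)
        hearer.reward(action, name)
        return True
    hearer.adopt(action, name)  # diagnostic + repair: learn from the failure
    return False
```

Repeated over many pairwise games, the lateral inhibition in `reward` prunes synonyms so that a population converges on a shared name per action, which is the self-organisation effect the abstract describes (here only for the lexicon, without the visual-motor mapping).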