Multimodal word learning from infant directed speech

  • Authors:
  • Jonas Hörnstein, Institute for System and Robotics, Instituto Superior Técnico, Lisbon, Portugal
  • Lisa Gustavsson, Department of Linguistics, Stockholm University, Stockholm, Sweden
  • Francisco Lacerda, Department of Linguistics, Stockholm University, Stockholm, Sweden
  • José Santos-Victor, Institute for System and Robotics, Instituto Superior Técnico, Lisbon, Portugal

  • Venue:
  • IROS '09: Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • Year:
  • 2009


Abstract

When adults talk to infants, they do so in a different way than when they communicate with other adults. This kind of Infant Directed Speech (IDS) typically highlights target words using focal stress and utterance-final position. Speech directed to infants also often refers to objects, people, and events in the world surrounding the infant, so the sound sequences the infant hears are very likely to co-occur with actual objects or events in the infant's visual field. In this work we present a model that is able to learn word-like structures from multimodal information sources without any pre-programmed linguistic knowledge, by taking advantage of the characteristics of IDS. The model is implemented on a humanoid robot platform and is able to extract word-like patterns and associate them with objects in the visual surroundings.
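The abstract does not spell out how the association step works, but the core co-occurrence idea it describes can be illustrated with a minimal cross-situational sketch: count how often each extracted word-like pattern is heard while each object is visible, then link every pattern to its most frequent co-occurring object. The class and the example data below are hypothetical illustrations, not the authors' actual implementation, which operates on raw audio and vision on a humanoid robot.

```python
from collections import defaultdict

class CrossModalAssociator:
    """Toy cross-situational learner: accumulates co-occurrence counts
    between word-like audio patterns and objects present in the same
    scene, then maps each pattern to its most frequent object.
    (Illustrative sketch only, not the paper's model.)"""

    def __init__(self):
        # counts[word_pattern][object] = number of joint occurrences
        self.counts = defaultdict(lambda: defaultdict(int))

    def observe(self, word_patterns, visible_objects):
        # For each utterance, every extracted pattern is assumed to
        # co-occur with every object currently in the visual field.
        for w in word_patterns:
            for obj in visible_objects:
                self.counts[w][obj] += 1

    def associate(self, word_pattern):
        # Return the object seen most often when this pattern was heard,
        # or None if the pattern has never been observed.
        objs = self.counts.get(word_pattern)
        if not objs:
            return None
        return max(objs, key=objs.get)

# Hypothetical usage: across scenes, the pattern "ball" reliably
# co-occurs with the ball object, so the spurious pairings wash out.
learner = CrossModalAssociator()
learner.observe(["look", "ball"], ["ball", "cup"])
learner.observe(["nice", "ball"], ["ball"])
learner.observe(["cup"], ["cup", "ball"])
print(learner.associate("ball"))  # -> "ball"
```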