This paper describes our Extensible Language Interface (ELI) for robots. The system interprets far-field speech commands to perform fetch-and-carry tasks, potentially for use in an eldercare context. By "extensible" we mean that the robot can learn new nouns and verbs through simple interaction with its user. An associated video [1] illustrates the range of phenomena handled by our implemented real-time system.
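To make the notion of an extensible lexicon concrete, the sketch below shows one minimal way such word learning could work: a command interpreter that maps known verbs and nouns to action and object symbols, and flags any unknown word so a clarification dialogue can teach it. This is an illustrative toy only, not the paper's actual ELI architecture; all class and method names (`Lexicon`, `teach_noun`, `teach_verb`, `interpret`) are hypothetical.

```python
# Toy "extensible lexicon" sketch, loosely in the spirit of ELI.
# Hypothetical names throughout; not the authors' implementation.

class Lexicon:
    """Maps spoken words to known action and object symbols."""

    def __init__(self):
        # Seed vocabulary; synonyms may map to the same symbol.
        self.verbs = {"fetch": "FETCH", "bring": "FETCH"}
        self.nouns = {"cup": "CUP"}

    def teach_verb(self, word, action):
        """Extend the verb vocabulary from a user interaction."""
        self.verbs[word] = action

    def teach_noun(self, word, obj):
        """Extend the noun vocabulary from a user interaction."""
        self.nouns[word] = obj

    def interpret(self, utterance):
        """Return (action, object), or ("UNKNOWN", word) to trigger teaching."""
        action = obj = None
        for token in utterance.lower().split():
            if token in self.verbs:
                action = self.verbs[token]
            elif token in self.nouns:
                obj = self.nouns[token]
            else:
                return ("UNKNOWN", token)  # start a clarification dialogue
        return (action, obj)

lex = Lexicon()
print(lex.interpret("fetch cup"))   # ('FETCH', 'CUP')
print(lex.interpret("fetch mug"))   # ('UNKNOWN', 'mug') -- word not yet known
lex.teach_noun("mug", "MUG")        # user supplies the new noun's meaning
print(lex.interpret("fetch mug"))   # ('FETCH', 'MUG')
```

In a real dialogue system the `("UNKNOWN", word)` result would drive a spoken clarification request to the user rather than a direct method call, but the core idea is the same: the word-to-symbol mapping is data, so the vocabulary grows at runtime.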