How can we build robots that engage in fluid spoken conversations with people, moving beyond canned responses to words and towards actual understanding? As a step towards addressing this question, we introduce a robotic architecture that provides a basis for grounding word meanings. The architecture supplies perceptual, procedural, and affordance representations for grounding words, and a perceptually-coupled on-line simulator enables sensory-motor representations that can shift points of view. Taken together, we show that these components form a rich set of data structures and procedures that lay the foundations for grounding the meanings of certain classes of words.
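To make the idea of multiple grounding representations concrete, the following is a minimal sketch of how a lexicon might associate words with perceptual, procedural, and affordance groundings. All class and function names here (`PerceptualGrounding`, `Lexicon.ground`, etc.) are hypothetical illustrations, not the paper's actual data structures.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical grounding types -- illustrative only, not the paper's API.

@dataclass
class PerceptualGrounding:
    """Links a word to a sensory classifier (e.g. a color or shape test)."""
    classifier: Callable[[dict], bool]  # tests a percept dictionary

@dataclass
class ProceduralGrounding:
    """Links a word to a motor routine the robot can execute."""
    routine: Callable[[], None]

@dataclass
class AffordanceGrounding:
    """Links a word to an action paired with its expected outcome."""
    action: str
    expected_effect: str

@dataclass
class Lexicon:
    """Maps words to whichever grounding representations apply to them."""
    entries: Dict[str, List[object]] = field(default_factory=dict)

    def ground(self, word: str, grounding: object) -> None:
        """Attach one more grounding representation to a word."""
        self.entries.setdefault(word, []).append(grounding)

    def understands(self, word: str) -> bool:
        """A word is 'understood' here if it has at least one grounding."""
        return bool(self.entries.get(word))

# Example: one word per grounding type.
lex = Lexicon()
lex.ground("red", PerceptualGrounding(lambda p: p.get("hue") == "red"))
lex.ground("push", ProceduralGrounding(lambda: None))
lex.ground("liftable", AffordanceGrounding("grasp-and-lift", "object rises"))
```

A single word may accumulate several groundings (e.g. "push" could carry both a motor routine and an affordance), which reflects the paper's point that different classes of words call on different representations.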