Knowledge acquisition through human–robot multimodal interaction
Intelligent Service Robotics
In research on human-robot interaction, interest is currently shifting from uni-modal dialog systems to multi-modal interaction schemes. We present a system for human-style interaction with a robot, integrated on our mobile robot BIRON. To model the dialog we adopt an extended grounding concept with a mechanism for handling multi-modal input and output, in which object references are resolved through interaction with an object attention system (OAS). The OAS integrates multiple inputs from, e.g., the object and gesture recognition systems and provides the information for a common representation. This representation can be accessed by both modules and combines symbolic verbal attributes with sensor-based features. We argue that such a representation is necessary to achieve robust and efficient information processing.
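The common representation described in the abstract could be sketched roughly as follows. This is a minimal illustration only, assuming a simple attribute-matching scheme for reference resolution; all names, fields, and the matching logic are hypothetical and not taken from the actual system:

```python
from dataclasses import dataclass

@dataclass
class ObjectRepresentation:
    """Hypothetical shared entry, accessible by both the dialog
    module and the object attention system (OAS)."""
    object_id: int
    symbolic: dict   # verbal attributes, e.g. {"color": "red", "type": "cup"}
    features: list   # sensor-based features, e.g. an image position or descriptor

def resolve_reference(attributes, memory):
    """Return stored objects whose symbolic attributes are consistent
    with the verbal description extracted from the utterance."""
    return [obj for obj in memory
            if all(obj.symbolic.get(k) == v for k, v in attributes.items())]

# Illustrative scene memory with two recognized objects.
memory = [
    ObjectRepresentation(1, {"color": "red", "type": "cup"}, [0.2, 0.5]),
    ObjectRepresentation(2, {"color": "blue", "type": "cup"}, [0.7, 0.1]),
]

# Resolving "the red cup" narrows the candidates to object 1.
matches = resolve_reference({"color": "red", "type": "cup"}, memory)
```

In such a scheme, unresolved or ambiguous references (zero or multiple matches) would trigger a grounding act, e.g. a clarification question, rather than an immediate action.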