PARADISE: a framework for evaluating spoken dialogue agents
ACL '97 Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics
The Corpus DIMEx100: transcription and evaluation
Language Resources and Evaluation
Robotic orientation towards speaker for human-robot interaction
IBERAMIA'10 Proceedings of the 12th Ibero-American conference on Advances in artificial intelligence
Dialogue model specification and interpretation for intelligent multimodal HCI
IBERAMIA'10 Proceedings of the 12th Ibero-American conference on Advances in artificial intelligence
Specification and evaluation of a Spanish conversational system using dialogue models
IBERAMIA'10 Proceedings of the 12th Ibero-American conference on Advances in artificial intelligence
Enabling Multimodal Human–Robot Interaction for the Karlsruhe Humanoid Robot
IEEE Transactions on Robotics
Gesture-based interaction with voice feedback for a tour-guide robot
Journal of Visual Communication and Image Representation
In this paper, we present the development of a tour-guide robot that conducts a poster session in spoken Spanish. The robot is able to navigate around its environment, visually identify informational posters, and explain the sections of a poster that users request via pointing gestures. We specify the task by means of dialogue models. A dialogue model defines conversational situations, expectations, and robot actions. Dialogue models are integrated into a novel cognitive architecture that allows us to coordinate both human-robot interaction and robot capabilities in a flexible and simple manner. Our robot also incorporates a confidence score on visual outcomes, the history of the conversation, and error-prevention strategies. Our initial evaluation of the dialogue structure shows the reliability of the overall approach and the suitability of our dialogue model and architecture for representing complex human-robot interactions, with promising results.
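The abstract describes a dialogue model as a set of conversational situations, each pairing user expectations with robot actions, plus a conversation history and error-prevention strategies. A minimal sketch of such a structure might look like the following; all class and action names here are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a dialogue model: conversational situations,
# each mapping expected user inputs to (robot action, next situation).
# Names are illustrative only.

class Situation:
    def __init__(self, name, expectations):
        self.name = name
        # expectations: expected user input -> (robot_action, next_situation)
        self.expectations = expectations

class DialogueModel:
    def __init__(self, situations, start):
        self.situations = {s.name: s for s in situations}
        self.current = self.situations[start]
        self.history = []  # conversation history, as the abstract mentions

    def step(self, user_input):
        # Unexpected input falls back to a recovery action in the same
        # situation -- one possible error-prevention strategy.
        action, nxt = self.current.expectations.get(
            user_input, ("ask_to_repeat", self.current.name))
        self.history.append((self.current.name, user_input, action))
        self.current = self.situations[nxt]
        return action

model = DialogueModel(
    situations=[
        Situation("greet", {"hello": ("introduce_poster", "poster")}),
        Situation("poster", {"point_left": ("explain_section", "poster"),
                             "goodbye": ("farewell", "greet")}),
    ],
    start="greet",
)
print(model.step("hello"))       # -> introduce_poster
print(model.step("point_left"))  # -> explain_section
```

In this sketch the model is a simple finite-state interpreter; the paper's architecture is richer (it coordinates navigation, vision confidence scores, and gesture input), but the situation/expectation/action triple is the core idea.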