What can I say?: evaluating a spoken language interface to Email
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Designing and Evaluating an Adaptive Spoken Dialogue System
User Modeling and User-Adapted Interaction
Towards developing general models of usability with PARADISE
Natural Language Engineering
PARADISE: a framework for evaluating spoken dialogue agents
ACL '97 Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics
BLEU: a method for automatic evaluation of machine translation
ACL '02 Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics
Predicting the quality and usability of spoken dialogue services
Speech Communication
Journal of Artificial Intelligence Research
Explorations in engagement for humans and robots
Artificial Intelligence
Evaluating description and reference strategies in a cooperative human-robot dialogue system
IJCAI '09 Proceedings of the 21st International Joint Conference on Artificial Intelligence
INLG '08 Proceedings of the Fifth International Natural Language Generation Conference
Integrating language, vision and action for human robot dialog systems
UAHCI'07 Proceedings of the 4th international conference on Universal access in human-computer interaction: ambient interaction
Situated reference in a hybrid human-robot interaction system
INLG '10 Proceedings of the 6th International Natural Language Generation Conference
Talking with robots about objects: a system-level evaluation in HRI
HRI '12 Proceedings of the seventh annual ACM/IEEE international conference on Human-Robot Interaction
A regression-based approach to modeling addressee backchannels
SIGDIAL '12 Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue
A task-performance evaluation of referring expressions in situated collaborative task dialogues
Language Resources and Evaluation
We present a human-robot dialogue system that enables a robot to work together with a human user to build wooden construction toys. We then describe a study in which naïve subjects interacted with this system under a range of conditions and then completed a user-satisfaction questionnaire. The results of this study provide a wide range of subjective and objective measures of the quality of the interactions. To assess which aspects of the interaction had the greatest impact on the users' opinions of the system, we used a method based on the PARADISE evaluation framework (Walker et al., 1997) to derive a performance function from our data. The major contributors to user satisfaction were the number of repetition requests (which had a negative effect on satisfaction), the dialogue length, and the users' recall of the system instructions (both of which contributed positively).
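The PARADISE framework derives such a performance function by multiple linear regression: user satisfaction is modeled as a weighted sum of interaction measures, each normalized to zero mean and unit variance so the fitted weights are comparable across measures. A minimal sketch of that fitting step (the per-dialogue numbers below are invented for illustration and are not the study's data):

```python
import numpy as np

# Hypothetical per-dialogue measures, one row per subject.
# Columns: repetition requests, dialogue length (turns), instruction recall.
X = np.array([
    [5, 20, 0.60],
    [1, 35, 0.90],
    [3, 25, 0.70],
    [0, 40, 0.95],
    [4, 22, 0.65],
    [2, 30, 0.80],
], dtype=float)
# Hypothetical user-satisfaction scores from the questionnaire.
y = np.array([2.5, 4.5, 3.0, 5.0, 2.8, 4.0])

# PARADISE-style z-score normalization of each measure.
Z = (X - X.mean(axis=0)) / X.std(axis=0)

# Least-squares fit: satisfaction ~ intercept + sum_i w_i * N(m_i).
A = np.column_stack([np.ones(len(Z)), Z])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
intercept, weights = coef[0], coef[1:]
```

The sign and magnitude of each entry in `weights` then indicate how strongly the corresponding normalized measure contributes to predicted satisfaction, which is how the study identifies repetition requests, dialogue length, and instruction recall as the major contributors.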