Empirically evaluating an adaptable spoken dialogue system
UM '99 Proceedings of the seventh international conference on User modeling
An architecture for more realistic conversational systems
Proceedings of the 6th international conference on Intelligent user interfaces
Characterizing and Predicting Corrections in Spoken Dialogue Systems
Computational Linguistics
Error handling in the RavenClaw dialog management framework
HLT '05 Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing
Error awareness and recovery in conversational spoken language interfaces
The RavenClaw dialog management framework: Architecture and systems
Computer Speech and Language
Learning to interpret utterances using dialogue history
EACL '09 Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics
Improving a virtual human using a model of degrees of grounding
IJCAI'09 Proceedings of the 21st international joint conference on Artificial intelligence
Learning about voice search for spoken dialogue systems
HLT '10 Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics
Conversation as action under uncertainty
UAI'00 Proceedings of the Sixteenth conference on Uncertainty in artificial intelligence
PARADISE-style evaluation of a human-human library corpus
SIGDIAL '11 Proceedings of the SIGDIAL 2011 Conference
Data mining to support human-machine dialogue for autonomous agents
ADMI'11 Proceedings of the 7th international conference on Agents and Data Mining Interaction
Collaborative effort towards common ground in situated human-robot dialogue
Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction
This paper reports on an experiment investigating clarification subdialogues under intentionally noisy speech recognition. The architecture learns weights for mixtures of grounding strategies from examples provided by a human wizard embedded in the system. Results indicate that the architecture reliably learns to eliminate misunderstandings despite a high word error rate.