We present results of machine-learning experiments designed to identify user corrections of speech recognition errors in a corpus collected from a train-information spoken dialogue system. We investigate the predictive power of features automatically computable from the prosody of the turn, the speech recognition process, experimental conditions, and the dialogue history. Our best-performing features reduce classification error from baselines of 25.70-28.99% to 15.72%.
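To make the classification task concrete, here is a minimal, hypothetical sketch of a rule-based correction detector over the kinds of features the abstract mentions (prosody, recognizer confidence, dialogue history). The feature names, thresholds, and example turns are illustrative assumptions, not the paper's actual model or data:

```python
# Toy sketch of a correction classifier. All feature names and thresholds
# are hypothetical; they only illustrate the feature families described
# in the abstract (prosodic, ASR, and dialogue-history features).

def is_correction(turn):
    """Flag a user turn as a likely correction of an ASR error."""
    # Low recognizer confidence often accompanies corrections.
    if turn["asr_confidence"] < 0.4:
        return True
    # Corrections tend to be hyperarticulated: higher pitch, longer duration
    # (values here are normalized against the speaker's earlier turns).
    if turn["f0_max_norm"] > 1.5 and turn["duration_norm"] > 1.3:
        return True
    # Dialogue-history cue: the system's previous turn was a rejection.
    if turn["prev_system_rejection"] and turn["duration_norm"] > 1.1:
        return True
    return False

# Three invented example turns with gold labels.
turns = [
    {"asr_confidence": 0.9, "f0_max_norm": 1.0, "duration_norm": 1.0,
     "prev_system_rejection": False, "label": False},
    {"asr_confidence": 0.3, "f0_max_norm": 1.6, "duration_norm": 1.4,
     "prev_system_rejection": True, "label": True},
    {"asr_confidence": 0.8, "f0_max_norm": 1.7, "duration_norm": 1.5,
     "prev_system_rejection": False, "label": True},
]

errors = sum(is_correction(t) != t["label"] for t in turns)
error_rate = errors / len(turns)
print(f"classification error: {error_rate:.2%}")
```

In the experiments themselves such hand-written thresholds would be replaced by a learned model; the point of the sketch is only how turn-level features map to a binary correction/non-correction decision whose error rate is compared against a majority-class baseline.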