Miscommunication in speech recognition systems is unavoidable, but a detailed characterization of user corrections can enable speech systems to identify when a correction is taking place and to recognize the content of correction utterances more accurately. In this paper we investigate how users adapt their speech when they encounter recognition errors in interactions with a voice-in/voice-out spoken language system. Analyzing more than 300 pairs of original and repeat correction utterances, matched on speaker and lexical content, we found overall increases in both utterance and pause duration from original to correction. Interestingly, corrections of misrecognition errors (CMEs) exhibited significantly heightened pitch variability, while corrections of rejection errors (CREs) showed only a small but significant decrease in pitch minimum. CMEs demonstrated much greater increases in measures of duration and pitch variability than CREs. These contrasts allow the development of decision trees that distinguish CMEs from CREs and from original inputs at 70--75% accuracy based on duration, pitch, and amplitude features.
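The classification setup described above can be sketched in pure Python. Everything below is illustrative: the prosodic feature values are synthetic (not the paper's measurements), and a minimal CART-style tree stands in for the decision trees used in the study.

```python
import random

# Hypothetical per-utterance prosodic features (synthetic illustrations):
# [utterance duration (s), pause duration (s), pitch variability].
def synthetic_utterance(kind, rng):
    if kind == "original":
        return [rng.gauss(1.5, 0.2), rng.gauss(0.10, 0.03), rng.gauss(1.0, 0.2)]
    if kind == "CRE":  # rejection correction: longer, more pausing
        return [rng.gauss(1.9, 0.2), rng.gauss(0.25, 0.05), rng.gauss(1.1, 0.2)]
    # "CME": misrecognition correction: longer and much higher pitch variability
    return [rng.gauss(1.9, 0.2), rng.gauss(0.20, 0.05), rng.gauss(1.8, 0.3)]

def gini(labels):
    n = len(labels)
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(X, y):
    best = None  # (weighted impurity, feature index, threshold)
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            left = [yi for x, yi in zip(X, y) if x[f] <= t]
            right = [yi for x, yi in zip(X, y) if x[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def build_tree(X, y, depth=0, max_depth=3):
    if len(set(y)) == 1 or depth == max_depth:
        return max(set(y), key=y.count)  # leaf: majority label
    split = best_split(X, y)
    if split is None:
        return max(set(y), key=y.count)
    _, f, t = split
    left = [(x, yi) for x, yi in zip(X, y) if x[f] <= t]
    right = [(x, yi) for x, yi in zip(X, y) if x[f] > t]
    return (f, t,
            build_tree([x for x, _ in left], [yi for _, yi in left], depth + 1, max_depth),
            build_tree([x for x, _ in right], [yi for _, yi in right], depth + 1, max_depth))

def predict(tree, x):
    while isinstance(tree, tuple):  # internal node: (feature, threshold, left, right)
        f, t, lo, hi = tree
        tree = lo if x[f] <= t else hi
    return tree

rng = random.Random(0)
kinds = ["original", "CRE", "CME"]
train = [(synthetic_utterance(k, rng), k) for k in kinds for _ in range(60)]
test = [(synthetic_utterance(k, rng), k) for k in kinds for _ in range(40)]

tree = build_tree([x for x, _ in train], [k for _, k in train])
acc = sum(predict(tree, x) == k for x, k in test) / len(test)
print(f"held-out accuracy: {acc:.2f}")
```

Because the synthetic classes overlap in duration but separate on pitch variability, the learned tree tends to split first on duration (originals vs. corrections) and then on pitch variability (CMEs vs. CREs), mirroring the contrasts the abstract reports.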