Data mining to support human-machine dialogue for autonomous agents
ADMI'11 Proceedings of the 7th international conference on Agents and Data Mining Interaction
In a Wizard-of-Oz experiment with multiple wizard subjects, each wizard viewed automatic speech recognition (ASR) results for utterances whose interpretation is critical to task success: requests for books by title from a library database. To avoid non-understandings, the wizard directly queried the application database with the ASR hypothesis (voice search). To learn how to avoid misunderstandings, we investigated how wizards dealt with uncertainty in voice search results. Wizards were quite successful at selecting the correct title from query results that included a match. The most successful wizard could also tell when the query results did not contain the requested title. Our learned models of the best wizard's behavior combine features available to the wizards with some that were not, such as recognition confidence and acoustic model scores.
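The voice-search step described above, querying the title database directly with a noisy ASR hypothesis and deciding whether any returned candidate is the requested title, can be sketched as follows. This is a minimal illustration, not the paper's system: it assumes a toy in-memory catalog and uses plain string similarity (`difflib`) with a hypothetical rejection threshold in place of the recognition-confidence and acoustic-model features the learned models actually use.

```python
from difflib import SequenceMatcher

# Toy "library database" of book titles (illustrative only).
CATALOG = [
    "the old man and the sea",
    "a tale of two cities",
    "pride and prejudice",
]

def voice_search(asr_hypothesis, catalog=CATALOG, reject_threshold=0.6):
    """Rank catalog titles by string similarity to the ASR hypothesis.

    Returns (best_title, score). best_title is None when no candidate
    clears the threshold, mimicking the best wizard's judgment that the
    query results do not contain the requested title.
    """
    hyp = asr_hypothesis.lower()
    # Score every catalog title against the (possibly misrecognized) hypothesis.
    scored = [(SequenceMatcher(None, hyp, title).ratio(), title)
              for title in catalog]
    score, title = max(scored)
    if score < reject_threshold:
        return None, score  # treat as "no match in results"
    return title, score
```

For example, a misrecognized request like "the old man and the see" still retrieves the correct title, while an out-of-catalog request falls below the threshold and is rejected rather than mismatched.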