Spoken language understanding (SLU) aims at extracting meaning from natural language speech. Over the past decade, a variety of practical goal-oriented spoken dialog systems have been built for limited domains. SLU in these systems ranges from understanding phrases predetermined by fixed grammars, through extracting predefined named entities and classifying users' intents for call routing, to extracting combinations of users' intents and named entities. In this paper, we present the SLU system of VoiceTone®, a service through which AT&T develops, deploys, and hosts spoken dialog applications for enterprise customers. The SLU system extracts both intents and named entities from users' utterances. For intent determination, we use statistical classifiers trained from labeled data; for named entity extraction, we use rule-based fixed grammars. The focus of our work is to exploit data and machine learning techniques to create scalable SLU systems that can be deployed quickly for new domains with minimal human intervention. These objectives are achieved by (1) using a predicate-argument representation of the semantic content of an utterance; (2) extending statistical classifiers to seamlessly integrate hand-crafted classification rules with rules learned from data; and (3) developing an active learning framework that minimizes the human labeling effort needed to build classifier models quickly and adapt them to changes. We present an evaluation of this system using two deployed applications of VoiceTone®.
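The combination of classifier-based intent determination and active learning described above can be illustrated with a minimal sketch. The utterances, intent labels, and the naive Bayes classifier below are hypothetical stand-ins (the deployed VoiceTone® applications and their models are not public); the selection strategy shown is simple least-confidence sampling, one common way to pick the utterances a human should label next.

```python
import math
from collections import Counter, defaultdict

# Hypothetical toy data; real VoiceTone(R) intents and utterances are not public.
LABELED = [
    ("i want to pay my bill", "Pay_Bill"),
    ("check my account balance", "Check_Balance"),
    ("pay the bill please", "Pay_Bill"),
    ("what is my balance", "Check_Balance"),
]
UNLABELED = [
    "how do i pay a bill online",
    "talk to an agent now",
    "balance on my account",
]

class NaiveBayesIntent:
    """Minimal multinomial naive Bayes over bag-of-words features,
    with add-one smoothing, standing in for the statistical classifier."""

    def fit(self, pairs):
        self.word_counts = defaultdict(Counter)  # per-label word counts
        self.label_counts = Counter()            # label frequencies (priors)
        self.vocab = set()
        for text, label in pairs:
            self.label_counts[label] += 1
            for w in text.split():
                self.word_counts[label][w] += 1
                self.vocab.add(w)
        return self

    def posteriors(self, text):
        """Return a normalized posterior distribution over intent labels."""
        total = sum(self.label_counts.values())
        log_scores = {}
        for label, n in self.label_counts.items():
            logp = math.log(n / total)  # log prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in text.split():
                logp += math.log((self.word_counts[label][w] + 1) / denom)
            log_scores[label] = logp
        # Convert log scores to probabilities (stable softmax).
        m = max(log_scores.values())
        exp = {l: math.exp(s - m) for l, s in log_scores.items()}
        z = sum(exp.values())
        return {l: v / z for l, v in exp.items()}

def least_confident_first(model, utterances):
    """Active learning selection: order unlabeled utterances so the ones the
    classifier is least sure about come first, to be labeled by a human."""
    return sorted(utterances, key=lambda u: max(model.posteriors(u).values()))

model = NaiveBayesIntent().fit(LABELED)
labeling_queue = least_confident_first(model, UNLABELED)
```

After each batch of newly labeled utterances is folded back into `LABELED` and the model is refit, the queue is recomputed, so annotation effort concentrates on the utterances most likely to change the model.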