Sequential Decision Strategies for Machine Interpretation of Speech
IEEE Transactions on Audio, Speech, and Language Processing
In this paper, we reduce the rescoring problem in a spoken dialogue understanding task to a classification problem by using the semantic error rate as the reranking target value. The classifiers considered here are trained with linguistically motivated features. We present a comparative experimental evaluation of four supervised machine learning methods: Support Vector Machines, Weighted K-Nearest Neighbors, Naïve Bayes, and Conditional Inference Trees. We provide a quantitative evaluation of learning and generalization during supervised classifier training, using cross-validation and ROC analysis. The reranking is derived from the posterior estimates produced by the classification algorithms.
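The reranking step described above can be sketched as follows: a trained classifier assigns each candidate hypothesis a posterior probability of being semantically correct, and the n-best list is reordered by that score. This is a minimal illustrative sketch, not the paper's implementation; `toy_posterior`, its weights, and the feature tuples are hypothetical stand-ins for any of the four classifiers mentioned in the abstract.

```python
import math

def rerank(hypotheses, posterior):
    """Sort n-best hypotheses by descending classifier posterior."""
    return sorted(hypotheses, key=lambda h: posterior(h["features"]), reverse=True)

# Toy posterior: a linear score squashed to (0, 1) with a sigmoid.
# The weights are illustrative only; a real system would use a trained
# SVM, weighted k-NN, Naive Bayes, or conditional inference tree model.
def toy_posterior(features, weights=(1.5, -2.0)):
    z = sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical 2-feature n-best list from a recognizer/understanding module.
nbest = [
    {"id": "hyp1", "features": (0.2, 0.9)},  # posterior ~0.18
    {"id": "hyp2", "features": (0.8, 0.1)},  # posterior ~0.73
]
ranked = rerank(nbest, toy_posterior)
print([h["id"] for h in ranked])  # hyp2 ranks first (higher posterior)
```

Any model exposing a posterior-like score can be plugged in for `toy_posterior` without changing the reranking logic.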