Spoken Language Understanding aims to map a spoken natural language sentence into a semantic representation. In the last decade two main approaches have been pursued: generative and discriminative models. The former is more robust to overfitting, whereas the latter is more robust to many irrelevant features. Additionally, the two approaches encode prior knowledge in very different ways, and their relative performance varies with the task. In this paper we describe a machine learning framework that uses both: a generative model produces a list of ranked hypotheses, and a discriminative model based on structure kernels and Support Vector Machines re-ranks that list. We tested our approach on the MEDIA corpus (human-machine dialogs) and on a new corpus (human-machine and human-human dialogs) produced in the European LUNA project. The results show a large improvement over the state of the art in concept segmentation and labeling.
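The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `discriminative_score` callable stands in for an SVM decision function over structure-kernel features, the interpolation weight `alpha` and the toy scorer are assumptions, and the concept-annotation format in the example is only indicative.

```python
# Hypothetical sketch of n-best re-ranking: a generative model supplies
# ranked hypotheses with log-probabilities, and a discriminative scorer
# (in the paper, an SVM with structure kernels) re-ranks them.

def rerank(hypotheses, discriminative_score, alpha=0.5):
    """Re-rank an n-best list from a generative model.

    hypotheses: list of (hypothesis, generative_log_prob) pairs.
    discriminative_score: callable mapping a hypothesis to a real score.
    alpha: interpolation weight between the two scores (an assumption;
    the paper's combination scheme may differ).
    """
    scored = [
        (hyp, alpha * gen + (1 - alpha) * discriminative_score(hyp))
        for hyp, gen in hypotheses
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored

# Toy usage: concept-labeled hypotheses in a MEDIA-style word{concept}
# notation (illustrative only). The stand-in scorer simply penalizes
# uninformative "null" concepts.
nbest = [("null{I} command{want} dest{Paris}", -2.1),
         ("null{I} null{want} dest{Paris}", -1.9)]
ranked = rerank(nbest, lambda h: -h.count("null{"), alpha=0.5)
# The re-ranker flips the generative order, promoting the hypothesis
# with more informative concept labels.
```

The point of the sketch is the division of labor: the generative model keeps the search tractable by proposing a short candidate list, while the discriminative model can exploit many (possibly irrelevant) structural features without having to enumerate the full hypothesis space.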