We propose a general method for reranker construction that targets choosing the candidate with the least expected loss rather than the most probable candidate. We consider several approaches to approximating the expected loss: estimating it from the probabilistic model used to generate the candidates, estimating it from a discriminative model trained to rerank the candidates, and learning to approximate the expected loss directly. The proposed methods are applied to the parse reranking task with various baseline models, achieving significant improvements over both the probabilistic models and the discriminative rerankers. When a neural network parser is used as the probabilistic model and the Voted Perceptron algorithm with data-defined kernels as the learning algorithm, the loss minimization model achieves a labeled constituent F1 score of 90.0% on the standard WSJ parsing task.
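The core idea of choosing the minimum-expected-loss candidate (rather than the most probable one) can be sketched as a minimum Bayes-risk reranker over an n-best list. The snippet below is an illustrative sketch, not the paper's exact method: it approximates the expected loss of each candidate by averaging its loss against the other candidates, weighted by their (normalized) model probabilities, using a toy bracket-overlap F1 loss in place of the full labeled-constituent metric. All function names here are hypothetical.

```python
# Minimum Bayes-risk reranking sketch (illustrative, not the paper's exact method).
# Instead of returning the most probable candidate, return the candidate whose
# expected loss under the model's posterior over candidates is lowest.

def mbr_rerank(candidates, probs, loss):
    """candidates: list of hypotheses; probs: unnormalized model scores;
    loss(a, b): task loss incurred by choosing a when b is correct."""
    z = sum(probs)
    posteriors = [p / z for p in probs]
    best, best_risk = None, float("inf")
    for cand in candidates:
        # Expected loss of cand, approximated over the n-best list itself.
        risk = sum(q * loss(cand, other)
                   for other, q in zip(candidates, posteriors))
        if risk < best_risk:
            best, best_risk = cand, risk
    return best

def bracket_loss(a, b):
    """Toy parse loss: 1 - F1 over sets of (labeled) bracket spans."""
    if not a and not b:
        return 0.0
    overlap = len(a & b)
    prec = overlap / len(a) if a else 0.0
    rec = overlap / len(b) if b else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return 1.0 - f1
```

With candidates represented as frozensets of bracket spans, the MBR choice can differ from the most probable candidate: a hypothesis that shares structure with many probable alternatives can have lower expected loss than the single top-scoring parse.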