Class-based n-gram models of natural language
Computational Linguistics
Probabilistic Networks and Expert Systems
Discriminative Reranking for Natural Language Parsing
ICML '00 Proceedings of the Seventeenth International Conference on Machine Learning
Understanding belief propagation and its generalizations
Exploring artificial intelligence in the new millennium
Head-driven statistical models for natural language parsing
Computational Linguistics
Building a large annotated corpus of English: the Penn Treebank
Computational Linguistics - Special issue on using large corpora: II
Distributional clustering of English words
ACL '93 Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics
Estimators for stochastic "Unification-Based" grammars
ACL '99 Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics
ACL '02 Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics
Discriminative training and maximum entropy models for statistical machine translation
ACL '02 Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics
Ranking algorithms for named-entity extraction: boosting and the voted perceptron
ACL '02 Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics
SPoT: a trainable sentence planner
NAACL '01 Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies
A statistical model for parsing and word-sense disambiguation
EMNLP '00 Proceedings of the 2000 Joint SIGDAT conference on Empirical methods in natural language processing and very large corpora: held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics - Volume 13
An SVM based voting algorithm with application to parse reranking
CoNLL '03 Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4
Supersense tagging of unknown nouns in WordNet
EMNLP '03 Proceedings of the 2003 conference on Empirical methods in natural language processing
Parsing the WSJ using CCG and log-linear models
ACL '04 Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics
Probabilistic CFG with latent annotations
ACL '05 Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics
Coarse-to-fine n-best parsing and MaxEnt discriminative reranking
ACL '05 Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics
An end-to-end discriminative approach to machine translation
ACL-44 Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics
Incremental Bayesian networks for structure prediction
Proceedings of the 24th International Conference on Machine Learning
Hidden Conditional Random Fields
IEEE Transactions on Pattern Analysis and Machine Intelligence
Boosting with incomplete information
Proceedings of the 25th International Conference on Machine Learning
Porting statistical parsers with data-defined kernels
CoNLL-X '06 Proceedings of the Tenth Conference on Computational Natural Language Learning
EACL '09 Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics
Loss minimization in parse reranking
EMNLP '06 Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing
Broad-coverage sense disambiguation and information extraction with a supersense sequence tagger
EMNLP '06 Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing
Sparse multi-scale grammars for discriminative latent variable parsing
EMNLP '08 Proceedings of the Conference on Empirical Methods in Natural Language Processing
Hidden dynamic probabilistic models for labeling sequence data
AAAI'08 Proceedings of the 23rd National Conference on Artificial Intelligence - Volume 2
Dependency parsing with second-order feature maps and annotated semantic information
IWPT '07 Proceedings of the 10th International Conference on Parsing Technologies
A latent variable model for generative dependency parsing
IWPT '07 Proceedings of the 10th International Conference on Parsing Technologies
Gesture salience as a hidden variable for coreference resolution and keyframe extraction
Journal of Artificial Intelligence Research
Feature-rich translation by quasi-synchronous lattice parsing
EMNLP '09 Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing - Volume 1
Hard constraints for grammatical function labelling
ACL '10 Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
Semi-supervised abstraction-augmented string kernel for multi-level bio-relation extraction
ECML PKDD'10 Proceedings of the 2010 European conference on Machine learning and knowledge discovery in databases: Part II
Semantic domains and supersense tagging for domain-specific ontology learning
Large Scale Semantic Access to Content (Text, Image, Video, and Sound)
Incremental Sigmoid Belief Networks for Grammar Learning
The Journal of Machine Learning Research
We describe a new method for representing NLP structures within reranking approaches. We use a conditional log-linear model whose hidden variables represent the assignment of lexical items to word clusters or word senses; the model learns to make these assignments automatically under a discriminative training criterion. Training and decoding with the model require summing over an exponential number of hidden-variable assignments, but the required summations can be computed efficiently and exactly using dynamic programming. As a case study, we apply the model to parse reranking. The model gives an F-measure improvement of ≈ 1.25% over the base parser, and ≈ 0.25% over the Collins (2000) reranker. Although our experiments focus on parsing, the techniques described generalize naturally to NLP structures other than parse trees.
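To illustrate why the exponential sum over hidden-variable assignments can be computed exactly, here is a minimal sketch (not the paper's implementation; all feature names and weights are invented for illustration). It assumes the simplest factorization: each word independently takes a hidden cluster, and features are local to a single word-cluster pair. Under that assumption the sum over all joint assignments collapses into a product of per-word sums, which the brute-force computation confirms:

```python
import math
from itertools import product

# Illustrative sketch: a conditional log-linear model with hidden
# per-word cluster assignments. When features decompose per word,
# the sum over exponentially many joint assignments factors into
# a product of per-word sums. Weights/features below are made up.

def local_score(weights, word, cluster):
    """Dot product of weights with hypothetical word-cluster features."""
    return weights.get((word, cluster), 0.0)

def log_sum_exhaustive(weights, words, clusters):
    """Brute force: enumerate all |clusters|^|words| joint assignments."""
    total = 0.0
    for assignment in product(clusters, repeat=len(words)):
        total += math.exp(sum(local_score(weights, w, c)
                              for w, c in zip(words, assignment)))
    return math.log(total)

def log_sum_factored(weights, words, clusters):
    """Efficient: exploit the per-word factorization (product of sums)."""
    return sum(math.log(sum(math.exp(local_score(weights, w, c))
                            for c in clusters))
               for w in words)

weights = {("bank", 0): 1.2, ("bank", 1): -0.3, ("river", 1): 0.8}
words = ["the", "river", "bank"]
clusters = [0, 1]
assert abs(log_sum_exhaustive(weights, words, clusters)
           - log_sum_factored(weights, words, clusters)) < 1e-9
```

The model in the paper couples hidden variables to tree structure, so the actual dynamic program runs over the parse rather than a flat word sequence, but the principle is the same: local feature decomposition turns an exponential sum into a polynomial-time computation.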