Learning to Parse Natural Language with Maximum Entropy Models
Machine Learning - Special issue on natural language learning
Head-driven statistical models for natural language parsing
Building a large annotated corpus of English: the Penn Treebank
Computational Linguistics - Special issue on using large corpora: II
Three generative, lexicalised models for statistical parsing
ACL '98 Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and Eighth Conference of the European Chapter of the Association for Computational Linguistics
More accurate tests for the statistical significance of result differences
COLING '00 Proceedings of the 18th Conference on Computational Linguistics - Volume 2
Support vector machine learning for interdependent and structured output spaces
ICML '04 Proceedings of the twenty-first international conference on Machine learning
ACL '02 Proceedings of the 40th Annual Meeting on Association for Computational Linguistics
Inducing history representations for broad coverage statistical parsing
NAACL '03 Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1
Supervised and unsupervised PCFG adaptation to novel domains
NAACL '03 Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1
An SVM based voting algorithm with application to parse reranking
CONLL '03 Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003 - Volume 4
Probabilistic CFG with latent annotations
ACL '05 Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics
Data-defined kernels for parse reranking derived from probabilistic models
ACL '05 Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics
Hidden-variable models for discriminative reranking
HLT '05 Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing
Efficient linearization of tree kernel functions
CoNLL '09 Proceedings of the Thirteenth Conference on Computational Natural Language Learning
IWPT '07 Proceedings of the 10th International Conference on Parsing Technologies
On reverse feature engineering of syntactic tree kernels
CoNLL '10 Proceedings of the Fourteenth Conference on Computational Natural Language Learning
Parsing natural language queries for life science knowledge
BioNLP '11 Proceedings of BioNLP 2011 Workshop
Using syntactic and semantic structural kernels for classifying definition questions in Jeopardy!
EMNLP '11 Proceedings of the Conference on Empirical Methods in Natural Language Processing
Structured lexical similarity via convolution kernels on dependency trees
EMNLP '11 Proceedings of the Conference on Empirical Methods in Natural Language Processing
Verb classification using distributional similarity in syntactic and semantic structures
ACL '12 Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1
Modeling topic dependencies in hierarchical text categorization
ACL '12 Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1
Previous results have shown disappointing performance when a parser trained on one domain is ported to another domain where only a small amount of data is available. We propose using data-defined kernels to exploit statistics from a source domain while still specializing a parser to a target domain. A probabilistic model trained on the source domain (and possibly also on the target domain) is used to define a kernel, which is then used in a large-margin classifier trained only on the target domain. With an SVM classifier and a neural-network probabilistic model, this method improves on the performance of the probabilistic model alone.
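The core idea can be sketched in a few lines: fit a probabilistic model on source-domain data, map each example to the gradient of its log-likelihood under that model (a Fisher score), and let the inner product of those gradients define the kernel for a large-margin learner trained only on target-domain data. The sketch below is purely illustrative and not the paper's method: a diagonal Gaussian stands in for the neural-network probabilistic model, and a kernel perceptron stands in for the SVM; all function names are made up for this example.

```python
# Toy "probabilistic model" fit on source-domain data: a diagonal Gaussian.
# (Stand-in for the paper's neural-network parsing model.)
def fit_gaussian(data):
    n, d = len(data), len(data[0])
    mu = [sum(x[j] for x in data) / n for j in range(d)]
    var = [sum((x[j] - mu[j]) ** 2 for x in data) / n + 1e-6 for j in range(d)]
    return mu, var

def fisher_score(x, mu, var):
    # Gradient of log p(x) with respect to the mean parameters:
    # d/d_mu_j log N(x; mu, var) = (x_j - mu_j) / var_j.
    return [(x[j] - mu[j]) / var[j] for j in range(len(x))]

def kernel(x1, x2, mu, var):
    # Data-defined (Fisher) kernel: inner product of Fisher scores.
    a, b = fisher_score(x1, mu, var), fisher_score(x2, mu, var)
    return sum(ai * bi for ai, bi in zip(a, b))

# Kernel perceptron trained only on (small) target-domain data.
# (Stand-in for the paper's SVM classifier.)
def train_perceptron(X, y, mu, var, epochs=20):
    alpha = [0.0] * len(X)
    for _ in range(epochs):
        for i, xi in enumerate(X):
            pred = sum(alpha[j] * y[j] * kernel(X[j], xi, mu, var)
                       for j in range(len(X)))
            if y[i] * pred <= 0:          # mistake-driven update
                alpha[i] += 1.0
    return alpha

def predict(x, X, y, alpha, mu, var):
    s = sum(alpha[j] * y[j] * kernel(X[j], x, mu, var) for j in range(len(X)))
    return 1 if s >= 0 else -1
```

Note the division of labor: the source domain influences learning only through the kernel (via the fitted model's parameters), while the discriminative learner itself sees nothing but target-domain examples, which is what lets a small target corpus suffice.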