A maximum entropy approach to natural language processing. Computational Linguistics.
Stochastic attribute-value grammars. Computational Linguistics.
Exploiting auxiliary distributions in stochastic unification-based grammars. Proceedings of NAACL 2000.
Estimation of stochastic attribute-value grammars using an informative sample. Proceedings of COLING 2000, Volume 1.
Estimators for stochastic "unification-based" grammars. Proceedings of ACL 1999.
Effective self-training for parsing. Proceedings of HLT-NAACL 2006.
A look at parsing and its applications. Proceedings of AAAI 2006, Volume 2.
Using self-trained bilexical preferences to improve disambiguation accuracy. Proceedings of IWPT 2007.
Adapting a probabilistic disambiguation model of an HPSG parser to a new domain. Proceedings of IJCNLP 2005.
Structural correspondence learning for parse disambiguation. Proceedings of the EACL 2009 Student Research Workshop.
Grammar-driven versus data-driven: which parsing system is more affected by domain shifts? Proceedings of the NLPLING 2010 Workshop.
Cross-Domain Effects on Parse Selection for Precision Grammars. Research on Language and Computation.
We investigate auxiliary distributions (Johnson and Riezler, 2000) for domain adaptation of a supervised parsing system for Dutch. To overcome the limited amount of target-domain training data, we exploit the original, larger out-of-domain model as an auxiliary distribution. However, our empirical results show that the auxiliary distribution does not help: even when very little target-domain training data is available, incorporating the out-of-domain model does not improve parsing accuracy on the target domain; instead, better results are achieved either without adaptation or by simple model combination.
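The two strategies contrasted in the abstract can be sketched as follows. In the auxiliary-distribution setup of Johnson and Riezler (2000), the log-probability assigned by the out-of-domain model is added to the log-linear parse-selection model as one extra real-valued feature whose weight is estimated on the target-domain data; simple model combination instead interpolates the two models' probabilities directly. This is a minimal illustrative sketch under our own naming, not the paper's implementation:

```python
import math

def loglinear_score(features, weights, aux_logprob, aux_weight):
    """Log-linear parse score with an auxiliary distribution:
    the out-of-domain model's log-probability for this parse
    (aux_logprob) enters as one additional feature, and its weight
    (aux_weight) is trained on the small target-domain treebank
    alongside the regular feature weights."""
    base = sum(weights.get(f, 0.0) * v for f, v in features.items())
    return base + aux_weight * aux_logprob

def interpolated_logprob(p_in, p_out, lam):
    """Simple model combination: linear interpolation of the
    in-domain and out-of-domain probabilities for a parse,
    with mixing weight lam in [0, 1]."""
    return math.log(lam * p_in + (1.0 - lam) * p_out)
```

With aux_weight set to zero the auxiliary model is ignored and the score reduces to the plain target-domain model, which is one of the baselines the abstract reports as hard to beat.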