The present paper is concerned with statistical parsing of constituent structures in German. It presents four experiments aimed at improving parsing performance on coordinate structures: 1) reranking the n-best parses of a PCFG parser, 2) enriching the input to a PCFG parser with gold scopes for all conjuncts, 3) reranking the parser output for all conjunct scopes that are permissible with regard to clause structure, and 4) reranking a combination of the parses from experiments 1 and 3. The experiments show that n-best parsing combined with reranking improves results by a large margin. Providing the parser with different scope possibilities and reranking the resulting parses raises the F-score from 69.76 for the baseline to 74.69. While this F-score is similar to that of the first experiment (n-best parsing and reranking), the first experiment yields higher recall (75.48% vs. 73.69%) and the third higher precision (75.43% vs. 73.26%). Combining the two methods gives the best result, an F-score of 76.69.
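The reranking step common to these experiments can be sketched as follows. This is a minimal illustration, not the authors' actual system: the feature names (`log_prob`, `coord_parallel`), the weights, and the toy parses are all hypothetical, standing in for whatever learned feature weights a discriminative reranker would apply to each candidate in the n-best list.

```python
def rerank(nbest, weights):
    """Pick the highest-scoring parse from an n-best list.

    nbest:   list of (parse, features) pairs, where features maps
             feature names to values (e.g. counts or log-probabilities).
    weights: feature name -> learned weight (hypothetical values here).
    """
    def score(features):
        # Linear model: weighted sum of feature values.
        return sum(weights.get(name, 0.0) * value
                   for name, value in features.items())
    return max(nbest, key=lambda pair: score(pair[1]))[0]


# Toy example: the base PCFG prefers parse A (higher log-probability),
# but a coordination feature pushes the reranker toward parse B.
nbest = [
    ("parse_A", {"log_prob": -10.0, "coord_parallel": 0}),
    ("parse_B", {"log_prob": -10.5, "coord_parallel": 1}),
]
weights = {"log_prob": 1.0, "coord_parallel": 2.0}
best = rerank(nbest, weights)  # parse_B scores -8.5 vs. -10.0 for parse_A
```

Experiment 3 would populate `nbest` with one parse per permissible conjunct scope rather than with the parser's own n-best list; the scoring step stays the same.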