Procedure for quantitatively comparing the syntactic coverage of English grammars. HLT '91: Proceedings of the Workshop on Speech and Natural Language.
Evaluation of broad-coverage natural-language parsers. Survey of the State of the Art in Human Language Technology.
The TREC question answering track. Natural Language Engineering.
A non-projective dependency parser. ANLC '97: Proceedings of the Fifth Conference on Applied Natural Language Processing.
ACL '85: Proceedings of the 23rd Annual Meeting of the Association for Computational Linguistics.
A new statistical parser based on bigram lexical dependencies. ACL '96: Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics.
A dependency-based method for evaluating broad-coverage parsers. IJCAI '95: Proceedings of the 14th International Joint Conference on Artificial Intelligence, Volume 2.
Automatic thesaurus construction. ACSC '08: Proceedings of the Thirty-First Australasian Conference on Computer Science, Volume 74.
The Stanford typed dependencies representation. CrossParser '08: Coling 2008 Workshop on Cross-Framework and Cross-Domain Parser Evaluation.
Benchmarking for syntax-based sentential inference. COLING '10: Proceedings of the 23rd International Conference on Computational Linguistics: Posters.
Recognizing textual entailment via atomic propositions. MLCW '05: Proceedings of the First International Conference on Machine Learning Challenges: Evaluating Predictive Uncertainty, Visual Object Classification, and Recognizing Textual Entailment.
Relation mining over a corpus of scientific literature. AIME '05: Proceedings of the 10th Conference on Artificial Intelligence in Medicine.
A wide range of parser and grammar evaluation methods has been reported in the literature. In most cases, however, these evaluations assess parsers in isolation (intrinsic evaluation); only rarely has the effect of different parsers on real applications been measured (extrinsic evaluation). This paper compares two evaluations of the Link Grammar parser and the Conexor Functional Dependency Grammar parser. Although both systems are dependency-based, they return different types of dependencies, which rules out a direct comparison of their raw output. In the intrinsic evaluation, the parsers' accuracy is compared by converting their dependencies into grammatical relations and applying the parser-comparison methodology of Carroll et al. (1998). In the extrinsic evaluation, the parsers' impact on a practical application, answer extraction, is compared. The differences in the results are significant.
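The intrinsic evaluation described above rests on one key step: mapping each parser's native dependency labels onto a shared set of grammatical relations, so that both outputs can be scored against the same gold standard. A minimal sketch of that idea follows; the label mappings, relation names, and toy data are entirely hypothetical illustrations, not taken from the paper or from either parser's actual tag set.

```python
# Hypothetical sketch: normalise two parsers' dependency labels into a
# common grammatical-relation (GR) scheme, then score against gold GRs.
# All label names and example data are illustrative assumptions.

# Parser-specific label -> common GR label (assumed mappings).
LINK_GRAMMAR_MAP = {"S": "subj", "O": "obj", "M": "mod"}
FDG_MAP = {"subj": "subj", "obj": "obj", "attr": "mod"}

def to_grs(deps, label_map):
    """Convert (label, head, dependent) triples into common GR triples,
    dropping dependencies that have no mapping to the shared scheme."""
    return {(label_map[l], h, d) for (l, h, d) in deps if l in label_map}

def prf(gold, predicted):
    """Precision, recall and F1 over sets of GR triples."""
    tp = len(gold & predicted)
    p = tp / len(predicted) if predicted else 0.0
    r = tp / len(gold) if gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Toy example: one gold-standard sentence and one parser's output.
gold = {("subj", "saw", "John"), ("obj", "saw", "Mary")}
lg_output = [("S", "saw", "John"), ("M", "saw", "yesterday")]

p, r, f = prf(gold, to_grs(lg_output, LINK_GRAMMAR_MAP))
# p == 0.5, r == 0.5, f == 0.5 on this toy example
```

Because both parsers are projected into the same GR space, the same `prf` scorer applies to each, which is what makes the side-by-side intrinsic comparison possible despite the differing native dependency types.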