A serious bottleneck in comparative parser evaluation is that different parsers subscribe to different formal frameworks and theoretical assumptions. Converting outputs from one framework to another is suboptimal, as it easily introduces noise into the process. Here we present a principled protocol for evaluating parsing results across frameworks based on function trees, tree generalization, and edit distance metrics. This extends a previously proposed framework for cross-theory evaluation and allows us to compare a wider class of parsers. We demonstrate the usefulness and language independence of our procedure by evaluating constituency and dependency parsers on English and Swedish.
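The edit-distance component of such a protocol can be illustrated with the classic ordered tree edit distance of Zhang and Shasha: the minimum number of node relabelings, deletions, and insertions turning one tree into another. The sketch below is only illustrative; the `Tree` class, unit costs, and label comparison are assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the Zhang-Shasha ordered tree edit distance,
# with unit cost for relabel, delete, and insert (an assumption).

class Tree:
    def __init__(self, label, *children):
        self.label = label
        self.children = list(children)

def _postorder(root):
    out = []
    def walk(n):
        for c in n.children:
            walk(c)
        out.append(n)
    walk(root)
    return out

def _leftmost(nodes):
    # l[k]: postorder index (1-based) of the leftmost leaf under node k
    index = {id(n): k for k, n in enumerate(nodes, 1)}
    l = [0] * (len(nodes) + 1)
    for k, n in enumerate(nodes, 1):
        m = n
        while m.children:
            m = m.children[0]
        l[k] = index[id(m)]
    return l

def _keyroots(l):
    # keyroots: highest-numbered node for each distinct leftmost leaf
    seen, kr = set(), []
    for i in range(len(l) - 1, 0, -1):
        if l[i] not in seen:
            kr.append(i)
            seen.add(l[i])
    return sorted(kr)

def _treedist(i, j, a, b, la, lb, td):
    # forest-distance DP for the subtrees rooted at keyroots i and j
    m, n = i - la[i] + 2, j - lb[j] + 2
    ioff, joff = la[i] - 1, lb[j] - 1
    fd = [[0] * n for _ in range(m)]
    for x in range(1, m):
        fd[x][0] = fd[x - 1][0] + 1          # delete
    for y in range(1, n):
        fd[0][y] = fd[0][y - 1] + 1          # insert
    for x in range(1, m):
        for y in range(1, n):
            if la[x + ioff] == la[i] and lb[y + joff] == lb[j]:
                cost = 0 if a[x + ioff - 1].label == b[y + joff - 1].label else 1
                fd[x][y] = min(fd[x - 1][y] + 1,
                               fd[x][y - 1] + 1,
                               fd[x - 1][y - 1] + cost)
                td[x + ioff][y + joff] = fd[x][y]
            else:
                p = la[x + ioff] - 1 - ioff
                q = lb[y + joff] - 1 - joff
                fd[x][y] = min(fd[x - 1][y] + 1,
                               fd[x][y - 1] + 1,
                               fd[p][q] + td[x + ioff][y + joff])

def tree_edit_distance(t1, t2):
    a, b = _postorder(t1), _postorder(t2)
    la, lb = _leftmost(a), _leftmost(b)
    td = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in _keyroots(la):
        for j in _keyroots(lb):
            _treedist(i, j, a, b, la, lb, td)
    return td[len(a)][len(b)]

# e.g. one deletion turns a(b, c) into a(b):
# tree_edit_distance(Tree("a", Tree("b"), Tree("c")), Tree("a", Tree("b"))) -> 1
```

In the paper's setting, the compared trees would be the generalized function trees derived from each parser's output, so that the distance quantifies disagreement without converting between frameworks.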