A maximum entropy approach to natural language processing
Computational Linguistics
New figures of merit for best-first probabilistic chart parsing
Computational Linguistics
A maximum-entropy-inspired parser
NAACL 2000 Proceedings of the 1st North American chapter of the Association for Computational Linguistics conference
CommandTalk: a spoken-language interface for battlefield simulations
ANLC '97 Proceedings of the fifth conference on Applied natural language processing
Automatic compensation for parser figure-of-merit flaws
ACL '99 Proceedings of the 37th annual meeting of the Association for Computational Linguistics on Computational Linguistics
Measuring efficiency in high-accuracy, broad-coverage statistical parsing
Proceedings of the COLING-2000 Workshop on Efficiency in Large-Scale Parsing Systems
Enhancing Best Analysis Selection and Parser Comparison
TSD '02 Proceedings of the 5th International Conference on Text, Speech and Dialogue
Charniak and his colleagues have proposed implementation-independent metrics as a way of comparing the efficiency of parsing algorithms implemented on different platforms, in different languages, and with different degrees of "incidental optimization". We argue that there are easily imaginable circumstances in which their proposed metrics would mask significant differences in efficiency; we point out that their data do not, in fact, support the suitability of such metrics for comparing the efficiency of different algorithms; and we analyze data for a similar metric to try to quantify the degree of variation one might expect between such metrics and actual parse time. Finally, we propose a methodology for making cross-platform comparisons through the use of reference parser implementations.
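The abstract's central objection — that an implementation-independent metric can mask real efficiency differences — can be illustrated with a small sketch. The parser stubs and the edge-count metric below are hypothetical (not from the paper or from Charniak et al.'s system): two "parsers" perform the same number of chart-edge-creation events, so an operation-count metric rates them identical, while their constant-factor per-edge costs, and hence their actual parse times, differ substantially.

```python
import time

def parse_cheap_edges(n):
    """Hypothetical parser A: n edge-creation events, trivial per-edge cost."""
    edges = 0
    for _ in range(n):
        edges += 1  # create one chart edge (stand-in for real work)
    return edges

def parse_costly_edges(n):
    """Hypothetical parser B: the same n edge events, but with heavy
    constant-factor work per edge (e.g. expensive feature computation)."""
    edges = 0
    for _ in range(n):
        sum(i * i for i in range(200))  # extra per-edge overhead
        edges += 1
    return edges

def measure(parser, n):
    """Return (edge count, wall-clock seconds) for one parser run."""
    start = time.perf_counter()
    edges = parser(n)
    elapsed = time.perf_counter() - start
    return edges, elapsed

edges_a, time_a = measure(parse_cheap_edges, 20000)
edges_b, time_b = measure(parse_costly_edges, 20000)

# The implementation-independent metric (edges created) judges the two
# parsers equally efficient, yet their actual parse times diverge.
print(edges_a == edges_b)  # same metric value
print(time_b > time_a)     # very different real cost
```

This is exactly the kind of divergence the paper proposes to quantify by comparing an operation-count metric against measured parse time, and to control for across platforms via reference parser implementations.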