Benchmarking is an established way of evaluating automatic systems that tackle the same task. This paper presents the results of benchmarking the Anaphora Resolution Systems (ARS) developed at MIMOS against several similar systems, and the lessons learned from the exercise. The dataset used for this benchmarking effort consists of texts containing Pronominal Anaphora, Definite Noun Phrase Anaphora, Pleonastic Anaphora, and Reader/Writer Anaphora. The authors use Recall, Precision, and F-measure (F1 score) to report the results of the evaluation.
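For readers unfamiliar with the metrics, a minimal sketch of how Recall, Precision, and F1 are typically computed for anaphora resolution follows. The link representation and function names here are illustrative assumptions, not the format used in the paper: each resolved anaphor is treated as an (anaphor, antecedent) pair scored against a gold standard.

```python
def evaluate(gold_links, predicted_links):
    """Score predicted anaphor-antecedent links against a gold standard.

    gold_links and predicted_links are sets of (anaphor, antecedent)
    pairs; this pairwise-link formulation is an illustrative assumption.
    """
    tp = len(gold_links & predicted_links)  # correctly resolved links
    precision = tp / len(predicted_links) if predicted_links else 0.0
    recall = tp / len(gold_links) if gold_links else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, a system that proposes two links, one of which matches a gold standard of three links, scores Precision 0.5, Recall 1/3, and F1 0.4.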