There were three GREC Tasks at Generation Challenges 2010: GREC-NER required participating systems to identify all references to people in texts; in GREC-NEG, systems selected coreference chains for all people entities in texts; and GREC-Full combined the NER and NEG tasks, i.e. systems identified and, where appropriate, replaced references to people in texts. Five teams submitted 10 systems in total, and we additionally created baseline systems for each task. Systems were evaluated automatically using a range of intrinsic metrics, and were also assessed by human judges using preference-strength judgements. This report presents the evaluation results, along with descriptions of the three GREC tasks, the evaluation methods, and the participating systems.
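To make the flavour of such automatic evaluation concrete, here is a minimal sketch of one plausible intrinsic metric for a GREC-NER-style task: exact-span precision, recall, and F1 over identified person references. This is an illustrative assumption, not the official GREC scorer; the function name and the span representation (token-offset pairs) are hypothetical.

```python
# Hypothetical sketch: exact-match span scoring for a NER-style task.
# Spans are (start, end) token-offset pairs; this is NOT the official
# GREC evaluation code, just one common intrinsic metric.
def span_prf(gold_spans, pred_spans):
    """Return (precision, recall, F1) over exactly matching spans."""
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)                      # spans found in both sets
    p = tp / len(pred) if pred else 0.0        # precision over predictions
    r = tp / len(gold) if gold else 0.0        # recall over gold spans
    f = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f

# Example: the middle predicted span (5, 7) misses the gold span (5, 6).
gold = [(0, 2), (5, 6), (10, 12)]
pred = [(0, 2), (5, 7), (10, 12)]
print(span_prf(gold, pred))
```

In practice a shared-task scorer would typically also report partial-match variants and per-document breakdowns, but the exact-match version above captures the basic mechanics.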