This paper presents a mixed deterministic model for coreference resolution in the CoNLL-2012 shared task. We separate the two main stages of our model, mention detection and coreference resolution, into several sub-tasks, each solved either by machine learning methods or by deterministic rules built on multiple filters, such as lexical, syntactic, semantic, gender, and number information. We participated in the closed track for English and Chinese, and also submitted an open-track result for Chinese using external tools to generate the required features. Our system achieves an average F1 score of 58.68 on the English closed task, and 60.69 and 61.02 on the Chinese closed and open tasks, respectively.
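To illustrate the deterministic-rule side of such a system, the following is a minimal sketch of one sieve-style pass: clustering mentions by exact string match, filtered by gender and number agreement. The function names and mention attributes are hypothetical and for illustration only; they are not taken from the paper's actual implementation.

```python
def compatible(a, b):
    """Gender/number filter: attributes agree if equal or unknown on either side."""
    for attr in ("gender", "number"):
        if a[attr] != "unknown" and b[attr] != "unknown" and a[attr] != b[attr]:
            return False
    return True

def sieve_pass(mentions):
    """Cluster mentions whose surface strings match exactly (case-insensitive)
    and whose gender/number attributes are compatible."""
    clusters = []  # each cluster is a list of mention dicts
    for m in mentions:
        placed = False
        for cluster in clusters:
            rep = cluster[0]  # compare against the cluster's first mention
            if m["text"].lower() == rep["text"].lower() and compatible(m, rep):
                cluster.append(m)
                placed = True
                break
        if not placed:
            clusters.append([m])
    return clusters

mentions = [
    {"text": "Obama", "gender": "male", "number": "sg"},
    {"text": "obama", "gender": "unknown", "number": "sg"},
    {"text": "she", "gender": "female", "number": "sg"},
]
clusters = sieve_pass(mentions)
# "Obama"/"obama" merge into one cluster; "she" stays separate.
```

In a multi-pass architecture, several such passes run in order of decreasing precision, each operating on the clusters produced by the previous pass.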