There have been considerable attempts to incorporate semantic knowledge into coreference resolution systems: knowledge sources such as WordNet and Wikipedia have been used to boost performance. In this paper, we propose new ways to extract WordNet features. These features, combined with others such as a named entity feature, can be used to build an accurate semantic class (SC) classifier. In addition, we analyze the SC classification errors and propose relaxed SC agreement features. On the ACE2 coreference evaluation, the accurate SC classifier and the relaxed SC agreement features improve our baseline system by 10.4% in MUC score and 9.7% in anaphor accuracy, respectively.
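As an illustration, the relaxed-agreement idea can be sketched as follows. The class inventory, the compatibility pairs, and all function names here are assumptions chosen for the example, not the paper's actual SC inventory or feature set:

```python
# Hypothetical sketch of strict vs. relaxed semantic-class (SC) agreement
# features for a mention pair. The labels and compatibility pairs below are
# illustrative assumptions, not the inventory used in the paper.

PERSON, ORG, GPE, FACILITY, OBJECT, UNKNOWN = (
    "PERSON", "ORG", "GPE", "FACILITY", "OBJECT", "UNKNOWN"
)

# Class pairs the SC classifier is assumed to confuse often; a relaxed check
# treats these as compatible instead of demanding exact agreement.
COMPATIBLE = {
    frozenset({ORG, GPE}),       # e.g. "Washington" as government vs. place
    frozenset({GPE, FACILITY}),
}

def strict_sc_agree(sc1: str, sc2: str) -> bool:
    """Strict agreement: labels must match exactly."""
    return sc1 == sc2

def relaxed_sc_agree(sc1: str, sc2: str) -> bool:
    """Relaxed agreement: exact match, an UNKNOWN wildcard,
    or a pair listed as mutually compatible."""
    if UNKNOWN in (sc1, sc2):
        return True
    return sc1 == sc2 or frozenset({sc1, sc2}) in COMPATIBLE

def sc_features(sc1: str, sc2: str) -> dict:
    """Both signals would go into the coreference feature vector."""
    return {
        "sc_strict_agree": strict_sc_agree(sc1, sc2),
        "sc_relaxed_agree": relaxed_sc_agree(sc1, sc2),
    }

print(sc_features(ORG, GPE))
# {'sc_strict_agree': False, 'sc_relaxed_agree': True}
```

The design intent is that a pair like "the company" / "Washington" is not rejected outright by a misclassified or borderline SC label: the strict feature still records the mismatch, while the relaxed feature lets the learner recover compatible pairs.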