The traditional single-candidate learning model for anaphora resolution considers the antecedent candidates of an anaphor in isolation, and thus cannot effectively capture the preference relationships between competing candidates during learning and resolution. To address this problem, we propose a twin-candidate model that recasts anaphora resolution as a preference classification problem. Specifically, the model learns a classifier that determines the preference between two competing candidates and, during resolution, chooses the antecedent of a given anaphor based on the resulting ranking of the candidates. We present the framework of the twin-candidate model in detail, and further explore how to deploy it in the more complicated coreference resolution task. We evaluate the twin-candidate model in different domains using the Automatic Content Extraction data sets. The experimental results indicate that the twin-candidate model is superior to the single-candidate model for pronominal anaphora resolution; for coreference resolution, it performs comparably or better.
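The core idea above can be illustrated with a minimal sketch. The features, the hand-written preference rule, and the toy candidates below are all hypothetical stand-ins (the actual model trains a classifier on annotated data), but the sketch shows the twin-candidate mechanics: every pair of competing candidates is compared, and the antecedent is chosen by round-robin ranking over the pairwise preferences.

```python
# Illustrative sketch of the twin-candidate model (NOT the authors' exact
# features or learner): a pairwise preference function over competing
# antecedent candidates, with the antecedent chosen by round-robin ranking.
from itertools import combinations

def pairwise_features(anaphor, c1, c2):
    # Hypothetical toy features: difference in distance to the anaphor and
    # a difference of simple agreement flags between the two candidates.
    return (
        c1["distance"] - c2["distance"],
        int(c1["agrees"]) - int(c2["agrees"]),
    )

def prefer_first(features):
    # Stand-in for a trained preference classifier: prefer the candidate
    # that agrees with the anaphor; break ties by proximity.
    dist_diff, agree_diff = features
    if agree_diff != 0:
        return agree_diff > 0
    return dist_diff < 0

def resolve(anaphor, candidates):
    # Round-robin ranking: each candidate scores a point for every pairwise
    # comparison it wins; the top scorer is selected as the antecedent.
    wins = {c["id"]: 0 for c in candidates}
    for c1, c2 in combinations(candidates, 2):
        if prefer_first(pairwise_features(anaphor, c1, c2)):
            wins[c1["id"]] += 1
        else:
            wins[c2["id"]] += 1
    return max(candidates, key=lambda c: wins[c["id"]])

candidates = [
    {"id": "the report",  "distance": 3, "agrees": False},
    {"id": "Mary",        "distance": 2, "agrees": True},
    {"id": "the teacher", "distance": 1, "agrees": False},
]
print(resolve("she", candidates)["id"])  # → Mary
```

Note the contrast with the single-candidate model, which would score each candidate independently: here every decision is explicitly relative to a competing candidate, which is what lets the learner capture preference relationships.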