Anaphora resolution is one of the most important research topics in Natural Language Processing. In English, overt pronouns such as "she" and definite noun phrases such as "the company" are anaphors that refer to preceding entities (antecedents). In Japanese, anaphors are often omitted; these omissions are called zero pronouns. There are two major approaches to zero pronoun resolution: the heuristic approach and the machine learning approach. Because many factors must be taken into account, it is difficult to find a good combination of heuristic rules by hand. The machine learning approach is therefore attractive, but it requires a large amount of training data. In this paper, we propose a method that combines ranking rules with machine learning. The ranking rules are simple and effective, while machine learning can take more factors into account. Our experimental results show that this combination outperforms either approach used alone.
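The combination described above can be illustrated with a minimal sketch. This is not the paper's actual method; all function names, features, and the confidence margin are hypothetical. The idea: simple ranking rules order antecedent candidates by salience (e.g., topic- and subject-marked noun phrases outrank others, recent mentions outrank distant ones), and a machine-learned scorer is then allowed to overturn the rule-based ranking only when its confidence margin is large.

```python
def rule_rank(candidates):
    """Heuristic ranking rules (a sketch): topic-marked NPs ('wa') first,
    then subjects ('ga'), then others; within a role, prefer more recent
    mentions (smaller sentence distance)."""
    role_priority = {"topic": 0, "subject": 1, "object": 2, "other": 3}
    return sorted(
        candidates,
        key=lambda c: (role_priority[c["role"]], c["distance"]),
    )

def combined_resolve(candidates, ml_score, margin=0.2):
    """Combine ranking rules with a learned scorer: start from the
    rule-based top candidate, and let the classifier overturn it only
    when its score exceeds the current best by `margin` (an assumed
    confidence threshold, not from the paper)."""
    ranked = rule_rank(candidates)
    best = ranked[0]
    for cand in ranked[1:]:
        if ml_score(cand) - ml_score(best) > margin:
            best = cand
    return best
```

For example, with candidates `[{"np": "Taro", "role": "topic", "distance": 1}, {"np": "kaisha", "role": "object", "distance": 0}]`, the rules alone pick "Taro", but a learned scorer that strongly prefers "kaisha" can overturn that choice.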