ACL '93 Proceedings of the 31st annual meeting on Association for Computational Linguistics
A freely available wide coverage morphological analyzer for English. COLING '92 Proceedings of the 14th Conference on Computational Linguistics - Volume 3
Syntactic features and word similarity for supervised metonymy resolution. ACL '03 Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1
Example-based metonymy recognition for proper nouns. EACL '06 Proceedings of the Eleventh Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop
SemEval-2007 task 08: metonymy resolution at SemEval-2007. SemEval '07 Proceedings of the 4th International Workshop on Semantic Evaluations
Local and global context for supervised and unsupervised metonymy resolution. EMNLP-CoNLL '12 Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
For the metonymy resolution task at SemEval-2007, the use of a memory-based learner to train classifiers for the identification of metonymic location names is investigated. Metonymy is resolved at different levels of granularity, differentiating between literal and non-literal readings on the coarse level; literal, metonymic, and mixed readings on the medium level; and a number of classes covering regular cases of metonymy on the fine level. Different kinds of context are employed to obtain different features: 1) a sequence of n1 synset IDs representing subordination information for nouns and verbs, 2) n2 prepositions, articles, and modal and main verbs in the same sentence, and 3) properties of n3 tokens in a context window to the left and right of the location name. Different classifiers were trained on the Mascara data set to determine which values of the context sizes n1, n2, and n3 yield the highest accuracy (n1 = 4, n2 = 3, and n3 = 7, determined with the leave-one-out method). The outputs of these classifiers then served as features for a combined classifier, which in the training phase achieved considerably higher precision on the Mascara data than the individual classifiers. In the SemEval submission, accuracies of 79.8% on the coarse, 79.5% on the medium, and 78.5% on the fine level were achieved (against a baseline accuracy of 79.4%).
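The core of the approach described above can be sketched in a few lines: memory-based learning amounts to nearest-neighbour classification over symbolic features (here with the simple IB1-style overlap metric), and the context-size parameters are tuned by leave-one-out evaluation on the training data. The sketch below illustrates only this mechanism; the toy feature tuples, labels, and function names are hypothetical and do not reproduce the SemEval/Mascara data or the actual feature set.

```python
# Minimal sketch of memory-based (k-NN) classification over symbolic
# features, with leave-one-out tuning of a hyperparameter, assuming
# IB1-style overlap distance. All data here is illustrative only.
from collections import Counter

def overlap_distance(a, b):
    """Overlap metric: number of mismatching feature values."""
    return sum(x != y for x, y in zip(a, b))

def knn_predict(train, instance, k=1):
    """Majority class among the k nearest stored training examples."""
    nearest = sorted(train, key=lambda ex: overlap_distance(ex[0], instance))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def loo_accuracy(data, k=1):
    """Leave-one-out: classify each example against all the others."""
    hits = 0
    for i, (feats, label) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        hits += knn_predict(rest, feats, k) == label
    return hits / len(data)

# Toy instances: (symbolic feature tuple, reading). In the described
# system the tuples would encode synset IDs, function words, and
# window-token properties, with the tuple length governed by n1/n2/n3.
data = [
    (("in", "det", "win"), "literal"),
    (("in", "det", "visit"), "literal"),
    (("by", "none", "announce"), "metonymic"),
    (("by", "none", "decide"), "metonymic"),
    (("in", "none", "say"), "metonymic"),
    (("to", "det", "travel"), "literal"),
]

# Pick the k with the best leave-one-out accuracy, analogous to
# selecting n1, n2, and n3 on the Mascara training data.
best_k = max((1, 3), key=lambda k: loo_accuracy(data, k))
```

The combined classifier can then be read as stacking: the per-configuration predictions (e.g. `knn_predict` outputs for different context sizes) become the symbolic feature tuple of a second memory-based classifier trained the same way.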