A knowledge-rich approach to identifying semantic relations between nominals
Information Processing and Management: an International Journal
This paper addresses the task of automatically classifying semantic relations between nouns. We present an improved WordNet-based learning model that relies on the semantic information of the constituent nouns: the representation of each noun's meaning captures conceptual features that play a key role in identifying the semantic relation. We report substantial improvements over previous WordNet-based methods on the 2007 SemEval data. Moreover, our experiments show that WordNet's IS-A hierarchy is better suited to some semantic relations than to others. We also compute learning curves and show that our model does not require a large number of training examples.
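The core idea in the abstract — representing each noun by conceptual features drawn from WordNet's IS-A hierarchy and learning relations from those features — can be illustrated with a minimal, self-contained sketch. Everything below is hypothetical: the tiny IS-A dictionary, the training pairs, the "Object-Material" label, and the nearest-neighbour rule are invented for illustration (the paper itself uses the full WordNet hierarchy, the SemEval-2007 relation inventory, and a proper learning model).

```python
# Toy IS-A links standing in for WordNet's hypernym hierarchy
# (invented for illustration; not real WordNet data).
TOY_ISA = {
    "knife": "tool", "axe": "tool", "tool": "artifact",
    "car": "vehicle", "truck": "vehicle", "vehicle": "artifact",
    "artifact": "entity",
    "steel": "metal", "bronze": "metal",
    "metal": "substance", "substance": "entity",
    "wheel": "part", "spoke": "part", "part": "entity",
}

def hypernym_chain(noun):
    """Walk the IS-A links up to the root, collecting all ancestors."""
    chain = []
    while noun in TOY_ISA:
        noun = TOY_ISA[noun]
        chain.append(noun)
    return chain

def features(e1, e2):
    """Conceptual features: each noun's hypernym ancestors, tagged by slot."""
    return ({f"e1:{h}" for h in hypernym_chain(e1)}
            | {f"e2:{h}" for h in hypernym_chain(e2)})

# Hand-labelled toy training pairs. "Part-Whole" is a genuine SemEval-2007
# relation; "Object-Material" is an invented label for this sketch.
TRAIN = [
    (("knife", "steel"), "Object-Material"),
    (("wheel", "car"), "Part-Whole"),
]

def classify(e1, e2):
    """Predict the relation of the nearest training pair in feature space."""
    feats = features(e1, e2)

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    _, label = max(TRAIN, key=lambda ex: jaccard(feats, features(*ex[0])))
    return label
```

Because unseen pairs such as ("spoke", "truck") share hypernym ancestors with training pairs like ("wheel", "car"), the classifier can generalize along the IS-A hierarchy rather than memorizing surface words — the property the abstract attributes to its conceptual-feature representation.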