We propose a model based on the Word Space Model for estimating the plausibility of a candidate argument given a verb and another argument. The resulting information can be used in coreference resolution, zero-pronoun resolution, and syntactic ambiguity resolution. Previous work, such as selectional preference or semantic frame acquisition, approaches this task with supervised resources or predicts arguments independently of one another. In this work we explore the extraction of plausible arguments while taking their correlation into account, using no more information than that provided by a dependency parser. This creates a data-sparseness problem, which we alleviate by smoothing with a distributional thesaurus built from the same data. We compare our model with the traditional PLSI method.
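The core idea can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the toy dependency triples, the relative-frequency plausibility estimate, and the cosine-based thesaurus back-off are all assumptions made for the example; the paper's model and corpus are richer.

```python
from collections import defaultdict
import math

# Hypothetical toy dependency triples (verb, relation, argument); in practice
# these would come from a dependency parser run over a large corpus.
triples = [
    ("drink", "obj", "coffee"), ("drink", "obj", "coffee"),
    ("drink", "obj", "tea"), ("drink", "obj", "soup"),
    ("eat", "obj", "bread"), ("eat", "obj", "soup"),
]

# Co-occurrence counts: verb -> argument -> count,
# plus a context vector per argument (the basis of the thesaurus).
verb_args = defaultdict(lambda: defaultdict(int))
arg_ctx = defaultdict(lambda: defaultdict(int))
for v, rel, a in triples:
    verb_args[v][a] += 1
    arg_ctx[a][(v, rel)] += 1

def cosine(u, w):
    """Cosine similarity between two sparse context vectors."""
    dot = sum(u[k] * w.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nw = math.sqrt(sum(x * x for x in w.values()))
    return dot / (nu * nw) if nu and nw else 0.0

def plausibility(verb, arg):
    """Relative frequency if (verb, arg) was observed; otherwise back off
    to a similarity-weighted average over distributionally similar
    arguments, i.e. smoothing with a thesaurus built from the same data."""
    total = sum(verb_args[verb].values())
    if total == 0:
        return 0.0
    if arg in verb_args[verb]:
        return verb_args[verb][arg] / total
    sims = [(cosine(arg_ctx[arg], arg_ctx[a]), a) for a in verb_args[verb]]
    z = sum(s for s, _ in sims)
    if z == 0:
        return 0.0
    return sum(s * verb_args[verb][a] / total for s, a in sims) / z
```

For example, `plausibility("drink", "coffee")` is the observed relative frequency 0.5, while `plausibility("eat", "tea")` is nonzero only because "tea" is distributionally similar to "soup", which does occur with "eat". This unsupervised back-off is what lets the model assign plausibility to verb-argument pairs never seen in the parsed corpus.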