In this paper we present a comparison of two language models based on dependency triples. We explore using the verb alone to predict the most plausible argument, as in selectional preferences, as well as using both the verb and one argument to predict another argument. The latter approach suffers from data sparseness, which must be addressed with smoothing techniques. Our results show that the K-Nearest Neighbor (KNN) algorithm benefits from the added information, attaining higher precision, whereas the PLSI model proved overly sensitive to it, yielding better results with the simpler verb-only model. These results suggest that combining the strengths of both algorithms would give the best performance.
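To illustrate the core idea, the sketch below contrasts the two prediction contexts the abstract describes: a richer (verb, argument) context and a verb-only fallback. The toy triples, function name, and the simple count-based backoff are illustrative assumptions, not the paper's actual data or smoothing method; they only show how sparseness in the richer context can force a retreat to the verb-only model.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus of dependency triples (verb, arg1, arg2);
# not the paper's data, just enough to exercise both contexts.
TRIPLES = [
    ("drink", "man", "water"),
    ("drink", "man", "water"),
    ("drink", "woman", "coffee"),
    ("eat", "dog", "bone"),
]

pair_counts = defaultdict(Counter)   # (verb, arg1) -> Counter over arg2
verb_counts = defaultdict(Counter)   # verb -> Counter over arg2 (selectional-preference style)

for verb, arg1, arg2 in TRIPLES:
    pair_counts[(verb, arg1)][arg2] += 1
    verb_counts[verb][arg2] += 1

def predict_arg2(verb, arg1):
    """Predict the most plausible second argument, backing off from the
    sparse (verb, arg1) context to the verb-only context when unseen."""
    context = pair_counts.get((verb, arg1))
    if not context:                      # data sparseness: back off
        context = verb_counts.get(verb)
    return context.most_common(1)[0][0] if context else None

print(predict_arg2("drink", "man"))    # seen pair context -> "water"
print(predict_arg2("drink", "child"))  # unseen pair, verb-only backoff -> "water"
```

A smoothed model would interpolate the two distributions rather than switch between them, but the hard backoff keeps the sparseness problem visible.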