A vector space model for automatic indexing. Communications of the ACM.
Placing search in context: the concept revisited. ACM Transactions on Information Systems (TOIS).
Explorations in Automatic Thesaurus Discovery. The Journal of Machine Learning Research.
Automatic word sense discrimination. Computational Linguistics, special issue on word sense disambiguation.
Automatic retrieval and clustering of similar words. COLING '98: Proceedings of the 17th International Conference on Computational Linguistics, Volume 2.
Visual information in semantic representation. HLT '10: Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics.
Distributional semantics from text and images. GEMS '11: Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics.
A popular tradition of studying semantic representation has been driven by the assumption that word meaning can be learned from the linguistic environment, despite ample evidence suggesting that language is grounded in perception and action. In this paper we present a comparative study of models that represent word meaning based on linguistic and perceptual data. Linguistic information is approximated by naturally occurring corpora and sensorimotor experience by feature norms (i.e., attributes native speakers consider important in describing the meaning of a word). The models differ in terms of the mechanisms by which they integrate the two modalities. Experimental results show that a closer correspondence to human data can be obtained by uncovering latent information shared among the textual and perceptual modalities rather than arriving at semantic knowledge by concatenating the two.
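The contrast drawn in the abstract, concatenating textual and perceptual features versus uncovering latent structure shared by the two modalities, can be illustrated with a toy sketch. This is not the authors' implementation; the matrices are random stand-ins, and truncated SVD on the joined matrix is used here as one common way of extracting a shared latent space:

```python
import numpy as np

# Toy setup (hypothetical data, not from the paper): feature matrices for
# the same six words in two modalities.
rng = np.random.default_rng(0)
n_words, n_text, n_percept, k = 6, 10, 8, 4

T = rng.random((n_words, n_text))     # text-based (corpus) features
P = rng.random((n_words, n_percept))  # perceptual features (e.g., feature norms)

# Model 1: integrate the modalities by simple concatenation.
concat = np.hstack([T, P])            # shape: (n_words, n_text + n_percept)

# Model 2: uncover a k-dimensional latent space shared by both modalities
# via a truncated SVD of the concatenated matrix (one standard choice; the
# models compared in the paper may differ in detail).
U, s, Vt = np.linalg.svd(concat, full_matrices=False)
latent = U[:, :k] * s[:k]             # joint low-dimensional word vectors

print(concat.shape)   # (6, 18)
print(latent.shape)   # (6, 4)
```

Word similarity can then be computed with cosine distance in either space; the abstract reports that the latent representation tracks human judgments more closely than the plain concatenation.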