Towards tracking semantic change by visual analytics
HLT '11 Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers - Volume 2
In statistical NLP, Semantic Vector Spaces (SVSs) are the standard technique for the automatic modeling of lexical semantics. However, it is largely unclear exactly how these black-box techniques capture word meaning. To explore how an SVS structures the individual occurrences of words, we use a non-parametric MDS solution of a token-by-token similarity matrix. The MDS solution is visualized in an interactive plot built with Google Chart Tools. As a case study, we examine the occurrences of 476 Dutch nouns grouped into 214 synsets.
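The core computational step described above — projecting a token-by-token similarity matrix into two dimensions with non-parametric (non-metric) MDS — can be sketched as follows. This is a hedged illustration, not the authors' code: the similarity matrix here is synthetic, whereas the paper derives it from an SVS over actual word occurrences, and scikit-learn's `MDS` is used as a stand-in for whatever MDS implementation the authors employed.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)

# Toy token-by-token similarity matrix for 6 word occurrences (tokens).
# In the paper this would come from an SVS over corpus occurrences.
sim = rng.uniform(0.1, 0.9, size=(6, 6))
sim = (sim + sim.T) / 2          # similarities are symmetric
np.fill_diagonal(sim, 1.0)       # each token is maximally similar to itself

# MDS operates on dissimilarities, so convert similarity to distance.
dist = 1.0 - sim

# metric=False gives non-metric (non-parametric) MDS, which preserves only
# the rank order of the dissimilarities, not their exact magnitudes.
mds = MDS(n_components=2, metric=False,
          dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dist)

print(coords.shape)  # one 2-D point per token, ready for plotting
```

The resulting `coords` array is what a front end such as Google Chart Tools would render as the interactive scatter plot, with each point representing one occurrence of a noun.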