We explore the near-synonym lexical choice problem using a novel representation of near-synonyms and their contexts in a latent semantic space. In contrast to traditional latent semantic analysis (LSA), our model is built on lexical-level co-occurrence, which we show empirically provides higher-dimensional information about the subtle differences among near-synonyms. By employing supervised learning on the latent features, our system achieves an accuracy of 74.5% on a "fill-in-the-blank" task, a statistically significant improvement over the current state of the art. We also formalize the notion of subtlety through its relation to the dimensionality of the semantic space. Using this formalization and our learning models, we quantify and empirically test several intuitions about subtlety, dimensionality, and context.
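The pipeline the abstract describes can be sketched roughly as follows: build a word-word (lexical-level) co-occurrence matrix, factor it with a truncated SVD to obtain latent word vectors, and then score each near-synonym candidate against the latent representation of the blank's context. This is only a minimal illustration under stated assumptions: the toy corpus, window size, latent dimensionality `k`, and the cosine-similarity chooser are all stand-ins, not the paper's actual corpus or supervised learner.

```python
# Minimal sketch (assumptions: toy corpus, +/-2 word window, k=5 latent
# dimensions, cosine scoring in place of the paper's supervised model).
import numpy as np

corpus = [
    "the small error in the forecast was harmless",
    "a slip of the tongue is a small mistake",
    "the clerical blunder cost the company dearly",
    "he made a careless mistake on the exam",
    "the forecast error was corrected quickly",
]
tokens = [s.split() for s in corpus]
vocab = sorted({w for sent in tokens for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Lexical-level co-occurrence counts within a +/-2 word window.
C = np.zeros((len(vocab), len(vocab)))
for sent in tokens:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if j != i:
                C[idx[w], idx[sent[j]]] += 1.0

# Truncated SVD: rows of U * S are latent word vectors.
U, S, _ = np.linalg.svd(C, full_matrices=False)
k = 5
W = U[:, :k] * S[:k]

def choose(context_words, candidates):
    """Pick the candidate whose latent vector best matches the context."""
    ctx = sum(W[idx[w]] for w in context_words if w in idx)
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(candidates, key=lambda c: cos(W[idx[c]], ctx))

print(choose(["the", "forecast", "was", "corrected"],
             ["error", "mistake", "blunder"]))
```

In the actual "fill-in-the-blank" setting, the context vector would feed a trained classifier over the latent features rather than a raw cosine comparison; the sketch only shows where the lexical co-occurrence representation enters.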