Recent work on distributional methods for similarity focuses on using the context in which a target word occurs to derive context-sensitive similarity computations. In this paper we present a method for computing similarity that builds vector representations for words in context by modeling senses as latent variables in a large corpus. We apply this method to the Lexical Substitution Task and show that our model significantly outperforms typical distributional methods.
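The idea of treating senses as latent variables can be sketched as follows. Assume we have estimated, e.g. with a topic model over a large corpus, a sense distribution p(z | w) for each word and a word distribution p(w | z) for each sense. A word's in-context vector is then its sense distribution reweighted by how well each sense explains the observed context words. The toy model below is a hypothetical illustration with random parameters, not the paper's actual implementation; all array names are assumptions.

```python
import numpy as np

# Toy setup: K latent senses, V vocabulary words (parameters are random
# here; in practice they would be estimated from a large corpus).
rng = np.random.default_rng(0)
K, V = 4, 10
p_w_given_z = rng.dirichlet(np.ones(V), size=K).T   # shape (V, K): p(word | sense)
p_z_given_w = rng.dirichlet(np.ones(K), size=V)     # shape (V, K): p(sense | word)

def vector_in_context(w, context):
    """Contextualized vector for word w: p(z | w, context), obtained by
    reweighting w's prior sense distribution by the likelihood of each
    context word under each sense, then renormalizing."""
    weights = p_z_given_w[w].copy()
    for c in context:
        weights *= p_w_given_z[c]   # p(c | z) for every sense z
    return weights / weights.sum()

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Similarity of a target word (id 0) and a candidate substitute (id 1),
# both represented in the same context:
ctx = [2, 5]
sim = cosine(vector_in_context(0, ctx), vector_in_context(1, ctx))
```

For lexical substitution, candidate substitutes would be ranked by this context-sensitive similarity to the target, so that a substitute matching the target's contextually active sense scores higher than one matching an unrelated sense.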