Distributional similarity methods have proven to be a valuable tool for the induction of semantic similarity. Until now, most algorithms have used two-way co-occurrence data to compute the meaning of words. Co-occurrence frequencies, however, need not be pairwise: one can easily imagine situations where it is desirable to investigate co-occurrence frequencies across three modes and beyond. This paper investigates a tensor factorization method, non-negative tensor factorization, to build a model of three-way co-occurrences. The approach is applied to the problem of selectional preference induction and evaluated automatically in a pseudo-disambiguation task. The results show that non-negative tensor factorization is a promising tool for NLP.
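The abstract does not spell out the factorization itself, so the following is a minimal sketch of how a non-negative three-way tensor factorization can be computed. It is not the paper's implementation: the multiplicative-update scheme (a Lee-Seung-style NMF update extended to the CP/PARAFAC tensor model) and all function names are illustrative assumptions.

```python
import numpy as np

def khatri_rao(U, V):
    """Column-wise Kronecker product of U (J x R) and V (K x R) -> (J*K, R)."""
    J, R = U.shape
    K, _ = V.shape
    return (U[:, None, :] * V[None, :, :]).reshape(J * K, R)

def unfold(X, mode):
    """Mode-n unfolding of a 3-way tensor into a matrix (C-order columns)."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def ntf(X, rank, n_iter=500, eps=1e-9, seed=0):
    """Non-negative CP decomposition via multiplicative updates (illustrative).

    Factorizes a non-negative 3-way tensor X into factor matrices A, B, C
    such that X[i,j,k] is approximated by sum_r A[i,r] * B[j,r] * C[k,r].
    """
    rng = np.random.default_rng(seed)
    # Random non-negative initialization, bounded away from zero
    factors = [rng.random((dim, rank)) + 0.1 for dim in X.shape]
    for _ in range(n_iter):
        for mode in range(3):
            # Khatri-Rao product of the other two factors, in mode order
            others = [factors[m] for m in range(3) if m != mode]
            kr = khatri_rao(others[0], others[1])
            Xm = unfold(X, mode)
            # Multiplicative update: non-negativity is preserved because
            # numerator and denominator are both non-negative
            num = Xm @ kr
            den = factors[mode] @ (kr.T @ kr) + eps
            factors[mode] *= num / den
    return factors
```

In the selectional-preference setting described above, `X` could for instance be a verb x subject x object count tensor; the rank-`R` factors then act as latent dimensions linking verbs to their plausible argument combinations.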