During the early stages of language acquisition, young infants face the task of learning a basic vocabulary without the aid of prior linguistic knowledge. Attempts have been made to model this complex behaviour computationally, using a variety of machine learning algorithms, among them non-negative matrix factorization (NMF). In this paper, we replace NMF in a vocabulary-learning setting with a conceptually similar algorithm, probabilistic latent semantic analysis (PLSA), which can learn word representations incrementally through Bayesian updating. We further show that this learning framework can model certain cognitive behaviours, such as forgetting, in a simple way.
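To make the idea concrete, the following is a minimal sketch of PLSA trained by EM on a word-by-"utterance" count matrix, followed by an incremental update step in which old sufficient statistics are exponentially decayed, a simple way to model forgetting. This is an illustrative toy, not the paper's implementation: the function names, the `decay` parameter, the toy data, and the choice to carry word–topic expected counts as the running statistics are all assumptions made here.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Batch PLSA via EM. counts: (n_words, n_docs) co-occurrence matrix.
    Returns P(w|z) of shape (n_words, n_topics) and P(z|d) of shape
    (n_topics, n_docs), each column-normalized to sum to 1."""
    rng = np.random.default_rng(seed)
    n_words, n_docs = counts.shape
    p_w_z = rng.random((n_words, n_topics))
    p_w_z /= p_w_z.sum(axis=0, keepdims=True)
    p_z_d = rng.random((n_topics, n_docs))
    p_z_d /= p_z_d.sum(axis=0, keepdims=True)
    for _ in range(n_iter):
        # E-step: posterior P(z|w,d), shape (n_words, n_topics, n_docs)
        joint = p_w_z[:, :, None] * p_z_d[None, :, :]
        post = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: re-estimate from expected counts n(w,d) * P(z|w,d)
        exp_counts = counts[:, None, :] * post
        p_w_z = exp_counts.sum(axis=2)
        p_w_z /= p_w_z.sum(axis=0, keepdims=True) + 1e-12
        p_z_d = exp_counts.sum(axis=0)
        p_z_d /= p_z_d.sum(axis=0, keepdims=True) + 1e-12
    return p_w_z, p_z_d

def incremental_plsa_step(n_wz, new_counts, decay=0.9, n_iter=20):
    """One incremental update with forgetting (an assumed scheme):
    fold in a new batch of documents with P(w|z) held fixed, then decay
    the running word-topic statistics n_wz before adding the new evidence.
    decay < 1 makes older observations fade, i.e. the model 'forgets'."""
    n_topics = n_wz.shape[1]
    n_docs = new_counts.shape[1]
    p_w_z = n_wz / (n_wz.sum(axis=0, keepdims=True) + 1e-12)
    p_z_d = np.full((n_topics, n_docs), 1.0 / n_topics)
    for _ in range(n_iter):
        joint = p_w_z[:, :, None] * p_z_d[None, :, :]
        post = joint / (joint.sum(axis=1, keepdims=True) + 1e-12)
        exp_counts = new_counts[:, None, :] * post
        p_z_d = exp_counts.sum(axis=0)
        p_z_d /= p_z_d.sum(axis=0, keepdims=True) + 1e-12
    return decay * n_wz + exp_counts.sum(axis=2)

# Toy usage: batch-train on 12 "utterances", then fold in 4 new ones.
rng = np.random.default_rng(1)
counts = rng.integers(0, 5, size=(30, 12)).astype(float)
p_w_z, p_z_d = plsa(counts, n_topics=3)
n_wz = p_w_z * counts.sum()          # seed running statistics (arbitrary scale)
new_batch = rng.integers(0, 5, size=(30, 4)).astype(float)
n_wz = incremental_plsa_step(n_wz, new_batch, decay=0.9)
```

The connection to NMF is that the product P(w|z) P(z|d), scaled by the total count, plays the same role as the low-rank reconstruction W H in KL-divergence NMF; the PLSA parameterization simply keeps the factors normalized as probability distributions, which is what makes the Bayesian incremental update and the decay-based forgetting straightforward to express.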