Non-negative matrix factorization (NMF) provides a lower-rank approximation of a matrix. Because the factors are constrained to be nonnegative, the resulting latent structure is often more physically meaningful than that of other lower-rank approximations such as the singular value decomposition (SVD). Most NMF algorithms proposed in the literature minimize the Frobenius norm, partly because the Frobenius-norm formulation allows far more flexibility in algebraic manipulation than other divergences. In this paper we propose a fast NMF algorithm that applies to general Bregman divergences. Through a Taylor series expansion of the Bregman divergences, we reveal a relationship between Bregman divergences and Euclidean distance. Combined with the scalar block coordinate descent method, this key relationship provides a new direction for NMF algorithms with general Bregman divergences. The proposed algorithm generalizes several recently proposed methods for computing NMF with Bregman divergences and is computationally faster than existing alternatives. We demonstrate the effectiveness of our approach with experiments on artificial as well as real-world data.
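As an informal illustration of the relationship alluded to above (a standard fact about Bregman divergences, not an excerpt from the paper): for a divergence generated by a twice-differentiable, strictly convex function \varphi,

D_\varphi(x, y) = \varphi(x) - \varphi(y) - \varphi'(y)\,(x - y),

and a second-order Taylor expansion of \varphi(x) around y gives

D_\varphi(x, y) \approx \tfrac{1}{2}\,\varphi''(y)\,(x - y)^2,

i.e., locally a weighted squared Euclidean distance. A connection of this kind is what allows Euclidean-style scalar updates to be reused for general Bregman divergences.

For background on the coordinate-descent framework the abstract refers to, the following is a minimal sketch of scalar (element-wise) coordinate descent for the plain Frobenius-norm NMF objective ||A - WH||_F^2. It is not the paper's Bregman-divergence algorithm, and the function names are illustrative.

```python
import numpy as np

def nmf_scalar_cd(A, r, n_iter=100, seed=0):
    """Sketch: NMF by scalar coordinate descent on ||A - W H||_F^2.
    This is only the Euclidean base case, not the paper's method."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        H = _update_factor(A, W, H)          # fix W, update H entry by entry
        W = _update_factor(A.T, H.T, W.T).T  # fix H, update W by symmetry
    return W, H

def _update_factor(A, W, H):
    """One pass of scalar coordinate descent over the entries of H
    for ||A - W H||_F^2 with W held fixed."""
    WtW = W.T @ W          # r x r Gram matrix
    WtA = W.T @ A          # r x n
    G = WtW @ H - WtA      # gradient with respect to H (up to a constant factor)
    r, n = H.shape
    for k in range(r):
        hess = WtW[k, k] + 1e-12   # curvature of each one-dimensional subproblem
        for j in range(n):
            # exact minimizer of the 1-D quadratic, projected onto [0, inf)
            h_new = max(0.0, H[k, j] - G[k, j] / hess)
            delta = h_new - H[k, j]
            if delta != 0.0:
                # keep the stored gradient consistent after changing one entry
                G[:, j] += WtW[:, k] * delta
                H[k, j] = h_new
    return H
```

Each inner step solves a one-dimensional quadratic subproblem exactly and projects the result onto the nonnegative orthant; per the abstract, the paper obtains analogous scalar subproblems for general Bregman divergences via the Taylor expansion sketched above.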