Generative model-based document clustering: a comparative study. Knowledge and Information Systems.
Projected Gradient Methods for Nonnegative Matrix Factorization. Neural Computation.
SIAM Journal on Matrix Analysis and Applications.
Toward Faster Nonnegative Matrix Factorization: A New Algorithm and Comparisons. ICDM '08: Proceedings of the 2008 Eighth IEEE International Conference on Data Mining.
Document clustering using nonnegative matrix factorization. Information Processing and Management: An International Journal.
Hierarchical ALS algorithms for nonnegative matrix and 3D tensor factorization. ICA '07: Proceedings of the 7th International Conference on Independent Component Analysis and Signal Separation.
Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-way Data Analysis and Blind Source Separation.
On the Complexity of Nonnegative Matrix Factorization. SIAM Journal on Optimization.
Non-negative matrix factorization with quasi-Newton optimization. ICAISC '06: Proceedings of the 8th International Conference on Artificial Intelligence and Soft Computing.
On the Convergence of Multiplicative Update Algorithms for Nonnegative Matrix Factorization. IEEE Transactions on Neural Networks.
Fast Bregman divergence NMF using Taylor expansion and coordinate descent. Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
Sparse and unique nonnegative matrix factorization through data preprocessing. The Journal of Machine Learning Research.
Journal of Global Optimization.
Global convergence of modified multiplicative updates for nonnegative matrix factorization. Computational Optimization and Applications.
Nonnegative matrix factorization (NMF) is a data analysis technique used in a great variety of applications such as text mining, image processing, hyperspectral data analysis, computational biology, and clustering. In this letter, we consider two well-known algorithms designed to solve NMF problems: the multiplicative updates of Lee and Seung and the hierarchical alternating least squares of Cichocki et al. We propose a simple way to significantly accelerate these schemes, based on a careful analysis of the computational cost needed at each iteration, while preserving their convergence properties. This acceleration technique can also be applied to other algorithms, which we illustrate on the projected gradient method of Lin. The accelerated algorithms are empirically evaluated on image and text data sets, where they compare favorably with a state-of-the-art alternating nonnegative least squares algorithm.
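For orientation, the baseline multiplicative updates of Lee and Seung (for the Frobenius-norm NMF objective) can be sketched in a few lines of NumPy. This is a minimal illustration of the unaccelerated scheme only, not the accelerated variant proposed in the letter; the function name, iteration count, and the small epsilon added to avoid division by zero are illustrative choices:

```python
import numpy as np

def nmf_mu(X, r, n_iter=300, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for min ||X - W H||_F
    with W >= 0, H >= 0. X is (m, n), W is (m, r), H is (r, n)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    # Random nonnegative initialization.
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Elementwise multiplicative updates keep W and H nonnegative.
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ (H @ H.T) + eps)
    return W, H
```

Each update rescales the current factor elementwise by a ratio of nonnegative terms, so nonnegativity is preserved automatically and the objective is nonincreasing; the acceleration idea studied in the letter exploits the fact that, within one outer iteration, repeating the cheap inner update is much less expensive than recomputing the matrix products involving X.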