Nonnegative matrix approximation (NNMA) is a popular matrix decomposition technique that has proven useful across a wide variety of fields, with applications ranging from document analysis and image processing to bioinformatics and signal processing. Over the years, several algorithms for NNMA have been proposed, e.g., Lee and Seung's multiplicative updates, alternating least squares (ALS), and gradient descent-based procedures. However, most of these procedures suffer from slow convergence, numerical instability, or, at worst, serious theoretical drawbacks. In this paper, we develop a new and improved algorithmic framework for the least-squares NNMA problem that is not only theoretically well-founded but also overcomes many deficiencies of other methods. Our framework readily admits powerful optimization techniques, and as concrete realizations we present implementations based on the Newton, BFGS, and conjugate gradient methods. Our algorithms provide numerical results superior to both Lee and Seung's method and the alternating least squares heuristic, which has been reported to work well in some situations but has no theoretical guarantees [1]. Our approach extends naturally to include regularization and box constraints without sacrificing convergence guarantees. We present experimental results on both synthetic and real-world datasets that demonstrate the superiority of our methods, both in terms of better approximations and computational efficiency. Copyright © 2007 Wiley Periodicals, Inc., A Wiley Company Statistical Analy Data Mining 1: 000-000, 2007
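For context, the Lee-Seung multiplicative-update baseline that the abstract compares against can be sketched in a few lines of NumPy. This is a minimal illustration of the classical least-squares NNMA updates, not the Newton/BFGS/CG framework proposed in the paper; the function name, random initialization, and the small `eps` guard against division by zero are choices made for this sketch.

```python
import numpy as np

def nnma_multiplicative(V, r, iters=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for least-squares NNMA:
    minimize ||V - W H||_F^2 subject to W >= 0, H >= 0.

    V : (m, n) nonnegative data matrix
    r : target rank of the approximation
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    # Random nonnegative initialization (any positive start works).
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        # Each update multiplies by a nonnegative ratio, so W, H stay >= 0
        # and the Frobenius objective is nonincreasing (Lee & Seung, 2000).
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

The updates preserve nonnegativity automatically, which is what makes them attractive despite the slow convergence the abstract notes; projection-based Newton-type methods instead take second-order steps and enforce the constraints explicitly.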