The I-divergence, an unnormalized generalization of the Kullback-Leibler (KL) divergence, is commonly used in Nonnegative Matrix Factorization (NMF). This divergence has the drawback that its gradients with respect to the factorizing matrices depend heavily on the scales of those matrices, so learning the scales in gradient-descent optimization may require many iterations. This is often handled by explicitly normalizing one of the matrices, but that step may actually increase the I-divergence and is not covered by the NMF monotonicity proof. A simple remedy that we study here is to normalize the input data. Such normalization allows the I-divergence to be replaced with the original KL-divergence for NMF and its variants. We show that using the KL-divergence takes the normalization structure into account in a very natural way and brings improvements for nonnegative matrix factorization: the gradients of the normalized KL-divergence are well scaled and thus lead to a new projected gradient method for NMF that runs faster or yields better approximations than three other widely used NMF algorithms.
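
The sketch below illustrates the general idea in Python; it is not the paper's algorithm. It applies plain projected gradient descent, with simple halving backtracking, to the normalized KL-divergence D_KL(X/sum(X) || WH/sum(WH)); the rank, iteration count, backtracking rule, and the helper names kl_nmf_pgd, normalized_kl, and _pg_step are illustrative assumptions rather than details taken from the source.

# A minimal sketch (not the paper's algorithm): projected gradient descent for
# NMF under the normalized KL-divergence D_KL(X/sum(X) || WH/sum(WH)).
# Rank, iteration count, and the backtracking rule are illustrative choices.
import numpy as np

def normalized_kl(X, W, H, eps=1e-12):
    """KL-divergence between X (assumed to sum to 1) and WH rescaled to sum to 1."""
    P = W @ H
    P = np.maximum(P / P.sum(), eps)
    mask = X > 0
    return float(np.sum(X[mask] * np.log(X[mask] / P[mask])))

def _pg_step(A, grad, loss_fn, f0, min_step=1e-10):
    """One projected gradient step with halving backtracking on the loss."""
    step = 1.0
    while step > min_step:
        A_new = np.maximum(A - step * grad, 0.0)  # project onto the nonnegative orthant
        if loss_fn(A_new) < f0:
            return A_new
        step *= 0.5
    return A  # no decrease found; keep the current iterate

def kl_nmf_pgd(X, rank=5, n_iter=200, eps=1e-12, seed=0):
    rng = np.random.default_rng(seed)
    X = X / X.sum()                               # normalize the input data
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    s = (W @ H).sum()
    W /= np.sqrt(s)
    H /= np.sqrt(s)                               # start at the same overall scale as X
    for _ in range(n_iter):
        # Gradient of D_KL(X || Y/sum(Y)) w.r.t. Y = WH is 1/sum(Y) - X/Y (using sum(X) = 1)
        Y = np.maximum(W @ H, eps)
        G = 1.0 / Y.sum() - X / Y
        W = _pg_step(W, G @ H.T, lambda A: normalized_kl(X, A, H), normalized_kl(X, W, H))
        Y = np.maximum(W @ H, eps)
        G = 1.0 / Y.sum() - X / Y
        H = _pg_step(H, W.T @ G, lambda A: normalized_kl(X, W, A), normalized_kl(X, W, H))
    return W, H

if __name__ == "__main__":
    X = np.random.default_rng(1).random((60, 40))
    W, H = kl_nmf_pgd(X, rank=5)
    print("normalized KL after optimization:", normalized_kl(X / X.sum(), W, H))

Because both the data and the approximation are rescaled to sum to one inside the objective, the divergence is unaffected by the overall scale of WH; this is one way to see why the gradients of the normalized KL-divergence do not blow up or vanish with the scales of W and H, which is the well-scaledness the abstract refers to.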