Kullback-Leibler divergence for nonnegative matrix factorization

  • Authors:
  • Zhirong Yang; He Zhang; Zhijian Yuan; Erkki Oja

  • Affiliations:
  • Department of Information and Computer Science, Aalto University, Espoo, Finland (all authors)

  • Venue:
  • ICANN'11 Proceedings of the 21st International Conference on Artificial Neural Networks - Volume Part I
  • Year:
  • 2011

Abstract

The I-divergence, an unnormalized generalization of the Kullback-Leibler (KL) divergence, is commonly used in Nonnegative Matrix Factorization (NMF). This divergence has the drawback that its gradients with respect to the factorizing matrices depend heavily on the scales of those matrices, and learning the scales in gradient-descent optimization may require many iterations. This is often handled by explicitly normalizing one of the matrices, but that step may actually increase the I-divergence and is not covered by the NMF monotonicity proof. A simple remedy that we study here is to normalize the input data. Such normalization allows the I-divergence to be replaced with the original KL-divergence for NMF and its variants. We show that the KL-divergence takes the normalization structure into account in a very natural way and brings improvements to nonnegative matrix factorization: its gradients are well-scaled and thus lead to a new projected gradient method for NMF that runs faster or yields better approximations than three other widely used NMF algorithms.
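
For reference, the two objectives contrasted in the abstract can be written in their standard forms; this is a sketch of the usual textbook definitions for a nonnegative input matrix X approximated by the product WH, not a formulation copied from the paper itself.

```latex
% Standard definitions (illustrative sketch, not taken from the paper):
% I-divergence (unnormalized KL), the objective commonly minimized in NMF
D_I(X \,\|\, WH) = \sum_{ij} \Bigl( X_{ij} \log \frac{X_{ij}}{(WH)_{ij}} - X_{ij} + (WH)_{ij} \Bigr)

% Original (normalized) KL-divergence, which applies once the input is
% normalized so that both X and the approximation WH sum to one
D_{KL}(X \,\|\, WH) = \sum_{ij} X_{ij} \log \frac{X_{ij}}{(WH)_{ij}},
\qquad \sum_{ij} X_{ij} = \sum_{ij} (WH)_{ij} = 1
```

Under the sum-to-one constraint the linear terms of the I-divergence cancel, which is why normalizing the input data lets the original KL-divergence replace the I-divergence, as the abstract describes.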