Non-negative matrix factorization with quasi-newton optimization

  • Authors:
  • Rafal Zdunek; Andrzej Cichocki

  • Affiliations:
  • Laboratory for Advanced Brain Signal Processing, BSI, RIKEN, Wako-shi, Japan (both authors)

  • Venue:
  • ICAISC'06: Proceedings of the 8th International Conference on Artificial Intelligence and Soft Computing
  • Year:
  • 2006


Abstract

Non-negative matrix factorization (NMF) is an emerging method with a wide spectrum of potential applications in data analysis, feature extraction, and blind source separation. Currently, most applications use relatively simple multiplicative NMF learning algorithms, proposed by Lee and Seung, which are based on minimization of the Kullback-Leibler divergence and the Frobenius norm. Unfortunately, these algorithms are relatively slow and often need a few thousand iterations to reach a local minimum. To increase the convergence rate and improve the performance of NMF, we propose to use a more general cost function: the so-called Amari alpha divergence. Exploiting the special structure of the Hessian of this cost function, we derive a relatively simple second-order quasi-Newton method for NMF. The validity and performance of the proposed algorithm have been extensively tested on blind source separation problems, for both signals and images. The performance of the developed NMF algorithm is illustrated on the separation of statistically dependent signals and images from their linear mixtures.
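For context, the baseline that the paper aims to accelerate is the Lee-Seung multiplicative update scheme. The sketch below is a minimal NumPy illustration of those multiplicative updates for the Frobenius-norm cost (not the authors' quasi-Newton alpha-divergence method); all names and parameters are illustrative.

```python
import numpy as np

def nmf_multiplicative(Y, rank, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||Y - A @ X||_F^2.

    A sketch of the baseline first-order algorithm referenced in the
    abstract; the paper's contribution is a faster quasi-Newton scheme.
    """
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    # Random non-negative initialization; eps keeps entries strictly positive.
    A = rng.random((m, rank)) + eps
    X = rng.random((rank, n)) + eps
    for _ in range(n_iter):
        # Multiplicative updates preserve non-negativity by construction.
        X *= (A.T @ Y) / (A.T @ A @ X + eps)
        A *= (Y @ X.T) / (A @ X @ X.T + eps)
    return A, X

# Usage: factor a small non-negative matrix.
Y = np.abs(np.random.default_rng(1).random((20, 30)))
A, X = nmf_multiplicative(Y, rank=5)
err = np.linalg.norm(Y - A @ X)
```

Because each update multiplies by a non-negative ratio, no projection step is needed, but convergence is only first-order, which motivates the second-order method developed in the paper.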