Concept decompositions for large sparse text data using clustering
Machine Learning
Introducing a weighted non-negative matrix factorization for image classification
Pattern Recognition Letters
A Nonnegatively Constrained Convex Programming Method for Image Reconstruction
SIAM Journal on Scientific Computing
Non-negative matrix factorization based methods for object recognition
Pattern Recognition Letters
CSB '04 Proceedings of the 2004 IEEE Computational Systems Bioinformatics Conference
Non-negative Matrix Factorization with Sparseness Constraints
The Journal of Machine Learning Research
Nonnegative features of spectro-temporal sounds for classification
Pattern Recognition Letters
Nonsmooth Nonnegative Matrix Factorization (nsNMF)
IEEE Transactions on Pattern Analysis and Machine Intelligence
Document clustering using nonnegative matrix factorization
Information Processing and Management: an International Journal
Learning Image Components for Object Recognition
The Journal of Machine Learning Research
Projected Gradient Methods for Nonnegative Matrix Factorization
Neural Computation
Regularized Alternating Least Squares Algorithms for Non-negative Matrix/Tensor Factorization
ISNN '07 Proceedings of the 4th international symposium on Neural Networks: Advances in Neural Networks, Part III
Csiszár’s divergences for non-negative matrix factorization: family of new algorithms
ICA'06 Proceedings of the 6th international conference on Independent Component Analysis and Blind Signal Separation
Extended SMART algorithms for non-negative matrix factorization
ICAISC'06 Proceedings of the 8th international conference on Artificial Intelligence and Soft Computing
Non-negative matrix factorization with quasi-newton optimization
ICAISC'06 Proceedings of the 8th international conference on Artificial Intelligence and Soft Computing
Sparse solutions to linear inverse problems with multiple measurement vectors
IEEE Transactions on Signal Processing
Accelerating the EMML algorithm and related iterative algorithms by rescaled block-iterative methods
IEEE Transactions on Image Processing
Sparse Super Symmetric Tensor Factorization
Neural Information Processing
Blind Image Separation Using Nonnegative Matrix Factorization with Gibbs Smoothing
Neural Information Processing
Data Clustering with Semi-binary Nonnegative Matrix Factorization
ICAISC '08 Proceedings of the 9th international conference on Artificial Intelligence and Soft Computing
Computational Intelligence and Neuroscience - Advances in Nonnegative Matrix and Tensor Factorization
Hierarchical ALS algorithms for nonnegative matrix and 3D tensor factorization
ICA'07 Proceedings of the 7th international conference on Independent component analysis and signal separation
Nonnegative matrix factorization on orthogonal subspace with smoothed l0 norm constrained
IScIDE'12 Proceedings of the third Sino-foreign-interchange conference on Intelligent Science and Intelligent Data Engineering
Nonnegative matrix factorization (NMF) solves the following problem: given only Y ∈ R^(M×T) and a preassigned index R, find nonnegative matrices A ∈ R₊^(M×R) and X ∈ R₊^(R×T) such that Y ≈ AX. This method has found a wide spectrum of applications in signal and image processing, such as blind source separation (BSS), spectra recovery, pattern recognition, segmentation, and clustering. The factorization is usually performed with an alternating gradient descent technique applied to the squared Euclidean distance or the Kullback-Leibler divergence. This approach underlies the widely known Lee-Seung NMF algorithms, which belong to the class of multiplicative iterative algorithms. It is well known that these algorithms, despite their low complexity, converge slowly, yield only strictly positive solutions, and can easily fall into local minima of a nonconvex cost function. In this paper, we propose to exploit the second-order terms of the cost function to overcome these disadvantages of gradient (multiplicative) algorithms. First, a projected quasi-Newton method is presented, in which a Hessian regularized with the Levenberg-Marquardt approach is inverted via the Q-less QR decomposition. Since the matrices A and/or X are usually sparse, a more sophisticated hybrid approach based on the gradient projection conjugate gradient (GPCG) algorithm of Moré and Toraldo is adapted for NMF: the gradient projection (GP) method identifies the zero-valued (active) components, and Newton steps are then taken only on the positive (inactive) components, using the conjugate gradient (CG) method. As the cost function we use the α-divergence, which unifies many well-known cost functions. We applied the new NMF method to a BSS problem with mixed signals and images; the results demonstrate its high robustness.
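For orientation, a minimal sketch of the Lee-Seung multiplicative updates that the abstract contrasts against (squared Euclidean distance case); the function name and parameters here are illustrative, not part of the paper's method:

```python
import numpy as np

def nmf_multiplicative(Y, R, n_iter=200, eps=1e-9, seed=0):
    """Baseline Lee-Seung multiplicative updates for Y ~ A X under the
    squared Euclidean distance. Updates are elementwise multiplications,
    so A and X stay strictly positive throughout (one of the drawbacks
    the abstract notes, since exact zeros are never produced)."""
    rng = np.random.default_rng(seed)
    M, T = Y.shape
    A = rng.random((M, R)) + eps   # random strictly positive init
    X = rng.random((R, T)) + eps
    for _ in range(n_iter):
        # Alternate the two multiplicative update rules.
        X *= (A.T @ Y) / (A.T @ A @ X + eps)
        A *= (Y @ X.T) / (A @ X @ X.T + eps)
    return A, X

# Usage: factorize a small nonnegative matrix with R = 3 components.
Y = np.abs(np.random.default_rng(1).random((8, 10)))
A, X = nmf_multiplicative(Y, R=3)
```

The second-order methods proposed in the paper (projected quasi-Newton and GPCG-based updates) replace these multiplicative steps to obtain faster convergence and exact zeros in the sparse factors.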