Automatic text processing: the transformation, analysis, and retrieval of information by computer
Introduction to matrix analysis (2nd ed.)
A Multilinear Singular Value Decomposition
SIAM Journal on Matrix Analysis and Applications
Statistical Models in S
Clustering Large Graphs via the Singular Value Decomposition
Machine Learning
Non-negative Matrix Factorization with Sparseness Constraints
The Journal of Machine Learning Research
Email Surveillance Using Non-negative Matrix Factorization
Computational & Mathematical Organization Theory
Pattern Recognition and Machine Learning (Information Science and Statistics)
Matrix Methods in Data Mining and Pattern Recognition (Fundamentals of Algorithms)
Singular value decomposition in additive, multiplicative, and logistic forms
Pattern Recognition
Two-parameter ridge regression and its convergence to the eventual pairwise model
Mathematical and Computer Modelling: An International Journal
Semantic Pattern Transformation: Applying Knowledge Discovery Processes in Heterogeneous Domains
Proceedings of the 13th International Conference on Knowledge Management and Knowledge Technologies
Principal component analysis (PCA) and singular value decomposition (SVD) are widely used in multivariate statistical analysis for data reduction. This work considers exponential, logit, and multinomial parameterizations of the eigenvectors' elements that always yield nonnegative loadings, interpretable as shares for variable aggregation. In contrast to regular PCA and SVD, matrix decomposition by these positive shares shows explicitly which variables, and with what percentage, compose each group, and thus what each variable contributes to the data approximation. The least squares objective of the matrix fit reduces to a Rayleigh quotient, giving a variational description of the eigenvalues. Eigenvectors under the nonlinear parameterization can be found by a Newton-Raphson optimization procedure. Numerical examples compare the classical and nonnegative-loading results, with interpretation via Perron-Frobenius theory for each subset of variables identified by the sparse loading vectors.
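The multinomial parameterization idea from the abstract can be sketched as follows: write the loading vector through a softmax map, so its elements are automatically nonnegative and sum to one, and maximize the Rayleigh quotient of the covariance matrix over the unconstrained parameters. This is only an illustrative sketch under assumed details — it uses plain gradient ascent in place of the paper's Newton-Raphson procedure, and the function name, step size, and toy covariance matrix are hypothetical.

```python
import numpy as np

def softmax(theta):
    # Multinomial parameterization: nonnegative shares summing to 1
    e = np.exp(theta - theta.max())
    return e / e.sum()

def rayleigh(a, C):
    # Rayleigh quotient a'Ca / a'a of vector a for matrix C
    return (a @ C @ a) / (a @ a)

def nonneg_leading_shares(C, steps=2000, lr=0.5, seed=0):
    """Maximize the Rayleigh quotient over softmax-parameterized
    (nonnegative, unit-sum) loading vectors by gradient ascent.
    A stand-in for the Newton-Raphson step described in the paper."""
    rng = np.random.default_rng(seed)
    theta = 0.01 * rng.standard_normal(C.shape[0])
    for _ in range(steps):
        a = softmax(theta)
        aa, aCa = a @ a, a @ C @ a
        grad_a = 2 * (C @ a * aa - aCa * a) / aa**2   # d(Rayleigh)/da
        J = np.diag(a) - np.outer(a, a)               # softmax Jacobian
        theta += lr * (J @ grad_a)                    # chain-rule ascent step
    return softmax(theta)

# Hypothetical toy covariance matrix: variables 1 and 2 are correlated
C = np.array([[4.0, 1.5, 0.2],
              [1.5, 3.0, 0.1],
              [0.2, 0.1, 1.0]])
shares = nonneg_leading_shares(C)
print(shares, rayleigh(shares, C))
```

The printed shares show directly which variables, and with what weight, make up the leading component, which is the interpretability advantage the abstract claims over sign-mixed classical eigenvectors.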