Matrix computations (3rd ed.)
Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection
IEEE Transactions on Pattern Analysis and Machine Intelligence
Clustering in large graphs and matrices
Proceedings of the tenth annual ACM-SIAM symposium on Discrete algorithms
Dimensionality reduction for similarity searching in dynamic databases
Computer Vision and Image Understanding - Special issue on content-based access for image and video libraries
A Multilinear Singular Value Decomposition
SIAM Journal on Matrix Analysis and Applications
Fast computation of low rank matrix approximations
STOC '01 Proceedings of the thirty-third annual ACM symposium on Theory of computing
Incremental Singular Value Decomposition of Uncertain Data with Missing Values
ECCV '02 Proceedings of the 7th European Conference on Computer Vision - Part I
Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence
Generalized low rank approximations of matrices
ICML '04 Proceedings of the twenty-first international conference on Machine learning
Fast Monte-Carlo algorithms for finding low-rank approximations
Journal of the ACM (JACM)
Rank-R Approximation of Tensors: Using Image-as-Matrix Representation
CVPR '05 Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Volume 2
Feature extraction approaches based on matrix pattern: MatPCA and MatFLDA
Pattern Recognition Letters
Generalized Low Rank Approximations of Matrices
Machine Learning
Non-iterative generalized low rank approximation of matrices
Pattern Recognition Letters
Equivalence of Non-Iterative Algorithms for Simultaneous Low Rank Approximations of Matrices
CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 1
The theoretical analysis of GLRAM and its applications
Pattern Recognition
Representing image matrices: eigenimages versus eigenvectors
ISNN'05 Proceedings of the Second international conference on Advances in neural networks - Volume Part II
MPCA: Multilinear Principal Component Analysis of Tensor Objects
IEEE Transactions on Neural Networks
Low-rank matrix decomposition in L1-norm by dynamic systems
Image and Vision Computing
Compared with the singular value decomposition (SVD), generalized low-rank approximations of matrices (GLRAM) can consume less computation time, achieve a higher compression ratio, and yield competitive classification performance. GLRAM has been successfully applied to tasks such as image compression and retrieval, and quite a few extensions have been proposed. However, some basic properties of, and crucial problems concerning, GLRAM have not yet been explored or solved in the literature. To this end, we revisit GLRAM in this paper. First, we reveal a close relationship between GLRAM and SVD: GLRAM's objective function is identical to SVD's except for the imposed constraints. Second, we derive a lower bound on GLRAM's objective function and discuss when this bound can be attained. Moreover, from the viewpoint of minimizing the lower bound, we answer an open problem raised by Ye (Machine Learning, 2005), namely a theoretical justification of the experimental observation that, for a given number of reduced dimensions, the lowest reconstruction error is obtained when the left and right transformations have equal numbers of columns. Third, we explore when and why GLRAM performs well in terms of compression, a fundamental question concerning the usability of GLRAM.
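To make the objective concrete, the following is a minimal NumPy sketch of the alternating scheme from Ye (Machine Learning, 2005): given matrices A_i, find orthonormal left and right transformations L and R that minimize sum_i ||A_i - L M_i R^T||_F^2 with M_i = L^T A_i R. The function name `glram` and the column counts `l1`, `l2` are illustrative, not part of any published implementation.

```python
import numpy as np

def glram(As, l1, l2, n_iter=20, seed=0):
    """Alternating GLRAM sketch (after Ye, 2005; illustrative only).

    Minimizes sum_i ||A_i - L @ M_i @ R.T||_F^2 over orthonormal
    L (r x l1) and R (c x l2), with M_i = L.T @ A_i @ R.
    """
    r, c = As[0].shape
    rng = np.random.default_rng(seed)
    # Start from a random orthonormal right transformation.
    R = np.linalg.qr(rng.standard_normal((c, l2)))[0]
    for _ in range(n_iter):
        # L-step: top-l1 eigenvectors of sum_i A_i R R^T A_i^T.
        SL = sum(A @ R @ R.T @ A.T for A in As)
        L = np.linalg.eigh(SL)[1][:, ::-1][:, :l1]
        # R-step: top-l2 eigenvectors of sum_i A_i^T L L^T A_i.
        SR = sum(A.T @ L @ L.T @ A for A in As)
        R = np.linalg.eigh(SR)[1][:, ::-1][:, :l2]
    # Compressed representation of each matrix.
    Ms = [L.T @ A @ R for A in As]
    return L, R, Ms
```

Note the compression angle discussed above: storing L, R, and the small l1 x l2 cores M_i replaces the full r x c matrices, and the choice l1 = l2 is the one the paper justifies as minimizing reconstruction error for a given number of reduced dimensions.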