In this paper we develop a new approach to sparse principal component analysis (sparse PCA). We propose two single-unit and two block optimization formulations of the sparse PCA problem, aimed at extracting a single sparse dominant principal component of a data matrix or several such components at once, respectively. While the initial formulations involve nonconvex functions and are therefore computationally intractable, we rewrite them as optimization programs that maximize a convex function on a compact set. The dimension of the search space is reduced enormously when the data matrix has many more columns (variables) than rows. We then propose and analyze a simple gradient method suited for the task. The algorithm exhibits its best convergence properties when either the objective function or the feasible set is strongly convex, which holds for our single-unit formulations and can be enforced in the block case. Finally, we demonstrate numerically on a set of random and gene-expression test problems that our approach outperforms existing algorithms in both the quality of the obtained solution and computational speed.
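To make the single-unit scheme concrete, the sketch below shows one plausible instantiation of the l1-penalized formulation as a power-like gradient iteration: maximize phi(x) = sum_i (|a_i^T x| - gamma)_+^2 over the unit sphere in R^p, where the a_i are the n columns (variables) of the data matrix A. This is a minimal NumPy sketch under stated assumptions, not the authors' reference implementation; the function name gpower_l1_single_unit, the largest-norm-column initialization, and the stopping rule are our own choices.

import numpy as np

def gpower_l1_single_unit(A, gamma, max_iter=200, tol=1e-6):
    """Hedged sketch of a single-unit l1-penalized sparse-PCA iteration.

    A     : (p, n) data matrix with variables as columns a_i
    gamma : sparsity-inducing threshold, gamma >= 0 (nonzero output
            requires gamma below the largest column norm of A)
    Returns a sparse loading vector z in R^n with unit norm (or 0).
    """
    # Initialize x on the unit sphere from the largest-norm column (assumed heuristic).
    j = np.argmax(np.linalg.norm(A, axis=0))
    x = A[:, j] / np.linalg.norm(A[:, j])

    for _ in range(max_iter):
        t = A.T @ x                                           # t_i = a_i^T x
        u = np.sign(t) * np.maximum(np.abs(t) - gamma, 0.0)   # soft threshold
        g = A @ u                    # ascent direction: grad phi(x) = 2 A u
        if np.linalg.norm(g) == 0.0:
            return np.zeros(A.shape[1])                       # gamma too large
        x_new = g / np.linalg.norm(g)                         # back to the sphere
        if np.linalg.norm(x_new - x) < tol:
            x = x_new
            break
        x = x_new

    # Recover the sparse loading vector in variable space from the final x.
    t = A.T @ x
    z = np.sign(t) * np.maximum(np.abs(t) - gamma, 0.0)
    nz = np.linalg.norm(z)
    return z / nz if nz > 0 else z

Note that with gamma = 0 the update reduces to the classical power method on A A^T (z is then proportional to the leading right singular vector of A), while increasing gamma drives more entries of z to exactly zero. The iteration lives in R^p, which illustrates the dimension reduction claimed above when n >> p.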