Given a covariance matrix, we consider the problem of maximizing the variance explained by a particular linear combination of the input variables while constraining the number of nonzero coefficients in this combination. This problem arises in the decomposition of a covariance matrix into sparse factors, or sparse principal component analysis (PCA), and has wide applications ranging from biology to finance. We use a modification of the classical variational representation of the largest eigenvalue of a symmetric matrix, in which cardinality is constrained, and derive a semidefinite programming-based relaxation for our problem. We also discuss Nesterov's smooth minimization technique applied to the semidefinite program arising from this relaxation of the sparse PCA problem. The method has complexity $O(n^4 \sqrt{\log(n)}/\epsilon)$, where $n$ is the size of the underlying covariance matrix and $\epsilon$ is the desired absolute accuracy on the optimal value of the problem.
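The underlying problem is to maximize $x^T \Sigma x$ subject to $\|x\|_2 = 1$ and $\mathbf{card}(x) \le k$. As a minimal illustration of this cardinality-constrained variance maximization (not the semidefinite relaxation described above), the sketch below uses a simple truncated power iteration heuristic on a toy covariance matrix; the function name and toy data are illustrative assumptions, not from the paper.

```python
import numpy as np

def truncated_power_iteration(sigma, k, iters=200, seed=0):
    """Heuristic for: maximize x^T sigma x  s.t.  ||x||_2 = 1, card(x) <= k.

    A simple swapped-in heuristic, not the paper's SDP relaxation:
    after each multiplication by sigma, keep only the k largest-magnitude
    coordinates and renormalize.
    """
    rng = np.random.default_rng(seed)
    n = sigma.shape[0]
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    for _ in range(iters):
        y = sigma @ x
        if k < n:
            # zero out all but the k largest-magnitude coordinates
            y[np.argsort(np.abs(y))[:-k]] = 0.0
        x = y / np.linalg.norm(y)
    return x

# Toy covariance with variance concentrated on the first two coordinates.
sigma = np.diag([4.0, 3.0, 0.1, 0.1, 0.1])
x = truncated_power_iteration(sigma, k=2)
print(np.count_nonzero(x))  # at most 2 nonzero loadings
print(x @ sigma @ x)        # explained variance, close to the top eigenvalue 4.0
```

On this diagonal example the iteration locks onto the two highest-variance coordinates and then behaves like ordinary power iteration on that support, so the sparse loading vector concentrates on the leading eigenvector.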