Following recent technological revolutions, the investigation of massive biomedical data with growing scale, diversity, and complexity has taken center stage in modern data analysis. Although such data are complex, their underlying representations are often sparse. For example, for a disease such as leukemia, even though humans have tens of thousands of genes, only a few genes are relevant to the disease; a gene network is sparse since a regulatory pathway involves only a small number of genes; and many biomedical signals are sparse or compressible in the sense that they have concise representations when expressed in a proper basis. Therefore, finding sparse representations is fundamentally important for scientific discovery. Sparse methods based on the ℓ1 norm have attracted a great deal of research effort in the past decade due to their sparsity-inducing property, convenient convexity, and strong theoretical guarantees. They have achieved great success in various applications such as biomarker selection, biological network construction, and magnetic resonance imaging. In this paper, we review state-of-the-art sparse methods and their applications to biomedical data.
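As a concrete illustration of the sparsity-inducing property of the ℓ1 norm, the sketch below fits an ℓ1-regularized least-squares model (the Lasso, minimizing (1/2)||y - Xb||^2 + lambda*||b||_1) to synthetic data in which only a handful of features carry signal, mimicking the "few relevant genes" setting described above. The synthetic data, regularization strength, and use of scikit-learn are illustrative assumptions for this sketch, not part of the methods reviewed in the paper.

    # Minimal sketch, assuming NumPy and scikit-learn are available:
    # l1-regularized regression on synthetic data where only a few
    # features are truly relevant, illustrating that most estimated
    # coefficients are driven exactly to zero.
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n_samples, n_features = 100, 1000           # many "genes", few samples
    X = rng.standard_normal((n_samples, n_features))
    true_coef = np.zeros(n_features)
    true_coef[:5] = [3.0, -2.0, 1.5, 2.5, -1.0]  # only 5 relevant features
    y = X @ true_coef + 0.1 * rng.standard_normal(n_samples)

    model = Lasso(alpha=0.1).fit(X, y)           # alpha plays the role of lambda
    print("non-zero coefficients:", np.count_nonzero(model.coef_))

Increasing the regularization parameter yields fewer non-zero coefficients; decreasing it yields denser solutions, which is the basic trade-off exploited by the ℓ1-based methods surveyed here.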