Lasso, or ℓ1-regularized least squares, has been explored extensively for its remarkable sparsity properties. In this paper it is shown that the Lasso solution, in addition to being sparse, is robust: it is the solution to a robust optimization problem. This equivalence has two important consequences. First, robustness connects the regularizer to a physical property, namely protection against noise in the data. This allows a principled choice of regularizer; in particular, generalizations of Lasso that still yield convex optimization problems are obtained by considering different uncertainty sets. Second, robustness can itself be used as an avenue for investigating properties of the solution. In particular, robustness of the solution is shown to explain why the solution is sparse; both the analysis and the specific results differ from standard sparsity arguments and provide different geometric intuition. Furthermore, the robust optimization formulation is shown to be related to kernel density estimation, and from this connection a proof that Lasso is consistent is obtained using robustness directly. Finally, a theorem is proved stating that sparsity and algorithmic stability contradict each other; hence Lasso, being sparse, is not stable.
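The sparsity property discussed above is easy to observe numerically. The following is an illustrative sketch, not code from the paper: it solves the Lasso problem min (1/2)·||Xβ − y||² + λ·||β||₁ with plain iterative soft-thresholding (ISTA) in numpy, on synthetic data whose generating coefficient vector has only a few nonzeros, and reports how many coefficients of the solution are exactly zero. All names and parameter values here (problem sizes, λ, iteration count) are arbitrary choices for the demonstration.

```python
# Illustrative sketch (not from the paper): Lasso solved by iterative
# soft-thresholding (ISTA), demonstrating the sparsity of the solution.
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 20
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]          # only 3 of 20 features matter
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.1 * rng.standard_normal(n)

lam = 5.0                                  # l1 penalty weight (arbitrary choice)
step = 1.0 / np.linalg.norm(X, 2) ** 2     # 1/L, L = Lipschitz constant of the gradient
beta_hat = np.zeros(p)
for _ in range(2000):
    grad = X.T @ (X @ beta_hat - y)        # gradient of 0.5 * ||X b - y||^2
    z = beta_hat - step * grad
    # soft-thresholding sets small coordinates exactly to zero
    beta_hat = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("nonzero coefficients:", int((np.abs(beta_hat) > 1e-8).sum()), "of", p)
```

Because the proximal step thresholds coordinates to exactly zero, the recovered solution is genuinely sparse rather than merely small, which is the phenomenon the robustness analysis in the paper sets out to explain.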