Estimating High-Dimensional Directed Acyclic Graphs with the PC-Algorithm
The Journal of Machine Learning Research
Bolasso: model consistent Lasso estimation through the bootstrap
Proceedings of the 25th international conference on Machine learning
A Recursive Method for Structural Learning of Directed Acyclic Graphs
The Journal of Machine Learning Research
Consistency of Trace Norm Minimization
The Journal of Machine Learning Research
Consistency of the Group Lasso and Multiple Kernel Learning
The Journal of Machine Learning Research
Fast Optimization Methods for L1 Regularization: A Comparative Study and Two New Approaches
ECML '07 Proceedings of the 18th European conference on Machine Learning
Optimal Solutions for Sparse Principal Component Analysis
The Journal of Machine Learning Research
Shrinkage and model selection with correlated variables via weighted fusion
Computational Statistics & Data Analysis
Elastic-net regularization in learning theory
Journal of Complexity
ICA with Sparse Connections: Revisited
ICA '09 Proceedings of the 8th International Conference on Independent Component Analysis and Signal Separation
Boosting with structural sparsity
ICML '09 Proceedings of the 26th Annual International Conference on Machine Learning
Group lasso with overlap and graph lasso
ICML '09 Proceedings of the 26th Annual International Conference on Machine Learning
Robust sparse rank learning for non-smooth ranking measures
Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval
On the distribution of penalized maximum likelihood estimators: The LASSO, SCAD, and thresholding
Journal of Multivariate Analysis
An l1 Regularization Framework for Optimal Rule Combination
ECML PKDD '09 Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part I
Compressed and privacy-sensitive sparse regression
IEEE Transactions on Information Theory
Compressed sensing of time-varying signals
DSP'09 Proceedings of the 16th international conference on Digital Signal Processing
Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting
IEEE Transactions on Information Theory
Recovering sparse signals with a certain family of nonconvex penalties and DC programming
IEEE Transactions on Signal Processing
Ultrahigh Dimensional Feature Selection: Beyond The Linear Model
The Journal of Machine Learning Research
Efficient Online and Batch Learning Using Forward Backward Splitting
The Journal of Machine Learning Research
A Fast Hybrid Algorithm for Large-Scale l1-Regularized Logistic Regression
The Journal of Machine Learning Research
Analysis of Multi-stage Convex Relaxation for Sparse Regularization
The Journal of Machine Learning Research
Distributed spectrum sensing for cognitive radio networks by exploiting sparsity
IEEE Transactions on Signal Processing
Minimax rates of convergence for high-dimensional regression under lq-ball sparsity
Allerton'09 Proceedings of the 47th annual Allerton conference on Communication, control, and computing
A note on optimal support recovery in compressed sensing
Asilomar'09 Proceedings of the 43rd Asilomar conference on Signals, systems and computers
Quadratic approximation on SCAD penalized estimation
Computational Statistics & Data Analysis
Asymptotic analysis of robust LASSOs in the presence of noise with large variance
IEEE Transactions on Information Theory
Learning an L1-regularized Gaussian Bayesian network in the equivalence class space
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
Efficient and numerically stable sparse learning
ECML PKDD'10 Proceedings of the 2010 European conference on Machine learning and knowledge discovery in databases: Part III
A sparse version of the ridge logistic regression for large-scale text categorization
Pattern Recognition Letters
Autoregressive process modeling via the Lasso procedure
Journal of Multivariate Analysis
Rate Minimaxity of the Lasso and Dantzig Selector for the lq Loss in lr Balls
The Journal of Machine Learning Research
Minimum Description Length Penalization for Group and Multi-Task Sparse Learning
The Journal of Machine Learning Research
A consistent algorithm to solve Lasso, elastic-net and Tikhonov regularization
Journal of Complexity
Variable selection through correlation sifting
RECOMB'11 Proceedings of the 15th Annual international conference on Research in computational molecular biology
Super-Linear Convergence of Dual Augmented Lagrangian Algorithm for Sparsity Regularized Estimation
The Journal of Machine Learning Research
Union Support Recovery in Multi-task Learning
The Journal of Machine Learning Research
Structured Variable Selection with Sparsity-Inducing Norms
The Journal of Machine Learning Research
High-dimensional Covariance Estimation Based On Gaussian Graphical Models
The Journal of Machine Learning Research
Trading Accuracy for Sparsity in Optimization Problems with Sparsity Constraints
SIAM Journal on Optimization
Efficient cross-correlation via sparse representation in sensor networks
Proceedings of the 11th international conference on Information Processing in Sensor Networks
Optimization with Sparsity-Inducing Penalties
Foundations and Trends® in Machine Learning
Robust Estimation for an Inverse Problem Arising in Multiview Geometry
Journal of Mathematical Imaging and Vision
Non-convex penalized estimation in high-dimensional models with single-index structure
Journal of Multivariate Analysis
Sparse regression learning by aggregation and Langevin Monte-Carlo
Journal of Computer and System Sciences
Coordinate ascent for penalized semiparametric regression on high-dimensional panel count data
Computational Statistics & Data Analysis
Nowcasting Events from the Social Web with Statistical Learning
ACM Transactions on Intelligent Systems and Technology (TIST)
A Bayesian approach to sparse dynamic network identification
Automatica (Journal of IFAC)
Consistent model selection criteria on high dimensions
The Journal of Machine Learning Research
A multi-stage framework for Dantzig selector and LASSO
The Journal of Machine Learning Research
Variable selection in high-dimensional varying-coefficient models with global optimality
The Journal of Machine Learning Research
A comparison of the lasso and marginal regression
The Journal of Machine Learning Research
Real-time road traffic forecasting using regime-switching space-time models and adaptive LASSO
Applied Stochastic Models in Business and Industry
The dictionary approach for spherical deconvolution
Journal of Multivariate Analysis
An approximation theory approach to learning with l1 regularization
Journal of Approximation Theory
Adjusting for high-dimensional covariates in sparse precision matrix estimation by ℓ1-penalization
Journal of Multivariate Analysis
Weak Recovery Conditions from Graph Partitioning Bounds and Order Statistics
Mathematics of Operations Research
Auto-WEKA: combined selection and hyperparameter optimization of classification algorithms
Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining
FeaFiner: biomarker identification from medical data through feature generalization and selection
Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining
On the equivalent of low-rank linear regressions and linear discriminant analysis based regressions
Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining
Multisample aCGH Data Analysis via Total Variation and Spectral Regularization
IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB)
CODA: high dimensional copula discriminant analysis
The Journal of Machine Learning Research
On robust estimation of high dimensional generalized linear models
IJCAI'13 Proceedings of the Twenty-Third international joint conference on Artificial Intelligence
Robust feature selection based on regularized brownboost loss
Knowledge-Based Systems
Nonnegative-lasso and application in index tracking
Computational Statistics & Data Analysis
A note on the one-step estimator for ultrahigh dimensionality
Journal of Computational and Applied Mathematics
LOL selection in high dimension
Computational Statistics & Data Analysis
Variable selection in high-dimension with random designs and orthogonal matching pursuit
The Journal of Machine Learning Research
Consistent selection of tuning parameters via variable selection stability
The Journal of Machine Learning Research
Multi-label learning under feature extraction budgets
Pattern Recognition Letters
Sparsity, or parsimony, of statistical models is crucial for their proper interpretation, as in the sciences and social sciences. Model selection is a commonly used method for finding such models, but it usually involves a computationally heavy combinatorial search. The Lasso (Tibshirani, 1996) is now being used as a computationally feasible alternative to model selection, so it is important to study the Lasso for model selection purposes. In this paper, we prove that a single condition, which we call the Irrepresentable Condition, is almost necessary and sufficient for the Lasso to select the true model, both in the classical fixed-p setting and in the large-p setting, as the sample size n grows. Based on these results, we give sufficient conditions that are verifiable in practice, relate them to previous work, and thereby aid applications of the Lasso to feature selection and sparse representation. The Irrepresentable Condition, which depends mainly on the covariance of the predictor variables, states that the Lasso selects the true model consistently if and (almost) only if the predictors that are not in the true model are "irrepresentable" (in a sense made precise in the paper) by the predictors that are in the true model. Furthermore, simulations are carried out to provide insight into and understanding of this result.
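The Irrepresentable Condition can be checked numerically for a given design. A minimal sketch, using a synthetic 4-predictor design (the data, support size, and correlation level are illustrative assumptions, not from the paper): with the sample covariance C = X'X/n partitioned over the true support as [[C11, C12], [C21, C22]], the strong condition requires every entry of |C21 C11^{-1} sign(beta1)| to be strictly below 1.

```python
import numpy as np

# Hypothetical setup: 4 predictors, the first s = 2 form the true model.
rng = np.random.default_rng(0)
n, p, s = 500, 4, 2                      # samples, predictors, true support size
X = rng.standard_normal((n, p))
X[:, 2] += 0.3 * X[:, 0]                 # mild correlation with a true predictor

# Partition the sample covariance over the true support.
C = X.T @ X / n
C11 = C[:s, :s]                          # covariance among true predictors
C21 = C[s:, :s]                          # cross-covariance with the rest
sign_beta1 = np.ones(s)                  # assumed signs of the true coefficients

# Strong Irrepresentable Condition: all entries strictly less than 1.
lhs = np.abs(C21 @ np.linalg.inv(C11) @ sign_beta1)
strong_ic = bool(np.all(lhs < 1))
print("condition values:", lhs)
print("strong IC holds:", strong_ic)
```

With this mild correlation the condition values stay well below 1; pushing the correlation coefficient toward 1 drives them above 1, the regime where the paper shows the Lasso cannot be sign-consistent.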