High-dimensional data pose challenges in statistical learning and modeling. The predictors can sometimes be naturally grouped, in which case pursuing between-group sparsity is desired. Collinearity may occur in real-world high-dimensional applications, where the popular ℓ1 technique suffers from both selection inconsistency and prediction inaccuracy. Moreover, the problems of interest often go beyond Gaussian models. To meet these challenges, nonconvex penalized generalized linear models with grouped predictors are investigated, and a simple-to-implement algorithm is proposed for computation. A rigorous theoretical result guarantees its convergence and provides tight preliminary scaling. This framework allows for grouped predictors and nonconvex penalties, including the discrete ℓ0 penalty and 'ℓ0+ℓ2'-type penalties. Penalty design and parameter tuning for nonconvex penalties are examined. Applications to super-resolution spectrum estimation in signal processing and to cancer classification with joint gene selection in bioinformatics demonstrate the performance gains from nonconvex penalized estimation.
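To illustrate the flavor of such an approach, the following is a minimal sketch, not the paper's actual algorithm, of iterative group thresholding for a penalized least-squares problem with a group 'ℓ0+ℓ2' (hard-ridge) penalty. The function names `group_hard_ridge_threshold` and `group_tisp`, the squared-error loss (in place of a general GLM likelihood), and the specific threshold rule are all assumptions made for the sake of the example.

```python
import numpy as np

def group_hard_ridge_threshold(z, lam, eta):
    # Hypothetical group "l0 + l2" (hard-ridge) thresholding:
    # apply ridge shrinkage to the group, then zero the whole
    # group if its Euclidean norm falls below the hard threshold.
    shrunk = z / (1.0 + eta)
    return shrunk if np.linalg.norm(shrunk) > lam else np.zeros_like(z)

def group_tisp(X, y, groups, lam, eta=0.0, step=None, n_iter=500):
    """Iterative group thresholding for penalized least squares
    (a simplified stand-in for a nonconvex penalized GLM solver)."""
    n, p = X.shape
    if step is None:
        # step size bounded by 1 / (largest squared singular value of X),
        # which keeps the gradient step a contraction
        step = 1.0 / (np.linalg.norm(X, 2) ** 2)
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)   # gradient of 0.5 * ||y - X beta||^2
        z = beta - step * grad        # plain gradient step
        for g in groups:              # groupwise nonconvex thresholding
            beta[g] = group_hard_ridge_threshold(z[g], lam, eta)
    return beta
```

With `eta = 0` the rule reduces to group hard thresholding (pure ℓ0-type selection); a positive `eta` adds ridge shrinkage within the surviving groups, which is one way collinearity among grouped predictors can be handled.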