We study robust high-dimensional estimation of generalized linear models (GLMs), where a small number k of the n observations can be arbitrarily corrupted, and where the true parameter is high-dimensional, in the "p ≫ n" regime, but has only a small number s of non-zero entries. Recent work has connected robustness and sparsity in the context of linear regression with corrupted observations by explicitly modeling an outlier response vector that is assumed to be sparse. Interestingly, we show that in the GLM setting such explicit outlier-response modeling can be performed in two distinct ways. For each of the two approaches, we give ℓ2 error bounds for parameter estimation that hold for general values of the tuple (n, p, s, k).
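To make the sparse-outlier modeling idea concrete, here is a minimal sketch in the linear-regression special case mentioned above: the design matrix is augmented with a (scaled) identity block so that each observation gets its own outlier coordinate, and an ℓ1 penalty on the augmented coefficient vector encourages both the parameter and the outlier vector to be sparse. All dimensions, the √n scaling, the regularization level, and the use of scikit-learn's `Lasso` solver are illustrative assumptions, not the paper's algorithm or guarantees.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p, s, k = 100, 200, 5, 10  # observations, dimension, sparsity, #corruptions

# Sparse ground-truth parameter and clean linear responses.
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0
y = X @ beta + 0.1 * rng.standard_normal(n)

# Arbitrarily corrupt k of the n responses.
corrupt_idx = rng.choice(n, size=k, replace=False)
y[corrupt_idx] += rng.uniform(5.0, 10.0, size=k) * rng.choice([-1.0, 1.0], size=k)

# Augment the design with sqrt(n)*I: coordinate j of the extra block is a
# per-observation outlier variable; the scaling puts its column norm on the
# same order as the columns of X (both roughly sqrt(n)).
Z = np.hstack([X, np.sqrt(n) * np.eye(n)])

# A single Lasso on the augmented problem jointly estimates (beta, e),
# penalizing both with the same l1 weight (a simplifying assumption).
model = Lasso(alpha=0.1, fit_intercept=False, max_iter=10000)
model.fit(Z, y)
beta_hat = model.coef_[:p]   # parameter estimate
e_hat = model.coef_[p:]      # estimated outlier response vector
```

In this sketch, nonzero entries of `e_hat` flag observations the model treats as corrupted, so the fit of `beta_hat` is not dragged toward the corrupted responses as a plain Lasso on `(X, y)` would be.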