From on-line to batch learning. In COLT '89: Proceedings of the Second Annual Workshop on Computational Learning Theory.
Using and combining predictors that specialize. In STOC '97: Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing.
EuroCOLT '99: Proceedings of the 4th European Conference on Computational Learning Theory.
Prediction, Learning, and Games.
Improved second-order bounds for prediction with expert advice. Machine Learning.
Bayesian Inference and Optimal Design for the Sparse Linear Model. The Journal of Machine Learning Research.
Sparse Online Learning via Truncated Gradient. The Journal of Machine Learning Research.
Aggregation by exponential weighting and sharp oracle inequalities. In COLT '07: Proceedings of the 20th Annual Conference on Learning Theory.
Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization. The Journal of Machine Learning Research.
Stochastic Methods for l1-regularized Loss Minimization. The Journal of Machine Learning Research.
Statistics for High-Dimensional Data: Methods, Theory and Applications.
Adaptive and optimal online linear regression on l1-balls. In ALT '11: Proceedings of the 22nd International Conference on Algorithmic Learning Theory.
Sparse regression learning by aggregation and Langevin Monte-Carlo. Journal of Computer and System Sciences.
On the generalization ability of on-line learning algorithms. IEEE Transactions on Information Theory.
Sequential Prediction of Unbounded Stationary Time Series. IEEE Transactions on Information Theory.
Sequential Procedures for Aggregating Arbitrary Estimators of a Conditional Mean. IEEE Transactions on Information Theory.
We consider the problem of online linear regression on arbitrary deterministic sequences when the ambient dimension d can be much larger than the number of time rounds T. We introduce the notion of a sparsity regret bound, a deterministic online counterpart of recent risk bounds derived in the stochastic setting under a sparsity scenario. We prove such regret bounds for an online-learning algorithm called SeqSEW, which is based on exponential weighting and data-driven truncation. In the second part, we apply a parameter-free version of this algorithm to the stochastic setting (regression model with random design). This yields risk bounds of the same flavor as in Dalalyan and Tsybakov (2012a) that solve two questions left open therein. In particular, our risk bounds are adaptive (up to a logarithmic factor) to the unknown variance of the noise when the latter is Gaussian. We also address the regression model with fixed design.
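To make the notion concrete, a sparsity regret bound schematically takes the following form (a sketch based only on the abstract; the constant C and the argument of the logarithm are placeholders, not the paper's exact statement): for every individual sequence (x_t, y_t), t = 1, ..., T, and every comparison vector u in R^d, the predictions \hat{y}_t of the algorithm satisfy

% Schematic sparsity regret bound: the regret against any linear predictor u
% is controlled by the sparsity \|u\|_0, with only logarithmic dependence on T,
% rather than by the ambient dimension d. C and \log(1 + T) are placeholders.
\sum_{t=1}^{T} \bigl( y_t - \hat{y}_t \bigr)^2
  \;\le\;
\sum_{t=1}^{T} \bigl( y_t - u^\top x_t \bigr)^2
  \;+\; C \, \|u\|_0 \, \log(1 + T) \,.

Since the bound holds for every deterministic sequence and every u, it is the online analogue of a sparsity oracle inequality: the penalty scales with the number of nonzero coordinates of u instead of with d, which is what makes the regime d much larger than T tractable.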