Designing statistical procedures that are provably almost as accurate as the best one in a given family is one of the central topics in statistics and learning theory. Oracle inequalities then offer a convenient theoretical framework for evaluating different strategies, which can be roughly classified into two classes: selection and aggregation strategies. The ultimate goal is to design strategies satisfying oracle inequalities with leading constant one and a rate-optimal residual term. Many recent papers address this problem in the case where the aim is to compete with the best procedure from a given family of linear smoothers. However, the theory developed so far either does not cover the important case of nearest-neighbor smoothers or provides a suboptimal oracle inequality with a leading constant considerably larger than one. In this paper, we prove a new oracle inequality with leading constant one that is valid under a general assumption on linear smoothers, allowing us, for instance, to compete against the best nearest-neighbor filters.
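To make the distinction concrete, an oracle inequality for an estimator $\hat f$ built from a family $\{\hat f_1,\dots,\hat f_M\}$ typically takes the following generic form (this is a schematic sketch of the standard notion, not the specific result proved in the paper; the risk measure and residual term are assumptions for illustration):

$$
\mathbb{E}\,\|\hat f - f\|^2 \;\le\; C \,\min_{1 \le j \le M} \mathbb{E}\,\|\hat f_j - f\|^2 \;+\; \Delta_{n,M},
$$

where $f$ is the unknown regression function, $C \ge 1$ is the leading constant, and $\Delta_{n,M}$ is the residual term depending on the sample size $n$ and the family size $M$. A "sharp" oracle inequality is one with $C = 1$, so that the estimator's risk exceeds the oracle risk $\min_j \mathbb{E}\,\|\hat f_j - f\|^2$ only by the additive remainder $\Delta_{n,M}$, which one further wants to be rate-optimal (for example, of order $(\log M)/n$ in typical aggregation settings).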