In this paper, we study a class of sample-dependent convex optimization problems and derive a general sequential approximation bound for their solutions. This analysis is closely related to the regret-bound framework in online learning; however, we apply it to batch learning algorithms rather than to online stochastic gradient descent methods. We illustrate applications of this analysis to several classification and regression problems.
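As a hedged illustration of the setting described above (the notation here is assumed for exposition and is not taken from the paper), a regularized empirical risk minimizer is a typical sample-dependent convex optimization problem: its solution is a function of the training sample, and a sequential approximation bound compares the solutions computed on growing prefixes of the sample, in the spirit of an online regret bound.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Hypothetical notation, assumed for illustration only.
% A sample-dependent convex objective: the minimizer depends on the sample
% $(x_1,y_1),\dots,(x_n,y_n)$ through the empirical average of a convex loss $\phi$.
\[
  \hat{w}_n = \operatorname*{arg\,min}_{w}
  \; \frac{1}{n} \sum_{i=1}^{n} \phi\!\left(w^{\top} x_i,\, y_i\right)
  + \frac{\lambda}{2} \, \lVert w \rVert_2^2 .
\]
% A sequential approximation bound (schematic form, not the paper's exact
% statement) controls the cumulative loss of the batch minimizers computed on
% successive prefixes of the sample, mirroring an online regret bound:
\[
  \sum_{i=1}^{n} \phi\!\left(\hat{w}_{i-1}^{\top} x_i,\, y_i\right)
  \;\le\;
  \min_{w} \left[ \sum_{i=1}^{n} \phi\!\left(w^{\top} x_i,\, y_i\right)
  + \frac{\lambda n}{2} \, \lVert w \rVert_2^2 \right] + B_n ,
\]
% where $B_n$ is a remainder term that the analysis bounds.
\end{document}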