Mistake bounds and logarithmic linear-threshold learning algorithms
The perceptron: a probabilistic model for information storage and organization in the brain. Neurocomputing: Foundations of Research.
Exponentiated gradient versus gradient descent for linear predictors. Information and Computation.
An introduction to support vector machines: and other kernel-based learning methods.
Relative loss bounds for multidimensional regression problems. Machine Learning.
General convergence results for linear discriminant updates. Machine Learning.
The relaxed online maximum margin algorithm. Machine Learning.
The robustness of the p-norm algorithms. Machine Learning.
Convex optimization. IEEE Transactions on Signal Processing.
Online learning of complex prediction problems using simultaneous projections. The Journal of Machine Learning Research.
Online learning by ellipsoid method. ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning.
Bundle methods for regularized risk minimization. The Journal of Machine Learning Research.
Double updating online learning. The Journal of Machine Learning Research.
We describe a novel framework for the design and analysis of online learning algorithms based on the notion of duality in constrained optimization. We cast a sub-family of universal online bounds as an optimization problem. Using the weak duality theorem we reduce the process of online learning to the task of incrementally increasing the dual objective function. The amount by which the dual increases serves as a new and natural notion of progress. We are thus able to tie the primal objective value and the number of prediction mistakes using the increase in the dual. The end result is a general framework for designing and analyzing old and new online learning algorithms in the mistake bound model.
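
This duality-based notion of progress can be made concrete with the classic perceptron, which this style of analysis views as coordinate-wise ascent on an SVM-style dual. The sketch below is an illustrative Python reconstruction under stated assumptions, not the paper's own construction: the dual objective D(alpha) = sum(alpha) - (1/2)||sum_i alpha_i y_i x_i||^2 and the name perceptron_dual_view are choices made here for exposition. Each prediction mistake raises one dual variable from 0 to 1; when every ||x_i|| <= 1 this increases D by at least 1/2, so weak duality (dual value <= optimal primal value) caps the number of mistakes at twice the optimal primal objective.

import numpy as np

def perceptron_dual_view(X, y):
    # Classic perceptron, instrumented to track an SVM-style dual objective
    # D(alpha) = sum(alpha) - 0.5 * ||sum_i alpha_i * y_i * x_i||^2.
    # (Illustrative choice of dual; assumes labels y in {-1, +1}.)
    n, d = X.shape
    w = np.zeros(d)        # primal weights: w = sum_i alpha_i * y_i * x_i
    alpha = np.zeros(n)    # one dual variable per example
    mistakes = 0
    for t in range(n):
        if y[t] * (w @ X[t]) <= 0:   # prediction mistake on round t
            alpha[t] = 1.0           # dual-ascent step: raise one coordinate
            w += y[t] * X[t]         # equivalent standard perceptron update
            mistakes += 1
            dual = alpha.sum() - 0.5 * (w @ w)
            print(f"round {t}: mistakes = {mistakes}, dual = {dual:.3f}")
    return w, mistakes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    u = np.array([1.0, -1.0]) / np.sqrt(2.0)         # hidden separator
    X = rng.normal(size=(200, 2))
    X /= np.linalg.norm(X, axis=1, keepdims=True)    # enforce ||x|| = 1
    y = np.sign(X @ u)
    perceptron_dual_view(X, y)

Running the demo shows the printed dual value climbing with every mistake, which is exactly the notion of progress the abstract describes: the algorithm never needs to move closer to any target hypothesis, it only needs to keep pushing the dual objective up.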