M. Zinkevich. Theoretical Guarantees for Algorithms in Multi-Agent Settings. PhD thesis, Carnegie Mellon University, 2004.
A. Flaxman, A. T. Kalai, H. B. McMahan. Online convex optimization in the bandit setting: gradient descent without a gradient. SODA '05: Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, 2005.
N. Cesa-Bianchi, G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, Y. Singer. Online Passive-Aggressive Algorithms. Journal of Machine Learning Research, 2006.
S. Shalev-Shwartz, Y. Singer, N. Srebro. Pegasos: Primal Estimated sub-GrAdient SOlver for SVM. Proceedings of the 24th International Conference on Machine Learning (ICML), 2007.
G. Cavallanti, N. Cesa-Bianchi, C. Gentile. Tracking the best hyperplane with a simple budget Perceptron. Machine Learning, 2007.
O. Dekel, S. Shalev-Shwartz, Y. Singer. The Forgetron: A Kernel-Based Perceptron on a Budget. SIAM Journal on Computing, 2008.
S. Shalev-Shwartz, N. Srebro. SVM optimization: inverse dependence on training set size. Proceedings of the 25th International Conference on Machine Learning (ICML), 2008.
N. Littlestone, M. K. Warmuth. The weighted majority algorithm. SFCS '89: Proceedings of the 30th Annual Symposium on Foundations of Computer Science, 1989.
S. Shalev-Shwartz, Y. Singer. A new perspective on an old perceptron algorithm. COLT '05: Proceedings of the 18th Annual Conference on Learning Theory, 2005.
Online learning with multiple kernels: A review. Neural Computation.
The kernel Perceptron is an appealing online learning algorithm with a drawback: whenever it makes an error it must expand its support set, which slows both training and testing if the number of errors is large. The Forgetron and the Randomized Budget Perceptron overcome this problem by restricting the number of support vectors the Perceptron is allowed to store. Both algorithms come with regret bounds, but their proofs are dissimilar. In this paper we propose a unified analysis of the two algorithms by observing that the way in which they remove support vectors can be seen as a form of L2-regularization. By casting them as instances of online convex optimization and applying a variant of Zinkevich's theorem that tolerates noisy and incorrect gradients, we bound their regret more easily than before. Our bounds are similar to the existing ones, but the proofs are less technical.
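To make the budget idea concrete, here is a minimal Python sketch, not the paper's analysis or either paper's exact algorithm: a kernel Perceptron that adds each misclassified example to its support set and, when the budget overflows, evicts a uniformly random support vector, in the spirit of the Randomized Budget Perceptron. The class name, the RBF kernel, and the budget value are illustrative assumptions.

    import numpy as np

    def rbf_kernel(x, y, gamma=1.0):
        """Gaussian (RBF) kernel; any positive-definite kernel works here."""
        diff = np.asarray(x) - np.asarray(y)
        return np.exp(-gamma * np.dot(diff, diff))

    class BudgetKernelPerceptron:
        """Kernel Perceptron whose support set never exceeds `budget`.

        On a mistake the example joins the support set; if that overflows
        the budget, one support vector is evicted uniformly at random, in
        the spirit of the Randomized Budget Perceptron. (The Forgetron
        instead shrinks the coefficients and evicts the oldest vector;
        the paper views both removals as a form of L2-regularization.)
        """

        def __init__(self, budget=50, kernel=rbf_kernel, seed=0):
            self.budget = budget
            self.kernel = kernel
            self.rng = np.random.default_rng(seed)
            self.support = []  # list of (x_i, y_i) pairs, labels in {-1, +1}

        def score(self, x):
            return sum(y_i * self.kernel(x_i, x) for x_i, y_i in self.support)

        def predict(self, x):
            return 1 if self.score(x) >= 0 else -1

        def update(self, x, y):
            """Process one online round; return True if a mistake occurred."""
            if self.predict(x) == y:
                return False  # the Perceptron only updates on mistakes
            self.support.append((x, y))
            if len(self.support) > self.budget:
                # Budget enforcement: drop a random support vector.
                idx = self.rng.integers(len(self.support))
                self.support.pop(idx)
            return True

    # Usage: run the learner over a synthetic stream and count mistakes.
    rng = np.random.default_rng(1)
    learner = BudgetKernelPerceptron(budget=20)
    mistakes = 0
    for _ in range(500):
        x = rng.normal(size=2)
        y = 1 if x[0] + x[1] >= 0 else -1  # linearly separable toy stream
        mistakes += learner.update(x, y)
    print(f"mistakes: {mistakes}, support vectors: {len(learner.support)}")

The point of the sketch is the trade-off the abstract describes: prediction cost stays bounded by the budget, while each eviction perturbs the hypothesis, which is exactly the kind of gradient error the noisy-gradient variant of Zinkevich's theorem is used to absorb.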