Shifting bounds for on-line classification algorithms ensure good performance on any sequence of examples that is well predicted by a sequence of smoothly changing classifiers. When proving shifting bounds for kernel-based classifiers, one also faces the problem of storing a number of support vectors that can grow unboundedly, unless an eviction policy is used to keep this number under control. In this paper, we show that shifting and on-line learning on a budget can be combined surprisingly well. First, we introduce and analyze a shifting Perceptron algorithm achieving the best known shifting bounds while using an unlimited budget. Second, we show that by applying to the Perceptron algorithm the simplest possible eviction policy, which discards a random support vector each time a new one comes in, we achieve a shifting bound close to the one we obtained with no budget restrictions. More importantly, we show that our randomized algorithm strikes the optimal trade-off $U = \Theta\bigl(\sqrt{B}\bigr)$ between the budget $B$ and the norm $U$ of the largest classifier in the comparison sequence.
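The random eviction policy described above can be sketched as follows. This is an illustrative kernelized Perceptron with a budget, not the paper's exact algorithm: details such as weight scaling after eviction, and the helper names `rbp_train` and `predict`, are assumptions for the sketch.

```python
import random

def predict(support, kernel, x):
    """Sign of the kernel expansion over the current support set."""
    score = sum(yi * kernel(xi, x) for xi, yi in support)
    return 1 if score >= 0 else -1

def rbp_train(examples, kernel, budget):
    """Budget Perceptron sketch with the simplest eviction policy:
    when the budget is full and a mistake forces a new support vector,
    discard a uniformly random old one."""
    support = []   # list of (x_i, y_i) pairs stored as support vectors
    mistakes = 0
    for x, y in examples:
        if predict(support, kernel, x) != y:
            mistakes += 1
            if len(support) >= budget:
                # random eviction keeps the support set within the budget
                support.pop(random.randrange(len(support)))
            support.append((x, y))
    return support, mistakes
```

With a linear kernel `lambda a, b: a * b` this reduces to an ordinary budget Perceptron on scalar inputs; the support set never exceeds `budget` entries regardless of the stream length.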