Real and complex analysis, 3rd ed.
Mistake bounds and logarithmic linear-threshold learning algorithms
Introduction to algorithms
COLT '90 Proceedings of the third annual workshop on Computational learning theory
The perceptron: a probabilistic model for information storage and organization in the brain
Neurocomputing: foundations of research
A training algorithm for optimal margin classifiers
COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
The weighted majority algorithm
Information and Computation
Regularization theory and neural networks architectures
Neural Computation
The nature of statistical learning theory
Exponentiated gradient versus gradient descent for linear predictors
Information and Computation
Derandomizing stochastic prediction strategies
COLT '97 Proceedings of the tenth annual conference on Computational learning theory
COLT '98 Proceedings of the eleventh annual conference on Computational learning theory
Machine Learning - Special issue on context sensitivity and concept drift
Fast training of support vector machines using sequential minimal optimization
Advances in kernel methods
Competitive on-line linear regression
NIPS '97 Proceedings of the 1997 conference on Advances in neural information processing systems 10
Further results on the margin distribution
COLT '99 Proceedings of the twelfth annual conference on Computational learning theory
Large Margin Classification Using the Perceptron Algorithm
Machine Learning - The Eleventh Annual Conference on Computational Learning Theory
On-line Learning and the Metrical Task System Problem
Machine Learning
An introduction to Support Vector Machines and other kernel-based learning methods
The Relaxed Online Maximum Margin Algorithm
Machine Learning
Ridge Regression Learning Algorithm in Dual Variables
ICML '98 Proceedings of the Fifteenth International Conference on Machine Learning
The Kernel-Adatron Algorithm: A Fast and Simple Learning Procedure for Support Vector Machines
ICML '98 Proceedings of the Fifteenth International Conference on Machine Learning
Relative loss bounds for on-line density estimation with the exponential family of distributions
UAI'99 Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence
Sequential prediction of individual sequences under general loss functions
IEEE Transactions on Information Theory
Worst-case quadratic loss bounds for prediction using linear functions and gradient descent
IEEE Transactions on Neural Networks
Relative loss bounds for single neurons
IEEE Transactions on Neural Networks
Large Margin Classification for Moving Targets
ALT '02 Proceedings of the 13th International Conference on Algorithmic Learning Theory
Exploiting Cluster-Structure to Predict the Labeling of a Graph
ALT '08 Proceedings of the 19th international conference on Algorithmic Learning Theory
Unsupervised Classifier Selection Based on Two-Sample Test
DS '08 Proceedings of the 11th International Conference on Discovery Science
Concept updating with support vector machines
WAIM'05 Proceedings of the 6th international conference on Advances in Web-Age Information Management
An online algorithm for hierarchical phoneme classification
MLMI'04 Proceedings of the First international conference on Machine Learning for Multimodal Interaction
Online learning with multiple kernels: A review
Neural Computation
We develop three new techniques that build on recent advances in online learning with kernels. First, we show that an exponential speed-up in prediction time per trial is possible for algorithms such as the Kernel-Adatron, the Kernel-Perceptron, and ROMMA for specific additive models. Second, we show that the techniques of recent algorithms developed for online linear prediction when the best predictor changes over time may be implemented for kernel-based learners at no additional asymptotic cost. Finally, we introduce a new online kernel-based learning algorithm for which we give worst-case loss bounds for the ε-insensitive square loss.
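To make the setting concrete, the following is a minimal sketch of the standard kernel perceptron in its online (trial-by-trial) form, the family of algorithm the abstract refers to. It is not the paper's own algorithm; the Gaussian kernel, the class name, and the update rule are illustrative assumptions. The learner keeps only the examples on which it erred and predicts with the sign of a kernel expansion over that support set.

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    # Gaussian (RBF) kernel between two feature tuples; gamma is an
    # illustrative default, not a value taken from the paper.
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

class KernelPerceptron:
    """Online kernel perceptron: stores mistaken examples as the support set."""

    def __init__(self, kernel=rbf_kernel):
        self.kernel = kernel
        self.support = []  # list of (label, example) pairs where a mistake occurred

    def predict(self, x):
        # Sign of the kernel expansion over stored mistakes.
        s = sum(y * self.kernel(sv, x) for y, sv in self.support)
        return 1 if s >= 0 else -1

    def update(self, x, y):
        # One online trial: predict, then on a mistake add (y, x) to the
        # support set. Returns True iff a mistake was made.
        if self.predict(x) != y:
            self.support.append((y, x))
            return True
        return False

# Usage: the RBF kernel perceptron learns the XOR labeling, which no
# linear perceptron can represent.
model = KernelPerceptron()
data = [((0.0, 0.0), 1), ((1.0, 1.0), 1), ((0.0, 1.0), -1), ((1.0, 0.0), -1)]
for _ in range(5):           # a few passes over the four points
    for x, y in data:
        model.update(x, y)
```

Note that prediction cost here grows linearly with the number of mistakes, since each trial evaluates the kernel against every stored example; the speed-up claimed in the abstract targets exactly this per-trial prediction cost for specific additive models.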