In most papers establishing consistency for learning algorithms, it is assumed that the observations used for training are realizations of an i.i.d. process. In this paper we go far beyond this classical framework by showing that support vector machines (SVMs) require only that the data-generating process satisfies a certain law of large numbers. We then consider the learnability of SVMs for α-mixing (not necessarily stationary) processes for both classification and regression, where for the latter we explicitly allow unbounded noise.
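As a purely illustrative sketch (not taken from the paper), the following Python snippet trains an SVM on a dependent sample drawn from a stationary AR(1) chain, a standard example of a geometrically α-mixing process. The choice of scikit-learn's SVC, the AR coefficient phi, and the noisy thresholded labels are all assumptions made for this toy example, not the paper's construction.

# Hypothetical sketch: an SVM trained on non-i.i.d., alpha-mixing observations.
# An AR(1) process with |phi| < 1 is geometrically alpha-mixing, so its
# trajectory provides a simple stand-in for the dependent samples the paper
# considers.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
phi = 0.5          # AR(1) coefficient; |phi| < 1 gives geometric mixing
n = 2000

# Simulate the dependent covariate process X_t = phi * X_{t-1} + eps_t.
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()

# Noisy labels depending on the current state (a toy classification task).
y = (x + 0.3 * rng.standard_normal(n) > 0).astype(int)

# Fit an SVM on the first half of the (non-i.i.d.) trajectory ...
clf = SVC(kernel="rbf", C=1.0).fit(x[:n // 2, None], y[:n // 2])

# ... and evaluate on the later, equally dependent segment.
print("accuracy on held-out segment:", clf.score(x[n // 2:, None], y[n // 2:]))

Because consecutive observations are correlated, the usual i.i.d. generalization arguments do not apply directly to this training sample; results of the kind stated in the abstract are what justify expecting the fitted SVM to perform well anyway.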