Evaluating the generalization performance of learning algorithms has been a central thread of theoretical research in machine learning. Previous bounds on the generalization performance of the empirical risk minimization (ERM) algorithm are usually established for independent and identically distributed (i.i.d.) samples. In this paper we go beyond this classical framework by establishing generalization bounds for the ERM algorithm with uniformly ergodic Markov chain (u.e.M.c.) samples. We prove bounds on the rate of uniform convergence and of relative uniform convergence of the ERM algorithm with u.e.M.c. samples, and show that the ERM algorithm remains consistent in this setting. The established theory underlies the application of ERM-type learning algorithms to dependent data.
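To make the setting concrete, the following is a minimal sketch (not from the paper) of ERM trained on dependent samples drawn from a two-state Markov chain. A finite chain with all transition probabilities positive is uniformly ergodic, so its samples are dependent but mix geometrically fast. The chain parameters, the label-noise level, and the tiny hypothesis class are all illustrative assumptions.

```python
import random

def markov_chain_samples(n, p_stay=0.7, seed=0):
    """Draw X_1..X_n from a two-state Markov chain on {0, 1}.

    With 0 < p_stay < 1 every transition probability is positive, so
    this finite chain is uniformly ergodic: samples are dependent but
    mix geometrically fast (illustrative of the u.e.M.c. setting).
    """
    rng = random.Random(seed)
    x = rng.choice([0, 1])
    xs = []
    for _ in range(n):
        if rng.random() > p_stay:  # switch state with prob 1 - p_stay
            x = 1 - x
        xs.append(x)
    return xs

def erm(samples, labels, hypotheses):
    """Return the hypothesis minimizing empirical 0-1 risk."""
    def emp_risk(h):
        return sum(h(x) != y for x, y in zip(samples, labels)) / len(samples)
    return min(hypotheses, key=emp_risk)

# Hypothetical toy task: the label equals the state, flipped with prob 0.1.
rng = random.Random(1)
xs = markov_chain_samples(2000)
ys = [x if rng.random() > 0.1 else 1 - x for x in xs]

# Tiny hypothesis class: identity, negation, and the two constant rules.
hypotheses = [lambda x: x, lambda x: 1 - x, lambda x: 0, lambda x: 1]
best = erm(xs, ys, hypotheses)  # identity attains empirical risk near 0.1
```

Despite the dependence between consecutive samples, the empirical risk of each hypothesis still concentrates around its expected risk, so ERM selects the Bayes-optimal rule here; bounds of the kind the paper proves quantify how fast this uniform convergence happens for u.e.M.c. samples.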