Generalization performance is a central concern of theoretical research in machine learning. Most previous bounds on the generalization ability of the Empirical Risk Minimization (ERM) algorithm assume independent and identically distributed (i.i.d.) samples. To study the generalization performance of ERM with dependent observations, we first establish an exponential bound on the rate of relative uniform convergence of the ERM algorithm with exponentially strongly mixing observations; from this bound we derive generalization bounds and prove that ERM with exponentially strongly mixing observations is consistent. The main results of this paper not only extend the previously known results for i.i.d. observations to the exponentially strongly mixing case, but also improve earlier results for strongly mixing samples. Because ERM is usually very time-consuming, and overfitting may occur when the hypothesis space is highly complex, we also apply our main results to explore a new strategy for implementing the ERM algorithm in a high-complexity hypothesis space.
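For context, here is a minimal sketch of the two standard notions the abstract relies on; the notation (hypothesis space \(\mathcal{H}\), loss \(\ell\), mixing coefficients \(\alpha(k)\)) is the conventional one in this literature and is assumed here rather than taken from the paper itself. Given a sample \(\mathbf{z} = ((x_1, y_1), \ldots, (x_n, y_n))\), ERM returns the hypothesis minimizing the empirical risk,

\[
f_{\mathbf{z}} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}} \; \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr),
\]

and a stationary sequence of observations is called exponentially strongly mixing when its \(\alpha\)-mixing coefficients decay exponentially, i.e. there exist constants \(c > 0\), \(b > 0\), \(\gamma > 0\) such that

\[
\alpha(k) \;\le\; c \, \exp\bigl(-b\, k^{\gamma}\bigr), \qquad k = 1, 2, \ldots
\]

Bounds for such sequences are typically stated in terms of an "effective sample size" smaller than \(n\), which shrinks as the dependence grows (smaller \(\gamma\)) and recovers the i.i.d. rates as the dependence vanishes.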