An Experimental and Theoretical Comparison of Model Selection Methods
Machine Learning - Special issue on the eighth annual conference on computational learning theory (COLT '95)
Learning in Neural Networks: Theoretical Foundations
IEEE Transactions on Information Theory
In this paper we present new bounds on the generalization error of a classifier f constructed as a convex combination of base classifiers from a class H. Algorithms that combine simple classifiers into a more powerful one, such as boosting and bagging, have attracted considerable attention. We obtain new, sharper bounds on the generalization error of combined classifiers that take into account both the empirical distribution of "classification margins" and the "approximate dimension" of the classifier, which is defined in terms of the weights assigned to the base classifiers by the voting algorithm. We study the performance of these bounds in several experiments with learning algorithms.
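To make the key quantity concrete: for binary labels y in {-1, +1} and a voting classifier f(x) = sum_j w_j h_j(x) with nonnegative weights summing to one, the classification margin of a point is y * f(x), and the empirical margin distribution is the fraction of training points whose margin falls below a threshold. The sketch below is a minimal illustration of these definitions, not code from the paper; the helper names `empirical_margins` and `margin_cdf` are hypothetical.

```python
import numpy as np

def empirical_margins(base_preds, weights, y):
    """Classification margins of a voting classifier.

    base_preds : (n_samples, n_classifiers) array of base-classifier
                 outputs in {-1, +1}
    weights    : (n_classifiers,) nonnegative voting weights
    y          : (n_samples,) true labels in {-1, +1}

    Returns y * f(x), where f is the convex combination of the base
    classifiers; a point is misclassified iff its margin is <= 0.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()          # normalize so f is a convex combination
    f = base_preds @ w       # combined classifier output f(x)
    return y * f

def margin_cdf(margins, delta):
    """Empirical margin distribution: the fraction of training points
    with margin <= delta, the quantity the bounds are stated in terms of."""
    return float(np.mean(margins <= delta))
```

Margin-based bounds of this type trade the two terms off against each other: a larger threshold delta makes the empirical term margin_cdf(margins, delta) grow, while the complexity term (which depends on the class H and, here, on the weight profile) shrinks.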