Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM).
Computational learning theory: an introduction.
Bounding the Vapnik-Chervonenkis dimension of concept classes parameterized by real numbers. Machine Learning (Special issue on COLT '93).
The nature of statistical learning theory.
Data-dependent structural risk minimisation for perceptron decision trees. NIPS '97: Advances in Neural Information Processing Systems 10.
An introduction to Support Vector Machines and other kernel-based learning methods.
Enlarging the margins in perceptron decision trees. Machine Learning.
Learning in Neural Networks: Theoretical Foundations.
Advances in Large Margin Classifiers.
Reducing communication for distributed learning in neural networks. ICANN '02: Proceedings of the International Conference on Artificial Neural Networks.
Covering number bounds of certain regularized linear function classes. Journal of Machine Learning Research.
Function learning from interpolation. Combinatorics, Probability and Computing.
Generalization error bounds for threshold decision lists. Journal of Machine Learning Research.
IEEE Transactions on Information Theory.
Local context discrimination in signature neural networks. IWINAC '11: Proceedings of the 4th International Conference on Interplay between Natural and Artificial Computation, Part II.
We consider the generalization error of concept learning when using a fixed Boolean function of the outputs of a number of different classifiers. Here, we take into account the 'margins' of each of the constituent classifiers. A special case is that in which the constituent classifiers are linear threshold functions (or perceptrons) and the fixed Boolean function is the majority function. This corresponds to a 'committee of perceptrons,' an artificial neural network (or circuit) consisting of a single layer of perceptrons (or linear threshold units) in which the output of the network is defined to be the majority output of the perceptrons. Recent work of Auer et al. studied the computational properties of such networks (where they were called 'parallel perceptrons'), proposed an incremental learning algorithm for them, and demonstrated empirically that the learning rule is effective. As a corollary of the results presented here, generalization error bounds are derived for this special case that provide further motivation for the use of this learning rule.
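To make the architecture concrete, the following is a minimal sketch of a committee of perceptrons (a "parallel perceptron"): a single layer of linear threshold units whose majority vote defines the network output, with the per-unit margins exposed as the quantities the bounds refer to. All names, sizes, and the learning rate are illustrative, and the training step is a generic perceptron-style update on the units that voted incorrectly, not the specific incremental rule of Auer et al.

```python
import numpy as np

rng = np.random.default_rng(0)


class PerceptronCommittee:
    """Committee of perceptrons: a single layer of linear threshold
    units whose majority output defines the output of the network."""

    def __init__(self, n_units, n_features):
        # One weight vector (row) and bias per constituent perceptron.
        self.W = rng.normal(size=(n_units, n_features))
        self.b = np.zeros(n_units)

    def unit_outputs(self, x):
        # +1/-1 output of each constituent linear threshold unit.
        return np.sign(self.W @ x + self.b)

    def margins(self, x):
        # Signed distance of x from each unit's separating hyperplane;
        # these are the per-classifier 'margins' discussed above.
        return (self.W @ x + self.b) / np.linalg.norm(self.W, axis=1)

    def predict(self, x):
        # Majority function (the fixed Boolean function) of the unit outputs.
        return 1 if self.unit_outputs(x).sum() > 0 else -1

    def train_step(self, x, y, lr=0.1):
        # Illustrative update only: nudge the units that voted incorrectly
        # toward the target label (a plain perceptron-style correction).
        wrong = self.unit_outputs(x) != y
        self.W[wrong] += lr * y * x
        self.b[wrong] += lr * y


# Toy usage on a linearly separable two-dimensional problem.
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)

net = PerceptronCommittee(n_units=3, n_features=2)
for _ in range(20):
    for xi, yi in zip(X, y):
        if net.predict(xi) != yi:
            net.train_step(xi, yi)

acc = np.mean([net.predict(xi) == yi for xi, yi in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```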