We compare the performance of three types of neural-network-based ensemble techniques to that of a single neural network. The ensemble algorithms are two versions of boosting and committees of neural networks trained independently. For each of the four algorithms, we experimentally determine the test and training error curves on an optical character recognition (OCR) problem as a function of both training set size and computational cost, using three architectures. We show that a single machine is best for small training set sizes, while for large training set sizes some version of boosting is best. However, for a given computational cost, boosting is always best. Furthermore, we show a surprising result for the original boosting algorithm: as the training set size increases, the training error decreases until it asymptotes to the test error rate. This has potential implications in the search for better training algorithms.
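The two ensemble styles contrasted in the abstract can be sketched in miniature. The following is an illustrative sketch only, not the paper's method: it substitutes one-dimensional decision stumps for the neural networks, and shows (a) an AdaBoost-style reweighting loop, in which misclassified examples gain weight so later weak learners focus on them, and (b) an independently trained committee combined by unweighted majority vote. The toy dataset and all function names are assumptions for the example.

```python
import math

# Toy 1-D dataset: label +1 if x > 0.5, else -1.
X = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [-1, -1, -1, -1, 1, 1, 1, 1]

def stump(threshold, sign):
    """Weak learner: predicts `sign` if x > threshold, else -sign."""
    return lambda x: sign if x > threshold else -sign

def best_stump(X, y, w):
    """Pick the threshold stump minimizing weighted training error."""
    best, best_err = None, float("inf")
    for t in X:
        for s in (1, -1):
            h = stump(t, s)
            err = sum(wi for xi, yi, wi in zip(X, y, w) if h(xi) != yi)
            if err < best_err:
                best, best_err = h, err
    return best, best_err

def adaboost(X, y, rounds=3):
    """Boosting by reweighting: misclassified points gain weight."""
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        h, err = best_stump(X, y, w)
        err = max(err, 1e-10)                      # avoid division by zero
        alpha = 0.5 * math.log((1 - err) / err)    # weak learner's vote weight
        ensemble.append((alpha, h))
        # Upweight mistakes, downweight correct points, then renormalize.
        w = [wi * math.exp(-alpha * yi * h(xi)) for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

def committee(members):
    """Independently trained members combined by unweighted majority vote."""
    return lambda x: 1 if sum(m(x) for m in members) > 0 else -1
```

The structural difference is that the committee members never see each other's errors, whereas each boosting round deliberately concentrates on the examples the previous rounds got wrong, which is why boosting can keep improving for a fixed computational budget.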