Ensemble learning is a well-established method for improving the generalization performance of learning machines: a number of learning systems trained on the same task are combined. However, because every member of the ensemble must operate at prediction time, ensembles demand large amounts of memory and long execution times, which limits their practical application. This paper presents a new method, called local averaging, that, in the context of nearest neighbor (NN) classifiers, derives from the ensemble a single classifier with the same complexity as an individual member. A collection of prototypes is first generated in separate learning sessions using Kohonen's LVQ algorithm; a single set of prototypes is then computed by applying a clustering algorithm (such as K-means) to this collection. Local averaging can be viewed either as a technique for reducing the variance of the prototypes or as the result of averaging a series of particular bootstrap replicates. Experimental results on several classification problems confirm the utility of the method and show that local averaging can produce a single classifier whose accuracy is similar to (or even better than) that of ensembles combined by voting.
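The pipeline described above can be sketched in a few steps: train LVQ1 on several bootstrap replicates, pool all resulting prototypes, cluster each class's pooled prototypes back down to the original number with K-means, and classify with the nearest final prototype. The sketch below is a minimal, self-contained illustration under stated assumptions — toy two-blob data, a simplified LVQ1 update, and a plain K-means — not the paper's actual implementation; all data, parameters, and function names are hypothetical.

```python
import random

random.seed(0)

def dist2(a, b):
    # Squared Euclidean distance in 2-D.
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Toy 2-class data: two well-separated blobs (illustrative only).
data = [((random.gauss(0, 0.5), random.gauss(0, 0.5)), 0) for _ in range(60)] + \
       [((random.gauss(3, 0.5), random.gauss(3, 0.5)), 1) for _ in range(60)]

def lvq1(train, k_per_class=2, epochs=20, lr=0.1):
    # Simplified LVQ1: prototypes start at random class members, then are
    # attracted to same-class samples and repelled from other-class samples.
    protos = []
    for c in (0, 1):
        members = [x for x, y in train if y == c]
        protos += [(list(random.choice(members)), c) for _ in range(k_per_class)]
    for _ in range(epochs):
        for x, y in train:
            p, c = min(protos, key=lambda pc: dist2(pc[0], x))
            step = lr if c == y else -lr
            p[0] += step * (x[0] - p[0])
            p[1] += step * (x[1] - p[1])
    return protos

# Ensemble stage: separate LVQ1 sessions on bootstrap replicates, prototypes pooled.
pool = []
for _ in range(10):
    boot = [random.choice(data) for _ in data]
    pool += lvq1(boot)

def kmeans(points, k, iters=20):
    # Plain K-means on a list of 2-D points.
    centers = [list(p) for p in random.sample(points, k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda i: dist2(centers[i], p))
            groups[i].append(p)
        for i, g in enumerate(groups):
            if g:
                centers[i] = [sum(p[0] for p in g) / len(g),
                              sum(p[1] for p in g) / len(g)]
    return centers

# Local averaging: cluster each class's pooled prototypes back to k_per_class centers.
final = []
for c in (0, 1):
    pts = [tuple(p) for p, cc in pool if cc == c]
    final += [(ctr, c) for ctr in kmeans(pts, 2)]

def classify(x):
    # 1-NN decision against the single averaged prototype set.
    return min(final, key=lambda pc: dist2(pc[0], x))[1]

acc = sum(classify(x) == y for x, y in data) / len(data)
```

Note that the final classifier stores only as many prototypes as one ensemble member (here, two per class), so its memory footprint and classification time match a single LVQ classifier rather than the whole ensemble.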