International Journal of Man-Machine Studies - Special Issue: Knowledge Acquisition for Knowledge-based Systems. Part 5
Connectionist learning procedures. Artificial Intelligence.
The Strength of Weak Learnability. Machine Learning.
Learning internal representations by error propagation. Parallel distributed processing: explorations in the microstructure of cognition, vol. 1.
The cascade-correlation learning architecture. Advances in neural information processing systems 2.
Advances in neural information processing systems 2.
Synergy of clustering multiple back propagation networks. Advances in neural information processing systems 2.
Stacked generalization. Neural Networks.
Machine Learning - Special issue on multistrategy learning.
Combining the results of several neural network classifiers. Neural Networks.
Learning from a Population of Hypotheses. Machine Learning - Special issue on COLT '93.
IEEE Transactions on Pattern Analysis and Machine Intelligence.
Lowering variance of decisions by using artificial neural network portfolios. Neural Computation.
Adaptive mixtures of local experts. Neural Computation.
Fast learning in networks of locally-tuned processing units. Neural Computation.
Boosting classifiers regionally. AAAI '98/IAAI '98 Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence.
Ensembling neural networks: many could be better than all. Artificial Intelligence.
Using Correspondence Analysis to Combine Classifiers. Machine Learning.
Extracting symbolic rules from trained neural network ensembles. AI Communications - Special issue on Artificial intelligence advances in China.
EROS: Ensemble rough subspaces. Pattern Recognition.
Machine learning: a review of classification and combining techniques. Artificial Intelligence Review.
Supervised Machine Learning: A Review of Classification Techniques. Proceedings of the 2007 conference on Emerging Artificial Intelligence Applications in Computer Engineering: Real World AI Systems with Applications in eHealth, HCI, Information Retrieval and Pervasive Technologies.
Ensembles as a sequence of classifiers. IJCAI'97 Proceedings of the Fifteenth international joint conference on Artificial intelligence - Volume 2.
Ensemble classification based on generalized additive models. Computational Statistics & Data Analysis.
An empirical evaluation of bagging and boosting. AAAI'97/IAAI'97 Proceedings of the fourteenth national conference on artificial intelligence and ninth conference on Innovative applications of artificial intelligence.
Automatic news audio classification based on selective ensemble SVMs. ISNN'05 Proceedings of the Second international conference on Advances in neural networks - Volume Part II.
Designing an ensemble classifier over subspace classifiers using iterative convergence routine. Proceedings of the 20th ACM international conference on Information and knowledge management.
A resilient voting scheme for improving secondary structure prediction. MIWAI'11 Proceedings of the 5th international conference on Multi-Disciplinary Trends in Artificial Intelligence.
Lung cancer cell identification based on artificial neural network ensembles. Artificial Intelligence in Medicine.
The build of n-Bits Binary Coding ICBP Ensemble System. Neurocomputing.
Ensemble classifier generation using non-uniform layered clustering and Genetic Algorithm. Knowledge-Based Systems.
The primary goal of inductive learning is to generalize well, that is, to induce a function that accurately produces the correct output for future inputs. Hansen and Salamon showed that, under certain assumptions, combining the predictions of several separately trained neural networks improves generalization. One of their key assumptions is that the individual networks make independent errors. Standard backpropagation may violate this assumption, because the usual procedure initializes network weights near the origin of weight space, so backpropagation's gradient-descent search may reach only a small subset of the possible local minima. In this paper we present an approach that uses competitive learning to intelligently initialize neural networks far from the origin of weight space, thereby potentially enlarging the set of reachable local minima. We report experiments on two real-world datasets in which combinations of networks initialized with our method generalize better than combinations of networks initialized the traditional way.
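The contrast between the two initialization schemes can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it uses simple winner-take-all competitive learning to place a network's first-layer weight vectors near prototypes of the training inputs, whereas the conventional scheme draws small random weights near the origin. All function names, hyperparameters, and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def competitive_init(X, n_hidden, epochs=10, lr=0.1):
    """Winner-take-all competitive learning (illustrative sketch):
    each input pulls its nearest prototype toward itself, so the
    resulting weight vectors end up in the data, far from the origin."""
    # Seed prototypes with randomly chosen training examples.
    W = X[rng.choice(len(X), size=n_hidden, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(X):
            winner = int(np.argmin(np.linalg.norm(W - x, axis=1)))
            W[winner] += lr * (x - W[winner])
    return W

# Toy inputs: two Gaussian blobs centred at -3 and +3.
X = np.vstack([rng.normal(-3.0, 1.0, (50, 2)),
               rng.normal(3.0, 1.0, (50, 2))])

W_standard = rng.normal(0.0, 0.1, (4, 2))  # conventional near-origin init
W_compet = competitive_init(X, 4)          # prototypes located in the data

print(float(np.abs(W_standard).mean()))  # small: weights cluster at the origin
print(float(np.abs(W_compet).mean()))    # larger: weights sit near the blobs
```

Because each network's starting point now depends on which prototypes competitive learning finds, separately initialized networks are more likely to descend into different local minima, which is exactly the error diversity the ensemble argument requires.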