For a supervised learning method, the quality of the training data, or of the training supervisor, is critical to producing reliable neural networks. However, for real-world problems it is not always easy to obtain high-quality training data sets. In this research, we propose a learning method for a neural network ensemble model that can be trained with an imperfect training data set, i.e., a data set containing erroneous training samples. Through a competitive training mechanism, the ensemble excludes erroneous samples from the training process and thereby produces a reliable neural network. Experiments show that the proposed model tolerates the presence of erroneous samples in the training data while still generating a reliable neural network. This tolerance lessens the costly task of analyzing and cleaning the training data, which increases the usability of neural networks for real-world problems.
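The abstract does not spell out the competitive training mechanism, so the following is only a minimal sketch of one plausible reading, not the authors' method: ensemble members are simple linear regressors trained winner-take-all (each sample updates only the member that currently fits it best), and after a warm-up period any sample that no member fits within a squared-error threshold is treated as erroneous and excluded from updates. The function name, hyperparameters (n_members, lr, epochs, warmup, err_threshold), and the toy data are all illustrative assumptions.

    # Sketch of a competitive ensemble that excludes presumed-erroneous
    # samples during training. Illustrative only; details are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def competitive_ensemble_fit(X, y, n_members=3, lr=0.05,
                                 epochs=200, warmup=20, err_threshold=0.5):
        n, d = X.shape
        W = rng.normal(scale=0.1, size=(n_members, d))  # one linear model per member
        b = np.zeros(n_members)
        active = np.ones(n, dtype=bool)                 # samples still trusted

        for epoch in range(epochs):
            preds = X @ W.T + b                         # (n, n_members) predictions
            errs = (preds - y[:, None]) ** 2            # squared error per member
            winner = errs.argmin(axis=1)                # best-fitting member per sample
            best_err = errs[np.arange(n), winner]

            # After a warm-up, treat samples that no member fits well as
            # erroneous and exclude them from this epoch's updates.
            if epoch >= warmup:
                active = best_err <= err_threshold

            # Winner-take-all step: each trusted sample updates only the
            # member that currently predicts it best.
            for m in range(n_members):
                mask = active & (winner == m)
                if not mask.any():
                    continue
                residual = preds[mask, m] - y[mask]
                W[m] -= lr * residual @ X[mask] / mask.sum()
                b[m] -= lr * residual.mean()
        return W, b, active

    # Toy usage: two linear regimes plus ~10% deliberately mislabeled samples.
    X = rng.uniform(-1.0, 1.0, size=(300, 2))
    y = np.where(X[:, 0] > 0, X @ np.array([1.0, 2.0]), X @ np.array([-2.0, 0.5]))
    bad = rng.random(300) < 0.1
    y[bad] += rng.normal(scale=3.0, size=bad.sum())     # inject label errors

    W, b, active = competitive_ensemble_fit(X, y)
    print(f"{(~active).sum()} samples excluded; {bad.sum()} errors were injected")

The winner-take-all assignment is what makes the scheme competitive: a mislabeled sample ends up fitted well by no member, so its best-member error stays high and it drops out of training instead of distorting every member's weights.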