Many constructive learning algorithms have been proposed to automatically find an appropriate network structure for a classification problem. Constructive algorithms nevertheless have drawbacks, especially on complex tasks, and modular approaches have been devised to overcome them. At the same time, parallel training of neural networks with fixed configurations has been proposed to accelerate the training process. This paper presents output partitioning, a new approach that combines the advantages of constructive learning and parallelism. Classification error guides the proposed incremental-partitioning algorithm, which divides the original data set into several smaller sub-data sets with distinct classes. Each sub-data set is then handled in parallel by a smaller, constructively trained sub-network that uses the whole input vector and produces a portion of the final output vector, in which each class is represented by one unit. Three classification data sets are used to test the validity of the method, and the results show that it reduces the classification test error.
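The control flow described above can be sketched in a few lines. This is only an illustrative assumption of the scheme's structure, not the paper's implementation: the round-robin class split stands in for the error-guided incremental partitioning, and a trivial nearest-centroid scorer (`CentroidSubnet`, a name invented here) stands in for each constructively trained sub-network. What it does show faithfully is the output-partitioning idea: each sub-network sees the whole input vector but is responsible for only its own subset of output classes, and the partial output vectors are concatenated before the final class decision.

```python
def partition_classes(classes, n_parts):
    """Split the class set into disjoint groups (round-robin here;
    the paper instead guides the partitioning by classification error)."""
    parts = [[] for _ in range(n_parts)]
    for i, c in enumerate(sorted(classes)):
        parts[i % n_parts].append(c)
    return parts

class CentroidSubnet:
    """Stand-in for a constructively trained sub-network: it scores only
    its own subset of classes, using the whole input vector."""
    def __init__(self, class_subset):
        self.classes = class_subset
        self.centroids = {}

    def fit(self, X, y):
        for c in self.classes:
            rows = [x for x, t in zip(X, y) if t == c]
            d = len(rows[0])
            self.centroids[c] = [sum(r[j] for r in rows) / len(rows)
                                 for j in range(d)]

    def scores(self, x):
        # One output unit per class in this partition; negative squared
        # distance to the class centroid serves as the unit's activation.
        return {c: -sum((a - b) ** 2 for a, b in zip(x, m))
                for c, m in self.centroids.items()}

def train_output_partitioned(X, y, n_parts=2):
    """Train one sub-network per class partition; each could run in parallel."""
    nets = []
    for subset in partition_classes(set(y), n_parts):
        # Each sub-data set keeps only the patterns of its own classes.
        Xi = [x for x, t in zip(X, y) if t in subset]
        yi = [t for t in y if t in subset]
        net = CentroidSubnet(subset)
        net.fit(Xi, yi)
        nets.append(net)
    return nets

def predict(nets, x):
    # Concatenate the partial output vectors from all sub-networks,
    # then pick the class whose output unit is most active.
    all_scores = {}
    for net in nets:
        all_scores.update(net.scores(x))
    return max(all_scores, key=all_scores.get)
```

Because the sub-data sets are disjoint by class, the `fit` calls in `train_output_partitioned` are independent and could be dispatched to separate workers, which is the source of the parallel speed-up the abstract refers to.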