Determining an effective architecture for a multi-layer feedforward backpropagation neural network can be a time-consuming effort. We describe an algorithm called Divide and Conquer Neural Networks (DCN), which creates a feedforward neural network architecture during training, based upon the training examples. The first cell introduced on any layer is trained on all examples. Further cells on a layer are trained primarily on examples not already correctly classified. The learning algorithm is shown to be able to use several different learning rules, including the delta rule and the perceptron rule, to modify the link weights one level at a time in the spirit of a perceptron. Error is never propagated backwards through a hidden cell. Examples are shown of networks generated for exclusive-or, 4- and 5-parity, the 2-spirals problem, Iris plant classification, predicting party affiliation from voting records, and the real-valued fuzzy exclusive-or. The results show the algorithm effectively learns viable architectures that can generalize.
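The growth strategy described above can be illustrated with a small sketch. This is not the authors' DCN implementation; it is a minimal, assumed reconstruction of the core idea: train the first cell of a layer on all examples with the perceptron rule, then train each further cell on the examples the earlier cells misclassified. All function names (`train_perceptron`, `grow_layer`) and parameters are illustrative choices, not taken from the paper.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Train a single cell with the classic perceptron rule."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias input
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            pred = 1 if xi @ w > 0 else 0
            w += lr * (yi - pred) * xi       # update only on errors
    return w

def predict(w, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(int)

def grow_layer(X, y, max_cells=3):
    """DCN-style layer growth (sketch): the first cell sees all
    examples; later cells focus on examples still misclassified."""
    cells = []
    focus_X, focus_y = X, y
    for _ in range(max_cells):
        w = train_perceptron(focus_X, focus_y)
        cells.append(w)
        wrong = predict(w, X) != y
        if not wrong.any():          # layer classifies everything
            break
        focus_X, focus_y = X[wrong], y[wrong]
    return cells

# Exclusive-or: no single linear cell can separate it,
# so the layer must grow beyond one cell.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])
cells = grow_layer(X, y)
print(len(cells) > 1)  # → True
```

In the full algorithm the outputs of such cells would feed the next layer, and error is never propagated backwards through a hidden cell; this sketch shows only how one layer's population of cells is driven by the remaining misclassified examples.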