Divide and Conquer Neural Networks

  • Authors:
  • Steve G. Romaniuk; Lawrence O. Hall

  • Affiliations:
  • University of South Florida, Tampa, USA; University of South Florida, Tampa, USA

  • Venue:
  • Neural Networks
  • Year:
  • 1993

Abstract

Determining an effective architecture for a multi-layer feedforward back-propagation neural network can be a time-consuming effort. We describe an algorithm called Divide and Conquer Neural Networks (DCN), which creates a feedforward neural network architecture during training, based upon the training examples. The first cell introduced on any layer is trained on all examples. Further cells on a layer are trained primarily on examples not already correctly classified. The learning algorithm can use several different learning rules, including the delta rule and the perceptron rule, to modify the link weights one layer at a time in the spirit of a perceptron. Error is never propagated backwards through a hidden cell. Example networks are shown for the exclusive-or, 4- and 5-parity, and 2-spirals problems, Iris plant classification, predicting party affiliation from voting records, and the real-valued fuzzy exclusive-or. The results show that the algorithm effectively learns viable architectures that generalize.
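
The following is a minimal, self-contained Python sketch of the constructive idea the abstract describes, not the authors' DCN implementation: cells are added to a layer one at a time, the first cell is trained on all examples, later cells are trained on the examples the layer still misclassifies, and weights are adjusted with the perceptron rule only, so no error is ever propagated backwards through a hidden cell. The function names, the error-focused selection criterion, and the learning rate and epoch settings are illustrative assumptions.

import numpy as np

def cell_out(X, w):
    # Threshold output of one cell; the bias is folded into the last weight.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ w > 0).astype(float)

def train_cell(X, y, epochs=200, lr=0.2, seed=0):
    # Train a single cell with the plain perceptron rule on the given examples.
    rng = np.random.default_rng(seed)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = rng.normal(scale=0.1, size=Xb.shape[1])
    for _ in range(epochs):
        for xi, ti in zip(Xb, y):
            out = 1.0 if xi @ w > 0 else 0.0
            w += lr * (ti - out) * xi        # local update, no backpropagation
    return w

def grow_layer(X, y, max_cells=4):
    # First cell sees every example; each later cell is trained only on the
    # examples the previously added cell still gets wrong.
    weights, idx = [], np.arange(len(X))
    for k in range(max_cells):
        if k > 0:
            wrong = cell_out(X[idx], weights[-1]) != y[idx]
            idx = idx[wrong]
            if len(idx) == 0:                # nothing left to fix on this layer
                break
        weights.append(train_cell(X[idx], y[idx], seed=k))
    return np.array(weights)

# Toy usage on exclusive-or: grow one hidden layer, then train a single output
# cell on the frozen hidden activations (error never passes through them).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)
hidden = grow_layer(X, y)
H = np.column_stack([cell_out(X, w) for w in hidden])
w_out = train_cell(H, y)
print("hidden cells:", len(hidden), "predictions:", cell_out(H, w_out))

The toy run only illustrates the control flow of growing a layer from residual errors; the paper's algorithm also decides when to open a new layer and supports other learning rules such as the delta rule, which this sketch omits.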