Divide-and-conquer learning and modular perceptron networks

  • Authors:
  • Hsin-Chia Fu;Yen-Po Lee;Cheng-Chin Chiang;Hsiao-Tien Pao

  • Affiliations:
  • Dept. of Comput. Sci. & Inf. Eng., Nat. Chiao Tung Univ., Hsinchu

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2001

Abstract

A novel modular perceptron network (MPN) and a divide-and-conquer learning (DCL) scheme for the design of modular neural networks are proposed. When the training of a multilayer perceptron falls into a local minimum or stalls in a flat region, the proposed DCL scheme divides the current training data region into two regions that are each easier to learn. Learning then continues on one of the newly partitioned regions using a self-growing perceptron network with estimated initial weights, while the other partitioned region resumes training on the original perceptron network. Data-region partitioning, weight estimation, and learning are repeated iteratively until the MPN has completely learned all of the training data. We evaluated the proposed MPN against several representative neural networks on the two-spirals problem and on real-world datasets. The MPN achieved better weight-learning performance, requiring far fewer data presentations during network training, as well as better generalization performance and less processing time during the retrieval phase.
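The partition-and-continue loop described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the stall test (loss plateau over a window), the split criterion (per-sample error threshold), and the nearest-centroid routing at retrieval time are all simplifying assumptions standing in for the paper's actual region-partitioning and weight-estimation procedures.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyMLP:
    """One-hidden-layer perceptron trained by plain gradient descent."""
    def __init__(self, n_in, n_hidden, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, 1))
        self.b2 = 0.0
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2).ravel()

    def step(self, X, y):
        p = self.forward(X)
        # backpropagate mean-squared error through both sigmoid layers
        d2 = (p - y) * p * (1 - p)
        gW2 = self.h.T @ d2[:, None] / len(y)
        d1 = (d2[:, None] * self.W2.T) * self.h * (1 - self.h)
        gW1 = X.T @ d1 / len(y)
        self.W2 -= self.lr * gW2; self.b2 -= self.lr * d2.mean()
        self.W1 -= self.lr * gW1; self.b1 -= self.lr * d1.mean(axis=0)
        return np.mean((p - y) ** 2)

def train_dcl(X, y, max_epochs=3000, stall_window=200, tol=1e-4):
    """Divide-and-conquer loop: when training stalls, split off the
    poorly learned samples as a new sub-problem with its own module."""
    pending = [(X, y)]          # data regions still to be learned
    modules = []                # (region centroid, trained net) pairs
    while pending:
        Xr, yr = pending.pop()
        net = TinyMLP(X.shape[1], 4)
        history, stalled = [], False
        for _ in range(max_epochs):
            loss = net.step(Xr, yr)
            history.append(loss)
            stalled = (len(history) > stall_window and
                       history[-stall_window] - loss < tol)
            if loss < 1e-3 or stalled:
                break
        err = np.abs(net.forward(Xr) - yr)
        hard = err > 0.5        # crude stand-in for the paper's partitioning
        if stalled and hard.any() and (~hard).any() and hard.sum() > 1:
            pending.append((Xr[hard], yr[hard]))   # requeue the hard region
            Xr, yr = Xr[~hard], yr[~hard]
        modules.append((Xr.mean(axis=0), net))
    return modules

def predict(modules, X):
    # route each sample to the module with the nearest region centroid
    cents = np.array([c for c, _ in modules])
    idx = np.argmin(((X[:, None, :] - cents[None]) ** 2).sum(-1), axis=1)
    out = np.empty(len(X))
    for k, (_, net) in enumerate(modules):
        mask = idx == k
        if mask.any():
            out[mask] = net.forward(X[mask])
    return (out > 0.5).astype(int)
```

On easy data no split ever triggers and the result collapses to a single multilayer perceptron; the modular structure only emerges when a region genuinely stalls, which is the intended behavior of the DCL scheme.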