Convergent design of piecewise linear neural networks

  • Authors:
  • Hema Chandrasekaran, Jiang Li, W. H. Delashmit, P. L. Narasimha, Changhua Yu, Michael T. Manry

  • Affiliations:
  • SETI Institute/NASA Ames Research Center, Moffett Field, CA 94035, USA
  • Department of Electrical Engineering, The University of Texas at Arlington, Arlington, TX 76019, USA
  • Lockheed Martin Missiles and Fire Control, Dallas, TX 75265, USA
  • Fastvdo LLC, Columbia, MD 21046, USA

  • Venue:
  • Neurocomputing
  • Year:
  • 2007

Abstract

Piecewise linear networks (PLNs) are attractive because they can be trained quickly and provide good performance in many nonlinear approximation problems. Most existing design algorithms for piecewise linear networks are non-convergent, non-optimal, or not designed to handle noisy data. In this paper, four algorithms are presented that address these problems: (1) a convergent design algorithm that builds the PLN one module at a time using a branch and bound technique; (2) two pruning algorithms that eliminate less useful modules from the network; and (3) a sifting algorithm that picks the best networks from among the many designed. The performance of the PLN is compared with that of the multilayer perceptron (MLP) on several benchmark data sets. Numerical results demonstrate that piecewise linear networks are adequate for many approximation problems.
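
For readers unfamiliar with the model class, the sketch below illustrates the basic structure of a piecewise linear network: the input space is partitioned into regions and each module applies its own affine mapping fitted by least squares on the samples that fall in its region. The class name, the distance-based (Voronoi) partition, and the random center selection are illustrative assumptions only; the paper's branch-and-bound module construction, pruning, and sifting procedures are not reproduced here.

```python
import numpy as np

class PiecewiseLinearNetwork:
    """Minimal piecewise linear network sketch: each module owns a region of
    input space (the Voronoi cell of a center) and applies its own affine
    model y = W^T [x, 1] fitted by least squares on that region."""

    def __init__(self, n_modules=4, seed=0):
        self.n_modules = n_modules
        self.rng = np.random.default_rng(seed)

    def _assign(self, X):
        # Assign each sample to the module whose center is nearest.
        d = np.linalg.norm(X[:, None, :] - self.centers[None, :, :], axis=2)
        return d.argmin(axis=1)

    def fit(self, X, Y):
        # Pick module centers from the training data (illustrative choice;
        # the paper instead grows modules one at a time with branch and bound).
        idx = self.rng.choice(len(X), self.n_modules, replace=False)
        self.centers = X[idx]
        assign = self._assign(X)
        self.weights = []
        for m in range(self.n_modules):
            Xm, Ym = X[assign == m], Y[assign == m]
            A = np.hstack([Xm, np.ones((len(Xm), 1))])   # bias column
            W, *_ = np.linalg.lstsq(A, Ym, rcond=None)   # local linear fit
            self.weights.append(W)
        return self

    def predict(self, X):
        assign = self._assign(X)
        A = np.hstack([X, np.ones((len(X), 1))])
        return np.vstack([A[i] @ self.weights[m] for i, m in enumerate(assign)])


# Toy usage: approximate a smooth nonlinear target with 6 local linear models.
X = np.random.default_rng(1).random((200, 2))
Y = np.sin(3.0 * X[:, :1]) + X[:, 1:] ** 2
model = PiecewiseLinearNetwork(n_modules=6).fit(X, Y)
print(np.mean((model.predict(X) - Y) ** 2))
```

Because each module is fit by a single linear least-squares solve, training is fast relative to gradient-based MLP training, which is the practical appeal the abstract highlights.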