Enhancing MLP networks using a distributed data representation

  • Authors:
  • S. Narayan; G. A. Tagliarini; E. W. Page

  • Affiliations:
  • Dept. of Math. Sci., North Carolina Univ., Wilmington, NC

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year:
  • 1996


Abstract

Multilayer perceptron (MLP) networks trained using backpropagation can be slow to converge in many instances. The primary reason for slow learning is the global nature of backpropagation. Another reason is that a neuron in an MLP network functions as a hyperplane separator and is therefore inefficient when applied to classification problems with nonlinear decision boundaries. This paper presents a data representational approach that addresses these problems while operating within the framework of the familiar backpropagation model. We examine the use of receptors with overlapping receptive fields as a preprocessing technique for encoding inputs to MLP networks. The proposed data representation scheme, termed ensemble encoding, is shown to promote local learning and to provide enhanced nonlinear separability. Simulation results for well-known problems in classification and time-series prediction indicate that the use of ensemble encoding can significantly reduce the time required to train MLP networks. Since the choice of representation for input data is independent of the learning algorithm and the functional form employed in the MLP model, nonlinear preprocessing of network inputs may be an attractive alternative for many MLP network applications.
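The abstract does not specify the receptor profile, so the following is only a minimal sketch of the general idea: a scalar input is re-expressed as the activations of several receptors with overlapping receptive fields, here assumed to be Gaussian and evenly spaced over the input range. The function name `ensemble_encode` and all parameter choices are illustrative, not taken from the paper.

```python
import numpy as np

def ensemble_encode(x, n_receptors=5, lo=0.0, hi=1.0, width=None):
    """Encode a scalar input as activations of overlapping receptors.

    Receptor centers are evenly spaced over [lo, hi]; each receptor
    responds with a Gaussian function of its distance to x, so nearby
    inputs activate shared receptors (overlapping receptive fields).
    Gaussian profiles and the width heuristic are assumptions, not
    details given in the abstract.
    """
    centers = np.linspace(lo, hi, n_receptors)
    if width is None:
        # Tie width to center spacing so neighboring fields overlap.
        width = (hi - lo) / (n_receptors - 1)
    return np.exp(-((x - centers) ** 2) / (2.0 * width ** 2))

# Example: a single 1-D input becomes a 5-D distributed pattern,
# which would then be fed to the MLP in place of the raw value.
print(np.round(ensemble_encode(0.3), 3))
```

Because the encoding is applied per input dimension, a d-dimensional input vector expands to d × n_receptors activations; as the abstract notes, this preprocessing step is independent of the learning algorithm, so the resulting pattern can be fed to a standard backpropagation-trained MLP unchanged.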