Multilayer Potts Perceptrons With Levenberg–Marquardt Learning

  • Authors:
  • J.-M. Wu

  • Affiliations:
  • -

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 2008

Abstract

This paper presents multilayer Potts perceptrons (MLPotts) for data-driven function approximation. A Potts perceptron is composed of a receptive field and a $K$-state transfer function that generalizes the sigmoid-like transfer functions of traditional perceptrons. An MLPotts network maps a high-dimensional input to a sum of postnonlinear projections, each with its own postnonlinearity realized by a weighted $K$-state transfer function. MLPotts networks span a function space that theoretically covers the network functions of multilayer perceptrons. Compared with traditional perceptrons, weighted Potts perceptrons realize more flexible postnonlinear functions for nonlinear mappings. Numerical simulations show that MLPotts learning by the Levenberg–Marquardt (LM) method significantly improves on traditional supervised learning of multilayer perceptrons for data-driven function approximation.
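The abstract describes each Potts unit as a receptive-field projection followed by a weighted K-state transfer function, with the network output being the sum of these postnonlinear projections. The sketch below is one possible reading of that structure, not the authors' implementation: the function names (potts_unit, mlpotts_forward) and the softmax-style parameterization of the K-state transfer function are assumptions made for illustration, since the abstract does not give the exact formula.

```python
# Minimal sketch (assumed parameterization, not the authors' code) of a
# weighted K-state Potts perceptron and an MLPotts forward pass.
import numpy as np

def potts_unit(x, a, w, theta):
    """One Potts perceptron: receptive field + weighted K-state transfer.

    x     : (d,) input vector
    a     : (d,) receptive-field weights (linear projection)
    w     : (K,) output weights attached to the K states
    theta : (K,) per-state biases (assumed form of the K-state nonlinearity)
    """
    u = a @ x                                   # receptive-field activation (scalar)
    s = np.exp(u * np.arange(len(w)) - theta)   # unnormalized K-state scores (assumed)
    p = s / s.sum()                             # softmax over the K states
    return w @ p                                # weighted K-state output

def mlpotts_forward(x, units):
    """Network output: sum of postnonlinear projections, one per Potts unit."""
    return sum(potts_unit(x, a, w, theta) for (a, w, theta) in units)

# Usage: two Potts units with K = 4 states on a 3-dimensional input.
rng = np.random.default_rng(0)
units = [(rng.normal(size=3), rng.normal(size=4), rng.normal(size=4))
         for _ in range(2)]
print(mlpotts_forward(rng.normal(size=3), units))
```

In this reading, setting K = 2 recovers a sigmoid-like unit, while larger K lets each unit realize a more flexible postnonlinearity, which is the flexibility the abstract attributes to weighted Potts perceptrons; the parameters (a, w, theta) of all units would be fit jointly, for example by Levenberg–Marquardt as in the paper.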