Volterra models and three-layer perceptrons

  • Authors:
  • V. Z. Marmarelis; X. Zhao

  • Affiliations:
  • Dept. of Biomed. Eng., Univ. of Southern California, Los Angeles, CA

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1997

Abstract

This paper proposes the use of a class of feedforward artificial neural networks with polynomial activation functions (distinct for each hidden unit) for practical modeling of high-order Volterra systems. Discrete-time Volterra models (DVMs) are often used in the study of nonlinear physical and physiological systems using stimulus-response data. However, their practical use has been hindered by computational limitations that confine them to low-order nonlinearities (i.e., only estimation of low-order kernels is practically feasible). Since three-layer perceptrons (TLPs) can be used to represent input-output nonlinear mappings of arbitrary order, this paper explores the basic relations between DVMs and TLPs with tapped-delay inputs in the context of nonlinear system modeling. A variant of the TLP with polynomial activation functions, termed "separable Volterra networks" (SVNs), is found particularly useful in deriving explicit relations with the DVM and in obtaining practicable models of highly nonlinear systems from stimulus-response data. The conditions under which the two approaches yield equivalent representations of the input-output relation are explored, and the feasibility of DVM estimation via equivalent SVN training using backpropagation is demonstrated by computer-simulated examples and compared with results from the Laguerre expansion technique (LET). The use of SVN models allows practicable modeling of high-order nonlinear systems, thus removing the main practical limitation of the DVM approach.
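The SVN structure described in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' code; the parameter values (`M`, `H`, `Q`, the weights `W`, and coefficients `C`) are hypothetical stand-ins for trained values. Each hidden unit applies its own polynomial to a linear projection of the tapped-delay input vector, and the same parameters define equivalent Volterra kernels in closed form, e.g. the second-order kernel k2(i, l) = sum_j c[j, 2] * W[j, i] * W[j, l].

```python
import numpy as np

rng = np.random.default_rng(0)

M = 5   # memory length of the tapped-delay line
H = 3   # number of hidden units
Q = 2   # polynomial order of each activation (second-order example)

# Hypothetical parameters standing in for trained SVN weights.
W = rng.standard_normal((H, M))        # input weights, one row per hidden unit
C = rng.standard_normal((H, Q + 1))    # polynomial coefficients c[j, m]
C[:, 0] = 0.0                          # constants folded into a single offset
y0 = 0.1                               # output offset (zeroth-order kernel)

def svn_output(x):
    """Response of the separable Volterra network to an input signal x."""
    N = len(x)
    y = np.full(N, y0)
    for n in range(M - 1, N):
        v = x[n - M + 1:n + 1][::-1]   # tapped-delay vector [x(n), ..., x(n-M+1)]
        u = W @ v                      # one linear projection per hidden unit
        for j in range(H):
            # distinct polynomial activation for hidden unit j
            y[n] += np.polynomial.polynomial.polyval(u[j], C[j])
    return y

# Explicit first- and second-order Volterra kernels implied by the SVN:
k1 = np.einsum('j,ji->i', C[:, 1], W)
k2 = np.einsum('j,ji,jl->il', C[:, 2], W, W)
```

With `Q = 2` the SVN output is exactly reproduced by the kernel expansion y(n) = y0 + sum_i k1[i] x(n-i) + sum_{i,l} k2[i,l] x(n-i) x(n-l), which is the sense in which the SVN and DVM representations are equivalent.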