Time-delay neural networks: representation and induction of finite-state machines

  • Authors:
  • D. S. Clouse; C. L. Giles; B. G. Horne; G. W. Cottrell

  • Affiliations:
  • University of California, San Diego, La Jolla, CA

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1997


Abstract

In this work, we characterize and contrast the capabilities of the general class of time-delay neural networks (TDNNs) with input delay neural networks (IDNNs), the subclass of TDNNs whose delays are limited to the inputs. Each class of networks is capable of representing the same set of languages: those embodied by the definite memory machines (DMMs), a subclass of finite-state machines. We demonstrate the close affinity between TDNNs and DMM languages by learning a very large DMM (2048 states) using only a few training examples. Even though both architectures are capable of representing the same class of languages, they have distinguishable learning biases. Intuition suggests that general TDNNs, which include delays in hidden layers, should perform well compared to IDNNs on problems in which the output can be expressed as a function of narrow input windows which repeat in time. On the other hand, these general TDNNs should perform poorly when the input windows are wide, or there is little repetition. We confirm these hypotheses via a set of simulations and statistical analysis.
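
The following is a minimal sketch of the two ideas the abstract pairs: a definite memory machine, whose output depends only on the last k input symbols, and an input-delay network, a feedforward net applied to that same sliding window. The binary alphabet, the order-3 output rule, and the names run_dmm and idnn_forward are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Assumed example: an order-3 DMM over {0, 1}. A DMM's state can be
# identified with its k-symbol input window, so the machine below has
# 2^3 = 8 effective states. The parity-style rule is an arbitrary choice.
def dmm_output(window):
    return int(sum(window) % 2 == 0)

def run_dmm(inputs, k=3):
    """Slide a length-k window over the input string and emit outputs."""
    window = [0] * k                # initial state: all-zero history
    outputs = []
    for x in inputs:
        window = window[1:] + [x]   # shift the newest symbol into the window
        outputs.append(dmm_output(window))
    return outputs

# An IDNN in this sense is a feedforward net over the same window of
# delayed inputs; the weights here are unspecified parameters, not
# values trained in the paper.
def idnn_forward(window, W1, b1, w2, b2):
    h = np.tanh(W1 @ np.asarray(window, dtype=float) + b1)  # hidden layer
    return 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))             # sigmoid output

print(run_dmm([1, 0, 1, 1, 0]))  # e.g. [0, 0, 1, 1, 1]
```

The sketch makes the representational claim concrete: because the DMM's output is a fixed function of the last k inputs, a network that sees those k delayed inputs directly can, with enough hidden units, compute the same function.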