Architectural Bias in Recurrent Neural Networks - Fractal Analysis

  • Authors:
  • Peter Tino; Barbara Hammer

  • Venue:
  • ICANN '02: Proceedings of the International Conference on Artificial Neural Networks
  • Year:
  • 2002

Abstract

We have recently shown that when initialized with "small" weights, recurrent neural networks (RNNs) with standard sigmoid-type activation functions are inherently biased towards Markov models, i.e. even prior to any training, RNN dynamics can be readily used to extract finite memory machines [6,8]. Following [2], we refer to this phenomenon as the architectural bias of RNNs. In this paper we further extend our work on the architectural bias in RNNs by performing a rigorous fractal analysis of recurrent activation patterns. We obtain both lower and upper bounds on various types of fractal dimensions, such as box-counting and Hausdorff dimensions.
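
Illustrative Sketch

The abstract makes two claims: that an untrained small-weight RNN already organizes its state space by recent input history (the Markovian architectural bias), and that the fractal dimension of the resulting activation patterns can be bounded. The following is a minimal numerical sketch of both ideas, assuming a vanilla tanh RNN driven by a random binary symbol stream; the weight scales, sequence length, suffix depth, and helper names below are illustrative assumptions, not the paper's actual construction or its analytical bounds.

    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(0)

    # Untrained RNN with "small" (contractive) weights; the scales here
    # are illustrative choices, not taken from the paper.
    n_hidden, n_sym = 2, 2
    W = 0.5 * rng.standard_normal((n_hidden, n_hidden))  # recurrent weights
    V = rng.standard_normal((n_hidden, n_sym))           # input weights
    seq = rng.integers(0, n_sym, size=20000)             # random symbol stream

    # Iterate the state map h_{t+1} = tanh(W h_t + V x_t) and record
    # the recurrent activation patterns.
    h = np.zeros(n_hidden)
    states = np.empty((len(seq), n_hidden))
    for t, s in enumerate(seq):
        h = np.tanh(W @ h + V[:, s])
        states[t] = h

    # Markovian organization: states sharing the same recent input suffix
    # should cluster tightly even though the network is untrained.
    k = 3
    groups = defaultdict(list)
    for t in range(k - 1, len(seq)):
        groups[tuple(seq[t - k + 1 : t + 1])].append(states[t])
    within = np.mean([np.linalg.norm(np.std(np.array(g), axis=0))
                      for g in groups.values()])
    overall = np.linalg.norm(np.std(states, axis=0))
    print(f"within-suffix spread: {within:.4f}   overall spread: {overall:.4f}")

    # Box-counting dimension estimate: slope of log N(eps) vs. log(1/eps),
    # where N(eps) counts the eps-sized grid cells the activation set occupies.
    def box_counting_dimension(points, epsilons):
        counts = [len(np.unique(np.floor(points / eps).astype(int), axis=0))
                  for eps in epsilons]
        return np.polyfit(np.log(1.0 / np.asarray(epsilons)),
                          np.log(counts), 1)[0]

    eps_grid = [0.2, 0.1, 0.05, 0.025]
    print(f"estimated box-counting dimension: "
          f"{box_counting_dimension(states, eps_grid):.3f}")

Under these contractive dynamics the within-suffix spread comes out well below the overall spread, which is the Markovian state grouping the paper formalizes; the fitted box-count slope is only a crude empirical counterpart of the box-counting and Hausdorff dimension bounds the paper derives analytically.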