Learning Chaotic Attractors by Neural Networks

  • Authors:
  • Rembrandt Bakker; Jaap C. Schouten; C. Lee Giles; Floris C. Takens; Cor M. Van Den Bleek

  • Affiliations:
  • DelftChemTech, Delft University of Technology; Chemical Reactor Engineering, Eindhoven University of Technology; NEC Research Institute; Department of Mathematics, University of Groningen; DelftChemTech, Delft University of Technology

  • Venue:
  • Neural Computation
  • Year:
  • 2000


Abstract

An algorithm is introduced that trains a neural network to identify chaotic dynamics from a single measured time series. During training, the algorithm learns to make short-term predictions of the time series. At the same time, a criterion developed by Diks, van Zwet, Takens, and de Goede (1996) is monitored that tests the hypothesis that the reconstructed attractors of model-generated and measured data are the same. Training is stopped when the prediction error is low and the model passes this test. Two other features of the algorithm are (1) the way the state of the system, consisting of delays from the time series, has its dimension reduced by weighted principal component analysis, and (2) the user-adjustable prediction horizon obtained by "error propagation", that is, partially propagating prediction errors to the next time step. The algorithm is first applied to data from an experimental, driven chaotic pendulum, of which two of the three state variables are known. This is a comprehensive example that shows how well the Diks test can distinguish between slightly different attractors. Second, the algorithm is applied to the same problem, but now one of the two known state variables is ignored. Finally, we present a model for the laser data from the Santa Fe time-series competition (set A). It is the first model for these data that is not only useful for short-term predictions but also generates time series with chaotic characteristics similar to those of the measured data.
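
The abstract describes three ingredients: a delay-vector state built from the measured series, dimension reduction of that state by weighted principal component analysis, and "error propagation" when iterating the one-step predictor. The sketch below illustrates these ideas only in broad strokes; it is not the paper's implementation. The function names, the exponential delay weighting, the `eta` mixing parameter, and the least-squares linear predictor (standing in for the neural network) are all illustrative assumptions, and the Diks test used as a stopping criterion is not shown.

```python
import numpy as np

def delay_embed(x, m):
    """Stack m consecutive delays of the scalar series x into state vectors."""
    n = len(x) - m
    states = np.stack([x[i:i + m] for i in range(n)])
    targets = x[m:]                      # next sample for each delay vector
    return states, targets

def weighted_pca(states, n_components, decay=0.9):
    """Reduce the delay-vector dimension with PCA after down-weighting older
    delays (one plausible weighting; the paper's exact scheme may differ)."""
    weights = decay ** np.arange(states.shape[1])[::-1]
    ws = states * weights
    ws = ws - ws.mean(axis=0)
    _, _, vt = np.linalg.svd(ws, full_matrices=False)
    proj = vt[:n_components].T           # (m x n_components) projection
    return weights, proj

def fit_linear_predictor(z, y):
    """Placeholder one-step predictor: least-squares linear map from the
    reduced state z to the next sample y (stand-in for the neural network)."""
    A = np.hstack([z, np.ones((len(z), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def iterate_with_error_propagation(x, m, weights, proj, coef, eta=0.5):
    """Run the predictor along the measured series, feeding a fraction eta of
    each prediction error back into the next delay vector. eta=0 is pure
    one-step prediction; eta=1 is a free-running model."""
    preds = []
    buf = list(x[:m])                    # current delay vector, most recent last
    for t in range(m, len(x)):
        z = (np.array(buf) * weights) @ proj
        y_hat = np.append(z, 1.0) @ coef
        preds.append(y_hat)
        err = y_hat - x[t]
        # next state mixes the measurement with a fraction of the model error
        buf = buf[1:] + [x[t] + eta * err]
    return np.array(preds)

if __name__ == "__main__":
    # toy demonstration on a noisy sine wave (not chaotic; illustration only)
    x = np.sin(0.3 * np.arange(2000)) + 0.01 * np.random.randn(2000)
    m, k = 16, 4
    states, targets = delay_embed(x, m)
    weights, proj = weighted_pca(states, k)
    coef = fit_linear_predictor((states * weights) @ proj, targets)
    preds = iterate_with_error_propagation(x, m, weights, proj, coef, eta=0.5)
    print("mean squared prediction error:", np.mean((preds - x[m:]) ** 2))
```

In this reading, the error-propagation parameter `eta` interpolates between training on measured inputs and training on the model's own outputs, which is how a user-adjustable prediction horizon could be obtained; in the paper this fed-back state drives a neural network rather than the linear placeholder used here.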