An algorithm is introduced that trains a neural network to identify chaotic dynamics from a single measured time series. During training, the algorithm learns to make short-term predictions of the time series. At the same time, a criterion developed by Diks, van Zwet, Takens, and de Goede (1996) is monitored; it tests the hypothesis that the reconstructed attractors of the model-generated and measured data are the same. Training is stopped when the prediction error is low and the model passes this test. Two other features of the algorithm are (1) the way the state of the system, consisting of delays from the time series, has its dimension reduced by weighted principal component analysis, and (2) a user-adjustable prediction horizon obtained by "error propagation", partially propagating prediction errors to the next time step.

The algorithm is first applied to data from an experimentally driven chaotic pendulum, of which two of the three state variables are known. This comprehensive example shows how well the Diks test can distinguish between slightly different attractors. Second, the algorithm is applied to the same problem, but with one of the two known state variables ignored. Finally, we present a model for the laser data from the Santa Fe time-series competition (set A). It is the first model for these data that is not only useful for short-term prediction but also generates time series with chaotic characteristics similar to those of the measured data.
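Two ingredients of the abstract can be sketched in a few lines of code: building delay vectors from a scalar series and reducing their dimension with principal component analysis, and iterating one-step predictions with partial "error propagation". This is a minimal illustration, not the paper's implementation: it uses plain (unweighted) PCA rather than the weighted variant described above, the logistic map stands in for a measured chaotic series, and all function names are our own.

```python
import numpy as np

def delay_embed(x, dim, tau=1):
    """Stack delay vectors [x_t, x_{t+tau}, ..., x_{t+(dim-1)tau}] row-wise."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

def pca_reduce(X, k):
    """Project delay vectors onto the top-k principal components
    (unweighted PCA; the paper uses a weighted variant)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]

def iterate_with_error_propagation(predict, x, eta):
    """One-step predictions where a fraction eta of each prediction error
    is fed into the next input: eta=0 uses measured values only,
    eta=1 is a fully free-running iteration."""
    xhat = np.empty_like(x)
    xhat[0] = x[0]
    inp = x[0]
    for t in range(len(x) - 1):
        xhat[t + 1] = predict(inp)
        # next input = measured value plus eta times the propagated error
        inp = x[t + 1] + eta * (xhat[t + 1] - x[t + 1])
    return xhat

# Toy chaotic series: the logistic map x_{t+1} = 4 x_t (1 - x_t).
x = np.empty(500)
x[0] = 0.3
for t in range(499):
    x[t + 1] = 4.0 * x[t] * (1.0 - x[t])

Z = delay_embed(x, dim=5)
Zr, components = pca_reduce(Z, k=2)
print(Zr.shape)  # (496, 2)
```

Varying `eta` between 0 and 1 trades off short-term accuracy against long-term (free-run) behavior, which is the mechanism behind the user-adjustable prediction horizon mentioned above.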