This study shows how sensory-action sequences that imitate finite state machines (FSMs) can be learned by exploiting the deterministic dynamics of recurrent neural networks (RNNs). Our experiments indicated that every possible combinatorial sequence can be recalled by specifying its corresponding initial state value, and that fractal structures appear in this mapping from initial states to sequences once learning converges. We also observed that during learning the RNN dynamics evolves back and forth between periodic-window and chaotic regimes, and that the evolved dynamical structure remains structurally stable when a small amount of noise is added.
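The central claim, that a deterministic RNN recalls a distinct symbol sequence once its initial state value is specified, can be sketched in a few lines. This is an illustrative toy, not the paper's trained model: the network size, the random weights, and the binary readout are placeholder assumptions standing in for parameters that the paper obtains by learning.

```python
import numpy as np

# Toy sketch (assumed setup, not the paper's model): a small discrete-time
# RNN whose trajectory is fully determined by its initial state. The weights
# below are random placeholders; in the study they would be learned so that
# each initial state recalls a different FSM-imitating sequence.
rng = np.random.default_rng(0)

N = 8                                 # hidden units (arbitrary choice)
W = rng.normal(0.0, 1.5, (N, N))      # recurrent weights (placeholder)
W_out = rng.normal(0.0, 1.0, (2, N))  # readout to a 2-bit "action" (placeholder)

def recall(x0, steps=10):
    """Iterate the deterministic RNN from initial state x0 and
    binarize the readout at each step into a symbol sequence."""
    x = np.asarray(x0, dtype=float)
    seq = []
    for _ in range(steps):
        x = np.tanh(W @ x)                        # deterministic state update
        seq.append(tuple((W_out @ x > 0).astype(int)))  # 2-bit output symbol
    return seq

# Because the update is deterministic, the initial state alone selects
# which sequence is produced; different x0 generally yield different sequences.
seq_a = recall(rng.normal(size=N))
seq_b = recall(rng.normal(size=N))
print(seq_a)
print(seq_b)
```

In the study, the interesting structure lives precisely in this initial-state-to-sequence mapping: after learning converges, nearby initial states that recall different sequences are interleaved in a fractal pattern.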