We investigate the evolution of the performance of finite-context predictive models built upon the recurrent activations of two types of recurrent neural networks (RNNs) trained on strings generated according to the Reber grammar. The first network is a 2nd-order version of the Elman simple recurrent network, trained in a supervised manner to perform next-symbol prediction. The second is an unsupervised alternative: a 2nd-order RNN trained with the Bienenstock, Cooper and Munro (BCM) rule [3]. The BCM learning rule does not appear to organize the RNN state space so as to represent the states of the Reber automaton. Nevertheless, both RNNs behave as nonlinear iterated function systems (IFSs), and with a sufficiently large number of quantization centers both yield optimal prediction performance.
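To make the described pipeline concrete, the following Python/NumPy sketch drives a 2nd-order recurrent state map over Reber-grammar strings, quantizes the visited activations into a fixed number of centers, and estimates a next-symbol distribution per center. This is an illustrative reconstruction, not the authors' implementation: the untrained random state map (standing in for a trained or BCM-driven network), the state dimension, the number of centers K, the corpus size, and the Laplace smoothing are all assumptions; only the Reber grammar transition table is standard.

```python
import numpy as np

rng = np.random.default_rng(0)
SYMBOLS = "BTSXPVE"
SYM_IDX = {s: i for i, s in enumerate(SYMBOLS)}

# Reber grammar: state -> list of (emitted symbol, next state); None ends the string.
REBER = {
    0: [("B", 1)],
    1: [("T", 2), ("P", 3)],
    2: [("S", 2), ("X", 4)],
    3: [("T", 3), ("V", 5)],
    4: [("X", 3), ("S", 6)],
    5: [("P", 4), ("V", 6)],
    6: [("E", None)],
}

def reber_string():
    """Sample one string from the Reber grammar."""
    state, out = 0, []
    while state is not None:
        sym, state = REBER[state][rng.integers(len(REBER[state]))]
        out.append(sym)
    return out

# Untrained 2nd-order recurrent map: one transition matrix per input symbol,
# applied to the previous state through tanh (an IFS-like iterated map).
# Weights are random here; this is an assumption for illustration only.
DIM = 4
W = rng.normal(scale=0.7, size=(len(SYMBOLS), DIM, DIM))
b = rng.normal(scale=0.1, size=(len(SYMBOLS), DIM))

def run(symbols):
    """Return the sequence of state vectors visited while reading `symbols`."""
    h, states = np.zeros(DIM), []
    for s in symbols:
        h = np.tanh(W[SYM_IDX[s]] @ h + b[SYM_IDX[s]])
        states.append(h.copy())
    return states

# Collect (activation, next-symbol) pairs over a corpus of Reber strings.
X, y = [], []
for _ in range(500):
    s = reber_string()
    states = run(s)
    for t in range(len(s) - 1):
        X.append(states[t])
        y.append(SYM_IDX[s[t + 1]])
X, y = np.array(X), np.array(y)

# Quantize the activations into K centers with a few Lloyd (k-means) iterations.
K = 20
centers = X[rng.choice(len(X), K, replace=False)]
for _ in range(20):
    assign = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
    for k in range(K):
        if np.any(assign == k):
            centers[k] = X[assign == k].mean(axis=0)

# Finite-context predictive model: Laplace-smoothed next-symbol counts per center.
counts = np.ones((K, len(SYMBOLS)))
for k, sym in zip(assign, y):
    counts[k, sym] += 1
probs = counts / counts.sum(axis=1, keepdims=True)

# Average negative log-likelihood of the next symbol on the corpus.
nll = -np.mean(np.log(probs[assign, y]))
print(f"avg next-symbol NLL with K={K} centers: {nll:.3f}")
```

Varying K in this sketch is one way to probe the effect the abstract points to: even with an untrained (or poorly organized) state space, the IFS-like dynamics keep similar histories close together, so increasing the number of quantization centers improves the finite-context predictor.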