Recurrent neural networks are frequently used in the cognitive science community for modeling linguistic structures. Usually a more or less intensive training process is performed, but several studies have shown that untrained recurrent networks initialized with small weights can also be used successfully for this type of task. In this work we demonstrate that the state space organization of an untrained recurrent neural network can be significantly improved by choosing appropriate input representations. We support this claim experimentally on several linguistic time series.
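The effect described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual setup: the network size, weight ranges, and the one-hot input representation are assumptions chosen to show why an untrained recurrent network with small weights organizes its state space by recent input history (the Markovian architectural bias).

```python
import numpy as np

rng = np.random.default_rng(0)

n_symbols = 4   # alphabet size of a hypothetical symbolic sequence
n_hidden = 20   # hidden-layer size (illustrative choice)

# Untrained recurrent network: small random weights, never adjusted.
# Small recurrent weights give contractive dynamics, so the hidden
# state is dominated by the most recently seen symbols.
W_in = rng.uniform(-0.5, 0.5, (n_hidden, n_symbols))
W_rec = rng.uniform(-0.1, 0.1, (n_hidden, n_hidden))

def run(seq):
    """Collect hidden states for a sequence of symbol indices (one-hot inputs)."""
    h = np.zeros(n_hidden)
    states = []
    for s in seq:
        x = np.eye(n_symbols)[s]          # one-hot input representation
        h = np.tanh(W_in @ x + W_rec @ h)
        states.append(h.copy())
    return np.array(states)

seq = rng.integers(0, n_symbols, 400)
states = run(seq)

# States sharing the same last input symbol should cluster together,
# even though the network was never trained.
idx = range(100, 400)  # skip the initial transient
same = [np.linalg.norm(states[i] - states[j])
        for i in idx for j in idx if i < j and seq[i] == seq[j]]
diff = [np.linalg.norm(states[i] - states[j])
        for i in idx for j in idx if i < j and seq[i] != seq[j]]
print(np.mean(same) < np.mean(diff))
```

Changing the input representation (e.g., replacing the one-hot codes with other vectors) changes the geometry of these clusters, which is the lever the abstract refers to when it says the state space organization can be improved by choosing appropriate input representations.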