This work provides a short study of training algorithms suitable for adapting recurrent connectionist models to symbolic time series modeling tasks. We show that approaches based on Kalman filtering outperform standard gradient-based training algorithms. We propose a simple approximation to the Kalman filter with favorable computational requirements, and we demonstrate the superior performance of the proposed method on several linguistic time series taken from recently published papers.