Recent work has shown that second-order recurrent neural networks (2ORNNs) may be used to infer regular languages. This paper presents a modified version of the real-time recurrent learning (RTRL) algorithm used to train 2ORNNs, which learns the initial state in addition to the weights. The results of this modification, which adds extra flexibility at a negligible cost in time complexity, suggest that it may be used to improve the learning of regular languages when the size of the network is small.
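To make the idea concrete, the following is a minimal sketch, not the authors' implementation, of RTRL for a second-order RNN that carries a second set of sensitivities for the trainable initial state alongside the usual weight sensitivities. The class name `SecondOrderRTRL`, the end-of-string error on a single acceptance unit, the direct (unconstrained) update of the initial state, and all hyperparameters are illustrative assumptions.

```python
# Sketch: RTRL for a second-order RNN that learns the initial state s(0)
# in addition to the weights W (illustrative, not the paper's code).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SecondOrderRTRL:
    def __init__(self, n_states, n_symbols, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.N, self.K = n_states, n_symbols
        # Second-order weights W[j, i, k]: next state j from state i and symbol k.
        self.W = rng.uniform(-0.5, 0.5, (n_states, n_states, n_symbols))
        self.s0 = rng.uniform(0.0, 1.0, n_states)  # trainable initial state
        self.lr = lr

    def run(self, symbols):
        """Forward pass accumulating RTRL sensitivities.

        p[j, l, m, n] = d s_j / d W_{lmn};  q[j, m] = d s_j / d s0_m.
        """
        N, K, W = self.N, self.K, self.W
        s = self.s0.copy()
        p = np.zeros((N, N, N, K))
        q = np.eye(N)                       # d s(0) / d s0 is the identity
        for k in symbols:                   # k indexes a one-hot input symbol
            net = W[:, :, k] @ s            # net_j = sum_i W_{jik} s_i
            s_new = sigmoid(net)
            g = s_new * (1.0 - s_new)       # sigmoid derivative at net
            # Weight sensitivities:
            # p'_jlmn = g_j * (delta_{jl} s_m [n = k] + sum_i W_{jik} p_ilmn)
            p_new = np.einsum('ji,ilmn->jlmn', W[:, :, k], p)
            for j in range(N):
                p_new[j, j, :, k] += s
            p = g[:, None, None, None] * p_new
            # Initial-state sensitivities: q'_jm = g_j * sum_i W_{jik} q_im
            q = g[:, None] * (W[:, :, k] @ q)
            s = s_new
        return s, p, q

    def train_string(self, symbols, accept, out_unit=0):
        """One gradient step on a labelled string (end-of-string error)."""
        s, p, q = self.run(symbols)
        err = s[out_unit] - (1.0 if accept else 0.0)
        self.W -= self.lr * err * p[out_unit]   # update the weights ...
        # ... and the initial state; a reparameterization or clipping would
        # keep s0 inside the sigmoid's range, omitted here for brevity.
        self.s0 -= self.lr * err * q[out_unit]
        return err ** 2

# Usage example: strings over {0, 1} with an even number of 1s
# (a two-state regular language).
net = SecondOrderRTRL(n_states=3, n_symbols=2)
data = [([1, 1], True), ([1], False), ([0, 1, 1, 0], True), ([1, 0], False)]
for epoch in range(2000):
    for string, label in data:
        net.train_string(string, label)
```

Under these assumptions, the extra bookkeeping is the N-by-N matrix q, which is small next to the N-by-N-by-N-by-K weight sensitivities p that standard RTRL already maintains, consistent with the abstract's claim that learning the initial state adds flexibility at negligible cost in time complexity.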