There has been considerable interest in using discrete-time recurrent neural networks (DTRNN) to learn finite-state tasks, with promising results on the induction of simple finite-state machines from input-output strings. Parallel work has studied the computational power of DTRNN in connection with finite-state computation. This article describes a simple strategy for devising stable encodings of finite-state machines in computationally capable discrete-time recurrent neural architectures with sigmoid units, and shows in detail how this strategy may be applied to encode a general class of finite-state machines in a variety of commonly used first- and second-order recurrent neural networks. Unlike previous work, which either imposed restrictions on state values or relied on a detailed analysis of fixed-point attractors, our approach applies to any positive, bounded, strictly growing, continuous activation function. It uses simple bounding criteria derived from a study of the conditions under which the proposed encoding scheme guarantees that the DTRNN actually behaves as a finite-state machine.
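To illustrate the general idea of such an encoding (not the paper's exact construction), here is a minimal sketch for a hypothetical two-state automaton that tracks the parity of `a`'s. A second-order sigmoid network carries one state unit per automaton state; each transition delta(q_j, s_k) = q_i is encoded as a large weight `H` on the corresponding product connection, with bias `-H/2`, so the sigmoid saturates near 0 or 1 and the unit activations remain a stable, unambiguous representation of the current state. The gain `H = 10` and the toy automaton are illustrative choices, not values from the article.

```python
import math

# Hypothetical toy DFA: parity of 'a's over the alphabet {a, b}.
states = [0, 1]
symbols = {'a': 0, 'b': 1}
delta = {(0, 'a'): 1, (1, 'a'): 0, (0, 'b'): 0, (1, 'b'): 1}

H = 10.0  # large gain: pushes sigmoid outputs toward 0/1, keeping the encoding stable

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Second-order weights: W[i][j][k] = H iff delta(q_j, s_k) == q_i, else 0.
W = [[[H if delta[(j, s)] == i else 0.0 for s in symbols]
      for j in states] for i in states]

def run(string):
    """Drive the DTRNN with a string; return the index of the dominant state unit."""
    x = [1.0, 0.0]  # one-hot start state q0
    for c in string:
        k = symbols[c]
        u = [1.0 if m == k else 0.0 for m in range(len(symbols))]  # one-hot input
        # x_i[t] = g( sum_{j,m} W[i][j][m] * x_j[t-1] * u_m[t] - H/2 )
        x = [sigmoid(sum(W[i][j][m] * x[j] * u[m]
                         for j in states for m in range(len(symbols))) - H / 2)
             for i in states]
    return max(states, key=lambda i: x[i])

# The net should track the DFA state on sample strings.
for s in ["", "a", "ab", "aab", "ababa"]:
    q = 0
    for c in s:
        q = delta[(q, c)]
    assert run(s) == q
```

With `H = 10`, each state unit settles near 0.007 or 0.993 after every symbol, so the "winning" unit never drifts: this is the saturation-based stability argument in miniature, here checked empirically rather than via the bounding criteria the article develops.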