Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states of the minimal deterministic finite state machine that can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge predicts the dynamics in activation space, making RNN computation intelligible despite the complexity of the underlying activation dynamics. The theory provides a framework for understanding finite state machine (FSM) extraction techniques and can be used to improve training methods for RNNs that perform FSM computations. It also illustrates a successful approach to understanding a general class of complex systems whose internal structure was not explicitly designed, e.g., systems that have evolved or learned that structure.
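The core claim can be illustrated concretely. Below is a minimal sketch (not the paper's construction) of a hand-set analog recurrent update whose sign tracks the parity of 1s in the input, together with a simple FSM-extraction step: record activations, quantize them into clusters (here, by sign), and tabulate the induced transitions. The update rule, initial state, and sign-based quantizer are all illustrative assumptions; the point is that the quantized dynamics recover the minimal two-state parity automaton, as the theory predicts.

```python
import math
import random

def rnn_step(h, a):
    # Hand-set analog update (illustrative, not from the paper):
    # tanh(4h) keeps h near an attractor at +/-1; an input of 1 flips
    # its sign, so sign(h) tracks the parity of 1s seen so far.
    return math.tanh(4.0 * h * (1 - 2 * a))

def run(string, h0=0.9):
    # Return the trajectory of activations over an input string of 0s/1s.
    h = h0
    states = [h]
    for a in string:
        h = rnn_step(h, a)
        states.append(h)
    return states

def extract_fsm(strings):
    # FSM extraction by quantizing activation space: each cluster
    # (here, sign of h) becomes a machine state, and observed
    # (state, input) -> state pairs populate the transition table.
    q = lambda h: 0 if h > 0 else 1
    trans = {}
    for s in strings:
        states = run(s)
        for a, h, h_next in zip(s, states, states[1:]):
            trans[(q(h), a)] = q(h_next)
    return trans

random.seed(0)
samples = [[random.randint(0, 1) for _ in range(8)] for _ in range(50)]
fsm = extract_fsm(samples)
print(fsm)
# The extracted machine is the minimal 2-state parity DFA:
# 1-inputs toggle the state, 0-inputs are self-loops.
```

Each FSM state corresponds to an attracting region of activation space (here, neighborhoods of +1 and -1), so the quantization is robust: trajectories contract toward the attractors between symbol presentations, which is why the extracted transition table is deterministic and matches the minimal machine.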