Finite-state machines are the most pervasive models of computation, not only in theoretical computer science but also in all of its applications to real-life problems, and they constitute the best-characterized computational model. Neural networks, proposed almost sixty years ago by McCulloch and Pitts as a simplified model of nervous activity in living beings, have since evolved into a great variety of so-called artificial neural networks. Artificial neural networks have become a very successful tool for modelling and problem solving because of their built-in learning capability, but most of the progress in this field has occurred with models that are far removed from the behaviour of real, that is, biological, neural networks. This paper surveys the work that has established a connection between finite-state machines and (mainly discrete-time recurrent) neural networks, and it suggests possible ways to construct finite-state models in biologically plausible neural networks.
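The kind of connection the survey discusses can be illustrated with a small, hand-built example. The sketch below (an assumption of ours, not a construction taken from the paper; the DFA, weight magnitude `H`, and function names are all illustrative) follows the spirit of the second-order recurrent networks studied in this literature: the one-hot state vector and one-hot input symbol are contracted against a weight tensor, and a steep sigmoid keeps the network's state close to a corner of the hypercube, so it behaves like the two-state parity automaton.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two-state parity DFA over {0, 1}: reading '1' swaps the state,
# reading '0' keeps it. delta[(state, symbol)] = next_state.
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

# Second-order weights W[j, i, k]: large positive if the DFA maps
# (state i, symbol k) to state j, large negative otherwise. H = 10
# is an illustrative magnitude chosen to keep activations near 0/1.
H = 10.0
W = np.full((2, 2, 2), -H)
for (i, k), j in delta.items():
    W[j, i, k] = H

def run(string):
    h = np.array([1.0, 0.0])  # start in state 0 (one-hot encoding)
    for c in string:
        x = np.array([1.0, 0.0]) if c == '0' else np.array([0.0, 1.0])
        # Second-order update: h_j <- sigmoid(sum_{i,k} W[j,i,k] h_i x_k)
        h = sigmoid(np.einsum('jik,i,k->j', W, h, x))
    return h

# The most active unit tracks the DFA state: unit 1 is high
# exactly when the string contains an odd number of 1s.
print(np.argmax(run('1011')))  # 1: three 1s, odd parity
```

Because the sigmoid saturates, the small deviations from an exact one-hot vector do not accumulate, which is the essence of constructing a *stable* finite-state encoding in a recurrent network rather than merely approximating one for short inputs.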