Simple Strategies to Encode Tree Automata in Sigmoid Recursive Neural Networks
IEEE Transactions on Knowledge and Data Engineering
Finite-state computation in analog neural networks: steps towards biologically plausible models?
Emergent neural computational architectures based on neuroscience
On the Need for a Neural Abstract Machine
Sequence Learning - Paradigms, Algorithms, and Applications
Universal Approximation Capability of Cascade Correlation for Structures
Neural Computation
Group-Linking Method: A Unified Benchmark for Machine Learning with Recurrent Neural Network
IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences
In this paper we present an algebraic framework to represent finite state machines (FSMs) in single-layer recurrent neural networks (SLRNNs), which unifies and generalizes some of the previous proposals. This framework is based on the formulation of both the state transition function and the output function of an FSM as a linear system of equations, and it permits an analytical explanation of the representational capabilities of first-order and higher-order SLRNNs. The framework can be used to insert symbolic knowledge in RNNs prior to learning from examples and to keep this knowledge while training the network. This approach is valid for a wide range of activation functions, whenever some stability conditions are met. The framework has already been used in practice in a hybrid method for grammatical inference reported elsewhere (Sanfeliu and Alquézar 1994).
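The encoding idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's framework itself, but a standard construction in the same spirit: a hypothetical two-state even-parity automaton is written into the weights of a second-order (higher-order) SLRNN with sigmoid units, using a gain large enough that the stability conditions (saturated, near-one-hot state activations) hold. The automaton, the gain value, and all names are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical DFA: even parity over the alphabet {0, 1}.
# States: q0 (even count of 1s, accepting), q1 (odd count).
# delta[(state, symbol)] -> next state
delta = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

n_states, n_symbols = 2, 2
H = 10.0  # gain; chosen large enough that the sigmoid saturates near 0/1

# Second-order weights W[i, j, k]: contribution to next-state unit i
# from current-state unit j when reading input symbol k.
# +H programs the transition delta(j, k) = i; -H suppresses all others.
W = np.full((n_states, n_states, n_symbols), -H)
for (j, k), i in delta.items():
    W[i, j, k] = H

def run(word):
    """Run the encoded SLRNN on a list of symbols; True iff accepted."""
    x = np.eye(n_states)[0]          # start in q0, one-hot state vector
    for k in word:
        x = sigmoid(W[:, :, k] @ x)  # one state transition of the network
    return x[0] > 0.5                # accepting unit dominates -> accept
```

With the gain at H = 10 the activations stay within a small neighborhood of {0, 1} at every step, so the network's trajectory mirrors the automaton exactly: `run([1, 1, 0])` accepts (two 1s) while `run([1])` rejects. The same weight pattern extends to any DFA by enlarging the state and symbol dimensions.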