This paper proposes a method (the Group-Linking Method) that controls the complexity of the sequential function used to construct Finite Memory Machines (FMMs) of minimal order, that is, machines with the largest number of states for a given number of memory taps. Finding a machine with the maximum number of states is a nontrivial problem, because the total number of machines with memory order k is (256)^(2^(k-2)), an extremely large number. Based on an analysis of the Group-Linking Method, it is shown that the data necessary to reconstruct an FMM are the set of strings no longer than the depth of the machine plus one, which is significantly less than what traditional greedy-based machine learning algorithms require. The Group-Linking Method thus provides a useful, systematic way of generating unified benchmarks for evaluating the capability of machine learning techniques. One example is testing the learning capability of recurrent neural networks. The problem of encoding finite state machines in recurrent neural networks has been explored extensively; however, the great representational power of those networks does not guarantee that a solution, in the sense of learning, exists. Previous learning benchmarks are shown not to be structurally rich enough in terms of solutions in weight space. This set of benchmarks, with its great expressive power, can serve as a convenient framework in which to study the learning and computational capabilities of various network models. A fundamental understanding of the capabilities of these networks will allow users to select the most appropriate model for a given application.
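To give a rough sense of the scale involved, the sketch below (an illustration, not code from the paper) evaluates the candidate-machine count (256)^(2^(k-2)) quoted above for small memory orders k, and enumerates the binary strings of length at most depth + 1 that the abstract claims suffice to reconstruct an FMM. The function names, the binary alphabet, and the exact interpretation of "depth" are assumptions made for this example.

```python
# Hypothetical illustration of the abstract's two quantitative claims:
# (1) the FMM search space grows doubly exponentially in the memory order k;
# (2) the reconstruction data set is just the strings of length <= depth + 1.
from itertools import product


def num_machines(k: int) -> int:
    """Number of candidate machines with memory order k, per the abstract.

    (256)^(2^(k-2)) = 2^(2^(k+1)), i.e., the number of Boolean functions
    of k+1 binary variables (assumed binary input/output alphabets).
    """
    return 256 ** (2 ** (k - 2))


def training_strings(depth: int, alphabet=("0", "1")):
    """Yield all input strings of length at most depth + 1 over the alphabet."""
    for length in range(1, depth + 2):
        for symbols in product(alphabet, repeat=length):
            yield "".join(symbols)


if __name__ == "__main__":
    for k in (2, 3, 4, 5):
        print(f"k={k}: {num_machines(k)} candidate machines")
    # For a depth-3 machine the data set is only 2 + 4 + 8 + 16 = 30 strings.
    print(sum(1 for _ in training_strings(3)), "training strings for depth 3")
```

The contrast is the point of the benchmark: the search space explodes (already about 1.8 * 10^19 machines at k = 5), while the data needed for reconstruction stays tiny, which is what makes such machines attractive as learning benchmarks.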