Grammatical inference has been extensively studied in recent years as a result of its wide field of application, and in turn, recurrent neural networks have proved themselves to be a good tool for grammatical inference. The learning algorithms for these neural networks, however, have been far less studied than those for feed-forward neural networks. Classical training methods for recurrent neural networks suffer from becoming trapped in local minima and from high computational cost. In addition, selecting the optimal size of a neural network for a particular application is a difficult task. This suggests that both methods for determining optimal topologies and new training algorithms deserve study. In this paper, we present a multi-objective evolutionary algorithm that determines the optimal size of a recurrent neural network for a given application. We analyze this especially in the case of grammatical inference: in particular, we study how to establish the optimal size of a recurrent neural network to learn positive and negative examples of a given language, and how to determine the corresponding automaton using a self-organizing map once training has been completed.
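The automaton-extraction step rests on a well-known observation: the hidden states of a recurrent network trained on a regular language tend to cluster, and each cluster behaves like a state of a finite automaton. The sketch below illustrates that idea only; it is not the paper's method. It substitutes a simple 1-D k-means for the self-organizing map, and a hand-crafted "network" that tracks the parity of 1-bits (activations near 0.1 and 0.9 plus noise) for a trained RNN. The names `hidden_trace`, `extract_dfa`, and the constant `START` are illustrative assumptions, not from the paper.

```python
import random

random.seed(42)

START = 0.1  # assumed hidden activation of the initial state

def hidden_trace(bits):
    """Toy stand-in for a trained recurrent network: after each input
    bit, emit a noisy scalar hidden activation that clusters near 0.1
    (even parity of 1s) or 0.9 (odd parity)."""
    parity, trace = 0, []
    for b in bits:
        parity ^= b
        act = (0.9 if parity else 0.1) + random.uniform(-0.05, 0.05)
        trace.append((b, act))
    return trace

def nearest(v, centers):
    """Index of the cluster center closest to activation v."""
    return min(range(len(centers)), key=lambda i: abs(v - centers[i]))

def kmeans_1d(values, k=2, iters=25):
    """Minimal 1-D k-means, used here in place of the paper's SOM."""
    centers = [min(values), max(values)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in values:
            groups[nearest(v, centers)].append(v)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

def extract_dfa(strings):
    """Cluster the hidden activations seen over a set of input strings
    and read off a transition table (state, input) -> state."""
    traces = [hidden_trace(s) for s in strings]
    acts = [a for t in traces for _, a in t]
    centers = kmeans_1d(acts)
    delta = {}
    for t in traces:
        prev = nearest(START, centers)
        for bit, act in t:
            cur = nearest(act, centers)
            delta.setdefault((prev, bit), cur)
            prev = cur
    return delta

strings = [[random.randint(0, 1) for _ in range(8)] for _ in range(50)]
dfa = extract_dfa(strings)
# For parity we expect a 2-state machine: 0 leaves the state unchanged,
# 1 toggles it.
```

Because the toy activations are well separated, the recovered transition table is exactly the 2-state parity automaton; with a real trained network, the number of clusters (and hence states) is precisely what the paper's multi-objective size selection is meant to keep small.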