Evolving artificial neural networks (ANNs) is a relatively new approach that has been applied not only to training but also to the structure-optimization problem. Several such methods have been reported in recent years; most are either very complicated or operate under restrictions imposed by the designer. In this work, three new classes of evolutionary algorithms for training self-organized neural networks are presented. First, a modified genetic algorithm (MGA) is used both to evolve and train a population of multi-layer perceptron (MLP) networks and to find a (near-)optimal network architecture. This method generalises an existing one and is applied here for the first time to biosignal prediction. A second approach, which combines ideas from evolutionary computation and adaptive signal processing, treats the neural unit as a non-linear system with p inputs and one output, so that a localized extended Kalman filter (LEKF) can serve as the training algorithm for such a neuron. A population of such systems with random numbers of inputs is created, and genetic operators are then applied to evolve the systems' structure, using the inverse of the mean squared error (MSE) as the fitness function. After a small number of iterations the algorithm converges, by minimizing the MSE, to a near-optimal network. A generalisation of this method, in which the evolution is performed in the hidden layer, is also presented. Each of the proposed classes of algorithms was tested on artificial as well as real-world data.
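The core evolutionary loop described above (a population of candidate architectures, genetic operators on the structure, and fitness taken as the inverse of the MSE) can be sketched as follows. This is a minimal illustration, not the authors' MGA or LEKF method: the genome is reduced to a single hidden-layer size, and each candidate is evaluated cheaply by fixing random hidden weights and solving the output layer by least squares, as a stand-in for full training. All data, parameter values, and the truncation-selection scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a noisy 1-D signal standing in for a biosignal to predict.
t = np.linspace(0, 4 * np.pi, 200)
X = t[:, None]
y = np.sin(t) + 0.1 * rng.standard_normal(t.size)

def fitness(hidden, X, y):
    """Score one architecture: fix random hidden weights, fit the output
    layer by least squares, and return the inverse-MSE-style fitness
    1 / (1 + MSE) (the +1 avoids division by zero)."""
    W = rng.standard_normal((X.shape[1], hidden))
    b = rng.standard_normal(hidden)
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights
    mse = np.mean((H @ beta - y) ** 2)
    return 1.0 / (1.0 + mse)

# Genome = number of hidden units; evolve by selection + mutation.
pop = rng.integers(1, 30, size=10)
for generation in range(20):
    scores = np.array([fitness(h, X, y) for h in pop])
    parents = pop[np.argsort(scores)[-5:]]              # truncation selection
    children = np.clip(parents + rng.integers(-2, 3, size=5), 1, 50)
    pop = np.concatenate([parents, children])           # elitist replacement

best = int(pop[np.argmax([fitness(h, X, y) for h in pop])])
print("best hidden size:", best)
```

Because the fitness rewards low MSE directly, the loop converges toward hidden-layer sizes that fit the signal well; in the paper's full method the same principle drives the evolution of the complete network structure, with proper training (e.g. the LEKF) in place of the least-squares shortcut used here.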