The paper describes two schemes that follow the model of Lamarckian evolution and combine differential evolution (DE), a population-based stochastic global search method, with the local optimization algorithm of conjugate gradients (CG). In the first scheme, each offspring is fine-tuned by CG before competing with its parent. In the second, CG is used to improve both parents and offspring, in a manner that is completely seamless for individuals that survive more than one generation. Experiments involved training the weights of feed-forward neural networks on three synthetic and four real-life problems. In six of the seven cases, the DE-CG hybrid, which preserves and reuses information about each solution's local optimization process, outperformed two recent variants of DE.
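The first scheme can be illustrated with a minimal sketch: a standard DE/rand/1/bin loop in which each trial vector is locally refined before the selection step, and the refined genotype is written back into the population (the Lamarckian step). This is an illustrative assumption-laden toy, not the paper's implementation: the sphere function stands in for a network's training error, and a few plain gradient-descent steps stand in for conjugate gradients.

```python
import random

def sphere(x):
    """Toy objective standing in for a network's training error."""
    return sum(v * v for v in x)

def grad_sphere(x):
    return [2.0 * v for v in x]

def refine(x, steps=5, lr=0.1):
    # Local search; plain gradient descent stands in for CG here.
    for _ in range(steps):
        g = grad_sphere(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def lamarckian_de(dim=5, pop_size=20, gens=50, F=0.5, CR=0.9, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1 mutation from three distinct individuals (not i)
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            mutant = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                      for k in range(dim)]
            # Binomial crossover with at least one mutant component
            jrand = rng.randrange(dim)
            trial = [mutant[k] if (rng.random() < CR or k == jrand)
                     else pop[i][k] for k in range(dim)]
            trial = refine(trial)   # fine-tune the offspring first ...
            if sphere(trial) <= sphere(pop[i]):
                pop[i] = trial      # ... then let it compete with its parent
    return min(pop, key=sphere)

best = lamarckian_de()
```

Because the refined genotype (not just its fitness) replaces the parent, improvements found by local search are inherited by later generations, which is what distinguishes the Lamarckian model from a Baldwinian one.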