In Nature, living beings improve their adaptation to the surrounding environment through two main orthogonal processes: evolution and lifetime learning. Within Artificial Intelligence, both mechanisms have inspired the development of unconventional problem-solving tools, namely Genetic and Evolutionary Algorithms (GEAs) and Artificial Neural Networks (ANNs). Several gradient-based methods have been developed for ANN training with considerable success; however, in some situations they may become trapped in local minima of the error surface. In such cases, combining evolution and learning techniques can yield better results, ideally reaching the global optimum. Comparative tests carried out on classification and regression tasks support this claim.
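To make the combination of evolution and lifetime learning concrete, the sketch below shows one common Lamarckian hybrid, assuming a real-coded GEA over the flat weight vector of a one-hidden-layer network: each genome is first refined by a few gradient-descent steps (lifetime learning), the refined weights are written back into the genome, and selection, crossover, and mutation then act on the learned individuals. This is a minimal illustration under those assumptions, not the authors' implementation; names such as `lamarckian_train`, `n_hidden`, and the toy regression task are hypothetical.

```python
# Minimal sketch of a Lamarckian GEA + gradient-descent hybrid for ANN training.
# Assumptions: 1-hidden-layer MLP, tanh hidden units, MSE loss, real-coded genomes.
import numpy as np

rng = np.random.default_rng(0)

def unpack(w, n_in, n_hidden):
    """Split a flat genome into the two weight matrices of the MLP."""
    k = n_hidden * (n_in + 1)
    W1 = w[:k].reshape(n_hidden, n_in + 1)      # hidden layer (with bias column)
    W2 = w[k:].reshape(1, n_hidden + 1)         # output layer (with bias column)
    return W1, W2

def forward(w, X, n_hidden):
    W1, W2 = unpack(w, X.shape[1], n_hidden)
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])    # append bias input
    H = np.tanh(Xb @ W1.T)
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])
    return Hb @ W2.T, Xb, H, Hb, W1, W2

def mse(w, X, y, n_hidden):
    out, *_ = forward(w, X, n_hidden)
    return float(np.mean((out.ravel() - y) ** 2))

def gradient_step(w, X, y, n_hidden, lr=0.05, steps=5):
    """Lifetime learning: a few plain gradient-descent steps on the MSE."""
    w = w.copy()
    for _ in range(steps):
        out, Xb, H, Hb, W1, W2 = forward(w, X, n_hidden)
        err = (out.ravel() - y)[:, None]             # (N, 1) residuals
        gW2 = 2 * err.T @ Hb / len(y)                # output-layer gradient
        dH = (err @ W2[:, :-1]) * (1 - H ** 2)       # back-prop through tanh
        gW1 = 2 * dH.T @ Xb / len(y)                 # hidden-layer gradient
        w -= lr * np.concatenate([gW1.ravel(), gW2.ravel()])
    return w

def lamarckian_train(X, y, n_hidden=5, pop=20, gens=50):
    dim = n_hidden * (X.shape[1] + 1) + (n_hidden + 1)
    P = rng.normal(0, 0.5, size=(pop, dim))          # initial population of genomes
    for _ in range(gens):
        # Lamarckian step: refine every genome by learning and keep the result.
        P = np.array([gradient_step(w, X, y, n_hidden) for w in P])
        fit = np.array([mse(w, X, y, n_hidden) for w in P])
        elite = P[np.argsort(fit)[: pop // 2]]       # truncation selection
        # Offspring: arithmetic crossover of random elite parents + Gaussian mutation.
        pa = elite[rng.integers(len(elite), size=pop - len(elite))]
        pb = elite[rng.integers(len(elite), size=pop - len(elite))]
        children = 0.5 * (pa + pb) + rng.normal(0, 0.1, size=pa.shape)
        P = np.vstack([elite, children])
    return min(P, key=lambda w: mse(w, X, y, n_hidden))

if __name__ == "__main__":
    # Toy regression task: learn y = sin(x) on [-3, 3].
    X = np.linspace(-3, 3, 60)[:, None]
    y = np.sin(X).ravel()
    w = lamarckian_train(X, y)
    print("final MSE:", mse(w, X, y, n_hidden=5))
```

A Baldwinian variant would use the refined weights only to compute fitness while leaving the genomes untouched; the Lamarckian write-back shown here is what lets the gradient search directly shape the evolving population.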