Training a neural network is a difficult optimization problem because of its numerous local minima. Many global search algorithms have been used to train neural networks. However, local search algorithms use computational resources more efficiently, so numerous random restarts with a local algorithm may be more effective than a single run of a global algorithm. This study uses Monte Carlo simulations to measure the efficiency of a local search algorithm relative to nine stochastic global algorithms when training neural networks on function approximation problems. The computational requirements of the global algorithms are several times higher than those of the local algorithm, and they offer little gain in return: the global algorithms only marginally outperform the local algorithm in finding lower local minima. For the specific algorithms and function approximation problems studied, the results therefore provide little evidence that a global algorithm should be preferred over a more traditional local optimization routine for training neural networks. Further, a neural network should not be estimated from a single set of starting values, whether a global or a local optimization method is used.
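The multistart strategy the abstract favors can be stated concretely. Below is a minimal sketch, not the paper's code: it trains a one-hidden-layer tanh network on a toy function approximation problem with a local optimizer (SciPy's BFGS) from many random starting values and keeps the best local minimum found. The network size, target function, restart count, and all names here are illustrative assumptions.

```python
# Sketch of multistart local optimization for neural network training.
# Not the paper's implementation; toy problem and settings are assumed.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy 1-D function-approximation data (assumed for illustration).
x = np.linspace(-2, 2, 100).reshape(-1, 1)
y = np.sin(3 * x).ravel()

H = 5  # hidden units

def unpack(theta):
    """Split the flat parameter vector into network weights."""
    w1 = theta[:H].reshape(1, H)            # input -> hidden weights
    b1 = theta[H:2 * H]                     # hidden biases
    w2 = theta[2 * H:3 * H].reshape(H, 1)   # hidden -> output weights
    b2 = theta[3 * H]                       # output bias
    return w1, b1, w2, b2

def sse(theta):
    """Sum-of-squared-errors training objective."""
    w1, b1, w2, b2 = unpack(theta)
    hidden = np.tanh(x @ w1 + b1)
    pred = (hidden @ w2).ravel() + b2
    return np.sum((pred - y) ** 2)

n_params = 3 * H + 1
best = None
for restart in range(25):  # numerous random restarts of a local search
    theta0 = rng.normal(scale=1.0, size=n_params)
    res = minimize(sse, theta0, method="BFGS")  # local optimizer
    if best is None or res.fun < best.fun:
        best = res

print(f"best SSE over restarts: {best.fun:.6f}")
```

Each restart converges to some local minimum; keeping the best over many cheap local runs is the alternative that the study weighs against a single, more expensive global search.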