Avoiding local minima in feedforward neural networks by simultaneous learning
AI'07 Proceedings of the 20th Australian joint conference on Advances in artificial intelligence
A neural network experiment on the site-specific simulation of potato tuber growth in Eastern Canada
Computers and Electronics in Agriculture
Human lower extremity joint moment prediction: A wavelet neural network approach
Expert Systems with Applications: An International Journal
Conventional neural-network training algorithms often get stuck in local minima. To find the global optimum, training is conventionally repeated with ten or so random starting values for the weights. Here we develop an analytical procedure to determine how many times a neural network needs to be trained, with random starting weights, to ensure that the best of those trainings is within a desirable lower percentile of all possible trainings, with a certain level of confidence. The theoretical developments are validated by experimental results. While applied here to neural-network training, the method is generally applicable to nonlinear optimization.
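The core of this kind of procedure can be sketched with a standard order-statistics argument (a minimal illustration, not the paper's exact derivation): each independent restart lands in the top fraction p of all possible training outcomes with probability p, so requiring that at least one of n restarts does so with confidence c gives 1 - (1 - p)^n >= c, i.e. n >= ln(1 - c) / ln(1 - p).

```python
import math

def restarts_needed(percentile: float, confidence: float) -> int:
    """Number of independent random-restart trainings needed so that,
    with probability >= `confidence`, the best run falls within the
    top `percentile` fraction of all possible training outcomes.

    The chance that none of n restarts lands in the top fraction is
    (1 - percentile)**n; requiring 1 - (1 - percentile)**n >= confidence
    yields n >= ln(1 - confidence) / ln(1 - percentile).
    """
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - percentile))

# To be 95% confident the best run is within the top 10% of outcomes:
print(restarts_needed(0.10, 0.95))  # -> 29
```

Note that under this model the required number of restarts grows only logarithmically in (1 - c) but roughly as 1/p for small p, which is why the customary "ten or so" restarts suffices only for loose percentile targets.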