Prediction of subsidence due to underground mining by artificial neural networks
Computers & Geosciences
Sign-based learning schemes for pattern classification
Pattern Recognition Letters - Special issue: Artificial neural networks in pattern recognition
Improved sign-based learning algorithm derived by the composite nonlinear Jacobi process
Journal of Computational and Applied Mathematics - Special issue: The international conference on computational methods in sciences and engineering 2004
A simulated annealing-based learning algorithm for block-diagonal recurrent neural networks
AIA'06 Proceedings of the 24th IASTED international conference on Artificial intelligence and applications
The multi-phase method in fast learning algorithms
IJCNN'09 Proceedings of the 2009 international joint conference on Neural Networks
Stopping and restarting strategy for stochastic sequential search in global optimization
Journal of Global Optimization
Hybrid training of feed-forward neural networks with particle swarm optimization
ICONIP'06 Proceedings of the 13th international conference on Neural Information Processing - Volume Part II
Feed-Forward neural network using SARPROP algorithm and its application in radar target recognition
ISNN'06 Proceedings of the Third international conference on Advances in Neural Networks - Volume Part II
A hybrid algorithm for artificial neural network training
Engineering Applications of Artificial Intelligence
A problem with gradient descent algorithms is that they can converge to poorly performing local minima. Global optimization algorithms address this problem, but at the cost of greatly increased training times. This work examines combining gradient descent with the global optimization technique of simulated annealing (SA). Simulated annealing, in the form of noise and weight decay, is added to resilient backpropagation (RPROP), a powerful gradient descent algorithm for training feedforward neural networks. The resulting algorithm, SARPROP, is shown through various simulations not only to escape local minima but also to maintain, and often improve on, the training times of the RPROP algorithm. In addition, SARPROP may be used with a restart training phase, which allows a more thorough search of the error surface and provides an automatic annealing schedule.
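The combination the abstract describes can be illustrated with a short sketch: the RPROP sign rule (grow the per-weight step while the gradient sign persists, shrink it on a sign change) augmented with temperature-scaled noise and weight decay as the simulated-annealing ingredients. This is a minimal illustration under stated assumptions, not the published SARPROP pseudocode; the hyperparameter values, the geometric temperature schedule, and where exactly the noise is injected are all assumptions made for the example.

```python
import numpy as np

def sarprop_step(w, grad, prev_grad, step, temperature, rng,
                 eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0,
                 noise_scale=0.01, decay=1e-4):
    """One SARPROP-style update (an illustrative sketch, not the paper's rule)."""
    # Weight decay: bias the search toward smaller weights.
    g = grad + decay * w
    same_sign = g * prev_grad > 0
    flipped = g * prev_grad < 0
    # RPROP sign rule: grow the step while the gradient sign persists,
    # shrink it when the sign flips (we overshot a minimum).
    step = np.where(same_sign, np.minimum(step * eta_plus, step_max), step)
    step = np.where(flipped, np.maximum(step * eta_minus, step_min), step)
    # Simulated-annealing ingredient: temperature-scaled noise added where the
    # sign flipped, so early in training the search can escape local minima.
    noise = noise_scale * temperature * rng.standard_normal(w.shape)
    step = np.where(flipped, step + np.abs(noise), step)
    w = w - np.sign(g) * step
    # After a sign flip, forget the stored gradient (iRPROP-style) so the
    # next iteration neither grows nor shrinks the step.
    prev_grad = np.where(flipped, 0.0, g)
    return w, prev_grad, step

# Usage: minimize a bumpy 1-D function with several local minima.
rng = np.random.default_rng(0)
f = lambda w: w**2 + 2.0 * np.sin(5.0 * w)        # many local minima
df = lambda w: 2.0 * w + 10.0 * np.cos(5.0 * w)
w = np.array([3.0])
prev_g, step = np.zeros_like(w), np.full_like(w, 0.1)
for t in range(200):
    temperature = 0.95 ** t   # geometric annealing schedule (an assumption)
    w, prev_g, step = sarprop_step(w, df(w), prev_g, step, temperature, rng)
print(float(f(w)))            # settles near one of the local minima
```

Because the noise term is multiplied by the temperature, the algorithm behaves like a stochastic global search early on and decays toward plain RPROP as training proceeds, which is how the abstract's restart phase can reuse the same schedule from a fresh starting temperature.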