Convexification for data fitting
Journal of Global Optimization
A method of training multilayer perceptrons (MLPs) to reach a global or nearly global minimum of the standard mean squared error (MSE) criterion is proposed. It has been found that the region of the weight space containing no local minimum of the normalized risk-averting error (NRAE) criterion expands strictly to the entire weight space as the risk-sensitivity index increases to infinity. If the MLP under training has enough hidden neurons, the MSE and NRAE criteria are both nearly zero at a global or nearly global minimum, so training the MLP with the NRAE at a sufficiently large risk-sensitivity index effectively avoids non-global local minima. Numerical experiments show consistently successful convergence from different initial weight guesses at a risk-sensitivity index above 10^6. The experiments use examples with non-global local minima of the MSE criterion that are difficult to escape when training directly with the MSE criterion.
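The abstract does not state the NRAE formula; the sketch below assumes the standard form from the risk-averting-error literature, C_lambda(w) = (1/lambda) ln((1/K) sum_k exp(lambda e_k(w))), where e_k(w) is the squared error of the MLP on training sample k, K is the number of samples, and lambda is the risk-sensitivity index. As lambda tends to 0 this criterion reduces to the MSE, and as lambda tends to infinity it approaches the largest per-sample squared error, which is why a large index concentrates training on the worst-fit samples. A minimal NumPy sketch under that assumption (the function names are illustrative, not from the paper); the shifted log-sum-exp is needed because lambda can exceed 10^6:

import numpy as np

def nrae_loss(errors_sq, lam):
    # Assumed NRAE form: C_lambda = (1/lambda) * log(mean(exp(lambda * e_k))).
    # A shifted log-sum-exp keeps exp() from overflowing at the very large
    # risk-sensitivity indices (over 10^6) reported in the abstract.
    z = lam * np.asarray(errors_sq, dtype=float)
    zmax = z.max()
    return (zmax + np.log(np.mean(np.exp(z - zmax)))) / lam

def nrae_sample_weights(errors_sq, lam):
    # Gradient of the assumed NRAE with respect to the per-sample squared
    # errors: dC/de_k = softmax(lambda * e)_k. NRAE training thus rescales
    # each sample's MSE gradient, and a large lambda focuses the weight
    # updates on the currently worst-fit samples.
    z = lam * np.asarray(errors_sq, dtype=float)
    w = np.exp(z - z.max())
    return w / w.sum()

# Limiting behavior on example per-sample squared errors:
e = [0.01, 0.02, 0.50]
print(nrae_loss(e, 1e-6))  # ~0.1767, close to the MSE (mean of e)
print(nrae_loss(e, 1e+6))  # ~0.5000, close to the worst-case error (max of e)

In a full training loop one would compute the per-sample squared errors from the MLP's outputs and scale each sample's MSE gradient by nrae_sample_weights before the weight update; this is only a sketch of the assumed criterion, not of the paper's optimization procedure.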