This paper describes a robust training algorithm based on a quasi-Newton process in which online and batch error functions are combined through a weighting coefficient. The coefficient is adjusted so that the algorithm gradually shifts from online to batch training. Furthermore, an analogy is drawn between this algorithm and the Langevin algorithm, a gradient-based continuous optimization method that incorporates the Simulated Annealing concept. Neural network training experiments demonstrate the validity of the combined algorithm, which achieves more robust training and more accurate generalization than other quasi-Newton based training algorithms.
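The core idea — blending an online (single-sample) error with the batch error via a weighting coefficient that is annealed so training moves gradually from online to batch — can be sketched as follows. This is a minimal illustration, not the paper's method: it uses plain gradient steps on a linear least-squares model instead of a quasi-Newton update, and the names (`train`, `lam`, the linear annealing schedule) are assumptions for the sake of the example.

```python
import numpy as np

def batch_error(w, X, y):
    """Batch error: mean squared error over the whole dataset."""
    r = X @ w - y
    return 0.5 * np.mean(r ** 2)

def batch_grad(w, X, y):
    """Gradient of the batch error."""
    return X.T @ (X @ w - y) / len(y)

def sample_grad(w, x, t):
    """Gradient of the online (single-sample) error."""
    return (x @ w - t) * x

def train(X, y, steps=2000, lr=0.1, seed=0):
    """Combined training: the weighting coefficient lam is annealed
    from 0 (pure online, noisy like a Langevin/SA update) to 1
    (pure batch), so the stochastic noise decays as training proceeds."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for k in range(steps):
        lam = k / (steps - 1)  # illustrative linear schedule: online -> batch
        i = rng.integers(len(y))
        g = (1 - lam) * sample_grad(w, X[i], y[i]) + lam * batch_grad(w, X, y)
        w -= lr * g
    return w
```

Early in training the single-sample gradients inject stochastic noise, playing the role of the Langevin noise term; as `lam` approaches 1 the update becomes a deterministic batch step, mirroring the annealing of the noise temperature in Simulated Annealing.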