In this paper we propose a framework for developing globally convergent batch training algorithms with an adaptive learning rate. The framework provides conditions under which global convergence is guaranteed for such algorithms; to this end, the learning rate is appropriately tuned along the given descent direction. By imposing conditions on the search direction and on the corresponding stepsize length, the framework can also guarantee global convergence for training algorithms that use a different learning rate for each weight. Simulation results are provided to illustrate the effectiveness of the proposed approach on various training algorithms.
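The abstract does not spell out the convergence conditions themselves, so the following Python snippet is only a minimal sketch of the general idea: the learning rate is tuned along the negative-gradient descent direction by backtracking until an Armijo-type sufficient-decrease condition holds, which is the classical kind of condition under which global convergence of descent methods is established. All names here (train_batch, loss, grad, the toy quadratic) are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def train_batch(loss, grad, w, lr0=1.0, beta=0.5, sigma=1e-4,
                    max_iters=1000, tol=1e-6):
        """Batch gradient training with an adaptively tuned learning rate.

        Sketch only: at each iteration the learning rate is shrunk along
        the descent direction d = -grad(w) until an Armijo-type
        sufficient-decrease condition holds,
            loss(w + lr*d) <= loss(w) + sigma * lr * grad(w)^T d,
        a standard example of a condition that yields global convergence.
        """
        for _ in range(max_iters):
            g = grad(w)
            if np.linalg.norm(g) < tol:   # (approximately) stationary point
                break
            d = -g                        # descent direction
            lr = lr0
            # Backtrack: shrink lr until sufficient decrease is achieved.
            while loss(w + lr * d) > loss(w) + sigma * lr * g.dot(d):
                lr *= beta
            w = w + lr * d
        return w

    # Usage on a toy quadratic loss (illustrative only):
    A = np.array([[3.0, 0.5], [0.5, 1.0]])
    loss = lambda w: 0.5 * w.dot(A).dot(w)
    grad = lambda w: A.dot(w)
    w_star = train_batch(loss, grad, np.array([1.0, -2.0]))

The same test plausibly extends to the per-weight case mentioned in the abstract: replacing lr * d with R @ d for a diagonal matrix R of individual rates stays within such a framework as long as R @ d remains a descent direction, which is the kind of extra condition on the search direction the abstract alludes to.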