It is pointed out that the so-called momentum method, widely used in the neural network literature to accelerate backpropagation, is a stationary version of the conjugate gradient method. Connections with the continuous optimization method known as heavy ball with friction are also made. In both cases, adaptive (dynamic) choices of the learning rate and momentum parameters are obtained through a control Liapunov function analysis of the system.
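To make the stationary (fixed-parameter) version of the method concrete, the following sketch runs the heavy-ball (momentum) iteration on a quadratic objective. The constant learning rate and momentum values used here are the classical choices built from the extreme eigenvalues of the Hessian; they are illustrative assumptions, not the adaptive control-Liapunov parameters derived in the paper.

```python
import numpy as np

def heavy_ball(A, b, x0, alpha, beta, iters=200):
    """Heavy-ball (momentum) iteration for f(x) = 0.5 x^T A x - b^T x:
    x_{k+1} = x_k - alpha * grad f(x_k) + beta * (x_k - x_{k-1})."""
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(iters):
        grad = A @ x - b                      # gradient of the quadratic
        x_next = x - alpha * grad + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])        # symmetric positive definite
b = np.array([1.0, 1.0])
x_star = np.linalg.solve(A, b)                # exact minimizer for comparison

# Classical stationary parameters from the extreme eigenvalues of A
# (an illustrative choice, not the paper's adaptive scheme).
lam = np.linalg.eigvalsh(A)
L, mu = lam[-1], lam[0]
alpha = 4.0 / (np.sqrt(L) + np.sqrt(mu)) ** 2
beta = ((np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))) ** 2

x = heavy_ball(A, b, np.zeros(2), alpha, beta)
print(np.allclose(x, x_star, atol=1e-8))      # prints True
```

With a fixed alpha and beta the iteration is "stationary"; the paper's point is that choosing these two parameters adaptively at each step recovers the conjugate gradient method on quadratics.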