Comparative Study of the CG and HBF ODEs Used in the Global Minimization of Nonconvex Functions
Proceedings of the 19th International Conference on Artificial Neural Networks (ICANN '09), Part I
Proceedings of the 2009 International Joint Conference on Neural Networks (IJCNN '09)
This paper presents a unified way to design neural networks, characterized as second-order ordinary differential equations (ODEs), that aim to find the global minimum of nonconvex scalar functions. These neural networks, alternatively referred to as continuous-time algorithms, are interpreted as closed-loop dynamical control systems, and the design is based on the control Liapunov function (CLF) method. For nonconvex scalar functions, the goal of these algorithms is to produce trajectories that, starting from an arbitrarily chosen initial guess, do not get stuck in local minima, thereby increasing the chances of convergence to the global minimum.
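The escape-from-local-minima behavior can be illustrated with a minimal sketch (not the paper's exact CLF design): a heavy-ball-with-friction (HBF) second-order ODE, x'' + γx' + ∇f(x) = 0, integrated with semi-implicit Euler on a tilted double-well function. The test function, damping gain, and step sizes below are illustrative choices, not taken from the paper. The momentum term lets the trajectory climb out of the shallow local minimum near x = +1 and settle near the global minimum at x ≈ -1, whereas plain first-order gradient flow from the same start stays trapped.

```python
def f(x):
    """Tilted double well: local minimum near x = +1, global minimum near x = -1."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    """Derivative of f."""
    return 4.0 * x * (x * x - 1.0) + 0.3

def heavy_ball(x0, gamma=0.3, dt=1e-3, steps=30000):
    """Second-order trajectory x'' = -gamma*x' - grad f(x), semi-implicit Euler."""
    x, v = x0, 0.0
    for _ in range(steps):
        v += dt * (-gamma * v - grad(x))  # momentum carries x over the barrier at x = 0
        x += dt * v
    return x

def gradient_flow(x0, dt=1e-2, steps=5000):
    """First-order comparison: x' = -grad f(x) (no momentum)."""
    x = x0
    for _ in range(steps):
        x -= dt * grad(x)
    return x

x_hb = heavy_ball(1.5)     # settles in the global basin (x < 0)
x_gd = gradient_flow(1.5)  # trapped in the local basin (x > 0)
print(f"heavy ball -> {x_hb:.3f}, gradient flow -> {x_gd:.3f}")
```

Both trajectories start from the same initial guess x = 1.5 in the basin of the local minimum; only the second-order dynamics reach the lower well, which is the qualitative advantage the abstract ascribes to these ODE-based algorithms.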