Dynamical-system (or ODE) and neural-network approaches to optimization have coexisted for two decades. The main feature of both approaches is that they generate a continuous path starting from the initial point, and this path eventually converges to the solution. This is quite different from conventional optimization methods, which generate a sequence of points, i.e., a discrete path. Although the dynamical-system and neural-network approaches share many common features and structures, a complete comparison of the two has not been available. In this paper, based on a detailed study of the two approaches, a new approach, termed the neurodynamical approach, is introduced. The new neurodynamical approach combines the attractive features of both the dynamical-system (or ODE) and neural-network approaches. In addition, it suggests a systematic procedure and framework for constructing a neurodynamical system for both unconstrained and constrained problems. In analyzing the stability of the underlying dynamical (or ODE) system, the neurodynamical approach adopts a new strategy that avoids the Lyapunov function. Under this framework, strong theoretical results as well as promising numerical results are obtained.
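To illustrate the continuous-path idea described above, the following is a minimal Python sketch of a plain gradient-flow system dx/dt = -grad f(x), integrated by forward Euler on the two-dimensional Rosenbrock function. The test function, step size, and stopping rule are illustrative assumptions only; the sketch does not reproduce the paper's specific neurodynamical construction or its Lyapunov-free stability analysis.

```python
import numpy as np


def rosenbrock_grad(x):
    """Gradient of the 2-D Rosenbrock function f(x) = (1 - x0)^2 + 100 (x1 - x0^2)^2."""
    g = np.empty(2)
    g[0] = -2.0 * (1.0 - x[0]) - 400.0 * x[0] * (x[1] - x[0] ** 2)
    g[1] = 200.0 * (x[1] - x[0] ** 2)
    return g


def gradient_flow(grad, x0, step=1e-3, max_steps=500_000, tol=1e-6):
    """Follow dx/dt = -grad f(x) by forward Euler and return the discrete
    trajectory; integration stops once the gradient norm drops below tol."""
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(max_steps):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g          # one Euler step along the continuous path
        path.append(x.copy())
    return np.array(path)


if __name__ == "__main__":
    traj = gradient_flow(rosenbrock_grad, x0=[-1.2, 1.0])
    print("steps taken:", len(traj) - 1)
    print("final point:", traj[-1])   # approaches the minimizer (1, 1)
```

The returned array is the discrete analogue of the continuous trajectory: in contrast to a conventional iterative method, which is specified only by its update rule, the object of interest here is the whole path generated by the underlying ODE and its convergence to the solution.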