Finite-Time Stability of Continuous Autonomous Systems
SIAM Journal on Control and Optimization
Neural Networks for Combinatorial Optimization: A Review of More Than a Decade of Research
INFORMS Journal on Computing
A compact cooperative recurrent neural network for computing general constrained L1-norm estimators
IEEE Transactions on Signal Processing
IEEE Transactions on Neural Networks
Inter-modality mapping in robot with recurrent neural network
Pattern Recognition Letters
Noise-Robust Automatic Speech Recognition Using a Predictive Echo State Network
IEEE Transactions on Audio, Speech, and Language Processing
A dual neural network for kinematic control of redundant robot manipulators
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
A high-performance neural network for solving linear and quadratic programming problems
IEEE Transactions on Neural Networks
A Simplified Dual Neural Network for Quadratic Programming With Its KWTA Application
IEEE Transactions on Neural Networks
This paper presents a class of recurrent neural networks for solving quadratic programming problems. Unlike most existing recurrent neural networks for quadratic programming, the proposed model converges in finite time without requiring a hard-limiting activation function. The stability and finite-time convergence of the proposed network, and the optimality of its equilibria for the original quadratic programming problem, are proven theoretically. Extensive simulations evaluate the network's performance under different parameter settings. In addition, the proposed neural network is applied to the k-winner-take-all (k-WTA) problem; both theoretical analysis and numerical simulations validate its effectiveness for this application.
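To illustrate the k-WTA application described above, the sketch below integrates a single-state dual-network-style ODE with Euler steps. This is a hedged, generic illustration, not the paper's exact model or activation: a scalar dual variable `y` acts as a threshold, each output is a clipped-linear function `g(u_i - y)`, and `y` rises while more than `k` outputs are active, so at equilibrium exactly the `k` largest inputs remain on. The function name, gain, and step sizes are all assumptions for the example.

```python
import numpy as np

def kwta_dual_network(u, k, gain=100.0, dt=1e-3, steps=3000):
    """Select the k largest entries of u via a single-state dual-network-style
    dynamic system (a generic sketch, not the paper's exact model).

    Outputs are x_i = g(u_i - y) with a clipped-linear activation g; the
    scalar dual variable y integrates (sum(x) - k), rising while too many
    outputs are active and falling while too few are.
    """
    u = np.asarray(u, dtype=float)
    y = 0.0
    for _ in range(steps):
        x = np.clip(gain * (u - y), 0.0, 1.0)  # neuron outputs
        y += dt * (x.sum() - k)                # Euler step on the dual variable
    return np.clip(gain * (u - y), 0.0, 1.0)

u = [0.3, 0.9, 0.1, 0.7, 0.5]
x = kwta_dual_network(u, k=2)
winners = [i for i, xi in enumerate(x) if xi > 0.5]
print(winners)  # indices of the 2 largest inputs
```

With a large gain the outputs are near-binary at equilibrium, so thresholding at 0.5 recovers the winner set; the paper's finite-time result instead follows from its specific activation design, which this sketch does not reproduce.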