The k-winners-take-all (k-WTA) problem is to select the k largest inputs from a set of inputs in a network, and it has many applications in machine learning. The Cournot-Nash equilibrium is an important problem in economic modeling. Both problems can be formulated as linear variational inequalities (LVIs). In this paper, a linear case of the general projection neural network (GPNN) is applied to solve the resulting LVIs, and consequently the two practical problems. Compared with existing recurrent neural networks capable of solving these problems, the designed GPNN is superior in its stability results and architectural complexity.
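To illustrate the k-WTA principle in its simplest recurrent form, the sketch below simulates a one-state-variable projection-network ODE by Euler integration. This is not the paper's GPNN model; it is a simplified dual-network-style formulation (a common baseline in the k-WTA literature), and the function name `kwta`, the step size, and the iteration count are illustrative choices. It assumes the k-th and (k+1)-th largest inputs differ by at least 1, a standard resolution condition for exact binary outputs.

```python
import numpy as np

def kwta(u, k, steps=20000, dt=0.01):
    """Select the k largest entries of u with a scalar-state
    projection network, integrated by the Euler method.

    The state y acts as a threshold; the output is
    x_i = clip(u_i - y, 0, 1). The dynamics drive sum(x) toward k,
    so at equilibrium exactly the k largest inputs saturate at 1
    (assuming the inputs are separated by at least 1).
    """
    u = np.asarray(u, dtype=float)
    y = 0.0
    for _ in range(steps):
        x = np.clip(u - y, 0.0, 1.0)   # projection onto [0, 1]^n
        y += dt * (x.sum() - k)        # raise y if too many winners
    return np.clip(u - y, 0.0, 1.0)
```

For example, `kwta([3, 9, 1, 7, 5], 2)` converges to an output near `[0, 1, 0, 1, 0]`, marking the two largest inputs (9 and 7) as winners. The full GPNN in the paper handles the general LVI formulation, of which this thresholding dynamic is only a special case.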