A continuous-time recurrent neural network model is presented for computing the largest and smallest generalized eigenvalues of a symmetric positive-definite pair (A, B). Convergence to the extreme generalized eigenvalues is established by means of a Lyapunov functional together with the generalized eigen-decomposition theorem. Unlike other existing models, this model can also compute the smallest generalized eigenvalue simply by replacing A with -A, while preserving the invariant-norm property. Numerical simulations further demonstrate the effectiveness of the proposed model.
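The abstract does not state the network's equations, so the following is only an illustrative sketch of this class of model, using a standard continuous-time dynamics whose equilibria are generalized eigenvectors of the pair (A, B): dx/dt = Ax - r(x)Bx with the generalized Rayleigh quotient r(x) = (xᵀAx)/(xᵀBx). This flow preserves the Euclidean norm of x (since xᵀ(Ax - r(x)Bx) = 0), drives r(x) toward the largest generalized eigenvalue, and yields the smallest one under the substitution A → -A, as the abstract describes. The function name and parameters are hypothetical.

```python
import numpy as np

def largest_generalized_eigenvalue(A, B, steps=20000, dt=1e-3, seed=0):
    """Sketch of a continuous-time flow for the largest generalized
    eigenvalue of (A, B), A symmetric, B symmetric positive definite.
    Integrates dx/dt = A x - r(x) B x by forward Euler; r(x) is the
    generalized Rayleigh quotient and increases along trajectories."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])
    for _ in range(steps):
        r = (x @ A @ x) / (x @ B @ x)      # generalized Rayleigh quotient
        x = x + dt * (A @ x - r * B @ x)   # forward-Euler step of the flow
    return (x @ A @ x) / (x @ B @ x)

# Example with a known spectrum: build (A, B) sharing eigenvectors Q so the
# generalized eigenvalues are exactly 3/1 = 3, 1/2 = 0.5, and -2/1 = -2.
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q @ np.diag([3.0, 1.0, -2.0]) @ Q.T
B = Q @ np.diag([1.0, 2.0, 1.0]) @ Q.T

lam_max = largest_generalized_eigenvalue(A, B)       # ≈ 3
lam_min = -largest_generalized_eigenvalue(-A, B)     # replace A by -A → ≈ -2
```

In continuous time the flow keeps ||x|| exactly constant; the forward-Euler discretization introduces only a small drift for modest step sizes, which is the discrete analogue of the invariant-norm property mentioned in the abstract.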