Because the efficient computation of eigenpairs of a matrix, especially a general real matrix, is important in engineering, and because neural networks run asynchronously and can achieve high computational performance, this paper introduces a recurrent neural network (RNN) to extract certain eigenpairs. The RNN, whose connection weights depend on the matrix, can be transformed into a complex differential system whose state z(t) is a complex vector. Using the analytic expression for |z(t)|^2, the convergence properties of the RNN are analyzed in detail. Starting from a general nonzero initial complex vector, the RNN converges to the eigenvalue with the largest imaginary part; by a rearrangement of the connection matrix, the eigenvalue with the largest real part is obtained instead. A numerical example with a 7x7 matrix demonstrates the validity of the method, and two further matrices, of dimensions 50 and 100, are used to test its efficiency as the dimension grows. The results suggest that the number of iterations required for the network to reach its equilibrium state is not sensitive to the dimensionality. This RNN can also be used to estimate the eigenvalue of largest modulus. Compared with other neural networks designed for similar purposes, this RNN is applicable to general real matrices.
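The abstract does not give the network's exact dynamics, but the general idea of an eigenvalue-extracting recurrent network can be illustrated with a simpler, closely related continuous-time flow. The sketch below (an assumption, not the paper's actual RNN; the function name and parameters are invented for illustration) integrates the linear dynamics z'(t) = A z(t) by forward Euler with renormalization: since z(t) behaves like exp(tA) z(0), the component along the eigenvalue with the largest real part dominates, so for a general real matrix whose largest-real-part eigenvalue is real and simple, z(t) settles onto the corresponding eigenvector and the Rayleigh quotient recovers the eigenvalue.

```python
import numpy as np

def dominant_real_part_eigenpair(A, dt=0.01, steps=20000, tol=1e-10, seed=0):
    """Illustrative sketch (not the paper's exact network): integrate
    z'(t) = A z(t) with forward Euler and renormalize each step.
    Because z(t) ~ exp(tA) z(0), the eigencomponent with the largest
    real part of its eigenvalue dominates, so z converges to that
    eigenvector when this eigenvalue is real and simple."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    # General nonzero complex initial vector, as in the abstract.
    z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    z /= np.linalg.norm(z)
    lam = None
    for _ in range(steps):
        z_new = z + dt * (A @ z)          # one Euler step of z' = A z
        z_new /= np.linalg.norm(z_new)    # renormalize (equilibrium state)
        lam_new = np.vdot(z_new, A @ z_new)  # Rayleigh quotient estimate
        if lam is not None and abs(lam_new - lam) < tol:
            return lam_new, z_new
        lam, z = lam_new, z_new
    return lam, z
```

Each Euler step is effectively one power-method iteration on I + dt*A, whose dominant eigenvalue is 1 + dt*lambda_max; this is why the iteration count to reach equilibrium depends mainly on the eigenvalue gap rather than on the matrix dimension, consistent with the abstract's observation for the 50- and 100-dimensional tests.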