Time Series Segmentation Using a Novel Adaptive Eigendecomposition Algorithm
Journal of VLSI Signal Processing Systems
We describe a new approach to self-organization that leads to novel adaptive algorithms for generalized eigendecomposition and its variants, implemented by a single-layer linear feedforward neural network. First, we derive two novel iterative algorithms for linear discriminant analysis (LDA) and generalized eigendecomposition by utilizing a constrained least-mean-squared classification error cost function and the framework of a two-layer linear heteroassociative network performing one-of-m classification. Using the concept of deflation, we obtain sequential versions of these algorithms that extract the LDA components and generalized eigenvectors in decreasing order of significance. Next, two new adaptive algorithms are described for computing the principal generalized eigenvectors of two matrices (as well as LDA) from two sequences of random matrices. We give a rigorous convergence analysis of the adaptive algorithms using stochastic approximation theory, and prove that they converge with probability one.
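As a minimal numerical sketch of the idea (not the paper's exact algorithm), the averaged, ODE-limit form of such an adaptive update can be written as w ← w + η(Aw − (wᵀAw)Bw): its fixed points satisfy Aw = (wᵀAw)Bw, so w is a generalized eigenvector of the matrix pencil (A, B) normalized to wᵀBw = 1, and the principal one is the stable attractor for a small enough step size. In the stochastic version analyzed via stochastic approximation theory, A and B would be replaced by instantaneous outer products of incoming samples. The matrices, step size, and iteration count below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Two symmetric positive-definite matrices standing in for the (unknown)
# correlation matrices of the two sample streams -- illustrative only.
Fa = rng.standard_normal((n, n))
Fb = rng.standard_normal((n, n))
A = Fa @ Fa.T + n * np.eye(n)
B = Fb @ Fb.T + n * np.eye(n)

# Averaged gradient-style update: fixed points satisfy A w = (w'Aw) B w,
# i.e. w is a generalized eigenvector of (A, B) with w'B w = 1.
w = rng.standard_normal(n)
w /= np.linalg.norm(w)
eta = 1e-3
for _ in range(50_000):
    w = w + eta * (A @ w - (w @ A @ w) * (B @ w))

lam = w @ A @ w  # estimate of the largest generalized eigenvalue
```

Sequential extraction of the remaining generalized eigenvectors would then follow by deflation, as in the paper: remove the component already found and rerun the same update in the deflated problem.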