We derive and discuss adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms of Oja and Karhunen (1985), Sanger (1989), and Xu (1993). It is well known that traditional PCA algorithms derived by gradient descent on an objective function converge slowly, and that their convergence depends on an appropriate choice of the gain sequences. Since online applications demand faster convergence and automatic gain selection, we present new adaptive algorithms that address both problems. We first present an unconstrained objective function whose minimization yields the principal components. From this objective function we derive adaptive algorithms using four methods: (1) gradient descent; (2) steepest descent; (3) conjugate direction; and (4) Newton-Raphson. Gradient descent reproduces Xu's LMSER algorithm, while the steepest descent, conjugate direction, and Newton-Raphson methods yield new adaptive PCA algorithms. We also discuss the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show that the new algorithms converge faster than the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.