Principal component neural networks: theory and applications.
EM algorithms for PCA and SPCA. In: Advances in Neural Information Processing Systems 10 (NIPS '97).
Mixtures of probabilistic principal component analyzers. Neural Computation.
A constrained EM algorithm for principal component analysis. Neural Computation.
Sequential EM learning for subspace analysis. Pattern Recognition Letters.
Neural Networks: A Comprehensive Foundation (3rd Edition).
A new probabilistic approach to on-line learning in artificial neural networks. In: Proceedings of the 3rd International Conference on Applied Mathematics, Simulation, Modelling, Circuits, Systems and Signals (ASMCSS '09).
Distributed static linear Gaussian models using consensus. Neural Networks.
A common derivation of principal component analysis (PCA) is based on minimizing the squared error between the centered data and a linear model, i.e., the reconstruction error. Minimizing the squared error, however, leads only to principal subspace analysis, in which the principal axes of the observed data are estimated up to an arbitrary scaling and rotation. In this paper, we introduce and investigate an alternative error measure, the integrated squared error (ISE), and show that its minimization yields the exact principal axes of the observed data, without rotational ambiguity. We present a simple EM algorithm, 'EM-ePCA', which is similar to EM-PCA [S.T. Roweis, EM algorithms for PCA and SPCA, in: Advances in Neural Information Processing Systems, vol. 10, MIT Press, Cambridge, 1998, pp. 626-632.] but finds the exact principal directions without rotational ambiguity. In addition, we revisit the generalized Hebbian algorithm (GHA) and show that it emerges from ISE minimization in a single-layer linear feedforward neural network.
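The abstract does not specify the EM-ePCA update itself, so the following is a minimal NumPy sketch of the two standard methods it builds on: Roweis's EM-PCA (which recovers only the principal subspace) and the GHA (which converges to the exact principal axes). The function names em_pca and gha, the learning rate eta, and the iteration counts are illustrative assumptions, not taken from the paper; the input Y is assumed to be centered, with one observation per column.

```python
import numpy as np

def em_pca(Y, k, n_iter=100, seed=0):
    """Roweis-style EM for PCA on centered data Y (d x n).
    Recovers a basis of the k-dim principal subspace, but only up
    to an arbitrary rotation/scaling of that basis."""
    d, n = Y.shape
    W = np.random.default_rng(seed).standard_normal((d, k))
    for _ in range(n_iter):
        # E-step: least-squares projections onto the current subspace
        X = np.linalg.solve(W.T @ W, W.T @ Y)
        # M-step: re-estimate the subspace from those projections
        W = Y @ X.T @ np.linalg.inv(X @ X.T)
    return W

def gha(Y, k, eta=1e-3, n_epochs=10, seed=0):
    """Generalized Hebbian algorithm (Sanger) in a single-layer
    linear network: rows of W converge to the exact leading
    principal axes, without rotational ambiguity."""
    d, n = Y.shape
    W = 0.01 * np.random.default_rng(seed).standard_normal((k, d))
    for _ in range(n_epochs):
        for i in range(n):
            x = Y[:, i]
            y = W @ x
            # Hebbian term minus lower-triangular deflation term:
            # dW = eta * (y x^T - LT[y y^T] W)
            W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W
```

The contrast between the two sketches mirrors the distinction the abstract draws: squared-error minimization (EM-PCA) fixes only the subspace, while the GHA's lower-triangular deflation term is what pins down the individual axes, which is the property the ISE objective is claimed to provide within an EM scheme.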