Mixtures of Principal Component Analyzers can be used to model high-dimensional data that lie on or near a low-dimensional manifold. By linearly mapping each PCA subspace into a single global low-dimensional space, we obtain a 'global' low-dimensional coordinate system for the data. As shown by Roweis et al., enforcing consistent global low-dimensional coordinates for the data can be cast as a penalized likelihood optimization problem. We show that a restricted form of the Mixtures of Probabilistic PCA model admits a more efficient algorithm. Experimental results are provided to illustrate the viability of the method.
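To make the model concrete, the sketch below builds the mixture-of-probabilistic-PCA density the abstract refers to, p(x) = Σ_k π_k N(x; μ_k, W_k W_kᵀ + σ_k² I), on synthetic data near a 1-D manifold. This is a simplified illustration, not the paper's algorithm: it uses hard k-means partitions followed by per-cluster PCA instead of the EM / penalized-likelihood optimization, and it omits the global coordinate alignment step entirely. All function names and parameter choices here are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: points near a 1-D arc embedded in 3-D, plus isotropic noise.
t = rng.uniform(0, np.pi, 300)
X = np.c_[np.cos(t), np.sin(t), 0.1 * t]
X += 0.02 * rng.standard_normal(X.shape)

def fit_local_ppca(X, K=3, d=1, iters=20):
    """Hard-assignment sketch of a mixture of PPCA models:
    k-means partitions the data, then per-cluster PCA yields each
    component's (mu_k, W_k, sigma2_k, pi_k). Not the paper's EM."""
    n, D = X.shape
    centers = X[rng.choice(n, K, replace=False)]
    for _ in range(iters):  # plain Lloyd iterations
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(K):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(0)
    comps = []
    for k in range(K):
        Xk = X[labels == k]
        mu = Xk.mean(0)
        _, S, Vt = np.linalg.svd(Xk - mu, full_matrices=False)
        eigvals = S ** 2 / len(Xk)
        sigma2 = eigvals[d:].mean()  # PPCA noise: mean of discarded eigenvalues
        W = Vt[:d].T * np.sqrt(np.maximum(eigvals[:d] - sigma2, 0.0))
        comps.append((mu, W, sigma2, len(Xk) / n))
    return comps

def log_density(X, comps):
    """Log of the mixture density p(x) = sum_k pi_k N(x; mu_k, W_k W_k^T + s2_k I)."""
    n, D = X.shape
    logps = []
    for mu, W, sigma2, pi in comps:
        C = W @ W.T + sigma2 * np.eye(D)       # low-rank + isotropic covariance
        _, logdet = np.linalg.slogdet(C)
        diff = X - mu
        maha = np.einsum('ni,ij,nj->n', diff, np.linalg.inv(C), diff)
        logps.append(np.log(pi) - 0.5 * (D * np.log(2 * np.pi) + logdet + maha))
    L = np.stack(logps, axis=1)
    m = L.max(axis=1, keepdims=True)           # log-sum-exp over components
    return m[:, 0] + np.log(np.exp(L - m).sum(axis=1))

comps = fit_local_ppca(X)
print("mean log-density:", log_density(X, comps).mean())
```

Each component's covariance W_k W_kᵀ + σ_k² I is flat along the local subspace and thin in the noise directions, which is what lets a small mixture cover a curved manifold. The coordination step discussed in the abstract would additionally learn affine maps sending each component's local PCA coordinates into one shared global space.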