Nonparametric Bayesian methods are employed to constitute a mixture of low-rank Gaussians, for high-dimensional data x ∈ RN that are constrained to reside in a low-dimensional subregion of RN. The number of mixture components and their rank are inferred automatically from the data. The resulting algorithm can be used for learning manifolds and for reconstructing signals from manifolds, based on compressive-sensing (CS) projection measurements. The statistical CS inversion is performed analytically. Drawing on block-sparsity properties, we derive the number of random CS measurements required for successful reconstruction, expressed in terms of easily computed quantities. The proposed methodology is validated on several synthetic and real datasets.
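To illustrate the kind of analytic CS inversion the abstract refers to, the sketch below works with a single low-rank Gaussian component (the mixture case conditions on each component in the same way and weighs them by posterior responsibility). All dimensions, variable names, and the noise level are illustrative assumptions, not values from the paper; the inversion itself is standard Gaussian conditioning: for x ~ N(mu, C) and measurements y = Phi x, the posterior mean is mu + C Phiᵀ (Phi C Phiᵀ)⁻¹ (y − Phi mu).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: ambient dimension N, latent rank r, CS measurements M.
# The point is M << N, with M exceeding the intrinsic rank r.
N, r, M = 50, 3, 12

# One low-rank Gaussian component: x = W z + mu, z ~ N(0, I_r),
# with covariance C = W W^T + sigma2 * I (sigma2 small).
W = rng.standard_normal((N, r))
mu = np.zeros(N)
sigma2 = 1e-4
C = W @ W.T + sigma2 * np.eye(N)

# Draw a signal on the (linear) low-dimensional subspace and project it
# with a random CS matrix Phi: y = Phi x.
x = W @ rng.standard_normal(r) + mu
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# Analytic inversion via Gaussian conditioning; only an M x M solve is
# needed, which is cheap since M << N.
S = Phi @ C @ Phi.T
x_hat = mu + C @ Phi.T @ np.linalg.solve(S, y - Phi @ mu)

rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(rel_err)  # small, since M = 12 measurements suffice for rank r = 3
```

Because the signal's intrinsic rank is 3, a dozen random projections pin it down almost exactly; this is the closed-form counterpart of the statistical CS inversion described above, restricted to one mixture component.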