This paper presents varifold learning, a learning framework based on the mathematical concept of varifolds. Unlike manifold-based methods, our varifold learning framework does not treat the data as samples from a manifold; instead, we assume a weaker varifold structure, attach a Grassmannian manifold to each data point, and combine the resulting Grassmannian Laplacians, via linear transformations and aggregation over data points, into a varifold Laplacian. We give two algorithms based on the proposed varifold Laplacian, varifold Laplacian eigenmaps and varifold transduction, together with theoretical convergence results. Experiments on synthetic and real data sets show that our method consistently gives competitive results, suggesting its utility.
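The construction of the varifold Laplacian is not detailed in the abstract, but the "eigenmaps" step it builds on follows the classical Laplacian eigenmaps recipe: form a weighted neighborhood graph, build its graph Laplacian, and embed via the bottom non-trivial generalized eigenvectors. Below is a minimal sketch of that classical baseline (Belkin and Niyogi), not the paper's varifold variant; the function name, neighborhood size, and heat-kernel bandwidth are illustrative choices.

```python
import numpy as np
from scipy.linalg import eigh

def laplacian_eigenmaps(X, n_neighbors=5, n_components=2, sigma=1.0):
    """Classical Laplacian eigenmaps: embed points using the bottom
    non-trivial generalized eigenvectors of the graph Laplacian.
    NOTE: a sketch of the standard baseline, not the varifold Laplacian."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    # k-nearest-neighbor adjacency with heat-kernel weights.
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(D2[i])[1:n_neighbors + 1]  # skip the point itself
        W[i, idx] = np.exp(-D2[i, idx] / (2.0 * sigma ** 2))
    W = np.maximum(W, W.T)          # symmetrize the graph
    d = W.sum(axis=1)               # vertex degrees
    L = np.diag(d) - W              # unnormalized graph Laplacian
    # Generalized eigenproblem L y = lambda D y; the smallest eigenvalue
    # belongs to the trivial constant eigenvector, so we discard it.
    vals, vecs = eigh(L, np.diag(d))
    return vecs[:, 1:n_components + 1]
```

A varifold-style method would replace the scalar heat-kernel weights with terms derived from Grassmannian Laplacians at each point before aggregating, but the skeleton above (graph, Laplacian, bottom eigenvectors) is the shared structure.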