Representing manifolds with fewer examples offers two advantages: it suppresses the influence of outliers and noisy points, and it accelerates the evaluation of predictors learned from the manifolds. In this paper, we define manifold-preserving sparse graphs as a representation of sparsified manifolds and present a simple and efficient manifold-preserving graph reduction algorithm. To characterize the manifold-preserving properties, we derive a bound on the expected connectivity between a randomly picked point outside a sparse graph and its closest vertex in the sparse graph. We also bound the approximation ratio of the proposed graph reduction algorithm. Moreover, we apply manifold-preserving sparse graphs to semi-supervised learning and propose sparse Laplacian support vector machines (SVMs). After characterizing the empirical Rademacher complexity of the function class induced by sparse Laplacian SVMs, which is closely related to their generalization error, we report experimental results on multiple data sets that indicate their feasibility for classification.
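The greedy graph reduction described above can be sketched as follows. This is a minimal illustrative implementation, not the paper's exact algorithm: it assumes the reduction repeatedly keeps the vertex with the largest total edge weight to the not-yet-selected vertices, so that retained points stay well connected to the rest of the manifold. The function name `reduce_graph` and the precise selection rule are assumptions introduced for illustration.

```python
def reduce_graph(weights, m):
    """Greedily select m vertices forming a sparse graph.

    weights: symmetric n x n list-of-lists of nonnegative edge
             weights (0 means no edge), e.g. from a k-NN graph.
    m:       number of vertices to keep.
    Returns the list of selected vertex indices, in selection order.
    """
    n = len(weights)
    remaining = set(range(n))
    selected = []
    for _ in range(m):
        # Keep the remaining vertex with maximum connectivity
        # (sum of edge weights) to the other remaining vertices.
        best = max(
            remaining,
            key=lambda v: sum(weights[v][u] for u in remaining if u != v),
        )
        selected.append(best)
        remaining.remove(best)
    return selected


# Toy usage: vertex 0 is a hub connected to every other vertex,
# so a manifold-preserving reduction should retain it first.
W = [
    [0, 1, 1, 1],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
]
kept = reduce_graph(W, 2)
```

A quadratic-cost sketch like this suffices for small graphs; in practice the degree sums would be maintained incrementally rather than recomputed at every step.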