Distance metric learning and nonlinear dimensionality reduction have been two active research topics in recent years; however, the connection between them has not yet been thoroughly studied. In this paper, a transductive framework for distance metric learning is proposed, and its close connection with many nonlinear spectral dimensionality reduction methods is elaborated. Furthermore, we prove a representer theorem for our framework, linking it to function estimation in a reproducing kernel Hilbert space (RKHS) and making generalization to unseen test samples possible. In our framework, it suffices to solve a sparse eigenvalue problem, so datasets with 10^5 samples can be handled. Finally, experimental results on synthetic data, several UCI datasets, and the MNIST handwritten digit database are presented.
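As a rough illustration of the scalability argument (not the paper's actual algorithm), the sketch below computes a spectral embedding by solving a sparse eigenvalue problem on a k-nearest-neighbor graph Laplacian, in the spirit of Laplacian eigenmaps. The graph construction, the choice of k, and the embedding dimension are illustrative assumptions; the point is that the eigensolver only touches the O(nk) stored nonzeros, which is why n on the order of 10^5 remains tractable.

```python
# Illustrative sketch only: a sparse spectral embedding in the spirit of
# Laplacian eigenmaps. The kNN graph, k, and the embedding dimension are
# assumptions for this example, not the paper's settings.
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh
from sklearn.neighbors import kneighbors_graph

def spectral_embed(X, n_components=2, n_neighbors=10):
    # Sparse kNN adjacency: O(n * k) nonzeros instead of a dense O(n^2) kernel.
    W = kneighbors_graph(X, n_neighbors=n_neighbors, mode='connectivity')
    W = 0.5 * (W + W.T)                # symmetrize the adjacency
    L = laplacian(W, normed=True)      # normalized graph Laplacian (sparse)
    # Smallest eigenpairs of the sparse Laplacian; eigsh operates only on
    # the stored nonzeros, so large n stays feasible.
    vals, vecs = eigsh(L.asfptype(), k=n_components + 1, which='SA')
    return vecs[:, 1:]                 # drop the trivial constant eigenvector

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((1000, 20))
    Y = spectral_embed(X, n_components=2)
    print(Y.shape)  # (1000, 2)
```

The sparse formulation is the design point the abstract emphasizes: a dense eigendecomposition costs O(n^3) time and O(n^2) memory, whereas an iterative solver on a kNN Laplacian scales roughly linearly in n for fixed k and a small number of requested eigenvectors.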