Distance metrics play an important role in many machine learning algorithms, and there has recently been growing interest in distance metric learning for the semi-supervised setting. In the last few years, many methods have been proposed for metric learning when pairwise similarity (must-link) and/or dissimilarity (cannot-link) constraints are available alongside unlabeled data. Most of these methods learn a global Mahalanobis metric (or, equivalently, a linear transformation). Although some recently introduced methods devise nonlinear extensions of linear metric learning, they typically allow only restricted forms of distance metrics and can exploit only similarity constraints. In this paper, we propose a nonlinear metric learning method that learns a completely flexible distance metric by learning a nonparametric kernel matrix. The proposed method uses both similarity and dissimilarity constraints, together with the topological structure of the data, to learn an appropriate distance metric. Our method is formulated as a convex optimization problem over the kernel matrix, which yields a metric learning method free of local optima. Experimental results on synthetic and real-world data sets show that the proposed method outperforms recently introduced metric learning methods for semi-supervised clustering.
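To make the idea concrete, the following is a minimal illustrative sketch, not the paper's actual formulation. It assumes a simplified convex surrogate: starting from an initial kernel `K0`, we minimize the kernel-induced distances of must-link pairs, maximize those of cannot-link pairs, add a graph-Laplacian term `tr(L K)` to respect the topological structure of the data, and keep `K` near `K0` with a Frobenius proximity term, all subject to `K` being positive semidefinite. Projected gradient descent with eigenvalue clipping solves it; the function name and all parameter choices are hypothetical.

```python
import numpy as np

def learn_kernel(K0, must_link, cannot_link, L, mu=0.1, gamma=2.0,
                 lr=0.05, iters=300):
    """Illustrative sketch (not the paper's algorithm): projected gradient
    descent on the convex surrogate
        sum_{(i,j) in ML} d_K(i,j) - sum_{(i,j) in CL} d_K(i,j)
        + mu * tr(L K) + gamma * ||K - K0||_F^2,   s.t. K PSD,
    where d_K(i,j) = K_ii + K_jj - 2 K_ij is the squared distance in the
    feature space induced by the kernel matrix K.
    """
    K = K0.copy()
    for _ in range(iters):
        # gradient of the proximity and Laplacian terms (L is symmetric)
        G = 2.0 * gamma * (K - K0) + mu * L
        # gradient of d_K(i,j) w.r.t. K is the constant pattern below
        for i, j in must_link:      # pull must-link pairs together
            G[i, i] += 1.0; G[j, j] += 1.0; G[i, j] -= 1.0; G[j, i] -= 1.0
        for i, j in cannot_link:    # push cannot-link pairs apart
            G[i, i] -= 1.0; G[j, j] -= 1.0; G[i, j] += 1.0; G[j, i] += 1.0
        K = K - lr * G
        # project back onto the PSD cone by clipping negative eigenvalues
        w, V = np.linalg.eigh((K + K.T) / 2.0)
        K = (V * np.clip(w, 0.0, None)) @ V.T
    return K

# toy usage: two well-separated clusters of two points each
X = np.array([[0., 1.], [1., 1.], [5., 5.], [6., 5.]])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-sq)                      # RBF affinities for the data graph
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W            # unnormalized graph Laplacian
K0 = X @ X.T + np.eye(4)             # initial (ridge-regularized) kernel
K = learn_kernel(K0, must_link=[(0, 1)], cannot_link=[(0, 2)], L=L)
```

The learned `K` stays PSD while shrinking the must-link distance `d_K(0,1)` and growing the cannot-link distance `d_K(0,2)` relative to `K0`; because the matrix itself (rather than a parametric transform) is optimized, the resulting metric is fully nonparametric, mirroring the flexibility argued for in the abstract.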