A Kernel Approach for Semisupervised Metric Learning
IEEE Transactions on Neural Networks
Distance metric learning is a powerful approach to clustering with side information. In semi-supervised clustering, the supervisory information is usually given as a set of pairwise similarity and dissimilarity constraints. Although some existing methods can use both equivalence (similarity) and inequivalence (dissimilarity) constraints, they are usually limited to learning a global Mahalanobis metric, i.e., a linear transformation of the input space. Moreover, they learn the metric only from the data points that appear in constraints and cannot exploit the information carried by the remaining data. In this paper, we propose a probabilistic metric learning algorithm that uses the unconstrained data points (those appearing in neither positive nor negative constraints) together with both the positive and the negative constraints. We also kernelize our metric learning method via the kernel trick, which yields a nonlinear version of the learned metric. Experimental results on synthetic and real-world data sets demonstrate the effectiveness of the proposed algorithm.
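To make the setup concrete, the following is a minimal sketch of Mahalanobis metric learning from pairwise constraints. It is a hypothetical illustration of the general framework only, not the probabilistic algorithm proposed in the paper: it fits a diagonal matrix M by gradient steps that pull similar pairs together and push dissimilar pairs at least a margin apart (the function names, the diagonal restriction, and the hinge-style objective are all simplifying assumptions).

```python
import numpy as np

def mahalanobis(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

def learn_diagonal_metric(X, similar, dissimilar, lr=0.1, margin=1.0, steps=200):
    """Toy metric learning (illustration only, not the paper's algorithm):
    fit a diagonal PSD matrix M so that similar pairs become close and
    dissimilar pairs are separated by at least `margin`."""
    w = np.ones(X.shape[1])                  # diagonal of M
    for _ in range(steps):
        grad = np.zeros_like(w)
        for i, j in similar:                 # pull similar pairs together
            grad += (X[i] - X[j]) ** 2
        for i, j in dissimilar:              # push dissimilar pairs apart
            d2 = (X[i] - X[j]) ** 2
            if w @ d2 < margin:              # hinge: only if margin violated
                grad -= d2
        w = np.maximum(w - lr * grad, 0.0)   # clip to keep M PSD
    return np.diag(w)
```

On a toy data set where similar pairs differ only in a noise feature and dissimilar pairs differ only in an informative one, the learned M downweights the noise dimension. The kernelized, nonlinear variant described in the abstract would instead work with distances in feature space via the kernel trick.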