Distance-based methods in pattern recognition and machine learning rely on a similarity or dissimilarity measure between patterns in the input space. For many applications, the Euclidean distance in the input space is not a good choice, and more sophisticated distance metrics are needed. In this paper, we propose a parametric method for metric learning based on class label information. We first define a dissimilarity measure that can be proved to be a metric and has the favorable property that the between-class dissimilarity is always larger than the within-class dissimilarity. We then perform parametric learning to find a regression mapping from the input space to a feature space, such that the dissimilarity between patterns in the input space is approximated by the Euclidean distance between the corresponding points in the feature space. The parametric learning is carried out using the iterative majorization algorithm. Experimental results on real-world benchmark data sets show that this approach is promising.
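The scheme described above can be sketched in code. The sketch below makes two assumptions not taken from the paper: the target dissimilarity is illustrated by a simple margin rule (Euclidean distance plus a constant offset for between-class pairs, which is provably a metric and guarantees the between-class-larger-than-within-class property, but is not necessarily the paper's definition), and the stress minimization uses plain gradient descent as a stand-in for the iterative majorization algorithm. The regression mapping is restricted to a linear map `W` for brevity.

```python
import numpy as np

def label_dissimilarity(X, y, margin=1.0):
    # One simple label-based dissimilarity with the desired property
    # (an illustrative assumption, not the paper's actual definition):
    # Euclidean distance plus a constant margin for between-class pairs.
    # This is a metric: if labels differ across (i, k), at least one leg
    # of any path i -> j -> k also crosses classes, so the triangle
    # inequality is preserved.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    between = (y[:, None] != y[None, :]).astype(float)
    return D + margin * between

def fit_linear_map(X, y, out_dim=2, lr=1e-3, epochs=200, margin=1.0):
    # Learn a linear mapping W so that Euclidean distances among the
    # mapped points X @ W approximate the target dissimilarities,
    # i.e. minimize the MDS-style stress sum_{ij} (d_ij - delta_ij)^2.
    # Gradient descent here stands in for iterative majorization.
    rng = np.random.default_rng(0)
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, out_dim))
    target = label_dissimilarity(X, y, margin)
    for _ in range(epochs):
        Z = X @ W
        diffs = Z[:, None, :] - Z[None, :, :]
        dist = np.linalg.norm(diffs, axis=-1)
        np.fill_diagonal(dist, 1.0)        # avoid division by zero
        resid = (dist - target) / dist
        np.fill_diagonal(resid, 0.0)
        # Gradient of the stress with respect to the mapped points,
        # then chained back through the linear map.
        grad_Z = 4.0 * np.einsum('ij,ijk->ik', resid, diffs)
        W -= lr * (X.T @ grad_Z) / n
    return W
```

After fitting, new patterns are compared simply by the Euclidean distance between their images under `W`, which is what makes the learned measure usable inside any standard distance-based classifier or clustering routine.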