Many supervised and unsupervised learning algorithms are highly sensitive to the choice of distance metric. While classification tasks can exploit class label information for metric learning, such information is generally unavailable in conventional clustering tasks. Recent research has addressed a variant of the conventional clustering problem called semi-supervised clustering, which performs clustering in the presence of background knowledge or supervisory information expressed as pairwise similarity or dissimilarity constraints. However, existing metric learning methods for semi-supervised clustering mostly perform global metric learning through a linear transformation. In this paper, we propose a new metric learning method that applies a transformation that is nonlinear globally but linear locally. In particular, we formulate the learning problem as an optimization problem and present two methods for solving it. Through experiments on toy data sets, we show empirically that our locally linear metric adaptation (LLMA) method can handle some difficult cases that cannot be handled satisfactorily by previous methods. We also demonstrate the effectiveness of our method on several real data sets.
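To make the "nonlinear globally, linear locally" idea concrete, the following is a minimal illustrative sketch, not the authors' LLMA algorithm: for each must-link (similarity) constraint, points near the constrained pair are moved by a locally weighted linear translation toward the pair's midpoint. Because the translation weight varies smoothly across the space, the composite effect on the whole data set is nonlinear, even though each local move is linear. All function and parameter names here (`local_adaptation`, `sigma`, `alpha`) are hypothetical choices for this sketch.

```python
import numpy as np

def local_adaptation(X, must_links, sigma=1.0, alpha=0.5):
    """Toy locally linear adaptation driven by pairwise constraints.

    Illustrative sketch only (NOT the LLMA method from the paper):
    for each must-link pair (i, j), pull points toward the pair's
    midpoint with Gaussian weights, so the move is linear locally
    but the overall mapping of the data set is nonlinear.
    """
    X = np.asarray(X, dtype=float).copy()
    for i, j in must_links:
        mid = 0.5 * (X[i] + X[j])
        # Gaussian weights centered on the constrained pair's midpoint:
        # points far from the constraint are barely affected.
        d2 = np.sum((X - mid) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        # Locally weighted linear (translation) move toward the midpoint.
        X += alpha * w[:, None] * (mid - X)
    return X
```

Applying this to a must-link pair shrinks the distance between the two constrained points while leaving distant points essentially unchanged, which is the qualitative behavior a constraint-driven metric adaptation should exhibit.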