In this work we propose a novel framework for learning a (dis)similarity function. We cast the learning problem as a binary classification task or a regression task in which the new learning instances are the pairwise absolute differences of the original instances. Under the classification approach, the class label assigned to a specific pairwise difference indicates whether the two original instances associated with that difference are members of the same class. Under the regression approach, we assign positive target values to the pairwise differences of instances from different classes and negative target values to the differences of instances of the same class. Computing the (dis)similarity of two examples then amounts to computing a classification prediction score or, for regression, a continuous predicted value. The proposed framework is very general, as any learning algorithm can be used; moreover, depending on the chosen algorithm, the resulting (dis)similarity can be efficient and simple to learn. Experiments performed on a number of classification problems demonstrate the effectiveness of the proposed approach.
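The classification variant described above can be sketched as follows. This is an illustrative example, not the authors' implementation: the toy Gaussian data and the choice of logistic regression as the pairwise classifier are assumptions, and the predicted probability of the "same class" label serves as the learned similarity score.

```python
# Sketch of the pairwise-difference framework (classification variant).
# Assumptions: toy 2-D Gaussian data, logistic regression as the learner.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two well-separated Gaussian classes in 2-D.
X0 = rng.normal(loc=0.0, scale=1.0, size=(30, 2))
X1 = rng.normal(loc=3.0, scale=1.0, size=(30, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 30 + [1] * 30)

# Transformed training set: each new instance is the absolute
# difference |x_i - x_j| of an original pair, labeled 1 if the two
# original instances share a class and 0 otherwise.
pairs, labels = [], []
for i, j in combinations(range(len(X)), 2):
    pairs.append(np.abs(X[i] - X[j]))
    labels.append(int(y[i] == y[j]))
pairs, labels = np.array(pairs), np.array(labels)

clf = LogisticRegression(max_iter=1000).fit(pairs, labels)

def similarity(a, b):
    # The learned similarity of two examples is the classifier's
    # predicted probability that they belong to the same class.
    return clf.predict_proba(np.abs(a - b).reshape(1, -1))[0, 1]

print(similarity(X0[0], X0[1]))  # same-class pair: score near 1
print(similarity(X0[0], X1[0]))  # cross-class pair: score near 0
```

Any classifier that outputs scores could replace logistic regression here, which is what makes the formulation general; the regression variant would instead fit a regressor to signed target values on the same pairwise differences.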