Improving nearest neighbor classification with cam weighted distance
Pattern Recognition
Weighted locally linear embedding for dimension reduction
Pattern Recognition
Kernel-based metric learning for semi-supervised clustering
Neurocomputing
Generalized iterative RELIEF for supervised distance metric learning
Pattern Recognition
Learning low-rank kernel matrices for constrained clustering
Neurocomputing
A boosting approach for supervised Mahalanobis distance metric learning
Pattern Recognition
Boosting k-NN for Categorization of Natural Scenes
International Journal of Computer Vision
A novel local patch framework for fixing supervised learning models
Proceedings of the 21st ACM international conference on Information and knowledge management
A quadratic metric d_A_O(X, Y) = [(X - Y)^T A_O (X - Y)]^(1/2) is proposed that minimizes the mean-squared error between the nearest neighbor asymptotic risk and the finite-sample risk. Under linearity assumptions, a heuristic argument indicates that this metric produces lower mean-squared error than the Euclidean metric. A nonparametric estimate of A_O is developed. If samples appear to come from a Gaussian mixture, an alternative, parametrically directed distance measure is suggested for nearness decisions within a limited region of space. Examples with two-class Gaussian mixture distributions are included.
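A quadratic metric of this form, and its use in a 1-NN decision rule, can be sketched as follows. This is a minimal illustration, not the paper's method: the matrix A is filled in with the inverse pooled covariance (a Mahalanobis-style stand-in), whereas the paper derives A_O nonparametrically to minimize the stated mean-squared error; the function names and the toy two-class Gaussian mixture are assumptions for illustration only.

```python
import numpy as np

def quadratic_distance(x, y, A):
    """d_A(x, y) = [(x - y)^T A (x - y)]^(1/2); A must be positive semidefinite."""
    d = x - y
    return float(np.sqrt(d @ A @ d))

def nn_classify(query, samples, labels, A):
    """1-NN decision under the quadratic metric d_A."""
    dists = [quadratic_distance(query, s, A) for s in samples]
    return labels[int(np.argmin(dists))]

# Toy two-class Gaussian mixture, echoing the examples mentioned in the abstract.
rng = np.random.default_rng(0)
class0 = rng.normal([0.0, 0.0], 1.0, size=(50, 2))
class1 = rng.normal([3.0, 3.0], 1.0, size=(50, 2))
samples = np.vstack([class0, class1])
labels = [0] * 50 + [1] * 50

# Stand-in for A_O: inverse pooled covariance (Mahalanobis-style choice).
# The paper instead estimates A_O nonparametrically.
A = np.linalg.inv(np.cov(samples.T))

print(nn_classify(np.array([0.2, -0.1]), samples, labels, A))  # deep in the class-0 region
print(nn_classify(np.array([2.9, 3.2]), samples, labels, A))   # deep in the class-1 region
```

With A set to the identity matrix the rule reduces to ordinary Euclidean 1-NN, which makes the role of A_O easy to isolate when experimenting with the mixture examples above.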