When the training set contains an infinite number of samples, the outcome of nearest neighbor classification (kNN) is independent of the adopted distance metric. In practice, however, the number of training samples is always finite, so the choice of distance metric becomes crucial to the performance of kNN. We propose a novel two-level nearest neighbor algorithm (TLNN) to minimize the mean absolute difference between the misclassification rates of kNN with finite and with infinite training samples. At the low level, Euclidean distance is used to determine a local subspace centered at an unlabeled test sample. At the high level, AdaBoost guides the extraction of local information within that subspace. TLNN maintains data invariance while producing neighborhoods that are highly stretched or elongated along different directions. The algorithm also reduces excessive dependence on statistical methods that learn prior knowledge from the training data. Even a linear combination of a few base classifiers produced by AdaBoost's weak learner can yield substantially better kNN classifiers. Experiments on both synthetic and real-world data sets justify the proposed method.
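The abstract describes TLNN only at a high level. For concreteness, the following is a minimal sketch of how such a two-level scheme could be assembled with scikit-learn: a Euclidean neighborhood is selected around the test point at the low level, and a small AdaBoost ensemble fitted on that neighborhood classifies the point at the high level. The function name tlnn_predict, the neighborhood size k, and the decision-stump weak learner are illustrative assumptions, not the authors' exact method; in particular, the paper's local metric adaptation rule is not specified in the abstract and is not reproduced here.

```python
# Hypothetical sketch of a two-level nearest-neighbor classifier in the
# spirit of TLNN; the names and parameter choices below are assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def tlnn_predict(X_train, y_train, x_test, k=50, n_estimators=10):
    # Low level: select the k training samples nearest to x_test under
    # Euclidean distance, forming a local neighborhood of the test point.
    dists = np.linalg.norm(X_train - x_test, axis=1)
    idx = np.argsort(dists)[:k]
    X_local, y_local = X_train[idx], y_train[idx]

    # If the neighborhood is pure, no local learning is needed.
    if np.unique(y_local).size == 1:
        return y_local[0]

    # High level: boost a few weak learners (decision stumps) on the
    # neighborhood; their weighted vote gives the locally adapted decision.
    local_model = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),
        n_estimators=n_estimators,
    )
    local_model.fit(X_local, y_local)
    return local_model.predict(x_test.reshape(1, -1))[0]

# Toy usage on a linearly separable synthetic set.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(tlnn_predict(X, y, np.array([1.0, 1.0])))  # expected label: 1
```

Note the design echoed from the abstract: only a handful of boosted base classifiers are fitted per query, consistent with the claim that even a small linear combination of weak learners can improve the local kNN decision.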