Small Sample Size Effects in Statistical Pattern Recognition: Recommendations for Practitioners
IEEE Transactions on Pattern Analysis and Machine Intelligence
Data Compression and Local Metrics for Nearest Neighbor Classification
IEEE Transactions on Pattern Analysis and Machine Intelligence
A Class-Dependent Weighted Dissimilarity Measure for Nearest Neighbor Classification Problems
Pattern Recognition Letters
Locally Adaptive Metric Nearest-Neighbor Classification
IEEE Transactions on Pattern Analysis and Machine Intelligence
Adaptive Quasiconformal Kernel Nearest Neighbor Classification
IEEE Transactions on Pattern Analysis and Machine Intelligence
Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), Volume 2
Learning Weighted Metrics to Minimize Nearest-Neighbor Classification Error
IEEE Transactions on Pattern Analysis and Machine Intelligence
An Empirical Analysis of the Probabilistic K-Nearest Neighbour Classifier
Pattern Recognition Letters
Supervised Locally Linear Embedding
Proceedings of the 2003 Joint International Conference on Artificial Neural Networks and Neural Information Processing (ICANN/ICONIP '03)
Considerations About Sample-Size Sensitivity of a Family of Edited Nearest-Neighbor Rules
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
A probabilistic k-nn (PKnn) method was introduced in [13] from a Bayesian point of view. That work showed that posterior inference over the parameter k can be performed in a relatively straightforward manner using Markov chain Monte Carlo (MCMC) methods. The method was extended by Everson and Fieldsend [14] to deal with metric learning. In this work we propose two different dissimilarity functions to be used inside the PKnn framework. These dissimilarity functions can be seen as simplified versions of the full-covariance distance functions proposed previously. Furthermore, we use a class-dependent dissimilarity function, as proposed in [8], aimed at improving the k-nn classifier. We pursue simultaneous learning of the dissimilarity function parameters together with the parameter k of the k-nn classifier. The experiments show that this simultaneous learning leads to an improvement over both the standard k-nn classifier and state-of-the-art techniques.
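Although the abstract gives no implementation details, the following Python sketch illustrates the two ingredients it combines: a class-dependent, diagonally weighted dissimilarity in the spirit of [8], and MCMC posterior inference over k in the spirit of the PKnn framework of [13]. All function names, the Laplace-smoothed leave-one-out likelihood, and the fixed weight matrix W are illustrative assumptions, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def class_dependent_dissim(x, proto, w_class):
        # Diagonal weighted L2 dissimilarity; the weight vector depends on
        # the class of the prototype (a simplified, diagonal version of a
        # full-covariance distance).
        return np.sqrt(np.sum((w_class * (x - proto)) ** 2))

    def loo_log_likelihood(X, y, k, W, n_classes):
        # Leave-one-out k-nn log-likelihood: for each sample, count
        # same-class neighbours among its k nearest under the
        # class-dependent metric, Laplace-smoothed so it never reaches zero.
        n = len(X)
        ll = 0.0
        for i in range(n):
            d = np.array([class_dependent_dissim(X[i], X[j], W[y[j]])
                          if j != i else np.inf for j in range(n)])
            nn = np.argsort(d)[:k]
            same = np.sum(y[nn] == y[i])
            ll += np.log((same + 1.0) / (k + n_classes))
        return ll

    def mh_sample_k(X, y, W, n_classes, n_iter=200, k_max=25):
        # Metropolis-Hastings random walk over k (propose k +/- 1) under a
        # uniform prior on {1, ..., k_max}.
        k = 5
        ll = loo_log_likelihood(X, y, k, W, n_classes)
        samples = []
        for _ in range(n_iter):
            k_new = k + rng.choice([-1, 1])
            if 1 <= k_new <= k_max:
                ll_new = loo_log_likelihood(X, y, k_new, W, n_classes)
                if np.log(rng.random()) < ll_new - ll:
                    k, ll = k_new, ll_new
            samples.append(k)
        return samples

    # Toy usage: two Gaussian blobs with unit per-class weights.
    X = np.vstack([rng.normal(0.0, 1.0, (30, 2)),
                   rng.normal(2.0, 1.0, (30, 2))])
    y = np.repeat([0, 1], 30)
    W = np.ones((2, 2))  # one diagonal weight vector per class
    ks = mh_sample_k(X, y, W, n_classes=2)
    print("posterior mode of k:", np.bincount(ks).argmax())

In the simultaneous-learning scheme the abstract describes, the MH proposals would update the weight parameters W jointly with k rather than holding W fixed, as in the metric-learning extension of [14].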