The k-nearest neighbor (k-NN) classifier has been extensively used as a nonparametric technique in Pattern Recognition. However, in applications where the training set is large, the exhaustive k-NN classifier becomes impractical; many fast k-NN classifiers have therefore been developed to avoid this problem. Most of these classifiers rely on metric properties, usually the triangle inequality, to reduce the number of prototype comparisons. In soft sciences, however, the prototypes are usually described by both qualitative and quantitative features (mixed data), and the comparison function sometimes does not satisfy the triangle inequality. Therefore, in this work, a fast k-most similar neighbor (k-MSN) classifier is introduced, which uses a tree structure and an Approximating and Eliminating approach for Mixed Data (Tree AEMD) and is not based on metric properties. The proposed classifier is compared against other fast k-NN classifiers.
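The abstract gives no pseudocode, so as a rough illustration only, the sketch below shows the kind of search involved: a k-most-similar-neighbor query under a mixed-data comparison function that need not satisfy the triangle inequality. It implements the exhaustive baseline that fast classifiers aim to beat; the names (`mixed_dissimilarity`, `k_msn`) and the particular dissimilarity (normalized difference on numeric features, 0/1 mismatch on qualitative ones) are illustrative assumptions, not the paper's Tree AEMD method.

```python
import heapq

def mixed_dissimilarity(a, b, numeric_idx, ranges):
    """Dissimilarity for mixed data (illustrative, not the paper's
    function): normalized absolute difference on numeric features,
    0/1 mismatch on qualitative ones. Sums of such terms need not
    satisfy the triangle inequality in general."""
    total = 0.0
    for i, (x, y) in enumerate(zip(a, b)):
        if i in numeric_idx:
            r = ranges[i] or 1.0  # guard against zero-range features
            total += abs(x - y) / r
        else:
            total += 0.0 if x == y else 1.0
    return total

def k_msn(query, prototypes, k, numeric_idx, ranges):
    """Exhaustive k-most-similar-neighbor search: compare the query
    against every prototype and keep the k smallest dissimilarities
    (a bounded max-heap via negated values)."""
    heap = []
    for label, proto in prototypes:
        d = mixed_dissimilarity(query, proto, numeric_idx, ranges)
        if len(heap) < k:
            heapq.heappush(heap, (-d, label))
        elif -heap[0][0] > d:  # current worst kept neighbor is farther
            heapq.heapreplace(heap, (-d, label))
    return sorted((-nd, lbl) for nd, lbl in heap)
```

A fast classifier of the kind the abstract describes would replace the full scan in `k_msn` with a tree traversal that approximates the best candidate first and eliminates subtrees whose prototypes cannot improve on the k current neighbors, using bounds that do not depend on the triangle inequality.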