A Pruning Rule Based on a Distance Sparse Table for Hierarchical Similarity Search Algorithms
SSPR & SPR '08 Proceedings of the 2008 Joint IAPR International Workshop on Structural, Syntactic, and Statistical Pattern Recognition
In recent years, several fast nearest neighbor search (NNS) algorithms that exploit metric properties have been proposed to reduce computational cost. Depending on the structure used to store the training set, different strategies have been defined to speed up the search. For instance, pruning rules avoid exploring some branches of the tree in tree-based search algorithms. In this paper, we propose a new and simple pruning rule that can be used in most tree-based search algorithms. All the information the rule needs can be stored in a table at preprocessing time, and the rule itself can be evaluated in constant time. The approach is evaluated through experiments on real and artificial data, in which the rule is compared to, and combined with, other previously defined rules.
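To illustrate the kind of pruning the abstract refers to (not the paper's sparse-table rule itself), the following is a minimal sketch of a tree-based NNS with the classic triangle-inequality bound: a subtree whose pivot is farther from the query than the current best distance plus the subtree's covering radius cannot contain the nearest neighbor and is skipped. The ball-tree-style structure, the `BallNode` and `nn_search` names, and the Euclidean metric are all assumptions made for this example.

```python
import math


def euclid(a, b):
    """Euclidean distance; any metric satisfying the triangle inequality works."""
    return math.dist(a, b)


class BallNode:
    """Metric-tree node: a pivot point plus the covering radius of its subtree."""

    def __init__(self, points):
        self.pivot = points[0]
        # covering radius: every point of the subtree lies within it
        self.radius = max(euclid(self.pivot, p) for p in points)
        rest = points[1:]
        if len(rest) <= 2:
            self.leaf = rest
            self.children = []
        else:
            # split the remaining points into two halves by distance to the pivot
            rest.sort(key=lambda p: euclid(self.pivot, p))
            mid = len(rest) // 2
            self.leaf = []
            self.children = [BallNode(rest[:mid]), BallNode(rest[mid:])]


def nn_search(node, q, best=None):
    """Return (distance, point) of the nearest neighbor of q found so far."""
    d = euclid(q, node.pivot)
    if best is None or d < best[0]:
        best = (d, node.pivot)
    for p in node.leaf:
        dp = euclid(q, p)
        if dp < best[0]:
            best = (dp, p)
    for child in node.children:
        # pruning rule (triangle inequality): every point in the child lies
        # within child.radius of child.pivot, so the subtree can be skipped
        # whenever d(q, child.pivot) - child.radius >= best distance
        if euclid(q, child.pivot) - child.radius < best[0]:
            best = nn_search(child, q, best)
    return best
```

A subtree that is pruned is never visited, so no distance inside it is computed; schemes like the one the abstract describes precompute the quantities such bounds need so the test costs constant time per node. For example, `nn_search(BallNode([(0, 0), (1, 0), (5, 5), (6, 5), (2, 2)]), (1.9, 2.1))` returns `(2, 2)` as the nearest point while pruning the subtree around `(5, 5)`.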