Feature matching plays a key role in many image processing applications, and matching efficacy is largely determined by how features are described. To be both robust and distinctive, feature vectors usually have high dimensionality, so accurately finding the nearest neighbor of a high-dimensional query point in the target image becomes essential. In this paper, we propose a multiple-kd-trees method for locating the nearest neighbor of high-dimensional feature points. First, we project the feature points onto the three hyper-planes corresponding to the coordinate axes with the three greatest variances. Second, for the points projected onto each splitting hyper-plane, we build two kd-trees: one is the conventional kd-tree, and the other makes its first split on the axis with the second-largest variance. In total, six kd-trees are built to compensate for the information that projection may lose. Although our method requires a longer tree-construction time, it remains quite efficient (no more than 0.62 seconds for 1000 points of dimension 50). Experiments showed that our method improves the precision of nearest neighbor search: when the data dimension is 64 or 128, the average improvement in precision reaches 28% (with the dimension fixed) and 53% (with the number of backtracks fixed).