Generalizing the Hough transform to detect arbitrary shapes
Readings in computer vision: issues, problems, principles, and paradigms
Local Grayvalue Invariants for Image Retrieval
IEEE Transactions on Pattern Analysis and Machine Intelligence
An Affine Invariant Interest Point Detector
ECCV '02 Proceedings of the 7th European Conference on Computer Vision-Part I
Distinctive Image Features from Scale-Invariant Keypoints
International Journal of Computer Vision
Matching with PROSAC - Progressive Sample Consensus
CVPR '05 Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Volume 1
Multi-Image Matching Using Multi-Scale Oriented Patches
CVPR '05 Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05) - Volume 1
A Performance Evaluation of Local Descriptors
IEEE Transactions on Pattern Analysis and Machine Intelligence
Keypoint Recognition Using Randomized Trees
IEEE Transactions on Pattern Analysis and Machine Intelligence
Speeded-Up Robust Features (SURF)
Computer Vision and Image Understanding
Description of interest regions with local binary patterns
Pattern Recognition
Pose tracking from natural features on mobile phones
ISMAR '08 Proceedings of the 7th IEEE/ACM International Symposium on Mixed and Augmented Reality
Rover visual obstacle avoidance
IJCAI'81 Proceedings of the 7th international joint conference on Artificial intelligence - Volume 2
ASIFT: A New Framework for Fully Affine Invariant Image Comparison
SIAM Journal on Imaging Sciences
Machine learning for high-speed corner detection
ECCV'06 Proceedings of the 9th European conference on Computer Vision - Volume Part I
Augmented reality supporting user-centric building information management
The Visual Computer: International Journal of Computer Graphics
This paper describes a method for feature-based matching that offers very fast runtime performance, owing to the simple quantised patches used for matching and a tree-based lookup scheme that avoids exhaustively comparing each query patch against the entire feature database. The method localises seven independently moving targets in a test sequence in an average total processing time of 6.03 ms per frame.

A training phase is employed to identify the most repeatable features over a particular range of viewpoints and to learn a model for the patches corresponding to each feature. Feature models consist of independent histograms of quantised intensity for each pixel in the patch, which we refer to as Histogrammed Intensity Patches (HIPs). The histogram values are thresholded, and the feature model is stored in a compact binary representation that requires under 60 bytes of memory per feature and permits rapid computation of a matching score using bitwise operations.

The method achieves better matching robustness than the state-of-the-art fast localisation schemes introduced by Wagner et al. (IEEE International Symposium on Mixed and Augmented Reality, 2008), while reducing both runtime memory usage and computation time by a factor of more than four.
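The thresholded-histogram model and bitwise matching score described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 8x8 patch size, the 5 intensity bins, and the rarity threshold are assumptions, and the per-bin 64-bit masks (5 x 8 = 40 bytes here) stand in for the compact sub-60-byte layout the paper describes.

```python
N_PIXELS = 64   # assumed 8x8 patch
N_BINS = 5      # assumed number of quantised intensity levels

def build_hip(samples, rare_thresh=0.05):
    """Build a binary HIP model from quantised training patches.

    samples: sequence of patches, each a list of N_PIXELS bin
    indices in [0, N_BINS), gathered over many training views.
    Returns N_BINS 64-bit masks; bit p of mask b is set iff bin b
    was rarely observed at pixel p (rare_thresh is an assumption).
    """
    counts = [[0] * N_PIXELS for _ in range(N_BINS)]
    for patch in samples:
        for p, b in enumerate(patch):
            counts[b][p] += 1
    n = len(samples)
    rare = [0] * N_BINS
    for b in range(N_BINS):
        for p in range(N_PIXELS):
            if counts[b][p] / n < rare_thresh:
                rare[b] |= 1 << p
    return rare  # 5 x 64-bit words ~= 40 bytes per feature

def match_score(rare, query):
    """Dissimilarity of a query patch against a HIP model.

    Counts the pixels whose observed bin was rarely seen in
    training, using only bitwise AND and a population count.
    """
    qmask = [0] * N_BINS
    for p, b in enumerate(query):
        qmask[b] |= 1 << p
    return sum(bin(rare[b] & qmask[b]).count("1") for b in range(N_BINS))
```

A query identical to the training views scores 0, while a patch whose pixels all fall in rarely-seen bins scores up to 64; a low score therefore indicates a likely match.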