Distinctive Image Features from Scale-Invariant Keypoints
International Journal of Computer Vision
Global Localization and Relative Pose Estimation Based on Scale-Invariant Features
Proceedings of the 17th International Conference on Pattern Recognition (ICPR '04), Volume 4
A Performance Evaluation of Local Descriptors
IEEE Transactions on Pattern Analysis and Machine Intelligence
Scalable Recognition with a Vocabulary Tree
Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR '06), Volume 2
Are GSM Phones THE Solution for Localization?
Proceedings of the Seventh IEEE Workshop on Mobile Computing Systems & Applications (WMCSA '06)
Combining image descriptors to effectively retrieve events from visual lifelogs
Proceedings of the 1st ACM International Conference on Multimedia Information Retrieval (MIR '08)
Interactive museum guide: accurate retrieval of object descriptions
Proceedings of the 4th International Conference on Adaptive Multimedia Retrieval: User, Context, and Feedback (AMR '06)
SURF: speeded up robust features
Proceedings of the 9th European Conference on Computer Vision (ECCV '06), Part I
Place lab: device positioning using radio beacons in the wild
Proceedings of the Third International Conference on Pervasive Computing (PERVASIVE '05)
Place recognition via 3D modeling for personal activity lifelog using wearable camera
Proceedings of the 18th International Conference on Advances in Multimedia Modeling (MMM '12)
Combining wearable sensors for location-free monitoring of gait in older people
Journal of Ambient Intelligence and Smart Environments
The SenseCam is a wearable camera that automatically photographs the wearer's activities, generating thousands of images per day. Automatically organising these images for efficient search and retrieval is challenging, but the task can be simplified by attaching semantic information to each photo, such as the wearer's location at capture time. We propose a method for automatically determining the wearer's location using an annotated image database described by SURF interest point descriptors. We show that SURF outperforms SIFT in matching SenseCam images, and that matching can be performed efficiently using hierarchical trees of SURF descriptors. Additionally, re-ranking the top images using bi-directional SURF matches improves location matching performance further.
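The bi-directional matching used for re-ranking accepts a descriptor pair only when each descriptor is the other's nearest neighbour. As an illustrative sketch (not the paper's implementation), the mutual-nearest-neighbour criterion can be expressed over two descriptor arrays with NumPy; the function name and toy descriptors below are hypothetical:

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Return (i, j) index pairs where desc_a[i] and desc_b[j] are
    mutual nearest neighbours under Euclidean distance -- the
    bi-directional match criterion."""
    # Pairwise squared distances between the two descriptor sets.
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=2)
    a_to_b = d2.argmin(axis=1)  # best match in B for each descriptor in A
    b_to_a = d2.argmin(axis=0)  # best match in A for each descriptor in B
    # Keep only pairs where the two directions agree.
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

# Toy 2-D "descriptors" (real SURF descriptors are 64-D or 128-D).
desc_a = np.array([[0.0, 0.0], [1.0, 0.0], [5.0, 5.0]])
desc_b = np.array([[0.1, 0.0], [4.9, 5.0]])
print(mutual_nn_matches(desc_a, desc_b))  # -> [(0, 0), (2, 1)]
```

Note that descriptor A[1] has a nearest neighbour in B, but is not that neighbour's nearest match in return, so the pair is rejected; this asymmetry check is what filters out ambiguous correspondences.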