Distinctive Image Features from Scale-Invariant Keypoints
International Journal of Computer Vision
A Mobile Vision System for Urban Detection with Informative Local Descriptors
ICVS '06: Proceedings of the Fourth IEEE International Conference on Computer Vision Systems
An Attentive Machine Interface Using Geo-Contextual Awareness for Mobile Vision Tasks
ECAI 2008: Proceedings of the 18th European Conference on Artificial Intelligence
HPAT indexing for fast object/scene recognition based on local appearance
CIVR '03: Proceedings of the 2nd International Conference on Image and Video Retrieval
Searching the web with mobile images for location recognition
CVPR '04: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Multimodal reference resolution for mobile spatial interaction in urban environments
Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications
Mobile vision services have recently been proposed to support urban nomadic users. While camera phones with image-based recognition of urban objects provide intuitive interfaces for the exploration of urban space and mobile work, similar methodology can be applied to vision in mobile robots and autonomous aerial vehicles. A major issue for the performance of such a service - which involves indexing into a huge collection of reference images - is ambiguity in the visual information. We propose to exploit geo-information in association with visual features to restrict the search to a local context. In a mobile image retrieval task of urban object recognition, we determine object hypotheses from (i) mobile image-based appearance and (ii) GPS-based positioning, and investigate the performance of Bayesian information fusion with respect to benchmark geo-referenced image databases (TSG-20, TSG-40). This work specifically proposes to introduce position information as a geo-contextual prior for geo-attention-based object recognition, in order to better prime the vision task. The results from geo-referenced image capture in an urban scenario demonstrate a significant increase in recognition accuracy (approx. 10%) when using the geo-contextual information compared with omitting it; applying geo-attention improves accuracy by a further 5%.
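The fusion scheme described above can be sketched in a few lines: an appearance-based likelihood per candidate object is multiplied by a GPS-derived prior and renormalized, so distant objects are effectively pruned from the search. This is a minimal illustrative sketch under assumed toy values; the object names, probabilities, and helper function are hypothetical and not taken from the paper.

```python
# Sketch of Bayesian fusion of visual appearance and geo-contextual cues.
# Posterior P(object | image, position) is proportional to
# P(image | object) * P(object | position).

def fuse_hypotheses(appearance_likelihood, geo_prior):
    """Multiply appearance likelihoods by geo-contextual priors and normalize."""
    posterior = {
        obj: appearance_likelihood.get(obj, 0.0) * geo_prior.get(obj, 0.0)
        for obj in geo_prior
    }
    z = sum(posterior.values())
    if z == 0.0:
        raise ValueError("no object is supported by both cues")
    return {obj: p / z for obj, p in posterior.items()}

# Hypothetical example: three candidate landmarks; the GPS-based prior
# assigns zero mass to obj_B because it lies far from the estimated position,
# so geo-attention removes it from consideration.
appearance = {"obj_A": 0.5, "obj_B": 0.4, "obj_C": 0.1}
geo_prior = {"obj_A": 0.7, "obj_B": 0.0, "obj_C": 0.3}
posterior = fuse_hypotheses(appearance, geo_prior)
best = max(posterior, key=posterior.get)
```

Here an appearance-only decision would still keep obj_B as a strong competitor; the geo-contextual prior resolves that ambiguity, which is the effect the abstract reports as improved recognition accuracy.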