We present an augmented reality tourist guide for mobile devices. Many of the latest mobile devices contain cameras as well as location, orientation, and motion sensors. We demonstrate how these devices can be used to bring tourism information to users in a much more immersive manner than traditional text or maps. Our system uses a combination of the camera, location, and orientation sensors to augment the live camera view on the device with available information about the objects in view. The augmenting information is obtained by matching the camera image against images in a server-side database whose geotags lie in the vicinity of the user's location. We use a subset of geotagged English Wikipedia pages as the main source of images and augmenting text. At the time of publication, our database contained 50,000 pages with more than 150,000 images linked to them. A combination of motion estimation algorithms and orientation sensors is used to track objects of interest in the live camera view and to place the augmented information on top of them.
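The geotag-based candidate selection described above can be sketched as follows. This is an illustrative sketch, not the paper's implementation: the function names, the record layout, and the 500 m search radius are assumptions; only the great-circle (haversine) distance formula is standard.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    # Great-circle distance in meters between two latitude/longitude points,
    # assuming a spherical Earth with mean radius 6371 km.
    r = 6371000.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def candidates_near(user_lat, user_lon, geotagged_pages, radius_m=500.0):
    # Keep only database entries whose geotag lies within radius_m of the
    # user's reported location; image matching then runs on this subset.
    return [page for page in geotagged_pages
            if haversine_m(user_lat, user_lon, page["lat"], page["lon"]) <= radius_m]
```

Restricting matching to geotag neighbors keeps the candidate set small enough for interactive image retrieval even with a database of tens of thousands of pages.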