Real-Time Simultaneous Localisation and Mapping with a Single Camera
ICCV '03 Proceedings of the Ninth IEEE International Conference on Computer Vision - Volume 2
First Steps Towards Handheld Augmented Reality
ISWC '03 Proceedings of the 7th IEEE International Symposium on Wearable Computers
Parallel Tracking and Mapping for Small AR Workspaces
ISMAR '07 Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality
Automatic Reconstruction of Wide-Area Fiducial Marker Models
ISMAR '07 Proceedings of the 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality
Video-rate localization in multiple maps for wearable augmented reality
ISWC '08 Proceedings of the 2008 12th IEEE International Symposium on Wearable Computers
A comparison of loop closing techniques in monocular SLAM
Robotics and Autonomous Systems
A dataset and evaluation methodology for template-based tracking algorithms
ISMAR '09 Proceedings of the 2009 8th IEEE International Symposium on Mixed and Augmented Reality
A setup for evaluating detectors and descriptors for visual tracking
ISMAR '09 Proceedings of the 2009 8th IEEE International Symposium on Mixed and Augmented Reality
Combining monoSLAM with object recognition for scene augmentation using a wearable camera
Image and Vision Computing
SURF: speeded up robust features
ECCV'06 Proceedings of the 9th European conference on Computer Vision - Volume Part I
Indoor localization poses a challenge to computer vision research, since GPS-based devices cannot be used. A classic approach, commonly used in museums, research institutes, and similar settings, is to track the user's position with fiducial markers. However, this approach is intrusive to the environment and not always feasible. A possible alternative is natural marker detection, but algorithms for it, such as SURF, have not yet achieved real-time performance. A promising approach is a Visual Simultaneous Localization and Mapping (VSLAM) algorithm, which, starting from a known position, can generate a map of the surrounding environment on portable systems. The drawback of SLAM algorithms is the error that accumulates during movement. This work presents an algorithm to locate 3D positions in non-instrumented indoor environments using a web camera. We define a hybrid approach that uses a pattern-recognition method to reinitialize a VSLAM algorithm whenever possible. An implementation of the proposed algorithm uses well-known computer vision algorithms, such as SURF and Davison's SLAM. In addition, tests were run on datasets recorded during walks inside a room. Results indicate that our approach outperforms both fiducial marker tracking and pure SLAM tracking in our test environment.