The aim of this paper is to propose a new monocular-vision strategy for real-time positioning under augmented reality (AR) conditions, an important problem in AR-based navigation in uncontrolled environments. In this setting, the position and orientation of the moving observer, who typically wears a head-mounted display and a camera, must be computed as accurately as possible in real time. The method analyzes the properties of the projected image of a single pattern consisting of eight small dots lying on a circle plus one additional dot at its center. Owing to the simplicity of the pattern and the low computational cost of the image-processing phase, the system can operate under on-line requirements. The paper also presents a comparison of our strategy with other pose-estimation solutions that have been applied in AR or robotic environments.
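As an illustration of the setting the abstract describes, the sketch below generates the nine-dot pattern (eight dots on a circle plus one at the center) and projects it with a standard pinhole camera model. This is not the paper's algorithm, only a minimal simulation of the geometry it works with; the focal length, principal point, and pose values are arbitrary assumptions.

```python
import math

def pattern_points(radius=1.0, n=8):
    # Planar pattern (z = 0): n dots evenly spaced on a circle,
    # plus one dot at the circle's center (appended last).
    pts = [(radius * math.cos(2 * math.pi * k / n),
            radius * math.sin(2 * math.pi * k / n), 0.0) for k in range(n)]
    pts.append((0.0, 0.0, 0.0))
    return pts

def project(points, rot, t, f=800.0, cx=320.0, cy=240.0):
    # Pinhole projection: rotate/translate into the camera frame,
    # then u = f * X/Z + cx, v = f * Y/Z + cy.
    img = []
    for p in points:
        X = sum(rot[0][i] * p[i] for i in range(3)) + t[0]
        Y = sum(rot[1][i] * p[i] for i in range(3)) + t[1]
        Z = sum(rot[2][i] * p[i] for i in range(3)) + t[2]
        img.append((f * X / Z + cx, f * Y / Z + cy))
    return img

# With the pattern facing the camera head-on at depth 5, the center
# dot lands on the principal point and the circle dots form a circle
# of radius f / Z = 160 pixels around it.
R_identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
projected = project(pattern_points(), R_identity, (0.0, 0.0, 5.0))
```

Under a tilted pose, the circle of dots projects to an ellipse; recovering pose from the distortion of such a projected image is the kind of analysis the paper's strategy performs.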