Acquisition of High Quality Planar Patch Features
ISVC '08 Proceedings of the 4th International Symposium on Advances in Visual Computing
In dynamic scenes with occluding objects, many features must be tracked to obtain a robust real-time camera pose estimate. An open problem is that tracking too many features degrades the real-time capability of a tracking approach. This paper proposes a feature-management method that performs a statistical analysis of how well each feature can be tracked and then uses only those features that are most likely to be tracked from the current camera position. To this end, a large set of features at different scales is created, where every feature holds a probability distribution over the camera positions from which it can be tracked successfully. Because only the feature points with the highest probability are used in the tracking step, the method can handle a large number of features at different scales without losing real-time performance. Both the statistical analysis and the reconstruction of the features' 3D coordinates are performed online during tracking; no preprocessing step is needed.
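The core selection idea of the abstract can be illustrated with a minimal sketch. The paper does not specify how the per-feature probability distribution is represented; the version below is an assumption that quantises camera positions into coarse spatial bins and keeps Laplace-smoothed success counts per bin. All names (`Feature`, `quantise`, `select_features`) are hypothetical, not from the paper.

```python
import math
from collections import defaultdict

class Feature:
    """A map feature with per-viewpoint tracking-success statistics."""
    def __init__(self, fid):
        self.fid = fid
        # (successes, attempts) for each quantised camera-position bin
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, cam_bin, success):
        """Update the statistics after one tracking attempt from cam_bin."""
        s = self.stats[cam_bin]
        s[1] += 1
        if success:
            s[0] += 1

    def track_probability(self, cam_bin):
        """Laplace-smoothed estimate of P(tracked | camera bin)."""
        succ, att = self.stats[cam_bin]
        return (succ + 1) / (att + 2)

def quantise(position, cell=0.5):
    """Coarse spatial bin for a 3-D camera position (assumed scheme)."""
    return tuple(int(math.floor(c / cell)) for c in position)

def select_features(features, cam_position, n):
    """Return the n features most likely to be tracked from this position."""
    b = quantise(cam_position)
    return sorted(features, key=lambda f: f.track_probability(b),
                  reverse=True)[:n]
```

With such a scheme, the statistics update and the selection both run in time proportional to the number of features, which is consistent with the claim that the analysis can be performed online during tracking.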