The problem of accurate 6-DoF pose estimation of 3D objects based on their shape has so far been solved only for specific object geometries. Edge-based recognition and tracking methods rely on the extraction of straight line segments or other primitives. Straightforward extensions of 2D approaches are potentially more general, but assume a limited range of possible view angles. The general difficulty is that a 3D object can produce completely different 2D projections depending on the view angle. One way to tackle this problem is to use canonical views; however, accurate shape-based 6-DoF pose estimation requires more information than matching against canonical views can provide. In this paper, we present a novel approach to 6-DoF pose estimation of single-colored objects based on their shape. Our approach combines stereo triangulation with matching against a high-resolution view set of the object, where each view has associated orientation information. The errors that arise from computing the position and orientation separately in the first place are corrected by a subsequent correction procedure based on online projection of the 3D model. The proposed approach estimates the pose of a single object within 20 ms on conventional hardware.
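The two-stage pipeline described in the abstract — recovering position by stereo triangulation and orientation by matching against a precomputed view set — can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the pinhole/disparity triangulation model, the cosine-similarity view descriptors, and all function names are assumptions, and the model-projection correction step is omitted.

```python
import numpy as np

def triangulate(u_left, u_right, v, focal, baseline):
    """Recover the 3D position of the object centroid from a stereo
    correspondence, assuming a rectified pinhole stereo pair.
    Coordinates are pixel offsets from the principal point."""
    disparity = u_left - u_right
    z = focal * baseline / disparity          # depth from disparity
    x = u_left * z / focal                    # back-project to camera frame
    y = v * z / focal
    return np.array([x, y, z])

def match_view(query_desc, view_descs, view_orientations):
    """Return the orientation attached to the best-matching view.
    Views are compared by cosine similarity of appearance descriptors
    (a placeholder for whatever representation the view set stores)."""
    q = query_desc / np.linalg.norm(query_desc)
    d = view_descs / np.linalg.norm(view_descs, axis=1, keepdims=True)
    best = int(np.argmax(d @ q))
    return view_orientations[best]

def estimate_pose(u_left, u_right, v, focal, baseline,
                  query_desc, view_descs, view_orientations):
    """Separate position and orientation estimation; a subsequent
    correction via online 3D model projection (not shown) would
    refine the combined result."""
    position = triangulate(u_left, u_right, v, focal, baseline)
    orientation = match_view(query_desc, view_descs, view_orientations)
    return position, orientation
```

Decoupling the two estimates is what makes the method fast, but it is also the source of the systematic error that the paper's model-projection correction step addresses.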