3D Photography Using Shadows in Dual-Space Geometry
International Journal of Computer Vision
Visual Modeling with a Hand-Held Camera
International Journal of Computer Vision
An Enhanced Positioning Algorithm for a Self-Referencing Hand-Held 3D Sensor
CRV '06 Proceedings of the 3rd Canadian Conference on Computer and Robot Vision
Stereo Processing by Semiglobal Matching and Mutual Information
IEEE Transactions on Pattern Analysis and Machine Intelligence
Rollin' Justin: mobile platform with variable base
ICRA'09 Proceedings of the 2009 IEEE International Conference on Robotics and Automation
Efficient camera-based pose estimation for real-time applications
IROS'09 Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems
Manipulator and object tracking for in-hand 3D object modeling
International Journal of Robotics Research
Combined 2D-3D categorization and classification for multimodal perception systems
International Journal of Robotics Research
RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments
International Journal of Robotics Research
Multi-view dense 3D modelling of untextured objects from a moving projector-cameras system
Machine Vision and Applications
In the context of 3-D scene modeling, this work aims at accurately estimating the pose of a close-range 3-D modeling device, in real time and passively from its own images. This development makes it possible to dispense with inconvenient and expensive external positioning systems. The approach comprises an ego-motion algorithm that tracks natural, distinctive features concurrently with the customary 3-D modeling of the scene. The use of stereo vision, an inertial measurement unit, and robust cost functions for pose estimation further increases performance. Demonstrations and abundant video material validate the approach.
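To make the role of the "robust cost functions for pose estimation" concrete, the following is a minimal, hypothetical sketch of the general technique: iteratively reweighted least squares with a Huber weight, which down-weights outlier feature correspondences so a few bad tracks do not corrupt the pose. For brevity it estimates a 2-D rigid transform between tracked feature positions; the function names and the 2-D simplification are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def huber_weights(residuals, delta=1.0):
    """Huber weight: 1 for residuals below delta, delta/|r| above.

    Large residuals (likely mismatched features) get small weights,
    so they barely influence the pose estimate.
    """
    r = np.abs(residuals)
    w = np.ones_like(r)
    mask = r > delta
    w[mask] = delta / r[mask]
    return w

def robust_rigid_2d(src, dst, iters=10, delta=1.0):
    """Estimate rotation theta and translation t with R @ p + t ~ q.

    Alternates between computing Huber weights from current residuals
    and solving a weighted closed-form rigid alignment (Procrustes step).
    """
    theta, t = 0.0, np.zeros(2)
    for _ in range(iters):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s], [s, c]])
        res = np.linalg.norm(dst - (src @ R.T + t), axis=1)
        w = huber_weights(res, delta)
        # Weighted Procrustes: align weighted centroids, then SVD for rotation.
        mu_s = np.average(src, axis=0, weights=w)
        mu_d = np.average(dst, axis=0, weights=w)
        S = (w[:, None] * (src - mu_s)).T @ (dst - mu_d)
        U, _, Vt = np.linalg.svd(S)
        D = np.diag([1.0, np.linalg.det(Vt.T @ U.T)])  # guard against reflection
        R = Vt.T @ D @ U.T
        theta = np.arctan2(R[1, 0], R[0, 0])
        t = mu_d - R @ mu_s
    return theta, t

# Usage: recover a known motion despite one gross outlier correspondence.
rng = np.random.default_rng(0)
src = rng.uniform(-1.0, 1.0, (30, 2))
th_true, t_true = 0.3, np.array([0.5, -0.2])
R_true = np.array([[np.cos(th_true), -np.sin(th_true)],
                   [np.sin(th_true),  np.cos(th_true)]])
dst = src @ R_true.T + t_true
dst[0] += 5.0  # simulate a badly mismatched feature track
theta, t = robust_rigid_2d(src, dst, delta=0.1)
```

The same weighting idea carries over to the full 6-DoF case, where the residuals would be stereo reprojection errors rather than 2-D point distances.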