How to localize humanoids with a single camera?
Autonomous Robots
Motion blur is a severe problem in images captured by legged robots and, in particular, by small humanoid robots. Standard feature extraction and tracking approaches typically fail on image sequences strongly affected by motion blur. In this paper, we propose a new feature detection and tracking scheme that is robust even to non-uniform motion blur. Furthermore, we develop a framework for visual odometry based on features extracted from and matched across monocular image sequences. To reliably extract and track the features, we estimate the point spread function (PSF) of the motion blur individually for image patches obtained via a clustering technique, and we consider only highly distinctive features during matching. We present experiments performed on standard datasets corrupted with motion blur and on images taken by a camera mounted on small walking humanoid robots to demonstrate the effectiveness of our approach. The experiments show that our technique reliably extracts and matches features and, furthermore, generates a correct visual odometry, even in the presence of strong motion blur and without the aid of any inertial measurement sensor.