Mobile robot map building from time-of-flight camera
Expert Systems with Applications: An International Journal
In the robotics and computer vision communities, localization and mapping of an unknown environment is a well-studied problem. To tackle it in real time with a single camera, state-of-the-art Simultaneous Localization and Mapping (SLAM) or Structure from Motion (SfM) algorithms can be used. To build a model of the unknown environment, the camera moves through the scene while salient points are detected and added to the map, under the assumption that each detected point is a unique 3D corner. However, scenes usually contain false 3D corners, e.g. points lying on occlusion boundaries. Inserting such points into the map may cause SLAM to fail or degrade the accuracy of SfM estimates. In this work, a corner selection scheme is proposed that exploits the amplitude and depth signals of a Time-of-Flight (ToF) camera. The scheme detects false 3D corners using a 3D cornerness measure. We demonstrate on a simulated SfM example that rejecting these corners improves accuracy, and present results of applying the selection scheme to ToF camera sequences.
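The abstract does not give the paper's actual 3D cornerness measure, but the underlying idea can be illustrated: a corner that lies on an occlusion boundary exhibits a large depth discontinuity in its neighborhood, whereas a true 3D corner lies on locally continuous surfaces. The following is a minimal sketch of such a depth-based rejection test; the function name, window size, and threshold are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def is_false_3d_corner(depth, u, v, win=3, jump_thresh=0.15):
    """Flag a detected 2D corner at pixel (u, v) as a likely false 3D corner.

    depth       -- ToF depth image (2D array, values in meters)
    win         -- half-size of the square neighborhood to inspect
    jump_thresh -- depth jump (meters) treated as an occlusion boundary

    Both parameters are illustrative; the paper's actual cornerness
    measure also exploits the ToF amplitude signal.
    """
    # Clamp the window to the image borders.
    patch = depth[max(v - win, 0):v + win + 1,
                  max(u - win, 0):u + win + 1]
    # Largest depth jump between neighboring pixels in the patch.
    jump = max(np.abs(np.diff(patch, axis=0)).max(initial=0.0),
               np.abs(np.diff(patch, axis=1)).max(initial=0.0))
    # A large jump indicates an occlusion boundary, i.e. a false 3D corner.
    return jump > jump_thresh
```

For example, a corner detected where the depth image steps from 1 m to 2 m would be rejected, while the same detector response on a flat surface would be kept for insertion into the map.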