Multiview Geometry for Texture Mapping 2D Images Onto 3D Range Data
CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 2
This paper proposes an automatic method for registering images from different sensors, particularly 2D optical sensors and 3D range sensors, without any assumption about initial alignment. Many existing methods try to reconstruct 3D points from 2D image sequences and then match 3D primitives from both sides. Three obstacles lead us to a different approach: appropriate multiple images associated with 3D range data are often unavailable, inferring 3D structure from 2D images is a well-known challenge, and establishing correspondences among 3D primitives is difficult when no initial pose estimate is available. Instead, we match regions between optical images and depth images projected from the range data. This paper details our interest region extraction method for optical images as well as the efficient region matching component. Experiments on aerial images and LiDAR (Light Detection and Ranging) data from several cities illustrate the effectiveness of the proposed approach, even in the presence of considerable geometric distortion.
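The first step of such a pipeline, projecting the range data into a depth image that can then be region-matched against the optical image, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a simple orthographic top-down projection onto a regular grid, and the function name and grid resolution parameter are hypothetical.

```python
import numpy as np

def project_to_depth_image(points, grid_res=1.0):
    """Orthographically project a LiDAR point cloud (N x 3 array of
    x, y, z) onto a top-down depth image. Each pixel stores the maximum
    height (z) of the points falling into its grid cell; empty cells
    get a background value of 0. The resulting image can be segmented
    into regions and matched against regions in an aerial photo.
    (Illustrative sketch; the projection model and parameters are
    assumptions, not the paper's method.)"""
    xy = points[:, :2]
    mins = xy.min(axis=0)
    # Map each point's (x, y) to an integer grid cell index.
    idx = np.floor((xy - mins) / grid_res).astype(int)
    h = idx[:, 1].max() + 1
    w = idx[:, 0].max() + 1
    depth = np.full((h, w), -np.inf)
    # Keep the highest point per cell (e.g. rooftops over ground returns).
    for (cx, cy), z in zip(idx, points[:, 2]):
        depth[cy, cx] = max(depth[cy, cx], z)
    depth[np.isinf(depth)] = 0.0  # cells with no returns become background
    return depth
```

In an urban scene this favors building roofs over ground-level returns, which is what makes the projected depth image resemble an aerial view well enough for region matching.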