Matching points between two or more images of a scene taken from different viewpoints is a crucial step in defining the epipolar geometry between views, recovering the camera's egomotion, or building a 3D model of the framed scene. Unfortunately, in most common cases robust correspondences between points in different images can be established only when the images differ by small variations in viewpoint, focal length, or lighting. Under all other conditions, one must either make ad hoc assumptions about the 3D scene or settle for weak correspondences obtained through statistical approaches. In this paper, we present a novel matching method in which depth maps, nowadays available from cheap off-the-shelf devices, are integrated with 2D images to provide descriptors that remain robust even under wide baselines or strong lighting variations. We show how depth information can substantially improve matching in wide-baseline contexts with respect to state-of-the-art descriptors computed on intensity images alone.
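The general idea of integrating depth with 2D descriptors can be sketched as follows: append a local depth patch, normalized by its center depth so that it is insensitive to camera distance, to each image descriptor, then match the augmented vectors with a nearest-neighbour ratio test. This is a minimal illustrative sketch, not the paper's actual descriptor; the augmentation scheme, the patch layout, and the ratio threshold are all assumptions made for the example.

```python
import numpy as np

def augment_with_depth(desc, depth_patches):
    """Append a scale-normalized depth patch to each 2D descriptor.

    Dividing each patch by its center value makes the depth component
    insensitive to the absolute camera-to-surface distance.
    (Hypothetical scheme, used here only to illustrate the idea.)
    """
    centers = depth_patches[:, depth_patches.shape[1] // 2]
    norm_patches = depth_patches / centers[:, None]
    return np.hstack([desc, norm_patches])

def match_ratio_test(d1, d2, ratio=0.8):
    """Nearest-neighbour matching with Lowe's ratio test.

    A match (i, j) is kept only when the best distance is clearly
    smaller than the second-best, which suppresses ambiguous matches.
    """
    dists = np.linalg.norm(d1[:, None, :] - d2[None, :, :], axis=2)
    order = np.argsort(dists, axis=1)
    best, second = order[:, 0], order[:, 1]
    idx = np.arange(len(d1))
    keep = dists[idx, best] < ratio * dists[idx, second]
    return [(i, int(best[i])) for i in idx if keep[i]]
```

In a usage scenario, even if the camera in the second view is three times farther from the scene (so every raw depth triples), the normalized depth patches are unchanged and the correct correspondences survive the ratio test, while an ambiguous keypoint is rejected.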