Most existing feature-point matching algorithms rely on photometric region descriptors to distinguish and match feature points across two images. In this paper, we propose an efficient feature-point matching algorithm for finding point correspondences between two uncalibrated images separated by a small or moderate camera baseline. The proposed algorithm does not rely on photometric descriptors for matching. Instead, it uses only the motion smoothness constraint, which states that correspondence vectors within a small neighborhood usually have similar directions and magnitudes. The correspondences of the feature points in a neighborhood are determined collectively so that the smoothness of the local correspondence field is maximized. The smoothness constraint is self-contained in the correspondence field and is robust to changes in camera motion, scene structure, and illumination. This makes the entire point-matching process texture-independent, descriptor-free, and robust. Experimental results show that the proposed method performs much better than the intensity-based block-matching technique, even when the image contrast varies noticeably across the images.
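The motion smoothness constraint described above can be sketched as follows. This is a minimal illustration, not the paper's actual optimization: the scoring function, neighborhood representation, and selection rule below are assumptions made for clarity, and a full implementation would maximize smoothness jointly over all points in the neighborhood rather than greedily per point.

```python
import math

def smoothness_score(match, neighbor_matches):
    """Score a candidate correspondence by how well its displacement
    vector agrees with the displacement vectors of neighboring
    correspondences (the motion smoothness constraint).

    match / neighbor_matches are ((x1, y1), (x2, y2)) point pairs;
    this greedy per-point scoring is an illustrative simplification."""
    (x1, y1), (x2, y2) = match
    vx, vy = x2 - x1, y2 - y1
    if not neighbor_matches:
        return 0.0
    total = 0.0
    for (nx1, ny1), (nx2, ny2) in neighbor_matches:
        ux, uy = nx2 - nx1, ny2 - ny1
        # Penalize disagreement in both direction and magnitude
        # of the correspondence vectors.
        total += math.hypot(vx - ux, vy - uy)
    return -total / len(neighbor_matches)  # higher = smoother

def pick_smoothest(candidates, neighbor_matches):
    """Among candidate matches for one feature point, pick the one
    whose displacement vector best agrees with its neighbors."""
    return max(candidates, key=lambda m: smoothness_score(m, neighbor_matches))
```

For example, if the neighboring correspondences all have displacement roughly (10, 0), a candidate match with displacement (10, 0) scores higher than one with displacement (3, 7), so it is selected without consulting any photometric descriptor.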