Current feature-based object recognition methods use information derived from local image patches. For robustness, features are engineered for invariance to various transformations, such as rotation, scaling, or affine warping. When patches overlap object boundaries, however, errors in both detection and matching will almost certainly occur due to the inclusion of unwanted background pixels. This is common in real images, which often contain significant background clutter, objects that are not heavily textured, or objects that occupy a relatively small portion of the image. We suggest improvements to the popular Scale Invariant Feature Transform (SIFT) that incorporate local object boundary information. The resulting feature detection and descriptor creation processes are invariant to changes in background. We call this method the Background and Scale Invariant Feature Transform (BSIFT). We demonstrate BSIFT's superior performance in feature detection and matching on synthetic and natural images.
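The abstract does not spell out the exact descriptor construction, but the core idea of making a SIFT-like descriptor background-invariant can be sketched as a gradient-orientation histogram in which contributions from pixels outside the object mask are zeroed. This is a minimal illustrative sketch, not the authors' implementation; the function name, mask convention, and bin count are all assumptions:

```python
import numpy as np

def masked_orientation_histogram(patch, mask, n_bins=8):
    """Gradient-orientation histogram over a patch, ignoring background.

    Hypothetical sketch of the background-invariance idea: gradient
    magnitudes at pixels where mask == 0 contribute nothing, so the
    descriptor is unaffected by what lies outside the object boundary.
    """
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy) * mask          # zero out background pixels
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    bins = (ang / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Usage: the same object pasted onto two different random backgrounds
# should yield identical masked descriptors.
rng = np.random.default_rng(0)
fg = rng.random((16, 16))
obj = np.zeros((16, 16), dtype=bool)
obj[4:12, 4:12] = True
# Erode the mask by one pixel so finite-difference gradients at the
# object boundary do not leak background values into the histogram.
mask = np.zeros((16, 16))
mask[5:11, 5:11] = 1.0
img1 = np.where(obj, fg, rng.random((16, 16)))
img2 = np.where(obj, fg, rng.random((16, 16)))
d1 = masked_orientation_histogram(img1, mask)
d2 = masked_orientation_histogram(img2, mask)
```

With the mask applied, `d1` and `d2` match exactly despite the different backgrounds, whereas unmasked histograms over the same two images would differ: this is the sense in which the descriptor is "invariant to changes in background."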