On the Verification of Hypothesized Matches in Model-Based Recognition
IEEE Transactions on Pattern Analysis and Machine Intelligence
Many recognition procedures take the consistency of a subset of data features with a hypothesis as sufficient evidence for the presence of the corresponding object. We analyze the performance of such procedures using a probabilistic model, and derive expressions for the sufficient size of such data subsets, which, if consistent, guarantee the validity of the hypothesis with arbitrary confidence. We focus on 2D objects and the class of affine transformations, and provide, for the first time, an integrated model that accounts for the shape of the objects involved, the accuracy of the collected data, the clutter present in the scene, the class of transformations, the accuracy of the localization, and the confidence required of the hypotheses. Interestingly, it turns out that most of these factors can be quantified cumulatively by a single parameter, denoted the "effective similarity," which largely determines the sufficient subset size. The analysis is based on representing the class of instances corresponding to a model object and a group of transformations as members of a metric space, and quantifying the variation of the instances by a metric cover.
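To illustrate the style of confidence bound the abstract describes, the sketch below computes the smallest consistent-subset size that rules out an accidental match. It is not the paper's actual model: the binomial clutter model, the per-feature chance-match probability `p_match`, and the clutter count `n_clutter` are assumptions introduced here purely for illustration.

```python
import math

def false_alarm_prob(k, n_clutter, p_match):
    """P(at least k of n_clutter clutter features are consistent with a
    hypothesis purely by chance), under an assumed binomial model in which
    each clutter feature independently matches with probability p_match."""
    return sum(math.comb(n_clutter, j) * p_match**j * (1 - p_match)**(n_clutter - j)
               for j in range(k, n_clutter + 1))

def sufficient_subset_size(n_clutter, p_match, delta):
    """Smallest subset size k whose consistency cannot be explained by
    chance with probability delta or more (hypothetical confidence bound)."""
    for k in range(1, n_clutter + 1):
        if false_alarm_prob(k, n_clutter, p_match) < delta:
            return k
    return n_clutter + 1

# Example: 100 clutter features, 1% chance match rate, 10^-6 false-alarm budget.
k = sufficient_subset_size(100, 0.01, 1e-6)
```

In this toy setting, a tighter error region (smaller `p_match`) or less clutter shrinks the required subset size, mirroring how the paper's "effective similarity" cumulatively captures shape, accuracy, and clutter effects.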