Shape is an important cue for object detection: the human visual system can often recognize objects from their 2-D outline alone. In this paper, we address the challenging problem of shape matching in the presence of complex background clutter and occlusion. To this end, we propose a graph-based approach to shape matching. Unlike prior methods, which measure shape similarity without considering the relations among edge pixels, our approach exploits the connectivity of edge pixels by building a graph. A group of connected edge pixels, represented by an "edge" of the graph, is treated as a unit, and its similarity cost, defined by explicit comparison with the corresponding template part, becomes the edge weight. This approach offers the key advantage of reducing ambiguity even in the presence of background clutter and occlusion. The optimization is performed by a graph-based dynamic programming algorithm. The robustness of our method is demonstrated on several examples, including long video sequences. Finally, we apply our algorithm in a robotic grasping system, where object information is supplied in the form of quick hand-drawn templates.
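The abstract does not reproduce the paper's exact formulation, but the core idea can be sketched: group connected edge pixels into segments, weight each segment by a chamfer-style distance to its corresponding template part, and select the best chain of segments by a shortest-path search over the resulting graph. The following is a minimal illustrative sketch under these assumptions; the segment representation (point lists), the brute-force nearest-neighbour chamfer cost, and the Dijkstra-based optimization are simplifications, not the authors' implementation.

```python
import heapq
import math

def chamfer_cost(segment, template_part):
    """Directed chamfer measure: average nearest-neighbour distance
    from the points of an edge segment to a template part.
    (A sketch; real systems precompute a distance transform.)"""
    total = 0.0
    for (x, y) in segment:
        total += min(math.hypot(x - u, y - v) for (u, v) in template_part)
    return total / len(segment)

def best_chain(graph, weights, start, goal):
    """Dijkstra over the segment graph: nodes are junctions between
    segments; each graph edge is a group of connected edge pixels
    whose weight is its chamfer cost against the matching template
    part.  Returns the chain of segment ids and its total cost."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, math.inf):
            continue  # stale heap entry
        for nbr, seg_id in graph.get(node, []):
            nd = d + weights[seg_id]
            if nd < dist.get(nbr, math.inf):
                dist[nbr] = nd
                prev[nbr] = (node, seg_id)
                heapq.heappush(heap, (nd, nbr))
    # Recover the chain of segment ids along the best path.
    chain = []
    node = goal
    while node != start:
        node, seg_id = prev[node]
        chain.append(seg_id)
    return list(reversed(chain)), dist[goal]

# Toy example: two alternative segments between junctions A and B;
# only "good" lies on the (hypothetical) template part.
template_part = [(0, 0), (1, 0), (2, 0)]
segments = {
    "good": [(0, 0), (1, 0), (2, 0)],   # coincides with the template
    "bad":  [(0, 5), (1, 5), (2, 5)],   # clutter, 5 pixels away
}
weights = {sid: chamfer_cost(pts, template_part)
           for sid, pts in segments.items()}
graph = {"A": [("B", "good"), ("B", "bad")], "B": []}
chain, cost = best_chain(graph, weights, "A", "B")
```

Because the matching cost is attached to whole connected segments rather than independent pixels, a clutter segment must be wrong along its entire length to be selected, which is the ambiguity-reduction argument made in the abstract.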