We present a novel method for detecting the boundaries between objects in images that uses a large, hierarchical, semantic ontology, WordNet. The semantic object hierarchy in WordNet grounds this ill-posed segmentation problem: true boundaries are defined as edges between instances of different classes, and all other edges are clutter. To avoid fully classifying each pixel, which is very difficult in generic images, we evaluate the semantic similarity of the two regions bounding each edge in an initial oversegmentation. Semantic similarity is computed using WordNet enhanced with appearance information, and is largely orthogonal to visual similarity. Hence two regions with very similar visual attributes, but from different categories, can have a large semantic distance and therefore provide evidence of a strong boundary between them, and vice versa. The ontology is trained with images from the UC Berkeley image segmentation benchmark, extended with manual labeling of the semantic content of each image segment. Results on boundary detection against the benchmark images show that semantic similarity computed through WordNet can significantly improve boundary detection compared to generic segmentation.
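The core scoring idea can be sketched as follows: for each edge in an oversegmentation, measure the semantic distance between the categories of the two adjacent regions, and treat semantically distant pairs as strong boundary evidence. The tiny hypernym tree and path-based distance below are illustrative stand-ins, not the paper's appearance-enhanced WordNet model; all category names and the `max_dist` normalization are hypothetical.

```python
# Illustrative sketch: score candidate boundaries by the semantic distance
# between the categories of the two regions bounding an edge. A toy
# child -> parent ontology replaces the full WordNet hierarchy.
PARENT = {
    "oak": "tree", "pine": "tree", "tree": "plant",
    "grass": "plant", "plant": "entity",
    "car": "vehicle", "truck": "vehicle", "vehicle": "entity",
}

def path_to_root(node):
    """List of nodes from `node` up to the ontology root."""
    path = [node]
    while node in PARENT:
        node = PARENT[node]
        path.append(node)
    return path

def semantic_distance(a, b):
    """Edge count of the shortest path through the lowest common ancestor."""
    pa = path_to_root(a)
    depth_in_a = {n: i for i, n in enumerate(pa)}
    for j, n in enumerate(path_to_root(b)):
        if n in depth_in_a:
            return depth_in_a[n] + j
    return len(pa) + len(path_to_root(b))  # disjoint trees: maximal distance

def boundary_strength(label1, label2, max_dist=6):
    """Normalize distance to [0, 1]: 0 means same class (clutter edge),
    values near 1 mean semantically distant classes (true boundary)."""
    return min(semantic_distance(label1, label2), max_dist) / max_dist

print(boundary_strength("oak", "pine"))   # siblings: weak boundary evidence
print(boundary_strength("oak", "truck"))  # distant categories: strong boundary
```

In the paper's setting the region labels themselves are uncertain, so this hard-label distance would be replaced by an expectation over appearance-based class posteriors; the sketch only shows how semantic distance, rather than visual difference, drives the boundary score.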